kylemede/DS-ML-sandbox | notebooks/.ipynb_checkpoints/CS_and_Python-checkpoint.ipynb | gpl-3.0
for _ in xrange(10):
    print "Do something"
"""
Explanation: xrange vs range looping
For long for loops with no need to track iteration use:
End of explanation
"""
for i in range(1,10):
    vars()['x'+str(i)] = i
"""
Explanation: This will loop through 10 times, but the iteration variable won't be used — assigning it to the underscore signals that it is intentionally ignored. Also, xrange returns an iterator, whereas range returns a full list that can take a lot of memory for large loops.
Automating variable names
To assign a variable name and value in a loop fashion, use vars()[variable name as a string] = variable value. Such as:
End of explanation
"""
print repr(dir())
print repr(x1)
print repr(x5)
"""
Explanation: You can see the variables in memory with:
End of explanation
"""
bin(21)[2:]
"""
Explanation: Binary numbers and Python operators
A good review of Python operators can be found here: http://www.programiz.com/python-programming/operators
The wiki reviewing bitwise operations here: https://en.wikipedia.org/wiki/Bitwise_operation
OR
http://www.math.grin.edu/~rebelsky/Courses/152/97F/Readings/student-binary
Note that binary numbers follow:
2^4 | 2^3 | 2^2 | 2^1 | 2^0
    |     |     |  1  |  0    -> 2+0 = 2
    |     |  1  |  1  |  1    -> 4+2+1 = 7
 1  |  0  |  1  |  0  |  1    -> 16+0+4+0+1 = 21
 1  |  1  |  1  |  1  |  0    -> 16+8+4+2+0 = 30
Convert numbers from base 10 to binary with bin()
End of explanation
"""
a = 123
b = 234
a, b = bin(a)[2:], bin(b)[2:]
print "Before evening their lengths:\n{}\n{}".format(a,b)
diff = len(a)-len(b)
if diff > 0:
    b = '0' * diff + b
elif diff < 0:
    a = '0' * abs(diff) + a
print "After evening their lengths:\n{}\n{}".format(a,b)
"""
Explanation: Ensuring two binary numbers are the same length
End of explanation
"""
s = ''
for i in range(len(a)):
    s += str(int(a[i]) | int(b[i]))
print "{}\n{}\n{}\n{}".format(a, b, '-'*len(a), s)
"""
Explanation: For bitwise or:
End of explanation
"""
sum(map(lambda x: 2**x[0] if int(x[1]) else 0, enumerate(reversed(s))))
"""
Explanation: bitwise or is |, xor is ^, and is &, complement (switch 0's to 1's, and 1's to 0's) is ~, binary shift left (append zeros on the right, shifting the number left by the given number of digits) is <<, and shift right is >>
Convert the resulting binary number to base 10:
End of explanation
"""
class Stack:
    def __init__(self):
        self.items = []
    def isEmpty(self):
        return self.items == []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        return self.items.pop()
    def peek(self):
        return self.items[len(self.items)-1]
    def size(self):
        return len(self.items)
s=Stack()
print repr(s.isEmpty())+'\n'
s.push(4)
s.push('dog')
print repr(s.peek())+'\n'
s.push(True)
print repr(s.size())+'\n'
print repr(s.isEmpty())+'\n'
s.push(8.4)
print repr(s.pop())+'\n'
print repr(s.pop())+'\n'
print repr(s.size())+'\n'
"""
Explanation: Building a 'stack' in Python
End of explanation
"""
|
QInfer/qinfer-examples | custom_distributions.ipynb | agpl-3.0
from __future__ import division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
try: plt.style.use('ggplot')
except: pass
"""
Explanation: Making Custom Distributions
Introduction
By using the InterpolatedUnivariateDistribution class, you can easily create a single-variable distribution by specifying its PDF as a callable function. Here, we'll demonstrate this functionality by implementing the asymmetric Lorentz distribution of Stancik and Brauns.
Preamble
As always, we start by setting up the Python environment for inline plotting and true division.
End of explanation
"""
from qinfer.distributions import InterpolatedUnivariateDistribution
"""
Explanation: Next, we import the InterpolatedUnivariateDistribution class.
End of explanation
"""
def asym_lorentz_scale(x, x_0, gamma_0, a):
    return 2 * gamma_0 / (1 + np.exp(a * (x - x_0)))

def asym_lorentz_pdf(x, x_0, gamma_0, a):
    gamma = asym_lorentz_scale(x, x_0, gamma_0, a)
    return 2 * gamma / (np.pi * gamma_0 * (1 + 4 * ((x - x_0) / (gamma_0))**2))
"""
Explanation: Defining Distributions
The asymmetric Lorentz distribution is defined by letting the scale parameter $\gamma$ of a Lorentz distribution be a function of the random variable $x$,
$$
\gamma(x) = \frac{2\gamma_0}{1 + \exp(a [x - x_0])}.
$$
It is straightforward to implement this in a vectorized way by defining this function and then substituting it into the PDF of a Lorentz distribution.
End of explanation
"""
dist = InterpolatedUnivariateDistribution(lambda x: asym_lorentz_pdf(x, 0, 1, 2), 2, 1200)
"""
Explanation: Once we have this, we can pass the PDF as a lambda function to InterpolatedUnivariateDistribution in order to specify
the values of the location $x_0$, the nominal scale $\gamma_0$ and the asymmetry parameter $a$.
End of explanation
"""
plt.hist(dist.sample(n=10000), bins=100);
"""
Explanation: The resulting distribution can be sampled like any other, such that we can quickly check that it produces something of the desired shape.
End of explanation
"""
%timeit dist = InterpolatedUnivariateDistribution(lambda x: asym_lorentz_pdf(x, 0, 1, 2), 2, 120)
%timeit dist = InterpolatedUnivariateDistribution(lambda x: asym_lorentz_pdf(x, 0, 1, 2), 2, 1200)
%timeit dist = InterpolatedUnivariateDistribution(lambda x: asym_lorentz_pdf(x, 0, 1, 2), 2, 12000)
"""
Explanation: We note that making this distribution object is fast enough that it can conceivably be embedded within a likelihood function itself, so as to enable using the method of hyperparameters to estimate the parameters of the asymmetric Lorentz distribution.
End of explanation
"""
|
w4zir/ml17s | lectures/.ipynb_checkpoints/lec04-multinomial-regression-checkpoint.ipynb | mit
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
# read data in pandas frame
dataframe = pd.read_csv('datasets/example1.csv', encoding='utf-8')
# assign x and y
X = np.array(dataframe[['x']])
y = np.array(dataframe[['y']])
m = y.size # number of training examples
# check data by printing first few rows
dataframe.head()
"""
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Lecture 4: Multinomial Regression
Overview
Machine Learning pipeline
Linear Regression with one variable
Model Representation
Cost Function
Gradient Descent
Linear Regression Example
Read data
Plot data
Lets assume $\theta_0 = 0$ and $\theta_1=0$
Plot it
$\theta_1$ vs Cost
Gradient Descent
Run Gradient Descent
Plot Convergence
Predict output using trained model
Plot Results
Resources
Credits
<br>
<br>
Machine Learning pipeline
<img style="float: left;" src="images/model.png">
x is called input variables or input features.
y is called output or target variable. Also sometimes known as label.
h is called hypothesis or model.
pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
dataset of all training examples is called training set.
m is the number of samples in a dataset.
n is the number of features in a dataset excluding label.
<img style="float: left;" src="images/02_02.png", width=400>
<br>
<br>
Linear Regression with one variable
Model Representation
Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
<img style="float: left;" src="images/02_01.png">
$\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
Need to find $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximizes the performance of model.
<br>
<br>
<br>
Cost Function
Let $\hat{y}$ = h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
Error in single sample (x,y) = $\hat{y}$ - y = h(x) - y
Cumulative error of all m samples = $\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Finally mean error or cost function = J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
<img style="float: left;" src="images/03_01.png", width=300> <img style="float: right;" src="images/03_02.png", width=300>
<br>
<br>
Gradient Descent
Cost function:
J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
Gradient descent equation:
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
<br>
Replacing J($\theta$) for each j
\begin{align} \text{repeat until convergence: } \lbrace & \newline \theta_0 := & \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}(h_\theta(x_{i}) - y_{i}) \newline \theta_1 := & \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}\left((h_\theta(x_{i}) - y_{i}) x_{i}\right) \newline \rbrace& \end{align}
<br>
<img style="float: left;" src="images/03_04.gif">
<br>
<br>
Linear Regression Example
| x | y |
| ------------- |:-------------:|
| 1 | 0.8 |
| 2 | 1.6 |
| 3 | 2.4 |
| 4 | 3.2 |
Read data
End of explanation
"""
#visualize results
plt.scatter(X, y)
plt.title("Dataset")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
"""
Explanation: Plot data
End of explanation
"""
theta0 = 0
theta1 = 0
cost = 0
for i in range(m):
    hx = theta1*X[i,0] + theta0
    cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
"""
Explanation: Lets assume $\theta_0 = 0$ and $\theta_1=0$
End of explanation
"""
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 0")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
"""
Explanation: plot it
End of explanation
"""
# save theta1 and cost in a vector
cost_log = []
theta1_log = []
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("Theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
"""
Explanation: Plot $\theta1$ vs Cost
End of explanation
"""
theta0 = 0
theta1 = 1
cost = 0
for i in range(m):
    hx = theta1*X[i,0] + theta0
    cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
"""
Explanation: Lets assume $\theta_0 = 0$ and $\theta_1=1$
End of explanation
"""
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 1")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
"""
Explanation: plot it
End of explanation
"""
# save theta1 and cost in a vector
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("Theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
"""
Explanation: Plot $\theta1$ vs Cost again
End of explanation
"""
theta0 = 0
theta1 = 2
cost = 0
for i in range(m):
    hx = theta1*X[i,0] + theta0
    cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 2")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# save theta1 and cost in a vector
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
"""
Explanation: Lets assume $\theta_0 = 0$ and $\theta_1=2$
End of explanation
"""
theta0 = 0
theta1 = -3.1
cost_log = []
theta1_log = []
inc = 0.1
for j in range(61):
    theta1 = theta1 + inc
    cost = 0
    for i in range(m):
        hx = theta1*X[i,0] + theta0
        cost += pow((hx - y[i,0]),2)
    cost = cost/(2*m)
    cost_log.append(cost)
    theta1_log.append(theta1)
"""
Explanation: Run it for a while
End of explanation
"""
plt.scatter(theta1_log, cost_log)
plt.title("theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
"""
Explanation: plot $\theta_1$ vs Cost
End of explanation
"""
theta0 = 0
theta1 = -3
alpha = 0.1
iterations = 100
cost_log = []
iter_log = []
for j in range(iterations):
    cost = 0
    grad = 0
    for i in range(m):
        hx = theta1*X[i,0] + theta0
        cost += pow((hx - y[i,0]),2)
        grad += (hx - y[i,0])*X[i,0]
    cost = cost/(2*m)
    grad = grad/m   # gradient of J(theta), matching the update rule above
    theta1 = theta1 - alpha*grad
    cost_log.append(cost)
theta1
"""
Explanation: <br>
<br>
Lets do it with Gradient Descent now
End of explanation
"""
plt.plot(cost_log)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
"""
Explanation: Plot Convergence
End of explanation
"""
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for Theta1 from Gradient Descent")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
"""
Explanation: Predict output using trained model
End of explanation
"""
|
mbeyeler/opencv-machine-learning | notebooks/10.02-Combining-Decision-Trees-Into-a-Random-Forest.ipynb | mit
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.25, random_state=100)
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], s=100, c=y)
plt.xlabel('feature 1')
plt.ylabel('feature 2');
"""
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Understanding Ensemble Methods | Contents | Using Random Forests for Face Recognition >
Combining Decision Trees Into a Random Forest
A popular variation of bagged decision trees are the so-called random forests. These are
essentially a collection of decision trees, where each tree is slightly different from the others.
In contrast to bagged decision trees, each tree in a random forest is trained on a slightly
different subset of data features.
Although a single tree of unlimited depth might do a relatively good job of predicting the
data, it is also prone to overfitting. The idea behind random forests is to build a large
number of trees, each of them trained on a random subset of data samples and features.
Because of the randomness of the procedure, each tree in the forest will overfit the data in a
slightly different way. The effect of overfitting can then be reduced by averaging the
predictions of the individual trees.
Understanding the shortcomings of decision trees
The effect of overfitting the dataset, which a decision tree often falls victim to, is best
demonstrated through a simple example.
For this, we will return to the make_moons function from scikit-learn's datasets module,
which we previously used in Chapter 8, Discovering Hidden Structures with Unsupervised
Learning to organize data into two interleaving half circles. Here, we choose to generate 100
data samples belonging to two half circles, in combination with some Gaussian noise with
standard deviation 0.25:
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=100
)
"""
Explanation: Because of all the noise we added, the two half moons might not be apparent at first glance.
That's a perfect scenario for our current intentions, which is to show that decision trees are
tempted to overlook the general arrangement of data points (that is, the fact that they are
organized in half circles) and instead focus on the noise in the data.
To illustrate this point, we first need to split the data into training and test sets. We choose a
comfortable 75-25 split (by not specifying train_size), as we have done a number of times
before:
End of explanation
"""
import numpy as np
def plot_decision_boundary(classifier, X_test, y_test):
    # create a mesh to plot in
    h = 0.02  # step size in mesh
    x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
    y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    X_hypo = np.c_[xx.ravel().astype(np.float32),
                   yy.ravel().astype(np.float32)]
    ret = classifier.predict(X_hypo)
    if isinstance(ret, tuple):
        zz = ret[1]
    else:
        zz = ret
    zz = zz.reshape(xx.shape)
    plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
    plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)
"""
Explanation: Now let's have some fun. What we want to do is to study how the decision boundary of a
decision tree changes as we make it deeper and deeper.
For this, we will bring back the plot_decision_boundary function from Chapter 6,
Detecting Pedestrians with Support Vector Machines among others:
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
plt.figure(figsize=(16, 8))
for depth in range(1, 9):
    plt.subplot(2, 4, depth)
    tree = DecisionTreeClassifier(max_depth=depth)
    tree.fit(X, y)
    plot_decision_boundary(tree, X_test, y_test)
    plt.axis('off')
    plt.title('depth = %d' % depth)
"""
Explanation: Then we can code up a for loop, where at each iteration, we fit a tree of a different depth:
End of explanation
"""
import cv2
rtree = cv2.ml.RTrees_create()
"""
Explanation: As we continue to build deeper and deeper trees, we notice something strange: the deeper
the tree, the more likely it is to get strangely shaped decision regions, such as the tall and
skinny patches in the rightmost panel of the lower row. It's clear that these patches are more
a result of the noise in the data rather than some characteristic of the underlying data
distribution. This is an indication that most of the trees are overfitting the data. After all, we
know for a fact that the data is organized into two half circles! As such, the trees with
depth=3 or depth=5 are probably closest to the real data distribution.
There are at least two different ways to make a decision tree less powerful:
- Train the tree only on a subset of the data
- Train the tree only on a subset of the features
Random forests do just that. In addition, they repeat the experiment many times by
building an ensemble of trees, each of which is trained on a randomly chosen subset of data
samples and/or features.
Implementing our first random forest
In OpenCV, random forests can be built using the RTrees_create function from the ml
module:
End of explanation
"""
n_trees = 10
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
n_trees, eps)
rtree.setTermCriteria(criteria)
"""
Explanation: The tree object provides a number of options, the most important of which are the
following:
- setMaxDepth: This sets the maximum possible depth of each tree in the ensemble. The actual obtained depth may be smaller if other termination criteria are met first.
- setMinSampleCount: This sets the minimum number of samples that a node can contain for it to get split.
- setMaxCategories: This sets the maximum number of categories allowed. Setting the number of categories to a smaller value than the actual number of classes in the data performs subset estimation.
- setTermCriteria: This sets the termination criteria of the algorithm. This is also where you set the number of trees in the forest.
We can specify the number of trees in the forest by passing an integer n_trees to the
setTermCriteria method. Here, we also want to tell the algorithm to quit once the score
does not increase by at least eps from one iteration to the next:
End of explanation
"""
rtree.train(X_train.astype(np.float32), cv2.ml.ROW_SAMPLE, y_train);
"""
Explanation: Then we are ready to train the classifier on the data from the preceding code:
End of explanation
"""
_, y_hat = rtree.predict(X_test.astype(np.float32))
"""
Explanation: The test labels can be predicted with the predict method:
End of explanation
"""
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat)
"""
Explanation: Using scikit-learn's accuracy_score, we can evaluate the model on the test set:
End of explanation
"""
plt.figure(figsize=(10, 6))
plot_decision_boundary(rtree, X_test, y_test)
"""
Explanation: After training, we can pass the predicted labels to the plot_decision_boundary function:
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=10, random_state=200)
"""
Explanation: Implementing a random forest with scikit-learn
Alternatively, we can implement random forests using scikit-learn:
End of explanation
"""
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
"""
Explanation: Here, we have a number of options to customize the ensemble:
- n_estimators: This specifies the number of trees in the forest.
- criterion: This specifies the node splitting criterion. Setting criterion='gini' implements the Gini impurity, whereas setting criterion='entropy' implements information gain.
- max_features: This specifies the number (or fraction) of features to consider at each node split.
- max_depth: This specifies the maximum depth of each tree.
- min_samples_split: This specifies the minimum number of samples required to split a node.
We can then fit the random forest to the data and score it like any other estimator:
End of explanation
"""
plt.figure(figsize=(10, 6))
plot_decision_boundary(forest, X_test, y_test)
"""
Explanation: This gives roughly the same result as in OpenCV. We can use our helper function to plot the
decision boundary:
End of explanation
"""
from sklearn.ensemble import ExtraTreesClassifier
extra_tree = ExtraTreesClassifier(n_estimators=10, random_state=100)
"""
Explanation: Implementing extremely randomized trees
Random forests are already pretty arbitrary. But what if we wanted to take the randomness
to its extreme?
In extremely randomized trees (see ExtraTreesClassifier and ExtraTreesRegressor
classes), the randomness is taken even further than in random forests. Remember how
decision trees usually choose a threshold for every feature so that the purity of the node
split is maximized. Extremely randomized trees, on the other hand, choose these thresholds
at random. The best one of these randomly-generated thresholds is then used as the
splitting rule.
We can build an extremely randomized tree as follows:
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:, [0, 2]]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=100
)
"""
Explanation: To illustrate the difference between a single decision tree, a random forest, and extremely
randomized trees, let's consider a simple dataset, such as the Iris dataset:
End of explanation
"""
extra_tree.fit(X_train, y_train)
extra_tree.score(X_test, y_test)
"""
Explanation: We can then fit and score the tree object the same way we did before:
End of explanation
"""
forest = RandomForestClassifier(n_estimators=10, random_state=100)
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
"""
Explanation: For comparison, using a random forest would have resulted in the same performance:
End of explanation
"""
tree = DecisionTreeClassifier()
tree.fit(X_train, y_train)
tree.score(X_test, y_test)
"""
Explanation: In fact, the same is true for a single tree:
End of explanation
"""
classifiers = [
    (1, 'decision tree', tree),
    (2, 'random forest', forest),
    (3, 'extremely randomized trees', extra_tree)
]
"""
Explanation: So what's the difference between them?
To answer this question, we have to look at the
decision boundaries. Fortunately, we have already imported our
plot_decision_boundary helper function in the preceding section, so all we need to do is
pass the different classifier objects to it.
We will build a list of classifiers, where each entry in the list is a tuple that contains an
index, a name for the classifier, and the classifier object:
End of explanation
"""
plt.figure(figsize=(17, 5))
for sp, name, model in classifiers:
    plt.subplot(1, 3, sp)
    plot_decision_boundary(model, X_test, y_test)
    plt.title(name)
    plt.axis('off')
"""
Explanation: Then it's easy to pass the list of classifiers to our helper function such that the decision
landscape of every classifier is drawn in its own subplot:
End of explanation
"""
|
charlesreid1/empirical-model-building | ipython/Factorial - Two-Level Three-Factor Design.ipynb | mit
import pandas as pd
import numpy as np
from numpy.random import rand
"""
Explanation: A Two-Level, Three-Factor Full Factorial Design
<br />
<br />
<br />
Table of Contents
Introduction
Factorial Experimental Design:
Two-Level Three-Factor Full Factorial Design
Design of the Experiment
Inputs and Responses
Effects and Interactions:
Computing Main Effects
Analyzing Main Effects
Two Way Interactions
Analyzing Two Way Interactions
Three Way Interactions
Analyzing Three Way Interactions
Fitting a Polynomial Response Surface
Uncertainty:
The Impact of Uncertainty
Uncertainty Quantification: A Factory Example
Uncertainty Numbers
Uncertainty Measurements
Accounting for Uncertainty in the Model
Discussion
Conclusion
<br />
<br />
<br />
<a name="intro"></a>
Introduction
As with other notebooks in this repository, this notebook follows, more or less closely, content from Box and Draper's Empirical Model-Building and Response Surfaces (Wiley, 1984). This content is covered by Chapter 4 of Box and Draper.
In this notebook, we'll carry out an analysis of a full factorial design, and show how we can obtain information about a system and its responses, and a quantifiable range of certainty about those values. This is the fundamental idea behind empirical model-building and allows us to construct cheap and simple models to represent complex, nonlinear systems.
Once we've nailed this down for simple models and small numbers of inputs and responses, we can expand on it, use more complex models, and link this material with machine learning algorithms.
We'll start by importing numpy for numerical analysis, and pandas for convenient data containers.
End of explanation
"""
inputs_labels = {'x1' : 'Length of specimen (mm)',
'x2' : 'Amplitude of load cycle (mm)',
'x3' : 'Load (g)'}
dat = [('x1',250,350),
('x2',8,10),
('x3',40,50)]
inputs_df = pd.DataFrame(dat,columns=['index','low','high'])
inputs_df = inputs_df.set_index(['index'])
inputs_df['label'] = inputs_df.index.map( lambda z : inputs_labels[z] )
inputs_df
"""
Explanation: Box and Draper cover different experimental design methods in the book, but begin with the simplest type of factorial design in Chapter 4: a full factorial design with two levels. A factorial experimental design is appropriate for exploratory stages, when the effects of variables or their interactions on a system response are poorly understood or not quantifiable.
<a name="twolevelfactorial"></a>
Two-Level Full Factorial Design
The analysis begins with a two-level, three-variable experimental design - also written $2^3$, with $n=2$ levels for each factor, $k=3$ different factors. We start by encoding each of the three variables to something generic: $(x_1,x_2,x_3)$. A dataframe with input variable values is then populated.
End of explanation
"""
inputs_df['average'] = inputs_df.apply( lambda z : ( z['high'] + z['low'])/2 , axis=1)
inputs_df['span'] = inputs_df.apply( lambda z : ( z['high'] - z['low'])/2 , axis=1)
inputs_df['encoded_low'] = inputs_df.apply( lambda z : ( z['low'] - z['average'] )/( z['span'] ), axis=1)
inputs_df['encoded_high'] = inputs_df.apply( lambda z : ( z['high'] - z['average'] )/( z['span'] ), axis=1)
inputs_df = inputs_df.drop(['average','span'],axis=1)
inputs_df
"""
Explanation: Next, we encode the variable values. For an arbitrary variable value $\phi_i$, the value of the variable can be coded to be between -1 and 1 according to the formula:
$$
x_i = \dfrac{ \phi_i - \mbox{avg }(\phi) }{ \mbox{span }(\phi) }
$$
where the average and the span of the variable $\phi_i$ are defined as:
$$
\mbox{avg }(\phi) = \left( \dfrac{ \phi_{\text{high}} + \phi_{\text{low}} }{2} \right)
$$
$$
\mbox{span }(\phi) = \left( \dfrac{ \phi_{\text{high}} - \phi_{\text{low}} }{2} \right)
$$
End of explanation
"""
import itertools
encoded_inputs = list( itertools.product([-1,1],[-1,1],[-1,1]) )
encoded_inputs
"""
Explanation: <a name="designexperiment"></a>
Design of the Experiment
While everything preceding this point is important to state, to make sure we're being consistent and clear about our problem statement and assumptions, nothing preceding this point is particularly important to understanding how experimental design works. This is simply illustrating the process of transforming one's problem from a problem-specific problem space to a more general problem space.
<a name="inputs_responses"></a>
Inputs and Responses
Box and Draper present the results (observed outcomes) of a $2^3$ factorial experiment. The $2^3$ comes from the fact that there are 2 levels for each variable (-1 and 1) and three variables (x1, x2, and x3). The observed, or output, variable is the number of cycles to failure for a particular piece of machinery; this variable is more conveniently cast as a logarithm, as it can be a very large number.
Each observation data point consists of three input variable values and an output variable value, $(x_1, x_2, x_3, y)$, and can be thought of as a point in 3D space $(x_1,x_2,x_3)$ with an associated point value of $y$. Alternatively, this might be thought of as a point in 4D space (the first three dimensions are the location in 3D space where the point will appear, and the $y$ value is when it will actually appear).
The input variable values consist of all possible input value combinations, which we can produce using the itertools module:
End of explanation
"""
results = [(-1, -1, -1, 674),
( 1, -1, -1, 3636),
(-1, 1, -1, 170),
( 1, 1, -1, 1140),
(-1, -1, 1, 292),
( 1, -1, 1, 2000),
(-1, 1, 1, 90),
( 1, 1, 1, 360)]
results_df = pd.DataFrame(results,columns=['x1','x2','x3','y'])
results_df['logy'] = results_df['y'].map( lambda z : np.log10(z) )
results_df
"""
Explanation: Now we implement the observed outcomes; as we mentioned, these numbers are large (hundreds or thousands of cycles), and are more conveniently scaled by taking $\log_{10}()$ (which will rescale them to be integers between 1 and 4).
End of explanation
"""
real_experiment = results_df
var_labels = []
for var in ['x1','x2','x3']:
    var_label = inputs_df.ix[var]['label']
    var_labels.append(var_label)
    real_experiment[var_label] = results_df.apply(
        lambda z : inputs_df.ix[var]['low'] if z[var]<0 else inputs_df.ix[var]['high'] ,
        axis=1)
print("The values of each real variable in the experiment:")
real_experiment[var_labels]
"""
Explanation: The variable inputs_df contains all input variables for the experiment design, and results_df contains the inputs and responses; these are the encoded levels. To obtain the original, unscaled values, which lets us check what experiments must actually be run, we can always convert the dataframe back to its original values by un-applying the scaling equation — that is, by mapping each encoded level back to the corresponding low or high value of the original variable:
End of explanation
"""
# Compute the mean effect of the factor on the response,
# conditioned on each variable
labels = ['x1','x2','x3']
main_effects = {}
for key in labels:
    effects = results_df.groupby(key)['logy'].mean()
    main_effects[key] = sum( [i*effects[i] for i in [-1,1]] )
main_effects
"""
Explanation: <a name="computing_main_effects"></a>
Computing Main Effects
Now we compute the main effects of each variable using the results of the experimental design. We'll use some shorthand Pandas functions to compute these averages: the groupby function, which groups rows of a dataframe according to some condition (in this case, the value of our variable of interest $x_i$).
End of explanation
"""
import itertools
twoway_labels = list(itertools.combinations(labels, 2))
twoway_effects = {}
for key in twoway_labels:
    effects = results_df.groupby(key)['logy'].mean()
    twoway_effects[key] = sum([ i*j*effects[i][j]/2 for i in [-1,1] for j in [-1,1] ])
    # This somewhat hairy one-liner takes the mean of a set of sum-differences
    #twoway_effects[key] = mean([ sum([ i*effects[i][j] for i in [-1,1] ]) for j in [-1,1] ])
twoway_effects
"""
Explanation: <a name="analyzing_main_effects"></a>
Analyzing Main Effects
The main effect of a given variable (as defined by Yates 1937) is the average difference in the level of response as the input variable moves from the low to the high level. If there are other variables, the change in the level of response is averaged over all combinations of the other variables.
Now that we've computed the main effects, we can analyze the results to glean some meaningful information about our system. The first variable x1 has a positive effect of 0.74 - this indicates that when x1 goes from its low level to its high level, it increases the value of the response (the lifetime of the equipment). This means x1 should be increased, if we want to make our equipment last longer. Furthermore, this effect was the largest, meaning it's the variable we should consider changing first.
This might be the case if, for example, changing the value of the input variables were capital-intensive. A company might decide that they can only afford to change one variable, x1, x2, or x3. If this were the case, increasing x1 would be the way to go.
In contrast, increasing the variables x2 and x3 will result in a decrease in the lifespan of our equipment (makes the response smaller), since these have a negative main effect. These variables should be kept at their lower levels, or decreased, to increase the lifespan of the equipment.
<a name="twowayinteractions"></a>
Two-Way Interactions
In addition to main effects, a factorial design will also reveal interaction effects between variables - both two-way interactions and three-way interactions. We can use the itertools library to compute the interaction effects using the results from the factorial design.
We'll use the Pandas groupby function again, grouping by two variables this time.
End of explanation
"""
import itertools
threeway_labels = list(itertools.combinations(labels, 3))
threeway_effects = {}
for key in threeway_labels:
effects = results_df.groupby(key)['logy'].mean()
threeway_effects[key] = sum([ i*j*k*effects[i][j][k]/4 for i in [-1,1] for j in [-1,1] for k in [-1,1] ])
threeway_effects
"""
Explanation: This one-liner is a bit hairy:
twoway_effects[key] = sum([ i*j*effects[i][j]/2 for i in [-1,1] for j in [-1,1] ])
What this does is, computes the two-way variable effect with a multi-step calculation, but does it with a list comprehension. First, let's just look at this part:
i*j*effects[i][j]/2 for i in [-1,1] for j in [-1,1]
This computes the prefix i*j, which determines if the interaction effect effects[i][j] is positive or negative. We're also looping over one additional dimension; we multiply by 1/2 for each additional dimension we loop over. These are all summed up to yield the final interaction effect for every combination of the input variables.
If we were computing three-way interaction effects, we would have a similar-looking one-liner, but with i, j, and k:
i*j*k*effects[i][j][k]/4 for i in [-1,1] for j in [-1,1] for k in [-1,1]
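This pattern can be checked end-to-end on a tiny synthetic $2^2$ design with a known interaction term (the numbers are made up; `effects[(i, j)]` is the tuple-lookup equivalent of the chained `effects[i][j]` used above):

```python
import itertools
import pandas as pd

# Synthetic 2^2 design whose response contains only an interaction term:
# y = 5 + 2*x1*x2, so the two-way effect should come out as 4.
rows = [{'x1': i, 'x2': j, 'y': 5 + 2*i*j}
        for i, j in itertools.product([-1, 1], repeat=2)]
df = pd.DataFrame(rows)

effects = df.groupby(['x1', 'x2'])['y'].mean()
twoway = sum(i*j*effects[(i, j)]/2 for i in [-1, 1] for j in [-1, 1])
print(twoway)  # 4.0
```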
<a name="analyzing_twowayinteractions"></a>
Analyzing Two-Way Interactions
As with main effects, we can analyze the results of the interaction effects analysis to come to some useful conclusions about our physical system. A two-way interaction is a measure of how the main effect of one variable changes as the level of another variable changes. A negative two-way interaction between $x_2$ and $x_3$ means that if we increase $x_3$, the main effect of $x_2$ will be to decrease the response; or, alternatively, if we increase $x_2$, the main effect of $x_3$ will be to decrease the response.
In this case, we see that the $x_2-x_3$ interaction effect is the largest, and it is negative. This means that if we decrease both $x_2$ and $x_3$, it will increase our response - make the equipment last longer. In fact, all of the variable interactions have the same result - increasing both variables will decrease the lifetime of the equipment - which indicates that any gains in equipment lifetime accomplished by increasing $x_1$ will be nullified by increases to $x_2$ or $x_3$, since these variables will interact.
Once again, if we are limited in the changes that we can actually make to the equipment and input levels, we would want to keep $x_2$ and $x_3$ both at their low levels to keep the response variable value as high as possible.
<a name="threewayinteractions"></a>
Three-Way Interactions
Now let's compute the three-way effects (in this case, we can only have one three-way effect, since we only have three variables). We'll start by using the itertools library again, to create a tuple listing the three variables whose interactions we're computing. Then we'll use the Pandas groupby() feature to partition each output according to its inputs, and use it to compute the three-way effects.
End of explanation
"""
s = "yhat = "
s += "%0.3f "%(results_df['logy'].mean())
for i,k in enumerate(main_effects.keys()):
if(main_effects[k]<0):
s += "%0.3f x%d "%( main_effects[k]/2.0, i+1 )
else:
s += "+ %0.3f x%d "%( main_effects[k]/2.0, i+1 )
print(s)
"""
Explanation: <a name="analyzing_threewayinteractions"></a>
Analysis of Three-Way Effects
Explanation: <a name="analyzing_threewayinteractions"></a>
Analysis of Three-Way Effects
While three-way interactions are relatively rare, typically smaller, and harder to interpret, a negative three-way interaction essentially means that increasing these variables, all together, will lead to interactions which change the response (the lifespan of the equipment) by -0.082, which is equivalent to decreasing the lifespan of the equipment by about one cycle. However, this effect is very weak compared to the main and interaction effects.
<a name="fitting_responsesurface"></a>
Fitting a Polynomial Response Surface
While identifying general trends and the effects of different input variables on a system response is useful, it's more useful to have a mathematical model for the system. The factorial design we used is designed to get us coefficients for a linear model $\hat{y}$ that is a linear function of input variables $x_i$, and that predicts the actual system response $y$:
$$
\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_{12} x_1 x_2 + a_{13} x_1 x_3 + a_{23} x_2 x_3 + a_{123} x_1 x_2 x_3
$$
To determine these coefficients, we can obtain the effects we computed above. When we computed effects, we defined them as measuring the difference in the system response that changing a variable from -1 to +1 would have. Because this quantifies the change per two units of x, and the coefficients of a polynomial quantify the change per one unit of x, the effect must be divided by two.
End of explanation
"""
sigmasquared = 0.0050
k = len(inputs_df.index)
Vmean = (sigmasquared)/(2**k)
Veffect = (4*sigmasquared)/(2**k)
print("Variance in mean: %0.6f"%(Vmean))
print("Variance in effects: %0.6f"%(Veffect))
"""
Explanation: Thus, the final result of the experimental design matrix and the 8 experiments that were run is the following polynomial for $\hat{y}$, which is a model for $y$, the system response:
$$
\hat{y} = 2.744 - 0.295 x_1 - 0.175 x_2 + 0.375 x_3
$$
<a name="uncertainty"></a>
The Impact of Uncertainty
The main and interaction effects give us a more quantitative idea of what variables are important, yes. They can also be important for identifying where a model can be improved (if an input is linked strongly to a system response, more effort should be spent understanding the nature of the relationship).
But there are still some practical considerations missing from the implementation above. Specifically, in the real world it is impossible to know the system response, $y$, perfectly. Rather, we may measure the response with an instrument whose uncertainty has been quantified, or we may measure a quantity multiple times (or both). How do we determine the impact of that uncertainty on the model?
Ultimately, factorial designs are based on the underlying assumption that the response $y$ is a linear function of the inputs $x_i$. Thus, for the three-factor full factorial experiment design, we are collecting data and running experiments in such a way that we obtain a model $\hat{y}$ for our system response $y$, and $\hat{y}$ is a linear function of each factor:
$$
\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3
$$
The experiment design allows us to obtain a value for each coefficient $a_0$, $a_1$, etc. that will fit $\hat{y}$ to $y$ to the best of its abilities.
Thus, uncertainty in the measured responses $y$ propagates into the linear model in the form of uncertainty in the coefficients $a_0$, $a_1$, etc.
<a name="uncertainty_example"></a>
Uncertainty Quantification: Factory Example
For example, suppose that we're dealing with a machine on a factory floor, and we're measuring the system response $y$, which is a machine failure. Now, how do we know if a machine has failed? Perhaps we can't see its internals, and it still makes noise. We might find out that a machine has failed by seeing it emit smoke. But sometimes, machines will emit smoke before they fail, while other times, machines will only smoke after they've failed. We don't know exactly how many life cycles the machines went through, but we can quantify what we know. We can measure the mean $\overline{y}$ and variance $\sigma^2$ in a controlled setting, so that when a machine starts smoking, we have a probability distribution assigning probabilities to different times of failure (i.e., there is a 5% chance it failed more than 1 hour ago).
Once we obtain the variance, or $\sigma^2$, we can obtain the value of $\sigma$, which represents the distribution of uncertainty. Assuming 2 sigma is acceptable (covers 95% of cases), we can add or subtract $\sigma$ from the estimate of parameters.
<a name="uncertainty_numbers"></a>
Uncertainty Numbers
To obtain an estimate of the uncertainty, the experimentalist will typically make several measurements at the center point, that is, where all parameter levels are 0. The more samples are taken at this condition, the better characterized the distribution of uncertainty becomes. These center point samples can be used to construct a Gaussian probability distribution function, which yields a variance, $\sigma^2$ (or, to be proper, an estimate $s^2$ of the real variance $\sigma^2$). This parameter is key for quantifying uncertainty.
<a name="uncertainty_measurements"></a>
Using Uncertainty Measurements
Suppose we measure $s^2 = 0.0050$. Now what?
Now we can obtain the variance of all measurements, and the variance in the effects that we computed above. These are computed via:
$$
Var_{mean} = V(\overline{y}) = \dfrac{\sigma^2}{2^k} \
Var_{effect} = \dfrac{4 \sigma^2}{2^k}
$$
End of explanation
"""
print(np.sqrt(Vmean))
print(np.sqrt(Veffect))
"""
Explanation: Alternatively, if the responses $y$ are actually averages of a given number $r$ of $y$-observations, $\overline{y}$, then the variance will shrink:
$$
Var_{mean} = \dfrac{\sigma^2}{r 2^k} \
Var_{effect} = \dfrac{4 \sigma^2}{r 2^k}
$$
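A small numeric sketch of how replication shrinks both variances in a $2^3$ design (the replicate counts `r` are illustrative; `sigmasquared` is the estimate from the text):

```python
sigmasquared = 0.0050  # measurement variance estimate s^2 from the text
k = 3                  # three factors -> 2**k = 8 runs
for r in [1, 2, 4]:    # assumed number of replicate observations per run
    Vmean = sigmasquared / (r * 2**k)
    Veffect = 4 * sigmasquared / (r * 2**k)
    print(r, Vmean, Veffect)  # both variances fall as 1/r
```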
The variance gives us an estimate of sigma squared, and from sigma squared we can obtain sigma. A range of plus or minus one sigma captures about 68% of the probable values of $y$ around $\hat{y}$; widening the range to plus or minus two sigma captures about 95% of the probable values of $y$.
Taking the square root of the variance gives $\sigma$:
End of explanation
"""
unc_a_0 = np.sqrt(Vmean)
print(unc_a_0)
"""
Explanation: <a name="uncertainty_accounting"></a>
Accounting for Uncertainty in Model
Now we can convert the values of the effects, and the values of $\sigma$, to values for the final linear model:
$$
\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_{12} x_1 x_2 + a_{13} x_1 x_3 + a_{23} x_2 x_3 + a_{123} x_1 x_2 x_3
$$
We begin with the case where each variable value is at its middle point (all non-constant terms are 0), and
$$
\hat{y} = a_0
$$
In this case, the standard error is $\pm \sigma$ as computed for the mean (or overall) system response,
$$
\hat{y} = a_0 \pm \sigma_{mean}
$$
where $\sigma_{mean} = \sqrt{Var(mean)}$.
End of explanation
"""
|
zzsza/Datascience_School | 30. 딥러닝/03. 신경망 성능 개선.ipynb | mit |
import numpy as np
import matplotlib.pyplot as plt

sigmoid = lambda x: 1/(1+np.exp(-x))
sigmoid_prime = lambda x: sigmoid(x)*(1-sigmoid(x))
xx = np.linspace(-10, 10, 1000)
plt.plot(xx, sigmoid(xx));
plt.plot(xx, sigmoid_prime(xx));
"""
Explanation: Improving Neural Network Performance
To improve the predictive performance and convergence behavior of a neural network, the following additional techniques should be considered:
Improved error (objective) function: cross-entropy cost function
Regularization
Weight initialization
Softmax output
Choice of activation function: hyperbolic tangent and ReLU
Gradient and Convergence Speed Problems
The commonly used sum-of-squares error function has the drawback that its gradient is nearly zero over most of its domain (near-zero gradient), which slows convergence.
http://neuralnetworksanddeeplearning.com/chap3.html
$$
\begin{eqnarray}
z = \sigma (wx+b)
\end{eqnarray}
$$
$$
\begin{eqnarray}
C = \frac{(y-z)^2}{2},
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & (z-y)\sigma'(a) x \
\frac{\partial C}{\partial b} & = & (z-y)\sigma'(a)
\end{eqnarray}
$$
if $x=1$, $y=0$,
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & z \sigma'(a) \
\frac{\partial C}{\partial b} & = & z \sigma'(a)
\end{eqnarray}
$$
$\sigma'$ is close to zero in most regions.
End of explanation
"""
%cd /home/dockeruser/neural-networks-and-deep-learning/src
%ls
import mnist_loader
import network2
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
net = network2.Network([784, 30, 10], cost=network2.QuadraticCost)
net.large_weight_initializer()
%time result1 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True)
net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net.large_weight_initializer()
%time result2 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True)
plt.plot(result1[1], 'bo-', label="quadratic cost")
plt.plot(result2[1], 'rs-', label="cross-entropy cost")
plt.legend(loc=0)
plt.show()
"""
Explanation: Cross-Entropy Cost Function
One way to solve this convergence speed problem is to use a cross-entropy error function instead of the sum-of-squares form.
$$
\begin{eqnarray}
C = -\frac{1}{n} \sum_x \left[y \ln z + (1-y) \ln (1-z) \right],
\end{eqnarray}
$$
The derivatives are as follows.
$$
\begin{eqnarray}
\frac{\partial C}{\partial w_j} & = & -\frac{1}{n} \sum_x \left(
\frac{y }{z} -\frac{(1-y)}{1-z} \right)
\frac{\partial z}{\partial w_j} \
& = & -\frac{1}{n} \sum_x \left(
\frac{y}{\sigma(a)}
-\frac{(1-y)}{1-\sigma(a)} \right)\sigma'(a) x_j \
& = &
\frac{1}{n}
\sum_x \frac{\sigma'(a) x_j}{\sigma(a) (1-\sigma(a))}
(\sigma(a)-y) \
& = & \frac{1}{n} \sum_x x_j(\sigma(a)-y) \
& = & \frac{1}{n} \sum_x (z-y) x_j\ \
\frac{\partial C}{\partial b} &=& \frac{1}{n} \sum_x (z-y)
\end{eqnarray}
$$
As this equation shows, the gradient is proportional to the prediction error $z-y$, so
convergence is fast when the error is large, and
the updates shrink as the error becomes small, preventing divergence.
Cross-Entropy Implementation Example
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py
```python
Define the quadratic and cross-entropy cost functions
class QuadraticCost(object):
@staticmethod
def fn(a, y):
"""Return the cost associated with an output ``a`` and desired output
``y``.
"""
return 0.5*np.linalg.norm(a-y)**2
@staticmethod
def delta(z, a, y):
"""Return the error delta from the output layer."""
return (a-y) * sigmoid_prime(z)
class CrossEntropyCost(object):
@staticmethod
def fn(a, y):
"""Return the cost associated with an output ``a`` and desired output
``y``. Note that np.nan_to_num is used to ensure numerical
stability. In particular, if both ``a`` and ``y`` have a 1.0
in the same slot, then the expression (1-y)*np.log(1-a)
returns nan. The np.nan_to_num ensures that that is converted
to the correct value (0.0).
"""
return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a)))
@staticmethod
def delta(z, a, y):
"""Return the error delta from the output layer. Note that the
parameter ``z`` is not used by the method. It is included in
the method's parameters in order to make the interface
consistent with the delta method for other cost classes.
"""
return (a-y)
```
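A tiny numeric check of the motivation above: for a saturated neuron (large $a$), the quadratic-cost gradient is crushed by the $\sigma'(a)$ factor, while the cross-entropy gradient stays proportional to the error itself (the values are illustrative, for a single sample with $x=1$, $y=0$):

```python
import numpy as np

sigmoid = lambda x: 1/(1 + np.exp(-x))

a = 5.0                           # a strongly saturated weighted input
z = sigmoid(a)                    # neuron output, close to 1
grad_quadratic = z * z * (1 - z)  # (z - y) * sigma'(a) with y = 0
grad_crossent = z                 # (z - y) with y = 0
print(grad_quadratic, grad_crossent)  # tiny gradient vs. a gradient near 1
```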
End of explanation
"""
from ipywidgets import interactive
from IPython.display import Audio, display
def softmax_plot(z1=0, z2=0, z3=0, z4=0):
exps = np.array([np.exp(z1), np.exp(z2), np.exp(z3), np.exp(z4)])
exp_sum = exps.sum()
plt.bar(range(len(exps)), exps/exp_sum)
plt.xlim(-0.3, 4.1)
plt.ylim(0, 1)
plt.xticks([])
v = interactive(softmax_plot, z1=(-3, 5, 0.01), z2=(-3, 5, 0.01), z3=(-3, 5, 0.01), z4=(-3, 5, 0.01))
display(v)
"""
Explanation: The Overfitting Problem
Neural network models have far more parameters than most other models:
* (28x28)x(30)x(10) => 24,000
* (28x28)x(100)x(10) => 80,000
With this many parameters, the risk of overfitting increases: the cost function keeps decreasing even though the accuracy stops improving or gets worse.
Example:
python
net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net.large_weight_initializer()
net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data,
monitor_evaluation_accuracy=True, monitor_training_cost=True)
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting1.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting3.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting4.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting2.png" style="width:90%;">
L2 Regularization
To prevent this kind of overfitting, a regularization term must be added to the error function as follows.
$$
\begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln
(1-z^L_j)\right] + \frac{\lambda}{2n} \sum_i w_i^2
\end{eqnarray}
$$
or
$$
\begin{eqnarray} C = C_0 + \frac{\lambda}{2n}
\sum_i w_i^2,
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} w \
\frac{\partial C}{\partial b} & = & \frac{\partial C_0}{\partial b}
\end{eqnarray}
$$
$$
\begin{eqnarray}
w & \rightarrow & w-\eta \frac{\partial C_0}{\partial w}-\frac{\eta \lambda}{n} w \
& = & \left(1-\frac{\eta \lambda}{n}\right) w -\eta \frac{\partial C_0}{\partial w}
\end{eqnarray}
$$
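A one-line numeric sketch of the weight-decay update rule above (all values are illustrative):

```python
# One L2-regularized gradient-descent step:
# w <- (1 - eta*lmbda/n) * w - eta * dC0/dw
eta, lmbda, n = 0.5, 0.1, 1000  # learning rate, regularization strength, data size
w, grad_C0 = 2.0, 0.3           # current weight and unregularized gradient
w_new = (1 - eta*lmbda/n) * w - eta * grad_C0
print(w_new)  # slightly less than 2 - 0.15 because of the decay factor
```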
L2 Regularization Implementation Example
```python
def total_cost(self, data, lmbda, convert=False):
    """Return the total cost for the data set ``data``. The flag
    ``convert`` should be set to False if the data set is the
    training data (the usual case), and to True if the data set is
    the validation or test data. See comments on the similar (but
    reversed) convention for the ``accuracy`` method, above.
    """
    cost = 0.0
    for x, y in data:
        a = self.feedforward(x)
        if convert: y = vectorized_result(y)
        cost += self.cost.fn(a, y)/len(data)
    cost += 0.5*(lmbda/len(data))*sum(np.linalg.norm(w)**2 for w in self.weights)
    return cost

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """Update the network's weights and biases by applying gradient
    descent using backpropagation to a single mini batch. The
    ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
    learning rate, ``lmbda`` is the regularization parameter, and
    ``n`` is the total size of the training data set.
    """
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)]
```
python
net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data, lmbda = 0.1,
monitor_evaluation_cost=True, monitor_evaluation_accuracy=True,
monitor_training_cost=True, monitor_training_accuracy=True)
<img src="http://neuralnetworksanddeeplearning.com/images/regularized1.png" style="width:90%;" >
<img src="http://neuralnetworksanddeeplearning.com/images/regularized2.png" style="width:90%;" >
L1 Regularization
Instead of L2 regularization, the following L1 regularization can be used.
$$
\begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln
(1-z^L_j)\right] + \frac{\lambda}{n} \sum_i | w_i |
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} = \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} \, {\rm sgn}(w)
\end{eqnarray}
$$
$$
\begin{eqnarray}
w \rightarrow w' = w-\frac{\eta \lambda}{n} \mbox{sgn}(w) - \eta \frac{\partial C_0}{\partial w}
\end{eqnarray}
$$
Dropout Regularization
Dropout regularization randomly drops a fraction $100p$% (usually half) of the hidden-layer neurons from the optimization at each epoch. This prevents the weights from co-adapting (moving together) and produces a model-averaging effect.
<img src="http://neuralnetworksanddeeplearning.com/images/tikz31.png">
After the weight updates are finished, at test time the weights are scaled by multiplying them by $p$.
<img src="https://datascienceschool.net/upfiles/8e5177d1e7dd46a69d5b316ee8748e00.png">
Weight Initialization
As the number of inputs to a neuron $n_{in}$ grows, the standard deviation of the weighted sum $a$ also grows.
$$ \text{std}(a) \propto \sqrt{n_{in}} $$
<img src="http://neuralnetworksanddeeplearning.com/images/tikz32.png">
For example, with 1000 inputs, half of which are 1, the standard deviation is about 22.4.
$$ \sqrt{501} \approx 22.4 $$
<img src="https://docs.google.com/drawings/d/1PZwr7wS_3gg7bXtp16XaZCbvxj4tMrfcbCf6GJhaX_0/pub?w=608&h=153">
Because a large standard deviation slows convergence, the standard deviation of the initial weights should be reduced according to the number of inputs, by scaling with
$$\dfrac{1}{\sqrt{n_{in}} }$$
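A quick Monte Carlo check of this scaling argument (the sample counts are arbitrary): with naive $N(0,1)$ weights, the pre-activation standard deviation grows like $\sqrt{n_{in}}$, while scaling the weights by $1/\sqrt{n_{in}}$ keeps it near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, trials = 500, 5000  # 500 active inputs, as in the example above

# Naive initialization: w ~ N(0, 1)
a_naive = rng.standard_normal((trials, n_in)).sum(axis=1)
# Scaled initialization: w ~ N(0, 1/n_in)
a_scaled = (rng.standard_normal((trials, n_in)) / np.sqrt(n_in)).sum(axis=1)
print(a_naive.std(), a_scaled.std())  # roughly 22.4 vs. roughly 1.0
```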
Weight Initialization Implementation Example
python
def default_weight_initializer(self):
"""Initialize each weight using a Gaussian distribution with mean 0
and standard deviation 1 over the square root of the number of
weights connecting to the same neuron. Initialize the biases
using a Gaussian distribution with mean 0 and standard
deviation 1.
Note that the first layer is assumed to be an input layer, and
by convention we won't set any biases for those neurons, since
biases are only ever used in computing the outputs from later
layers.
"""
self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
self.weights = [np.random.randn(y, x)/np.sqrt(x) for x, y in zip(self.sizes[:-1], self.sizes[1:])]
<img src="http://neuralnetworksanddeeplearning.com/images/weight_initialization_30.png" style="width:90%;">
Softmax Output
The softmax function takes multiple inputs and produces multiple outputs. It rescales the outputs to sum to 1 without changing the position of the maximum, so the outputs can be given a probabilistic interpretation. It is usually applied at the final output layer of a network.
$$
\begin{eqnarray}
y^L_j = \frac{e^{a^L_j}}{\sum_k e^{a^L_k}},
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sum_j y^L_j & = & \frac{\sum_j e^{a^L_j}}{\sum_k e^{a^L_k}} = 1
\end{eqnarray}
$$
<img src="https://www.tensorflow.org/versions/master/images/softmax-regression-scalargraph.png" style="width:60%;">
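A minimal numeric softmax sketch (the max is subtracted before exponentiating for numerical stability; the input values are arbitrary):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())  # subtracting the max avoids overflow
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())  # the outputs sum to 1 and preserve the argmax position
```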
End of explanation
"""
z = np.linspace(-5, 5, 100)
a = np.tanh(z)
plt.plot(z, a)
plt.show()
"""
Explanation: Hyperbolic Tangent and Rectified Linear Unit (ReLU) Activations
Besides the sigmoid function, the hyperbolic tangent and ReLU functions can also be used as activations.
The hyperbolic tangent activation can take negative values and generally converges faster than the sigmoid activation.
$$
\begin{eqnarray}
\tanh(w \cdot x+b),
\end{eqnarray}
$$
$$
\begin{eqnarray}
\tanh(a) \equiv \frac{e^a-e^{-a}}{e^a+e^{-a}}.
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sigma(a) = \frac{1+\tanh(a/2)}{2},
\end{eqnarray}
$$
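The identity above can be verified numerically:

```python
import numpy as np

sigmoid = lambda x: 1/(1 + np.exp(-x))
a = np.linspace(-5, 5, 101)
# Check sigma(a) == (1 + tanh(a/2)) / 2 elementwise
ok = np.allclose(sigmoid(a), (1 + np.tanh(a/2)) / 2)
print(ok)  # True
```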
End of explanation
"""
z = np.linspace(-5, 5, 100)
a = np.maximum(z, 0)
plt.plot(z, a)
plt.show()
"""
Explanation: The Rectified Linear Unit (ReLU) activation allows unboundedly large activation values, and its gradient does not go to zero and vanish even when the weighted sum $a$ is large.
$$
\begin{eqnarray}
\max(0, w \cdot x+b).
\end{eqnarray}
$$
End of explanation
"""
|
bspalding/research_public | lectures/drafts/Multiple linear regression.ipynb | apache-2.0 |
# Import the libraries we'll be using
import numpy as np
import statsmodels.api as sm
# If the observations are in a dataframe, you can use statsmodels.formulas.api to do the regression instead
from statsmodels import regression
import matplotlib.pyplot as plt
# Construct and plot series
X1 = np.arange(100)
X2 = np.array([i**2 for i in range(100)]) + X1
Y = X1 + 2*X2
plt.plot(X1, label='X1')
plt.plot(X2, label='X2')
plt.plot(Y, label='Y')
plt.legend();
"""
Explanation: Multiple linear regression
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Multiple linear regression generalizes linear regression, allowing the dependent variable to be a linear function of multiple independent variables. As before, we assume that the variable $Y$ is a linear function of $X_1,\ldots, X_k$:
$$ Y_i = \beta_0 + \beta_1 X_{1i} + \ldots + \beta_k X_{ki} + \epsilon_i $$
for observations $i = 1,2,\ldots, n$. We solve for the coefficients by using the method of ordinary least-squares, trying to minimize the error $\sum_{i=1}^n \epsilon_i^2$ to find the (hyper)plane of best fit. Once we have the coefficients, we can predict values of $Y$ outside of our observations.
Each coefficient $\beta_j$ tells us how much $Y_i$ will change if we change $X_j$ by one while holding all of the other independent variables constant. This lets us separate out the contributions of different effects.
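A self-contained sketch of the same least-squares idea with NumPy alone, on made-up data with known coefficients (the variable names and values here are illustrative):

```python
import numpy as np

# Fabricated data: Y = 2.0 + 1.5*X1 - 0.5*X2 + small noise
rng = np.random.default_rng(1)
n = 200
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 2.0 + 1.5*X1 - 0.5*X2 + rng.normal(scale=0.01, size=n)

X = np.column_stack([np.ones(n), X1, X2])    # design matrix with an intercept column
beta = np.linalg.lstsq(X, Y, rcond=None)[0]  # ordinary least squares
print(beta)  # approximately [2.0, 1.5, -0.5]
```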
We start by artificially constructing a $Y$ for which we know the result.
End of explanation
"""
# Use column_stack to combine independent variables, then add a column of ones so we can fit an intercept
results = regression.linear_model.OLS(Y, sm.add_constant(np.column_stack((X1,X2)))).fit()
print 'Beta_0:', results.params[0], 'Beta_1:', results.params[1], ' Beta_2:', results.params[2]
"""
Explanation: We can use the same function from statsmodels as we did for a single linear regression.
End of explanation
"""
# Load pricing data for two arbitrarily-chosen assets and SPY
start = '2014-01-01'
end = '2015-01-01'
asset1 = get_pricing('DTV', fields='price', start_date=start, end_date=end)
asset2 = get_pricing('FISV', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)
# First, run a linear regression on the two assets
slr = regression.linear_model.OLS(asset1, sm.add_constant(asset2)).fit()
print 'SLR beta of asset2:', slr.params[1]
# Run multiple linear regression using asset2 and SPY as independent variables
mlr = regression.linear_model.OLS(asset1, sm.add_constant(np.column_stack((asset2, benchmark)))).fit()
prediction = mlr.params[0] + mlr.params[1]*asset2 + mlr.params[2]*benchmark
print 'MLR beta of asset2:', mlr.params[1], ' MLR beta of S&P 500', mlr.params[2]
# Plot the three variables along with the prediction given by the MLR
asset1.plot()
asset2.plot()
benchmark.plot()
prediction.plot(color='y')
plt.legend(bbox_to_anchor=(1,1), loc=2);
# Plot only the dependent variable and the prediction to get a closer look
asset1.plot()
prediction.plot(color='y');
"""
Explanation: The same care must be taken with these results as with partial derivatives. The formula for $Y$ is ostensibly $3X_1$ plus a parabola. However, the coefficient of $X_1$ is 1. That is because $Y$ changes by 1 if we change $X_1$ by 1 <i>while holding $X_2$ constant</i>. Multiple linear regression separates out contributions from different variables, so that the coefficient of $X_1$ is different from what it would be if we ran a single linear regression of $Y$ on $X_1$ alone.
Similarly, running a linear regression on two securities might give a high $\beta$. However, if we bring in a third security (like SPY, which tracks the S&P 500) as an independent variable, we may find that the correlation between the first two securities is almost entirely due to them both being correlated with the S&P 500. This is useful because the S&P 500 may then be a more reliable predictor of both securities than they were of each other. We can also better see whether the correlation between the two securities is significant.
End of explanation
"""
mlr.summary()
"""
Explanation: Evaluating
We can get some statistics about the fit from the result returned by the regression:
End of explanation
"""
|
liganega/Gongsu-DataSci | previous/notes2017/W10/GongSu23_Statistics_Correlation.ipynb | mit |
from GongSu22_Statistics_Population_Variance import *
"""
Explanation: Data source: the material covered here was created with reference to the site below.
https://github.com/rouseguy/intro2stats
Correlation Analysis
Notes
We will build on the material from Chapters 21 and 22 covered last time.
Therefore the Python files containing the Chapter 21 and 22 modules must be imported.
Note: the following two files must be located in the same directory.
* GongSu21_Statistics_Averages.py
* GongSu22_Statistics_Population_Variance.py
End of explanation
"""
prices_pd.head()
"""
Explanation: Note
Importing the module above automatically imports the following module as well.
GongSu21_Statistics_Averages.py
Main Topics
Correlation analysis
Covariance
Correlation and causation
Main Example
We analyze in more detail the wholesale cannabis price data from the 51 US states covered in Chapter 21.
In particular, we study the correlation between wholesale cannabis prices traded in California and those traded in New York.
Today's Data
Wholesale cannabis prices and sale dates by state: Weed_Price.csv
The figure below shows part of the Weed_Price.csv file, which contains state-by-state cannabis sales data, opened in Excel.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/weed_price.png" style="width:600">
</td>
</tr>
</table>
</p>
Note: the file mentioned above is stored in the variable prices_pd in the GongSu21_Statistics_Averages module.
It is also already sorted by state (State) and trade date (date).
Therefore, as shown below, the first five rows of prices_pd contain the five earliest trades from Alabama, the state that comes first alphabetically.
End of explanation
"""
ny_pd = prices_pd[prices_pd['State'] == 'New York'].copy(True)
ny_pd.head(10)
"""
Explanation: What Is Correlation Analysis?
Correlation analysis is a method for analyzing what kind of relationship exists between two data sets.
When two data sets are related, we can compute their correlation; the Pearson correlation coefficient is the most common measure of its strength. To compute the correlation coefficient, the covariance must be computed first.
Covariance
Given two data sets x and y, the covariance describes how changes in one data set relate to changes in the other.
The covariance is computed by the formula below.
$$Cov(x, y) = \frac{\Sigma_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{n-1}$$
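A tiny standalone check of this formula against NumPy's built-in np.cov (the data values are made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
# Sample covariance with the (n - 1) denominator, as in the formula above
cov = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)
print(cov, np.cov(x, y)[0, 1])  # both give 10/3
```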
Covariance of high-quality (HighQ) wholesale cannabis prices traded in California and New York
Preparation: cleaning the New York data
First, extract the wholesale cannabis price information for New York and store it in the variable ny_pd.
As with california_pd, we use mask indexing.
End of explanation
"""
ny_pd_HighQ = ny_pd.iloc[:, [1, 7]]
"""
Explanation: Now we use integer indexing to extract only the high-quality (HighQ) information.
End of explanation
"""
ny_pd_HighQ.columns = ['NY_HighQ', 'date']
ny_pd_HighQ.head()
"""
Explanation: The integer indexing used in the code above works as follows.
[:, [1, 7]]
The ':' part: selects all rows.
The '[1, 7]' part: selects columns 1 and 7.
The result is a slice that extracts only columns 1 and 7 in full.
Now we rename each column. Since these are high-quality (HighQ) prices traded in New York, we name the column NY_HighQ.
End of explanation
"""
ca_pd_HighQ = california_pd.iloc[:, [1, 7]]
ca_pd_HighQ.head()
"""
Explanation: Preparation: cleaning the California data
We do the same for the high-quality (HighQ) wholesale cannabis prices traded in California.
End of explanation
"""
ca_ny_pd = pd.merge(ca_pd_HighQ, ny_pd_HighQ, on="date")
ca_ny_pd.head()
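A minimal sketch of the same merge on a shared date key (the price values here are made up):

```python
import pandas as pd

left = pd.DataFrame({'CA_HighQ': [248.0, 247.9], 'date': ['2014-01-01', '2014-01-02']})
right = pd.DataFrame({'NY_HighQ': [350.1, 349.8], 'date': ['2014-01-01', '2014-01-02']})
merged = pd.merge(left, right, on='date')  # keeps rows whose date appears in both
print(merged.columns.tolist())  # ['CA_HighQ', 'date', 'NY_HighQ']
```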
"""
Explanation: Preparation: merging the two cleaned data sets
Now we merge the two tables on the date column, using it as the join key.
End of explanation
"""
ca_ny_pd.rename(columns={"HighQ": "CA_HighQ"}, inplace=True)
ca_ny_pd.head()
"""
Explanation: Rename California's HighQ column to CA_HighQ.
End of explanation
"""
ny_mean = ca_ny_pd.NY_HighQ.mean()
ny_mean
"""
Explanation: Preparation: setting up the covariance calculation with the merged data
First, compute the mean of the high-quality (HighQ) wholesale cannabis prices traded in New York.
End of explanation
"""
ca_ny_pd['ca_dev'] = ca_ny_pd['CA_HighQ'] - ca_mean
ca_ny_pd.head()
ca_ny_pd['ny_dev'] = ca_ny_pd['NY_HighQ'] - ny_mean
ca_ny_pd.head()
"""
Explanation: Now we add new columns to the ca_ny_pd table, named ca_dev and ny_dev.
ca_dev: intermediate values for the California data used in the covariance calculation
ny_dev: intermediate values for the New York data used in the covariance calculation
That is, this step computes the lists of values used in the numerator of the formula below.
$$Cov(x, y) = \frac{\Sigma_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{n-1}$$
End of explanation
"""
ca_ny_cov = (ca_ny_pd['ca_dev'] * ca_ny_pd['ny_dev']).sum() / (ca_count - 1)
ca_ny_cov
"""
Explanation: Covariance of high-quality (HighQ) wholesale cannabis prices traded in California and New York
Now the covariance can be computed easily.
Note:
* Operations on DataFrame objects are elementwise, like operations on NumPy arrays.
* Remember the use of the sum method.
End of explanation
"""
ca_highq_std = ca_ny_pd.CA_HighQ.std()
ny_highq_std = ca_ny_pd.NY_HighQ.std()
ca_ny_corr = ca_ny_cov / (ca_highq_std * ny_highq_std)
ca_ny_corr
"""
Explanation: Pearson Correlation Coefficient
The Pearson correlation coefficient measures the degree of association between two variables.
The correlation coefficient (r) of two variables x and y = the ratio of how much x and y vary together to how much they vary separately.
That is, $$r = \frac{Cov(X, Y)}{s_x\cdot s_y}$$
Meaning:
r = 1: X and Y are perfectly identical.
r = 0: X and Y are unrelated.
r = -1: X and Y are perfectly identical in opposite directions.
It is also used to describe linear relationships:
-1.0 <= r < -0.7: strong negative linear relationship
-0.7 <= r < -0.3: clear negative linear relationship
-0.3 <= r < -0.1: weak negative linear relationship
-0.1 <= r <= 0.1: negligible relationship
0.1 < r <= +0.3: weak positive linear relationship
0.3 < r <= 0.7: clear positive linear relationship
0.7 < r <= 1.0: strong positive linear relationship
위 선형관계 설명은 일반적으로 통용되지만 예외가 존재할 수도 있다.
예를 들어, 아래 네 개의 그래프는 모두 피어슨 상관계수가 0.816이지만, 전혀 다른 상관관계를 보여주고 있다.
(출처: https://en.wikipedia.org/wiki/Correlation_and_dependence)
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/pearson_relation.png" style="width:600">
</td>
</tr>
</table>
</p>
Computing the correlation coefficient of the HighQ cannabis wholesale prices traded in California and New York
End of explanation
"""
california_pd.describe()
"""
Explanation: Correlation and causation
Correlation: a relationship showing that two variables are associated, i.e., that some connection exists between them.
For example, there appears to be some relationship between the HighQ cannabis wholesale price in California and the one in New York State.
When the price rises in California, the price in New York rises similarly. The correlation is about 0.979, a very strong positive linear relationship.
Causation: a relationship in which the two variables actually influence each other or are genuinely connected.
Caution: correlation between two variables does not necessarily mean that one variable influences the other, or that they are actually connected.
For example, although the cannabis wholesale prices in California and New York are clearly related, that fact alone gives no evidence that a price change on one side causes a price change on the other.
Exercises
Exercise
Point estimates of the population variance and standard deviation are already implemented as methods of pandas' DataFrame type.
Calling the describe() method on california_pd, which holds the sample of cannabis wholesale prices traded in California, produces the result below.
count: the total frequency, i.e., the sample size
mean: the mean
std: point estimate of the population standard deviation
min: the sample minimum
25%: the lower quartile (the value separating the bottom quarter of the sample)
50%: the median
75%: the upper quartile (the value separating the top quarter of the sample)
max: the maximum
End of explanation
"""
ca_ny_pd.cov()
"""
Explanation: Exercise
A point estimate of the covariance is already implemented as a method of pandas' DataFrame type.
Calling the cov() method on ca_ny_pd, which holds the samples of cannabis wholesale prices traded in California and New York, produces the result below.
End of explanation
"""
ca_ny_pd.corr()
"""
Explanation: In the table above, the value where CA_HighQ and NY_HighQ intersect matches the covariance computed earlier.
Exercise
A point estimate of the correlation coefficient is already implemented as a method of pandas' DataFrame type.
Calling the corr() method on ca_ny_pd, which holds the samples of cannabis wholesale prices traded in California and New York, produces the result below.
End of explanation
"""
|
lin99/NLPTM-2016 | 4.Docs/assign1.ipynb | mit | # import word2vec model from gensim
from gensim.models.word2vec import Word2Vec
# load pre-trained model
model = Word2Vec.load_word2vec_format('eswikinews.bin', binary=True)
"""
Explanation: NLP and TM Module 4
Workshop 1: word2vec
Names:
Obtain the file of the word2vec model trained on Spanish WikiNews: eswikinews.bin
End of explanation
"""
def presidents_comp(country):
    ### Your code goes here
return []
for country in ['colombia', 'venezuela', 'ecuador', 'brasil', 'argentina', 'chile']:
print country
for president in presidents_comp(country):
print ' ', president
"""
Explanation: 1. Comparing compositionality and analogy.
Compositionality and analogy are two different mechanisms that can be used with distributed representations. The idea is to use compositionality and analogy independently to solve the same problem: finding the president of a given country.
First we will use compositionality. The following function must receive the name of a country and return a list of words that may correspond to presidents.
For example, if the function is called with 'ecuador' as the argument:
```python
presidents_comp('ecuador')
[u'jamil_mahuad',
u'presidencia',
u'jose_maria_velasco_ibarra',
u'republica',
u'rafael_correa',
u'gustavo_noboa',
u'lucio_gutierrez',
u'abdala_bucaram',
u'vicepresidente',
u'gabriel_garcia_moreno']
```
End of explanation
"""
def presidents_analogy(country):
    ### Your code goes here
return []
for country in ['colombia', 'venezuela', 'ecuador', 'brasil', 'argentina', 'chile']:
print country
for president in presidents_analogy(country):
print ' ', president
"""
Explanation: The next step is to use analogies to find the president of a given country.
End of explanation
"""
def antonimo(palabra):
    ### Your code goes here
return '-'
for palabra in ['blanco', 'menor', 'rapido', 'arriba']:
print palabra, antonimo(palabra)
"""
Explanation: Which version works better? Explain clearly. Why do you think that is the case?
3. Write a function that computes the antonym of a word
End of explanation
"""
model.doesnt_match("azul rojo abajo verde".split())
"""
Explanation: Find more examples where it works, and others where it does not. Explain.
4. One of these things is not like the others...
Gensim provides the doesnt_match function, which finds the word that is out of place within a list of words. For example:
End of explanation
"""
print model.similarity('azul', 'rojo')
print model.similarity('azul', 'abajo')
import numpy as np
def no_es_como_las_otras(lista):
    ### Your code goes here
return '-'
print no_es_como_las_otras("azul rojo abajo verde".split())
"""
Explanation: The idea is to implement the same functionality ourselves. The constraint is that we may only use Gensim's
similarity function, which computes the similarity of two words:
End of explanation
"""
|
flaviobarros/spyre | tutorial/pydata2015_seattle/pydata2015_seattle.ipynb | mit | from spyre import server
class SimpleApp(server.App):
title = "Simple App"
app = SimpleApp()
app.launch() # launching from ipython notebook is not recommended
"""
Explanation: twitter: @adamhajari
github: github.com/adamhajari/spyre
this notebook: http://bit.ly/pydata2015_spyre
Before we start
make sure you have the latest version of spyre
pip install --upgrade dataspyre
there have been recent changes to spyre, so if you installed more than a day ago, go ahead and upgrade
Who Am I?
Adam Hajari
Data Scientist on the Next Big Sound team at Pandora
adam@nextbigsound.com
@adamhajari
Simple Interactive Web Applications with Spyre
Spyre is a web application framework for turning static data tables and plots into interactive web apps. Spyre was motivated by <a href="http://shiny.rstudio.com/">Shiny</a>, a similar framework for R created by the developers of Rstudio.
Where does Spyre Live?
GitHub: <a href='https://github.com/adamhajari/spyre'>github.com/adamhajari/spyre</a>
Live example of a spyre app:
- <a href='http://adamhajari.com'>adamhajari.com</a>
- <a href='http://dataspyre.herokuapp.com'>dataspyre.herokuapp.com</a>
- <a href='https://spyre-gallery.herokuapp.com'>spyre-gallery.herokuapp.com</a>
Installing Spyre
Spyre depends on:
- cherrypy (server and backend)
- jinja2 (html and javascript templating)
- matplotlib (displaying plots and images)
- pandas (for working within tabular data)
Assuming you don't have any issues with the above dependencies, you can install spyre via pip:
bash
$ pip install dataspyre
Launching a Spyre App
Spyre's server module has an App class that every Spyre app needs to inherit. Use the app's launch() method to deploy your app.
End of explanation
"""
from spyre import server
class SimpleApp(server.App):
inputs = [{ "type":"text",
"key":"words",
"label": "write here",
"value":"hello world"}]
app = SimpleApp()
app.launch()
"""
Explanation: If you put the above code in a file called simple_app.py you can launch the app from the command line with
$ python simple_app.py
Make sure you uncomment the last line first.
A Very Simple Example
There are two variables of the App class that need to be overridden to create the UI for a Spyre app: inputs and outputs (a third optional type called controls that we'll get to later). All three variables are lists of dictionaries which specify each component's properties. For instance, to create a text box input, overide the App's inputs variable:
End of explanation
"""
from spyre import server
class SimpleApp(server.App):
inputs = [{ "type":"text",
"key":"words",
"label": "write here",
"value":"hello world"}]
outputs = [{"type":"html",
"id":"some_html"}]
app = SimpleApp()
app.launch()
"""
Explanation: Now let's add an output. We first need to list all of our outputs and their attributes in the outputs list.
End of explanation
"""
from spyre import server
class SimpleApp(server.App):
title = "Simple App"
inputs = [{ "type":"text",
"key":"words",
"label": "write here",
"value":"hello world"}]
outputs = [{"type":"html",
"id":"some_html"}]
def getHTML(self, params):
words = params['words']
return "here are the words you wrote: <b>%s</b>"%words
app = SimpleApp()
app.launch()
"""
Explanation: To generate the output, we can override a server.App method specific to that output type. In the case of html output, we overide the getHTML method. Each output method should return an object specific to that output type. In the case of html output, we just return a string.
End of explanation
"""
from spyre import server
class SimpleApp(server.App):
title = "Simple App"
inputs = [{ "type":"text",
"key":"words",
"label": "write here",
"value":"hello world"}]
outputs = [{"type":"html",
"id":"some_html",
"control_id":"button1"}]
controls = [{"type":"button",
"label":"press to update",
"id":"button1"}]
def getHTML(self, params):
words = params['words']
return "here are the words you wrote: <b>%s</b>"%words
app = SimpleApp()
app.launch()
"""
Explanation: Great. We've got inputs and outputs, but we're not quite finished. As it is, the content of our output is static. That's because the output doesn't know when it needs to get updated. We can fix this in one of two ways:
1. We can add a button to our app and tell our output to update whenever the button is pressed.
2. We can add an action_id to our input that references the output that we want refreshed when the input value changes.
Let's see what the first approach looks like.
End of explanation
"""
from spyre import server
class SimpleApp(server.App):
title = "Simple App"
inputs = [{ "type":"text",
"key":"words",
"label": "write here",
"value":"look ma, no buttons",
"action_id":"some_html"}]
outputs = [{"type":"html",
"id":"some_html"}]
def getHTML(self, params):
words = params['words']
return "here are the words you wrote: <b>%s</b>"%words
app = SimpleApp()
app.launch()
"""
Explanation: Our app now has a button with id "button1", and our output references our control's id, so that when we press the button we update the output with the most current input values.
<img src="input_output_control.png">
Is a button a little overkill for this simple app? Yeah, probably. Let's get rid of it and have the output update just by changing the value in the text box. To do this we'll add an action_id attribute to our input dictionary that references the output's id.
End of explanation
"""
%pylab inline
import pandas as pd
import urllib2
import json
def getData(params):
ticker = params['ticker']
# make call to yahoo finance api to get historical stock data
api_url = 'https://chartapi.finance.yahoo.com/instrument/1.0/{}/chartdata;type=quote;range=3m/json'.format(ticker)
result = urllib2.urlopen(api_url).read()
data = json.loads(result.replace('finance_charts_json_callback( ','')[:-1]) # strip away the javascript and load json
company_name = data['meta']['Company-Name']
df = pd.DataFrame.from_records(data['series'])
df['Date'] = pd.to_datetime(df['Date'],format='%Y%m%d')
return df.drop('volume',axis=1)
params = {'ticker':'GOOG'}
df = getData(params)
df.head()
"""
Explanation: Now the output gets updated with a change to the input.
<img src="no_control.png">
Another Example
Let's suppose you've written a function to grab historical stock price data from the web. Your function returns a pandas dataframe.
End of explanation
"""
from spyre import server
import pandas as pd
import urllib2
import json
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{ "type":'dropdown',
"label": 'Company',
"options" : [ {"label": "Google", "value":"GOOG"},
{"label": "Yahoo", "value":"YHOO"},
{"label": "Apple", "value":"AAPL"}],
"key": 'ticker',
"action_id": "table_id"}]
outputs = [{ "type" : "table",
"id" : "table_id"}]
def getData(self, params):
ticker = params['ticker']
# make call to yahoo finance api to get historical stock data
api_url = 'https://chartapi.finance.yahoo.com/instrument/1.0/{}/chartdata;type=quote;range=3m/json'.format(ticker)
result = urllib2.urlopen(api_url).read()
data = json.loads(result.replace('finance_charts_json_callback( ','')[:-1]) # strip away the javascript and load json
self.company_name = data['meta']['Company-Name']
df = pd.DataFrame.from_records(data['series'])
df['Date'] = pd.to_datetime(df['Date'],format='%Y%m%d')
return df.drop('volume',axis=1)
app = StockExample()
app.launch()
"""
Explanation: Let's turn this into a spyre app. We'll use a dropdown menu input this time and start by displaying the data in a table. In the previous example we overrode the getHTML method and had it return a string to generate HTML output. To get a table output we need to override the getData method and have it return a pandas dataframe (conveniently, we've already done that!)
End of explanation
"""
df.plot()
"""
Explanation: One really convenient feature of pandas is that you can plot directly from a dataframe using the plot method.
End of explanation
"""
from spyre import server
import pandas as pd
import urllib2
import json
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{ "type":'dropdown',
"label": 'Company',
"options" : [ {"label": "Google", "value":"GOOG"},
{"label": "Yahoo", "value":"YHOO"},
{"label": "Apple", "value":"AAPL"}],
"key": 'ticker'}]
controls = [{ "type" : "button",
"label":"get stock data",
"id" : "update_data"}]
outputs = [{ "type" : "plot",
"id" : "plot",
"control_id" : "update_data"},
{ "type" : "table",
"id" : "table_id",
"control_id" : "update_data"}]
def getData(self, params):
ticker = params['ticker']
# make call to yahoo finance api to get historical stock data
api_url = 'https://chartapi.finance.yahoo.com/instrument/1.0/{}/chartdata;type=quote;range=3m/json'.format(ticker)
result = urllib2.urlopen(api_url).read()
data = json.loads(result.replace('finance_charts_json_callback( ','')[:-1]) # strip away the javascript and load json
self.company_name = data['meta']['Company-Name']
df = pd.DataFrame.from_records(data['series'])
df['Date'] = pd.to_datetime(df['Date'],format='%Y%m%d')
return df.drop(['volume'],axis=1)
app = StockExample()
app.launch()
"""
Explanation: Let's take advantage of this convenience and add a plot to our app. To generate a plot output, we need to add another dictionary to our list of outputs.
End of explanation
"""
from spyre import server
import pandas as pd
import urllib2
import json
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{ "type":'dropdown',
"label": 'Company',
"options" : [ {"label": "Google", "value":"GOOG"},
{"label": "Yahoo", "value":"YHOO"},
{"label": "Apple", "value":"AAPL"}],
"key": 'ticker'}]
controls = [{ "type" : "button",
"label":"get stock data",
"id" : "update_data"}]
outputs = [{ "type" : "plot",
"id" : "plot",
"control_id" : "update_data"},
{ "type" : "table",
"id" : "table_id",
"control_id" : "update_data"}]
def getData(self, params):
ticker = params['ticker']
# make call to yahoo finance api to get historical stock data
api_url = 'https://chartapi.finance.yahoo.com/instrument/1.0/{}/chartdata;type=quote;range=3m/json'.format(ticker)
result = urllib2.urlopen(api_url).read()
data = json.loads(result.replace('finance_charts_json_callback( ','')[:-1]) # strip away the javascript and load json
self.company_name = data['meta']['Company-Name']
df = pd.DataFrame.from_records(data['series'])
df['Date'] = pd.to_datetime(df['Date'],format='%Y%m%d')
return df.drop(['volume'],axis=1)
def getPlot(self, params):
df = self.getData(params)
plt_obj = df.set_index('Date').plot()
plt_obj.set_ylabel("Price")
plt_obj.set_title(self.company_name)
return plt_obj
app = StockExample()
app.launch()
"""
Explanation: Notice that we didn't have to add a new method for our plot output. getData is pulling double duty here serving the data for our table and our plot. If you wanted to alter the data or the plot object, you could do that by overriding the getPlot method. Under the hood, if you don't specify a getPlot method for your plot output, server.App's built-in getPlot method will look for a getData method, and just return the result of calling the plot() method on its dataframe.
End of explanation
"""
from spyre import server
import pandas as pd
import urllib2
import json
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{ "type":'dropdown',
"label": 'Company',
"options" : [ {"label": "Google", "value":"GOOG"},
{"label": "Yahoo", "value":"YHOO"},
{"label": "Apple", "value":"AAPL"}],
"key": 'ticker',
"action_id": "update_data"}]
controls = [{ "type" : "hidden",
"id" : "update_data"}]
tabs = ["Plot", "Table"]
outputs = [{ "type" : "plot",
"id" : "plot",
"control_id" : "update_data",
"tab" : "Plot"},
{ "type" : "table",
"id" : "table_id",
"control_id" : "update_data",
"tab" : "Table" }]
def getData(self, params):
ticker = params['ticker']
# make call to yahoo finance api to get historical stock data
api_url = 'https://chartapi.finance.yahoo.com/instrument/1.0/{}/chartdata;type=quote;range=3m/json'.format(ticker)
result = urllib2.urlopen(api_url).read()
data = json.loads(result.replace('finance_charts_json_callback( ','')[:-1]) # strip away the javascript and load json
self.company_name = data['meta']['Company-Name']
df = pd.DataFrame.from_records(data['series'])
df['Date'] = pd.to_datetime(df['Date'],format='%Y%m%d')
return df.drop('volume',axis=1)
def getPlot(self, params):
df = self.getData(params)
plt_obj = df.set_index('Date').plot()
plt_obj.set_ylabel("Price")
plt_obj.set_title(self.company_name)
fig = plt_obj.get_figure()
return fig
app = StockExample()
app.launch()
"""
Explanation: Finally we'll put each of the outputs in separate tabs and add an action_id to the dropdown input that references the "update_data" control. Now, a change to the input state triggers the button to be "clicked". This makes the existence of a "button" superfluous, so we'll change the control type to "hidden".
End of explanation
"""
|
igabr/Metis_Projects_Chicago_2017 | 03-Project-McNulty/feature_reduction_35.ipynb | mit | df = unpickle_object("dummied_dataset.pkl")
df.shape
#this logic will be important for flask data entry.
float_columns = df.select_dtypes(include=['float64']).columns
for col in float_columns:
if "mths" not in col:
df[col].fillna(df[col].median(), inplace=True)
else:
if col == "inq_last_6mths":
df[col].fillna(0, inplace=True)
elif col == "mths_since_last_delinq":
df[col].fillna(999, inplace=True)
elif col == "mths_since_last_record":
df[col].fillna(999, inplace=True)
elif col == "collections_12_mths_ex_med":
df[col].fillna(0, inplace=True)
elif col == "mths_since_last_major_derog":
df[col].fillna(999, inplace=True)
elif col == "mths_since_rcnt_il":
df[col].fillna(999, inplace=True)
elif col == "acc_open_past_24mths":
df[col].fillna(0, inplace=True)
elif col == "chargeoff_within_12_mths":
df[col].fillna(0, inplace=True)
elif col == "mths_since_recent_bc":
df[col].fillna(999, inplace=True)
elif col == "mths_since_recent_bc_dlq":
df[col].fillna(999, inplace=True)
elif col == "mths_since_recent_inq":
df[col].fillna(999, inplace=True)
elif col == "mths_since_recent_revol_delinq":
df[col].fillna(999, inplace=True)
top_35 = ["int_rate",
"dti",
"term_ 60 months",
"bc_open_to_buy",
"revol_util",
"installment",
"avg_cur_bal",
"tot_hi_cred_lim",
"revol_bal",
"funded_amnt_inv",
"bc_util",
"tot_cur_bal",
"total_bc_limit",
"total_rev_hi_lim",
"funded_amnt",
"loan_amnt",
"mo_sin_old_rev_tl_op",
"total_bal_ex_mort",
"issue_d_Dec-2016",
"total_acc",
"mo_sin_old_il_acct",
"mths_since_recent_bc",
"total_il_high_credit_limit",
"inq_last_6mths",
"acc_open_past_24mths",
"mo_sin_rcnt_tl",
"mo_sin_rcnt_rev_tl_op",
"percent_bc_gt_75",
"num_rev_accts",
"mths_since_last_delinq",
"open_acc",
"mths_since_recent_inq",
"grade_B",
"num_bc_tl",
"loan_status_Late"]
df_reduced_features = df.loc[:, top_35]
df_reduced_features.shape
scaler = StandardScaler()
matrix_df = df_reduced_features.as_matrix()
matrix = scaler.fit_transform(matrix_df)
scaled_df = pd.DataFrame(matrix, columns=df_reduced_features.columns)
scaler = StandardScaler()
matrix_df = df_reduced_features.as_matrix()
scalar_object_35 = scaler.fit(matrix_df)
matrix = scalar_object_35.transform(matrix_df)
scaled_df_35 = pd.DataFrame(matrix, columns=df_reduced_features.columns)
check = scaled_df_35 == scaled_df  # verify both scaling approaches agree; let's pickle the scaler next
check.head()
pickle_object(scalar_object_35, "scaler_35_features")
pickle_object(scaled_df, "rf_df_35")
upload_to_bucket('rf_df_35.pkl', "rf_df_35.pkl","gabr-project-3")
upload_to_bucket("scaler_35_features.pkl", "scaler_35_features.pkl", "gabr-project-3")
df = unpickle_object("rf_df_35.pkl")
engine = create_engine(os.environ["PSQL_CONN"])
df.to_sql("dummied_dataset", con=engine)
"""
Explanation: This notebook will select the top 35 features from our dataset.
I will rescale the resulting columns - while I am keenly aware this makes no difference to the Random Forest Model, I am just doing it for consistency.
I also pickle the scaler as we will be using this in our flask web app to transform the input data.
End of explanation
"""
pd.read_sql_query('''SELECT * FROM dummied_dataset LIMIT 5''', engine)
"""
Explanation: Below we query the database directly: nothing has to be held in memory again!
End of explanation
"""
|
jbwhit/jupyter-tips-and-tricks | notebooks/08-old.ipynb | mit | df2 = df[df['Mine_State'] != "Wyoming"].groupby('Mine_State').sum()
df3 = df.groupby('Mine_State').sum()
# have to run this from the home dir of this repo
# cd insight/
# python setup.py develop
%aimport insight.plotting
insight.plotting.plot_prod_vs_hours(df3, color_index=1)
# insight.plotting.plot_prod_vs_hours(df2, color_index=1)
def plot_prod_vs_hours(
df, color_index=0, output_file="../img/production-vs-hours-worked.png"
):
fig, ax = plt.subplots(figsize=(10, 8))
sns.regplot(
df["Labor_Hours"],
df["Production_short_tons"],
ax=ax,
color=sns.color_palette()[color_index],
)
ax.set_xlabel("Labor Hours Worked")
ax.set_ylabel("Total Amount Produced")
x = ax.set_xlim(-9506023.213266129, 204993853.21326613)
y = ax.set_ylim(-51476801.43653282, 746280580.4034251)
fig.tight_layout()
fig.savefig(output_file)
plot_prod_vs_hours(df2, color_index=0)
plot_prod_vs_hours(df3, color_index=1)
# make a change via qgrid
df3 = qgrid_widget.get_changed_df()
"""
Explanation: QGrid
Interactive pandas dataframes: https://github.com/quantopian/qgrid
pip install qgrid --upgrade
End of explanation
"""
qgrid_widget = qgrid.show_grid(
df2[["Year", "Labor_Hours", "Production_short_tons"]],
show_toolbar=True,
)
qgrid_widget
"""
Explanation: Github
https://github.com/jbwhit/jupyter-tips-and-tricks/commit/d3f2c0cef4dfd28eb3b9077595f14597a3022b1c?short_path=04303fc#diff-04303fce5e9bb38bcee25d12d9def22e
End of explanation
"""
|
jdhp-docs/python-notebooks | python_re.ipynb | mit | s = "Maison 3 pièce(s) - 68.05 m² - 860 € par mois charges comprises"
import re

re.findall(r'\d+\.?\d*', s)
re.findall(r'\b\d+\.?\d*\b', s)
"""
Explanation: Extract numbers from a string
End of explanation
"""
s = "Maison 3 pièce(s) - 68.05 m² - 860 € par mois charges comprises"
if re.search(r'Maison', s):
print("Found!")
else:
print("Not found!")
if re.search(r'Appartement', s):
print("Found!")
else:
print("Not found!")
if re.match(r'Maison', s):
print("Found!")
else:
print("Not found!")
"""
Explanation: Search patterns
See:
- https://docs.python.org/3/library/re.html#search-vs-match
- https://stackoverflow.com/questions/180986/what-is-the-difference-between-pythons-re-search-and-re-match
End of explanation
"""
s = "Maison 3 pièce(s) - 68.05 m² - 860 € par mois charges comprises"
m = re.search(r'\b(\d+) pièce', s)
if m:
print(int(m.group(1)))
else:
print("Not found!")
m = re.search(r'\b(\d+\.?\d*) m²', s)
if m:
print(float(m.group(1)))
else:
print("Not found!")
m = re.search(r'\b(\d+\.?\d*) €', s)
if m:
print(float(m.group(1)))
else:
print("Not found!")
"""
Explanation: Search and capture patterns
End of explanation
"""
s = "Maison 3 PIÈce(s) - 68.05 m² - 860 € par mois charges comprises"
"""
Explanation: Case insensitive search
End of explanation
"""
m = re.search(r'\b(\d+) pièce', s, re.IGNORECASE)
if m:
print(int(m.group(1)))
else:
print("Not found!")
"""
Explanation: Without re.compile()
End of explanation
"""
num_pieces = re.compile(r'\b(\d+) pièce', re.IGNORECASE)
m = num_pieces.search(s)
if m:
print(int(m.group(1)))
else:
print("Not found!")
"""
Explanation: With re.compile()
End of explanation
"""
|
CompPhysics/MachineLearning | doc/pub/week38/ipynb/week38.ipynb | cc0-1.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(3155)
# Generate the data.
nsamples = 100
x = np.random.randn(nsamples)
y = 3*x**2 + np.random.randn(nsamples)
## Cross-validation on Ridge regression using KFold only
# Decide degree on polynomial to fit
poly = PolynomialFeatures(degree = 6)
# Decide which values of lambda to use
nlambdas = 500
lambdas = np.logspace(-3, 5, nlambdas)
# Initialize a KFold instance
k = 5
kfold = KFold(n_splits = k)
# Perform the cross-validation to estimate MSE
scores_KFold = np.zeros((nlambdas, k))
i = 0
for lmb in lambdas:
ridge = Ridge(alpha = lmb)
j = 0
for train_inds, test_inds in kfold.split(x):
xtrain = x[train_inds]
ytrain = y[train_inds]
xtest = x[test_inds]
ytest = y[test_inds]
Xtrain = poly.fit_transform(xtrain[:, np.newaxis])
ridge.fit(Xtrain, ytrain[:, np.newaxis])
Xtest = poly.fit_transform(xtest[:, np.newaxis])
ypred = ridge.predict(Xtest)
scores_KFold[i,j] = np.sum((ypred - ytest[:, np.newaxis])**2)/np.size(ypred)
j += 1
i += 1
estimated_mse_KFold = np.mean(scores_KFold, axis = 1)
## Cross-validation using cross_val_score from sklearn along with KFold
# kfold is an instance initialized above as:
# kfold = KFold(n_splits = k)
estimated_mse_sklearn = np.zeros(nlambdas)
i = 0
for lmb in lambdas:
ridge = Ridge(alpha = lmb)
X = poly.fit_transform(x[:, np.newaxis])
estimated_mse_folds = cross_val_score(ridge, X, y[:, np.newaxis], scoring='neg_mean_squared_error', cv=kfold)
# cross_val_score return an array containing the estimated negative mse for every fold.
# we have to the the mean of every array in order to get an estimate of the mse of the model
estimated_mse_sklearn[i] = np.mean(-estimated_mse_folds)
i += 1
## Plot and compare the slightly different ways to perform cross-validation
plt.figure()
plt.plot(np.log10(lambdas), estimated_mse_sklearn, label = 'cross_val_score')
plt.plot(np.log10(lambdas), estimated_mse_KFold, 'r--', label = 'KFold')
plt.xlabel('log10(lambda)')
plt.ylabel('mse')
plt.legend()
plt.show()
"""
Explanation: <!-- dom:TITLE: Data Analysis and Machine Learning: Logistic Regression -->
Data Analysis and Machine Learning: Logistic Regression
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: Oct 26, 2021
Copyright 1999-2021, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
Plans for week 38
Thursday: Summary of regression methods and discussion of project 1. Start Logistic Regression
Video of Lecture September 23
Friday: Logistic Regression and Optimization methods
Video of Lecture September 24
Thursday September 23
Ridge and LASSO Regression, reminder
Recall the expression for the standard Mean Squared Error (MSE), which we used to define our cost function, and the equations for the ordinary least squares (OLS) method; that is,
our optimization problem is
$$
{\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
or we can state it as
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2,
$$
where we have used the definition of a norm-2 vector, that is
$$
\vert\vert \boldsymbol{x}\vert\vert_2 = \sqrt{\sum_i x_i^2}.
$$
By minimizing the above equation with respect to the parameters
$\boldsymbol{\beta}$ we could then obtain an analytical expression for the
parameters $\boldsymbol{\beta}$. We can add a regularization parameter $\lambda$ by
defining a new cost function to be optimized, that is
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_2^2
$$
which leads to the Ridge regression minimization problem where we
require that $\vert\vert \boldsymbol{\beta}\vert\vert_2^2\le t$, where $t$ is
a finite number larger than zero. By defining
$$
C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1,
$$
we have a new optimization equation
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1
$$
which leads to Lasso regression. Lasso stands for least absolute shrinkage and selection operator.
Here we have defined the norm-1 as
$$
\vert\vert \boldsymbol{x}\vert\vert_1 = \sum_i \vert x_i\vert.
$$
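The Ridge minimizer has the closed form $(\boldsymbol{X}^T\boldsymbol{X}+\lambda\boldsymbol{I})^{-1}\boldsymbol{X}^T\boldsymbol{y}$ (for this parametrization of $\lambda$), which can be checked numerically on a tiny invented design matrix:

```python
import numpy as np

# Closed-form OLS vs. ridge solution on a tiny invented problem.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)                      # [1, 2]
beta_ridge = np.linalg.solve(X.T @ X + 0.1 * np.eye(2), X.T @ y)  # shrunk
print(beta_ols, beta_ridge)  # ridge coefficients are pulled toward zero
```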
<!-- !split -->
Various steps in cross-validation
When the repetitive splitting of the data set is done randomly,
samples may accidentally end up in a vast majority of the splits in
either the training or the test set. Such samples may have an unbalanced
influence on either model building or prediction evaluation. To avoid
this, $k$-fold cross-validation structures the data splitting. The
samples are divided into $k$ more or less equally sized, exhaustive and
mutually exclusive subsets. In turn (at each split) one of these
subsets plays the role of the test set while the union of the
remaining subsets constitutes the training set. Such a splitting
warrants a balanced representation of each sample in both training and
test set over the splits. Still, the division into the $k$ subsets
involves a degree of randomness. This may be fully excluded when
choosing $k=n$. This particular case is referred to as leave-one-out
cross-validation (LOOCV).
<!-- !split -->
How to set up the cross-validation for Ridge and/or Lasso
Define a range of interest for the penalty parameter.
Divide the data set into training and test set comprising samples ${1, \ldots, n} \setminus i$ and ${ i }$, respectively.
Fit the linear regression model by means of ridge estimation for each $\lambda$ in the grid using the training set, and the corresponding estimate of the error variance $\boldsymbol{\sigma}_{-i}^2(\lambda)$, as
$$
\begin{align}
\boldsymbol{\beta}_{-i}(\lambda) & = ( \boldsymbol{X}_{-i, \ast}^{T}
\boldsymbol{X}_{-i, \ast} + \lambda \boldsymbol{I}_{pp})^{-1}
\boldsymbol{X}_{-i, \ast}^{T} \boldsymbol{y}_{-i}
\end{align}
$$
Evaluate the prediction performance of these models on the test set by $C[y_i, \boldsymbol{X}_{i, \ast}; \boldsymbol{\beta}_{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]$. Or, by the prediction error $|y_i - \boldsymbol{X}_{i, \ast} \boldsymbol{\beta}_{-i}(\lambda)|$, the relative error, the squared error, or the $R^2$ score function.
Repeat the first three steps such that each sample plays the role of the test set once.
Average the prediction performances of the test sets at each grid point of the penalty bias/parameter. It is an estimate of the prediction performance of the model corresponding to this value of the penalty parameter on novel data.
Cross-validation in brief
For the various values of $k$
shuffle the dataset randomly.
Split the dataset into $k$ groups.
For each unique group:
a. Decide which group to use as set for test data
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model
Summarize the model using the sample of model evaluation scores
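The steps above can be sketched with Scikit-Learn's KFold splitter; the polynomial data and the Ridge penalty value are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
x = rng.uniform(size=50)
y = 2.0 + 0.5 * x + 3.7 * x**2 + 0.1 * rng.normal(size=50)
X = np.column_stack([x, x**2])  # simple polynomial design matrix

# Shuffle, split into k groups, fit on k-1 of them and evaluate on the held-out one
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kfold.split(X):
    model = Ridge(alpha=1e-2).fit(X[train_idx], y[train_idx])
    scores.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))

print(np.mean(scores))  # summarize with the average evaluation score
```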
Code Example for Cross-validation and $k$-fold Cross-validation
The code here uses Ridge regression with cross-validation (CV) resampling and $k$-fold CV in order to fit a specific polynomial.
End of explanation
"""
#Model training, we compute the mean value of y and X
y_train_mean = np.mean(y_train)
X_train_mean = np.mean(X_train,axis=0)
X_train = X_train - X_train_mean
y_train = y_train - y_train_mean
# Then we fit our model with the training data (some_model is a placeholder for any estimator)
trained_model = some_model.fit(X_train,y_train)
#Model prediction: we also need to transform the data set used for the prediction.
X_test = X_test - X_train_mean #Use mean from training data
y_pred = trained_model.predict(X_test)
y_pred = y_pred + y_train_mean
"""
Explanation: To think about, first part
When you are comparing your own code with for example Scikit-Learn's
library, there are some technicalities to keep in mind. The examples
here demonstrate some of these aspects with potential pitfalls.
The discussion here focuses on the role of the intercept, how we can
set up the design matrix, what scaling we should use and other topics
which tend to confuse us.
The intercept can be interpreted as the expected value of our
target/output variables when all other predictors are set to zero.
Thus, if we cannot assume that the expected outputs/targets are zero
when all predictors are zero (the columns in the design matrix), it
may be a bad idea to implement a model which penalizes the intercept.
Furthermore, in for example Ridge and Lasso regression, the default solutions
from the library Scikit-Learn (when not shrinking $\beta_0$) for the unknown parameters
$\boldsymbol{\beta}$, are derived under the assumption that both $\boldsymbol{y}$ and
$\boldsymbol{X}$ are zero centered, that is we subtract the mean values.
More thinking
If our predictors represent different scales, then it is important to
standardize the design matrix $\boldsymbol{X}$ by subtracting the mean of each
column from the corresponding column and dividing the column with its
standard deviation. Most machine learning libraries do this as a default. This means that if you compare your code with the results from a given library,
the results may differ.
The
StandardScaler
function in Scikit-Learn does this for us. For the data sets we
have been studying in our various examples, the data are in many cases
already scaled and there is no need to scale them. As a user of different machine learning algorithms, you should always perform a
survey of your data, with a critical assessment of whether they need to be scaled.
If the data do need scaling, omitting it will give an unfair
penalization of the parameters, since their magnitude depends on the
scale of the corresponding predictor.
Suppose as an example that
you have an input variable given by the heights of different persons.
Human height might be measured in inches or meters or
kilometers. If measured in kilometers, a standard linear regression
model with this predictor would probably give a much bigger
coefficient term, than if measured in millimeters.
This can clearly lead to problems in evaluating the cost/loss functions.
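A minimal sketch of such standardization with Scikit-Learn's StandardScaler; the height values below are made up:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# The same heights recorded in meters and in millimeters
X_train = np.array([[1.70, 1700.0],
                    [1.80, 1800.0],
                    [1.65, 1650.0]])
X_test = np.array([[1.75, 1750.0]])

scaler = StandardScaler().fit(X_train)      # learn mean and std of each column
X_train_scaled = scaler.transform(X_train)  # zero-mean, unit-variance columns
X_test_scaled = scaler.transform(X_test)    # same transformation, training statistics

print(X_train_scaled.mean(axis=0))  # close to [0, 0]
print(X_train_scaled.std(axis=0))   # close to [1, 1]
```

After scaling, both columns live on the same scale, so a penalty term treats them evenly.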
Still thinking
Keep in mind that when you transform your data set before training a model, the same transformation needs to be done
on any new data set before making a prediction. If we translate this into Python code, it could be implemented as follows
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
np.random.seed(2021)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
def fit_beta(X, y):
return np.linalg.pinv(X.T @ X) @ X.T @ y
true_beta = [2, 0.5, 3.7]
x = np.linspace(0, 1, 11)
y = np.sum(
np.asarray([x ** p * b for p, b in enumerate(true_beta)]), axis=0
) + 0.1 * np.random.normal(size=len(x))
degree = 3
X = np.zeros((len(x), degree))
# Include the intercept in the design matrix
for p in range(degree):
X[:, p] = x ** p
beta = fit_beta(X, y)
# Intercept is included in the design matrix
skl = LinearRegression(fit_intercept=False).fit(X, y)
print(f"True beta: {true_beta}")
print(f"Fitted beta: {beta}")
print(f"Sklearn fitted beta: {skl.coef_}")
ypredictOwn = X @ beta
ypredictSKL = skl.predict(X)
print(f"MSE with intercept column")
print(MSE(y,ypredictOwn))
print(f"MSE with intercept column from SKL")
print(MSE(y,ypredictSKL))
plt.figure()
plt.scatter(x, y, label="Data")
plt.plot(x, X @ beta, label="Fit")
plt.plot(x, skl.predict(X), label="Sklearn (fit_intercept=False)")
# Do not include the intercept in the design matrix
X = np.zeros((len(x), degree - 1))
for p in range(degree - 1):
X[:, p] = x ** (p + 1)
# Intercept is not included in the design matrix
skl = LinearRegression(fit_intercept=True).fit(X, y)
# Use centered values for X and y when computing coefficients
y_offset = np.average(y, axis=0)
X_offset = np.average(X, axis=0)
beta = fit_beta(X - X_offset, y - y_offset)
intercept = np.mean(y_offset - X_offset @ beta)
print(f"Manual intercept: {intercept}")
print(f"Fitted beta (without intercept): {beta}")
print(f"Sklearn intercept: {skl.intercept_}")
print(f"Sklearn fitted beta (without intercept): {skl.coef_}")
ypredictOwn = X @ beta
ypredictSKL = skl.predict(X)
print(f"MSE with Manual intercept")
print(MSE(y,ypredictOwn+intercept))
print(f"MSE with Sklearn intercept")
print(MSE(y,ypredictSKL))
plt.plot(x, X @ beta + intercept, "--", label="Fit (manual intercept)")
plt.plot(x, skl.predict(X), "--", label="Sklearn (fit_intercept=True)")
plt.grid()
plt.legend()
plt.show()
"""
Explanation: What does centering (subtracting the mean values) mean mathematically?
Let us try to understand what this may imply mathematically when we
subtract the mean values, also known as zero centering. For
simplicity, we will focus on ordinary regression, as done in the above example.
The cost/loss function for regression is
$$
C(\beta_0, \beta_1, ... , \beta_{p-1}) = \frac{1}{n}\sum_{i=0}^{n-1} \left(y_i - \beta_0 - \sum_{j=1}^{p-1} X_{ij}\beta_j\right)^2.
$$
Recall also that we use the squared value since this leads to an increase of the penalty for higher differences between predicted and output/target values.
What we have done is to single out the $\beta_0$ term in the definition of the mean squared error (MSE).
The design matrix
$X$ does in this case not contain any intercept column.
When we take the derivative with respect to $\beta_0$, we want the derivative to obey
$$
\frac{\partial C}{\partial \beta_j} = 0,
$$
for all $j$. For $\beta_0$ we have
$$
\frac{\partial C}{\partial \beta_0} = -\frac{2}{n}\sum_{i=0}^{n-1} \left(y_i - \beta_0 - \sum_{j=1}^{p-1} X_{ij} \beta_j\right).
$$
Multiplying away the constant $2/n$, we obtain
$$
\sum_{i=0}^{n-1} \beta_0 = \sum_{i=0}^{n-1}y_i - \sum_{i=0}^{n-1} \sum_{j=1}^{p-1} X_{ij} \beta_j.
$$
Further Manipulations
Let us first specialize to the case where we have only two parameters, $\beta_0$ and $\beta_1$.
Our result for $\beta_0$ then simplifies to
$$
n\beta_0 = \sum_{i=0}^{n-1}y_i - \sum_{i=0}^{n-1} X_{i1} \beta_1.
$$
We obtain then
$$
\beta_0 = \frac{1}{n}\sum_{i=0}^{n-1}y_i - \beta_1\frac{1}{n}\sum_{i=0}^{n-1} X_{i1}.
$$
If we define
$$
\mu_1=\frac{1}{n}\sum_{i=0}^{n-1} X_{i1},
$$
and if we define the mean value of the outputs as
$$
\mu_y=\frac{1}{n}\sum_{i=0}^{n-1}y_i,
$$
we have
$$
\beta_0 = \mu_y - \beta_1\mu_{1}.
$$
In the general case, that is we have more parameters than $\beta_0$ and $\beta_1$, we have
$$
\beta_0 = \frac{1}{n}\sum_{i=0}^{n-1}y_i - \frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=1}^{p-1} X_{ij}\beta_j.
$$
Replacing $y_i$ with $y_i - \overline{\boldsymbol{y}}$ and centering also our design matrix results in a cost function (in vector-matrix disguise)
$$
C(\boldsymbol{\beta}) = (\boldsymbol{\tilde{y}} - \tilde{X}\boldsymbol{\beta})^T(\boldsymbol{\tilde{y}} - \tilde{X}\boldsymbol{\beta}).
$$
Wrapping it up
If we minimize with respect to $\boldsymbol{\beta}$ we have then
$$
\hat{\boldsymbol{\beta}} = (\tilde{X}^T\tilde{X})^{-1}\tilde{X}^T\boldsymbol{\tilde{y}},
$$
where $\boldsymbol{\tilde{y}} = \boldsymbol{y} - \overline{\boldsymbol{y}}$
and $\tilde{X}_{ij} = X_{ij} - \frac{1}{n}\sum_{k=0}^{n-1}X_{kj}$.
For Ridge regression we need to add $\lambda \boldsymbol{\beta}^T\boldsymbol{\beta}$ to the cost function and get then
$$
\hat{\boldsymbol{\beta}} = (\tilde{X}^T\tilde{X} + \lambda I)^{-1}\tilde{X}^T\boldsymbol{\tilde{y}}.
$$
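A quick numerical sanity check of these relations on a made-up data set: fitting on centered data and reconstructing the intercept as $\mu_y - \sum_j \mu_j \beta_j$ should reproduce an ordinary fit with intercept:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 1.5 + X @ np.array([0.3, -1.0, 2.0]) + 0.05 * rng.normal(size=100)

# OLS on centered data, no intercept column
X_c = X - X.mean(axis=0)
y_c = y - y.mean()
beta = np.linalg.pinv(X_c.T @ X_c) @ X_c.T @ y_c
beta0 = y.mean() - X.mean(axis=0) @ beta  # recovered intercept

# Compare with a standard fit that estimates the intercept itself
skl = LinearRegression().fit(X, y)
print(beta0, skl.intercept_)  # these agree
print(beta, skl.coef_)        # and so do the slopes
```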
What does this mean? And why do we insist on all this? Let us look at some examples.
Linear Regression code, Intercept handling first
This code shows a simple polynomial fit to a data set using the above transformed data, where we consider the role of the intercept first, by either excluding or including it (code example thanks to Øyvind Sigmundson Schøyen). Here our scaling of the data is done by subtracting the mean values only.
Note also that we do not split the data into training and test.
End of explanation
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(3155)
n = 100
x = np.random.rand(n)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)
Maxpolydegree = 20
X = np.zeros((n,Maxpolydegree))
#We explicitly include the intercept column
for degree in range(Maxpolydegree):
X[:,degree] = x**degree
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
p = Maxpolydegree
I = np.eye(p,p)
# Decide which values of lambda to use
nlambdas = 6
MSEOwnRidgePredict = np.zeros(nlambdas)
MSERidgePredict = np.zeros(nlambdas)
lambdas = np.logspace(-4, 2, nlambdas)
for i in range(nlambdas):
lmb = lambdas[i]
OwnRidgeBeta = np.linalg.pinv(X_train.T @ X_train+lmb*I) @ X_train.T @ y_train
# Note: we include the intercept column and no scaling
RegRidge = linear_model.Ridge(lmb,fit_intercept=False)
RegRidge.fit(X_train,y_train)
# and then make the prediction
ytildeOwnRidge = X_train @ OwnRidgeBeta
ypredictOwnRidge = X_test @ OwnRidgeBeta
ytildeRidge = RegRidge.predict(X_train)
ypredictRidge = RegRidge.predict(X_test)
MSEOwnRidgePredict[i] = MSE(y_test,ypredictOwnRidge)
MSERidgePredict[i] = MSE(y_test,ypredictRidge)
print("Beta values for own Ridge implementation")
print(OwnRidgeBeta)
print("Beta values for Scikit-Learn Ridge implementation")
print(RegRidge.coef_)
print("MSE values for own Ridge implementation")
print(MSEOwnRidgePredict[i])
print("MSE values for Scikit-Learn Ridge implementation")
print(MSERidgePredict[i])
# Now plot the results
plt.figure()
plt.plot(np.log10(lambdas), MSEOwnRidgePredict, 'r', label = 'MSE own Ridge Test')
plt.plot(np.log10(lambdas), MSERidgePredict, 'g', label = 'MSE Ridge Test')
plt.xlabel('log10(lambda)')
plt.ylabel('MSE')
plt.legend()
plt.show()
"""
Explanation: The intercept is the value of our output/target variable
when all our features are zero and our function crosses the $y$-axis (for a one-dimensional case).
Printing the MSE, we see first that both methods give the same MSE, as
they should. However, when we move to for example Ridge regression,
the way we treat the intercept may give a larger or smaller MSE,
meaning that the MSE can be penalized by the value of the
intercept. Not including the intercept in the fit, means that the
regularization term does not include $\beta_0$. For different values
of $\lambda$, this may lead to differing MSE values.
To remind the reader, the regularization term, with the intercept in Ridge regression is given by
$$
\lambda \vert\vert \boldsymbol{\beta} \vert\vert_2^2 = \lambda \sum_{j=0}^{p-1}\beta_j^2,
$$
but when we take out the intercept, this equation becomes
$$
\lambda \vert\vert \boldsymbol{\beta} \vert\vert_2^2 = \lambda \sum_{j=1}^{p-1}\beta_j^2.
$$
For Lasso regression we have
$$
\lambda \vert\vert \boldsymbol{\beta} \vert\vert_1 = \lambda \sum_{j=1}^{p-1}\vert\beta_j\vert.
$$
It means that when we scale the design matrix and the outputs/targets by subtracting the mean values, we have an optimization problem which is not penalized by the intercept. The MSE value can then be smaller since it focuses only on the remaining quantities. If we however bring back the intercept, we will get an MSE which then contains the intercept.
Code Examples
Armed with this wisdom, we attempt first to simply set the intercept equal to False in our implementation of Ridge regression for our well-known vanilla data set.
End of explanation
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.preprocessing import StandardScaler
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(315)
n = 100
x = np.random.rand(n)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)
Maxpolydegree = 20
X = np.zeros((n,Maxpolydegree-1))
for degree in range(1,Maxpolydegree): #No intercept column
X[:,degree-1] = x**(degree)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
#For our own implementation, we will need to deal with the intercept by centering the design matrix and the target variable
X_train_mean = np.mean(X_train,axis=0)
#Center by removing mean from each feature
X_train_scaled = X_train - X_train_mean
X_test_scaled = X_test - X_train_mean
#The model intercept (called y_scaler) is given by the mean of the target variable (IF X is centered)
#Remove the intercept from the training data.
y_scaler = np.mean(y_train)
y_train_scaled = y_train - y_scaler
p = Maxpolydegree-1
I = np.eye(p,p)
# Decide which values of lambda to use
nlambdas = 6
MSEOwnRidgePredict = np.zeros(nlambdas)
MSERidgePredict = np.zeros(nlambdas)
lambdas = np.logspace(-4, 2, nlambdas)
for i in range(nlambdas):
lmb = lambdas[i]
OwnRidgeBeta = np.linalg.pinv(X_train_scaled.T @ X_train_scaled+lmb*I) @ X_train_scaled.T @ (y_train_scaled)
intercept_ = y_scaler - X_train_mean@OwnRidgeBeta #The intercept can be shifted so the model can predict on uncentered data
#Add intercept to prediction
ypredictOwnRidge = X_test_scaled @ OwnRidgeBeta + y_scaler
RegRidge = linear_model.Ridge(lmb)
RegRidge.fit(X_train,y_train)
ypredictRidge = RegRidge.predict(X_test)
MSEOwnRidgePredict[i] = MSE(y_test,ypredictOwnRidge)
MSERidgePredict[i] = MSE(y_test,ypredictRidge)
print("Beta values for own Ridge implementation")
print(OwnRidgeBeta) #Intercept is given by mean of target variable
print("Beta values for Scikit-Learn Ridge implementation")
print(RegRidge.coef_)
print('Intercept from own implementation:')
print(intercept_)
print('Intercept from Scikit-Learn Ridge implementation')
print(RegRidge.intercept_)
print("MSE values for own Ridge implementation")
print(MSEOwnRidgePredict[i])
print("MSE values for Scikit-Learn Ridge implementation")
print(MSERidgePredict[i])
# Now plot the results
plt.figure()
plt.plot(np.log10(lambdas), MSEOwnRidgePredict, 'b--', label = 'MSE own Ridge Test')
plt.plot(np.log10(lambdas), MSERidgePredict, 'g--', label = 'MSE SL Ridge Test')
plt.xlabel('log10(lambda)')
plt.ylabel('MSE')
plt.legend()
plt.show()
"""
Explanation: The results here agree very well when we force Scikit-Learn's Ridge function to include the first
column in our design matrix. Here we have thus explicitly included the intercept column in the design matrix.
What happens if we do not include the intercept in our fit?
Let us see how we can change this code by zero centering (thanks to Stian Bilek for the input here).
Taking out the mean
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
"""
Explanation: We see here, when compared to the code which includes explicitely the
intercept column, that our MSE value is actually smaller. This is
because the regularization term does not include the intercept value
$\beta_0$ in the fitting. This applies to Lasso regularization as
well. It means that our optimization is now done only with the
centered matrix and/or vector that enter the fitting procedure. Note
also that the problem with the intercept occurs mainly in these types
of polynomial fitting problems.
The next example is indeed an example where all these discussions about the role of intercept are not present.
More complicated Example: The Ising model
The one-dimensional Ising model with nearest neighbor interaction, no
external field and a constant coupling constant $J$ is given by
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
H = -J \sum_{k}^L s_k s_{k + 1},
\label{_auto1} \tag{1}
\end{equation}
$$
where $s_i \in \{-1, 1\}$ and $s_{L + 1} = s_1$. The number of spins
in the system is determined by $L$. For the one-dimensional system
there is no phase transition.
We will look at a system of $L = 40$ spins with a coupling constant of
$J = 1$. To get enough training data we will generate 10000 states
with their respective energies.
End of explanation
"""
X = np.zeros((n, L ** 2))
for i in range(n):
X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
"""
Explanation: Here we use ordinary least squares
regression to predict the energy for the nearest neighbor
one-dimensional Ising model on a ring, i.e., the endpoints wrap
around. We will use linear regression to fit a value for
the coupling constant to achieve this.
Reformulating the problem to suit regression
A more general form for the one-dimensional Ising model is
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
H = - \sum_j^L \sum_k^L s_j s_k J_{jk}.
\label{_auto2} \tag{2}
\end{equation}
$$
Here we allow for interactions beyond the nearest neighbors and a state dependent
coupling constant. This latter expression can be formulated as
a matrix-product
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
\boldsymbol{H} = \boldsymbol{X} J,
\label{_auto3} \tag{3}
\end{equation}
$$
where $X_{jk} = s_j s_k$ and $J$ is a matrix which consists of the
elements $-J_{jk}$. This form of writing the energy fits perfectly
with the form utilized in linear regression, that is
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon},
\label{_auto4} \tag{4}
\end{equation}
$$
We split the data in training and test data as discussed in the previous example
End of explanation
"""
X_train_own = np.concatenate(
(np.ones(len(X_train))[:, np.newaxis], X_train),
axis=1
)
X_test_own = np.concatenate(
(np.ones(len(X_test))[:, np.newaxis], X_test),
axis=1
)
def ols_inv(x: np.ndarray, y: np.ndarray) -> np.ndarray:
return scl.inv(x.T @ x) @ (x.T @ y)
beta = ols_inv(X_train_own, y_train)
"""
Explanation: Linear regression
In the ordinary least squares method we choose the cost function
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
C(\boldsymbol{X}, \boldsymbol{\beta})= \frac{1}{n}\left\{(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})\right\}.
\label{_auto5} \tag{5}
\end{equation}
$$
We then find the extremal point of $C$ by taking the derivative with respect to $\boldsymbol{\beta}$ as discussed above.
This yields the expression for $\boldsymbol{\beta}$ to be
$$
\boldsymbol{\beta} = (\boldsymbol{X}^T \boldsymbol{X})^{-1}\boldsymbol{X}^T \boldsymbol{y},
$$
which immediately imposes some requirements on $\boldsymbol{X}$ as there must exist
an inverse of $\boldsymbol{X}^T \boldsymbol{X}$. If the expression we are modeling contains an
intercept, i.e., a constant term, we must make sure that the
first column of $\boldsymbol{X}$ consists of $1$. We do this here
End of explanation
"""
def ols_svd(x: np.ndarray, y: np.ndarray) -> np.ndarray:
u, s, v = scl.svd(x)
return v.T @ scl.pinv(scl.diagsvd(s, u.shape[0], v.shape[0])) @ u.T @ y
beta = ols_svd(X_train_own,y_train)
"""
Explanation: Singular Value decomposition
Doing the inversion directly turns out to be a bad idea since the matrix
$\boldsymbol{X}^T\boldsymbol{X}$ is singular. An alternative approach is to use the singular
value decomposition. Using the definition of the Moore-Penrose
pseudoinverse we can write the equation for $\boldsymbol{\beta}$ as
$$
\boldsymbol{\beta} = \boldsymbol{X}^{+}\boldsymbol{y},
$$
where the pseudoinverse of $\boldsymbol{X}$ is given by
$$
\boldsymbol{X}^{+} = (\boldsymbol{X}^T\boldsymbol{X})^{-1}\boldsymbol{X}^T.
$$
Using the singular value decomposition we can decompose the matrix as $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma} \boldsymbol{V}^T$,
where $\boldsymbol{U}$ and $\boldsymbol{V}$ are orthogonal (unitary) matrices and $\boldsymbol{\Sigma}$ contains the singular values (more details below),
so that $\boldsymbol{X}^{+} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T$. This reduces the equation for
$\boldsymbol{\beta}$ to
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
\boldsymbol{\beta} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T \boldsymbol{y}.
\label{_auto6} \tag{6}
\end{equation}
$$
Note that solving this equation by actually doing the pseudoinverse
(which is what we will do) is not a good idea as this operation scales
as $\mathcal{O}(n^3)$, where $n$ is the number of elements in a
general matrix. Instead, doing $QR$-factorization and solving the
linear system as an equation would reduce this down to
$\mathcal{O}(n^2)$ operations.
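A sketch of the QR alternative using NumPy (the small random system is illustrative): factor $\boldsymbol{X}=QR$ and solve the triangular system $R\boldsymbol{\beta}=Q^T\boldsymbol{y}$ instead of forming a pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)

# Reduced QR factorization: A = Q R with R upper triangular
Q, R = np.linalg.qr(A)
beta_qr = np.linalg.solve(R, Q.T @ b)  # solve the small triangular system

# Cross-check against NumPy's least-squares solver
beta_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
print(beta_qr, beta_lstsq)  # the two solutions agree
```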
End of explanation
"""
J = beta[1:].reshape(L, L)
"""
Explanation: When extracting the $J$-matrix we need to make sure that we remove the intercept, as is done here
End of explanation
"""
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J, **cmap_args)
plt.title("OLS", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
"""
Explanation: A way of looking at the coefficients in $J$ is to plot the matrices as images.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import sklearn.linear_model as skl
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
"""
Explanation: It is interesting to note that OLS
considers both $J_{j, j + 1} = -0.5$ and $J_{j, j - 1} = -0.5$ as
valid matrix elements for $J$.
In our discussion below on hyperparameters and Ridge and Lasso regression we will see that
this problem can be removed, though only partly, and only with Lasso regression.
In this case our matrix inversion was actually possible. The obvious question now is what is the mathematics behind the SVD?
The one-dimensional Ising model
Let us bring back the Ising model again, but now with an additional
focus on Ridge and Lasso regression as well. We repeat some of the
basic parts of the Ising model and the setup of the training and test
data. The one-dimensional Ising model with nearest neighbor
interaction, no external field and a constant coupling constant $J$ is
given by
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
H = -J \sum_{k}^L s_k s_{k + 1},
\label{_auto7} \tag{7}
\end{equation}
$$
where $s_i \in \{-1, 1\}$ and $s_{L + 1} = s_1$. The number of spins in the system is determined by $L$. For the one-dimensional system there is no phase transition.
We will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. To get enough training data we will generate 10000 states with their respective energies.
End of explanation
"""
X = np.zeros((n, L ** 2))
for i in range(n):
X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.96)
X_train_own = np.concatenate(
(np.ones(len(X_train))[:, np.newaxis], X_train),
axis=1
)
X_test_own = np.concatenate(
(np.ones(len(X_test))[:, np.newaxis], X_test),
axis=1
)
"""
Explanation: A more general form for the one-dimensional Ising model is
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
H = - \sum_j^L \sum_k^L s_j s_k J_{jk}.
\label{_auto8} \tag{8}
\end{equation}
$$
Here we allow for interactions beyond the nearest neighbors and a more
adaptive coupling matrix. This latter expression can be formulated as
a matrix-product on the form
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
H = X J,
\label{_auto9} \tag{9}
\end{equation}
$$
where $X_{jk} = s_j s_k$ and $J$ is the matrix consisting of the
elements $-J_{jk}$. This form of writing the energy fits perfectly
with the form utilized in linear regression, viz.
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}.
\label{_auto10} \tag{10}
\end{equation}
$$
We organize the data as we did above
End of explanation
"""
clf = skl.LinearRegression().fit(X_train, y_train)
"""
Explanation: We will do all fitting with Scikit-Learn,
End of explanation
"""
J_sk = clf.coef_.reshape(L, L)
"""
Explanation: When extracting the $J$-matrix we make sure to remove the intercept
End of explanation
"""
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_sk, **cmap_args)
plt.title("LinearRegression from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
"""
Explanation: And then we plot the results
End of explanation
"""
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X_train, y_train)
J_ridge_sk = clf_ridge.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_ridge_sk, **cmap_args)
plt.title("Ridge from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
"""
Explanation: The results agree perfectly with our previous discussion where we used our own code.
Ridge regression
Having explored the ordinary least squares we move on to ridge
regression. In ridge regression we include a regularizer. This
involves a new cost function which leads to a new estimate for the
weights $\boldsymbol{\beta}$. This results in a penalized regression problem. The
cost function is given by
<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>
$$
\begin{equation}
C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \boldsymbol{\beta}^T\boldsymbol{\beta}.
\label{_auto11} \tag{11}
\end{equation}
$$
End of explanation
"""
clf_lasso = skl.Lasso(alpha=_lambda).fit(X_train, y_train)
J_lasso_sk = clf_lasso.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_lasso_sk, **cmap_args)
plt.title("Lasso from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
"""
Explanation: LASSO regression
In the Least Absolute Shrinkage and Selection Operator (LASSO)-method we get a third cost function.
<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>
$$
\begin{equation}
C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \vert\vert \boldsymbol{\beta} \vert\vert_1.
\label{_auto12} \tag{12}
\end{equation}
$$
Finding the extremal point of this cost function is not as straightforward as for least squares and ridge regression. We will therefore rely solely on the function Lasso from Scikit-Learn.
End of explanation
"""
lambdas = np.logspace(-4, 5, 10)
train_errors = {
"ols_sk": np.zeros(lambdas.size),
"ridge_sk": np.zeros(lambdas.size),
"lasso_sk": np.zeros(lambdas.size)
}
test_errors = {
"ols_sk": np.zeros(lambdas.size),
"ridge_sk": np.zeros(lambdas.size),
"lasso_sk": np.zeros(lambdas.size)
}
plot_counter = 1
fig = plt.figure(figsize=(32, 54))
for i, _lambda in enumerate(tqdm.tqdm(lambdas)):
for key, method in zip(
["ols_sk", "ridge_sk", "lasso_sk"],
[skl.LinearRegression(), skl.Ridge(alpha=_lambda), skl.Lasso(alpha=_lambda)]
):
method = method.fit(X_train, y_train)
train_errors[key][i] = method.score(X_train, y_train)
test_errors[key][i] = method.score(X_test, y_test)
omega = method.coef_.reshape(L, L)
plt.subplot(10, 5, plot_counter)
plt.imshow(omega, **cmap_args)
plt.title(r"%s, $\lambda = %.4f$" % (key, _lambda))
plot_counter += 1
plt.show()
"""
Explanation: It is quite striking how LASSO breaks the symmetry of the coupling
constant as opposed to ridge and OLS. We get a sparse solution with
$J_{j, j + 1} = -1$.
Performance as function of the regularization parameter
We see how the different models perform for a different set of values for $\lambda$.
End of explanation
"""
fig = plt.figure(figsize=(20, 14))
colors = {
"ols_sk": "r",
"ridge_sk": "y",
"lasso_sk": "c"
}
for key in train_errors:
plt.semilogx(
lambdas,
train_errors[key],
colors[key],
label="Train {0}".format(key),
linewidth=4.0
)
for key in test_errors:
plt.semilogx(
lambdas,
test_errors[key],
colors[key] + "--",
label="Test {0}".format(key),
linewidth=4.0
)
plt.legend(loc="best", fontsize=18)
plt.xlabel(r"$\lambda$", fontsize=18)
plt.ylabel(r"$R^2$", fontsize=18)
plt.tick_params(labelsize=18)
plt.show()
"""
Explanation: We see that LASSO reaches a good solution for low
values of $\lambda$, but will "wither" when we increase $\lambda$ too
much. Ridge is more stable over a larger range of values for
$\lambda$, but eventually also fades away.
Finding the optimal value of $\lambda$
To determine which value of $\lambda$ is best we plot the accuracy of
the models when predicting the training and the testing set. We expect
the accuracy of the training set to be quite good, but if the accuracy
of the testing set is much lower this tells us that we might be
subject to an overfit model. The ideal scenario is an accuracy on the
testing set that is close to the accuracy of the training set.
End of explanation
"""
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from sklearn.metrics import mean_squared_error
from IPython.display import display
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("chddata.csv"),'r')
# Read the chd data as csv file and organize the data into arrays with age group, age, and chd
chd = pd.read_csv(infile, names=('ID', 'Age', 'Agegroup', 'CHD'))
chd.columns = ['ID', 'Age', 'Agegroup', 'CHD']
output = chd['CHD']
age = chd['Age']
agegroup = chd['Agegroup']
numberID = chd['ID']
display(chd)
plt.scatter(age, output, marker='o')
plt.axis([18,70.0,-0.1, 1.2])
plt.xlabel(r'Age')
plt.ylabel(r'CHD')
plt.title(r'Age distribution and Coronary heart disease')
plt.show()
"""
Explanation: From the above figure we can see that LASSO with $\lambda = 10^{-2}$
achieves a very good accuracy on the test set. This by far surpasses the
other models for all values of $\lambda$.
<!-- !split -->
Logistic Regression
In linear regression our main interest was centered on learning the
coefficients of a functional fit (say a polynomial) in order to be
able to predict the response of a continuous variable on some unseen
data. The fit to the continuous variable $y_i$ is based on some
independent variables $\boldsymbol{x}_i$. Linear regression resulted in
analytical expressions for standard ordinary Least Squares or Ridge
regression (in terms of matrices to invert) for several quantities,
ranging from the variance and thereby the confidence intervals of the
parameters $\boldsymbol{\beta}$ to the mean squared error. If we can invert
the product of the design matrices, linear regression gives then a
simple recipe for fitting our data.
<!-- !split -->
Classification problems
Classification problems, however, are concerned with outcomes taking
the form of discrete variables (i.e. categories). We may for example,
on the basis of DNA sequencing for a number of patients, like to find
out which mutations are important for a certain disease; or based on
scans of various patients' brains, figure out if there is a tumor or
not; or given a specific physical system, we'd like to identify its
state, say whether it is an ordered or disordered system (typical
situation in solid state physics); or classify the status of a
patient, whether she/he has a stroke or not and many other similar
situations.
The most common situation we encounter when we apply logistic
regression is that of two possible outcomes, normally denoted as a
binary outcome, true or false, positive or negative, success or
failure etc.
Optimization and Deep learning
Logistic regression will also serve as our stepping stone towards
neural network algorithms and supervised deep learning. For logistic
learning, the minimization of the cost function leads to a non-linear
equation in the parameters $\boldsymbol{\beta}$. The optimization of the
problem therefore calls for minimization algorithms. This forms the
bottleneck of all machine learning algorithms, namely how to find
reliable minima of a multi-variable function. This leads us to the
family of gradient descent methods. The latter are the workhorses
of basically all modern machine learning algorithms.
We note also that many of the topics discussed here on logistic
regression are also commonly used in modern supervised Deep Learning
models, as we will see later.
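As a warm-up for the gradient descent methods alluded to above, here is a minimal sketch on a simple one-dimensional convex function; the function $f(\beta)=(\beta-3)^2$ and the step size are illustrative choices, not part of the logistic regression model itself.

```python
# Minimal gradient-descent sketch on a simple convex function,
# f(beta) = (beta - 3)^2, whose minimum lies at beta = 3.
def gradient(beta):
    return 2.0 * (beta - 3.0)

beta = 0.0   # starting guess
eta = 0.1    # learning rate (illustrative choice)
for _ in range(200):
    beta = beta - eta * gradient(beta)

print(beta)  # approaches 3
```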
<!-- !split -->
Basics
We consider the case where the dependent variables, also called the
responses or the outcomes, $y_i$ are discrete and only take values
from $k=0,\dots,K-1$ (i.e. $K$ classes).
The goal is to predict the
output classes from the design matrix $\boldsymbol{X}\in\mathbb{R}^{n\times p}$
made of $n$ samples, each of which carries $p$ features or predictors. The
primary goal is to identify the classes to which new unseen samples
belong.
Let us specialize to the case of two classes only, with outputs
$y_i=0$ and $y_i=1$. Our outcomes could represent the status of a
credit card user that could default or not on her/his credit card
debt. That is
$$
y_i = \begin{bmatrix} 0 & \mathrm{no}\\ 1 & \mathrm{yes} \end{bmatrix}.
$$
Linear classifier
Before moving to the logistic model, let us try to use our linear
regression model to classify these two outcomes. We could for example
fit a linear model to the default case if $y_i > 0.5$ and the no
default case $y_i \leq 0.5$.
We would then have our
weighted linear combination, namely
<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>
$$
\begin{equation}
\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{\beta} + \boldsymbol{\epsilon},
\label{_auto13} \tag{13}
\end{equation}
$$
where $\boldsymbol{y}$ is a vector representing the possible outcomes, $\boldsymbol{X}$ is our
$n\times p$ design matrix and $\boldsymbol{\beta}$ represents our estimators/predictors.
Some selected properties
The main problem with our function is that it takes values on the
entire real axis. In the case of logistic regression, however, the
labels $y_i$ are discrete variables. A typical example is the credit
card data discussed below, where we can set the state of
defaulting on the debt to $y_i=1$ and not defaulting to $y_i=0$ for one
of the persons in the data set (see the full example below).
One simple way to get a discrete output is to use a sign
function that maps the output of a linear regressor to the values $\{0,1\}$:
$f(s_i)=\mathrm{sign}(s_i)=1$ if $s_i\ge 0$ and $0$ otherwise.
We will encounter this model in our first demonstration of neural networks. Historically it is called the "perceptron" model in the machine learning
literature. This model is extremely simple. However, in many cases it is more
favorable to use a "soft" classifier that outputs
the probability of a given category. This leads us to the logistic function.
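A minimal sketch contrasting the two kinds of classifier (the function names are illustrative):

```python
import numpy as np

def hard_classifier(s):
    # "Perceptron"-style hard classifier: assigns 1 if s >= 0, else 0.
    return np.where(s >= 0, 1, 0)

def soft_classifier(s):
    # Logistic ("soft") classifier: returns the probability of class 1.
    return 1.0 / (1.0 + np.exp(-s))

s = np.array([-2.0, 0.0, 2.0])
print(hard_classifier(s))  # [0 1 1]
print(soft_classifier(s))  # probabilities strictly between 0 and 1
```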
Simple example
The following example on data for coronary heart disease (CHD) as a function of age may serve as an illustration. In the code here we read and plot whether a person has had CHD (output = 1) or not (output = 0) against the person's age. Clearly, the figure shows that attempting a standard linear regression fit may not be very meaningful.
End of explanation
"""
agegroupmean = np.array([0.1, 0.133, 0.250, 0.333, 0.462, 0.625, 0.765, 0.800])
group = np.array([1, 2, 3, 4, 5, 6, 7, 8])
plt.plot(group, agegroupmean, "r-")
plt.axis([0,9,0, 1.0])
plt.xlabel(r'Age group')
plt.ylabel(r'CHD mean values')
plt.title(r'Mean values for each age group')
plt.show()
"""
Explanation: Plotting the mean value for each group
What we could attempt however is to plot the mean value for each group.
End of explanation
"""
"""The sigmoid function (or the logistic curve) is a
function that takes any real number, z, and outputs a number (0,1).
It is useful in neural networks for assigning weights on a relative scale.
The value z is the weighted sum of parameters involved in the learning algorithm."""
import numpy
import matplotlib.pyplot as plt
import math as mt
z = numpy.arange(-5, 5, .1)
sigma_fn = numpy.vectorize(lambda z: 1/(1+numpy.exp(-z)))
sigma = sigma_fn(z)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z, sigma)
ax.set_ylim([-0.1, 1.1])
ax.set_xlim([-5,5])
ax.grid(True)
ax.set_xlabel('z')
ax.set_title('sigmoid function')
plt.show()
"""Step Function"""
z = numpy.arange(-5, 5, .02)
step_fn = numpy.vectorize(lambda z: 1.0 if z >= 0.0 else 0.0)
step = step_fn(z)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z, step)
ax.set_ylim([-0.5, 1.5])
ax.set_xlim([-5,5])
ax.grid(True)
ax.set_xlabel('z')
ax.set_title('step function')
plt.show()
"""tanh Function"""
z = numpy.arange(-2*mt.pi, 2*mt.pi, 0.1)
t = numpy.tanh(z)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z, t)
ax.set_ylim([-1.0, 1.0])
ax.set_xlim([-2*mt.pi,2*mt.pi])
ax.grid(True)
ax.set_xlabel('z')
ax.set_title('tanh function')
plt.show()
"""
Explanation: We are now trying to find a function $f(y\vert x)$, that is a function which gives us an expected value for the output $y$ with a given input $x$.
In standard linear regression with a linear dependence on $x$, we would write this in terms of our model
$$
f(y_i\vert x_i)=\beta_0+\beta_1 x_i.
$$
This expression implies however that $f(y_i\vert x_i)$ could take any
value from minus infinity to plus infinity. If we however let
$f(y\vert x)$ be represented by the mean value, the above example
shows us that we can constrain the function to take values between
zero and one, that is we have $0 \le f(y_i\vert x_i) \le 1$. Looking
at our last curve we see also that it has an S-shaped form. This leads
us to a very popular model for the function $f$, namely the so-called
Sigmoid function or logistic model. We will consider this function as
representing the probability for finding a value of $y_i$ with a given
$x_i$.
The logistic function
Another widely studied model is the so-called
perceptron model, which is an example of a "hard classification" model. We
will encounter this model when we discuss neural networks as
well. Each datapoint is deterministically assigned to a category (i.e.
$y_i=0$ or $y_i=1$). In many cases, and the coronary heart disease data forms one of many such examples, it is favorable to have a "soft"
classifier that outputs the probability of a given category rather
than a single value. For example, given $x_i$, the classifier
outputs the probability of being in a category $k$. Logistic regression
is the most common example of a so-called soft classifier. In logistic
regression, the probability that a data point $x_i$
belongs to a category $y_i\in\{0,1\}$ is given by the so-called logit function (or Sigmoid), which is meant to represent the likelihood of a given event,
$$
p(t) = \frac{1}{1+\exp{(-t)}}=\frac{\exp{(t)}}{1+\exp{(t)}}.
$$
Note that $1-p(t)= p(-t)$.
Examples of likelihood functions used in logistic regression and neural networks
The following code plots the logistic function, the step function and other functions we will encounter from here on.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
# Load the data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)
print(X_train.shape)
print(X_test.shape)
# Logistic Regression
logreg = LogisticRegression(solver='lbfgs')
logreg.fit(X_train, y_train)
print("Test set accuracy with Logistic Regression: {:.2f}".format(logreg.score(X_test,y_test)))
#now scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Logistic Regression
logreg.fit(X_train_scaled, y_train)
print("Test set accuracy Logistic Regression with scaled data: {:.2f}".format(logreg.score(X_test_scaled,y_test)))
"""
Explanation: Two parameters
We assume now that we have two classes with $y_i$ either $0$ or $1$. Furthermore we assume also that we have only two parameters $\beta$ in our fitting of the Sigmoid function, that is we define probabilities
$$
\begin{align}
p(y_i=1|x_i,\boldsymbol{\beta}) &= \frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}},\nonumber\\
p(y_i=0|x_i,\boldsymbol{\beta}) &= 1 - p(y_i=1|x_i,\boldsymbol{\beta}),
\end{align}
$$
where $\boldsymbol{\beta}$ are the weights we wish to extract from data, in our case $\beta_0$ and $\beta_1$.
Note that we used
$$
p(y_i=0\vert x_i, \boldsymbol{\beta}) = 1-p(y_i=1\vert x_i, \boldsymbol{\beta}).
$$
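A small sketch of these two probabilities for the two-parameter model (the parameter values are illustrative):

```python
import numpy as np

# p(y=1 | x, beta) for the two-parameter model.
def p1(x, beta0, beta1):
    t = beta0 + beta1 * x
    return np.exp(t) / (1.0 + np.exp(t))

x = np.array([-1.0, 0.0, 1.0])
prob1 = p1(x, 0.0, 2.0)   # illustrative beta_0 = 0, beta_1 = 2
prob0 = 1.0 - prob1       # p(y=0 | x, beta) is the complement
print(prob1, prob0)
```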
<!-- !split -->
Maximum likelihood
In order to define the total likelihood for all possible outcomes from a
dataset $\mathcal{D}=\{(y_i,x_i)\}$, with the binary labels
$y_i\in\{0,1\}$ and where the data points are drawn independently, we use the so-called Maximum Likelihood Estimation (MLE) principle.
We aim thus at maximizing
the probability of seeing the observed data. We can then approximate the
likelihood in terms of the product of the individual probabilities of a specific outcome $y_i$, that is
$$
\begin{align}
P(\mathcal{D}|\boldsymbol{\beta})& = \prod_{i=1}^n \left[p(y_i=1|x_i,\boldsymbol{\beta})\right]^{y_i}\left[1-p(y_i=1|x_i,\boldsymbol{\beta})\right]^{1-y_i},\nonumber
\end{align}
$$
from which we obtain the log-likelihood and our cost/loss function
$$
\mathcal{C}(\boldsymbol{\beta}) = \sum_{i=1}^n \left( y_i\log{p(y_i=1|x_i,\boldsymbol{\beta})} + (1-y_i)\log\left[1-p(y_i=1|x_i,\boldsymbol{\beta})\right]\right).
$$
The cost function rewritten
Reordering the logarithms, we can rewrite the cost/loss function as
$$
\mathcal{C}(\boldsymbol{\beta}) = \sum_{i=1}^n \left(y_i(\beta_0+\beta_1x_i) -\log{(1+\exp{(\beta_0+\beta_1x_i)})}\right).
$$
The maximum likelihood estimator is defined as the set of parameters that maximize the log-likelihood where we maximize with respect to $\beta$.
Since the cost (error) function is just the negative log-likelihood, for logistic regression we have that
$$
\mathcal{C}(\boldsymbol{\beta})=-\sum_{i=1}^n \left(y_i(\beta_0+\beta_1x_i) -\log{(1+\exp{(\beta_0+\beta_1x_i)})}\right).
$$
This equation is known in statistics as the cross entropy. Finally, we note that just as in linear regression,
in practice we often supplement the cross-entropy with additional regularization terms, usually $L_1$ and $L_2$ regularization as we did for Ridge and Lasso regression.
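The rewritten cost function can be evaluated directly. A minimal sketch (the data values are illustrative):

```python
import numpy as np

# Cross entropy for the two-parameter model, written exactly as above:
# C(beta) = -sum_i [ y_i (b0 + b1 x_i) - log(1 + exp(b0 + b1 x_i)) ].
def cross_entropy(beta0, beta1, x, y):
    t = beta0 + beta1 * x
    return -np.sum(y * t - np.log(1.0 + np.exp(t)))

x = np.array([-1.0, 0.0, 1.0])
y = np.array([0.0, 0.0, 1.0])
cost = cross_entropy(0.0, 0.0, x, y)
print(cost)  # equals 3 log 2 when both parameters are zero
```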
Minimizing the cross entropy
The cross entropy is a convex function of the weights $\boldsymbol{\beta}$ and,
therefore, any local minimizer is a global minimizer.
Minimizing this
cost function with respect to the two parameters $\beta_0$ and $\beta_1$ we obtain
$$
\frac{\partial \mathcal{C}(\boldsymbol{\beta})}{\partial \beta_0} = -\sum_{i=1}^n \left(y_i -\frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}}\right),
$$
and
$$
\frac{\partial \mathcal{C}(\boldsymbol{\beta})}{\partial \beta_1} = -\sum_{i=1}^n \left(y_ix_i -x_i\frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}}\right).
$$
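These derivatives can be checked numerically. A sketch comparing the analytic gradient with a finite-difference approximation (the data and parameter values are illustrative):

```python
import numpy as np

def cost(b0, b1, x, y):
    t = b0 + b1 * x
    return -np.sum(y * t - np.log(1.0 + np.exp(t)))

def grads(b0, b1, x, y):
    # Analytic derivatives derived above: -sum(y - p) and -sum(x (y - p)).
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    return -np.sum(y - p), -np.sum(x * (y - p))

x = np.array([-1.0, 0.0, 1.0])
y = np.array([0.0, 0.0, 1.0])
g0, g1 = grads(0.5, -0.3, x, y)

# Central finite-difference check of the beta_0 derivative.
h = 1e-6
g0_fd = (cost(0.5 + h, -0.3, x, y) - cost(0.5 - h, -0.3, x, y)) / (2.0 * h)
print(g0, g0_fd)
```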
A more compact expression
Let us now define a vector $\boldsymbol{y}$ with $n$ elements $y_i$, an
$n\times p$ matrix $\boldsymbol{X}$ which contains the $x_i$ values and a
vector $\boldsymbol{p}$ of fitted probabilities $p(y_i\vert x_i,\boldsymbol{\beta})$. We can rewrite in a more compact form the first
derivative of cost function as
$$
\frac{\partial \mathcal{C}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = -\boldsymbol{X}^T\left(\boldsymbol{y}-\boldsymbol{p}\right).
$$
If we in addition define a diagonal matrix $\boldsymbol{W}$ with elements
$p(y_i\vert x_i,\boldsymbol{\beta})(1-p(y_i\vert x_i,\boldsymbol{\beta}))$, we can obtain a compact expression for the second derivative as
$$
\frac{\partial^2 \mathcal{C}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}\partial \boldsymbol{\beta}^T} = \boldsymbol{X}^T\boldsymbol{W}\boldsymbol{X}.
$$
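With the compact matrix expressions in hand, a Newton-Raphson update $\boldsymbol{\beta} \leftarrow \boldsymbol{\beta} - (\boldsymbol{X}^T\boldsymbol{W}\boldsymbol{X})^{-1}\nabla\mathcal{C}$ can be sketched as follows. The tiny dataset is illustrative and deliberately non-separable, so the maximum likelihood estimate is finite.

```python
import numpy as np

# One feature plus intercept; tiny non-separable dataset (illustrative).
X = np.column_stack([np.ones(5), np.array([-2.0, -1.0, 0.0, 1.0, 2.0])])
y = np.array([0.0, 1.0, 0.0, 1.0, 1.0])

beta = np.zeros(2)
for _ in range(20):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = np.diag(p * (1.0 - p))       # diagonal matrix of p(1 - p)
    grad = -X.T @ (y - p)            # first derivative, -X^T (y - p)
    hess = X.T @ W @ X               # second derivative, X^T W X
    beta = beta - np.linalg.solve(hess, grad)

print(beta)  # the gradient is essentially zero at the fitted parameters
```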
Extending to more predictors
Within a binary classification problem, we can easily expand our model to include multiple predictors. With $p$ predictors, our ratio between likelihoods becomes
$$
\log{ \frac{p(\boldsymbol{\beta}\boldsymbol{x})}{1-p(\boldsymbol{\beta}\boldsymbol{x})}} = \beta_0+\beta_1x_1+\beta_2x_2+\dots+\beta_px_p.
$$
Here we defined $\boldsymbol{x}=[1,x_1,x_2,\dots,x_p]$ and $\boldsymbol{\beta}=[\beta_0, \beta_1, \dots, \beta_p]$ leading to
$$
p(\boldsymbol{\beta}\boldsymbol{x})=\frac{ \exp{(\beta_0+\beta_1x_1+\beta_2x_2+\dots+\beta_px_p)}}{1+\exp{(\beta_0+\beta_1x_1+\beta_2x_2+\dots+\beta_px_p)}}.
$$
Including more classes
Till now we have mainly focused on two classes, the so-called binary
system. Suppose we wish to extend to $K$ classes. Let us for the sake
of simplicity assume we have only two predictors. We then have the following model
$$
\log{\frac{p(C=1\vert x)}{p(C=K\vert x)}} = \beta_{10}+\beta_{11}x_1,
$$
and
$$
\log{\frac{p(C=2\vert x)}{p(C=K\vert x)}} = \beta_{20}+\beta_{21}x_1,
$$
and so on till the class $C=K-1$ class
$$
\log{\frac{p(C=K-1\vert x)}{p(C=K\vert x)}} = \beta_{(K-1)0}+\beta_{(K-1)1}x_1,
$$
and the model is specified in terms of $K-1$ so-called log-odds or
logit transformations.
More classes
In our discussion of neural networks we will encounter the above again
in terms of a slightly modified function, the so-called Softmax function.
The softmax function is used in various multiclass classification
methods, such as multinomial logistic regression (also known as
softmax regression), multiclass linear discriminant analysis, naive
Bayes classifiers, and artificial neural networks. Specifically, in
multinomial logistic regression and linear discriminant analysis, the
input to the function is the result of $K$ distinct linear functions,
and the predicted probability for the $k$-th class given a sample
vector $\boldsymbol{x}$ and a weighting vector $\boldsymbol{\beta}$ is (with two
predictors):
$$
p(C=k\vert \mathbf {x} )=\frac{\exp{(\beta_{k0}+\beta_{k1}x_1)}}{1+\sum_{l=1}^{K-1}\exp{(\beta_{l0}+\beta_{l1}x_1)}}.
$$
It is easy to extend to more predictors. The final class is
$$
p(C=K\vert \mathbf {x} )=\frac{1}{1+\sum_{l=1}^{K-1}\exp{(\beta_{l0}+\beta_{l1}x_1)}},
$$
and they sum to one. Our earlier discussions were all specialized to
the case with two classes only. It is easy to see from the above that
what we derived earlier is compatible with these equations.
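A quick sketch verifying that these class probabilities sum to one (the $\boldsymbol{\beta}$ values are illustrative):

```python
import numpy as np

# K = 3 classes, two predictors per class (intercept + x1).
def class_probabilities(x1, beta):
    scores = np.exp(beta[:, 0] + beta[:, 1] * x1)  # K-1 linear functions
    denom = 1.0 + scores.sum()
    return np.append(scores / denom, 1.0 / denom)  # last entry is class K

beta = np.array([[0.1, 1.0],
                 [-0.2, 0.5]])    # illustrative parameter values
p = class_probabilities(0.7, beta)
print(p, p.sum())  # the probabilities sum to one
```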
To find the optimal parameters we would typically use a gradient
descent method. Newton's method and gradient descent methods are
discussed in the material on optimization
methods.
Friday September 24
Wisconsin Cancer Data
We show here how we can use a simple regression case on the breast
cancer data using Logistic regression as our algorithm for
classification.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
cancer = load_breast_cancer()
import pandas as pd
# Making a data frame
cancerpd = pd.DataFrame(cancer.data, columns=cancer.feature_names)
fig, axes = plt.subplots(15,2,figsize=(10,20))
malignant = cancer.data[cancer.target == 0]
benign = cancer.data[cancer.target == 1]
ax = axes.ravel()
for i in range(30):
_, bins = np.histogram(cancer.data[:,i], bins =50)
ax[i].hist(malignant[:,i], bins = bins, alpha = 0.5)
ax[i].hist(benign[:,i], bins = bins, alpha = 0.5)
ax[i].set_title(cancer.feature_names[i])
ax[i].set_yticks(())
ax[0].set_xlabel("Feature magnitude")
ax[0].set_ylabel("Frequency")
ax[0].legend(["Malignant", "Benign"], loc ="best")
fig.tight_layout()
plt.show()
import seaborn as sns
correlation_matrix = cancerpd.corr().round(1)
# use the heatmap function from seaborn to plot the correlation matrix
# annot = True to print the values inside the square
plt.figure(figsize=(15,8))
sns.heatmap(data=correlation_matrix, annot=True)
plt.show()
"""
Explanation: Using the correlation matrix
In addition to the above scores, we could also study the covariance (and the correlation matrix).
We use Pandas to compute the correlation matrix.
End of explanation
"""
cancerpd = pd.DataFrame(cancer.data, columns=cancer.feature_names)
"""
Explanation: Discussing the correlation data
In the above example we note two things. In the first plot we display
the overlap of benign and malignant tumors as functions of the various
features in the Wisconsin breast cancer data set. We see that for
some of the features we can distinguish clearly the benign and
malignant cases while for other features we cannot. This can point
us to which features may be of greater interest when we wish to
classify a tumor as benign or malignant.
In the second figure we have computed the so-called correlation
matrix, which in our case with thirty features becomes a $30\times 30$
matrix.
We constructed this matrix using pandas via the statements
End of explanation
"""
correlation_matrix = cancerpd.corr().round(1)
"""
Explanation: and then
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
# Load the data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)
print(X_train.shape)
print(X_test.shape)
# Logistic Regression
logreg = LogisticRegression(solver='lbfgs')
logreg.fit(X_train, y_train)
print("Test set accuracy with Logistic Regression: {:.2f}".format(logreg.score(X_test,y_test)))
#now scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Logistic Regression
logreg.fit(X_train_scaled, y_train)
print("Test set accuracy Logistic Regression with scaled data: {:.2f}".format(logreg.score(X_test_scaled,y_test)))
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_validate
#Cross validation
accuracy = cross_validate(logreg,X_test_scaled,y_test,cv=10)['test_score']
print(accuracy)
print("Test set accuracy with Logistic Regression and scaled data: {:.2f}".format(logreg.score(X_test_scaled,y_test)))
import scikitplot as skplt
y_pred = logreg.predict(X_test_scaled)
skplt.metrics.plot_confusion_matrix(y_test, y_pred, normalize=True)
plt.show()
y_probas = logreg.predict_proba(X_test_scaled)
skplt.metrics.plot_roc(y_test, y_probas)
plt.show()
skplt.metrics.plot_cumulative_gain(y_test, y_probas)
plt.show()
"""
Explanation: Diagonalizing this matrix we can in turn say something about which
features are of relevance and which are not. This leads us to
the classical Principal Component Analysis (PCA) theorem with
applications. This will be discussed later this semester (week 43).
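A minimal sketch of the diagonalization idea on a small synthetic dataset (not the cancer data): the eigenvalues of the correlation matrix rank the directions by explained variance.

```python
import numpy as np

# Small synthetic dataset: two strongly correlated features plus one
# independent feature (illustrative only).
rng = np.random.default_rng(1)
z = rng.normal(size=200)
data = np.column_stack([z,
                        z + 0.1 * rng.normal(size=200),
                        rng.normal(size=200)])
corr = np.corrcoef(data, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(corr)   # symmetric matrix -> eigh
order = np.argsort(eigvals)[::-1]         # largest eigenvalue first
explained = eigvals[order] / eigvals.sum()
print(explained)  # the first component dominates for correlated features
```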
Other measures in classification studies: Cancer Data again
End of explanation
"""
cmorgan/toyplot | docs/table-axes.ipynb | bsd-3-clause
import numpy
import toyplot.data
data_table = toyplot.data.read_csv("temperatures.csv")
data_table = data_table[:10]
"""
Explanation: .. _table-axes:
Table Axes
Data tables, with rows containing observations and columns containing variables or series, are arguably the cornerstone of science. Much of the functionality of Toyplot or any other plotting package can be reduced to a process of mapping data series from tables to properties like coordinates and colors. Nevertheless, much tabular information is still best understood in its "native" tabular form, and we believe that even a humble table benefits from good layout and design - which is why Toyplot supports rendering tables as data graphics, treating them as first-class objects instead of specialized markup.
To accomplish this, Toyplot provides :class:toyplot.axes.Table, which is a specialized coordinate system. Just like :ref:cartesian-axes, table axes map domain coordinates to canvas coordinates. Unlike traditional Cartesian axes, table axes map integer coordinates that increase from left-to-right and top-to-bottom to rectangular regions of the canvas called cells.
Be careful not to confuse the table axes described in this section with :ref:data-tables, which are purely a data storage mechanism. To make this distinction clear, let's start by loading some sample data into a data table:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table)
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
"""
Explanation: Now, we can use the data table to initialize a set of table axes:
End of explanation
"""
data_table["TMAX"] = data_table["TMAX"].astype("int32")
data_table["TMIN"] = data_table["TMIN"].astype("int32")
data_table["TOBS"] = data_table["TOBS"].astype("int32")
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table)
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
"""
Explanation: With surprisingly little effort, this produces a very clean, easy to read table. Note that, like regular Cartesian axes, the table axes fill the available Canvas by default, so you can adjust your canvas width and height to expand or contract the rows and columns in your table. Also, each row and column in the table receives an equal amount of the available space, unless they are individually overridden as we've done here. Of course, you're free to use all of the mechanisms outlined in :ref:canvas-layout to add multiple sets of table axes to a canvas.
When you load a CSV file using :func:toyplot.data.read_csv, the resulting table columns all contain string values. Note that the columns in the graphic are left-justified, the default for string data. Let's see what happens when we convert some of our columns to integers:
End of explanation
"""
data_table["TMAX"] = data_table["TMAX"] * 0.1
data_table["TMIN"] = data_table["TMIN"] * 0.1
data_table["TOBS"] = data_table["TOBS"] * 0.1
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table)
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
"""
Explanation: After converting the TMAX, TMIN, and TOBS columns to integers, they are right-justified within their columns, so their digits all align, making it easy to judge magnitudes. As it happens, the data in this file is stored as integers representing tenths-of-a-degree Celsius, so let's convert them to floating-point Celsius degrees and see what happens:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table)
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
table.column(3).format = toyplot.format.FloatFormatter("{:.1f}")
table.column(4).format = toyplot.format.FloatFormatter("{:.1f}")
table.column(5).format = toyplot.format.FloatFormatter("{:.1f}")
"""
Explanation: Now, all of the decimal points are properly aligned within each column, even for values without a decimal point! If you wanted to, you could switch to a fixed number of decimal points:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
"""
Explanation: Next, let's title our figure. Just like regular axes, table axes have a label property that can be set at construction time:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
table.grid.hlines[...] = "single"
table.grid.vlines[...] = "single"
table.grid.hlines[1,...] = "double"
"""
Explanation: And although we don't recommend it, you can go crazy with gridlines:
End of explanation
"""
low_index = numpy.argsort(data_table["TMIN"])[0]
high_index = numpy.argsort(data_table["TMAX"])[-1]
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
table.row(low_index).style = {"font-weight":"bold", "fill":"blue"}
table.row(high_index).style = {"font-weight":"bold", "fill":"red"}
"""
Explanation: ... for a table with $M$ rows and $N$ columns, the table.grid.hlines matrix will control the appearance of $M+1 \times N$ horizontal lines, while table.grid.vlines will control $M \times N+1$ vertical lines. Use "single" for single lines, "double" for double lines, or any value that evaluates to False to hide the lines.
Suppose you wanted to highlight the observations in the dataset with the highest high temperature and the lowest low temperature. You could do so by changing the style of the given rows:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
table.body.row(low_index).style = {"font-weight":"bold", "fill":"blue"}
table.body.row(high_index).style = {"font-weight":"bold", "fill":"red"}
"""
Explanation: Wait a second ... those colored rows are both off-by-one! The actual minimum and maximum values are in the rows immediately following the colored rows. What happened? Note that the table has an "extra" row for the column headers, so row zero in the data is actually row one in the table, making the data rows "one-based" instead of "zero-based" the way all good programmers are accustomed. We could fix the problem by offsetting the indices we calculated from the raw data, but that would be error-prone and annoying. The offset would also change if we ever changed the number of header rows (we'll see how this is done in a moment).
What we really need is a way to refer to the "header" rows and the "body" rows in the table separately, using zero-based indices. Fortunately, Toyplot does just that - we can use a pair of special accessors to target our changes to the header or the body, using coordinates that won't be affected by changes to other parts of the table:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, hrows=2, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
table.body.row(low_index).style = {"font-weight":"bold", "fill":"blue"}
table.body.row(high_index).style = {"font-weight":"bold", "fill":"red"}
"""
Explanation: Now the correct rows have been highlighted. Let's change the number of header rows to verify that the highlighting isn't affected:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, hrows=2, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
table.body.row(low_index).style = {"font-weight":"bold", "fill":"blue"}
table.body.row(high_index).style = {"font-weight":"bold", "fill":"red"}
table.header.grid.hlines[...] = "single"
table.header.grid.vlines[...] = "single"
table.header.cell(0, 0, colspan=2).merge().data = "Location"
table.header.cell(0, 3, colspan=3).merge().data = u"Temperature \u00b0C"
"""
Explanation: Sure enough, the correct rows are still highlighted, and while it isn't obvious, the header does contain a second row. Let's make it obvious with some grid lines, and provide some top-level labels of our own:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, hrows=2, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 100
table.body.row(low_index).style = {"font-weight":"bold", "fill":"blue"}
table.body.row(high_index).style = {"font-weight":"bold", "fill":"red"}
merged = table.header.cell(0, 0, colspan=2).merge()
merged.data = "Location"
merged.align = "center"
merged.style = {"font-size":"14px"}
merged = table.header.cell(0, 3, colspan=3).merge()
merged.data = u"Temperature \u00b0C"
merged.style = {"font-size":"14px"}
"""
Explanation: Note that by accessing the grid via the "header" accessor, we were able to easily set lines just for the header cells, and that we can use the data attribute to assign arbitrary cell contents, in this case to a pair of merged header cells.
Also, you may have noticed that the merged cells took on the attributes (alignment, style, etc.) of the cells that were merged, which is why the "Location" label is left-justified, while the "Temperature" label is centered. Let's center-justify the Location label, make both a little more prominent, and lose the gridlines:
End of explanation
"""
canvas = toyplot.Canvas(width=700, height=400)
table = canvas.table(data_table, columns=7, hrows=2, label="Temperature Readings")
table.column(0).width = 150
table.column(1).width = 150
table.column(2).width = 70
table.column(6).width = 80
table.body.row(low_index).style = {"font-weight":"bold", "fill":"blue"}
table.body.row(high_index).style = {"font-weight":"bold", "fill":"red"}
merged = table.header.cell(0, 0, colspan=2).merge()
merged.data = "Location"
merged.align = "center"
merged.style = {"font-size":"14px"}
merged = table.header.cell(0, 3, colspan=3).merge()
merged.data = u"Temperature \u00b0C"
merged.style = {"font-size":"14px"}
axes = table.body.column(6).merge().axes(show=False, padding=14)
axes.plot(data_table["TMIN"][::-1], along="y", marker="o", color="blue", style={"stroke-width":1.0})
axes.plot(data_table["TMAX"][::-1], along="y", marker="o", color="red", style={"stroke-width":1.0});
"""
Explanation: Finally, let's finish off our grid by plotting the minimum and maximum temperatures vertically along the right-hand side. This will provide an intuitive guide to trends in the data. To do this, we'll add an extra column to the table, merge it into a single cell, and then embed a set of axes into the cell:
End of explanation
"""
|
Unidata/unidata-python-workshop | notebooks/Metpy_Introduction/Introduction to MetPy.ipynb | mit | # Import the MetPy unit registry
from metpy.units import units
length = 10.4 * units.inches
width = 20 * units.meters
print(length, width)
"""
Explanation: <div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Introduction to MetPy</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
Overview:
Teaching: 25 minutes
Exercises: 20 minutes
Questions
What is MetPy?
How is MetPy structured?
How are units handled in MetPy?
Objectives
<a href="#whatis">What is MetPy?</a>
<a href="#units">Units and MetPy</a>
<a href="#constants">MetPy Constants</a>
<a href="#calculations">MetPy Calculations</a>
<a name="whatis"></a>
What is MetPy?
MetPy is a modern meteorological open-source toolkit for Python. It is a maintained project of Unidata to serve the academic meteorological community. MetPy consists of three major areas of functionality:
Plots
As meteorologists, we have many field specific plots that we make. Some of these, such as the Skew-T Log-p require non-standard axes and are difficult to plot in most plotting software. In MetPy we've baked in a lot of this specialized functionality to help you get your plots made and get back to doing science. We will go over making different kinds of plots during the workshop.
Calculations
Meteorology also has a common set of calculations that everyone ends up programming themselves. This is error-prone and a huge duplication of work! MetPy contains a set of well tested calculations that is continually growing in an effort to be at feature parity with other legacy packages such as GEMPAK.
File I/O
Finally, there are a number of odd file formats in the meteorological community. MetPy has incorporated a set of readers to help you deal with file formats that you may encounter during your research.
<a name="units"></a>
Units and MetPy
In order for us to discuss any of the functionality of MetPy, we first need to understand how units are inherently a part of MetPy and how to use them within this library.
Early in our scientific careers we all learn about the importance of paying attention to units in our calculations. Unit conversions can still get the best of us and have caused more than one major technical disaster, including the crash and complete loss of the $327 million Mars Climate Orbiter.
In MetPy, we use the pint library and a custom unit registry to help prevent unit mistakes in calculations. That means that every quantity you pass to MetPy should have units attached, just like if you were doing the calculation on paper! Attaching units is easy:
End of explanation
"""
area = length * width
print(area)
"""
Explanation: Don't forget that you can use tab completion to see what units are available! Just about every imaginable quantity is there, but if you find one that isn't, we're happy to talk about adding it.
While it may seem like a lot of trouble, let's compute the area of a rectangle defined by our length and width variables above. Without units attached, you'd need to remember to perform a unit conversion before multiplying or you would end up with an area in inch-meters and likely forget about it. With units attached, the units are tracked for you.
End of explanation
"""
area.to('m^2')
"""
Explanation: That's great: now we have an area, but it still isn't in a very useful unit. Units can be converted using the .to() method. While you won't see m$^2$ in the units list, we can parse complex/compound units as strings:
End of explanation
"""
# Your code goes here
"""
Explanation: Exercise
Create a variable named speed with a value of 25 knots.
Create a variable named time with a value of 1 fortnight.
Calculate how many furlongs you would travel in time at speed.
End of explanation
"""
# %load solutions/distance.py
"""
Explanation: Solution
End of explanation
"""
10 * units.degC - 5 * units.degC
"""
Explanation: Temperature
Temperature units are actually relatively tricky (more like absolutely tricky, as you'll see). Temperature is a non-multiplicative unit: it belongs to a system with a reference point. That means that not only is there a scaling factor, but also an offset. This makes the math and unit book-keeping a little more complex. Imagine adding 10 degrees Celsius to 100 degrees Celsius. Is the answer 110 degrees Celsius or 383.15 degrees Celsius (283.15 K + 373.15 K)? That's why there are delta degrees units in the unit registry for offset units. For more examples and explanation you can watch MetPy Monday #13.
Let's take a look at how this works and fails:
We would expect this to fail because we cannot add two offset units (and it does fail as an "Ambiguous operation with offset unit").
<pre style='color:#000000;background:#ffffff;'><span style='color:#008c00; '>10</span> <span style='color:#44aadd; '>*</span> units<span style='color:#808030; '>.</span>degC <span style='color:#44aadd; '>+</span> <span style='color:#008c00; '>5</span> <span style='color:#44aadd; '>*</span> units<span style='color:#808030; '>.</span>degC
</pre>
On the other hand, we can subtract two offset quantities and get a delta:
End of explanation
"""
25 * units.degC + 5 * units.delta_degF
"""
Explanation: We can add a delta to an offset unit as well:
End of explanation
"""
273 * units.kelvin + 10 * units.kelvin
273 * units.kelvin - 10 * units.kelvin
"""
Explanation: Absolute temperature scales like Kelvin and Rankine do not have an offset, and therefore can be used in addition/subtraction without the need for a delta version of the unit.
End of explanation
"""
# 12 UTC temperature
temp_initial = 20 * units.degC
temp_initial
"""
Explanation: Example
Let's say we're given a 12 UTC sounding, but want to know how the profile has changed when we have had several hours of diurnal heating. How do we update the surface temperature?
End of explanation
"""
# New 18 UTC temperature
temp_new = temp_initial + 5 * units.delta_degC
temp_new
"""
Explanation: Maybe the surface temperature has increased by 5 degrees Celsius so far today - is this a temperature of 5 degC, or a temperature change of 5 degC? We subconsciously know that it's a delta of 5 degC, but we often write it as just adding two temperatures together, when it really is: temperature + delta(temperature)
End of explanation
"""
# Your code goes here
"""
Explanation: Exercise
A cold front is moving through, decreasing the ambient temperature of 25 degC at a rate of 2.3 degF every 10 minutes. What is the temperature after 1.5 hours?
End of explanation
"""
# %load solutions/temperature_change.py
"""
Explanation: Solution
End of explanation
"""
import metpy.constants as mpconst
mpconst.earth_avg_radius
mpconst.dry_air_molecular_weight
"""
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="constants"></a>
MetPy Constants
Another common place that problems creep into scientific code is the value of constants. Can you reproduce someone else's computations from their paper? Probably not unless you know the value of all of their constants. Was the radius of the earth 6000 km, 6300 km, 6371 km, or was it actually latitude dependent?
MetPy has a set of constants that can be easily accessed and make your calculations reproducible. You can view a full table in the docs, look at the module docstring with metpy.constants? or check out what's available with tab completion.
End of explanation
"""
mpconst.Re
mpconst.Md
"""
Explanation: You may also notice in the table that most constants have a short name as well that can be used:
End of explanation
"""
import metpy.calc as mpcalc
import numpy as np
# Make some fake data for us to work with
np.random.seed(19990503) # So we all have the same data
u = np.random.randint(0, 15, 10) * units('m/s')
v = np.random.randint(0, 15, 10) * units('m/s')
print(u)
print(v)
"""
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="calculations"></a>
MetPy Calculations
MetPy also encompasses a set of calculations that are common in meteorology (with the goal of having all of the functionality of legacy software like GEMPAK, and more). The calculations documentation has a complete list of the calculations in MetPy.
We'll scratch the surface and show off a few simple calculations here, but will be using many during the workshop.
End of explanation
"""
direction = mpcalc.wind_direction(u, v)
print(direction)
"""
Explanation: Let's use the wind_direction function from MetPy to calculate wind direction from these values. Remember you can look at the docstring or the website for help.
End of explanation
"""
# Your code goes here
"""
Explanation: Exercise
Calculate the wind speed using the wind_speed function.
Print the wind speed in m/s and mph.
End of explanation
"""
# %load solutions/wind_speed.py
"""
Explanation: Solution
End of explanation
"""
mpcalc.dewpoint_rh(25 * units.degC, 75 * units.percent)
"""
Explanation: As one final demonstration, we will calculate the dewpoint given the temperature and relative humidity:
End of explanation
"""
|
eshlykov/mipt-day-after-day | statistics/hw-11/11.7-9.ipynb | unlicense | import numpy
import scipy.stats
n = 800 # Sample size
mu = numpy.array([74, 92, 83, 79, 80, 73, 77, 75, 76, 91])
expected = n * numpy.full(10, 0.1)
"""
Explanation: Problem 11.7
The digits $0, 1, 2, \ldots, 9$ among the first $800$ decimal digits of the number $\pi$ appeared $74, 92, 83, 79, 80, 73, 77, 75, 76, 91$ times, respectively. Using the chi-square test, check the hypothesis that these data agree with the uniform distribution law on the set ${0, 1, \ldots , 9}$, at significance level $0.05$. The problem may be done in Python.
End of explanation
"""
chisquare = scipy.stats.chisquare(mu, f_exp=expected, ddof=0)
print(chisquare)
"""
Explanation: The sample size is $n = 800 \geqslant 50$, the number of bins is $k = 10 \approx 9.6 = \log_2{800}$, and $np_j^0 = 800 \cdot 0.1 = 80 \geqslant 5$. Therefore the chi-square test can be applied.
End of explanation
"""
# Assume the remaining 5 teeth needed more than 4 attempts; in the sample below they are coded as 4.
sample = numpy.array([1] * 52 + [2] * 31 + [3] * 3 + [4] * 5)
print(scipy.stats.ks_2samp(sample, scipy.stats.geom(p=2/3).rvs(size=sample.size)))
"""
Explanation: Thus $0.83 > 0.05$, so the hypothesis that the distribution is uniform should not be rejected.
Problem 11.8
A professional dentist has learned to knock out wisdom teeth with his fist. It is known that he knocked out $52$ wisdom teeth on the first attempt, $31$ on the second, and $3$ on the third, while knocking out the remaining $5$ teeth required more than $4$ attempts. Check the hypothesis that the dentist knocks out any given wisdom tooth with probability $2/3$, at significance level $0.05$. The problem may be done in Python.
End of explanation
"""
size = 5000
mu = numpy.array([0, 950, 2200, 1010])
mu[0] = size - mu.sum()
print(mu)
"""
Explanation: The hypothesis is rejected, since the p-value is $< 0.05$ even in the best case.
Problem 11.9
Among 5000 families with three children there are exactly 1010 families with three boys, 2200 families with two boys and one girl, and 950 families with one boy and two girls (in all the remaining families all of the children are girls). Can we, at significance level $\alpha = 0.02$, consider that the number of boys $\xi$ in a family with three children has the following distribution: $P(\xi = 0) = \theta$, $P(\xi = 1) = \theta$, $P(\xi = 2) = 2\theta$, $P(\xi = 3) = 1 - 4\theta$, where $\theta \in (0, 1/4)$? The problem may be done in Python.
𝜃, P(𝜉 = 1) = 𝜃, P(𝜉 = 2) = 2𝜃, P(𝜉 = 3) = 1 − 4𝜃, где 𝜃 ∈ (0, 1/4)? Задачу можно выполнить в Python.
End of explanation
"""
theta = 3990 / 20000
expected = size * numpy.array([theta, theta, 2 * theta, 1 - 4 * theta])
print(expected)
chisquare = scipy.stats.chisquare(mu, f_exp=expected, ddof=1)
print(chisquare)
"""
Explanation: The likelihood is proportional to $\theta^{840} \cdot \theta^{950} \cdot (2\theta)^{2200} \cdot (1-4\theta)^{1010} = c\,\theta^{3990}(1-4\theta)^{1010}$ with $c = 2^{2200}$. Maximizing its logarithm:
$\log{c} + 3990\log{\theta} + 1010\log{(1-4\theta)}$
$\frac{3990}{\theta} - \frac{4 \cdot 1010}{1-4\theta} = 0$
$3990(1-4\theta) - 4040\theta = 0$
$20000\theta = 3990$
End of explanation
"""
|
geoneill12/phys202-2015-work | assignments/assignment04/MatplotlibEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 1
Imports
End of explanation
"""
import os
assert os.path.isfile('yearssn.dat')
"""
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
"""
data = np.loadtxt('yearssn.dat')
year = data[:, 0]
ssc = data[:,1]
data
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
"""
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named year and ssc that hold the sequence of years and sunspot counts.
End of explanation
"""
plt.figure(figsize=(20,6))
plt.plot(year, ssc)
plt.xlabel('Year')
plt.ylabel('Sunspot Count')
plt.xlim(1700, 2015)
plt.grid(True)
assert True # leave for grading
"""
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
plt.figure(figsize=(20,6))
for i, start in enumerate([1700, 1800, 1900, 2000]):
    plt.subplot(2,2,i+1)
    plt.plot(year, ssc)
    plt.xlabel('Year')
    plt.ylabel('Sunspot Count')
    plt.xlim(start, start+100)
    plt.grid(True)
assert True # leave for grading
"""
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
Since there are so many highs and lows in this graph, I made the graph very wide so that these highs and lows are spread out and easier to tell apart. I turned on the gridlines so that it would be easier to see where the values on the very right of the graph lie with respect to the y-axis. I left the box turned on to make it easy to distinguish the graph from the rest of the page, since there is so much whitespace the two tend to blend together. The blue line sets the graph line apart from the box and gridlines, but is also a nice, solid color so it is easily readable.
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
|
ahwillia/RecNetLearn | tutorials/FORCE_Learning_recurrent_feedforward.ipynb | mit | from __future__ import division
from scipy.integrate import odeint,ode
from numpy import zeros,ones,eye,tanh,dot,outer,sqrt,linspace,pi,exp,tile,arange,reshape
from numpy.random import uniform,normal,choice
import pylab as plt
import numpy as np
%matplotlib inline
"""
Explanation: Embedding a Feedforward Cascade in a Recurrent Network
Alex Williams 10/24/2015
If you are viewing a static version of this notebook (e.g. on nbviewer), you can launch an interactive session by clicking below:
There has been renewed interest in feedforward networks in both theoretical (Ganguli et al., 2008; Goldman, 2009; Murphy & Miller, 2009) and experimental (Long et al. 2010; Harvey et al. 2012) neuroscience lately. On a structural level, most neural circuits under study are highly recurrent. However, recurrent networks can still encode simple feedforward dynamics as we'll show in this notebook (also see Ganguli & Latham, 2009, for an intuitive overview).
<img src="./feedforward.png" width=450>
End of explanation
"""
## Network parameters and initial conditions
N1 = 20 # neurons in chain
N2 = 20 # neurons not in chain
N = N1+N2
tI = 10
J = normal(0,sqrt(1/N),(N,N))
x0 = uniform(-1,1,N)
tmax = 2*N1+2*tI
dt = 0.5
u = uniform(-1,1,N)
g = 1.5
## Target firing rate for neuron i and time t0
target = lambda t0,i: 2.0*exp(-(((t0%tmax)-(2*i+tI+3))**2)/(2.0*9)) - 1.0
def f1(t0,x):
## input to network at beginning of trial
if (t0%tmax) < tI: return -x + g*dot(J,tanh_x) + u
## no input after tI units of time
else: return -x + g*dot(J,tanh_x)
P = []
for i in range(N1):
# Running estimate of the inverse correlation matrix
P.append(eye(N))
lr = 1.0 # learning rate
# simulation data: state, output, time, weight updates
x,z,t,wu = [x0],[],[0],[zeros(N1).tolist()]
# Set up ode solver
solver = ode(f1)
solver.set_initial_value(x0)
# Integrate ode, update weights, repeat
while t[-1] < 25*tmax:
tanh_x = tanh(x[-1]) # cache firing rates
wu.append([])
# train rates at the beginning of the simulation
if t[-1]<22*tmax:
for i in range(N1):
error = target(t[-1],i) - tanh_x[i]
q = dot(P[i],tanh_x)
c = lr / (1 + dot(q,tanh_x))
P[i] = P[i] - c*outer(q,q)
J[i,:] += c*error*q
wu[-1].append(np.sum(np.abs(c*error*q)))
else:
# Store zero for the weight update
for i in range(N1): wu[-1].append(0)
solver.integrate(solver.t+dt)
x.append(solver.y)
t.append(solver.t)
x = np.array(x)
r = tanh(x) # firing rates
t = np.array(t)
wu = np.array(wu)
wu = reshape(wu,(len(t),N1))
pos = 2*arange(N)
offset = tile(pos[::-1],(len(t),1))
targ = np.array([target(t,i) for i in range(N1)]).T
plt.figure(figsize=(12,11))
plt.subplot(3,1,1)
plt.plot(t,targ + offset[:,:N1],'-r')
plt.plot(t,r[:,:N1] + offset[:,:N1],'-k')
plt.yticks([]),plt.xticks([]),plt.xlim([t[0],t[-1]])
plt.title('Trained subset of network (target pattern in red)')
plt.subplot(3,1,2)
plt.plot(t,r[:,N1:] + offset[:,N1:],'-k')
plt.yticks([]),plt.xticks([]),plt.xlim([t[0],t[-1]])
plt.title('Untrained subset of network')
plt.subplot(3,1,3)
plt.plot(t,wu + offset[:,:N1],'-k')
plt.yticks([]),plt.xlim([t[0],t[-1]]),plt.xlabel('time (a.u.)')
plt.title('Change in presynaptic weights for each trained neuron')
plt.show()
"""
Explanation: Methods
Consider a recurrent network initialized with random connectivity. We split the network into two groups — neurons in the first group participate in the feedforward cascade, and neurons in the second group do not. We use recursive least-squares to train the presynaptic weights for each of the neurons in the cascade. The presynaptic weights for the second group of neurons are left untrained.
The second group of neurons provides chaotic behavior that helps stabilize and time the feedforward cascade. This is necessary for the target feedforward cascade used in this example. The intrinsic dynamics of the system are too fast to match the slow timescale of the target pattern we use.
<img src="./rec-ff-net.png" width=600>
We previously applied FORCE learning to the output/readout weights of a recurrent network (see notebook here). In this case we will train a subset of the recurrent connections in the network (blue lines in the schematic above). This is described in the supplemental materials of Susillo & Abbott (2009). We start with random initial synaptic weights for all recurrent connections and random weights for the input stimulus to the network.<sup><a href="#f1b" id="f1t">[1]</a></sup> The dynamics are given by:
$$\mathbf{\dot{x}} = -\mathbf{x} + g\,J \tanh(\mathbf{x}) + \mathbf{u}(t)$$
where $\mathbf{x}$ is a vector holding the activation of all neurons, the firing rates are $\tanh(\mathbf{x})$, the matrix $J$ holds the synaptic weights of the recurrent connections, $g$ is a gain parameter (set to 1.5 in the code above), and $\mathbf{u}(t)$ is the input/stimulus, which is applied in periodic step pulses.
Each neuron participating in the feedforward cascade/sequence has a target function for its firing rate. We use a Gaussian for this example:
$$f_i(t) = 2 \exp \left [ \frac{-(t-\mu_i)^2}{18} \right ] - 1$$
where $\mu_i$ is the time of peak firing for neuron $i$. Here, $t$ is the time since the last stimulus pulse was delivered — to reiterate, we repeatedly apply the stimulus as a step pulse during training.
We apply recursive least-squares to train the pre-synaptic weights for each neuron participating in the cascade. Denote the $i$<sup>th</sup> row of $J$ as $\mathbf{j}_i$ (these are the presynaptic inputs to neuron $i$). For each neuron, we store a running estimate of the inverse correlation matrix, $P_i$, and use this to tune our update of the presynaptic weights:
$$\mathbf{q} = P_i \tanh [\mathbf{x}]$$
$$c = \frac{1}{1+ \mathbf{q}^T \tanh(\mathbf{x})}$$
$$\mathbf{j}_i \rightarrow \mathbf{j}_i + c(f_i(t)- \tanh (x_i) ) \mathbf{q}$$
$$P_{i} \rightarrow P_{i} - c \mathbf{q} \mathbf{q}^T$$
We initialize each $P_i$ to the identity matrix at the beginning of training.
Training the Network
End of explanation
"""
tstim = [80,125,170,190]
def f2(t0,x):
## input to network at beginning of trial
for ts in tstim:
if t0 > ts and t0 < ts+tI: return -x + g*dot(J,tanh(x)) + u
## no input after tI units of time
return -x + g*dot(J,tanh(x))
# Set up ode solver
solver = ode(f2)
solver.set_initial_value(x[-1,:])
x_test,t_test = [x[-1,:]],[0]
while t_test[-1] < 250:
solver.integrate(solver.t + dt)
x_test.append(solver.y)
t_test.append(solver.t)
x_test = np.array(x_test)
r_test = tanh(x_test) # firing rates
t_test = np.array(t_test)
pos = 2*arange(N)
offset = tile(pos[::-1],(len(t_test),1))
plt.figure(figsize=(10,5))
plt.plot(t_test,r_test[:,:N1] + offset[:,:N1],'-k')
plt.plot(tstim,ones(len(tstim))*80,'or',ms=8)
plt.ylim([37,82]), plt.yticks([]), plt.xlabel('time (a.u.)')
plt.title('After Training. Stimulus applied at red points.\n',fontweight='bold')
plt.show()
"""
Explanation: Test the behavior
We want the network to produce a feedforward cascade only in response to a stimulus input. Note that this doesn't always work — it is difficult for the network to perform this task. Nonetheless, the training works pretty well most of the time.<sup><a href="#f2b" id="f2t">[2]</a></sup>
End of explanation
"""
plt.matshow(J)
plt.title("Connectivity Matrix, Post-Training")
"""
Explanation: Note that when we apply two inputs in quick succession (the last two inputs) the feedforward cascade restarts.
Connectivity matrix
End of explanation
"""
|
gourie/training_RL | gym_ex_taxi.ipynb | bsd-3-clause | import gym
import numpy as np
env = gym.make("Taxi-v2")
env.reset() # init state value of env
env.observation_space.n # number of possible values in this state space
env.action_space.n # number of possible actions
# print(env.action_space)
# 0 = down
# 1 = up
# 2 = right
# 3 = left
# 4 = pickup
# 5 = drop-off
env.render()
# In this environment the yellow square represents the taxi, the (“|”) represents a wall, the blue letter represents the pick-up location, and the purple letter is the drop-off location. The taxi will turn green when it has a passenger aboard.
env.env.s = 114
env.render()
state, reward, done, info = env.step(1)
env.render()
"""
Explanation: Gym Taxi-v2 environment
Let's have a look at the problem and how the env has been set up.
End of explanation
"""
def taxiRandomSearch(env):
""" Randomly pick an action and keep guessing until the env is solved
:param env: Gym Taxi-v2 env
:return: number of steps required to solve the Gym Taxi-v2 env
"""
state = env.reset()
stepCounter = 0
reward = None
while reward != 20: # reward 20 means that the env has been solved
state, reward, done, info = env.step(env.action_space.sample())
stepCounter += 1
return stepCounter
print(taxiRandomSearch(env))
"""
Explanation: The environment is considered solved when you successfully pick up a passenger and drop them off at their desired location. Upon doing this, you will receive a reward of 20 and done will equal True.
A first naive solution
at every step, randomly choose one of the available 6 actions
A core part of evaluating any agent's performance is to compare it against a completely random agent.
End of explanation
"""
Q = np.zeros([env.observation_space.n, env.action_space.n]) # memory, stores the value (reward) for every single state and every action you can take
G = 0 # accumulated reward for each episode
alpha = 0.618 # learning rate
Q[114]
def taxiQlearning(env):
""" basic Q learning algo
:param env: Gym Taxi-v2 env
:return: None
"""
for episode in range(1,1001):
stepCounter = 0
done = False
G, reward = 0,0
state = env.reset()
while done != True:
action = np.argmax(Q[state]) # 1: find action with highest value/reward at the given state
state2, reward, done, info = env.step(action) # 2: take that 'best action' and store the future state
Q[state,action] += alpha * (reward + np.max(Q[state2]) - Q[state,action]) # 3: update the q-value using Bellman equation
G += reward
state = state2
stepCounter += 1
if episode % 50 == 0:
print('Episode {} Total Reward: {}'.format(episode,G))
print('Steps required for this episode: %i'% stepCounter)
taxiQlearning(env)
print(Q[14])
"""
Explanation: Let's build in some memory to remember actions and their associated rewards
the memory is going to be a Q action value table (using a np array of size 500x6, number of states x number of actions)
In short, the problem is solved multiple times (each time called an episode) and the Q-table (memory) is updated to improve the algorithm's efficiency and performance.
End of explanation
"""
|
darkomen/TFG | medidas/03082015/.ipynb_checkpoints/modelado-checkpoint.ipynb | cc0-1.0 | # Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import signal
# Show the versions of the libraries used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
%pylab inline
# Open the csv file with the sample data in order to filter it
err_u = 2
err_d = 1
datos_sin_filtrar = pd.read_csv('datos.csv')
datos = datos_sin_filtrar[(datos_sin_filtrar['Diametro X'] >= err_d) & (datos_sin_filtrar['Diametro Y'] >= err_d) & (datos_sin_filtrar['Diametro X'] <= err_u) & (datos_sin_filtrar['Diametro Y'] <= err_u)]
datos.describe()
# Store in a list the columns of the file we will work with
#columns = ['temperatura', 'entrada']
columns = ['Diametro X', 'RPM TRAC']
"""
Explanation: Modeling a system with IPython
Using IPython to model a system from the data obtained in a test. Open-loop test.
End of explanation
"""
# Show in several plots the information obtained from the test
th_u = 1.95
th_d = 1.55
datos[columns].plot(secondary_y=['RPM TRAC'],figsize=(10,5),title='Mathematical model of the system').hlines([th_d, th_u],0,2000,colors='r')
#datos_filtrados['RPM TRAC'].plot(secondary_y=True,style='g',figsize=(20,20)).set_ylabel=('RPM')
"""
Explanation: Representation
We plot the data as a function of time. This way, we can see the physical response of our system.
End of explanation
"""
# Find the order-4 polynomial that best fits the data
reg = np.polyfit(datos['time'],datos['Diametro X'],4)
# Compute the y values from the regression
ry = np.polyval(reg,datos['time'])
print (reg)
d = {'Diametro X' : datos['Diametro X'],
'Ajuste': ry, 'RPM TRAC' : datos['RPM TRAC']}
df = pd.DataFrame(d)
df.plot(subplots=True,figsize=(20,20))
plt.figure(figsize=(10,10))
plt.plot(datos['time'],datos['Diametro X'], label=('f(x)'))
plt.plot(datos['time'],ry,'ro', label=('regression'))
plt.legend(loc=0)
plt.grid(True)
plt.xlabel('x')
plt.ylabel('f(x)')
"""
Explanation: Computing the characteristic polynomial
We fit a polynomial regression of order 4 to find the equation that best matches the trend of our data.
End of explanation
"""
## Frequency response of the system
num = [25.9459 ,0.00015733 ,0.00000000818174]
den = [1,0,0]
tf = signal.lti(num,den)
w, mag, phase = signal.bode(tf)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6))
ax1.semilogx(w, mag) # Logarithmic x axis
ax2.semilogx(w, phase) # Logarithmic x axis
w, H = signal.freqresp(tf)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
ax1.plot(H.real, H.imag)
ax1.plot(H.real, -H.imag)
ax2.plot(tf.zeros.real, tf.zeros.imag, 'o')
ax2.plot(tf.poles.real, tf.poles.imag, 'x')
t, y = signal.step2(tf) # Unit step response
plt.plot(t, 2250 * y) # Equivalent to an input of height 2250
"""
Explanation: The characteristic polynomial of our system is:
$P_x= 25.9459 -1.5733·10^{-4}·X - 8.18174·10^{-9}·X^2$
Laplace transform
If we compute the Laplace transform of the system, we obtain the following result:
$G_s = \frac{25.95·S^2 - 0.00015733·S + 1.63635·10^{-8}}{S^3}$
End of explanation
"""
|
claudiuskerth/PhDthesis | Data_analysis/SNP-indel-calling/dadi/05_1D_model_synthesis.ipynb | mit | # load dadi module
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
import dadi
%ll
%ll dadiExercises
# import 1D spectrum of ery
fs_ery = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
# import 1D spectrum of par
fs_par = dadi.Spectrum.from_file('dadiExercises/PAR.FOLDED.sfs.dadi_format')
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Plot-the-data" data-toc-modified-id="Plot-the-data-1"><span class="toc-item-num">1 </span>Plot the data</a></div><div class="lev2 toc-item"><a href="#To-fold-or-not-to-fold-by-ANGSD" data-toc-modified-id="To-fold-or-not-to-fold-by-ANGSD-11"><span class="toc-item-num">1.1 </span>To fold or not to fold by ANGSD</a></div><div class="lev1 toc-item"><a href="#$\theta$-and-implied-$N_{ref}$" data-toc-modified-id="$\theta$-and-implied-$N_{ref}$-2"><span class="toc-item-num">2 </span>$\theta$ and implied $N_{ref}$</a></div><div class="lev2 toc-item"><a href="#standard-neutral-model" data-toc-modified-id="standard-neutral-model-21"><span class="toc-item-num">2.1 </span>standard neutral model</a></div><div class="lev2 toc-item"><a href="#reducing-noise-and-bias" data-toc-modified-id="reducing-noise-and-bias-22"><span class="toc-item-num">2.2 </span>reducing noise and bias</a></div><div class="lev1 toc-item"><a href="#exponential-growth-model" data-toc-modified-id="exponential-growth-model-3"><span class="toc-item-num">3 </span>exponential growth model</a></div><div class="lev2 toc-item"><a href="#erythropus" data-toc-modified-id="erythropus-31"><span class="toc-item-num">3.1 </span><em>erythropus</em></a></div><div class="lev2 toc-item"><a href="#parallelus" data-toc-modified-id="parallelus-32"><span class="toc-item-num">3.2 </span><em>parallelus</em></a></div><div class="lev1 toc-item"><a href="#two-epoch-model" data-toc-modified-id="two-epoch-model-4"><span class="toc-item-num">4 </span>two epoch model</a></div><div class="lev2 toc-item"><a href="#erythropus" data-toc-modified-id="erythropus-41"><span class="toc-item-num">4.1 </span><em>erythropus</em></a></div><div class="lev2 toc-item"><a href="#parallelus" data-toc-modified-id="parallelus-42"><span class="toc-item-num">4.2 </span><em>parallelus</em></a></div><div class="lev1 toc-item"><a href="#bottleneck-then-exponential-growth-model" 
data-toc-modified-id="bottleneck-then-exponential-growth-model-5"><span class="toc-item-num">5 </span>bottleneck then exponential growth model</a></div><div class="lev2 toc-item"><a href="#erythropus" data-toc-modified-id="erythropus-51"><span class="toc-item-num">5.1 </span><em>erythropus</em></a></div><div class="lev2 toc-item"><a href="#parallelus" data-toc-modified-id="parallelus-52"><span class="toc-item-num">5.2 </span><em>parallelus</em></a></div><div class="lev1 toc-item"><a href="#three-epoch-model" data-toc-modified-id="three-epoch-model-6"><span class="toc-item-num">6 </span>three epoch model</a></div><div class="lev2 toc-item"><a href="#erythropus" data-toc-modified-id="erythropus-61"><span class="toc-item-num">6.1 </span><em>erythropus</em></a></div><div class="lev2 toc-item"><a href="#parallelus" data-toc-modified-id="parallelus-62"><span class="toc-item-num">6.2 </span><em>parallelus</em></a></div><div class="lev1 toc-item"><a href="#exponential-growth-then-bottleneck-model" data-toc-modified-id="exponential-growth-then-bottleneck-model-7"><span class="toc-item-num">7 </span>exponential growth then bottleneck model</a></div><div class="lev2 toc-item"><a href="#erythropus" data-toc-modified-id="erythropus-71"><span class="toc-item-num">7.1 </span><em>erythropus</em></a></div><div class="lev2 toc-item"><a href="#parallelus" data-toc-modified-id="parallelus-72"><span class="toc-item-num">7.2 </span><em>parallelus</em></a></div><div class="lev1 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-8"><span class="toc-item-num">8 </span>Conclusion</a></div><div class="lev1 toc-item"><a href="#Can-gene-flow-explain-the-difficulty-of-fitting-1D-models?" data-toc-modified-id="Can-gene-flow-explain-the-difficulty-of-fitting-1D-models?-9"><span class="toc-item-num">9 </span>Can gene flow explain the difficulty of fitting 1D models?</a></div>
End of explanation
"""
import pylab
%matplotlib inline
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
pylab.plot(fs_par, 'go-', label='par')
pylab.plot(fs_ery, 'rs-', label='ery')
pylab.xlabel('minor allele frequency')
pylab.ylabel('SNP count')
pylab.legend()
"""
Explanation: Plot the data
End of explanation
"""
! cat ERY.unfolded.sfs
! cat PAR.unfolded.sfs
# import 1D unfolded spectrum of ery
fs_ery_unfolded = dadi.Spectrum.from_file('ERY.unfolded.sfs')
# import 1D unfolded spectrum of par
fs_par_unfolded = dadi.Spectrum.from_file('PAR.unfolded.sfs')
pylab.plot(fs_ery, 'ro-', label='folded by ANGSD')
pylab.plot(fs_ery_unfolded.fold(), 'bs-', label='folded by ' + r'$\delta$a$\delta$i')
pylab.xlabel('minor allele frequency')
pylab.ylabel('SNP count')
pylab.legend()
pylab.plot(fs_par, 'go-', label='folded by ANGSD')
pylab.plot(fs_par_unfolded.fold(), 'bs-', label='folded by ' + r'$\delta$a$\delta$i')
pylab.xlabel('minor allele frequency')
pylab.ylabel('SNP count')
pylab.legend()
"""
Explanation: To fold or not to fold by ANGSD
Does estimating an unfolded spectrum with ANGSD and then folding yield a sensible folded SFS when the sites are not polarised with respect to an ancestral allele but with respect to the reference allele? Matteo Fumagalli thinks that this is sensible.
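To make the folding operation concrete, here is a minimal numpy sketch of what folding an unfolded SFS means. This mirrors the idea behind dadi's Spectrum.fold(), but it is my own simplified version and ignores dadi's masking machinery:

```python
import numpy as np

def fold_sfs(fs):
    """Fold an unfolded SFS (length n+1) onto minor-allele-frequency classes.

    Entry i of the folded spectrum is fs[i] + fs[n-i]; the central class
    (i == n - i, only present for even n) is not double-counted. Classes
    above n/2 are meaningless after folding and are zeroed.
    """
    fs = np.asarray(fs, dtype=float)
    n = len(fs) - 1  # number of sampled chromosomes
    folded = fs + fs[::-1]
    if n % 2 == 0:
        folded[n // 2] = fs[n // 2]  # do not double the central class
    folded[(n // 2) + 1:] = 0.0
    return folded

# toy example: n = 4 chromosomes, unfolded counts for derived-allele counts 0..4
unfolded = [100.0, 10.0, 5.0, 2.0, 1.0]
print(fold_sfs(unfolded))  # folded counts 101, 12, 5; classes above n/2 zeroed
```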
End of explanation
"""
fs_ery = fs_ery_unfolded.fold()
fs_par = fs_par_unfolded.fold()
"""
Explanation: For parallelus, the spectrum folded by dadi looks better than the spectrum folded by ANGSD. I am therefore going to use spectra folded in dadi.
End of explanation
"""
# create link to built-in model function
func = dadi.Demographics1D.snm
# make the extrapolating version of the demographic model function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# setting the smallest grid size slightly larger than the largest population sample size
pts_l = [40, 50, 60]
ns = fs_ery.sample_sizes
ns
# calculate unfolded AFS under standard neutral model (up to a scaling factor theta)
neutral_model = func_ex(0, ns, pts_l)
neutral_model
theta_ery = dadi.Inference.optimal_sfs_scaling(neutral_model, fs_ery)
theta_ery
theta_par = dadi.Inference.optimal_sfs_scaling(neutral_model, fs_par)
theta_par
"""
Explanation: $\theta$ and implied $N_{ref}$
standard neutral model
End of explanation
"""
mu = 3e-9
L = fs_ery.data.sum() # this sums over all entries in the spectrum, including masked ones, i. e. also contains invariable sites
print "The total sequence length for the ery spectrum is {0:,}.".format(int(L))
N_ref_ery = theta_ery/L/mu/4
print "The effective ancestral population size of ery (in number of diploid individuals) implied by this theta is: {0:,}.".format(int(N_ref_ery))
mu = 3e-9
L = fs_par.data.sum() # this sums over all entries in the spectrum, including masked ones, i. e. also contains invariable sites
print "The total sequence length for the par spectrum is {0:,}.".format(int(L))

N_ref_par = theta_par/L/mu/4
print "The effective ancestral population size of par (in number of diploid individuals) implied by this theta is: {0:,}.".format(int(N_ref_par))
"""
Explanation: What effective ancestral population size would that imply?
According to section 5.2 in the dadi manual:
$$
\theta = 4 N_{ref} \mu_{L} \qquad \text{L: sequence length}
$$
Then
$$
\mu_{L} = \mu_{site} \times L
$$
So
$$
\theta = 4 N_{ref} \mu_{site} \times L
$$
and
$$
N_{ref} = \frac{\theta}{4 \mu_{site} L}
$$
Let's assume the mutation rate per nucleotide site per generation is $3\times 10^{-9}$ (see e. g. Liu2017).
End of explanation
"""
# compare neutral model prediction with ery spectrum
dadi.Plotting.plot_1d_comp_multinom(neutral_model.fold()[:19], fs_ery[:19], residual='linear')
# compare neutral model prediction with par spectrum
dadi.Plotting.plot_1d_comp_multinom(neutral_model.fold()[:19], fs_par[:19], residual='linear')
"""
Explanation: These effective population sizes are consistent with those reported for other insect species (Lynch2016, fig. 3b).
In dadi, times are given in units of $2N_{ref}$ generations.
reducing noise and bias
End of explanation
"""
import dill
# loading optimisation results from previous analysis
growth_ery = dill.load(open('OUT_exp_growth_model/ERY_perturb_ar_ery.dill'))
def flatten(array):
"""
Recursively flattens nested lists, tuples, or numpy arrays
into a single flat list of their elements.
"""
import numpy
res = []
for el in array:
if isinstance(el, (list, tuple, numpy.ndarray)):
res.extend(flatten(el))
continue
res.append(el)
return res
import pandas as pd
i = 4 # where to find flag, 6 for BFGS, 4 for Nelder-Mead
successfull_popt_ery = [flatten(out)[:5] for out in growth_ery if out[1][i] == 0]
df = pd.DataFrame(data=successfull_popt_ery, \
columns=['nu_0', 'T_0', 'nu_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True) # smaller is better
print "The most likely parameter combination for this model suggests that the ery population started to shrink exponentially {0:,} generations ago, to a current population size of {1:,}.".format(int(0.00769*2*N_ref_ery), int(0.14929*N_ref_ery))
"""
Explanation: The lower plot (green line) is for the scaled Poisson residuals.
$$
residuals = (model - data)/\sqrt{model}
$$
The model is the expected counts in each frequency class. If these counts are Poisson distributed, then their variance is equal to their expectation. The differences between model and data are therefore scaled by the expected standard deviation of the model counts.
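As a concrete illustration, the scaled residuals from the formula above can be computed in a few lines of numpy (a simplified sketch, not dadi's plotting code):

```python
import numpy as np

def poisson_residuals(model, data):
    """Scaled Poisson residuals: (model - data) / sqrt(model).

    Under a Poisson error model the expected counts have variance equal
    to their mean, so sqrt(model) is the expected standard deviation.
    """
    model = np.asarray(model, dtype=float)
    data = np.asarray(data, dtype=float)
    return (model - data) / np.sqrt(model)

# e.g. an observed doubleton count far above the model expectation
print(poisson_residuals([100.0, 25.0], [100.0, 250.0]))  # second class deviates by -45 sd
```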
Although a standard neutral model is unrealistic for either population, the observed counts deviate by up to 90 standard deviations from this model!
What could be done about this?
The greatest deviations are seen for the first two frequency classes, the ones that should provide the greatest amount of information for theta (Fu1994) and therefore probably also other parameters. I think masking out just one of the first two frequency classes will lead to highly biased inferences. Masking both frequency classes will reduce a lot of the power to infer the demographic history. I therefore think that masking will most likely not lead to better estimates.
Toni has suggested that the doubleton class is inflated due to "miscalling" heterozygotes as homozygotes. When they contain a singleton they will be "called" as homozygote and therefore contribute to the doubleton count. This is aggravated by the fact that the sequenced individuals are all male which only possess one X chromosome. The X chromosome is the fourth largest of the 9 chromosomes of these grasshoppers (8 autosomes + X) (see Gosalvez1988, fig. 2). That is, about 1/9th of the sequenced RAD loci are haploid but ANGSD assumes all loci to be diploid. The genotype likelihoods it calculates are all referring to diploid genotypes.
I think one potential reason for the extreme deviations is that the genotype likelihoods are generally biased toward homozygote genotypes (i. e. also for autosomal loci) due to PCR duplicates (see eq. 1 in Nielsen2012). So, one potential improvement would be to remove PCR duplicates.
Another potential improvement could be found by subsampling 8/9th to 8/10th of the contigs in the SAF files and estimating an SFS from these. Given enough subsamples, one should eventually be found that maximally excludes loci from the X chromosome. This subsample is expected to produce the least squared deviations from an expected SFS under the standard neutral model. However, one could argue that this attempt to exclude problematic loci could also inadvertently remove loci that strongly deviate from neutral expectations due to non-neutral evolution, again reducing power to detect deviations from the standard neutral model. I think one could also just apply the selection criterion that the second MAF class be lower than the first and save all contig subsamples and SFS's that fulfill that criterion, since that should be true for all demographic scenarios. This approach, however, is compute-intensive and difficult to implement.
Ludovic has suggested optimising for a fraction $p$ of all SNP's that lie on the X chromosome. SNP's on the X chromosome are counted twice in the SFS. This fraction $p$ should therefore be subtracted from even frequency classes and added to the respective frequency class that contains SNP's that are 1/2 as frequent, e. g. from class 2 --> 1 or from 8 --> 4. One could optimise for the minimum deviation from a neutral spectrum.
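To sketch what this suggested correction could look like, here is my own illustration with a hypothetical fraction p and a toy spectrum; this is not an existing dadi function, and in practice p would be optimised, e.g. by minimising the deviation from a neutral spectrum:

```python
import numpy as np

def correct_x_inflation(sfs, p):
    """Move a fraction p of each even frequency class i to class i/2.

    Illustrates the proposed correction for haploid X-linked SNP's being
    counted at twice their true frequency in a diploid-assuming SFS.
    """
    sfs = np.asarray(sfs, dtype=float)
    corrected = sfs.copy()
    for i in range(2, len(sfs), 2):  # even classes 2, 4, 6, ...
        moved = p * sfs[i]           # fraction of the *original* count
        corrected[i] -= moved
        corrected[i // 2] += moved
    return corrected

print(correct_x_inflation([0.0, 10.0, 8.0, 4.0, 6.0], p=0.25))
```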
Finally, it may be possible to acquire the Locusta genome contigs scaffolded or annotated by linkage group (see Wang2014). One could then blast-annotate the RAD contigs with the homologous Locusta linkage group and create subsets of RAD contigs for each linkage group that exclude that linkage group. There may be a subset (presumably the one that excluded RAD contigs from the X chromosome) that has markedly reduced residuals when compared to a neutral spectrum.
exponential growth model
This model assumes that exponential growth (or decline) started some time $T$ in the past and the current effective population size is a multiple $\nu$ of the ancient populations size, i. e. before exponential growth began. So this model just estimates two parameters. If $\nu$ is estimated to be 1, this indicates that the population size hasn't changed (although see Myers2008). If it is below one, this indicates exponential population decline (how realistic is that?). If it is above 1, this indicates exponential population growth.
erythropus
End of explanation
"""
ar_par_te = dill.load(open("OUT_two_epoch/PAR_perturb_ar_par.dill"))
i = 4 # index of flag with NM algorithm
successfull_popt = [flatten(out)[:5] for out in ar_par_te if out[1][i] == 0]
df = pd.DataFrame(data=successfull_popt, \
columns=['nu_0','T_0', 'nu_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
"""
Explanation: This optimal parameter set was difficult to find. It can only be found from parameter values that are already quite close to the optimal values. The model enforces exponential population decline. This may not be very realistic.
parallelus
A sweep through the parameter space did not lead to convergence of parameters for the parallelus spectrum. Optimisations starting from up to 6-fold randomly perturbed neutral parameter values (1, 1) also did not lead to successful optimisations. The exponential growth/decline model could therefore not be fit to the par spectrum.
two epoch model
erythropus
The built-in two epoch model defines a piecewise history, where at some time $T$ in the past the ancestral population instantaneously changed in size to the contemporary population size, which has a ratio of $\nu$ to the ancestral population size.
The parameters values with the highest likelihood ($\nu=0.0001$ and $T=0.000002$) are hitting the lower bound of parameter space that I've set. The time parameter $T$ that dadi infers with the highest likelihood corresponds to 3 generations and therefore does not make sense. The two epoch model could therefore not be fit to the erythropus spectrum.
parallelus
End of explanation
"""
print "A T of 6 corresponds to {0:,} generations.".format(6*2*N_ref_par)
"""
Explanation: Very different population size histories have the same likelihood. The most likely parameter combinations are hitting the upper parameter bound for the time parameter (6).
End of explanation
"""
ar_ery = dill.load(open("OUT_bottlegrowth/ERY_perturb_ar_ery.dill"))
successfull_popt = [flatten(out)[:7] for out in ar_ery if out[1][4] == 0]
df = pd.DataFrame(data=successfull_popt, columns=['nuB_0', 'nuF_0', 'T_0', 'nuB_opt', 'nuF_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True) # smaller is better
"""
Explanation: From the dadi manual:
If your fits often push the bounds of your parameter space (i.e., results are often at the bounds of one or more parameters), this indicates a problem. It may be that your bounds are too conservative, so try widening them. It may also be that your model is misspecified or that there are unaccounted biases in your data.
Ryan Gutenkunst in a dadi forum thread on parameters hitting the boundary:
This indicates that dadi is having trouble fitting your data. One possibility is that the history of the population includes important events that aren’t in your models. Another possibility is that your data is biased in ways that aren’t in your models. For example, maybe your missing calls for rare alleles.
I am not sure whether an even higher $T$ can reasonably be assumed. The expected time to the most recent common ancestor (MRCA) in a neutral genealogy is:
$$
E\big[T_{MRCA}\big] = 2 \Big(1-\frac{1}{n}\Big)
$$
measured in $N_e$ generations (for the coalescent time unit see p. 66, 92 in Wakeley2009). Note that $T_{MRCA}$ is close to its large sample size limit of 2 already for moderate sample sizes. In order to put dadi's time parameter on the coalescent time scale it needs to be multiplied by 2! A T of 6 from dadi therefore corresponds to a T of 12 on the coalescent time scale, which should be completely beyond the possible $T_{MRCA}$ of any genealogy.
See figure 3.4, p. 79, in Wakeley2009 for the distribution of the $T_{MRCA}$. For $n=36$ it has a mode close to 1.2 and an expected value of 1.94. Values for $T_{MRCA}$ greater than 4 are very unlikely given a standard coalescent model, but may be more likely under models including population expansion or gene flow from another population.
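For concreteness, here is a tiny calculation of the expected $T_{MRCA}$ for this sample size, and of dadi's T rescaled to the coalescent time scale (plain Python, no dadi required):

```python
def expected_t_mrca(n):
    """Expected T_MRCA of a neutral genealogy, in units of N_e generations."""
    return 2.0 * (1.0 - 1.0 / n)

n = 36                       # haploid sample size per population
dadi_T = 6.0                 # dadi time parameter hitting the upper bound
coalescent_T = 2.0 * dadi_T  # dadi times are in 2*N_ref generations

print(round(expected_t_mrca(n), 2))  # -> 1.94
print(coalescent_T)                  # -> 12.0, far beyond any plausible T_MRCA
```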
I think a two epoch model cannot be fit to the parallelus spectrum.
bottleneck then exponential growth model
The built-in bottlegrowth model specifies an instantaneous size change $T \times 2N_{ref}$ generations in the past immediately followed by the start of exponential growth (or decline) toward the contemporary population size. The model has three parameters:
- ratio of population size ($\nu_B$) after instantaneous size change with respect to the ancient population size ($N_{ref}$)
- time of instantaneous size change ($T$) in $2N_{ref}$ generations in the past
- ratio of contemporary to ancient population size ($\nu_F$)
erythropus
End of explanation
"""
print "At time {0:,} generations ago, the ery population size instantaneously increased by almost 40-fold (to {1:,}).".format(int(0.36794*2*N_ref_ery), int(39.2*N_ref_ery))
"""
Explanation: These optimal parameter values were found by a broad sweep across the parameter space (data not shown).
The optimal parameters suggest the following:
End of explanation
"""
print "This was followed by exponential decline towards a current population size of {0:,}.".format(int(0.4576*N_ref_ery))
"""
Explanation: Can a short-term effective population size of 21 million reasonably be assumed?
End of explanation
"""
# load optimisation results from file
ar_ery = []
import glob
for filename in glob.glob("OUT_three_epoch/ERY_perturb_ar_ery*.dill"):
ar_ery.extend(dill.load(open(filename)))
# extract successful (i. e. converged) optimisation
successfull_popt = [flatten(out)[:9] for out in ar_ery if out[1][4] == 0]
# create data frame
df = pd.DataFrame(data=successfull_popt, \
columns=['nuB_0', 'nuF_0', 'TB_0', 'TF_0', 'nuB_opt', 'nuF_opt', 'TB_opt', 'TF_opt', '-logL'])
# sort data frame by negative log likelihood
df.sort_values(by='-logL', ascending=True) # smaller is better
"""
Explanation: parallelus
There was no convergence on a set of optimal parameter values.
three epoch model
The built-in three_epoch model specifies a piecewise history (with only instantaneous population size changes instead of gradual changes). At time $TF+TB$ the ancient population underwent a size change, stayed at this size ($\nu_B \times N_{ref}$) for $TB \times 2N_{ref}$ generations and then underwent a second size change $TF \times 2N_{ref}$ generations in the past to the contemporary population size ($\nu_F \times N_{ref}$). The model therefore has two population size parameters, $\nu_B$ and $\nu_F$, as well as two time parameters, $TB$ and $TF$.
With 4 parameters, this model is already so complex that a sweep through the parameter space (i. e. starting optimisations from many different positions) is prohibitively time-consuming. I have therefore started optimisations with initial parameter values that were generated by randomly perturbing a neutral parameter combination, $\nu_B = \nu_F = TB = TF = 1$, by up to 4-fold. I can therefore not be certain whether the optimal parameter combinations thus found correspond to the global or just a local optimum of the likelihood function.
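The random perturbation of starting values can be sketched as follows. This mirrors the idea behind dadi.Misc.perturb_params, but it is my own numpy re-implementation with an assumed perturbation scheme (each parameter multiplied by 2 raised to a uniform random power), so details may differ from dadi's:

```python
import numpy as np

def perturb_params(params, fold=2, seed=None):
    """Multiply each parameter by 2**u with u ~ Uniform(-fold, fold).

    With fold=2 each starting value is perturbed by up to 4-fold up or
    down, giving dispersed starting points for repeated optimisations.
    """
    rng = np.random.default_rng(seed)
    params = np.asarray(params, dtype=float)
    return params * 2.0 ** rng.uniform(-fold, fold, size=params.shape)

neutral = [1.0, 1.0, 1.0, 1.0]          # nuB, nuF, TB, TF
start = perturb_params(neutral, fold=2, seed=0)
print(start)  # each entry lies somewhere in [0.25, 4.0]
```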
erythropus
End of explanation
"""
# remove one parameter combination with slightly lower logL than the others
df = df.sort_values(by='-logL').head(-1)
# the time of the ancient pop size change is TB+TF
df['TB+TF'] = pd.Series(df['TB_opt']+df['TF_opt'])
# extract columns from table
nuB = df.loc[:,'nuB_opt']
nuF = df.loc[:, 'nuF_opt']
Tb_Tf = df.loc[:, 'TB+TF']
TF = df.loc[:, 'TF_opt']
# turn nu (a ratio) into absolute Ne and T into generations
nuB_n = nuB*N_ref_ery
nuF_n = nuF*N_ref_ery
Tb_Tf_g = Tb_Tf*2*N_ref_ery
TF_g = TF*2*N_ref_ery
anc = [N_ref_ery] * len(nuB) # ancestral pop size
pres = [1] * len(nuB) # 1 generation in the past
past = [max(Tb_Tf_g)+1000] * len(nuB) # furthest time point in the past
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
pylab.rcParams['font.size'] = 12.0
# plot the best 21 parameter combinations
for x, y in zip(zip(past, Tb_Tf_g, Tb_Tf_g, TF_g, TF_g, pres), zip(anc, anc, nuB_n, nuB_n, nuF_n, nuF_n)):
pylab.semilogy(x, y)
pylab.xlabel('generations in the past')
pylab.ylabel('effective population size')
"""
Explanation: Note that all but the last parameter combination have the same likelihood. Fairly different parameter combinations have practically identical likelihood. A reduction of the contemporary population size to 1/4 of the ancient population size is quite consistently inferred ($\nu_F$). The ancient population size change ($\nu_B$) is not inferred consistently. It ranges from a 186-fold increase to a reduction to 1/3 of the ancient population size.
End of explanation
"""
pylab.plot(df['nuB_opt']**df['TB_opt'], (1.0/df['nuF_opt'])**df['TF_opt'], 'bo')
pylab.xlabel('strength of first size change')
pylab.ylabel('strength of second size change')
"""
Explanation: This plot visualises the range of stepwise population size histories that the above parameter combinations imply (all with likelihood equal to 2168.11186). Most parameter combinations infer an ancient population size increase followed by a drastic population size collapse to less than 1/3 of the ancient population size that happened more than 400,000 generations ago.
Are the strength of population size expansion, $(\nu_B)^{TB}$, and the strength of population size reduction, $(\frac{1}{\nu_F})^{TF}$, correlated with each other?
End of explanation
"""
import dill
ar_par = dill.load(open("OUT_three_epoch/PAR_perturb_ar_par.dill"))
ar_par_extreme = dill.load(open("OUT_three_epoch/PAR_perturb_extreme_ar_par.dill"))
# add new output to previous output
successfull_popt = [flatten(out)[:9] for out in ar_par if out[1][4] == 0]
successfull_popt.extend([flatten(out)[:9] for out in ar_par_extreme if out[1][4] == 0])
# create data frame
df = pd.DataFrame(data=successfull_popt, \
columns=['nuB_0', 'nuF_0', 'TB_0', 'TF_0', 'nuB_opt', 'nuF_opt', 'TB_opt', 'TF_opt', '-logL'])
# sort data frame by optimal nuB to show the range of inferred ancient size changes
df.sort_values(by='nuB_opt', ascending=True)
"""
Explanation: If a long period of increased population size were correlated with a long period of decreased population size, this plot would show a positive correlation. This does not seem to be the case.
parallelus
End of explanation
"""
# add time for ancient size change
df['TB+TF'] = pd.Series(df['TB_opt']+df['TF_opt'])
# extract columns from table
nuB = df.loc[:,'nuB_opt']
nuF = df.loc[:, 'nuF_opt']
Tb_Tf = df.loc[:, 'TB+TF']
TF = df.loc[:, 'TF_opt']
# turn nu into absolute Ne and T into generations
nuB_n = nuB*N_ref_par
nuF_n = nuF*N_ref_par
Tb_Tf_g = Tb_Tf*2*N_ref_par
TF_g = TF*2*N_ref_par
# auxiliary arrays for plotting
anc = [N_ref_par] * len(nuB)
pres = [1] * len(nuB)
past = [max(Tb_Tf_g)+1000] * len(nuB)
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
pylab.rcParams['font.size'] = 12.0
for x, y in zip(zip(past, Tb_Tf_g, Tb_Tf_g, TF_g, TF_g, pres), zip(anc, anc, nuB_n, nuB_n, nuF_n, nuF_n)):
pylab.semilogy(x, y)
pylab.xlabel('generations in the past')
pylab.ylabel('effective population size')
"""
Explanation: As can be seen, extremely different population size histories have the same likelihood.
End of explanation
"""
ar_ery = dill.load(open("OUT_expGrowth_bottleneck/ERY_perturb_ar_ery.dill"))
success = [flatten(out)[:9] for out in ar_ery if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
"""
Explanation: All parameter combinations consistently show a reduction in the contemporary population size with respect to the ancient population size ($\nu_F$). The ancient population size change is much less clear and ranges from a 300-fold increase to a 100-fold reduction. This event is also inferred to have occurred quite far in the past, generally much more than 1 million generations ago.
exponential growth then bottleneck model
This model specifies exponential growth/decline toward $\nu_B \times N_{ref}$ during a time period of $TB \times 2N_{ref}$ generations, after which (at time $TF \times 2N_{ref}$) the population size undergoes an instantaneous size change to the contemporary size ($\nu_F \times N_{ref}$).
erythropus
End of explanation
"""
F = df[df['-logL'] < 2172].loc[:,['nuF_opt', 'TF_opt']]
pylab.plot(F['TF_opt']*2*N_ref_ery, F['nuF_opt']*N_ref_ery, 'bo')
pylab.xlabel('generations')
pylab.ylabel('contemporary pop. size')
"""
Explanation: Except for the last two, all optimal parameter combinations have the same likelihood and they do not deviate much from their respective initial values. In all successful optimisations above, $\nu_F$, the ratio of contemporary population size to ancient population size, converges to a value below 1/3. I think an ancient size increase or decrease cannot be inferred.
End of explanation
"""
ar_par = dill.load(open("OUT_expGrowth_bottleneck/PAR_perturb_ar_ery.dill"))
success = [flatten(out)[:9] for out in ar_par if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
"""
Explanation: There is a clear linear correlation between the inferred contemporary population size ($\nu_F \times N_{ref}$) and the time ($TF$) for which the population has been at this size.
parallelus
End of explanation
"""
F = df[df['TF_opt'] < 3].loc[:,['nuF_opt', 'TF_opt']]
pylab.plot(F['TF_opt']*2*N_ref_par, F['nuF_opt']*N_ref_par, 'bo')
pylab.xlabel('TF generations')
pylab.ylabel('contemporary pop. size')
"""
Explanation: Often, but not always, the optimal parameter values are close to their respective initial values. All optimal parameter combinations have the same likelihood. Only a recent (stepwise) population decrease is consistently inferred ($\nu_F$).
End of explanation
"""
# load results of fitting isolation with migration model to 2D sepctrum
import dill, glob
split_mig_res = []
for filename in glob.glob("OUT_2D_models/split_mig*dill"):
split_mig_res.append(dill.load(open(filename)))
from utility_functions import *
import pandas as pd
success = [flatten(out)[:9] for out in split_mig_res if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nu1_0','nu2_0', 'T_0', 'm_0', 'nu1_opt', 'nu2_opt', 'T_opt', 'm_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
"""
Explanation: There could be a linear correlation between the duration of population size reduction and the size of the reduction, but it's weaker than for erythropus.
Conclusion
Not much can be inferred from the 1D spectra of both erythropus and parallelus. However, the dadi optimisations have consistently suggested a population size reduction more than 400,000 generations ago to less than 1/3 of the ancient population size. The reduction seems to have been much stronger in parallelus than in erythropus.
Can gene flow explain the difficulty of fitting 1D models?
The fitting of 2D models was substantially easier than fitting 1D models and led to consistent and robust parameter estimates.
Could it be that the inability to model gene flow from an external source is responsible for the issues with fitting 1D models?
I have saved to files the results of each optimisation run (i. e. for each combination of starting parameter values).
End of explanation
"""
popt = df.sort_values(by='-logL', ascending=True).iloc[0, 4:8]
popt
import numpy as np
popt = np.array(popt)
func = dadi.Demographics2D.split_mig # divergence-with-gene-flow model function (built-in)
func_ex = dadi.Numerics.make_extrap_log_func(func)
ns = np.array([36, 36])
pts_l = [40, 50, 60]
# get the optimal model spectrum from the 2D 'divergence-with-migration' model
split_mig_2D_best_fit_model_spectrum = func_ex(popt, ns, pts_l)
split_mig_2D_best_fit_model_spectrum
# import 2D unfolded spectrum
sfs2d_unfolded = dadi.Spectrum.from_file('dadiExercises/EryPar.unfolded.2dsfs.dadi_format')
# add population labels
sfs2d_unfolded.pop_ids = ["ery", "par"]
# fold the spectrum
sfs2d = sfs2d_unfolded.fold()
# need to scale model spectrum by optimal theta, which depends on the number of sites in the data
model_spectrum = dadi.Inference.optimally_scaled_sfs(split_mig_2D_best_fit_model_spectrum, sfs2d)
model_spectrum
import pylab
%matplotlib inline
pylab.rcParams['figure.figsize'] = [12, 10]
pylab.rcParams['font.size'] = 14
model_spectrum.pop_ids = sfs2d.pop_ids
dadi.Plotting.plot_single_2d_sfs(model_spectrum.fold(), vmin=1, cmap=pylab.cm.jet)
# get the marginal model spectra for ery and par from the optimal 2D model spectrum
fs_ery_model = model_spectrum.marginalize([1])
fs_par_model = model_spectrum.marginalize([0])
pylab.plot(fs_par_model.fold(), 'go-', label='par')
pylab.plot(fs_ery_model.fold(), 'rs-', label='ery')
pylab.xlabel('minor allele frequency')
pylab.ylabel('SNP count')
pylab.legend()
# import 1D unfolded spectrum of ery
fs_ery_unfolded = dadi.Spectrum.from_file('ERY.unfolded.sfs')
# import 1D unfolded spectrum of par
fs_par_unfolded = dadi.Spectrum.from_file('PAR.unfolded.sfs')
fs_ery = fs_ery_unfolded.fold()
fs_par = fs_par_unfolded.fold()
dadi.Plotting.plot_1d_comp_multinom(fs_ery_model.fold()[:19], fs_ery[:19])
dadi.Plotting.plot_1d_comp_multinom(fs_par_model.fold()[:19], fs_par[:19])
# log likelihoods
ll_model_ery = dadi.Inference.ll_multinom(fs_ery_model.fold(), fs_ery)
ll_model_ery
ll_model_par = dadi.Inference.ll_multinom(fs_par_model.fold(), fs_par)
ll_model_par
"""
Explanation: These are the successful parameter optimisations I have for the model that specifies a split at time $T$ in the past into two daughter populations with population size ratios (relative to $N_{ref}$) of $\nu_1$ for ery and $\nu_2$ for par. $m$ is the rate of migration, which is assumed to be symmetrical. The migration rate is the fraction of individuals from the other population times $2N_{ref}$, i. e. number of immigrant alleles.
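To make the units concrete, dadi's scaled migration rate can be converted back to a per-generation fraction of immigrant alleles (a one-line sketch with purely hypothetical example values for m and N_ref; the real N_ref would come from theta as computed above):

```python
# convert dadi's scaled migration rate back to a per-generation fraction
N_ref = 1000000                    # hypothetical ancestral effective size
m_opt = 0.5                        # hypothetical optimal dadi migration parameter
m_fraction = m_opt / (2 * N_ref)   # fraction of immigrant alleles per generation
print(m_fraction)  # -> 2.5e-07
```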
The best parameter combination that I got from these optimisation runs is:
End of explanation
"""
|
madHatter106/DataScienceCorner | posts/xarray-geoviews-a-new-perspective-on-oceanographic-data-part-ii.ipynb | mit | import xarray as xr
import os
import glob
"""
Explanation: In a previous post, I introduced xarray with some simple manipulation and data plotting. In this super-short post, I'm going to do some more manipulation, using multiple input files to create a new dimension, reorganize the data and store them in multiple output files. All with but a few lines of code.
<!-- TEASER_END -->
GOAL:
The ultimate goal here is to create new datasets, one per band, that aggregate results across experiments so as to facilitate inter-experiment comparisons.
HOW:
I will load netCDF files from a number of Monte-Carlo uncertainty experiments, among which the source of the uncertainty differs: Lt (sensor noise), wind, pressure, relative humidity, or all of the above.
At the end of this post, I will have 6 files, one per SeaWiFS visible band, each containing one 3D array whose dimensions are latitude, longitude, and experiment.
WHY:
I'm doing this to create an interactive visualization (cf. next post) using GeoViews, where the goal is to compare, band-wise, cross-experiment results.
As usual, start with some imports...
End of explanation
"""
dataDir = '/accounts/ekarakoy/disk02/UNCERTAINTIES/Monte-Carlo/DATA/AncillaryMC/'
expDirs = ['Lt', 'AllAnc_Lt', 'Pressure', 'RH', 'WindSpeed', 'O3']
outDir = 'Synthesis'
fpattern = 'S20031932003196.L3m_4D_SU*.nc'
fpaths = [glob.glob(os.path.join(dataDir, expDir, fpattern))[0] for expDir in expDirs]
"""
Explanation: Now I set up some file path logic to avoid rewriting full file paths, and accrue the file paths into a list, fpaths. The new files I will create next will be stored in the 'Synthesis' directory for later retrieval.
End of explanation
"""
bands = [412, 443, 490, 510, 555, 670]
"""
Explanation: I'm only interested in the visible bands because of the black pixel assumption used in the atmospheric correction applied during the processing phase, which renders Rrs in the near-infrared bands useless.
End of explanation
"""
with xr.open_mfdataset(fpaths, concat_dim='experiment') as allData:
allData.coords['experiment'] = expDirs
for band in bands:
foutpath = os.path.join(dataDir, outDir, '%s%d%s' %(fpattern.split('SU')[0],
band, '.nc'))
if not os.path.exists(os.path.dirname(foutpath)):
os.makedirs(os.path.dirname(foutpath))
data = allData.data_vars['Rrs_unc_%d' % band]
data.name='rrs_unc'
dsData = data.to_dataset()
dsData.to_netcdf(path=foutpath, engine='netcdf4')
"""
Explanation: xarray has a nifty feature that allows opening multiple datasets and automatically concatenating matching (by name and dimension) arrays, with the option of naming the newly created dimension. In our case, this is 'experiment'. The next line of code opens what will end up being a temporary xarray Dataset (note that you will need dask installed for this). I'll then label the experiment dimension with the appropriate experiment names. Importantly, the concatenation order reflects the order in which the file paths are specified, which is also the order of the experiment names in the 'expDirs' list defined above. I also make sure that the Rrs uncertainty data is labeled the same, 'rrs_unc'.
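The same auto-concatenation behaviour can be illustrated in memory with xr.concat on two toy single-experiment datasets (a minimal sketch with made-up values, independent of the files above):

```python
import numpy as np
import xarray as xr

# two tiny single-experiment datasets standing in for the per-experiment files
ds_lt = xr.Dataset({'rrs_unc': ('lat', np.array([0.1, 0.2]))})
ds_rh = xr.Dataset({'rrs_unc': ('lat', np.array([0.3, 0.4]))})

# concat stacks matching variables along a brand-new dimension...
combined = xr.concat([ds_lt, ds_rh], dim='experiment')
# ...which is then labeled, just as done with open_mfdataset above
combined.coords['experiment'] = ['Lt', 'RH']

print(combined['rrs_unc'].dims)                            # ('experiment', 'lat')
print(float(combined['rrs_unc'].sel(experiment='RH')[0]))  # 0.3
```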
End of explanation
"""
os.listdir(os.path.dirname(foutpath))
"""
Explanation: Verify that all the files are where they should be - in the Synthesis directory
End of explanation
"""
|
llvll/motionml | ip[y]/motionml.ipynb | bsd-2-clause | from tinylearn import KnnDtwClassifier
from tinylearn import CommonClassifier
import pandas as pd
import numpy as np
import os
train_labels = []
test_labels = []
train_data_raw = []
train_data_hist = []
test_data_raw = []
test_data_hist = []
# Utility function for normalizing numpy arrays
def normalize(v):
norm = np.linalg.norm(v)
if norm == 0:
return v
return v / norm
# Loading all data for training and testing from TXT files
def load_data():
for d in os.listdir("data"):
for f in os.listdir(os.path.join("data", d)):
if f.startswith("TRAIN"):
train_labels.append(d)
tr = normalize(np.ravel(pd.read_csv(os.path.join("data", d, f),
delim_whitespace=True,
header=None)))
train_data_raw.append(tr)
train_data_hist.append(np.histogram(tr, bins=20)[0])
else:
test_labels.append(d)
td = normalize(np.ravel(pd.read_csv(os.path.join("data", d, f),
delim_whitespace=True,
header=None)))
test_data_raw.append(td)
test_data_hist.append(np.histogram(td, bins=20)[0])
load_data()
"""
Explanation: MotionML
Motion pattern recognition using KNN-DTW and classifiers from TinyLearn
This is a domain-specific example of using the TinyLearn module to recognize (classify) motion patterns from the supplied accelerometer data.
The following motion patterns are included into this demo:
Walking
Sitting down on a chair
Getting up from a bed
Drinking a glass
Descending stairs
Combing hair
Brushing teeth
The accelerometer data is based on the following public dataset from UCI: Dataset for ADL Recognition with Wrist-worn Accelerometer
This IP[y] Notebook performs a step-by-step execution of the 'motion_rec_demo.py' file with extra comments. The source code is available on GitHub
Dynamic Time Warping (DTW) and K-Nearest Neighbors (KNN) algorithms are used to demonstrate labeling of varying-length sequences of accelerometer data. Such algorithms can be applied to time series classification and other cases that require matching / training sequences of unequal lengths.
Scikit-Learn doesn't have any DTW implementation, so a custom class, KnnDtwClassifier, has been implemented as part of the TinyLearn module.
DTW is slow by default, given its quadratic complexity, which is why we also speed up the classification with an alternative approach based on histograms and CommonClassifier from TinyLearn.
Let's start exploring the demo script ...
In the beginning we're loading the accelerometer data from TXT files:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
for i in range(0, 35, 5):
hist, bins = np.histogram(train_data_raw[i], bins=20)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.title(train_labels[i])
plt.bar(center, hist, align='center', width=width)
plt.show()
"""
Explanation: Let's plot several selected histograms for the train data:
End of explanation
"""
# Raw sequence labeling with KnnDtwClassifier and KNN=1
clf1 = KnnDtwClassifier(1)
clf1.fit(train_data_raw, train_labels)
for index, t in enumerate(test_data_raw):
print("KnnDtwClassifier prediction for " +
str(test_labels[index]) + " = " + str(clf1.predict(t)))
"""
Explanation: Before we will explore the classification with histograms let's try the default approach using KNN-DTW:
End of explanation
"""
# Let's do an extended prediction to get the distances to 3 nearest neighbors
clf2 = KnnDtwClassifier(3)
clf2.fit(train_data_raw, train_labels)
def classify2():
for index, t in enumerate(test_data_raw):
res = clf2.predict_ext(t)
nghs = np.array(train_labels)[res[1]]
print("KnnDtwClassifier neighbors for " + str(test_labels[index]) + " = " + str(nghs))
print("KnnDtwClassifier distances to " + str(nghs) + " = " + str(res[0]))
%time classify2()
"""
Explanation: Bingo! All classifications are correct! Executing the KNN-DTW classification took some time, though ... Let's go deeper into the details of KNN-DTW for our accelerometer data and use KNN=3 with distance information. This time we will measure the performance as well.
End of explanation
"""
# Let's use CommonClassifier with the histogram data for faster prediction
clf3 = CommonClassifier(default=True)
clf3.fit(train_data_hist, train_labels)
clf3.print_fit_summary()
print("\n")
def classify3():
for index, t in enumerate(test_data_hist):
print("CommonClassifier prediction for " + str(test_labels[index]) + " = "
+ str(clf3.predict(t)))
%time classify3()
"""
Explanation: All results are correct as well! But the execution time is not the best one - almost 1 min 30 seconds ... So let's try to use a faster alternative for classification, which doesn't depend upon DTW and its execution time:
End of explanation
"""
class KnnDtwClassifier(BaseEstimator, ClassifierMixin):
"""Custom classifier implementation for Scikit-Learn using Dynamic Time Warping (DTW)
and KNN (K-Nearest Neighbors) algorithms.
This classifier can be used for labeling the varying-length sequences, like time series
or motion data.
FastDTW library is used for faster DTW calculations - linear instead of quadratic complexity.
"""
def __init__(self, n_neighbors=1):
self.n_neighbors = n_neighbors
self.features = []
self.labels = []
def get_distance(self, x, y):
return fastdtw(x, y)[0]
def fit(self, X, y=None):
for index, l in enumerate(y):
self.features.append(X[index])
self.labels.append(l)
return self
def predict(self, X):
dist = np.array([self.get_distance(X, seq) for seq in self.features])
indices = dist.argsort()[:self.n_neighbors]
return np.array(self.labels)[indices]
def predict_ext(self, X):
dist = np.array([self.get_distance(X, seq) for seq in self.features])
indices = dist.argsort()[:self.n_neighbors]
return (dist[indices],
indices)
class CommonClassifier(object):
"""Helper class to execute the common classification workflow - from training to prediction
to metrics reporting with the popular ML algorithms, like SVM or Random Forest.
Includes the default list of estimators with instances and parameters, which have been
proven to work well.
"""
def __init__(self, default=True, cv=5, reduce_func=None):
self.cv = cv
self.default = default
self.reduce_func = reduce_func
self.reducer = None
self.grid_search = None
def add_estimator(self, name, instance, params):
self.grid_search.add_estimator(name, instance, params)
def fit(self, X, y=None):
if self.default:
self.grid_search = GridSearchEstimatorSelector(X, y, self.cv)
self.grid_search.add_estimator('SVC', SVC(), {'kernel': ["linear", "rbf"],
'C': [1, 5, 10, 50],
'gamma': [0.0, 0.001, 0.0001]})
self.grid_search.add_estimator('RandomForestClassifier', RandomForestClassifier(),
{'n_estimators': [5, 10, 20, 50]})
self.grid_search.add_estimator('ExtraTreeClassifier', ExtraTreesClassifier(),
{'n_estimators': [5, 10, 20, 50]})
self.grid_search.add_estimator('LogisticRegression', LogisticRegression(),
{'C': [1, 5, 10, 50], 'solver': ["lbfgs", "liblinear"]})
self.grid_search.add_estimator('SGDClassifier', SGDClassifier(),
{'n_iter': [5, 10, 20, 50], 'alpha': [0.0001, 0.001],
'loss': ["hinge", "modified_huber",
"huber", "squared_hinge", "perceptron"]})
if self.reduce_func is not None:
self.reducer = FeatureReducer(X, y, self.reduce_func)
self.reducer.reduce(10)
return self.grid_search.select_estimator()
def print_fit_summary(self):
return self.grid_search.print_summary()
def predict(self, X):
if self.grid_search.selected_name is not None:
if self.reduce_func is not None and len(self.reducer.dropped_cols) > 0:
X.drop(self.reducer.dropped_cols, axis=1, inplace=True)
return self.grid_search.best_estimator.predict(X)
else:
return None
"""
Explanation: 19.1 ms for the total execution - much better!
We're done with this demo script. Adding the source code for KnnDtwClassifier and CommonClassifier to clarify the implementation details.
End of explanation
"""
|
DiXiT-eu/collatex-tutorial | unit8/unit8-collatex-and-XML/Custom sort.ipynb | gpl-3.0 | import re
"""
Explanation: Defining a custom sort for a complex value
We need to sort data that is partially numeric and partially alphabetic, in this case the line numbers 1, 4008, 4008a, 4009, and 9. We can’t sort them numerically because the 'a' isn’t numeric. And we can’t sort them alphabetically because the numbers that begin with '4' (4008, 4008a, 4009) would all sort before '9'. We resolve the problem by writing a custom sort function that separates the values into leading numeric and optional trailing alphabetic parts. We then sort numerically by the numeric part, and break ties by subsorting alphabetically on the alphabetic part.
We’ll use a regular expression to parse our line number into two parts, so we import the re module:
End of explanation
"""
lines = ['4008','4008a','4009','1','9']
sorted(lines)
"""
Explanation: We initialize a lines list of strings and demonstrate how the default alphabetic sort gives the wrong results:
End of explanation
"""
sorted(lines,key=int) # this raises an error
"""
Explanation: In Python 3, the key parameter specifies a function that should be applied to the list items before sorting them. If we use int to convert each of the string values to an integer so that we can perform a numerical sort, we raise an error because the 'a' can’t be converted to an integer:
End of explanation
"""
linenoRegex = re.compile(r'(\d+)(.*)')
def splitId(id):
"""Splits @id value like 4008a into parts, for sorting"""
results = linenoRegex.match(id).groups()
return (int(results[0]),results[1])
"""
Explanation: We create our own key function, for which we define linenoRegex, a pattern with two capture groups, both captured as strings by default. The first captures all digits from the beginning of the line number value; the second captures anything after the numbers. The regex splits the input into a tuple containing the two values as strings, and we convert the first value to an integer before returning it. For example, the input value '4008a' will return (4008,'a'), where 4008 is an integer and 'a' is a string.
End of explanation
"""
sorted(lines,key=splitId)
"""
Explanation: If we now specify our splitId function as the value of the key parameter in the sorted() function, the values will be split into two parts before sorting. Tuples are sorted part by part from start to finish, so we don’t have to tell the function explicitly how to sort once we’ve defined the two parts of our tuple:
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment04/MatplotlibEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 2
Imports
End of explanation
"""
!head -n 30 open_exoplanet_catalogue.txt
"""
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
"""
data = np.genfromtxt("open_exoplanet_catalogue.txt", dtype=float, delimiter= ',')
assert data.shape==(1993,24)
"""
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
"""
x = data[:,2]
y = x[~np.isnan(x)]
plt.hist(y, 200)
plt.xlim(0,30)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#a2a7ff')
ax.spines['left'].set_color('#a2a7ff')
plt.xlabel("Distibution of Planetary Masses (MJ)", fontsize = 14, color="#383838")
plt.ylabel("Number of Planets", fontsize = 14, color="#383838")
ax.tick_params(axis='x', colors='#666666')
ax.tick_params(axis='y', colors='#666666')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
assert True # leave for grading
"""
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
End of explanation
"""
a = data[:,5]
b = data[:,6]
c = np.vstack((a,b))
d = np.transpose(c)
new = np.transpose(d[~np.isnan(d).any(axis=1)])
ex = new[0]
why = new[1]
plt.figure(figsize=(12,5))
plt.scatter(ex, why)
plt.xlim(0, 2)
plt.ylim(-.05, 1.0)
plt.xlabel("Semimajor Axis (AU)", fontsize = 14, color="#383838")
plt.ylabel("Eccentricity", fontsize = 14, color="#383838")
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#a2a7ff')
ax.spines['left'].set_color('#a2a7ff')
ax.tick_params(axis='x', colors='#666666')
ax.tick_params(axis='y', colors='#666666')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
assert True # leave for grading
# used "Joe Kington"'s (from stackoverflow) method for setting axis tick and label colors
# used "timday"'s (from stackoverflow) method for hiding the top and right axis and ticks
"""
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
|
hiteshagrawal/python | udacity/nano-degree/.ipynb_checkpoints/L1_Starter_Code-checkpoint.ipynb | gpl-2.0 | import unicodecsv
## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open('enrollments.csv', 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
# enrollments.append(row)
# f.close()
def readme(filename):
with open(filename, 'rb') as f:
reader = unicodecsv.DictReader(f)
return list(reader)
#####################################
# 1 #
#####################################
## Read in the data from daily_engagement.csv and project_submissions.csv
## and store the results in the below variables.
## Then look at the first row of each table.
enrollments = readme('enrollments.csv')
daily_engagement = readme('daily_engagement.csv')
project_submissions = readme('project_submissions.csv')
"""
Explanation: Before we get started, a couple of reminders to keep in mind when using iPython notebooks:
Remember that the number within the brackets at the left side of a code cell shows when it was last run.
When you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.
The previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage.
Load Data from CSVs
End of explanation
"""
from datetime import datetime as dt
# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
if i == '':
return None
else:
return int(i)
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
enrollment['join_date'] = parse_date(enrollment['join_date'])
enrollments[0]
# Clean up the data types in the engagement table
for engagement_record in daily_engagement:
engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))
engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))
engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))
engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])
engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])
daily_engagement[0]
# Clean up the data types in the submissions table
for submission in project_submissions:
submission['completion_date'] = parse_date(submission['completion_date'])
submission['creation_date'] = parse_date(submission['creation_date'])
project_submissions[0]
"""
Explanation: Fixing Data Types
End of explanation
"""
#####################################
# 2 #
#####################################
## Find the total number of rows and the number of unique students (account keys)
## in each table.
unique_enrolled_students = set()
for enrollment in enrollments:
unique_enrolled_students.add(enrollment['account_key'])
len(unique_enrolled_students)
"""
Explanation: Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.
Investigating the Data
End of explanation
"""
#####################################
# 3 #
#####################################
## Rename the "acct" column in the daily_engagement table to "account_key".
unique_engagement_students = set()
for engagement_record in daily_engagement:
unique_engagement_students.add(engagement_record['account_key'])
len(unique_engagement_students)
daily_engagement[1]
"""
Explanation: Problems in the Data
End of explanation
"""
#####################################
# 4 #
#####################################
## Find any one student enrollments where the student is missing from the daily engagement table.
## Output that enrollment.
for enrollment in enrollments:
student = enrollment['account_key']
if student not in unique_engagement_students:
print enrollment
break
"""
Explanation: Missing Engagement Records
End of explanation
"""
#####################################
# 5 #
#####################################
## Find the number of surprising data points (enrollments missing from
## the engagement table) that remain, if any.
num_problem_students = 0
for enrollment in enrollments:
student = enrollment['account_key']
if (student not in unique_engagement_students and
enrollment['join_date'] != enrollment['cancel_date']):
print enrollment
num_problem_students += 1
num_problem_students
"""
Explanation: Checking for More Problem Records
End of explanation
"""
# Create a set of the account keys for all Udacity test accounts
udacity_test_accounts = set()
for enrollment in enrollments:
if enrollment['is_udacity']:
udacity_test_accounts.add(enrollment['account_key'])
len(udacity_test_accounts)
# Given some data with an account_key field, removes any records corresponding to Udacity test accounts
def remove_udacity_accounts(data):
non_udacity_data = []
for data_point in data:
if data_point['account_key'] not in udacity_test_accounts:
non_udacity_data.append(data_point)
return non_udacity_data
# Remove Udacity test accounts from all three tables
non_udacity_enrollments = remove_udacity_accounts(enrollments)
non_udacity_engagement = remove_udacity_accounts(daily_engagement)
non_udacity_submissions = remove_udacity_accounts(project_submissions)
print len(non_udacity_enrollments)
print len(non_udacity_engagement)
print len(non_udacity_submissions)
"""
Explanation: Tracking Down the Remaining Problems
End of explanation
"""
#####################################
# 6 #
#####################################
## Create a dictionary named paid_students containing all students who either
## haven't canceled yet or who remained enrolled for more than 7 days. The keys
## should be account keys, and the values should be the date the student enrolled.
# paid_students = {}
# for enrollments in non_udacity_enrollments:
# if enrollments['days_to_cancel'] is None or enrollments['days_to_cancel'] > 7 :
# paid_students[enrollments['account_key']] = enrollments['join_date']
# len(paid_students)
paid_students = {}
for enrollment in non_udacity_enrollments:
if (not enrollment['is_canceled'] or
enrollment['days_to_cancel'] > 7):
account_key = enrollment['account_key']
enrollment_date = enrollment['join_date']
if (account_key not in paid_students or
enrollment_date > paid_students[account_key]):
paid_students[account_key] = enrollment_date
len(paid_students)
"""
Explanation: Refining the Question
End of explanation
"""
# Takes a student's join date and the date of a specific engagement record,
# and returns True if that engagement record happened within one week
# of the student joining.
# def within_one_week(join_date, engagement_date):
# time_delta = engagement_date - join_date
# return time_delta.days < 7
def within_one_week(join_date, engagement_date):
time_delta = engagement_date - join_date
return time_delta.days >= 0 and time_delta.days < 7
#####################################
# 7 #
#####################################
## Create a list of rows from the engagement table including only rows where
## the student is one of the paid students you just found, and the date is within
## one week of the student's join date.
def remove_free_trial_cancels(data):
new_data = []
for data_point in data:
if data_point['account_key'] in paid_students:
new_data.append(data_point)
return new_data
paid_enrollments = remove_free_trial_cancels(non_udacity_enrollments)
paid_engagement = remove_free_trial_cancels(non_udacity_engagement)
paid_submissions = remove_free_trial_cancels(non_udacity_submissions)
print len(paid_enrollments)
print len(paid_engagement)
print len(paid_submissions)
paid_engagement_in_first_week = []
for engagement_record in paid_engagement:
account_key = engagement_record['account_key']
join_date = paid_students[account_key]
engagement_record_date = engagement_record['utc_date']
if within_one_week(join_date, engagement_record_date):
paid_engagement_in_first_week.append(engagement_record)
print len(paid_engagement_in_first_week)
print paid_engagement_in_first_week[1:10]
"""
Explanation: Getting Data from First Week
End of explanation
"""
from collections import defaultdict
# Create a dictionary of engagement grouped by student.
# The keys are account keys, and the values are lists of engagement records.
engagement_by_account = defaultdict(list)
for engagement_record in paid_engagement_in_first_week:
account_key = engagement_record['account_key']
engagement_by_account[account_key].append(engagement_record)
# Create a dictionary with the total minutes each student spent in the classroom during the first week.
# The keys are account keys, and the values are numbers (total minutes)
total_minutes_by_account = {}
for account_key, engagement_for_student in engagement_by_account.items():
total_minutes = 0
for engagement_record in engagement_for_student:
total_minutes += engagement_record['total_minutes_visited']
total_minutes_by_account[account_key] = total_minutes
import numpy as np
# Summarize the data about minutes spent in the classroom
total_minutes = total_minutes_by_account.values()
print 'Mean:', np.mean(total_minutes)
print 'Standard deviation:', np.std(total_minutes)
print 'Minimum:', np.min(total_minutes)
print 'Maximum:', np.max(total_minutes)
"""
Explanation: Exploring Student Engagement
End of explanation
"""
#####################################
# 8 #
#####################################
## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.
student_with_max_minutes = None
max_minutes = 0
for student, total_minutes in total_minutes_by_account.items():
if total_minutes > max_minutes:
max_minutes = total_minutes
student_with_max_minutes = student
max_minutes
#Alternatively, you can find the account key with the maximum minutes using this shorthand notation:
#max(total_minutes_by_account.items(), key=lambda pair: pair[1])
for engagement_record in paid_engagement_in_first_week:
if engagement_record['account_key'] == student_with_max_minutes:
print engagement_record
"""
Explanation: Debugging Data Analysis Code
End of explanation
"""
#####################################
# 9 #
#####################################
## Adapt the code above to find the mean, standard deviation, minimum, and maximum for
## the number of lessons completed by each student during the first week. Try creating
## one or more functions to re-use the code above.
# def total_num(data):
# for account_key, engagement_for_student in engagement_by_account.items():
# total_lessons = 0
# for engagement_record in engagement_for_student:
# total_lessons += engagement_record[data]
# total_lessons_by_account[account_key] = total_lessons
# total_lessons_by_account = {}
# for account_key, engagement_for_student in engagement_by_account.items():
# total_lessons = 0
# for engagement_record in engagement_for_student:
# total_lessons += engagement_record['lessons_completed']
# total_lessons_by_account[account_key] = total_lessons
# print len(total_lessons_by_account)
# print total_lessons_by_account['619']
# student_with_max_lesson = None
# max_lessons_completed = 0
# for student, lessons_completed in total_lessons_by_account.items():
# if lessons_completed > max_lessons_completed:
# max_lessons_completed = lessons_completed
# student_with_max_lesson = student
# print max_lessons_completed, student_with_max_lesson
# total_lessons = total_lessons_by_account.values()
# print 'Mean:', np.mean(total_lessons)
# print 'Standard deviation:', np.std(total_lessons)
# print 'Minimum:', np.min(total_lessons)
# print 'Maximum:', np.max(total_lessons)
from collections import defaultdict
def group_data(data, key_name):
grouped_data = defaultdict(list)
for data_point in data:
key = data_point[key_name]
grouped_data[key].append(data_point)
return grouped_data
engagement_by_account = group_data(paid_engagement_in_first_week,
'account_key')
def sum_grouped_items(grouped_data, field_name):
summed_data = {}
for key, data_points in grouped_data.items():
total = 0
for data_point in data_points:
total += data_point[field_name]
summed_data[key] = total
return summed_data
import numpy as np
def describe_data(data):
print 'Mean:', np.mean(data)
print 'Standard deviation:', np.std(data)
print 'Minimum:', np.min(data)
print 'Maximum:', np.max(data)
total_minutes_by_account = sum_grouped_items(engagement_by_account,
'total_minutes_visited')
describe_data(total_minutes_by_account.values())
lessons_completed_by_account = sum_grouped_items(engagement_by_account,
'lessons_completed')
describe_data(lessons_completed_by_account.values())
print engagement_by_account['619']
"""
Explanation: Lessons Completed in First Week
End of explanation
"""
######################################
# 10 #
######################################
## Find the mean, standard deviation, minimum, and maximum for the number of
## days each student visits the classroom during the first week.
for engagement_record in paid_engagement:
if engagement_record['num_courses_visited'] > 0:
engagement_record['has_visited'] = 1
else:
engagement_record['has_visited'] = 0
# days_visited_by_account = sum_grouped_items(engagement_by_account,
# 'has_visited')
# describe_data(days_visited_by_account.values())
def sum_grouped_items_record(grouped_data, field_name):
summed_data = {}
for key, data_points in grouped_data.items():
total = 0
for data_point in data_points:
#total += data_point[field_name]
if data_point[field_name] > 0: #Means student has visited the classroom
total += 1
summed_data[key] = total
return summed_data
days_visited_by_account = sum_grouped_items_record(engagement_by_account,
'num_courses_visited')
describe_data(days_visited_by_account.values())
"""
Explanation: Number of Visits in First Week
End of explanation
"""
######################################
# 11 #
######################################
## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should contain data for students
## who do not.
# subway_project_lesson_keys = ['746169184', '3176718735']
# passing_engagement =
# non_passing_engagement =
paid_submissions[2]
# {u'account_key': u'256',
# u'assigned_rating': u'PASSED',
# u'completion_date': datetime.datetime(2015, 1, 20, 0, 0),
# u'creation_date': datetime.datetime(2015, 1, 20, 0, 0),
# u'lesson_key': u'3176718735',
# u'processing_state': u'EVALUATED'}
subway_project_lesson_keys = ['746169184', '3176718735']
pass_subway_project = set()
for submission in paid_submissions:
project = submission['lesson_key']
rating = submission['assigned_rating']
if ((project in subway_project_lesson_keys) and
(rating == 'PASSED' or rating == 'DISTINCTION')):
pass_subway_project.add(submission['account_key'])
len(pass_subway_project)
passing_engagement = []
non_passing_engagement = []
for engagement_record in paid_engagement_in_first_week:
if engagement_record['account_key'] in pass_subway_project:
passing_engagement.append(engagement_record)
else:
non_passing_engagement.append(engagement_record)
print len(passing_engagement)
print len(non_passing_engagement)
"""
Explanation: Splitting out Passing Students
End of explanation
"""
######################################
# 12 #
######################################
## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlier (minutes spent
## in the classroom, lessons completed, and days visited).
"""
Explanation: Comparing the Two Student Groups
End of explanation
"""
######################################
# 13 #
######################################
## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics you examined.
"""
Explanation: Making Histograms
End of explanation
"""
######################################
# 14 #
######################################
## Make a more polished version of at least one of your visualizations
## from earlier. Try importing the seaborn library to make the visualization
## look better, adding axis labels and a title, and changing one or more
## arguments to the hist() function.
"""
Explanation: Improving Plots and Sharing Findings
End of explanation
"""
|
quantumlib/Cirq | docs/qubits.ipynb | apache-2.0 | try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
"""
Explanation: Qubits
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/qubits"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/qubits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/qubits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/qubits.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
End of explanation
"""
qubit = cirq.NamedQubit("myqubit")
# creates an equal superposition of |0> and |1> when simulated
circuit = cirq.Circuit(cirq.H(qubit))
# see the "myqubit" identifier at the left of the circuit
print(circuit)
# run simulation
result = cirq.Simulator().simulate(circuit)
print("result:")
print(result)
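As an aside, the fact that the state lives in the simulator (not in the qubit object) can be illustrated without Cirq at all; here is a minimal pure-Python sketch (all names here are illustrative assumptions, not Cirq API):

```python
import math

# A "qubit" is just an identifier; it carries no state of its own.
qubit = "myqubit"

# The simulator owns the state: amplitudes for |0> and |1>.
state = [1.0, 0.0]  # start in |0>

def apply_hadamard(state):
    """Return the single-qubit state after a Hadamard gate."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

state = apply_hadamard(state)
print(qubit, "->", state)  # equal superposition: amplitudes ~0.707 each
```

The label never changes; only the simulator-owned amplitude vector does.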
"""
Explanation: A qubit is the basic unit of quantum information, a quantum bit: a two-level system that can exist in a superposition of those two possible states. Cirq also supports higher-dimensional systems, so-called qudits, that we won't cover here.
In Cirq, a Qubit is nothing more than an abstract object that has an identifier, a cirq.Qid, and some other potential metadata to represent device-specific properties that can be used to validate a circuit.
In contrast to real qubits, the Cirq qubit does not hold any state. The reason for this is that the actual state of the qubits is maintained in the quantum processor, or, in the case of simulation, in the simulated state vector.
End of explanation
"""
|
timothydmorton/VESPA | notebooks/predictions.ipynb | mit | from keputils import kicutils as kicu
stars = kicu.DATA # This is Q17 stellar table.
"""
Explanation: The question to explore is the following: given Kepler observations, how many events of the following type are we likely to observe as single eclipse events?
EBs
BEBs
HEBs
More specifically, given a period and depth range, how many of each do I predict?
End of explanation
"""
stars = stars.query('mass > 0') #require there to be a mass.
len(stars)
from vespa.stars import Raghavan_BinaryPopulation
binpop = Raghavan_BinaryPopulation(M=stars.mass)
binpop.stars.columns
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# e.g.,
plt.hist(np.log10(binpop.orbpop.P.value), histtype='step', lw=3);
plt.figure()
plt.hist(binpop.orbpop.ecc, histtype='step', lw=3);
plt.figure()
binpop.orbpop.scatterplot(ms=0.05, rmax=1000);
# Determine eclipse probabilities
import astropy.units as u
ecl_prob = ((binpop.stars.radius_A.values + binpop.stars.radius_B.values)*u.Rsun / binpop.orbpop.semimajor).decompose()
sum(ecl_prob > 1) # These will need to be ignored later.
ok = ecl_prob < 1
# Rough estimate of numbers of systems with eclipsing orientation
ecl_prob[ok].sum()
# OK, how many of these have periods less than 3 years?
kep_ok = (ok & (binpop.orbpop.P < 3*u.yr))
ecl_prob[kep_ok].sum()
# And, including binary fraction
fB = 0.4
fB * ecl_prob[kep_ok].sum()
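Before trusting the astropy-units version above, it can help to sanity-check the geometric eclipse-probability estimate (R1 + R2)/a with plain floats; the solar-radius-in-AU constant below is approximate and the binary is a made-up example:

```python
R_SUN_IN_AU = 0.00465  # approximate solar radius in AU

def eclipse_probability(r1_rsun, r2_rsun, a_au):
    """Geometric probability that a binary with radii r1, r2 (in Rsun)
    and semimajor axis a (in AU) shows eclipses (circular orbit)."""
    return (r1_rsun + r2_rsun) * R_SUN_IN_AU / a_au

# Two Sun-like stars separated by 1 AU: roughly a 1% chance of eclipsing geometry.
p = eclipse_probability(1.0, 1.0, 1.0)
print(p)  # ~0.0093
```

Doubling the separation halves the probability, as the formula implies.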
"""
Explanation: OK, first, let's calculate the number of expected EBs. That is, assign each Kepler target star a binary companion according to the Raghavan (2010) distribution, and see what the eclipsing population looks like. Let's use tools from vespa to make this happen.
End of explanation
"""
minP = 5*u.yr; maxP = 15*u.yr
kep_long = ok & (binpop.orbpop.P > minP) & (binpop.orbpop.P < maxP)
fB * ecl_prob[kep_long].sum()
"""
Explanation: This is actually a bit low compared to the Kepler EB catalog haul of 2878 EBs. However, keep in mind that this is only looking at binaries, not considering triple systems that tend to have much closer pairs. So to my eye this is actually a pretty believable number. (Also note that this does not take eccentricity into account.) How many of these would have periods from 5 to 15 years?
End of explanation
"""
in_window = np.clip(4*u.yr / binpop.orbpop.P, 0, 1).decompose()
fB*(ecl_prob[kep_long] * in_window[kep_long]).sum()
"""
Explanation: This seems like a lot. Now keep in mind this is just the number with eclipsing geometry, not the number that would actually show up within the Kepler data, taking the window function into account. A quick hack at this would be that the probability for an eclipse to be within the Kepler window would be 4yr / P.
End of explanation
"""
|
bill9800/House-Prediciton | HousePrediction.ipynb | mit | #missing data
total = df_train.isnull().sum().sort_values(ascending = False)
percent = (df_train.isnull().sum()/df_train.isnull().count()).sort_values(ascending = False)
missing_data = pd.concat([total,percent],axis=1,keys=['Total','Percent'])
missing_data.head(25)
# In the search for normality
sns.distplot(df_train['SalePrice'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['SalePrice'],plot=plt)
df_train['SalePrice'] = np.log(df_train['SalePrice'])
sns.distplot(df_train['SalePrice'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['SalePrice'],plot=plt)
sns.distplot(df_train['GrLivArea'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['GrLivArea'],plot=plt)
# do log transformation
df_train['GrLivArea'] = np.log(df_train['GrLivArea'])
sns.distplot(df_train['GrLivArea'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['GrLivArea'],plot=plt)
sns.distplot(df_train['GarageArea'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['GarageArea'],plot=plt)
sns.distplot(df_train['TotalBsmtSF'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['TotalBsmtSF'],plot=plt)
sns.distplot(df_train['OverallQual'])
sns.distplot(df_train['FullBath'])
sns.distplot(df_train['Fireplaces'])
sns.distplot(df_train['YearBuilt'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['YearBuilt'],plot=plt)
sns.distplot(df_train['YearRemodAdd'],fit=norm)
fig = plt.figure()
res = stats.probplot(df_train['YearRemodAdd'],plot=plt)
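A quick numeric way to see why the log transform helps the right-skewed variables above: compare a hand-rolled skewness before and after taking logs (the lognormal sample is illustrative, not the housing data):

```python
import numpy as np

def skewness(x):
    """Sample skewness: the third standardized moment."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()
    return np.mean(((x - m) / s) ** 3)

rng = np.random.RandomState(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # right-skewed
print(skewness(sample))          # strongly positive
print(skewness(np.log(sample)))  # near zero after the log transform
```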
"""
Explanation: Choose the index
To keep the model simple, I pick three groups of features.
Area related : GrLivArea, GarageArea, TotalBsmtSF
Number/Rank related: OverallQual (not sure how it's computed), FullBath, Fireplaces
Time related : YearBuilt, YearRemodAdd
End of explanation
"""
# create new training set by previous feature selection
X_data = pd.concat([df_train['GrLivArea'],df_train['GarageArea'],df_train['TotalBsmtSF'],df_train['OverallQual'],df_train['FullBath'],df_train['Fireplaces'],df_train['YearBuilt'],df_train['YearRemodAdd']],axis=1)
y_data = df_train['SalePrice']
#split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_data,y_data,test_size=0.2,random_state=0)
# train the models
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit (X_train,y_train)
print("Root Mean squared error: %.8f"
% rmse((reg.predict(X_test)),y_test))
from sklearn import linear_model
reg = linear_model.RidgeCV(alphas=[0.1,0.25,0.3,0.33,0.375,0.5,1.0,5.0,10.0])
reg.fit (X_train,y_train)
print (reg.alpha_)
print("Root Mean squared error: %.8f"
% rmse((reg.predict(X_test)),y_test))
from sklearn import linear_model
reg = linear_model.LassoCV(alphas=[0.0001,0.001,0.0125,0.025,0.05])
reg.fit (X_train,y_train)
print (reg.alpha_)
print("Root Mean squared error: %.8f"
% rmse((reg.predict(X_test)),y_test))
"""
Explanation: Pick the data we need
SalePrice
Area related : GrLivArea, GarageArea, TotalBsmtSF
Number/Rank related: OverallQual (not sure how it's computed), FullBath, Fireplaces
Time related : YearBuilt, YearRemodAdd
End of explanation
"""
import xgboost as xgb
regr = xgb.XGBRegressor(
colsample_bytree=0.8,
gamma=0.0,
learning_rate=0.001,
max_depth=4,
min_child_weight=1.5,
n_estimators=10000,
reg_alpha=0.9,
reg_lambda=0.6,
subsample=0.8,
seed=42,
silent=False)
regr.fit(X_train,y_train)
y_pred = regr.predict(X_test)
print("XGBoost score on training set: ", rmse(y_test, y_pred))
#create prediction csv
df_test = pd.read_csv('test.csv')
# transform the test data the same way as the training data
df_test['GrLivArea'] = np.log(df_test['GrLivArea'])
X_data = pd.concat([df_test['GrLivArea'],df_test['GarageArea'],df_test['TotalBsmtSF'],df_test['OverallQual'],df_test['FullBath'],df_test['Fireplaces'],df_test['YearBuilt'],df_test['YearRemodAdd']],axis=1)
y_pred_data = regr.predict(X_data)
y_pred_data = np.exp(y_pred_data)
pd.DataFrame({'Id': df_test['Id'], 'SalePrice':y_pred_data}).to_csv('result.csv', index =False)
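Because the model was trained on log(SalePrice), the cell above maps predictions back with np.exp; a tiny round-trip check of that idea:

```python
import numpy as np

prices = np.array([100000.0, 250000.0, 400000.0])
log_prices = np.log(prices)     # the scale the model is trained on
recovered = np.exp(log_prices)  # the scale we write to the submission
print(np.allclose(recovered, prices))  # True
```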
"""
Explanation: How about using Xgboost?
End of explanation
"""
|
brian-rose/climlab | docs/source/courseware/Reset-time.ipynb | mit | import numpy as np
import climlab
climlab.__version__
"""
Explanation: Resetting time to zero after cloning a climlab process
Brian Rose, 2/15/2022
Here are some notes on how to reset a model's internal clock to zero after cloning a process with climlab.process_like()
These notes may become out of date after the next revision of climlab, because the calendar object that climlab uses will likely get replaced with something more robust in the future.
For posterity, this is the version of climlab we're using in this example:
End of explanation
"""
mystate = climlab.column_state()
m1 = climlab.radiation.RRTMG(state=mystate, timestep=climlab.utils.constants.seconds_per_day)
m1.time
"""
Explanation: The climlab time dictionary
Every process object contains a time attribute, which is just a dictionary with various counters and information about timesteps.
Here we create a single-column radiation model m1 with a timestep of 1 day, and inspect its time dictionary:
End of explanation
"""
m1.step_forward()
m1.time
"""
Explanation: If we take a single time step forward, some elements in this dictionary get updated:
End of explanation
"""
m2 = climlab.process_like(m1)
m2.time
"""
Explanation: In particular, steps has increased by 1, and days_elapsed is now 1.0 (using a timestep of 1 day).
Let's now clone this model. Both the state and the calendar are cloned, so our new model has the same date as m1:
End of explanation
"""
mystate2 = climlab.column_state()
m3 = climlab.radiation.RRTMG(state=mystate2, timestep=climlab.utils.constants.seconds_per_day)
zero_time = m3.time.copy()
m3.step_forward()
m4 = climlab.process_like(m3)
# Now both m3 and m4 have the same state:
assert m3.Ts == m4.Ts
assert np.all(m3.Tatm == m4.Tatm)
# And they also have the same calendar:
print('After cloning, m3 has taken {} steps, and m4 has taken {} steps.'.format(m3.time['steps'], m4.time['steps']))
# But we can reset the calendar for m4 as if it had never taken a step forward:
m4.time = zero_time
print('After replacing the time dict, m3 has taken {} steps, and m4 has taken {} steps.'.format(m3.time['steps'], m4.time['steps']))
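The zero_time = m3.time.copy() trick above works because .copy() makes an independent dict, while plain assignment just aliases the same object; a quick pure-Python illustration with a toy dict (not climlab's real time dict):

```python
time_dict = {'steps': 0, 'days_elapsed': 0.0}

alias = time_dict            # same object: changes show through
snapshot = time_dict.copy()  # independent shallow copy

time_dict['steps'] += 1

print(alias['steps'])     # 1 -- the alias sees the update
print(snapshot['steps'])  # 0 -- the copy is frozen at assignment time
```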
"""
Explanation: What if we want to clone the state, but reset the calendar to zero?
One simple hack is just to keep a copy of the initial time dictionary prior to taking any steps forward.
We can do this with the dict's built-in copy() method.
End of explanation
"""
for model in [m3, m4]:
model.step_forward()
assert m3.Ts == m4.Ts
assert np.all(m3.Tatm == m4.Tatm)
print('After one more step, m3 has taken {} steps, and m4 has taken {} steps.'.format(m3.time['steps'], m4.time['steps']))
"""
Explanation: Since we haven't changed any model parameters, they should both evolve exactly the same way on their next timestep so the states remain the same:
End of explanation
"""
m4.subprocess['SW'].S0 += 10.
for model in [m3, m4]:
model.step_forward()
print('One step after changing S0 in m4, m3 has taken {} steps, and m4 has taken {} steps.'.format(m3.time['steps'], m4.time['steps']))
print('')
print('Now checking to see if the states are still the same:')
assert m3.Ts == m4.Ts
"""
Explanation: But if I now change a parameter in m4, their states will begin to differ:
End of explanation
"""
|
fluffy-hamster/A-Beginners-Guide-to-Python | A Beginners Guide to Python/26. Design Decisions, How to Build Chess Game.ipynb | mit | # One use, "throw away" code:
def one_to_one_hundred():
for i in range(1, 101):
print (i)
# Multi use, 'generalised' code:
def n_to_m(n, m):
for i in range(n, m+1):
print(i)
"""
Explanation: Features of Good Design
Hi guys, this lecture is a bit different: today we are mostly glossing over Python itself and instead we are going to talk about software development.
At this moment in time you are writing tiny programs. This is all good experience, but you will quickly learn that the larger a program gets, the harder it is to get working. For example, writing two hundred one-page letters to your grandma over the course of a year is much easier than trying to write a two-hundred-page novel. The main reason is that the letters are mostly independent bits of writing, whereas a novel requires hundreds of pages to 'flow' together.
I'd argue software development is a bit like that too; writing small programs is very different from writing code for large, complex systems. For small problems, if things go wrong you can just start 'from scratch', whereas for large systems starting from scratch is often not possible (and/or would take years), and so any mistakes made in the design of the system are likely 'warts' you are going to have to live with. As codebases grow, concepts like readability become increasingly important.
Today's lecture is intended as an introduction to some of the skills you will likely need once you try to develop a substantial program. The TLDR version: figuring out a good design 'off the bat' can potentially save you hours upon hours of work later on down the line.
Today I will be showing you an example of how we might code up a game of chess. But crucially I’m going to skip over a lot of the ‘low-level’ stuff and instead try to provide a ‘high-level’ sketch for what such a program may look like. If you have the time it may be worthwhile to quickly skim back over the intuition for OOP lecture.
There is a saying in England:
“[you] can’t see the forest for the trees”*.
It means that if you examine something closely (i.e. each tree) you might miss the bigger picture (i.e. that all the trees come together to make a forest). Most of this lecture series has been talking about trees, but today we are talking about the forest.
What is good design?
So before we start looking at a chess game lets say a few words about design; in particular, what counts as good program design?
Simplicity
Simple is better than complex.
Complex is better than complicated.
[...]
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
As always, Tim Peters' 'Zen of Python' has a thing or two to say about design; the lines highlighted here place an emphasis on simplicity and clarity of expression. These concepts are core to the entire language, and we would do well to remember that: if things start to get complicated, maybe it would be prudent to take a step back and reconsider our approach.
Performance
At first glance we might think performance is a 'low-level' consideration. You write something and then find ways to save a byte of memory here or there. But considering performance merely as ‘fine-tuning’ would be a crucial mistake.
Those of you that read my 'joy of fast cars' lecture would have seen a few examples of such low-level 'fine tuning', in one example I showed how we could optimize a call to range such that we could search for prime numbers faster. And for what its worth this tinkering did in fact pay off, significantly so, in fact.
However, that lecture also contained a ‘high-level’ idea as well; our tinkering with the range function was, although faster, still blindly searching for a needle in a haystack. We then stepped back and wondered if there was a better way to do it and indeed there was; generating primes is better than blindly searching for them.
The lesson here is that good design choices, even if executed poorly, can easily out-perform bad ideas implemented well. If you want to know more about this, please check do some reading on ‘algorithmic complexity’ and Big(O) notation (we wont cover this stuff in this course).
In short, good design/algorithm choices tend to be very performant once we scale up the problem to huge data sets, and this is why it’s worth taking the time to come up with a good idea.
Readable
Throughout this lectures series I have highlighted readability numerous times so I'm going to keep this section super short:
Readability Counts!
Modular
Modularity is one way to deal with complexity. In short, this is where we try to chop everything up into neat little boxes, where each little box has just one job to do.
An alternative (and terrible) design would be to have one big box that does everything. The problem with such an approach is that you end up with a complex spider web, and you fear changing one small part because you don't know how that small change may affect the entire system.
Generalisable / Reusable
Writing good code once and then reusing it is often better than starting from scratch each time. The way to make code reusable is to generalise it to solve variety of problems. This concept is probably best understood by example.
Suppose we were making a function that counted 1-to-100. What can we use this for other than its intended purpose?
Now suppose we write a function that counts from n-to-m. This code works for the current problem but because its design is generalised we may be able to reuse this code at a later date, in this project or the next.
If code is reusable, then that is often a good sign that it is modular as well.
End of explanation
"""
print("RNBQKBNR\nPPPPPPPP\n-x-x-x-x\nx-x-x-x-\n-x-x-x-x\npppppppp\nrnbkqbnr") # 'x' and '-' represent black and white squares.
"""
Explanation: Beauty
Beautiful is better than ugly.
Tim Peters, ‘Zen of Python’
Beauty!? At first glance making beauty a consideration may sound like a strange or 'out-of-place' concept. But if you take a broad view of human achievement you'll find that we mere mortals make things, and then make those things beautiful. Just think of something like a sword: an object made with the most brutal of applications in mind, and yet we still decided that even this was an object worthy of being made beautiful.
Another discipline where discussions of aesthetics may initially seem out of place is mathematics, and yet there is no shortage of mathematicians throughout the ages discussing the aesthetic qualities of the field. Moreover, there is some experimental evidence to suggest mathematicians genuinely see beauty in formulas in the same way the rest of us see beauty in music or art. For some, beauty truly is the joy of mathematics.
"Why are numbers beautiful? It's like asking why is Beethoven's Ninth Symphony beautiful. If you don't see why, someone can't tell you." -Paul Erdos
I think it would be wrong to dismiss beauty as a trivial aspect of mathematics or programming for that matter. There truly is a joy in experiencing good code, you just need to learn to appreciate it, I guess.
Building a Chess Program...
Okay so the above discussion highlighted a few aspirations and considerations for our chess project. Let’s start by making a list of all the things we need to do:
Represent the board (8x8 grid, alternating black/white squares)
Define piece movement, capture rules.
Define all other rules (e.g. promotion, castling, checkmate, 3-fold repetition, etc)
Peripherals (e.g. clocks, GUI's, multiplayer features like match-making, etc)
That's a lot of stuff to do right there; today's lecture will mostly deal with points one and two.
Building the board
How should we represent a board in Python? This question mostly just boils down to what data-type we should use. Right now, I have two candidates in mind: strings and lists.
We could of course jump 'straight in', pick one of the data types at random and see what happens but, as alluded to in the above discussions, such a method is both silly and wasteful. A better use of time would be to carefully consider our options BEFORE we write even a single line of code.
The Board as a string
Okay, so let’s consider using a string for the board. What might that look like?
Well, the letters "QKPN" could represent the pieces (lower-case for white), and we could use the new-line character ("\n") to separate the rows. Something like this:
End of explanation
"""
print("♖♘♗♔♕♗♘♖\n♙♙♙♙♙♙♙♙\n□ ■ □ ■ □ ■ □ ■\n■ □ ■ □ ■ □ ■ □\n□ ■ □ ■ □ ■ □ ■\n♟♟♟♟♟♟♟♟\n♜♞♝♛♚♝♞♜")
"""
Explanation: Actually, we can do even better than this, Python strings support unicode and there are unicode characters for chessmen. So now our string implementation even comes with some basic graphics:
End of explanation
"""
def make_move(move):
"""Takes a string a returns a new string with the specified move"""
# Code here
pass
# Our new function would have to take the original string and return the new string (both below)...
original_string = "RNBQKBNR\nPPPPPPPP\n-x-x-x-x\nx-x-x-x-\n-x-x-x-x\npppppppp\nrnbkqbnr"
new_string = "RNBQKBNR\nPPPPPPPP\n-x-x-x-x\nx-x-x-x-\n-x-x-n-x\npppppppp\nrnbkqb-r"
print(new_string)
"""
Explanation: Okay so the board as a string seems possible, but are there any drawbacks to an implementation like this? Well, I can think of two. Firstly, notice that because these unicode characters are bigger than normal letters we are going to need a new way to denote black and white squares. You can see from above I tried to use a combination of spaces and '□■' characters but even then the formatting is a bit off. In short, it looks like trying to get the board to look nice is going to be both tedious and fiddly.
What is the second problem?
You remember me mentioning that strings are an immutable data-type, which means that every time we want to change the board we have to make a new one. Not only would this be computationally inefficient, it may also be a bit tricky to actually change the board.
For example, let's see what sort of work we would have to do to make the move 1.Nf3:
End of explanation
"""
class Knight (object):
def __init__ (self, current_square, colour):
self.colour = colour
self.current_square = current_square
@staticmethod
def is_legal_move(square1, square2):
"""
Checks if moving from square1 to square2 is a legal move for the Knight.
Returns True/False
"""
# Code goes here... i.e we calculate all 'game legal' squares a Knight could reach from position X.
pass
def make_move(self, new_square):
# since we don't want to make illegal moves, we check the intended move is legal first.
if Knight.is_legal_move(self.current_square, new_square):
self.current_square = new_square # <= moves the knight!
else:
return "Invalid Move!"
# Other methods would go here.
# Lets make a White Knight. Let's call him 'Dave'.
Dave = Knight((0,0), "White")
# Once the knight is made, we can move it using the move_to method:
Dave.make_move((3,3))
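For the Knight, the is_legal_move stub could be filled in by checking the 'L'-shaped offsets; below is a minimal standalone sketch that deliberately ignores board edges and occupancy (an assumption — a real game needs both):

```python
def knight_move_is_legal(square1, square2):
    """A knight moves in an 'L': two squares one way, one square the other."""
    dx = abs(square1[0] - square2[0])
    dy = abs(square1[1] - square2[1])
    return (dx, dy) in {(1, 2), (2, 1)}

print(knight_move_is_legal((0, 0), (1, 2)))  # True
print(knight_move_is_legal((0, 0), (3, 3)))  # False
```

Amusingly, by this rule Dave's make_move((3, 3)) from (0, 0) above would be rejected as an illegal knight move.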
"""
Explanation: Now, to be clear, it is certainly possible to make the "make_move" function work, but it does seem to have several small moving parts and therefore probably lots of interesting ways to go wrong. And then let's think about the more complex functions; if movement seems a bit tricky, how easy do we think defining checkmate is going to be?
Basically, using strings seems doable but complicated. And as Tim Peters says, simple and complex are both better than complicated. Alright, on that note, let's see if lists seem more straightforward.
The Board as a List
[[00, 01, 02],
[10, 11, 12],
[20, 21, 22]]
The above is a nested list but we have put each of the sublists on a new line to make it easier to visualise how such a structure can work like a game board. The numbers represent the (x, y) coordinate of that 'square'. And remember that lists can contain strings as well, so this option doesn't stop us from using those pretty graphics we saw earlier.
Compared to the string implementation, the 1.Nf3 move should be somewhat straightforward:
current_position_knight = (a, b) # where (a, b) are coordinates.
next_position_knight = (a2, b2)
Board[a][b] = {empty}
Board[a2][b2] = {Knight}
At first glance, this seems considerably easier than messing around mutating strings.
There is also another possible advantage to lists as well, and that is they can store a variety of data-types. I haven't spoken about classes in this lecture series and I'm not going into detail (classes are not suitable for a beginner class, in my opinion). But I'll very briefly introduce you to the concept and how we could use it here.
In short, the brainwave is that if we use lists we could literally build Knight objects, King objects, etc. and they can be placed inside a list. We can't do that with strings.
Defining Chessmen
Basically Python makes it possible to create your own objects with their own methods. Using classes it is literally possible to make a knight and put him onto a game board. Below I've provided a very rough sketch of what such a class could look like.
I would like to stress that this code is intended as a 'high-level' sketch and by that I mean lots of the small details are missing. Notice that the code for a Queen, King, Pawn, etc could all be written in the same way.
End of explanation
"""
# Note the following code doesn't work, it is for demonstration purposes only!
def all_bishop_moves(position, board):
""" When given a starting square, returns all squares the bishop can move to"""
pass
def all_rook_moves(position, board):
""" When given a starting square, returns all squares the rook can move to"""
pass
def all_queen_moves(position, board):
"""When given a starting square, returns all squares the queen can move to"""
pass
"""
Explanation: Defining Piece movement
Alright, onto the next problem: "How are we going to make the pieces move?" And once again, a smart choice here will make things so much easier than they otherwise could be.
One simple approach is to write a function for each piece, like so:
End of explanation
"""
# Note the following code doesn't work, it is for demonstration purposes only!
def diagonal_movement(position, direction, distance=1):
    """
    Returns all diagonal squares in all valid directions N distance from the origin
    """
    # code here
    pass

def othagonal_movement(position, direction, distance=1):
    """
    # doctests showing example usage:
    >>> othagonal_movement((2, 2), direction=["left"], distance=2)
    [(2, 2), (2, 1), (2, 0)]
    >>> othagonal_movement((2, 2), direction=["right"], distance=4)
    [(2, 2), (2, 3), (2, 4), (2, 5), (2, 6)]
    >>> othagonal_movement((2, 2), direction=["right", "left"], distance=1)
    [(2, 1), (2, 2), (2, 3)]
    """
    # code here
    pass
"""
Explanation: At first glance this code seems pretty good, but there are a few drawbacks. Firstly it looks like we are going to be repeating ourselves a lot; queen movement for example is just a copy & paste of the rook + bishop. The king function is likely a copy & paste of queen but where we change the distance to 1.
And by the way guys, repeating oneself is NOT quite the same as reusing code!
What we would really like to do here is generalise the problem as much as we can. And a good technique for doing that is to think of the next project we might want to implement. For example, let's suppose after building my chess game I want to support Capablanca Chess?
<img src="http://hgm.nubati.net/rules/Capablanca.png" style="width:200px;height:150px;" ALIGN="centre">
Capablanca chess is played on a 10x8 board and it has two new pieces; the ‘archbishop’ moves like a bishop combined with a knight and a ‘chancellor’ moves like a rook and a knight.
So, what should we do here? Well, I think the first thing we should do is define movement of pieces WITHOUT referencing a board. If we don't reference a board that means we should be able to handle boards of many different sizes. Secondly, if we define pieces in terms of combining general patterns (e.g. Queen = diagonal + orthogonal movement) then defining new pieces will probably be less than five lines of code in many cases.
Let’s examine what that might look like:
End of explanation
"""
def queen_movement(position, limit):
o = othagonal_movement(position, direction="all", distance=limit)
d = diagonal_movement(position, direction="all", distance=limit)
return o + d
def bishop_movement(position, limit):
return diagonal_movement(position, direction="all", distance=limit)
"""
Explanation: Notice here that we have defined movement without reference to a board; our code here simply takes an (x, y) coordinate in space and will keep returning valid squares until it reaches the limit set by distance. The docstring in othagonal_movement illustrates the idea.
With this generalisation, we should be able to handle different boards AND we can define pieces with just a few lines of code like so:
End of explanation
"""
def king_movement(position, limit):
return queen_movement(position, limit=1)
"""
Explanation: In the case of an 8x8 board 'limit' would be set to 8. If a piece is only allowed to move 1 square forward (regardless of board shape/size) we can easily model that by setting the limit to 1.
Notice that with this design some pieces can be defined very simply. For example, we can define a king in the following way:
End of explanation
"""
def sum_column(table, column):
    num_rows = len(table)
    position = (0, column)
    # distance counts steps beyond the origin, which is included in the result
    points = othagonal_movement(position, direction=["down"], distance=num_rows - 1)
    total = 0
    for x, y in points:
        total += table[x][y]
    return total
"""
Explanation: But let's take a step back for a moment: what are we actually doing here?
When naming variables, a good practice is to state what the code does rather than what it is used for. The reason this can be a good idea is that renaming a function/variable to something general can make code more reusable and modular. The process of thinking about a name may help you spot ideas and patterns that you may have otherwise missed. So, let me ask the question again: what are we actually doing here?
Suppose I give you a table of data, and I want you to sum up all the columns. For example:
[0, 1, 2]
[5, 6, 7]
returns [5, 7, 9]
How can we solve this problem? Well, notice that our othagonal_movement function can be used to solve this problem:
End of explanation
"""
|
Lattecom/HYStudy | scripts/[HYStudy 15th] Matplotlib 2.ipynb | mit | # make point with cumulative sum
points = np.random.randn(50).cumsum()
points
"""
Explanation: Magic Command
%matplotlib inline: show() 생략
%matplotlib qt: 외부창 출력
End of explanation
"""
# plt.plot(x, y): x, y = point(x, y) on coordinate
# put y only (x defaults to 0..len(y)-1)
plt.plot(points)
plt.show()
# put x and y points
plt.plot(range(0, 250, 5), points)
plt.show()
"""
Explanation: Line plot
End of explanation
"""
# set color, marker, line
plt.plot(points, 'co:')
plt.show()
# style setting
plt.plot(points, 'co-', lw=3, ms=5, mfc='b') # lw=linewidth, ms=marker size, mfc=marker face color
plt.xlim(-10, 60) # set x axis limit
plt.ylim(-5, 5) # set y axis limit
plt.show()
# style setting
plt.plot(points, 'co-', lw=3, ms=5, mfc='b')
plt.xlim(-10, 60) # set x axis limit
plt.ylim(-5, 5) # set y axis limit
plt.xticks([0, 25, 50]) # set x axis ticks
plt.yticks([-7, -3, 1], [r'$\theta$', r'2$\theta$', r'3$\theta$']) # LaTeX input available
plt.grid(False) # grid off
plt.show()
# draw multiple lines
## plt.plot(x1, y1, xy1_style, x2, y2, xy2_style, x3, y3, xy3_style)
plt.plot(points, points, 'bo',
points, 2*points, 'cs-',
points, 0.5*points, 'r.', lw=0.5, ms=8)
plt.show()
# draw multiple lines -2
plt.plot(points, 'co-', lw=3, ms=5, mfc='b')
plt.plot(points*0.5)
plt.show()
"""
Explanation: Style setting
{color}{marker}{line}
color: http://matplotlib.org/examples/color/named_colors.html
marker: http://matplotlib.org/api/markers_api.html?highlight=marker#module-matplotlib.markers
line: http://matplotlib.org/api/lines_api.html?highlight=line#matplotlib.lines.Line2D.set_linestyle
style(other attributes): http://matplotlib.org/1.5.1/api/lines_api.html#matplotlib.lines.Line2D
End of explanation
"""
# legend, title
plt.rc('font', family='nanumgothic') # set font family, use Korean
plt.plot(points, label='random points') # set plot 1 label
plt.plot(0.5 * points, label='임의값') # set plot 2 label (Korean: 'random value')
plt.legend()
plt.xlabel('random x') # set x label
plt.ylabel('random y') # set y label
plt.title('random plot') # set the title
plt.show()
"""
Explanation: Legend, Title
plt.legend(loc=x): x = legend location
set legend location: https://matplotlib.org/api/legend_api.html
plt.xlabel("label name"): set x label as "label name"
plt.ylabel("label name"): set y label as "label name"
plt.title("plot title"): set plot title as "plot title"
End of explanation
"""
plt.plot(points)
plt.annotate(# text, arrow point(x, y), xy coordinate
r'(text)', xy=(40, -4), xycoords='data',
# text location from text coordinate, text coordinate
xytext=(-50, 50), textcoords='offset points',
# font, arrow shape
fontsize=20, arrowprops=dict(arrowstyle="->", linewidth=3, color="b"))
plt.show()
"""
Explanation: Annotation
annotation attributes: https://matplotlib.org/users/annotations_intro.html
more details: https://matplotlib.org/users/annotations_guide.html#plotting-guide-annotation
End of explanation
"""
plt.figure(figsize=(20, 3))
plt.plot(points)
plt.show()
"""
Explanation: Figure size
plt.figure(figsize=(x, y)): set the figure size as x, y
End of explanation
"""
ax1 = plt.subplot(2, 1, 1)
plt.plot(points)
ax2 = plt.subplot(2, 1, 2)
plt.plot(np.random.randn(50))
plt.show()
"""
Explanation: Axes, Subplots
Doc: http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes
plt.subplot(X, Y, Z): create an X-by-Y grid of subplots; Z is the 1-based position within that grid, counting across rows
End of explanation
"""
x = [3, 2, 1]
y = [1, 2, 3]
xlabel = ['한개', '두개', '세개']
# plt.bar: vertical / plt.barh: horizontal
plt.bar(x, y, align='center') # align: center (default) or edge
plt.xticks(x, xlabel)
plt.show()
"""
Explanation: Bar chart
Doc for vertical bar: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.bar
Doc for horizontal bar: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.barh
End of explanation
"""
x = np.random.randint(0, 10, 10)
print(x)
arrays, bins, patches = plt.hist(x, bins=6)
plt.show()
# value counts for each bin
print(arrays)
# the range of each bin
print(bins)
"""
Explanation: Histogram
Doc: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.hist
End of explanation
"""
plt.pie([30, 50, 10], # size
labels = ['피자', '햄버거', '감자튀김'], # label
colors = ['pink', 'salmon', 'tomato'], # colors
explode = (0.01, 0.01, 0.2), # explode
autopct = '%.2f%%', # set the ratio label format
shadow = True, # pie chart shadow
startangle = 0) # rotate the chart
plt.axis('equal') # equal aspect ratio keeps the pie circular
plt.title('품목별 매출비중')
plt.show()
"""
Explanation: Pie chart
Doc: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.pie
Demo: https://matplotlib.org/1.5.3/examples/pylab_examples/pie_demo2.html
End of explanation
"""
|
kingb12/languagemodelRNN | old_comparisons/testcompare.ipynb | mit | report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_bow_200_512_04drb/encdec_noing6_bow_200_512_04drb.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_bow_200_512_04drb/encdec_noing6_bow_200_512_04drb_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
"""
Explanation: Comparing Encoder-Decoders Analysis
Model Architecture
End of explanation
"""
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width, [d[0] for d in data]) # center the group labels
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
"""
Explanation: Perplexity on Each Dataset
End of explanation
"""
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: Loss vs. Epoch
End of explanation
"""
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
"""
Explanation: Perplexity vs. Epoch
End of explanation
"""
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data = [['<u><b>' + enc_input + '</b></u>', '']]
data.append(['<b>Generated</b>', sample['generated']])
data.append(['<b>True</b>',gold])
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
data.append(['<b>Closest BLEU Match</b>', cbm])
data.append(['<b>Closest BLEU Score</b>', str(best_bleu['best_score'])])
display_table(data)
for rname, report in reports:
display(HTML('<h3>' + rname + ' (train)</h3>'))
for i, sample in enumerate(report['train_samples']):
display_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for rname, report in reports:
display(HTML('<h3>' + rname + ' (valid)</h3>'))
for i, sample in enumerate(report['valid_samples']):
display_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for rname, report in reports:
display(HTML('<h3>' + rname + ' (test)</h3>'))
for i, sample in enumerate(report['test_samples']):
display_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
"""
Explanation: Generations
End of explanation
"""
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
"""
Explanation: BLEU Analysis
End of explanation
"""
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
"""
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth; high scores can expose hyper-common generations
End of explanation
"""
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
"""
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores
End of explanation
"""
|
Vvkmnn/books | AutomateTheBoringStuffWithPython/lesson22.ipynb | gpl-3.0 | #! usr/bin/env bash
# This is a shell script
#python3 runthisscript.py
#echo "I'm running a python script"
"""
Explanation: Lesson 22:
Launching Python in Other Programs
The first line of any Pthon Script should be the Shebang Line.
OSX:
#! /usr/bin/env python3
Linux:
#! /usr/bin/python3
Windows:
#! python3
This lets you run scripts in Terminal/CMD Prompt.
python3 /path/to/script.py
Batch files, or shell scripts, can run multiple separate programs/scripts.
OSX/Linux:
.sh
Windows:
.bat
A shell script includes references to multiple programs:
End of explanation
"""
import sys
print('Hello world')
print(sys.argv)
"""
Explanation: You can then use CMD/Terminal to run this script
$ sh /path/to/shellscript.sh
You can skip adding the absolute path to scripts by adding folders to the PATH environment variable.
The operating system searches these folders for executables, so anything placed in them can be run from any location.
Temporarily edit the PATH in OSX
Show PATH in Terminal
$ echo $PATH
Temporarily add to $PATH for that terminal session
$ PATH=/usr/bin:/bin:/usr/sbin:/newpathtofolder/
Permanently edit the PATH in OSX
Move to home folder
$ cd
Edit the .bash_profile (use whatever editor you want instead of 'nano')
$ nano .bash_profile
Add the new value to the PATH and include existing folders (with :$PATH)
$ export PATH="/usr/local/mysql/bin:$PATH"
Should now show the new PATH with the added folder.
$ echo $PATH
Scripts can also take command line arguments:
sh script.sh arg1 arg2
These are called system arguments, and can be accessed in a Python program via the sys.argv command.
End of explanation
"""
#! /usr/bin/env bash
# This is a shell script
#python3 runthisscript.py "$@"
#echo "I'm running a python script with system arguments"
"""
Explanation: This is useful for allowing your program to take additional parameters when incorporated into batch files.
Typically, you will need to add %* (in a Windows batch file) or "$@" (in a shell script) to forward those arguments to the Python script:
End of explanation
"""
|
dalonlobo/GL-Mini-Projects | TweetAnalysis/Final/Q7/Dalon_4_RTD_MiniPro_Tweepy_Q7.ipynb | mit | import logging # python logging module
# basic format for logging
logFormat = "%(asctime)s - [%(levelname)s] (%(funcName)s:%(lineno)d) %(message)s"
# logs will be stored in tweepyretweet.log
logging.basicConfig(filename='tweepyretweet.log', level=logging.INFO,
format=logFormat, datefmt="%Y-%m-%d %H:%M:%S")
"""
Explanation: Tweepy streamer
## Most popular tweets:
- Here, the most popular tweet means the tweet that has been retweeted the greatest number of times.
- Get the top 100 most retweeted tweets from the last hour related to "iphone".
Since this is a streaming application, we will use the Python logging module to log. Further read.
End of explanation
"""
import tweepy # importing all the modules required
import socket # will be used to create sockets
import json # manipulate json
from httplib import IncompleteRead
# Keep these tokens secret, as anyone can have full access to your
# twitter account, using these tokens
consumerKey = "OnqLmzt5DHnpqmjmVg4eOb1W2"
consumerSecret = "nHABnTm1KLVygcaIgYRIaP0buHbWPbszcNCymrg6FSn0BZvsyv"
accessToken = "152227311-RT2GznX5DhcYTcnZpLIuIJrYnBqVAqN8v4zm1TNp"
accessTokenSecret = "8s1qNom8n5igNlEKOLWL9r4uM1856hgK77OWE9ffYysFN"
"""
Explanation: Authentication and Authorisation
Create an app in twitter here. Copy the necessary keys and access tokens, which will be used here in our code.
The authorization is done using OAuth, an open protocol that allows secure authorization in a simple and standard way from web, mobile and desktop applications. Further read.
We will use Tweepy, a Python module. Tweepy is open-sourced, hosted on GitHub, and enables Python to communicate with the Twitter platform and use its API. Tweepy supports OAuth authentication, which is handled by the tweepy.AuthHandler class.
End of explanation
"""
# Performing the authentication and authorization, post this step
# we will have full access to twitter api's
def connectToTwitter():
"""Connect to twitter."""
try:
auth = tweepy.OAuthHandler(consumerKey, consumerSecret)
auth.set_access_token(accessToken, accessTokenSecret)
api = tweepy.API(auth)
logging.info("Successfully logged in to twitter.")
return api, auth
except Exception as e:
logging.info("Something went wrong in oauth, please check your tokens.")
logging.error(e)
"""
Explanation: After this step, we will have full access to the Twitter APIs.
End of explanation
"""
# Tweet listner class which subclasses from tweepy.StreamListener
class TweetListner(tweepy.StreamListener):
"""Twitter stream listner"""
def __init__(self, csocket):
self.clientSocket = csocket
def dataProcessing(self, data):
"""Process the data, before sending to spark streaming
"""
sendData = {} # data that is sent to spark streamer
text = data.get("text", "undefined").encode('utf-8')
print(data["retweet_count"])
if int(data.get("retweet_count", 0)):
print(data.get("retweet_count", 0))
retweetcount = data.get("retweet_count", 0)
sendData["text"] = text
sendData["retweetcount"] = retweetcount
#data_string = "{}:{}".format(name, followersCount)
self.clientSocket.send(json.dumps(sendData) + u"\n") # append new line character, so that spark recognizes it
logging.debug(json.dumps(sendData))
def on_data(self, raw_data):
""" Called when raw data is received from connection.
return False to stop stream and close connection.
"""
try:
data = json.loads(raw_data)
self.dataProcessing(data)
#self.clientSocket.send(json.dumps(sendData) + u"\n") # Because the connection was breaking
return True
except Exception as e:
logging.error("An unhandled exception has occured, check your data processing")
logging.error(e)
raise e
def on_error(self, status_code):
"""Called when a non-200 status code is returned"""
logging.error("A non-200 status code is returned: {}".format(status_code))
return True
# Creating a proxy socket
def createProxySocket(host, port):
""" Returns a socket which can be used to connect
to spark.
"""
try:
s = socket.socket() # initialize socket instance
s.bind((host, port)) # bind to the given host and port
s.listen(5) # Enable a server to accept connections.
logging.info("Listening on the port {}".format(port))
cSocket, address = s.accept() # waiting for a connection
logging.info("Received Request from: {}".format(address))
return cSocket
except socket.error as e:
if e.errno == socket.errno.EADDRINUSE: # Address in use
logging.error("The given host:port {}:{} is already in use"\
.format(host, port))
logging.info("Trying on port: {}".format(port + 1))
return createProxySocket(host, port + 1)
"""
Explanation: Streaming with tweepy
The Twitter streaming API is used to download Twitter messages in real time. We use the streaming API instead of the REST API because the REST API pulls data from Twitter, whereas the streaming API pushes messages to a persistent session. This allows the streaming API to download more data in real time than could be done using the REST API.
In Tweepy, an instance of tweepy.Stream establishes a streaming session and routes messages to StreamListener instance. The on_data method of a stream listener receives all messages and calls functions according to the message type.
But the on_data method is only a stub, so we need to implement the functionality by subclassing StreamListener.
Using the streaming api has three steps.
Create a class inheriting from StreamListener
Using that class create a Stream object
Connect to the Twitter API using the Stream.
End of explanation
"""
if __name__ == "__main__":
try:
api, auth = connectToTwitter() # connecting to twitter
# Global information is available by using 1 as the WOEID
# woeid = getWOEIDForTrendsAvailable(api, "Worldwide") # get the woeid of the worldwide
host = "localhost"
port = 8700
cSocket = createProxySocket(host, port) # Creating a socket
while True:
try:
# Connect/reconnect the stream
tweetStream = tweepy.Stream(auth, TweetListner(cSocket)) # Stream the twitter data
# DON'T run this approach async or you'll just create a ton of streams!
tweetStream.filter(track=["iphone", "iPhone", "iphoneX", "iphonex"]) # Filter on iphone-related keywords
except IncompleteRead:
# Oh well, reconnect and keep trucking
continue
except KeyboardInterrupt:
# Or however you want to exit this loop
tweetStream.disconnect()
break
except Exception as e:
logging.error("Unhandled exception has occured")
logging.error(e)
continue
except KeyboardInterrupt: # Keyboard interrupt called
logging.error("KeyboardInterrupt was hit")
except Exception as e:
logging.error("Unhandled exception has occured")
logging.error(e)
"""
Explanation: Drawbacks of twitter streaming API
The major drawback of the Streaming API is that Twitter’s Streaming API provides only a sample of tweets that are occurring. The actual percentage of total tweets users receive with Twitter’s Streaming API varies heavily based on the criteria users request and the current traffic. Studies have estimated that using Twitter’s Streaming API users can expect to receive anywhere from 1% of the tweets to over 40% of tweets in near real-time. The reason that you do not receive all of the tweets from the Twitter Streaming API is simply because Twitter doesn’t have the current infrastructure to support it, and they don’t want to; hence, the Twitter Firehose. Ref
So we will use a hack i.e. get the top trending topics and use that to filter data.
Problem with retweet count
Maybe you're looking in the wrong place for the value.
The Streaming API is in real time. When tweets are created and streamed, their retweet_count is always zero.
The only time you'll see a non-zero retweet_count in the Streaming API is for when you're streamed a tweet that represents a retweet. Those tweets have a child node called "retweeted_status" that contains the original tweet that was retweeted embedded within it. The retweet_count value attached to that node represents, roughly, the number of times that original tweet has been retweeted as of some time near when you were streamed the tweet.
Retweets themselves are currently not retweetable, so should not have a non-zero retweet_count.
Source: here
This is quite normal as it is expected when you are using streaming api endpoint, its because you receive the tweets as they are posted live on twitter platform, by the time you receive the tweet no other user had a chance to retweet it so retweet_count will always be 0. If you want to find out the retweet_count you have to refetch this particular tweet some time later using the rest api then you can see the retweet_count will contain the number of retweets happened till this particular point in time.
Source: here
End of explanation
"""
|
PySCeS/PyscesToolbox | example_notebooks/RateChar.ipynb | bsd-3-clause | mod = pysces.model('lin4_fb.psc')
rc = psctb.RateChar(mod)
"""
Explanation: RateChar
RateChar is a tool for performing generalised supply-demand analysis (GSDA) [5,6]. This entails the generation of the data needed to draw rate characteristic plots for all the variable species of a metabolic model through parameter scans, and the subsequent visualisation of these data in the form of ScanFig objects.
Features
Performs parameter scans for any variable species of a metabolic model
Stores results in a structure similar to Data2D.
Saving of raw parameter scan data, together with metabolic control analysis results to disk.
Saving of RateChar sessions to disk for later use.
Generates rate characteristic plots from parameter scans (using ScanFig).
Can perform parameter scans of any variable species with outputs for relevant response, partial response, elasticity and control coefficients (with data stored as Data2D objects).
Usage and Feature Walkthrough
Workflow
Performing GSDA with RateChar usually requires taking the following steps:
Instantiation of RateChar object (optionally specifying default settings).
Performing a configurable parameter scan of any combination of variable species (or loading previously saved results).
Accessing scan results through RateCharData objects corresponding to the names of the scanned species that can be found as attributes of the instantiated RateChar object.
Plotting results of a particular species using the plot method of the RateCharData object corresponding to that species.
Further analysis using the do_mca_scan method.
Session/Result saving if required.
Further Analysis
.. note:: Parameter scans are performed for a range of concentrations values between two set values. By default the minimum and maximum scan range values are calculated relative to the steady state concentration the species for which a scan is performed respectively using a division and multiplication factor. Minimum and maximum values may also be explicitly specified. Furthermore the number of points for which a scan is performed may also be specified. Details of how to access these options will be discussed below.
Object Instantiation
Like most tools provided in PySCeSToolbox, instantiation of a RateChar object requires a pysces model object (PysMod) as an argument. A RateChar session will typically be initiated as follows (here we will use the included lin4_fb.psc model):
End of explanation
"""
rc = psctb.RateChar(mod,min_concrange_factor=100,
max_concrange_factor=100,
scan_points=255,
auto_load=False)
"""
Explanation: Default parameter scan settings relating to a specific RateChar session can also be specified during instantiation:
End of explanation
"""
mod.species
rc.do_ratechar()
"""
Explanation: min_concrange_factor : The steady state division factor for calculating scan range minimums (default: 100).
max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: 100).
scan_points : The number of concentration sample points that will be taken during parameter scans (default: 256).
auto_load : If True RateChar will try to load saved data from a previous session during instantiation. Saved data is unaffected by the above options and are only subject to the settings specified during the session where they were generated. (default: False).
The settings specified with these optional arguments take effect when the corresponding arguments are not specified during a parameter scan.
Parameter Scan
After object instantiation, parameter scans may be performed for any of the variable species using the do_ratechar method. By default do_ratechar will perform parameter scans for all variable metabolites using the settings specified during instantiation. For saving/loading see Saving/Loading Sessions below.
End of explanation
"""
rc.do_ratechar(fixed=['S1','S3'], scan_min=0.02, max_concrange_factor=110, scan_points=200)
"""
Explanation: Various optional arguments, similar to those used during object instantiation, can be used to override the default settings and customise any parameter scan:
fixed : A string or list of strings specifying the species for which to perform a parameter scan. The string 'all' specifies that all variable species should be scanned. (default: all)
scan_min : The minimum value of the scan range, overrides min_concrange_factor (default: None).
scan_max : The maximum value of the scan range, overrides max_concrange_factor (default: None).
min_concrange_factor : The steady state division factor for calculating scan range minimums (default: None)
max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: None).
scan_points : The number of concentration sample points that will be taken during parameter scans (default: None).
solver : An integer value that specifies which solver to use (0:Hybrd,1:NLEQ,2:FINTSLV). (default: 0).
.. note:: For details on the different solvers, see the PySCeS documentation.
For example in a scenario where we only wanted to perform parameter scans of 200 points for the metabolites S1 and S3 starting at a value of 0.02 and ending at a value 110 times their respective steady-state values the method would be called as follows:
End of explanation
"""
# Each key represents a field through which results can be accessed
sorted(rc.S3.scan_results.keys())
"""
Explanation: Accessing Results
Parameter Scan Results
Parameter scan results for any particular species are saved as an attribute of the RateChar object under the name of that species. These RateCharData objects are similar to Data2D objects with parameter scan results being accessible through a scan_results DotDict:
End of explanation
"""
# Single value results
# scan_min value
rc.S3.scan_results.scan_min
# fixed metabolite name
rc.S3.scan_results.fixed
# 1-dimensional ndarray results (only every 10th value of 200 value arrays)
# scan_range values
rc.S3.scan_results.scan_range[::10]
# J_R3 values for scan_range
rc.S3.scan_results.J_R3[::10]
# total_supply values for scan_range
rc.S3.scan_results.total_supply[::10]
# Note that J_R3 and total_supply are equal in this case, because S3
# only has a single supply reaction
"""
Explanation: .. note:: The DotDict data structure is essentially a dictionary with additional functionality for displaying results in table form (when appropriate) and for accessing data using dot notation in addition the normal dictionary bracket notation.
In the above dictionary-like structure each field can represent different types of data, the most simple of which is a single value, e.g., scan_min and fixed, or a 1-dimensional numpy ndarray which represent input (scan_range) or output (J_R3, J_R4, total_supply):
End of explanation
"""
# Metabolic Control Analysis coefficient line data
# Names of elasticity coefficients related to the 'S3' parameter scan
rc.S3.scan_results.ec_names
# The x, y coordinates for two points that will be used to plot a
# visual representation of ecR3_S3
rc.S3.scan_results.ecR3_S3
# The x,y coordinates for two points that will be used to plot a
# visual representation of ecR4_S3
rc.S3.scan_results.ecR4_S3
# The ecR3_S3 and ecR4_S3 data collected into a single array
# (horizontally stacked).
rc.S3.scan_results.ec_data
"""
Explanation: Finally data needed to draw lines relating to metabolic control analysis coefficients are also included in scan_results. Data is supplied in 3 different forms: Lists names of the coefficients (under ec_names, prc_names, etc.), 2-dimensional arrays with exactly 4 values (representing 2 sets of x,y coordinates) that will be used to plot coefficient lines, and 2-dimensional array that collects coefficient line data for each coefficient type into single arrays (under ec_data, prc_names, etc.).
End of explanation
"""
# Metabolic control analysis coefficient results
rc.S3.mca_results
"""
Explanation: Metabolic Control Analysis Results
In addition to being able to access the data used to draw rate characteristic plots, the user also has access to the values of the metabolic control analysis coefficients at the steady state of any particular species via the mca_results field. This field is a DotDict dictionary-like object (like scan_results); however, as each key maps to exactly one result, the data can be displayed as a table (see Basic Usage):
End of explanation
"""
# Control coefficient ccJR3_R1 value
rc.S3.mca_results.ccJR3_R1
"""
Explanation: Naturally, coefficients can also be accessed individually:
End of explanation
"""
# Rate characteristic plot for 'S3'.
S3_rate_char_plot = rc.S3.plot()
"""
Explanation: Plotting Results
One of the strengths of generalised supply-demand analysis is that it provides an intuitive visual framework for inspecting results through the use of rate characteristic plots. This is therefore the main focus of RateChar. Parameter scan results for any particular species can be visualised as a ScanFig object through the plot method:
End of explanation
"""
# Display plot via `interact` and enable certain lines by clicking category buttons.
# The two method calls below are equivalent to clicking the 'J_R3'
# and 'Partial Response Coefficients' buttons:
# S3_rate_char_plot.toggle_category('J_R3',True)
# S3_rate_char_plot.toggle_category('Partial Response Coefficients',True)
S3_rate_char_plot.interact()
"""
Explanation: Plots generated by RateChar do not have widgets for each individual line; lines are enabled or disabled in batches according to the category they belong to. By default the Fluxes, Demand and Supply categories are enabled when plotting. To display the partial response coefficient lines together with the flux lines for J_R3, for instance, we would click the J_R3 and the Partial Response Coefficients buttons (in addition to those that are enabled by default).
End of explanation
"""
S3_rate_char_plot.toggle_line('prcJR3_S3_R4', False)
S3_rate_char_plot.show()
"""
Explanation: Modifying the status of individual lines is still supported, but has to take place via the toggle_line method. As an example, prcJR3_S3_R4 can be disabled as follows:
End of explanation
"""
# This points to a file under the Pysces directory
save_file = '~/Pysces/rc_doc_example.npz'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
save_file = psctb.utils.misc.unix_to_windows_path(save_file)
else:
save_file = path.expanduser(save_file)
rc.save_session(file_name = save_file)
"""
Explanation: .. note:: For more details on saving see the sections Saving and Default Directories and ScanFig under Basic Usage.
Saving
Saving/Loading Sessions
RateChar sessions can be saved for later use. This is especially useful when working with large data sets that take some time to generate. Data sets can be saved to any arbitrary location by supplying a path:
End of explanation
"""
rc.save_session() # to "~/Pysces/lin4_fb/ratechar/save_data.npz"
"""
Explanation: When no path is supplied the dataset will be saved to the default directory (which should be "~/Pysces/lin4_fb/ratechar/save_data.npz" in this case).
End of explanation
"""
rc.load_session(save_file)
# OR
rc.load_session() # from "~/Pysces/lin4_fb/ratechar/save_data.npz"
"""
Explanation: Similarly results may be loaded using the load_session method, either with or without a specified path:
End of explanation
"""
# This points to a subdirectory under the Pysces directory
save_folder = '~/Pysces/lin4_fb/'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
save_folder = psctb.utils.misc.unix_to_windows_path(save_folder)
else:
save_folder = path.expanduser(save_folder)
rc.save_results(save_folder)
"""
Explanation: Saving Results
Results may also be exported in csv format either to a specified location or to the default directory. Unlike saving of sessions results are spread over multiple files, so here an existing folder must be specified:
End of explanation
"""
# Otherwise results will be saved to the default directory
rc.save_results() # to sub folders in "~/Pysces/lin4_fb/ratechar/"
"""
Explanation: A subdirectory will be created for each metabolite with the files ec_results_N, rc_results_N, prc_results_N, flux_results_N and mca_summary_N (where N is a number starting at "0" which increments after each save operation to prevent overwriting files).
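The incrementing-N naming scheme can be sketched with a small helper (a hypothetical illustration of the idea, not psctb's actual code):

```python
import os


def next_numbered_name(folder, base, ext='.csv'):
    """Return '<base>_<N><ext>' with the smallest N not already used in folder."""
    n = 0
    while os.path.exists(os.path.join(folder, '%s_%d%s' % (base, n, ext))):
        n += 1
    return '%s_%d%s' % (base, n, ext)
```

Each save operation would then produce `rc_results_0.csv`, `rc_results_1.csv`, and so on, without ever overwriting an earlier export.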
End of explanation
"""
|
nicococo/scRNA | notebooks/example.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from functools import partial
from sklearn.manifold import TSNE
import sklearn.metrics as metrics
from scRNA.simulation import generate_toy_data, split_source_target
from scRNA.nmf_clustering import NmfClustering_initW, NmfClustering, DaNmfClustering
from scRNA.sc3_clustering_impl import data_transformation_log2, cell_filter, gene_filter
"""
Explanation: scRNA - Example Application
This notebook showcases the various features of this package in a simple and
accessible example, i.e. we discuss the main parts of the transfer learning
and data simulation pipeline.
The main features of the scRNA package are:
* simulation of scRNA read-count data according to a user-defined cell hierarchy
* data splittings under various scenarios: random, stratified, overlapping, etc.
* setting up a data pre-processing pipeline, i.e. cell- and gene-filters, data transformations
* source data clustering using non-negative matrix factorization with and without accompanying labels
* augmented clustering of the target data with user defined mix-in of the source data influence.
Throughout this notebook, we will employ the supervised adjusted Rand score for
empirical evaluation. This is a supervised score that assumes access to
ground truth labels, which is, of course, implausible in practical settings.
For discussions on unsupervised evaluations, we refer to our paper.
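For reference, the adjusted Rand index can be computed from a contingency table of the two labelings; a minimal pure-Python sketch of the definition is shown below (in the notebook itself we use sklearn.metrics.adjusted_rand_score):

```python
from collections import Counter
from math import comb


def adjusted_rand(labels_a, labels_b):
    """Adjusted Rand index between two labelings (reference sketch)."""
    n = len(labels_a)
    # contingency table counts n_ij of items sharing label pair (i, j)
    pairs = Counter(zip(labels_a, labels_b))
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate case, e.g. both labelings trivial
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

A score of 1 means the two labelings induce the same partition (even with permuted label names), while chance-level agreement scores around 0.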
End of explanation
"""
n_genes = 1000
n_cells = 2000
cluster_spec = [1, 2, 3, [4, 5], [6, [7, 8]]]
np.random.seed(42)
data, labels = generate_toy_data(num_genes=n_genes,
num_cells=n_cells,
cluster_spec=cluster_spec)
print(data.shape)
"""
Explanation: 1. Simulating scRNA read count data
We will simulate 2000 cells with 1000 genes each whereas the cells exhibit
some hierarchical relation according to a user-defined, string-encoded tree structure.
top: 1 2 3 * *
1st: 4 5 6 *
2nd: 7 8
End of explanation
"""
model = TSNE(n_components=2, random_state=0, init='pca', method='exact', metric='euclidean', perplexity=30)
ret = model.fit_transform(data.T)
plt.title('tSNE')
plt.scatter(ret[:, 0], ret[:, 1], 10, labels)
plt.xticks([])
plt.yticks([])
"""
Explanation: Let's look at a tSNE plot of the simulated data. We see that the clusters are nicely
distributed and easily recognizable.
To tweak the data, 'generate_toy_data' accepts a number of additional
arguments, e.g. for inserting more noise.
End of explanation
"""
plt.figure(0)
inds = np.argsort(labels)
plt.pcolor(data[:, inds] / np.max(data), cmap='Greys')
plt.clim(0.,+1.)
plt.xticks([])
plt.yticks([])
for i in range(len(labels)):
plt.vlines(i, 0, n_genes, colors='C{0}'.format(labels[inds[i]]), alpha=0.07)
plt.title('Read counts')
plt.xlabel('Cells')
plt.ylabel('Genes')
"""
Explanation: Plotting the read counts as a matrix reveals that many entries are zero, or
close to zero. Cluster-specific structures are partly visible from the raw data.
End of explanation
"""
n_trg = 100
n_src = 400
np.random.seed(2)
data_source, data_target, true_labels_source, true_labels_target = \
split_source_target(
data,
labels,
target_ncells = n_trg,
source_ncells = n_src,
source_clusters = [1,2,3,4,5,6,7,8],
mode = 6,
common = 0,
cluster_spec = cluster_spec
)
trg_labels = np.unique(true_labels_target)
src_labels = np.unique(true_labels_source)
print('Source cluster: ', np.unique(true_labels_source))
print('Target cluster: ', np.unique(true_labels_target))
"""
Explanation: 2. Splitting data into source and target
Once the data is generated, the consecutive step is to sample the source and
target data from the much larger corpus.
There are a number of ways to sample the data, e.g. by random, random but stratified,
exclusive clusters for source, overlapping clusters, etc.
The sampling method can be set by setting the corresponding
'mode' argument in the 'split_source_target' function.
Splitting mode:
- 1 = split randomly,
- 2 = split randomly, but stratified,
- 3 = split randomly, but anti-stratified (not implemented)
- 4 = Have some overlapping and some exclusive clusters,
- 5 = have only non-overlapping clusters
- 6 = Define source matrix clusters
- 7 = Define number of overlapping clusters
In this example, we will sample 100 target data and 400 source data using
mode 6 and sample from all clusters for our source dataset.
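As an illustration of the stratified idea behind mode 2, a toy splitter might look like this (a hypothetical sketch, not the actual split_source_target implementation):

```python
import random
from collections import defaultdict


def toy_stratified_split(labels, n_target, seed=0):
    """Pick n_target indices, sampling proportionally from each cluster label."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, lab in enumerate(labels):
        by_label[lab].append(i)
    target = []
    for lab, idx in by_label.items():
        # each cluster contributes in proportion to its size
        k = max(1, round(n_target * len(idx) / len(labels)))
        target.extend(rng.sample(idx, min(k, len(idx))))
    tset = set(target)
    source = [i for i in range(len(labels)) if i not in tset]
    return source, target
```

The real function supports the additional modes listed above (exclusive source clusters, controlled overlap, and so on) on top of this basic idea.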
End of explanation
"""
np.random.seed(1)
nmf = NmfClustering(data_source.copy(), np.arange(n_genes), labels=None, num_cluster=src_labels.size)
nmf.apply(alpha=1., l1=0.75, rel_err=1e-8)
score = metrics.adjusted_rand_score(true_labels_source, nmf.cluster_labels)
print('Adjusted Rand Score w/o labels: ', score)
np.random.seed(1)
nmf = NmfClustering_initW(data_source.copy(), np.arange(n_genes), labels=true_labels_source, num_cluster=src_labels.size)
nmf.apply(alpha=1., l1=0.75, rel_err=1e-8)
score = metrics.adjusted_rand_score(true_labels_source, nmf.cluster_labels)
print('Adjusted Rand Score w/ labels: ', score)
"""
Explanation: 3. Clustering source data w/ and w/o labels
Source data must be clustered with our non-negative matrix factorization
approach. If source data labels are provided, then decomposing matrices
are initialized accordingly, which (unsurprisingly) leads to higher
in-sample accuracy scores than without.
End of explanation
"""
cell_filter_fun = partial(cell_filter, num_expr_genes=0, non_zero_threshold=-1)
gene_filter_fun = partial(gene_filter, perc_consensus_genes=1, non_zero_threshold=-1)
data_transf_fun = partial(data_transformation_log2)
np.random.seed(1)
nmf_transf = NmfClustering_initW(data_source.copy(), np.arange(n_genes), labels=true_labels_source, num_cluster=src_labels.size)
nmf_transf.add_cell_filter(cell_filter_fun)
nmf_transf.add_gene_filter(gene_filter_fun)
nmf_transf.set_data_transformation(data_transf_fun)
nmf_transf.apply(alpha=1., l1=0.75, rel_err=1e-8)
# nmf.print_reconstruction_error(data_source, nmf.dictionary, nmf.data_matrix)
score = metrics.adjusted_rand_score(true_labels_source, nmf_transf.cluster_labels)
print('Adjusted Rand Score: ', score)
"""
Explanation: We can transform and filter any data using sc3-inspired methods, i.e.
log-transformations, gene filters, and cell filters.
Any scRNA clustering method inherits from the scRNA/AbstractClustering class
and is able to process data before 'apply'. You only need to add corresponding
filters and transformations. Implementations for sc3-style filtering and
transformations are stored in scRNA/sc3_clustering_impl.py.
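The chaining of filters and transformations before clustering can be pictured with a toy pipeline (a hypothetical sketch of the idea, not the package's actual classes):

```python
import math


class ToyPipeline:
    """Toy sketch of chaining data filters/transformations before clustering."""

    def __init__(self):
        self.steps = []

    def add_step(self, fn):
        self.steps.append(fn)
        return self  # allow method chaining

    def apply(self, rows):
        for fn in self.steps:
            rows = fn(rows)
        return rows


def drop_empty_cells(rows):
    # cell filter: remove all-zero rows
    return [r for r in rows if sum(r) > 0]


def log2_transform(rows):
    # data transformation: log2(x + 1)
    return [[math.log2(v + 1) for v in r] for r in rows]
```

In the package, add_cell_filter, add_gene_filter, and set_data_transformation play the role of add_step, and apply runs the whole chain before the actual factorization.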
End of explanation
"""
print('(Iteration) adjusted Rand score:')
da_nmf_target = DaNmfClustering(nmf, data_target.copy(), np.arange(n_genes), num_cluster=trg_labels.size)
thetas = np.linspace(0, 1, 20)
res = np.zeros(thetas.size)
for i in range(thetas.size):
da_nmf_target.apply(mix=thetas[i], alpha=1., l1=0.75, rel_err=1e-8, calc_transferability=False)
# print(da_nmf_target.cluster_labels)
res[i] = metrics.adjusted_rand_score(true_labels_target, da_nmf_target.cluster_labels)
print('(', i,')', res[i])
plt.figure(0)
plt.bar(thetas, res)
plt.xticks([])
plt.yticks([0., 1.])
plt.xlabel('theta')
plt.ylabel('adjusted Rand score')
"""
Explanation: 4. Transfer learning: utilizing source data to help clustering target data
End of explanation
"""
|
kidpixo/multibinner | examples/example_multibinner.ipynb | mit | image_df = pd.DataFrame(image.reshape(-1,image.shape[-1]),columns=['red','green','blue'])
image_df.describe()
n_data = image.reshape(-1,image.shape[-1]).shape[0]*10 # 10 times the original number of pixels : overkill!
x = np.random.random_sample(n_data)*image.shape[1]
y = np.random.random_sample(n_data)*image.shape[0]
data = pd.DataFrame({'x' : x, 'y' : y })
# extract the random point from the original image and add some noise
for index,name in zip(*(range(image.shape[-1]),['red','green','blue'])):
data[name] = image[data.y.astype(int),data.x.astype(int),index]+np.random.rand(n_data)*.1
data.describe().T
"""
Explanation: Dataset
Initial data are read from an image, then n_data samples will be extracted from the data.
The image contains 200x200 = 40k pixels
We will extract 400k random points from the image and build a pandas.DataFrame
This mimics the sampling process of a spacecraft, for example: looking at a target (Earth or another body) and getting far more data points than you need to reconstruct a coherent representation.
Moreover, visualizing 400k x 3 columns of points is difficult, thus we will multibin the DataFrame into 200 bins in the x and 200 in the y direction, calculate the average for each bin and return a 200x200 array of data in output.
The multibin.MultiBinnedDataFrame can generate as many dimensions as one likes; the 2D example here is for the sake of representation.
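The core binning-and-averaging operation can be sketched in pure Python (a toy 2D version of the idea only; the real class handles arbitrary dimensions and arbitrary aggregation functions):

```python
def bin_average(xs, ys, vals, nx, ny, xmax, ymax):
    """Average vals on an ny x nx grid: a toy 2D sketch of multibinning."""
    sums = [[0.0] * nx for _ in range(ny)]
    counts = [[0] * nx for _ in range(ny)]
    for x, y, v in zip(xs, ys, vals):
        i = min(int(y / ymax * ny), ny - 1)   # bin index along y
        j = min(int(x / xmax * nx), nx - 1)   # bin index along x
        sums[i][j] += v
        counts[i][j] += 1
    # average per bin; None marks empty bins
    return [[sums[i][j] / counts[i][j] if counts[i][j] else None
             for j in range(nx)] for i in range(ny)]
```

With 400k random samples and a 200x200 grid, each bin collects roughly ten points, so the per-bin average reconstructs the image while suppressing the added noise.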
End of explanation
"""
pd.tools.plotting.scatter_matrix(data.sample(n=1000), alpha=0.5 , lw=0, figsize=(12, 12), diagonal='hist');
# Let's multibinning!
# functions we want to apply on the data in a single multidimensional bin:
aggregated_functions = {
'red' : {'elements' : len ,'average' : np.average},
'green' : {'average' : np.average},
'blue' : {'average' : np.average}
}
# the columns we want to have in output:
out_columns = ['red','green','blue']
# define the bins for sepal_length
group_variables = collections.OrderedDict([
('y',mb.bingenerator({ 'start' : 0 ,'stop' : image.shape[0], 'n_bins' : image.shape[0]})),
('x',mb.bingenerator({ 'start' : 0 ,'stop' : image.shape[1], 'n_bins' : image.shape[1]}))
])
# I use OrderedDict to have fixed order, a normal dict is fine too.
# that is the object collecting all the data that define the multi binning
mbdf = mb.MultiBinnedDataFrame(binstocolumns = True,
dataframe = data,
group_variables = group_variables,
aggregated_functions = aggregated_functions,
out_columns = out_columns)
mbdf.MBDataFrame.describe().T
# reconstruct the multidimensional array defined by group_variables
outstring = []
for key,val in mbdf.group_variables.iteritems():
outstring.append('{} bins ({})'.format(val['n_bins'],key))
key = 'red_average'
print '{} array = {}'.format(key,' x '.join(outstring))
print
print mbdf.col_df_to_array(key)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(figsize=[16,10], ncols=2, nrows=2)
cm = plt.get_cmap('jet')
key = 'red_elements'
imgplot = ax1.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax1)
ax1.set_title('elements per bin')
ax1.grid(False)
key = 'red_average'
imgplot = ax2.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax2)
ax2.set_title(key)
ax2.grid(False)
key = 'green_average'
imgplot = ax3.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax3)
ax3.set_title(key)
ax3.grid(False)
key = 'blue_average'
imgplot = ax4.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax4)
ax4.set_title(key)
ax4.grid(False)
rgb_image_dict = mbdf.all_df_to_array()
rgb_image = rgb_image_dict['red_average']
for name in ['green_average','blue_average']:
rgb_image = np.dstack((rgb_image,rgb_image_dict[name]))
fig, (ax1,ax2) = plt.subplots(figsize=[16,10], ncols=2)
ax1.imshow(255-rgb_image,interpolation='bicubic',origin='lower')
ax1.set_title('MultiBinnedDataFrame')
ax2.imshow(image ,interpolation='bicubic',origin='lower')
ax2.set_title('Original Image')
"""
Explanation: Data Visualization
[This is a downsampled version of the dataset; the full version would take around 1 minute per plot to visualize...]
Does this dataset make sense to you? Can you guess the original image?
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_source_power_spectrum.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd
print(__doc__)
"""
Explanation: Compute power spectrum densities of the sources with dSPM
Returns an STC file containing the PSD (in dB) of each of the sources.
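As background, the quantity being estimated is a power spectral density; a naive DFT periodogram conveys the basic idea (this is an illustration of what a PSD is, not how compute_source_psd actually estimates it):

```python
import cmath
import math


def periodogram(x, fs):
    """Naive DFT periodogram: power at each frequency bin (illustration only)."""
    n = len(x)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        # k-th DFT coefficient of the signal
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        psd.append(abs(coef) ** 2 / (fs * n))
    return freqs, psd
```

A pure sinusoid produces a single peak at its own frequency; compute_source_psd performs a comparable frequency decomposition on the inverse-projected source time courses.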
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = 0, 120 # use the first 120s of data
fmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
label = mne.read_label(fname_label)
stc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method="dSPM",
tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
pick_ori="normal", n_fft=n_fft, label=label)
stc.save('psd_dSPM')
"""
Explanation: Set parameters
End of explanation
"""
plt.plot(1e3 * stc.times, stc.data.T)
plt.xlabel('Frequency (Hz)')
plt.ylabel('PSD (dB)')
plt.title('Source Power Spectrum (PSD)')
plt.show()
"""
Explanation: View PSD of sources in label
End of explanation
"""
|
ContinuumIO/pydata-apps | Section_1_blaze_solutions.ipynb | mit | import pandas as pd
df = pd.read_csv('iris.csv')
df.head()
df.groupby(df.Species).PetalLength.mean() # Average petal length per species
"""
Explanation: <img src="images/continuum_analytics_logo.png"
alt="Continuum Logo",
align="right",
width="30%">,
Introduction to Blaze
In this tutorial we'll learn how to use Blaze to discover, migrate, and query data living in other databases. Generally this tutorial will have the following format
odo - Move data to database
blaze - Query data in database
Install
This tutorial uses many different libraries that are all available with the Anaconda Distribution. Once you have Anaconda installed, please run these commands from a terminal:
$ conda install -y blaze
$ conda install -y bokeh
$ conda install -y odo
nbviewer: http://nbviewer.ipython.org/github/ContinuumIO/pydata-apps/blob/master/Section-1_blaze.ipynb
github: https://github.com/ContinuumIO/pydata-apps
<hr/>
Goal: Accessible, Interactive, Analytic Queries
NumPy and Pandas provide accessible, interactive, analytic queries; this is valuable.
End of explanation
"""
from odo import odo
import numpy as np
import pandas as pd
odo("iris.csv", pd.DataFrame)
odo("iris.csv", list)
odo("iris.csv", np.ndarray)
"""
Explanation: <hr/>
But as data grows and systems become more complex, moving data and querying data become more difficult. Python already has excellent tools for data that fits in memory, but we want to hook up to data that is inconvenient.
From now on, we're going to assume one of the following:
You have an inconvenient amount of data
That data should live someplace other than your computer
<hr/>
Databases and Python
When in-memory arrays/dataframes cease to be an option, we turn to databases. These live outside of the Python process and so might be less convenient. The open source Python ecosystem includes libraries to interact with these databases and with foreign data in general.
Examples:
SQL - sqlalchemy
Hive/Cassandra - pyhive
Impala - impyla
RedShift - redshift-sqlalchemy
...
MongoDB - pymongo
HBase - happybase
Spark - pyspark
SSH - paramiko
HDFS - pywebhdfs
Amazon S3 - boto
Today we're going to use some of these indirectly with odo (was into) and Blaze. We'll try to point out these libraries as we automate them so that, if you'd like, you can use them independently.
<hr />
<img src="images/continuum_analytics_logo.png"
alt="Continuum Logo",
align="right",
width="30%">,
odo (formerly into)
Odo migrates data between formats and locations.
Before we can use a database we need to move data into it. The odo project provides a single consistent interface to move data between formats and between locations.
We'll start with local data and eventually move out to remote data.
odo docs
<hr/>
Examples
Odo moves data into a target from a source
```python
odo(source, target)
```
The target and source can be either a Python object or a string URI. The following are all valid calls to into
```python
odo('iris.csv', pd.DataFrame) # Load CSV file into new DataFrame
odo(my_df, 'iris.json') # Write DataFrame into JSON file
odo('iris.csv', 'iris.json') # Migrate data from CSV to JSON
```
<hr/>
Exercise
Use odo to load the iris.csv file into a Python list, a np.ndarray, and a pd.DataFrame
End of explanation
"""
odo("iris.csv", "sqlite:///my.db::iris")
"""
Explanation: <hr/>
URI Strings
Odo refers to foreign data either with a Python object like a sqlalchemy.Table object for a SQL table, or with a string URI, like postgresql://hostname::tablename.
URI's often take on the following form
protocol://path-to-resource::path-within-resource
Where path-to-resource might point to a file, a database hostname, etc. while path-within-resource might refer to a datapath or table name. Note the two main separators
:// separates the protocol on the left (sqlite, mongodb, ssh, hdfs, hive, ...)
:: separates the path within the database on the right (e.g. tablename)
odo docs on uri strings
<hr/>
Examples
Here are some example URIs
myfile.json
myfiles.*.csv
postgresql://hostname::tablename
mongodb://hostname/db::collection
ssh://user@host:/path/to/myfile.csv
hdfs://user@host:/path/to/*.csv
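To make the two separators concrete, a toy splitter might look like this (an illustration of the URI form only, not odo's actual parsing code):

```python
def split_uri(uri):
    """Split a URI into (protocol, path-to-resource, path-within-resource)."""
    if '://' in uri:
        protocol, rest = uri.split('://', 1)
    else:
        protocol, rest = '', uri          # e.g. a bare filename like myfile.json
    path, _, subpath = rest.partition('::')
    return protocol, path, subpath
```

For example, `split_uri('postgresql://hostname::tablename')` separates the protocol, the database host, and the table name.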
<hr />
Exercise
Migrate your CSV file into a table named iris in a new SQLite database at sqlite:///my.db. Remember to use the :: separator and to separate your database name from your table name.
odo docs on SQL
End of explanation
"""
type(_)
"""
Explanation: What kind of object did you get receive as output? Call type on your result.
End of explanation
"""
odo('s3://nyqpug/tips.csv', pd.DataFrame)
"""
Explanation: <hr/>
How it works
Odo is a network of fast pairwise conversions between formats. When we migrate between two formats we traverse a path of pairwise conversions. We visualize that network below:
We visualize that network below:
Each node represents a data format. Each directed edge represents a function to transform data between two formats. A single call to into may traverse multiple edges and multiple intermediate formats. Red nodes support larger-than-memory data.
A single call to into may traverse several intermediate formats calling on several conversion functions. For example, we when migrate a CSV file to a Mongo database we might take the following route:
Load in to a DataFrame (pandas.read_csv)
Convert to np.recarray (DataFrame.to_records)
Then to a Python Iterator (np.ndarray.tolist)
Finally to Mongo (pymongo.Collection.insert)
Alternatively we could write a special function that uses MongoDB's native CSV
loader and shortcut this entire process with a direct edge CSV -> Mongo.
These functions are chosen because they are fast, often far faster than converting through a central serialization format.
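Finding such a route is a shortest-path search over the conversion graph; a minimal sketch of the idea (not odo's actual internals, which also weight edges by conversion cost) could look like this:

```python
from collections import deque


def find_route(graph, source, target):
    """Breadth-first search for a conversion path, e.g. CSV -> ... -> Mongo."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path          # first path found is a shortest one (BFS)
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                  # no conversion route exists
```

Adding a direct CSV -> Mongo edge to the graph would shortcut the longer route, which is exactly why such special-cased edges are worth writing.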
This picture is actually from an older version of odo, when the graph was still small enough to visualize pleasantly. See odo docs for a more updated version.
<hr/>
Remote Data
We can interact with remote data in three locations
On Amazon's S3 (this will be quick)
On a remote machine via ssh
On the Hadoop File System (HDFS)
For most of this we'll wait until we've seen Blaze, briefly we'll use S3.
S3
For now, we quickly grab a file from Amazon's S3.
This example depends on boto to interact with S3.
conda install boto
odo docs on aws
End of explanation
"""
import pandas as pd
df = pd.read_csv('iris.csv')
df.head(5)
df.Species.unique()
df.Species.drop_duplicates()
"""
Explanation: <hr/>
<img src="images/continuum_analytics_logo.png"
alt="Continuum Logo",
align="right",
width="30%">,
Blaze
Blaze translates a subset of numpy/pandas syntax into database queries. It hides away the database.
On simple datasets, like CSV files, Blaze acts like Pandas with slightly different syntax. In this case Blaze is just using Pandas.
<hr/>
Pandas example
End of explanation
"""
import blaze as bz
d = bz.Data('iris.csv')
d.head(5)
d.Species.distinct()
"""
Explanation: <hr/>
Blaze example
End of explanation
"""
!ls
"""
Explanation: <hr/>
Foreign Data
Blaze does different things under-the-hood on different kinds of data
CSV files: Pandas DataFrames (or iterators of DataFrames)
SQL tables: SQLAlchemy.
Mongo collections: PyMongo
...
SQL
We'll play with SQL a lot during this tutorial. Blaze translates your query to SQLAlchemy. SQLAlchemy then translates to the SQL dialect of your database, your database then executes that query intelligently.
Blaze $\rightarrow$ SQLAlchemy $\rightarrow$ SQL $\rightarrow$ Database computation
This translation process lets analysts interact with a familiar interface while leveraging a potentially powerful database.
To keep things local we'll use SQLite, but this works with any database with a SQLAlchemy dialect. Examples in this section use the iris dataset. Exercises use the Lahman Baseball statistics database, year 2013.
If you have not downloaded this dataset you could do so here - https://github.com/jknecht/baseball-archive-sqlite/raw/master/lahman2013.sqlite.
<hr/>
End of explanation
"""
db = bz.Data('sqlite:///my.db')
#db.iris
#db.iris.head()
db.iris.Species.distinct()
db.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']]
"""
Explanation: Examples
Lets dive into Blaze Syntax. For simple queries it looks and feels similar to Pandas
End of explanation
"""
# Inspect SQL query
query = db.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']]
print bz.compute(query)
query = bz.by(db.iris.Species, longest=db.iris.PetalLength.max(),
shortest=db.iris.PetalLength.min())
print bz.compute(query)
odo(query, list)
"""
Explanation: <hr />
Work happens on the database
If we were using pandas we would read the table into pandas, then use pandas' fast in-memory algorithms for computation. Here we translate your query into SQL and then send that query to the database to do the work.
Pandas $\leftarrow_\textrm{data}$ SQL, then Pandas computes
Blaze $\rightarrow_\textrm{query}$ SQL, then database computes
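The same division of labour can be demonstrated with the standard-library sqlite3 module: the GROUP BY below executes inside the database, and Python only receives the small result set (a toy illustration, independent of Blaze):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE iris (species TEXT, petal_length REAL)')
conn.executemany('INSERT INTO iris VALUES (?, ?)',
                 [('setosa', 1.4), ('setosa', 1.3), ('versicolor', 4.7)])
# the aggregation happens inside SQLite, not in Python
rows = conn.execute('SELECT species, MIN(petal_length), MAX(petal_length) '
                    'FROM iris GROUP BY species').fetchall()
```

Blaze automates exactly this: it builds the SQL string (via SQLAlchemy) and lets the database do the aggregation work.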
If we want to dive into the internal API we can inspect the query that Blaze transmits.
<hr />
End of explanation
"""
# db = bz.Data('postgresql://postgres:postgres@ec2-54-159-160-163.compute-1.amazonaws.com') # Use Postgres if you don't have the sqlite file
db = bz.Data('sqlite:///lahman2013.sqlite')
db.dshape
# View the Salaries table
t = bz.Data('sqlite:///lahman2013.sqlite::Salaries')
t.dshape
# What are the distinct teamIDs in the Salaries table?
t.teamID.distinct()
odo(t.teamID.distinct(), list)
query = t.teamID.distinct()
print bz.compute(query)
# What is the minimum and maximum yearID in the Salaries table?
t.yearID.min()
t.yearID.max()
# For the Oakland Athletics (teamID OAK), pick out the playerID, salary, and yearID columns
t[t.teamID=='OAK'][['playerID', 'salary', 'yearID']]
oak = t[t.teamID=='OAK'][['playerID', 'salary', 'yearID']]
oak
odo(oak, 'oak.csv')
!ls
# Sort that result by salary.
# Use the ascending=False keyword argument to the sort function to find the highest paid players
oak.sort('salary',ascending=False)
"""
Explanation: <hr />
Exercises
Now we load the Lahman baseball database and perform similar queries
End of explanation
"""
import pandas as pd
iris = pd.read_csv('iris.csv')
iris.groupby('Species').PetalLength.min()
iris = bz.Data('sqlite:///my.db::iris')
bz.by(iris.Species, largest=iris.PetalLength.max(),
smallest=iris.PetalLength.min())
print(_)
"""
Explanation: <hr />
Example: Split-apply-combine
In Pandas we perform computations on a per-group basis with the groupby operator. In Blaze our syntax is slightly different, using instead the by function.
End of explanation
"""
iris = bz.Data('sqlite:///my.db::iris')
query = bz.by(iris.Species, largest=iris.PetalLength.max(), # A lazily evaluated result
smallest=iris.PetalLength.min())
odo(query, list) # A concrete result
"""
Explanation: <hr/>
Store Results
By default Blaze only shows us the first ten lines of a result. This provides a more interactive feel and stops us from accidentally crushing our system. Sometimes we do want to compute all of the results and store them someplace.
Blaze expressions are valid sources for odo. So we can store our results in any format.
End of explanation
"""
result = bz.by(db.Salaries.teamID, avg=db.Salaries.salary.mean(),
max=db.Salaries.salary.max(),
ratio=db.Salaries.salary.max() / db.Salaries.salary.min()
).sort('ratio', ascending=False)
odo(result, list)[:10]
odo(result, 'sqlite:///my.db::result')
"""
Explanation: <hr/>
Exercise: Storage
The solution to the first split-apply-combine problem is below. Store that result in a list, a CSV file, and in a new SQL table in our database (use a URI like sqlite://... to specify the SQL table).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/ca1574468d033ed7a4e04f129164b25b/20_cluster_1samp_spatiotemporal.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
"""
Explanation: Permutation t-test on source data with spatio-temporal clustering
This example tests if the evoked response is significantly different between
two conditions across subjects. Here just for demonstration purposes
we simulate data from multiple subjects using one subject's data.
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
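The core of a one-sample paired permutation test is sign-flipping the per-subject differences; a toy version of that ingredient is sketched below (illustration only — the function used in this example additionally forms clusters over space and time and tests cluster-level statistics):

```python
import random


def sign_flip_pvalue(diffs, n_perm=1000, seed=0):
    """Two-sided paired permutation (sign-flip) test on the mean difference."""
    rng = random.Random(seed)
    observed = abs(sum(diffs) / len(diffs))
    count = 0
    for _ in range(n_perm):
        # under H0 each subject's difference is equally likely to be +/-
        flipped = sum(d * rng.choice((1, -1)) for d in diffs) / len(diffs)
        if abs(flipped) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction
```

With n subjects there are only 2 ** n distinct sign assignments, which is why the note below points out that 7 subjects bound the attainable two-sided p-value.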
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
"""
Explanation: Set parameters
End of explanation
"""
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
"""
Explanation: Read epochs for all channels, removing a bad one
End of explanation
"""
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50, npad='auto')
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50, npad='auto')
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep * 1000 # convert to milliseconds
"""
Explanation: Transform to source space
End of explanation
"""
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
"""
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph
them to the same cortical space (e.g. fsaverage). For example purposes,
we will simulate this by just having each "subject" have the same
response (just noisy in source space) here.
<div class="alert alert-info"><h4>Note</h4><p>Note that for 7 subjects with a two-sided statistical test, the minimum
significance under a permutation test is only p = 1/(2 ** 6) = 0.015,
which is large.</p></div>
End of explanation
"""
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir).morph_mat
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
"""
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately (and you might want to use morph_data
instead), but here since all estimates are on 'sample' we can use one
morph matrix for all the heavy lifting.
End of explanation
"""
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
"""
Explanation: Finally, we want to compare the overall activity levels in each
condition. The difference is taken along the last axis (condition), and the order
of subtraction makes it so condition1 > condition2 shows up as "red blobs" (instead of blue).
End of explanation
"""
print('Computing adjacency.')
adjacency = mne.spatial_src_adjacency(src)
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, adjacency=adjacency, n_jobs=1,
threshold=t_threshold, buffer_size=None,
verbose=True)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
"""
Explanation: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal)
End of explanation
"""
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration.
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(
hemi='both', views='lateral', subjects_dir=subjects_dir,
time_label='temporal extent (ms)', size=(800, 800),
smoothing_steps=5, clim=dict(kind='value', pos_lims=[0, 1, 40]))
# brain.save_image('clusters.png')
"""
Explanation: Visualize the clusters
End of explanation
"""
|
ffmmjj/intro_to_data_science_workshop | solutions/en_US/04-Example: Titanic survivors analysis.ipynb | apache-2.0 | import pandas as pd
raw_data = pd.read_csv('datasets/titanic.csv')
raw_data.head()
raw_data.info()
"""
Explanation: Titanic survival analysis
The Titanic survivors dataset is popularly used to illustrate concepts of data cleaning and exploration.
Let's start by importing the data to a pandas DataFrame from a CSV file:
End of explanation
"""
# Percentage of missing values in each column
(raw_data.isnull().sum() / len(raw_data)) * 100.0
"""
Explanation: The information above shows that this dataset consists of data for 891 passengers: their names, gender, age, etc (for a complete description of the meaning of each column, check this link)
Missing values
Before starting the data analysis, we need to check the data's "health" by looking at how much information is actually present in each column.
End of explanation
"""
raw_data.drop('Cabin', axis='columns', inplace=True)
raw_data.info()
"""
Explanation: It can be seen that 77% of the passengers have no information about which cabin they were allocated to. This information could be useful for further analysis but, for now, let's drop this column:
End of explanation
"""
raw_data.dropna(subset=['Embarked'], inplace=True)
(raw_data.isnull().sum() / len(raw_data)) * 100.0
"""
Explanation: The column Embarked, which indicates the port where each passenger embarked, only has a few missing entries. Since the number of passengers with missing values is negligible, they can be discarded without much harm:
End of explanation
"""
raw_data.fillna({'Age': raw_data.Age.median()}, inplace=True)
(raw_data.isnull().sum() / len(raw_data)) * 100.0
"""
Explanation: Finally, the age is missing from around 20% of the passengers. It's not reasonable to drop all these passengers, nor to drop the column as a whole, so one possible solution is to fill the missing values with the median age of the dataset:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
overall_fig = raw_data.Survived.value_counts().plot(kind='bar')
overall_fig.set_xlabel('Survived')
overall_fig.set_ylabel('Amount')
"""
Explanation: Why use the median instead of the average?
The median is a robust statistic. A statistic is a number that summarizes a set of values, and a statistic is said to be robust if it is not significantly affected by variations in the data.
Suppose we have a group of people whose ages are [15, 16, 14, 15, 15, 19, 14, 17]. The average age in this group is 15.625. If an 80-year-old person gets added to this group, its average age becomes 22.78 years, which does not seem to represent the age profile of the group well.
The median age of this group, instead, is 15 years in both cases - i.e. the median value was not changed by the presence of an outlier in the data, which makes it a robust statistic for the ages of the group.
Now that all of the passengers' information has been "cleaned", we can start to analyse the data.
Exploratory analysis
Let's start by exploring how many people in this dataset survived the Titanic:
End of explanation
"""
survived_sex = raw_data[raw_data['Survived']==1]['Sex'].value_counts()
dead_sex = raw_data[raw_data['Survived']==0]['Sex'].value_counts()
df = pd.DataFrame([survived_sex,dead_sex])
df.index = ['Survivors','Non-survivors']
df.plot(kind='bar',stacked=True, figsize=(15,8));
"""
Explanation: Overall, 38% of the passengers survived.
Now, let's segment the proportion of survivors along different profiles (the code to generate the following graphs was taken from this link).
By gender
End of explanation
"""
figure = plt.figure(figsize=(15,8))
plt.hist([raw_data[raw_data['Survived']==1]['Age'], raw_data[raw_data['Survived']==0]['Age']],
stacked=True, color=['g','r'],
bins=30, label=['Survivors','Non-survivors'])
plt.xlabel('Age')
plt.ylabel('No. passengers')
plt.legend();
"""
Explanation: By age
End of explanation
"""
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(15,8))
plt.hist([raw_data[raw_data['Survived']==1]['Fare'], raw_data[raw_data['Survived']==0]['Fare']],
stacked=True, color=['g','r'],
bins=50, label=['Survivors','Non-survivors'])
plt.xlabel('Fare')
plt.ylabel('No. passengers')
plt.legend();
"""
Explanation: By fare
End of explanation
"""
data_for_prediction = raw_data[['Name', 'Sex', 'Age', 'Fare', 'Survived']]
data_for_prediction.is_copy = False
data_for_prediction.info()
"""
Explanation: The graphs above indicate that passengers who are female, are younger than 20, and/or paid higher fares to embark had a greater chance of surviving the Titanic (what a surprise!).
How precisely can we use this information to predict whether a passenger would survive the accident?
Predicting chances of surviving
Let's start by preserving onle the information that we wish to use - we'll keep the passenger names for further analysis:
End of explanation
"""
data_for_prediction['Sex'] = data_for_prediction.Sex.map({'male': 0, 'female': 1})
data_for_prediction.info()
"""
Explanation: Numeric encoding of Strings
Some information is encoded as strings: the information about the passenger's gender, for instance, is represented by the strings male and female. To make use of this information in our coming predictive model, we must convert them to numeric values:
End of explanation
"""
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(data_for_prediction, test_size=0.25, random_state=254)
len(train_data), len(test_data)
"""
Explanation: Training/validation set split
In order to assess the model's predictive power, part of the data (in this case, 25%) must be set aside as a validation set.
A validation set is a dataset for which the expected values are known but that is not used to train the predictive model - this way, the model will not be biased by information from these entries, and this dataset can be used to estimate the error rate.
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier().fit(train_data[['Sex', 'Age', 'Fare']], train_data.Survived)
tree.score(test_data[['Sex', 'Age', 'Fare']], test_data.Survived)
"""
Explanation: Predicting survival chances with decision trees
We'll use a simple Decision Tree model to predict if a passenger would survive the Titanic by making use of its gender, age and fare.
End of explanation
"""
test_data.is_copy = False
test_data['Predicted'] = tree.predict(test_data[['Sex', 'Age', 'Fare']])
test_data[test_data.Predicted != test_data.Survived]
"""
Explanation: With a simple decision tree, the result above indicates that it's possible to correctly predict the survival of about 80% of the passengers.
An interesting exercise to do after training a predictive model is to take a look at the cases where it missed:
End of explanation
"""
|
patrickbreen/patrickbreen.github.io | notebooks/vae_my_version.ipynb | mit | import sys
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(0)
tf.set_random_seed(0)
# get the script below from
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/input_data.py
import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
n_samples = mnist.train.num_examples
"""
Explanation: Variational Auto-Encoder notebook using tensorflow
(Edit) I fixed this notebook so that it can be run top to bottom to reproduce everything. Also to get the ipython notebook format, change "html" to "ipynb" at the end of the URL above to access the notebook file.
Introduction:
In this blog series we are going to build towards a rather complicated, deep learning model that can be applied to an interesting bioinformatics application. I don't want to explain the end goal right now, since I want to keep that confidential until it is published. However the end goal is complicated enough, and I am unfamiliar with both the deep learning models, and the tensorflow framwork, that we aren't going to try to implement the end goal in one shot, rather we are going to implement a series of smaller simpler tensorflow models, to build up some pieces that can then be assembled later.
In this series we will investigate deep generative models such as Variational Auto-Encoders (VAE) and Generative Adversarial Networks (GAN). We will also develop recurrent neural networks, including LSTM and especially seq2seq models.
To start off, in this notebook, we look below at a VAE on the MNIST dataset, just to get familiar with VAE's, and generative models overall (which will be a piece that we need later), and to get familliar with tensorflow.
A lot of this notebook is taken and modified from https://jmetzen.github.io/2015-11-27/vae.html
End of explanation
"""
# We have training data: images of 28x28=784 gray scale pixels (a list of 784-length numpy vectors):
print(mnist.train.images[0].shape) # 784
print(len(mnist.train.images)) # 55000
print(sys.getsizeof(mnist.train.images)) # 172 million bytes (fits in memory, yay!)
# the labels are stored as one-hot encoding (10 dimensional numpy vectors)
print(mnist.train.labels[0].shape) # 10
print(len(mnist.train.labels)) # 55000
# We also have 5000 validation image-label pairs
print(len(mnist.validation.labels)) # 5000
# and 10000 testing image-label pairs
print(len(mnist.test.labels)) # 10000
"""
Explanation: Description of the MNIST dataset:
Let me describe the data which was downloaded/loaded in the cell above:
The data is the familiar MNIST dataset, a classic dataset for supervised machine learning, consisting of images of hand-drawn digits and their labels.
End of explanation
"""
# hyper params:
n_hidden_recog_1=500 # 1st layer encoder neurons
n_hidden_recog_2=500 # 2nd layer encoder neurons
n_hidden_gener_1=500 # 1st layer decoder neurons
n_hidden_gener_2=500 # 2nd layer decoder neurons
n_input=784 # MNIST data input (img shape: 28*28)
n_z=20
transfer_fct=tf.nn.softplus
learning_rate=0.001
batch_size=100
# CREATE NETWORK
# 1) input placeholder
x = tf.placeholder(tf.float32, [None, n_input])
# 2) weights and biases variables
def xavier_init(fan_in, fan_out, constant=1):
""" Xavier initialization of network weights"""
# https://stackoverflow.com/questions/33640581/how-to-do-xavier-initialization-on-tensorflow
low = -constant*np.sqrt(6.0/(fan_in + fan_out))
high = constant*np.sqrt(6.0/(fan_in + fan_out))
return tf.random_uniform((fan_in, fan_out),
minval=low, maxval=high,
dtype=tf.float32)
wr_h1 = tf.Variable(xavier_init(n_input, n_hidden_recog_1))
wr_h2 = tf.Variable(xavier_init(n_hidden_recog_1, n_hidden_recog_2))
wr_out_mean = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
wr_out_log_sigma = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
br_b1 = tf.Variable(tf.zeros([n_hidden_recog_1], dtype=tf.float32))
br_b2 = tf.Variable(tf.zeros([n_hidden_recog_2], dtype=tf.float32))
br_out_mean = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
br_out_log_sigma = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
wg_h1 = tf.Variable(xavier_init(n_z, n_hidden_gener_1))
wg_h2 = tf.Variable(xavier_init(n_hidden_gener_1, n_hidden_gener_2))
wg_out_mean = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
# wg_out_log_sigma = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
bg_b1 = tf.Variable(tf.zeros([n_hidden_gener_1], dtype=tf.float32))
bg_b2 = tf.Variable(tf.zeros([n_hidden_gener_2], dtype=tf.float32))
bg_out_mean = tf.Variable(tf.zeros([n_input], dtype=tf.float32))
# 3) recognition network
# use recognition network to predict mean and (log) variance of (latent) Gaussian distribution z (n_z dimensional)
r_layer_1 = transfer_fct(tf.add(tf.matmul(x, wr_h1), br_b1))
r_layer_2 = transfer_fct(tf.add(tf.matmul(r_layer_1, wr_h2), br_b2))
z_mean = tf.add(tf.matmul(r_layer_2, wr_out_mean), br_out_mean)
z_sigma = tf.add(tf.matmul(r_layer_2, wr_out_log_sigma), br_out_log_sigma)
# 4) do sampling on recognition network to get latent variables
# draw one n_z dimensional sample (for each input in batch), from normal distribution
eps = tf.random_normal((batch_size, n_z), 0, 1, dtype=tf.float32)
# scale that set of samples by predicted mu and epsilon to get samples of z, the latent distribution
# z = mu + sigma*epsilon
z = tf.add(z_mean, tf.mul(tf.sqrt(tf.exp(z_sigma)), eps))
# 5) use generator network to predict mean of Bernoulli distribution of reconstructed input
g_layer_1 = transfer_fct(tf.add(tf.matmul(z, wg_h1), bg_b1))
g_layer_2 = transfer_fct(tf.add(tf.matmul(g_layer_1, wg_h2), bg_b2))
x_reconstr_mean = tf.nn.sigmoid(tf.add(tf.matmul(g_layer_2, wg_out_mean), bg_out_mean))
"""
Explanation: Building the model
Let me briefly explain in words how a VAE works. First, like all generative models, VAEs are inherently unsupervised: here we aren't going to use the "labels" at all, and we aren't going to use the validation or testing sets. How a VAE works is that it takes an input, in this case an MNIST image, maps it onto an internal encoding/embedding, and then uses that encoding to reconstruct the original input image.
In the specific case of a deep VAE, we have a neural network (the encoder/recognition network) that transforms the input, x, into a set of parameters that determine a probability distribution. This probability distribution is then sampled from to get a vector z. That vector is fed into a second network (the decoder/generator network) which attempts to output the original image, x, with maximum accuracy. The whole thing can be trained iteratively with simple gradient descent via backpropagation.
Note that one n_z-dimensional sample of the latent distribution will be drawn for each input image
End of explanation
"""
# DEFINE LOSS AND OPTIMIZER
# The loss is composed of two terms:
# 1.) The reconstruction loss (the negative log probability
# of the input under the reconstructed Bernoulli distribution
# induced by the decoder in the data space).
# This can be interpreted as the number of "nats" required
# for reconstructing the input when the activation in latent
# is given.
# Adding 1e-10 to avoid evaluation of log(0.0)
reconstr_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_reconstr_mean) + (1-x) * tf.log(1e-10 + 1 - x_reconstr_mean), 1)
# 2.) The latent loss, which is defined as the Kullback Leibler divergence
#     between the distribution in latent space induced by the encoder on
#     the data and some prior. This acts as a kind of regularizer.
#     This can be interpreted as the number of "nats" required
#     for transmitting the latent space distribution given
#     the prior.
latent_loss = -0.5 * tf.reduce_sum(1 + z_sigma - tf.square(z_mean) - tf.exp(z_sigma), 1)
# Since reconstr_loss and latent_loss are in terms of "nats" they
#     should be on similar scales, so we can add them together.
cost = tf.reduce_mean(reconstr_loss + latent_loss) # average over batch
# 3) set up optimizer (use ADAM optimizer)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
"""
Explanation: Loss and Optimizer
The loss function is in two parts. One that measures how well the reconstruction "fits" the original image, and one that measures the "complexity" of the latent distribution (acting as a regularizer).
End of explanation
"""
# INITIALIZE VARIABLES AND TF SESSION
init = tf.initialize_all_variables()
sess = tf.InteractiveSession()
sess.run(init)
"""
Explanation: Graphs, sessions and state-initialization
The above cells determine the tensorflow "graph", i.e. the graph consisting of op-nodes connected by directed tensor-edges. But the graph above cannot be run yet: it hasn't been initialized into a "working graph" with stateful variables. The tensorflow tutorials refer to the graph as something like a "blueprint".
The session contains the state for a specific graph, including the current values of all of the tensors in the graph. Once the session is created, the variables are initialized. From now on, the variables have state, and can be read/viewed, or updated iteratively according to the data and the optimization op.
End of explanation
"""
# INTERACTIVE TESTING
# get a batch of inputs
batch_xs, _ = mnist.train.next_batch(batch_size)
# get the reconstructed mean for those inputs
# (on a completely untrained model)
# below is the "interactive version" to "read" a tensor (given an input in the feed_dict)
print(x_reconstr_mean.eval(feed_dict={x: batch_xs}).shape)
print(x_reconstr_mean.eval(feed_dict={x: batch_xs}))
print("----------------------------------------------------")
# below is the same but using explicit reference to the session state
print(tf.get_default_session().run(x_reconstr_mean, feed_dict={x: batch_xs}).shape)
print(tf.get_default_session().run(x_reconstr_mean, feed_dict={x: batch_xs}))
# and this is also the same thing:
print(sess.run(x_reconstr_mean, feed_dict={x: batch_xs}).shape)
print(sess.run(x_reconstr_mean, feed_dict={x: batch_xs}))
"""
Explanation: Interactive tensorflow
Because this is an ipython notebook and we are doing exploratory machine learning, we should be able to interact with the tensorflow model interactively. Note the line above sess = tf.InteractiveSession() which allows us to do things like some_tensor.eval(feed_dict={x: batch_xs}). Some tensors can simply be evaluated without passing in an input, but only if they do not depend on an input - remember, tensorflow is a directed graph of op-nodes and tensor-edges that denote dependencies. Variables like weights and biases do not depend on any input, but something like the reconstructed input depends on the input.
End of explanation
"""
# make a saver object
saver = tf.train.Saver()
# save the current state of the session to file "model.ckpt"
save_path = saver.save(sess, "model.ckpt")
# a binding to a session can be restored from a file with the following
restored_session = tf.Session()
saver.restore(restored_session, "model.ckpt")
# prove the two sessions are equal:
# here we evaluate a weight variable that does not depend on input, hence no feed_dict is necessary
print(sess.run(wr_h1))
print("-------------------------------------------------------------------")
print(restored_session.run(wr_h1))
"""
Explanation: Interpreting the above interactive results
Note that the three printed reconstructions are not the same!! This isn't due to some bug. This is because the model is inherently stochastic: given the same input, the reconstructions will be different, especially here with an untrained model.
Also note for the above tensors we printed, they are 100 by 784 dim tensors. The first dim, 100 is the batch size. The second dim, 784, is the flattened pixels in the reconstructed image.
This interactive testing can be useful for tensorflow noobs, who want to make sure that their tensors and ops are compatible as they build their evaluation graph one tensor/op at a time.
Remember, when using interactive tensorflow you have access to the current state of the model at any time. Just use eval. Even without interactive mode, it is still easy, you just have to make sure to keep the session which holds the tensorflow graph's state.
Adding a Saver to record checkpoints
The session contains state, namely the current values of all of the variables (parameters) in the model. We can save this state to a file periodically, which can be used to restart training after an interruption, or to reload the session/model for any reason (including perhaps deploying a trained model to be evaluated on another machine)
End of explanation
"""
# remake the whole graph with scopes, names, summaries
wr_h1 = tf.Variable(xavier_init(n_input, n_hidden_recog_1))
wr_h2 = tf.Variable(xavier_init(n_hidden_recog_1, n_hidden_recog_2))
wr_out_mean = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
wr_out_log_sigma = tf.Variable(xavier_init(n_hidden_recog_2, n_z))
br_b1 = tf.Variable(tf.zeros([n_hidden_recog_1], dtype=tf.float32))
br_b2 = tf.Variable(tf.zeros([n_hidden_recog_2], dtype=tf.float32))
br_out_mean = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
br_out_log_sigma = tf.Variable(tf.zeros([n_z], dtype=tf.float32))
wg_h1 = tf.Variable(xavier_init(n_z, n_hidden_gener_1))
wg_h2 = tf.Variable(xavier_init(n_hidden_gener_1, n_hidden_gener_2))
wg_out_mean = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
# wg_out_log_sigma = tf.Variable(xavier_init(n_hidden_gener_2, n_input))
bg_b1 = tf.Variable(tf.zeros([n_hidden_gener_1], dtype=tf.float32))
bg_b2 = tf.Variable(tf.zeros([n_hidden_gener_2], dtype=tf.float32))
bg_out_mean = tf.Variable(tf.zeros([n_input], dtype=tf.float32))
# 3) recognition network
# use recognition network to predict mean and (log) variance of (latent) Gaussian distribution z (n_z dimensional)
with tf.name_scope('recognition-encoding'):
r_layer_1 = transfer_fct(tf.add(tf.matmul(x, wr_h1), br_b1))
r_layer_2 = transfer_fct(tf.add(tf.matmul(r_layer_1, wr_h2), br_b2))
z_mean = tf.add(tf.matmul(r_layer_2, wr_out_mean), br_out_mean)
z_sigma = tf.add(tf.matmul(r_layer_2, wr_out_log_sigma), br_out_log_sigma)
# 4) do sampling on recognition network to get latent variables
# draw one n_z dimensional sample (for each input in batch), from normal distribution
eps = tf.random_normal((batch_size, n_z), 0, 1, dtype=tf.float32)
# scale that set of samples by predicted mu and epsilon to get samples of z, the latent distribution
# z = mu + sigma*epsilon
z = tf.add(z_mean, tf.mul(tf.sqrt(tf.exp(z_sigma)), eps))
# 5) use generator network to predict mean of Bernoulli distribution of reconstructed input
with tf.name_scope('generator-decoding'):
g_layer_1 = transfer_fct(tf.add(tf.matmul(z, wg_h1), bg_b1))
g_layer_2 = transfer_fct(tf.add(tf.matmul(g_layer_1, wg_h2), bg_b2))
x_reconstr_mean = tf.nn.sigmoid(tf.add(tf.matmul(g_layer_2, wg_out_mean), bg_out_mean))
reconstr_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_reconstr_mean) + (1-x) * tf.log(1e-10 + 1 - x_reconstr_mean), 1)
latent_loss = -0.5 * tf.reduce_sum(1 + z_sigma - tf.square(z_mean) - tf.exp(z_sigma), 1)
batch_cost = reconstr_loss + latent_loss
cost = tf.reduce_mean(batch_cost) # average over batch
stdev_cost = tf.sqrt(tf.reduce_mean(tf.square(batch_cost - cost)))
tf.scalar_summary("mean_cost", cost)
tf.scalar_summary("stddev_cost", stdev_cost)
tf.histogram_summary("histo_cost", batch_cost)
# 3) set up optimizer (use ADAM optimizer)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# this is the "summary op"
merged = tf.merge_all_summaries()
# INITIALIZE VARIABLES AND TF SESSION
sess = tf.Session()
sess.run(tf.initialize_all_variables())
# this is the summary writer
train_writer = tf.train.SummaryWriter("summaries/train", sess.graph)
"""
Explanation: Use scopes and names to organize your tensorflow graph
group tensors with with tf.name_scope('hidden') as scope:
and name tensors with the name property like a = tf.constant(5, name='alpha') prior to graph visualization.
Use summaries to record how your model changes over training-time
Lets reorganize our computational graph with scopes and names, and add some summaries:
There are three summaries which we will add to the "cost" tensor.
We will record its mean, standard deviation and histogram over a batch.
Below is the whole graph from above again. Lots of copying and pasting.
End of explanation
"""
# DO TRAINING
learning_rate=0.001
batch_size=100
training_epochs=10
display_step=1
save_ckpt_dir="model/"
if not os.path.exists(save_ckpt_dir):
    os.mkdir(save_ckpt_dir)
saver = tf.train.Saver()
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0
total_batch = int(n_samples / batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, _ = mnist.train.next_batch(batch_size)
        # Fit training using batch data (we don't want to eval summaries every time during training)
        _, loss = sess.run([optimizer, cost],
                                  feed_dict={x: batch_xs})
        avg_cost += loss / n_samples * batch_size
if epoch % display_step == 0:
# At the end of every epoch print loss, save model, and save summaries
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
        # run op to get summaries (we don't want to run this every time during training)
summary = sess.run(merged,
feed_dict={x: batch_xs},
options=run_options,
run_metadata=run_metadata)
# write summaries
train_writer.add_run_metadata(run_metadata, 'epoch%03d' % epoch)
train_writer.add_summary(summary, epoch)
# write ckpt
save_path = saver.save(sess, save_ckpt_dir + "model_" + str(epoch) + ".ckpt")
        # print average cost for the epoch
print("Epoch:", '%04d' % (epoch+1), "average loss:", "{:.9f}".format(avg_cost))
"""
Explanation: Run training with summaries
Now we finally run the training loop. We loop over all batches, passing the feed dictionary into the session run command with feed_dict. After every display_step epochs:
we print the epoch number and the average cost to the screen.
we save a checkpoint of the model.
we save summaries of the loss to a file.
End of explanation
"""
# DO RECONSTRUCTION / PLOTTING
def do_reconstruction(sess):
x_sample = mnist.test.next_batch(100)[0]
x_reconstruct = sess.run(x_reconstr_mean, feed_dict={x: x_sample})
plt.figure(figsize=(8, 12))
examples_to_plot = 3
for i in range(examples_to_plot):
plt.subplot(examples_to_plot, 2, 2*i + 1)
plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1)
plt.title("Test input")
plt.colorbar()
plt.subplot(examples_to_plot, 2, 2*i + 2)
plt.imshow(x_reconstruct[i].reshape(28, 28), vmin=0, vmax=1)
plt.title("Reconstruction")
plt.colorbar()
plt.tight_layout()
new_sess = tf.Session()
new_sess.run(tf.initialize_all_variables())
# do reconstruction before training
do_reconstruction(new_sess)
# do reconstruction after training
do_reconstruction(sess)
"""
Explanation: Evaluate the training
One thing that we can do to evaluate training is plot the reconstructed image from a new (untrained) session and compare that visually to the reconstruction that can be achieved with our trained model. See the plots of those reconstructions below. The difference is immediately apparent!
End of explanation
"""
|
guyk1971/deep-learning | sentiment-rnn/Sentiment_RNN.ipynb | mit | import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We only care about the sigmoid output from the very last step; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
from collections import Counter
word_counts = Counter(words) # create a sort of dictionary k,v=word,count
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True) # list of words sorted by their word count
# Create your dictionary that maps vocab words to integers here
vocab_to_int = {word: ii+1 for ii, word in enumerate(sorted_vocab)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [[vocab_to_int[w] for w in review.split()] for review in reviews]
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = [int(w=='positive') for w in labels.split()]
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out the zero-length review, and drop its label so the two lists stay aligned
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = [labels[ii] for ii in non_zero_idx]
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
import numpy as np

seq_len = 200
features = np.array([[0] * max(seq_len - len(r), 0) + r[:seq_len] for r in reviews_ints])
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
features[:10,:100]
"""
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
"""
split_frac = 0.8
split_idx = int(split_frac * features.shape[0])
train_x, val_x = features[:split_idx, :], features[split_idx:, :]
train_y, val_y = np.array(labels[:split_idx]), np.array(labels[split_idx:])

# split the held-out portion in half to form the validation and test sets
test_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:test_idx, :], val_x[test_idx:, :]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
import tensorflow as tf

n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding; the dictionary starts at 1
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(dtype=tf.int32,shape=[None,seq_len], name='inputs')
labels_ = tf.placeholder(dtype=tf.int32,shape=[None,1],name='labels')
keep_prob = tf.placeholder(dtype=tf.float32,name='keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words,embed_size),-1,1))
embed = tf.nn.embedding_lookup(embedding,inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm,output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop]*lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output, which we'll be using as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
import lxml.etree

tree = lxml.etree.parse('cwec_v3.0.xml')
root = tree.getroot()
# Remove namespaces from XML.
for elem in root.getiterator():
if not hasattr(elem.tag, 'find'): continue # (1)
i = elem.tag.find('}') # Counts the number of characters up to the '}' at the end of the XML namespace within the XML tag
if i >= 0:
elem.tag = elem.tag[i+1:] # Starts the tag a character after the '}'
for table in root:
print (table.tag)
"""
Explanation: Introduction
The purpose of this notebook is to build a field parser and extract the contents of various fields in the CWE 3.0 XML file so that the field content can be directly analyzed and stored into a database. The raw XML file can be downloaded at http://cwe.mitre.org/data/xml/cwec_v3.0.xml.zip. Guided by the CWE Introduction notebook, this notebook will focus on the detail structure under the Weakness table and how the parser functions work in order to extract two formats of field: fields with no nesting element and fields with nesting structure.
Although the overall structure of the CWE XML file has been documented in the CWE Introduction notebook, that notebook is built on version 2.9. Therefore, the following differences in the weakness table between versions 2.9 and 3.0 can be observed:
The order of the four tables has changed, and the weakness table comes first in version 3.0.
Several fields are removed or renamed in version 3.0: Time_of_Introduction, Maintenance_Notes, Causal_Nature, Research_Gaps, White_Box_Definitions, Terminology_Notes, Other_Notes, Enabling_Factors_for_Exploitation, Relevant_Properties
End of explanation
"""
def write_dict_to_csv(output_file,csv_header,dict_data):
'''
Create a CSV file with headers and write a dictionary;
If the file already exists, only append a dictionary.
Args:
output_file -- name of the output csv file
csv_header -- the header of the output csv file.
dict_data -- the dictionary that will be writen into the CSV file. The number of
element in the dictionary should be equal to or lower than the number of
headers of the CSV file.
Outcome:
a new csv file with headers and one row that includes the information from the dictionary;
or an existing CSV file with a new row that includes the information from the dictionary
'''
# create the file if it does not exist; if it exists, open it for appending
with open(output_file, 'a') as csv_file:
writer = csv.DictWriter(csv_file, fieldnames=csv_header,lineterminator='\n')
# check whether the csv file is empty
if csv_file.tell()==0:
# if empty, write header and the dictionary
writer.writeheader()
writer.writerow(dict_data)
else:
# if not empty, only write the dictionary
writer.writerow(dict_data)
"""
Explanation: Format and Field Parser
Although there are various kinds of field, in general, there are only three ways to store the field information in the CWE XML file: 1) fields with no nesting element, 2) fields with nesting element, 3) fields with attribute information.
|Format|CWE Field Example|
|:----:|:---------:|
|Fields with no nesting element|Description, Extended_Description, Likelihood_Of_Exploit, Background_Details|
|Fields with nesting element|Potential_Mitigations, Weakness_Ordinalities, Common_Consequences, Alternate_Terms, Modes_Of_Introduction, Affected_Resources, Observed_Examples, Functional_Areas, Content_History, Detection_Methods|
|Fields with attribute information|Demonstrative_Exampls, Taxonomy_Mappings, Applicable_Platforms, References,Related Attack Pattern|
We will discuss the detail structure and how to parse the first two types of field below.
1.1 Fields with no nesting element
Typically, the fields in this format will keep the information directly under the field element, without any nesting structure or attributes. For example, Description and Extended_Description are fields in this format. There is no further nesting structure under the field element, so it cannot be expanded (no plus sign on the left)
However, when parsing Extended_Description in cwe-1007, there are nesting html elements under Extended_Description element. In this case, we will remove the html tag and concatenate the contents under separate html elements
<b>General case</b>:
<b>HTML elements under Extended_Description</b>:
1.2 Parser function for field with no nesting element
Before introducing the parser function, we need a function that can write the dictionary that stores the field content to a CSV file. Function <b> write_dict_to_csv </b> will append the given dictionary to the end of the CSV file. If the file does not exist, the function will create a CSV file and take the csv_header as the header of this CSV file.
End of explanation
"""
def no_nesting_field_parser(target_field, root):
'''
Parse the field with no nesting element from cwec_v3.0.xml file and output the information to a csv file.
Args:
target_field -- the target field that will be parsed through this function. The format of this arg should be string.
root -- the root element of the whole parsed tree.
Outcome:
a csv file named by the field name. Each row will include the following information:
- cwe_id: The CWE identifier
- field: The name of the target field
- (field name)_content: The text information stored under the target field. The header varies depending on field.
For example, the header will be 'description_content' if parsing 'Description' field
'''
# define the path of target field. Here we select all element nodes that the tag is the target field
target_field_path='Weakness/./'+target_field
# extract weakness table in the XML
weakness_table = root[0]
#define the headers
field_header=target_field.lower()+'_content'
output_header=['cwe_id','field',field_header]
#define path of the output file
output_path=target_field+'.csv'
# for each target field node
for field in weakness_table.findall(target_field_path):
# extract cwe_id from the parent node of the target field node
cwe_id=field.getparent().attrib.get('ID')
# extract the content under the target field
field_entry_content=field.text
# in case there are nested html tags under the field
        if field_entry_content is None or field_entry_content.isspace():
            # extract the content under each html tag and concatenate
            field_entry_content = ' '.join(
                html_entry.text.strip() for html_entry in field if html_entry.text)
# build the dictionary that is used to write
field_entry_dict=dict()
field_entry_dict['cwe_id']=cwe_id
field_entry_dict['field']=target_field
field_entry_dict[field_header.lower()]= field_entry_content.strip()
# write the dictionary with headers to a CSV file
write_dict_to_csv(output_path,output_header, field_entry_dict)
des='Description'
extended_des='Extended_Description'
likelihood='Likelihood_Of_Exploit'
background='Background_Details'
no_nesting_field_parser(des,root)
"""
Explanation: Given the target field, function <b> no_nesting_field_parser </b> will extract the contents within the target field element and write cwe_id and content into a CSV file named by the target field. Each row in the output CSV file will include the following information:
- cwe_id: The CWE identifier
- field: The name of the target field
- (field name)_content: The text information stored under the target field. The header of this column varies depending on the field. For example, the header will be 'description_content' if parsing 'Description' field
The following fields have been tested successfully: Description, Extended_Description, Likelihood_Of_Exploit, Background_Details.
End of explanation
"""
import pandas as pd

no_nesting_field=pd.read_csv('Description.csv')
no_nesting_field.head(5)
"""
Explanation: After running the above code, the file named 'Description.csv' should be created in the same directory as this notebook. To parse other fields, change the name of the target field.
End of explanation
"""
def nesting_field_parser(target_field, root):
'''
    Parse the field with nested elements from the cwec_v3.0.xml file and output the information to a csv file.
The following fields have been tested successfully:
-Potential_Mitigations, Weakness_Ordinalities
-Common_Consequences, Alternate_Terms
-Modes_Of_Introduction, Affected_Resources
-Observed_Examples, Functional_Areas
-Content_History, Detection_Methods
Args:
target_field -- the target field that will be parsed through this function. The format of this arg should be string.
root -- the root element of the parsed tree.
Outcome:
a csv file named by the field name. Each row will include the following headers:
- cwe_id: The CWE identifier
- field: The name of the target field
- tags under the field node, but exclude all html tags, including li, div, ul,and p.
'''
# define the path of target field. Here we select all element nodes that the tag is the target field
target_field_path='Weakness/./'+target_field
# extract weakness table in the XML
weakness_table = root[0]
# define the headers
output_header=['cwe_id','field']
# define path of the output file
output_path=target_field+'.csv'
### 1.Generate all possible tags(column header in csv file) under the target field tree
# for each target field node
for field in weakness_table.findall(target_field_path):
# for each field entry, in case there are multiple field entries under the target field node
for field_entry in list(field):
# traverse all entry_element nodes under each field entry
for entry_element in field_entry.iter():
# generate tag and content of each entry_element
entry_element_tag=entry_element.tag
entry_element_content=entry_element.text
# exclude the tag of field entry node, since .iter() will return field entry node and its entry_element nodes
                if entry_element_content is None or entry_element_content.isspace():
continue
# exclude all html tags, such as li,div,ul,p
if entry_element_tag=='li' or entry_element_tag=='div' or entry_element_tag=='p' or entry_element_tag=='ul':
continue
# append the tag to the output_header list if it does not exist in the list
if entry_element_tag.lower() not in output_header:
output_header.append(entry_element_tag.lower())
### 2.Extract the content from the nesting target field
# for each target field node
for field in weakness_table.findall(target_field_path):
# extract cwe_id from the attribute of its parent node
cwe_id=field.getparent().attrib.get('ID')
# for each field entry node under the target field node
for field_entry in list(field):
# the dictionary that will be written to a CSV file
entry_element_dict=dict()
entry_element_dict['cwe_id']=cwe_id
entry_element_dict['field']=target_field
# traverse all entry_element nodes under each field entry
for entry_element in field_entry.iter():
# generate tag and content of each entry_element
entry_element_tag=entry_element.tag
entry_element_content=entry_element.text
# skip the first field entry node
                if entry_element_content is None or entry_element_content.isspace():
continue
#if the tag is html tag, such as li, div, p, and ul, the tag will be replaced by its parent tag
while(entry_element_tag.lower() not in output_header):
entry_element_tag=entry_element.getparent().tag.lower()
entry_element=entry_element.getparent()
#if there are multiple entry_element entries using a same tag, all content will be concatenated
if entry_element_tag.lower() in entry_element_dict:
# add the concatenated content into the dictionary
entry_element_dict[entry_element_tag.lower()]=entry_element_dict[entry_element_tag.lower()]+ ';'+entry_element_content
# if not, directly add the entry_element content into the dictionary
else:
entry_element_dict[entry_element_tag.lower()]=entry_element_content
# write the dictionary with headers to a CSV file
write_dict_to_csv(output_path,output_header,entry_element_dict)
mitigation="Potential_Mitigations"
consequence='Common_Consequences'
mode='Modes_Of_Introduction'
example='Observed_Examples'
content='Content_History'
weakness='Weakness_Ordinalities'
detection='Detection_Methods'
term='Alternate_Terms'
resources='Affected_Resources'
function_area='Functional_Areas'
nesting_field_parser(consequence, root)
"""
Explanation: 2.1 Fields with nesting elements
Typically, the fields in this format will have a nested structure under the target field element. To understand the nesting structure, here we use the Common_Consequences field in cwe-1004 as the example. Under the Common_Consequences element, there are two field entries named 'Consequence', which represent two different individual consequences associated with the weakness. Under each consequence element, there are three entry elements (scope, impact, and note), which have the contents that our parser is intended to extract.
<b>General Case </b>:
To understand the structure and the variable naming in the coding part, I generalized the structure of the fields in this format. Here is the general format:
<Target_Field>
<Field_Entry1>
<Entry_Element1> the content function will parse</Entry_Element1>
<Entry_Element2> the content function will parse</Entry_Element2>
<Entry_Element3> the content function will parse</Entry_Element3>
<Entry_Element4> the content function will parse</Entry_Element4>
...
</Field_Entry1>
<Field_Entry2>
<Entry_Element1> the content function will parse</Entry_Element1>
<Entry_Element2> the content function will parse</Entry_Element2>
<Entry_Element3> the content function will parse</Entry_Element3>
<Entry_Element4> the content function will parse</Entry_Element4>
...
</Field_Entry2>
...
</Target_Field>
Here are two special cases when parsing the nesting fields.
1) Muliple entry elements may share a same tag:
For example, a consequence of a weakness may have only one impact and note but multiple scopes. Therefore, in this case, the parser will extract and concatenate the contents that share a same tag under an individual field entry element.
2) HTML elements under entry element:
For some unknown reason, the content we aim to extract will be stored in html elements, such as li, div, ul, and p. Therefore, in this case, the parser will extract and concatenate the content stored under html tags within a same entry element. After extracting the content, the parser will also recover the tag information from their parent elements.
2.2 Parser Function for fields with nesting elements
Given the target field, function <b> nesting_field_parser </b> will extract the content within the target field element and write cwe_id and content into a CSV file named by the target field. Each row in the output CSV file will include the following information:
- cwe_id: The CWE identifier
- field: The name of the target field
- tags under the field node, but exclude all html tags, including li, div, ul,and p.
There are two parts within function <b> nesting_field_parser </b>. The first part will generate all possible tags as the headers of the output CSV file by traversing all child element tags under each field entry. It is very important for the first part, because once the function writes the headers, it is computationally expensive to edit the first row later - we have to read all content of the original file and re-write to a new file. The function will exclude all HTML tags, such as li, div, ul, and p, because these html tags are meaningless and repetitive. The second part will extract the content from the nesting target field and then write to a CSV file by using function <b> write_dict_to_csv </b>.
The following fields have been tested successfully:
Potential_Mitigations, Weakness_Ordinalities
Common_Consequences, Alternate_Terms
Modes_Of_Introduction, Affected_Resources
Observed_Examples, Functional_Areas
Content_History, Detection_Methods
End of explanation
"""
nesting_field=pd.read_csv('Common_Consequences.csv')
nesting_field.head(5)
"""
Explanation: After running the above code, the file named 'Common_Consequences.csv' should be created in the same directory as this notebook. To parse other fields, change the name of the target field.
End of explanation
"""
from __future__ import print_function, division
import thinkbayes2
from sympy import symbols
"""
Explanation: Exploration of a problem interpreting binary test results
Copyright 2015 Allen Downey
MIT License
End of explanation
"""
p, q, s, t1, t2 = symbols('p q s t1 t2')
"""
Explanation: p is the prevalence of a condition
s is the sensititivity of the test
The false positive rate is known to be either t1 (with probability q) or t2 (with probability 1-q)
End of explanation
"""
a, b, c, d, e, f, g, h = symbols('a b c d e f g h')
"""
Explanation: I'll use a through h for each of the 8 possible conditions.
End of explanation
"""
a = q * p * s
b = q * p * (1-s)
c = q * (1-p) * t1
d = q * (1-p) * (1-t1)
e = (1-q) * p * s
f = (1-q) * p * (1-s)
g = (1-q) * (1-p) * t2
h = (1-q) * (1-p) * (1-t2)
pmf1 = thinkbayes2.Pmf()
pmf1['sick'] = p*s
pmf1['notsick'] = (1-p)*t1
pmf1
nc1 = pmf1.Normalize()
nc1.simplify()
pmf2 = thinkbayes2.Pmf()
pmf2['sick'] = p*s
pmf2['notsick'] = (1-p)*t2
pmf2
nc2 = pmf2.Normalize()
nc2.simplify()
pmf_t = thinkbayes2.Pmf({t1:q, t2:1-q})
pmf_t[t1] *= nc1
pmf_t[t2] *= nc2
pmf_t.Normalize()
pmf_t.Mean().simplify()
d1 = dict(q=0.5, p=0.1, s=0.5, t1=0.2, t2=0.8)
pmf_t.Mean().evalf(subs=d1)
d2 = dict(q=0.75, p=0.1, s=0.5, t1=0.4, t2=0.8)
pmf_t.Mean().evalf(subs=d2)
pmf_t[t1].evalf(subs=d2)
x = pmf_t[t1] * pmf1['sick'] + pmf_t[t2] * pmf2['sick']
x.simplify()
x.evalf(subs=d1)
x.evalf(subs=d2)
t = q * t1 + (1-q) * t2
pmf = thinkbayes2.Pmf()
pmf['sick'] = p*s
pmf['notsick'] = (1-p)*t
pmf
pmf.Normalize()
pmf['sick'].simplify()
pmf['sick'].evalf(subs=d1)
pmf['sick'].evalf(subs=d2)
gold = thinkbayes2.Pmf()
gold['0 sick t1'] = q * (1-p)**2 * t1**2
gold['1 sick t1'] = q * 2*p*(1-p) * s * t1
gold['2 sick t1'] = q * p**2 * s**2
gold['0 sick t2'] = (1-q) * (1-p)**2 * t2**2
gold['1 sick t2'] = (1-q) * 2*p*(1-p) * s * t2
gold['2 sick t2'] = (1-q) * p**2 * s**2
gold.Normalize()
p0 = gold['0 sick t1'] + gold['0 sick t2']
p0.evalf(subs=d1)
p0.evalf(subs=d2)
t = q * t1 + (1-q) * t2
collapsed = thinkbayes2.Pmf()
collapsed['0 sick'] = (1-p)**2 * t**2
collapsed['1 sick'] = 2*p*(1-p) * s * t
collapsed['2 sick'] = p**2 * s**2
collapsed.Normalize()
collapsed['0 sick'].evalf(subs=d1)
collapsed['0 sick'].evalf(subs=d2)
pmf1 = thinkbayes2.Pmf()
pmf1['0 sick'] = (1-p)**2 * t1**2
pmf1['1 sick'] = 2*p*(1-p) * s * t1
pmf1['2 sick'] = p**2 * s**2
nc1 = pmf1.Normalize()
pmf2 = thinkbayes2.Pmf()
pmf2['0 sick'] = (1-p)**2 * t2**2
pmf2['1 sick'] = 2*p*(1-p) * s * t2
pmf2['2 sick'] = p**2 * s**2
nc2 = pmf2.Normalize()
pmf_t = thinkbayes2.Pmf({t1:q, t2:1-q})
pmf_t[t1] *= nc1
pmf_t[t2] *= nc2
pmf_t.Normalize()
x = pmf_t[t1] * pmf1['0 sick'] + pmf_t[t2] * pmf2['0 sick']
x.simplify()
x.evalf(subs=d1), p0.evalf(subs=d1)
x.evalf(subs=d2), p0.evalf(subs=d2)
"""
Explanation: And here are the probabilities of the conditions.
End of explanation
"""
pmf_t = thinkbayes2.Pmf({t1:q, t2:1-q})
pmf_t.Mean().simplify()
"""
Explanation: pmf_t represents the distribution of t
End of explanation
"""
d1 = dict(q=0.5, p=0.1, s=0.5, t1=0.2, t2=0.8)
pmf_t.Mean().evalf(subs=d1)
d2 = dict(q=0.75, p=0.1, s=0.5, t1=0.4, t2=0.8)
pmf_t.Mean().evalf(subs=d2)
"""
Explanation: I'll consider two sets of parameters, d1 and d2, which have the same mean value of t.
End of explanation
"""
def prob(yes, no):
return yes / (yes + no)
"""
Explanation: prob takes two numbers that represent odds in favor and returns the corresponding probability.
End of explanation
"""
res = prob(a+e, c+g)
res.simplify()
"""
Explanation: Scenario A
In the first scenario, there are two kinds of people in the world, or two kinds of tests, so there are four outcomes that yield positive tests: two true positives (a and e) and two false positives (c and g).
We can compute the probability of a true positive given a positive test:
End of explanation
"""
res.evalf(subs=d1)
res.evalf(subs=d2)
"""
Explanation: In this scenario, the two parameter sets yield the same answer:
End of explanation
"""
p1 = prob(a, c)
p1.simplify()
p1.evalf(subs=d1)
p2 = prob(e, g)
p2.simplify()
p2.evalf(subs=d1)
pmf_p = thinkbayes2.Pmf([p1, p2])
pmf_p.Mean().simplify()
pmf_p.Mean().evalf(subs=d1)
p1.evalf(subs=d2), p2.evalf(subs=d2), pmf_p.Mean().evalf(subs=d2)
"""
Explanation: Scenario B
Now suppose instead of two kinds of people, or two kinds of tests, the distribution of t represents our uncertainty about t. That is, we are only considering one test, and we think the false positive rate is the same for everyone, but we don't know what it is.
In this scenario, we need to think about the sampling process that brings patients to see doctors. There are three possibilities:
B1. Only patients who test positive see a doctor.
B2. All patients see a doctor with equal probability, regardless of test results and regardless of whether they are sick or not.
B3. Patients are more or less likely to see a doctor, depending on the test results and whether they are sick or not.
Scenario B1
If patients only see a doctor after testing positive, the doctor doesn't learn anything about t just because a patient tests positive. In that case, the doctor should compute the conditional probabilities:
p1 is the probability the patient is sick given a positive test and t1
p2 is the probability the patient is sick given a positive test and t2
We can compute p1 and p2, form pmf_p, and compute its mean:
End of explanation
"""
def update(pmf):
post = pmf.Copy()
post[p1] *= (a + c) / q
post[p2] *= (e + g) / (1-q)
post.Normalize()
return post
"""
Explanation: Scenario B2
If all patients see a doctor, the doctor can learn about t based on the number of positive and negative tests.
The likelihood of a positive test given t1 is (a+c)/q
The likelihood of a positive test given t2 is (e+g)/(1-q)
update takes a pmf and updates it with these likelihoods
End of explanation
"""
post = update(pmf_p)
post[p1].simplify()
post.Mean().simplify()
"""
Explanation: post is what we should believe about p after seeing one patient with a positive test:
End of explanation
"""
post.Mean().evalf(subs=d1)
"""
Explanation: When q is 0.5, the posterior mean is p:
End of explanation
"""
post.Mean().evalf(subs=d2)
"""
Explanation: But other distributions of t yield different values.
End of explanation
"""
post2 = update(post)
post2.Mean().simplify()
"""
Explanation: Let's see what we get after seeing two patients
End of explanation
"""
post2.Mean().evalf(subs=d1)
post2.Mean().evalf(subs=d2)
post3 = update(post2)
post3.Mean().evalf(subs=d1)
post3.Mean().evalf(subs=d2)
"""
Explanation: Positive tests are more likely under t2 than t1, so each positive test makes it more likely that t=t2. So the expected value of p converges on p2.
End of explanation
"""
mne-tools/mne-tools.github.io | 0.22/_downloads/3674b896fc4e4a279156fa5c0f61aea8/plot_10_preprocessing_overview.ipynb | bsd-3-clause
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(0, 60).load_data() # just use a fraction of data for speed here
"""
Explanation: Overview of artifact detection
This tutorial covers the basics of artifact detection, and introduces the
artifact detection tools available in MNE-Python.
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>:
End of explanation
"""
ssp_projectors = raw.info['projs']
raw.del_proj()
"""
Explanation: What are artifacts?
Artifacts are parts of the recorded signal that arise from sources other than
the source of interest (i.e., neuronal activity in the brain). As such,
artifacts are a form of interference or noise relative to the signal of
interest. There are many possible causes of such interference, for example:
Environmental artifacts
Persistent oscillations centered around the AC power line frequency_
(typically 50 or 60 Hz)
Brief signal jumps due to building vibration (such as a door slamming)
Electromagnetic field noise from nearby elevators, cell phones, the
geomagnetic field, etc.
Instrumentation artifacts
Electromagnetic interference from stimulus presentation (such as EEG
sensors picking up the field generated by unshielded headphones)
Continuous oscillations at specific frequencies used by head position
indicator (HPI) coils
Random high-amplitude fluctuations (or alternatively, constant zero
signal) in a single channel due to sensor malfunction (e.g., in surface
electrodes, poor scalp contact)
Biological artifacts
Periodic QRS_-like signal patterns (especially in magnetometer
channels) due to electrical activity of the heart
Short step-like deflections (especially in frontal EEG channels) due to
eye movements
Large transient deflections (especially in frontal EEG channels) due to
blinking
Brief bursts of high frequency fluctuations across several channels due
to the muscular activity during swallowing
There are also some cases where signals from within the brain can be
considered artifactual. For example, if a researcher is primarily interested
in the sensory response to a stimulus, but the experimental paradigm involves
a behavioral response (such as button press), the neural activity associated
with the planning and executing the button press could be considered an
artifact relative to signal of interest (i.e., the evoked sensory response).
<div class="alert alert-info"><h4>Note</h4><p>Artifacts of the same genesis may appear different in recordings made by
different EEG or MEG systems, due to differences in sensor design (e.g.,
passive vs. active EEG electrodes; axial vs. planar gradiometers, etc).</p></div>
What to do about artifacts
There are 3 basic options when faced with artifacts in your recordings:
Ignore the artifact and carry on with analysis
Exclude the corrupted portion of the data and analyze the remaining data
Repair the artifact by suppressing artifactual part of the recording
while (hopefully) leaving the signal of interest intact
There are many different approaches to repairing artifacts, and MNE-Python
includes a variety of tools for artifact repair, including digital filtering,
independent components analysis (ICA), Maxwell filtering / signal-space
separation (SSS), and signal-space projection (SSP). Separate tutorials
demonstrate each of these techniques for artifact repair. Many of the
artifact repair techniques work on both continuous (raw) data and on data
that has already been epoched (though not necessarily equally well); some can
be applied to memory-mapped_ data while others require the data to be
copied into RAM. Of course, before you can choose any of these strategies you
must first detect the artifacts, which is the topic of the next section.
Artifact detection
MNE-Python includes a few tools for automated detection of certain artifacts
(such as heartbeats and blinks), but of course you can always visually
inspect your data to identify and annotate artifacts as well.
We saw in the introductory tutorial <tut-overview> that the example
data includes :term:SSP projectors <projector>, so before we look at
artifacts let's set aside the projectors in a separate variable and then
remove them from the :class:~mne.io.Raw object using the
:meth:~mne.io.Raw.del_proj method, so that we can inspect our data in its
original, raw state:
End of explanation
"""
mag_channels = mne.pick_types(raw.info, meg='mag')
raw.plot(duration=60, order=mag_channels, n_channels=len(mag_channels),
remove_dc=False)
"""
Explanation: Low-frequency drifts
Low-frequency drifts are most readily detected by visual inspection using the
basic :meth:~mne.io.Raw.plot method, though it is helpful to plot a
relatively long time span and to disable channel-wise DC shift correction.
Here we plot 60 seconds and show all the magnetometer channels:
End of explanation
"""
fig = raw.plot_psd(tmax=np.inf, fmax=250, average=True)
# add some arrows at 60 Hz and its harmonics:
for ax in fig.axes[1:]:
freqs = ax.lines[-1].get_xdata()
psds = ax.lines[-1].get_ydata()
for freq in (60, 120, 180, 240):
idx = np.searchsorted(freqs, freq)
ax.arrow(x=freqs[idx], y=psds[idx] + 18, dx=0, dy=-12, color='red',
width=0.1, head_width=3, length_includes_head=True)
"""
Explanation: Low-frequency drifts are readily removed by high-pass filtering at a fairly
low cutoff frequency (the wavelength of the drifts seen above is probably
around 20 seconds, so in this case a cutoff of 0.1 Hz would probably suppress
most of the drift).
Power line noise
Power line artifacts are easiest to see on plots of the spectrum, so we'll
use :meth:~mne.io.Raw.plot_psd to illustrate.
End of explanation
"""
ecg_epochs = mne.preprocessing.create_ecg_epochs(raw)
ecg_epochs.plot_image(combine='mean')
"""
Explanation: Here we see narrow frequency peaks at 60, 120, 180, and 240 Hz — the power
line frequency of the USA (where the sample data was recorded) and its 2nd,
3rd, and 4th harmonics. Other peaks (around 25 to 30 Hz, and the second
harmonic of those) are probably related to the heartbeat, which is more
easily seen in the time domain using a dedicated heartbeat detection function
as described in the next section.
Heartbeat artifacts (ECG)
MNE-Python includes a dedicated function
:func:~mne.preprocessing.find_ecg_events in the :mod:mne.preprocessing
submodule, for detecting heartbeat artifacts from either dedicated ECG
channels or from magnetometers (if no ECG channel is present). Additionally,
the function :func:~mne.preprocessing.create_ecg_epochs will call
:func:~mne.preprocessing.find_ecg_events under the hood, and use the
resulting events array to extract epochs centered around the detected
heartbeat artifacts. Here we create those epochs, then show an image plot of
the detected ECG artifacts along with the average ERF across artifacts. We'll
show all three channel types, even though EEG channels are less strongly
affected by heartbeat artifacts:
End of explanation
"""
avg_ecg_epochs = ecg_epochs.average().apply_baseline((-0.5, -0.2))
"""
Explanation: The horizontal streaks in the magnetometer image plot reflect the fact that
the heartbeat artifacts are superimposed on low-frequency drifts like the one
we saw in an earlier section; to avoid this you could pass
baseline=(-0.5, -0.2) in the call to
:func:~mne.preprocessing.create_ecg_epochs.
You can also get a quick look at the
ECG-related field pattern across sensors by averaging the ECG epochs together
via the :meth:~mne.Epochs.average method, and then using the
:meth:mne.Evoked.plot_topomap method:
End of explanation
"""
avg_ecg_epochs.plot_topomap(times=np.linspace(-0.05, 0.05, 11))
"""
Explanation: Here again we can visualize the spatial pattern of the associated field at
various times relative to the peak of the ECG response:
End of explanation
"""
avg_ecg_epochs.plot_joint(times=[-0.25, -0.025, 0, 0.025, 0.25])
"""
Explanation: Or, we can get an ERP/F plot with :meth:~mne.Evoked.plot or a combined
scalp field maps and ERP/F plot with :meth:~mne.Evoked.plot_joint. Here
we've specified the times for scalp field maps manually, but if not provided
they will be chosen automatically based on peaks in the signal:
End of explanation
"""
eog_epochs = mne.preprocessing.create_eog_epochs(raw, baseline=(-0.5, -0.2))
eog_epochs.plot_image(combine='mean')
eog_epochs.average().plot_joint()
"""
Explanation: Ocular artifacts (EOG)
Similar to the ECG detection and epoching methods described above, MNE-Python
also includes functions for detecting and extracting ocular artifacts:
:func:~mne.preprocessing.find_eog_events and
:func:~mne.preprocessing.create_eog_epochs. Once again we'll use the
higher-level convenience function that automatically finds the artifacts and
extracts them in to an :class:~mne.Epochs object in one step. Unlike the
heartbeat artifacts seen above, ocular artifacts are usually most prominent
in the EEG channels, but we'll still show all three channel types. We'll use
the baseline parameter this time too; note that there are many fewer
blinks than heartbeats, which makes the image plots appear somewhat blocky:
End of explanation
"""
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/tensorflow_extended/solutions/Simple_TFX_Pipeline_for_Vertex_Pipelines.ipynb | apache-2.0
# Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2"
"""
Explanation: Creating Simple TFX Pipeline for Vertex Pipelines
Learning objectives
Prepare example data.
Create a pipeline.
Run the pipeline on Vertex Pipelines.
Introduction
In this notebook, you will create a simple TFX pipeline and run it using
Google Cloud Vertex Pipelines. This notebook is based on the TFX pipeline
you built in
Simple TFX Pipeline Tutorial.
If you are not familiar with TFX and you have not read that tutorial yet, you
should read it before proceeding with this notebook.
Google Cloud Vertex Pipelines helps you to automate, monitor, and govern
your ML systems by orchestrating your ML workflow in a serverless manner. You
can define your ML pipelines using Python with TFX, and then execute your
pipelines on Google Cloud. See
Vertex Pipelines introduction
to learn more about Vertex Pipelines.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Install python packages
You will install required Python packages including TFX and KFP to author ML
pipelines and submit jobs to Vertex Pipelines.
End of explanation
"""
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Did you restart the runtime?
You can restart the runtime with the following cell.
End of explanation
"""
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
"""
Explanation: Check the package versions
End of explanation
"""
GOOGLE_CLOUD_PROJECT = 'qwiklabs-gcp-01-2e305ff9c72b' # Replace this with your Project-ID
GOOGLE_CLOUD_REGION = 'us-central1' # Replace this with your region
GCS_BUCKET_NAME = 'qwiklabs-gcp-01-2e305ff9c72b' # Replace this with your Cloud Storage bucket
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
"""
Explanation: Set up variables
You will set up some variables used to customize the pipelines below. Following
information is required:
GCP Project id. See
Identifying your project id.
GCP Region to run pipelines. For more information about the regions that
Vertex Pipelines is available in, see the
Vertex AI locations guide.
Google Cloud Storage Bucket to store pipeline outputs.
Enter required values in the cell below before running it.
End of explanation
"""
!gcloud config set project {GOOGLE_CLOUD_PROJECT}
PIPELINE_NAME = 'penguin-vertex-pipelines'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for input data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# This is the path where your model will be pushed for serving.
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
"""
Explanation: Set gcloud to use your project.
End of explanation
"""
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
"""
Explanation: Prepare example data
You will use the same
Palmer Penguins dataset
as
Simple TFX Pipeline Tutorial.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. You will build a classification model which predicts the
species of penguins.
You need to make your own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, you need to create a directory and copy the dataset into it
on GCS.
End of explanation
"""
# TODO 1
# Review the contents of the CSV file
!gsutil cat {DATA_ROOT}/penguins_processed.csv | head
"""
Explanation: Take a quick look at the CSV file.
End of explanation
"""
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since you're not generating or creating a schema, you will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
},
_LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
"""
Explanation: Create a pipeline
TFX pipelines are defined using Python APIs. You will define a pipeline which
consists of three components, CsvExampleGen, Trainer and Pusher. The pipeline
and model definition is almost the same as
Simple TFX Pipeline Tutorial.
The only difference is that you don't need to set metadata_connection_config
which is used to locate
ML Metadata database. Because
Vertex Pipelines uses a managed metadata service, users don't need to take care
of it, and you don't need to specify the parameter.
Before actually defining the pipeline, you need to write the model code for the
Trainer component first.
Write model code.
You will use the same model code as in the
Simple TFX Pipeline Tutorial.
End of explanation
"""
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
"""
Explanation: Copy the module file to GCS which can be accessed from the pipeline components.
Because model training happens on GCP, you need to upload this model definition.
Otherwise, you might want to build a container image including the module file
and use the image to run the pipeline.
End of explanation
"""
# TODO 2
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified because you don't need `metadata_path` argument.
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
module_file: str, serving_model_dir: str,
) -> tfx.dsl.Pipeline:
"""Creates a three component penguin pipeline with TFX."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
# Following three components will be included in the pipeline.
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components)
"""
Explanation: Write a pipeline definition
You will define a function to create a TFX pipeline.
End of explanation
"""
import os
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
# Following function will write the pipeline definition to PIPELINE_DEFINITION_FILE.
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
serving_model_dir=SERVING_MODEL_DIR))
"""
Explanation: Run the pipeline on Vertex Pipelines.
You used LocalDagRunner which runs on local environment in
Simple TFX Pipeline Tutorial.
TFX provides multiple orchestrators to run your pipeline. In this tutorial you
will use the Vertex Pipelines together with the Kubeflow V2 dag runner.
You need to define a runner to actually run the pipeline. You will compile
your pipeline into our pipeline definition format using TFX APIs.
End of explanation
"""
# TODO 3
# docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
import logging
logging.getLogger().setLevel(logging.INFO)
aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)
# Create a job to submit the pipeline
job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
display_name=PIPELINE_NAME)
job.submit()
"""
Explanation: The generated definition file can be submitted using the kfp client.
End of explanation
"""
emsi/ml-toolbox | random/Atmosfera/MODEL-11-conv.ipynb | agpl-3.0
# Input data
with open("X-sequences.pickle", 'rb') as f:
X = pickle.load(f)
with open("Y.pickle", 'rb') as f:
Y = pickle.load(f)
# Keep only the categories below, change the rest to -1
lista = [2183,
#325,
37, 859, 2655, 606, 412, 2729, 1683, 1305]
# Y=[y if y in lista else -1 for y in Y]
mask = [y in lista for y in Y]
import itertools
X = np.array(list(itertools.compress(X, mask)))
Y = np.array(list(itertools.compress(Y, mask)))
np.unique(Y)
"""
Explanation: ```
Read data
with open("Atmosfera-Incidents-2017.pickle", 'rb') as f:
incidents = pickle.load(f)
Convert root_service to ints and save
Y=[int(i) for i in incidents[1:,3]]
with open("Y.pickle", 'wb') as f:
pickle.dump(Y, f, pickle.HIGHEST_PROTOCOL)
```
End of explanation
"""
root_services=np.sort(np.unique(Y))
# construct a reverse index of the main categories
services_idx={root_services[i]: i for i in range(len(root_services))}
# Convert the labels to indices
Y=[services_idx[y] for y in Y]
Y=to_categorical(Y)
Y.shape
top_words = 5000
classes=Y[0,].shape[0]
print(classes)
# max_length (98th percentile is 476), pad the rest
max_length=500
X=sequence.pad_sequences(X, maxlen=max_length)
"""
Explanation: In this version of the experiment, Y contains root_service - 44 unique main categories.
Let's convert them to numbers in the range 0-43
End of explanation
"""
# create the model
embedding_vecor_length = 60
_input = Input(shape=(max_length,), name='input')
embedding=Embedding(top_words, embedding_vecor_length, input_length=max_length)(_input)
conv1 = Conv1D(filters=128, kernel_size=1, padding='same', activation='relu')
conv2 = Conv1D(filters=128, kernel_size=2, padding='same', activation='relu')
conv3 = Conv1D(filters=128, kernel_size=3, padding='same', activation='relu')
conv4 = Conv1D(filters=128, kernel_size=4, padding='same', activation='relu')
conv5 = Conv1D(filters=32, kernel_size=5, padding='same', activation='relu')
conv6 = Conv1D(filters=32, kernel_size=6, padding='same', activation='relu')
conv1 = conv1(embedding)
glob1 = GlobalAveragePooling1D()(conv1)
conv2 = conv2(embedding)
glob2 = GlobalAveragePooling1D()(conv2)
conv3 = conv3(embedding)
glob3 = GlobalAveragePooling1D()(conv3)
conv4 = conv4(embedding)
glob4 = GlobalAveragePooling1D()(conv4)
conv5 = conv5(embedding)
glob5 = GlobalAveragePooling1D()(conv5)
conv6 = conv6(embedding)
glob6 = GlobalAveragePooling1D()(conv6)
merge = concatenate([glob1, glob2, glob3, glob4, glob5, glob6])
x = Dropout(0.2)(merge)
x = BatchNormalization()(x)
x = Dense(300, activation='relu')(x)
x = Dropout(0.2)(x)
x = BatchNormalization()(x)
pred = Dense(classes, activation='softmax')(x)
model = Model(inputs=[_input], outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])#, decay=0.0000001)
print(model.summary())
# Callbacks
early_stop_cb = EarlyStopping(monitor='val_loss', patience=20, verbose=1)
checkpoit_cb = ModelCheckpoint(NAME+".h5", save_best_only=True)
# Print the batch number at the beginning of every batch.
batch_print_cb = LambdaCallback(on_batch_begin=lambda batch, logs: print (".",end=''),
on_epoch_end=lambda batch, logs: print (batch))
# Plot the loss after every epoch.
plot_loss_cb = LambdaCallback(on_epoch_end=lambda epoch, logs:
print (epoch, logs))
#plt.plot(np.arange(epoch), logs['loss']))
print("done")
history = model.fit(
X,#_train,
Y,#_train,
# initial_epoch=1200,
epochs=1500,
batch_size=2048,
#validation_data=(X_valid,Y_valid),
validation_split=0.25,
callbacks=[early_stop_cb, checkpoit_cb, batch_print_cb, plot_loss_cb],
verbose=0
)
#history=model.fit(X_train, Y_train, validation_data=(X_test, Y_test), nb_epoch=3, batch_size=512)
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper right')
# plt.title('model loss (log scale)')
# plt.yscale('log')
plt.show()
history2 = model.fit(
X,#_train,
Y,#_train,
initial_epoch=10000,
epochs=10010,
batch_size=1024,
#validation_data=(X_valid,Y_valid),
validation_split=0.1,
callbacks=[early_stop_cb, checkpoit_cb, batch_print_cb, plot_loss_cb],
verbose=0
)
score=model.evaluate(X_test,Y_test, verbose=0)
print("OOS %s: %.2f%%" % (model.metrics_names[1], score[1]*100))
print("OOS %s: %.2f" % (model.metrics_names[0], score[0]))
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history2.history['acc'])
plt.plot(history2.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history2.history['loss'])
plt.plot(history2.history['val_loss'])
plt.title('model loss (log scale)')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper right')
plt.yscale('log')
plt.show()
history3 = model.fit(
X,#_train,
Y,#_train,
initial_epoch=60,
epochs=90,
batch_size=1024,
#validation_data=(X_valid,Y_valid),
validation_split=0.3,
callbacks=[early_stop_cb, checkpoit_cb, batch_print_cb, plot_loss_cb],
verbose=0
)
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history3.history['acc'])
plt.plot(history3.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history3.history['loss'])
plt.plot(history3.history['val_loss'])
plt.title('model loss (log scale)')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper right')
plt.yscale('log')
plt.show()
"""
Explanation: slice in half even/odds to nullify time differences
X_train=X[0:][::2] # even
X_test=X[1:][::2] # odds
Y_train=np.array(Y[0:][::2]) # even
Y_test=np.array(Y[1:][::2]) # odds
if split_valid_test:
# Split "test" in half for validation and final testing
X_valid=X_test[:len(X_test)/2]
Y_valid=Y_test[:len(Y_test)/2]
X_test=X_test[len(X_test)/2:]
Y_test=Y_test[len(Y_test)/2:]
else:
X_valid=X_test
Y_valid=Y_test
End of explanation
"""
KMFleischer/PyEarthScience | Tutorial/04_PyNGL_basics.ipynb | mit
import Ngl
"""
Explanation: 4. PyNGL basics
PyNGL is a Python language module for creating 2D high performance visualizations of scientific data. It is based on NCL graphics but still not as extensive as NCL's last version 6.6.2.
The aim of this notebook is to give you an introduction to PyNGL, read your data from file, create plots, and write the graphics output to a specified graphics file format.
Content
1. Import PyNGL
2. Graphics output
3. Plot types
4. Plot resources
5. Text
6. Annotations
7. Panels
<br>
4.1 Import PyNGL
The Python module of PyNGL is called Ngl.
End of explanation
"""
wks = Ngl.open_wks('png', 'plot_test1')
"""
Explanation: To create a visualization of your data you need to:
- read the data
- open a graphics output channel called workstation
- generate the graphic
- save the graphic on disk
How to read the data was explained in 03_Xarray_PyNIO_basics; we will use it here without further explanation.
4.2 Graphics output
Let us start by opening a graphics output channel and linking it to the variable wks. You can name it whatever you like, but wks is the name NCL users choose most often.
The workstation types are
- ps
- eps
- epsi
- pdf
- newpdf (creates smaller output)
- x11
In our first example we want to use PNG as output format to make it possible to display the plots in the notebook. To open a workstation we use the function Ngl.open_wks. The name of the graphics output file shall be plot_test1.png. The suffix .png will be appended automatically to the basename of the file name.
End of explanation
"""
wks_res = Ngl.Resources()
wks_res.wkPaperSize = 'A4'
wks = Ngl.open_wks('pdf', 'plot_test_A4', wks_res)
"""
Explanation: That is of course a very simple case, but if you want to specify the size or orientation of the graphics you have to work with resources. NCL users already know how to deal with resources, and they shouldn't be difficult for Python users either: resources are simply attributes of a Python object, and by setting them the user controls a wide range of settings for PyNGL functions.
Let us say we want to generate a PDF file of size DIN A4. First, we have to assign a PyNGL object variable wks_res (you can name it as you like) with the function Ngl.Resources() to store the size settings for the workstation. Notice that we now use Ngl.open_wks with three parameters, and we have to delete the first workstation.
End of explanation
"""
wks_res = Ngl.Resources()
wks_res.wkPaperWidthF = 8.5 # in inches
wks_res.wkPaperHeightF = 14.0 # in inches
wks = Ngl.open_wks('pdf', 'plot_test_legal', wks_res)
"""
Explanation: There are many wk resources available (see NCL's wk resources page). Read the resources manual carefully, because PyNGL and NCL behave differently depending on the selected output format.
The next example shows how to set the size of the output to legal by giving the width and height in inches instead of wkPaperSize = 'legal'. It will create a PDF file with width 8.5 inches, height 14.0 inches, and portrait orientation (the default).
End of explanation
"""
wks_res = Ngl.Resources()
wks_res.wkPaperSize = 'legal'
wks_res.wkOrientation = 'landscape'
wks = Ngl.open_wks('pdf', 'plot_test_legal_landscape', wks_res)
"""
Explanation: Now, we want to change the orientation of the legal size PDF file to landscape.
End of explanation
"""
Ngl.delete_wks(wks)
"""
Explanation: Ok, we want to start with a clean script. We delete the workstation from above using the function Ngl.delete_wks.
End of explanation
"""
|
satishgoda/learning | web/html.ipynb | mit | from IPython.display import HTML, Javascript
HTML("Hello World")
"""
Explanation: HTML and w3 Schools
End of explanation
"""
!gvim draggable_1.html
HTML(filename='./draggable_1.html')
"""
Explanation: Supporting Technologies
jQuery
Examples
Draggable Elements
https://www.w3schools.com/tags/att_global_draggable.asp
https://www.w3schools.com/tags/tryit.asp?filename=tryhtml5_global_draggable
https://www.w3schools.com/html/html5_draganddrop.asp
End of explanation
"""
%%javascript
$("p#drag1").css("border", "1px double red")
"""
Explanation: Using the %%javascript cell magic and jQuery, we can modify DOM nodes' display attributes using their ids!!
End of explanation
"""
Javascript("""$("p#drag2").css("border", "2px double green")""")
"""
Explanation: One can also use the IPython.display.Javascript class to run code!!
End of explanation
"""
|
mathinmse/mathinmse.github.io | Lecture-13-Integral-Transforms.ipynb | mit | import sympy as sp
sp.init_printing(use_latex=True)
# symbols we will need below
x,y,z,t,c = sp.symbols('x y z t c')
# note the special declaration that omega is a positive number
omega = sp.symbols('omega', positive=True)
"""
Explanation: Lecture 13: Integral Transforms, D/FFT and Electron Microscopy
Background
An integral transform maps a function of one independent variable into a function of another independent variable using a kernel.
$$g(\alpha) = \int_{a}^{b} f(t) K(\alpha,t) dt $$
The function $f(t)$ is transformed to a new function $g(\alpha)$ through the definite integral. A similarity to the dot product of functions is evident in this form and this operation can be thought of as a mapping or projection of $f(t)$ into a different independent variable $\alpha$. Existence, integrability and inversion of integral transform operations are important in the study of this topic, although not covered in these notes.
Two examples of integral transforms, the Laplace and Fourier, are discussed in this lecture. It is typical to use the Laplace transform to remove the time dependence from Fick's second law in diffusion problems. The Fourier transform is used in the study of diffraction under certain conditions.
What skills will I learn?
The definition of an integral transform.
The algorithm for computing the discrete Fourier transform
How diffraction patterns can be used to create phase contrast images in electron microscopy
What steps should I take?
Compute the Fourier transform of different aperture functions.
Practice taking Fourier transforms and discrete Fourier transforms using the DIY problems.
Load an image into a Numpy array. You will need to learn the structure of an image file to understand this fully.
Compute the Fourier transform of the image.
Select different regions of the Fourier transform and reconstruct the image.
Reading and Reference
Advanced Engineering Mathematics, E. Kreyszig, John Wiley and Sons, 2010
Numerical Recipes, W. Press, Cambridge University Press, 1986
M. De Graef and M. McHenry, Structure of Materials, Cambridge University Press, 2nd ed.
C. Hammond, The Basics of Crystallography and Diffraction, Oxford Science Publications, 4th ed.
To assist in this lecture some special symbols in Python and sympy are reviewed:
End of explanation
"""
sp.I**2
"""
Explanation: Complex Number Review
A reminder that $i$ is the square root of negative one. This is how you specify $i$ in SymPy, which is different from Python's built-in complex data type.
End of explanation
"""
sp.log(sp.E)
"""
Explanation: The natural logarithm of $e$ is $1$:
End of explanation
"""
sp.Integral(sp.E**(sp.I*omega*t),t)
# 'omega', positive=True
sp.integrate(sp.E**(sp.I*omega*t),t)
"""
Explanation: In SymPy there are two ways to deal with integration. If you would like to represent an unevaluated integral, you can use the Integral function. If you want to compute the integration of an expression you can use the integrate function.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 8, 4
p = sp.Piecewise((0,x<-1),(1,x<1),(0,True))
sp.plot(p);
"""
Explanation: Here we assume there is no zero frequency (as we are dividing by $\omega$) - hence the assumption positive=True in the symbol definition above. (Try replacing $\omega$ with $y$ and inspect the expression returned by integrate.)
The Fourier Transform
As the domain of the periodicity increases, the frequency spectrum required to represent the function becomes more finely divided. Recall the argument of the trigonometric terms in the functions of the Fourier series:
$$ \frac{n \pi (\omega +c)}{d} $$
where n is the order of the frequency component, c the offset relative to the origin, and d the domain width. If we let the domain width go to infinity (implying that the function is not periodic) then an integral sum is required rather than a discrete summation. The infinite, non-periodic function and its frequency spectrum are related by the Fourier transform defined by:
$$ \hat{f}(\omega) = \sqrt{\frac{1}{2\pi}} \int^{+\infty}_{-\infty} f(t) \exp[-i \omega t] dt $$
This results in a mapping of the function f(t) into frequency space.
The real or complex and even or odd nature of the function $f(t)$ determines if the transformed function is even, odd, real, or complex. For the purposes of materials crystal structures in this lecture we will be using even and real functions.
Diffraction from An Aperture
A useful physical problem requiring use of the Fourier transform is diffraction. In this problem we will use a top-hat function to represent the location of an infinity of wave sources from an aperture. We use the sp.Piecewise function to generate a "tophat" function for the Fourier transform.
End of explanation
"""
sp.Integral(1*sp.exp(-sp.I*2*omega*x),(x,-c,c))
"""
Explanation: At some distance from the aperture we place a detector that measures the combined intensity of all the wave sources, however due to the finite width of the slit each wave travels a different distance to the detector. The phase difference between the waves at the detector is given by the Fourier transform of the aperture function when the Fraunhofer approximation is valid.
This aperture function is even and real so we expect our transformed function to also be even and real. We use the definition of the integral transform above to write an explicit integral statement of the Fourier transform of the top-hat function above. The integral is $1$ between $c$ and $-c$ and zero elsewhere - so we can integrate just the non-zero part. This is integrated as:
End of explanation
"""
a = sp.sqrt(1/(2*sp.pi))*sp.integrate(1*sp.exp(-sp.I*2*omega*x),(x,-c,c))
a
"""
Explanation: Calling explicitly for the integration and assigning the result to a:
End of explanation
"""
solution = sp.expand(a.rewrite(sp.sin))
solution
"""
Explanation: This does not (at first glance) appear to be a real function due to the two exponential terms, but we can use some of the algebraic methods built into SymPy to help. We can ask for this form using sines and cosines with the rewrite method. Furthermore - we can simplify it further with the expand function. Trial and error may be required to determine the best combination and ordering of algebraic manipulations.
End of explanation
"""
sp.plot(solution.subs(c,1));
sp.plot(solution.subs(c,1)**2);
"""
Explanation: Here we can use the subs (substitution) method to set the value of c. I plotted the square of the function since the intensity of a diffracted wave is related to the time averaged energy transferred by the wave. This is proportional to the amplitude squared. As our function is real valued, we can take a shortcut and just plot the square.
End of explanation
"""
compositeIntegral = sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),(x,1,2)) + \
sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),(x,-2,-1))
compositeIntegral
om = compositeIntegral.doit()
om
"""
Explanation: Diffraction from Two Apertures
We could perform the same integration over two top-hat functions and plot those results.
End of explanation
"""
sp.plot(om.rewrite(sp.sin).expand()**2)
"""
Explanation: The diffracted intensity from this pair of slits would appear as:
End of explanation
"""
def diffractionFunction(d=4.0, w=1.0):
result = sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),\
(x,-(d+w),-(d-w))) + \
sp.sqrt(1/(2*sp.pi))*sp.Integral(1*sp.exp(-sp.I*2*omega*x),\
(x,(d-w),(d+w)))
return result.doit()
sp.expand(diffractionFunction(10.,2.).rewrite(sp.sin))
"""
Explanation: Or we could wrap the integral in a function to explore other parameters:
End of explanation
"""
import numpy as np
def DFT_slow(x):
"""Compute the discrete Fourier Transform of the 1D array x"""
x = np.asarray(x, dtype=float)
N = x.shape[0]
n = np.arange(N)
k = n.reshape((N, 1))
M = np.exp(-2j * np.pi * k * n / N)
return np.dot(M, x)
"""
Explanation: DIY: Complex Numbers
Perform the Fourier transformation on an odd or complex valued function. Plot the real and imaginary parts of both the target function and the transformed functions.
DIY: The Airy Disk
Solve for the diffracted intensity in two dimensions from a circular aperture. It may be easier to do this as a discrete problem using the DFT below.
The Discrete Fourier Transform
The discrete Fourier transform is defined here; the fast algorithm for computing it (the FFT) is regarded as one of the most important advances in computing science in the 20th century. Other resources such as Numerical Recipes, the Python help files and many other websites detail the calculation and implementations.
It is often instructive to review other implementations of the DFT to help you gain experience. I will be modeling this implementation after Jake Vanderplas' blog article here. Following the notation in the blog article:
Forward DFT:
$$X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N}$$
Inverse DFT:
$$x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k e^{i~2\pi~k~n~/~N}$$
In this section of the notebook, we use Vanderplas' description and implementation.
For simplicity, we'll concern ourselves only with the forward transform, as the inverse transform can be implemented in a very similar manner. Taking a look at the DFT expression above, we see that it is nothing more than a straightforward linear operation: a matrix-vector multiplication of $\vec{x}$,
$$\vec{X} = M \cdot \vec{x}$$
with the matrix $M$ given by
$$M_{kn} = e^{-i~2\pi~k~n~/~N}$$
With this in mind, we can compute the DFT using simple matrix multiplication as follows:
End of explanation
"""
x_signal = np.random.random(1024)
np.allclose(DFT_slow(x_signal), np.fft.fft(x_signal))
"""
Explanation: We can use the "all close" function to check if the result from DFT_slow and Numpy are close:
End of explanation
"""
import sympy as sp
from sympy import Matrix
import numpy as np
sp.init_printing()
"""
Explanation: I think it would be instructive to symbolically expand the matrix above so that it is clear how n*k leads to a two dimensional matrix. Switching to sympy symbols to expose the details we can do the following:
End of explanation
"""
x = sp.Matrix(sp.symbols('x0:5'))
n = sp.Matrix(sp.symbols('n0:5')).T
k = sp.Matrix(sp.symbols('k0:5'))
N = sp.symbols('N')
M = (-sp.I*2*sp.pi*k*n/N).applyfunc(sp.exp)
M*x
"""
Explanation: x is the input vector.
k is the wavenumber or frequency.
n is the component of the input vector.
End of explanation
"""
?np.fft # This gives us information on the conventions used in the return values of the functions.
?np.fft.fft # This is the main DFT function we will use.
?np.fft.fftfreq # This is a helper function to prepare a vector of frequencies.
?np.arange # Points in an evenly spaced interval.
"""
Explanation: Each frequency element is projected into each point of the input vector - the matrix links k and n. So - the contribution at each point is a sum of each frequency contribution, similar to the dot product of functions.
DFT with Numpy Functions
In this section we use the FFT submodule of numpy to help in the computation of the DFT.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
samplingRate = 150.0
samplingInterval = 1.0/samplingRate
timeVector = np.arange(0, 1, samplingInterval)
# Print out the first few elements so you can see what is going on:
timeVector[0:10:]
"""
Explanation: This approach is derived from a nice discussion on FFT found on the blog Glowing Python.
First we will divide up time into samplingInterval-sized chunks between 0 and 1. This will aid in getting the x-axis scaled correctly so that frequency can be read directly off the DFT result. If samplingInterval is in seconds, then samplingRate is in Hz. Notice the approach here - we could have done this all in one line, but by intelligently naming our variables and exposing the details of our thoughts the code is more readable:
End of explanation
"""
signalFrequency = 10.0
ourSignal = np.sin(2*np.pi*signalFrequency*timeVector) + 0.5*np.sin(2*np.pi*(2*signalFrequency)*timeVector)
"""
Explanation: Next we decide on the frequency of our signal and create an array to serve as the signal we will work with.
End of explanation
"""
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(timeVector, ourSignal, 'r')
axes.set_xlabel('Time')
axes.set_ylabel('Signal')
axes.set_title('Our Modest Signal');
"""
Explanation: Plotting the input function for clarity:
End of explanation
"""
n = ourSignal.size
frequencies = np.fft.fftfreq(n, d=1.0/samplingRate)
spectrum = np.abs(np.fft.fft(ourSignal))
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.scatter(frequencies, spectrum, c='r', marker='s', alpha=0.4)
axes.set_xlabel('Frequency')
axes.set_ylabel('Amplitude')
axes.set_title('Our Amplitude Spectrum');
"""
Explanation: Using numpy to compute the DFT:
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy.fft import *
def atomic_func(x,y):
param = 64.0
return (1+np.sin(4*(x+y)*2*np.pi/param))*(1+np.sin(2*(x-2*y)*2*np.pi/param))/4
def aperture(X, Y, xoffset, yoffset, size):
return (X-xoffset)**2+(Y-yoffset)**2 > size**2
"""
Explanation: Interactive Microscopy Demonstration (Optional)
Originally developed by C. Carter; translated to Python by D. Lewis
Transmission electron microscopy utilizes diffraction to determine crystal structures and develop contrast in images. In this section of the lecture we will simulate the diffraction pattern of an atomic structure, and then use a simulated diffraction aperture to reconstruct a phase-contrast image.
End of explanation
"""
x = np.arange(0.0,256.0,1.0)
y = np.arange(0.0,256.0,1.0)
X,Y = np.meshgrid(x, y)
Z = atomic_func(X,Y)
"""
Explanation: We define two functions above:
atomic_func is used to provide an image function periodic in two dimensions from which the diffraction pattern will be constructed. This can be thought of as the density of electrons in a solid that is used to approximate a crystal structure.
aperture returns a Boolean array that will be used to mask the diffraction pattern so that individual frequencies can be selected for image reconstruction; its entries are True outside the specified circle and False inside it.
End of explanation
"""
P = np.zeros(Z.shape,dtype=complex)
K = np.zeros(Z.shape,dtype=complex)
K = fftshift(fft2(Z, norm='ortho'))
P = np.copy(K)
P[np.where(aperture(X, Y, 128, 128, 3) & aperture(X, Y, 150, 128, 3))] = 0
"""
Explanation: The Z array holds the atomic image function.
End of explanation
"""
Im = fftshift(ifft2(P))
"""
Explanation: In this cell we create two more NumPy arrays (there are other ways to do this) that have the same shape as Z. The P array holds the processed Fourier spectrum: using NumPy's Boolean indexing, the values of P are set to zero when they lie "outside" the aperture. When we get to the images below you'll see what is meant.
Because plain assignment in Python only creates a new reference to the same array, we call np.copy so that we can modify P without changing K.
From this processed spectrum we will create an image. The K array holds the whole, unmodified Fourier spectrum.
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(30,9))
axes[0].imshow(Z, origin='lower')
axes[1].imshow(abs(K),origin='lower', cmap=plt.get_cmap('pink'))
aperture1 = plt.Circle((128,128),3**2,color='r', fill = False)
aperture2 = plt.Circle((150,128),3**2,color='y', fill = False)
axes[1].add_artist(aperture1)
axes[1].add_artist(aperture2)
axes[2].imshow(abs(Im)**2, origin='lower')
plt.show()
"""
Explanation: Above we reprocess P into the image Im.
End of explanation
"""
|
kubeflow/pipelines | components/gcp/dataproc/submit_hive_job/sample.ipynb | apache-2.0 | %%capture --no-stderr
!pip3 install kfp --upgrade
"""
Explanation: Name
Data preparation using Apache Hive on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, YARN, Hive, Apache
Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Hive job on YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache Hive job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectId | | |
| region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| queries | The queries to execute the Hive job. Specify multiple queries in one string by separating them with semicolons. You do not need to terminate queries with semicolons. | Yes | List | | None |
| query_file_uri | The HCFS URI of the script that contains the Hive queries. | Yes | GCSPath | | None |
| script_variables | Mapping of the query’s variable names to their values (equivalent to the Hive command: SET name="value";). | Yes | Dict | | None |
| hive_job | The payload of a HiveJob | Yes | Dict | | None |
| job | The payload of a Dataproc job. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |
Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this guide.
* Create a new cluster.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the Kubeflow user service account the role roles/dataproc.editor on the project.
Detailed description
This component creates a Hive job using the Dataproc submit job REST API.
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
"""
import kfp.components as comp
dataproc_submit_hive_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/submit_hive_job/component.yaml')
help(dataproc_submit_hive_job_op)
"""
Explanation: Load the component using KFP SDK
End of explanation
"""
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
QUERY = '''
DROP TABLE IF EXISTS natality_csv;
CREATE EXTERNAL TABLE natality_csv (
source_year BIGINT, year BIGINT, month BIGINT, day BIGINT, wday BIGINT,
state STRING, is_male BOOLEAN, child_race BIGINT, weight_pounds FLOAT,
plurality BIGINT, apgar_1min BIGINT, apgar_5min BIGINT,
mother_residence_state STRING, mother_race BIGINT, mother_age BIGINT,
gestation_weeks BIGINT, lmp STRING, mother_married BOOLEAN,
mother_birth_state STRING, cigarette_use BOOLEAN, cigarettes_per_day BIGINT,
alcohol_use BOOLEAN, drinks_per_week BIGINT, weight_gain_pounds BIGINT,
born_alive_alive BIGINT, born_alive_dead BIGINT, born_dead BIGINT,
ever_born BIGINT, father_race BIGINT, father_age BIGINT,
record_weight BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'gs://public-datasets/natality/csv';
SELECT * FROM natality_csv LIMIT 10;'''
EXPERIMENT_NAME = 'Dataproc - Submit Hive Job'
"""
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Setup a Dataproc cluster
Create a new Dataproc cluster (or reuse an existing one) before running the sample code.
Prepare a Hive query
Put your Hive queries in the queries list, or upload your Hive queries into a file saved in a Cloud Storage bucket and then enter the Cloud Storage bucket’s path in query_file_uri. In this sample, we will use a hard coded query in the queries list to select data from a public CSV file from Cloud Storage.
For more details, see the Hive language manual.
Set sample parameters
End of explanation
"""
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Hive job pipeline',
description='Dataproc submit Hive job pipeline'
)
def dataproc_submit_hive_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
queries = json.dumps([QUERY]),
query_file_uri = '',
script_variables = '',
hive_job='',
job='',
wait_interval='30'
):
dataproc_submit_hive_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
queries=queries,
query_file_uri=query_file_uri,
script_variables=script_variables,
hive_job=hive_job,
job=job,
wait_interval=wait_interval)
"""
Explanation: Example pipeline that uses the component
End of explanation
"""
pipeline_func = dataproc_submit_hive_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
"""
Explanation: Compile the pipeline
End of explanation
"""
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
"""
Explanation: Submit the pipeline for execution
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/9e70404d3a55a6b6d1c1877784347c14/mixed_source_space_inverse.ipynb | bsd-3-clause | # Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>
#
# License: BSD-3-Clause
import os.path as op
import matplotlib.pyplot as plt
from nilearn import plotting
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse
# Set dir
data_path = mne.datasets.sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
# Set file names
fname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)
fname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')
fname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)
fname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)
fname_evoked = data_dir + '/sample_audvis-ave.fif'
fname_trans = data_dir + '/sample_audvis_raw-trans.fif'
fname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'
fname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'
"""
Explanation: Compute MNE inverse solution on evoked data with a mixed source space
Create a mixed source space and compute an MNE inverse solution on an evoked
dataset.
End of explanation
"""
labels_vol = ['Left-Amygdala',
'Left-Thalamus-Proper',
'Left-Cerebellum-Cortex',
'Brain-Stem',
'Right-Amygdala',
'Right-Thalamus-Proper',
'Right-Cerebellum-Cortex']
"""
Explanation: Set up our source space
List substructures we are interested in. We select only the
sub structures we want to include in the source space:
End of explanation
"""
src = mne.setup_source_space(subject, spacing='oct5',
add_dist=False, subjects_dir=subjects_dir)
"""
Explanation: Get a surface-based source space, here with few source points for speed
in this demonstration; in general you should use oct6 spacing!
End of explanation
"""
vol_src = mne.setup_volume_source_space(
subject, mri=fname_aseg, pos=10.0, bem=fname_model,
volume_label=labels_vol, subjects_dir=subjects_dir,
add_interpolator=False, # just for speed, usually this should be True
verbose=True)
# Generate the mixed source space
src += vol_src
print(f"The source space contains {len(src)} spaces and "
f"{sum(s['nuse'] for s in src)} vertices")
"""
Explanation: Now we create a mixed src space by adding the volume regions specified in the
list labels_vol. First, read the aseg file and the source space bounds
using the inner skull surface (here using 10mm spacing to save time,
we recommend something smaller like 5.0 in actual analyses):
End of explanation
"""
src.plot(subjects_dir=subjects_dir)
"""
Explanation: View the source space
End of explanation
"""
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True, overwrite=True)
plotting.plot_img(nii_fname, cmap='nipy_spectral')
"""
Explanation: We could write the mixed source space with::
write_source_spaces(fname_mixed_src, src, overwrite=True)
We can also export source positions to NIfTI file and visualize it again:
End of explanation
"""
fwd = mne.make_forward_solution(
fname_evoked, fname_trans, src, fname_bem,
mindist=5.0, # ignore sources<=5mm from innerskull
meg=True, eeg=False, n_jobs=None)
del src # save memory
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
print(f"The fwd source space contains {len(fwd['src'])} spaces and "
f"{sum(s['nuse'] for s in fwd['src'])} vertices")
# Load data
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
"""
Explanation: Compute the fwd matrix
End of explanation
"""
snr = 3.0 # use smaller SNR for raw data
inv_method = 'dSPM' # sLORETA, MNE, dSPM
parc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
loose = dict(surface=0.2, volume=1.)
lambda2 = 1.0 / snr ** 2
inverse_operator = make_inverse_operator(
evoked.info, fwd, noise_cov, depth=None, loose=loose, verbose=True)
del fwd
stc = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori=None)
src = inverse_operator['src']
"""
Explanation: Compute inverse solution
End of explanation
"""
initial_time = 0.1
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori='vector')
brain = stc_vec.plot(
hemi='both', src=inverse_operator['src'], views='coronal',
initial_time=initial_time, subjects_dir=subjects_dir,
brain_kwargs=dict(silhouette=True), smoothing_steps=7)
"""
Explanation: Plot the mixed source estimate
End of explanation
"""
brain = stc.surface().plot(initial_time=initial_time,
subjects_dir=subjects_dir, smoothing_steps=7)
"""
Explanation: Plot the surface
End of explanation
"""
fig = stc.volume().plot(initial_time=initial_time, src=src,
subjects_dir=subjects_dir)
"""
Explanation: Plot the volume
End of explanation
"""
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(
subject, parc=parc, subjects_dir=subjects_dir)
label_ts = mne.extract_label_time_course(
[stc], labels_parc, src, mode='mean', allow_empty=True)
# plot the times series of 2 labels
fig, axes = plt.subplots(1)
axes.plot(1e3 * stc.times, label_ts[0][0, :], 'k', label='bankssts-lh')
axes.plot(1e3 * stc.times, label_ts[0][-1, :].T, 'r', label='Brain-stem')
axes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')
axes.legend()
mne.viz.tight_layout()
"""
Explanation: Process labels
Average the source estimates within each label of the cortical parcellation
and each sub structure contained in the src space
End of explanation
"""
|
slundberg/shap | notebooks/image_examples/image_classification/Explain MobilenetV2 using the Partition explainer (PyTorch).ipynb | mit | import json
import numpy as np
import torchvision
import torch
import torch.nn as nn
import shap
from PIL import Image
"""
Explanation: Explain PyTorch MobileNetV2 using the Partition explainer
In this example we are explaining the output of MobileNetV2 for classifying images into 1000 ImageNet classes.
End of explanation
"""
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torchvision.models.mobilenet_v2(pretrained=True, progress=False)
model.to(device)
model.eval()
X, y = shap.datasets.imagenet50()
# Getting ImageNet 1000 class names
url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
with open(shap.datasets.cache(url)) as file:
class_names = [v[1] for v in json.load(file).values()]
print("Number of ImageNet classes:", len(class_names))
#print("Class names:", class_names)
# Prepare data transformation pipeline
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
def nhwc_to_nchw(x: torch.Tensor) -> torch.Tensor:
if x.dim() == 4:
x = x if x.shape[1] == 3 else x.permute(0, 3, 1, 2)
elif x.dim() == 3:
x = x if x.shape[0] == 3 else x.permute(2, 0, 1)
return x
def nchw_to_nhwc(x: torch.Tensor) -> torch.Tensor:
if x.dim() == 4:
x = x if x.shape[3] == 3 else x.permute(0, 2, 3, 1)
elif x.dim() == 3:
x = x if x.shape[2] == 3 else x.permute(1, 2, 0)
return x
transform= [
torchvision.transforms.Lambda(nhwc_to_nchw),
torchvision.transforms.Lambda(lambda x: x*(1/255)),
torchvision.transforms.Normalize(mean=mean, std=std),
torchvision.transforms.Lambda(nchw_to_nhwc),
]
inv_transform= [
torchvision.transforms.Lambda(nhwc_to_nchw),
torchvision.transforms.Normalize(
mean = (-1 * np.array(mean) / np.array(std)).tolist(),
std = (1 / np.array(std)).tolist()
),
torchvision.transforms.Lambda(nchw_to_nhwc),
]
transform = torchvision.transforms.Compose(transform)
inv_transform = torchvision.transforms.Compose(inv_transform)
def predict(img: np.ndarray) -> torch.Tensor:
img = nhwc_to_nchw(torch.Tensor(img))
img = img.to(device)
output = model(img)
return output
# Check that transformations work correctly
Xtr = transform(torch.Tensor(X))
out = predict(Xtr[1:3])
classes = torch.argmax(out, axis=1).cpu().numpy()
print(f'Classes: {classes}: {np.array(class_names)[classes]}')
"""
Explanation: Loading Model and Data
End of explanation
"""
topk = 4
batch_size = 50
n_evals = 10000
# define a masker that is used to mask out partitions of the input image.
masker_blur = shap.maskers.Image("blur(128,128)", Xtr[0].shape)
# create an explainer with model and image masker
explainer = shap.Explainer(predict, masker_blur, output_names=class_names)
# feed only one image
# here we explain one image using 10,000 evaluations of the underlying model to estimate the SHAP values
shap_values = explainer(Xtr[1:2], max_evals=n_evals, batch_size=batch_size,
outputs=shap.Explanation.argsort.flip[:topk])
(shap_values.data.shape, shap_values.values.shape)
shap_values.data = inv_transform(shap_values.data).cpu().numpy()[0]
shap_values.values = [val for val in np.moveaxis(shap_values.values[0],-1, 0)]
shap.image_plot(shap_values=shap_values.values,
pixel_values=shap_values.data,
labels=shap_values.output_names,
true_labels=[class_names[132]])
"""
Explanation: Explain one image
End of explanation
"""
# define a masker that is used to mask out partitions of the input image.
masker_blur = shap.maskers.Image("blur(128,128)", Xtr[0].shape)
# create an explainer with model and image masker
explainer = shap.Explainer(predict, masker_blur, output_names=class_names)
# feed three images
# here we explain three images using 10,000 evaluations of the underlying model to estimate the SHAP values
shap_values = explainer(Xtr[1:4], max_evals=n_evals, batch_size=batch_size,
outputs=shap.Explanation.argsort.flip[:topk])
(shap_values.data.shape, shap_values.values.shape)
shap_values.data = inv_transform(shap_values.data).cpu().numpy()
shap_values.values = [val for val in np.moveaxis(shap_values.values,-1, 0)]
(shap_values.data.shape, shap_values.values[0].shape)
shap.image_plot(shap_values=shap_values.values,
pixel_values=shap_values.data,
labels=shap_values.output_names)
"""
Explanation: Explain multiple images
End of explanation
"""
|
fsilva/deputado-histogramado | notebooks/Deputado-Histogramado-5.ipynb | gpl-3.0 | %matplotlib inline
import pylab
import matplotlib
import pandas
import numpy
dateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')
sessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)
del sessoes['tamanho']
total0 = numpy.sum(sessoes['sessao'].map(len))
print(total0)
"""
Explanation: Deputado Histogramado
expressao.xyz/deputado/
How to process the sessions of the Portuguese parliament
Index
Gathering the dataset
Counting the most common words
Making histograms
Geographic representations
Simplifying the dataset and exporting to expressao.xyz/deputado/
What happened in the more than 4000 debate sessions of the Portuguese parliament held since 1976?
In this notebook we will try to visualise what happened in the simplest way possible - by counting words and making plots.
To obtain the texts of all the sessions we will use demo.cratica.org, where we can easily access every parliamentary session from 1976 to 2015. Then, with a bit of Python, pandas and matplotlib, we will analyse what happened.
To run this notebook you will need to download it and open it with Jupyter Notebooks (the Anaconda distribution makes it easy to install all the necessary tools - https://www.continuum.io/downloads)
Part 3 - Simplifying the dataset and exporting
Code to load the data from the previous notebook:
End of explanation
"""
def substitui_palavras_comuns(texto):
t = texto.replace('.',' ').replace('\n',' ').replace(',',' ').replace(')',' ').replace('(',' ').replace('!',' ').replace('?',' ').replace(':',' ').replace(';',' ')
t = t.replace(' de ',' ').replace(' que ',' ').replace(' do ',' ').replace(' da ',' ').replace(' sr ',' ').replace(' não ',' ').replace(' em ',' ').replace(' se ','').replace(' para',' ').replace(' os ',' ').replace(' dos ',' ').replace(' uma ',' ').replace(' um ',' ').replace(' as ',' ').replace(' dos ',' ').replace(' no ',' ').replace(' dos ',' ').replace('presidente','').replace(' na ',' ').replace(' por ','').replace('presidente','').replace(' com ',' ').replace(' ao ',' ').replace('deputado','').replace(' das ',' ').replace(' como ','').replace('governo','').replace(' ou ','').replace(' mais ',' ').replace(' assembleia ','').replace(' ser ',' ').replace(' tem ',' ')
t = t.replace(' srs ','').replace(' pelo ','').replace(' mas ','').replace(' foi ','').replace('srs.','').replace('palavra','').replace(' que ','').replace(' sua ','').replace(' artigo ','').replace(' nos ','').replace(' eu ','').replace('muito','').replace('sobre ','').replace('também','').replace('proposta','').replace(' aos ',' ').replace(' esta ',' ').replace(' já ',' ')
t = t.replace(' vamos ',' ').replace(' nesta ',' ').replace(' lhe ',' ').replace(' meu ',' ').replace(' eu ',' ').replace(' vai ',' ')
t = t.replace(' isso ',' ').replace(' dia ',' ').replace(' discussão ',' ').replace(' dizer ',' ').replace(' seus ',' ').replace(' apenas ',' ').replace(' agora ',' ')
t = t.replace(' ª ',' ').replace(' foram ',' ').replace(' pois ',' ').replace(' nem ',' ').replace(' suas ',' ').replace(' deste ',' ').replace(' quer ',' ').replace(' desta ',' ').replace(' qual ',' ')
t = t.replace(' o ',' ').replace(' a ',' ').replace(' e ',' ').replace(' é ',' ').replace(' à ',' ').replace(' s ',' ')
t = t.replace(' - ','').replace(' º ',' ').replace(' n ',' ').replace(' . ',' ').replace(' são ',' ').replace(' está ',' ').replace(' seu ',' ').replace(' há ',' ').replace('orador',' ').replace(' este ',' ').replace(' pela ',' ').replace(' bem ',' ').replace(' nós ',' ').replace('porque','').replace('aqui','').replace(' às ',' ').replace('ainda','').replace('todos','').replace(' só ',' ').replace('fazer',' ').replace(' sem ',' ').replace(' qualquer ',' ').replace(' quanto ',' ').replace(' pode ',' ').replace(' nosso ',' ').replace(' neste ',' ').replace(' ter ',' ').replace(' mesmo ',' ').replace(' essa ',' ').replace(' até ',' ').replace(' me ',' ').replace(' nossa ',' ').replace(' entre ',' ').replace(' nas ',' ').replace(' esse ',' ').replace(' será ',' ').replace(' isto ',' ').replace(' quando ',' ').replace(' seja ',' ').replace(' assim ',' ').replace(' quanto ',' ').replace(' pode ',' ').replace(' é ',' ')
t = t.replace(' ',' ').replace(' ',' ').replace(' ',' ')
return t
sessoes['sessao'] = sessoes['sessao'].map(substitui_palavras_comuns)
"""
Explanation: We have ~800 MB of data. The server where the site's backend will run has only 1 GB of memory, which creates a technical challenge. Since the site's usefulness lies only in counting words or expressions that occur more in certain sessions than in others ('enfermeiro' vs 'deputado'), and not in every session, we can remove the most common words:
End of explanation
"""
import re
from collections import Counter
def agrupa_palavras(texto):
    texto = texto.lower() # process everything in lowercase
    palavras = re.split(';|,|\n| |\(|\)|\?|\!|:',texto) # split into words
    palavras = [x.title() for x in palavras if len(x)>0] # title-case the words and drop empty strings
    return palavras
def conta_palavras(sessoes):
    lista = sessoes['sessao'].map(agrupa_palavras) # build a list of 'word lists', one element per session
    palavras = []
    for l in lista:
        palavras.extend(l) # merge all the 'word lists' into a single list
    return Counter(palavras).most_common(100) # count the most frequent words
x = conta_palavras(sessoes[1:100])
for (y,z) in x:
print(str(str(z)+' x '+y))
"""
Explanation: Counting the most frequent words that still remain:
End of explanation
"""
total = numpy.sum(sessoes['sessao'].map(len))
print(str(100.0*total/total0)+' %')
print(total)
"""
Explanation: And estimating the size reduction:
End of explanation
"""
sessoes.to_csv('sessoes_democratica_clipped.csv')
"""
Explanation: 536 MB. Not bad. Thanks to this reduction a site query now runs in ~4 s instead of 30 s, because the data fits in memory. Note that the word order is preserved, but counting certain expressions becomes problematic ('porto de mar' is now 'porto mar', and counting 'porto mar' also counts occurrences of '(...)Porto. Mar(...)', since we removed the full stops and collapsed consecutive spaces into one). Even so, the dataset is perfectly usable for identifying the sessions in which a given subject was discussed.
Let us then export the CSV file that will be used by the site:
End of explanation
"""
|
jinntrance/MOOC | coursera/ml-clustering-and-retrieval/assignments/2_kmeans-with-text-data_blank.ipynb | cc0-1.0 | import graphlab
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
from scipy.sparse import csr_matrix
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
"""
Explanation: k-means with text data
In this assignment you will
* Cluster Wikipedia documents using k-means
* Explore the role of random initialization on the quality of the clustering
* Explore how results differ after changing the number of clusters
* Evaluate clustering, both quantitatively and qualitatively
When properly executed, clustering uncovers valuable insights from a set of unlabeled documents.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.
End of explanation
"""
wiki = graphlab.SFrame('people_wiki.gl/')
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
"""
Explanation: Load data, extract features
To work with text data, we must first convert the documents into numerical features. As in the first assignment, let's extract TF-IDF features for each article.
End of explanation
"""
def sframe_to_scipy(x, column_name):
'''
Convert a dictionary column of an SFrame into a sparse matrix format where
each (row_id, column_id, value) triple corresponds to the value of
x[row_id][column_id], where column_id is a key in the dictionary.
Example
>>> sparse_matrix, map_key_to_index = sframe_to_scipy(sframe, column_name)
'''
assert x[column_name].dtype() == dict, \
'The chosen column must be dict type, representing sparse data.'
# Create triples of (row_id, feature_id, count).
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack(column_name, ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# 1. Fit the transformer using the above data.
f.fit(x)
# 2. The transform takes 'feature' column and adds a new column 'feature_encoding'.
x = f.transform(x)
# 3. Get the feature mapping.
mapping = f['feature_encoding']
# 4. Get the feature id to use for each key.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
# The conversion will take about a minute or two.
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
tf_idf
"""
Explanation: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are matrices that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.
We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format. We included plenty of comments for the curious; if you'd like, you may skip the next block and treat the function as a black box.
End of explanation
"""
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
"""
Explanation: The above matrix contains a TF-IDF score for each of the 59071 pages in the data set and each of the 547979 unique words.
Normalize all vectors
As discussed in the previous assignment, Euclidean distance can be a poor metric of similarity between documents, as it unfairly penalizes long articles. For a reasonable assessment of similarity, we should disregard the length information and use length-agnostic metrics, such as cosine distance.
The k-means algorithm does not directly work with cosine distance, so we take an alternative route to remove length information: we normalize all vectors to be unit length. It turns out that Euclidean distance closely mimics cosine distance when all vectors are unit length. In particular, the squared Euclidean distance between any two vectors of length one is directly proportional to their cosine distance.
We can prove this as follows. Let $\mathbf{x}$ and $\mathbf{y}$ be normalized vectors, i.e. unit vectors, so that $\|\mathbf{x}\|=\|\mathbf{y}\|=1$. Write the squared Euclidean distance as the dot product of $(\mathbf{x} - \mathbf{y})$ to itself:
\begin{align}
\|\mathbf{x} - \mathbf{y}\|^2 &= (\mathbf{x} - \mathbf{y})^T(\mathbf{x} - \mathbf{y})\\
&= (\mathbf{x}^T \mathbf{x}) - 2(\mathbf{x}^T \mathbf{y}) + (\mathbf{y}^T \mathbf{y})\\
&= \|\mathbf{x}\|^2 - 2(\mathbf{x}^T \mathbf{y}) + \|\mathbf{y}\|^2\\
&= 2 - 2(\mathbf{x}^T \mathbf{y})\\
&= 2(1 - (\mathbf{x}^T \mathbf{y}))\\
&= 2\left(1 - \frac{\mathbf{x}^T \mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|}\right)\\
&= 2\left[\text{cosine distance}\right]
\end{align}
This tells us that two unit vectors that are close in Euclidean distance are also close in cosine distance. Thus, the k-means algorithm (which naturally uses Euclidean distances) on normalized vectors will produce the same results as clustering using cosine distance as a distance metric.
We import the normalize() function from scikit-learn to normalize all vectors to unit length.
End of explanation
"""
def get_initial_centroids(data, k, seed=None):
'''Randomly choose k data points as initial centroids'''
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
n = data.shape[0] # number of data points
# Pick K indices from range [0, N).
rand_indices = np.random.randint(0, n, k)
# Keep centroids as dense format, as many entries will be nonzero due to averaging.
# As long as at least one document in a cluster contains a word,
# it will carry a nonzero weight in the TF-IDF vector of the centroid.
centroids = data[rand_indices,:].toarray()
return centroids
"""
Explanation: Implement k-means
Let us implement the k-means algorithm. First, we choose an initial set of centroids. A common practice is to choose randomly from the data points.
Note: We specify a seed here, so that everyone gets the same answer. In practice, we highly recommend to use different seeds every time (for instance, by using the current timestamp).
End of explanation
"""
from sklearn.metrics import pairwise_distances
# Get the TF-IDF vectors for documents 100 through 102.
queries = tf_idf[100:102,:]
# Compute pairwise distances from every data point to each query vector.
dist = pairwise_distances(tf_idf, queries, metric='euclidean')
print dist
"""
Explanation: After initialization, the k-means algorithm iterates between the following two steps:
1. Assign each data point to the closest centroid.
$$
z_i \gets \mathrm{argmin}_j \|\mu_j - \mathbf{x}_i\|^2
$$
2. Revise centroids as the mean of the assigned data points.
$$
\mu_j \gets \frac{1}{n_j}\sum_{i:z_i=j} \mathbf{x}_i
$$
In pseudocode, we iteratively do the following:
cluster_assignment = assign_clusters(data, centroids)
centroids = revise_centroids(data, k, cluster_assignment)
Assigning clusters
How do we implement Step 1 of the main k-means loop above? First import pairwise_distances function from scikit-learn, which calculates Euclidean distances between rows of given arrays. See this documentation for more information.
For the sake of demonstration, let's look at documents 100 through 102 as query documents and compute the distances between each of these documents and every other document in the corpus. In the k-means algorithm, we will have to compute pairwise distances between the set of centroids and the set of documents.
End of explanation
"""
# Students should write code here
top3_queries = tf_idf[0:3,:]
top3_dist = pairwise_distances(tf_idf, top3_queries, metric='euclidean')
dist = top3_dist[430, 1]
'''Test cell'''
if np.allclose(dist, pairwise_distances(tf_idf[430,:], tf_idf[1,:])):
print('Pass')
else:
print('Check your code again')
"""
Explanation: More formally, dist[i,j] is assigned the distance between the ith row of X (i.e., X[i,:]) and the jth row of Y (i.e., Y[j,:]).
Checkpoint: For a moment, suppose that we initialize three centroids with the first 3 rows of tf_idf. Write code to compute distances from each of the centroids to all data points in tf_idf. Then find the distance between row 430 of tf_idf and the second centroid and save it to dist.
End of explanation
"""
# Students should write code here
distances = top3_dist
closest_cluster = np.argmin(top3_dist, axis = 1)
closest_cluster
'''Test cell'''
reference = [list(row).index(min(row)) for row in distances]
if np.allclose(closest_cluster, reference):
print('Pass')
else:
print('Check your code again')
"""
Explanation: Checkpoint: Next, given the pairwise distances, we take the minimum of the distances for each data point. Fittingly, NumPy provides an argmin function. See this documentation for details.
Read the documentation and write code to produce a 1D array whose i-th entry indicates the centroid that is the closest to the i-th data point. Use the list of distances from the previous checkpoint and save them as distances. The value 0 indicates closeness to the first centroid, 1 indicates closeness to the second centroid, and so forth. Save this array as closest_cluster.
Hint: the resulting array should be as long as the number of data points.
End of explanation
"""
# Students should write code here
cluster_assignment = closest_cluster
if len(cluster_assignment)==59071 and \
np.array_equal(np.bincount(cluster_assignment), np.array([23061, 10086, 25924])):
print('Pass') # count number of data points for each cluster
else:
print('Check your code again.')
"""
Explanation: Checkpoint: Let's put these steps together. First, initialize three centroids with the first 3 rows of tf_idf. Then, compute distances from each of the centroids to all data points in tf_idf. Finally, use these distance calculations to compute cluster assignments and assign them to cluster_assignment.
End of explanation
"""
def assign_clusters(data, centroids):
# Compute distances between each data point and the set of centroids:
# Fill in the blank (RHS only)
distances_from_centroids = pairwise_distances(data, centroids, metric='euclidean')
# Compute cluster assignments for each data point:
# Fill in the blank (RHS only)
cluster_assignment = np.argmin(distances_from_centroids, axis = 1)
return cluster_assignment
"""
Explanation: Now we are ready to fill in the blanks in this function:
End of explanation
"""
if np.allclose(assign_clusters(tf_idf[0:100:10], tf_idf[0:8:2]), np.array([0, 1, 1, 0, 0, 2, 0, 2, 2, 1])):
print('Pass')
else:
print('Check your code again.')
"""
Explanation: Checkpoint. For the last time, let us check if Step 1 was implemented correctly. With rows 0, 2, 4, and 6 of tf_idf as an initial set of centroids, we assign cluster labels to rows 0, 10, 20, ..., and 90 of tf_idf. The resulting cluster labels should be [0, 1, 1, 0, 0, 2, 0, 2, 2, 1].
End of explanation
"""
data = np.array([[1., 2., 0.],
[0., 0., 0.],
[2., 2., 0.]])
centroids = np.array([[0.5, 0.5, 0.],
[0., -0.5, 0.]])
"""
Explanation: Revising clusters
Let's turn to Step 2, where we compute the new centroids given the cluster assignments.
SciPy and NumPy arrays allow for filtering via Boolean masks. For instance, we filter all data points that are assigned to cluster 0 by writing
data[cluster_assignment==0,:]
To develop intuition about filtering, let's look at a toy example consisting of 3 data points and 2 clusters.
End of explanation
"""
cluster_assignment = assign_clusters(data, centroids)
print cluster_assignment
"""
Explanation: Let's assign these data points to the closest centroid.
End of explanation
"""
cluster_assignment==1
"""
Explanation: The expression cluster_assignment==1 gives a list of Booleans that says whether each data point is assigned to cluster 1 or not:
End of explanation
"""
cluster_assignment==0
"""
Explanation: Likewise for cluster 0:
End of explanation
"""
data[cluster_assignment==1]
"""
Explanation: In lieu of indices, we can put in the list of Booleans to pick and choose rows. Only the rows that correspond to a True entry will be retained.
First, let's look at the data points (i.e., their values) assigned to cluster 1:
End of explanation
"""
data[cluster_assignment==0]
"""
Explanation: This makes sense since [0 0 0] is closer to [0 -0.5 0] than to [0.5 0.5 0].
Now let's look at the data points assigned to cluster 0:
End of explanation
"""
data[cluster_assignment==0].mean(axis=0)
"""
Explanation: Again, this makes sense since these values are each closer to [0.5 0.5 0] than to [0 -0.5 0].
Given all the data points in a cluster, it only remains to compute the mean. Use np.mean(). By default, the function averages all elements in a 2D array. To compute row-wise or column-wise means, add the axis argument. See the linked documentation for details.
Use this function to average the data points in cluster 0:
End of explanation
"""
def revise_centroids(data, k, cluster_assignment):
new_centroids = []
for i in xrange(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i]
# Compute the mean of the data points. Fill in the blank (RHS only)
centroid = member_data_points.mean(axis = 0)
# Convert numpy.matrix type to numpy.ndarray type
centroid = centroid.A1
new_centroids.append(centroid)
new_centroids = np.array(new_centroids)
return new_centroids
"""
Explanation: We are now ready to complete this function:
End of explanation
"""
result = revise_centroids(tf_idf[0:100:10], 3, np.array([0, 1, 1, 0, 0, 2, 0, 2, 2, 1]))
if np.allclose(result[0], np.mean(tf_idf[[0,30,40,60]].toarray(), axis=0)) and \
np.allclose(result[1], np.mean(tf_idf[[10,20,90]].toarray(), axis=0)) and \
np.allclose(result[2], np.mean(tf_idf[[50,70,80]].toarray(), axis=0)):
print('Pass')
else:
print('Check your code')
"""
Explanation: Checkpoint. Let's check our Step 2 implementation. Letting rows 0, 10, ..., 90 of tf_idf as the data points and the cluster labels [0, 1, 1, 0, 0, 2, 0, 2, 2, 1], we compute the next set of centroids. Each centroid is given by the average of all member data points in corresponding cluster.
End of explanation
"""
def compute_heterogeneity(data, k, centroids, cluster_assignment):
heterogeneity = 0.0
for i in xrange(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment==i, :]
if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty
# Compute distances from centroid to data points (RHS only)
distances = pairwise_distances(member_data_points, [centroids[i]], metric='euclidean')
squared_distances = distances**2
heterogeneity += np.sum(squared_distances)
return heterogeneity
"""
Explanation: Assessing convergence
How can we tell if the k-means algorithm is converging? We can look at the cluster assignments and see if they stabilize over time. In fact, we'll be running the algorithm until the cluster assignments stop changing at all. To be extra safe, and to assess the clustering performance, we'll be looking at an additional criteria: the sum of all squared distances between data points and centroids. This is defined as
$$
J(\mathcal{Z},\mu) = \sum_{j=1}^k \sum_{i:z_i = j} \|\mathbf{x}_i - \mu_j\|^2.
$$
The smaller the distances, the more homogeneous the clusters are. In other words, we'd like to have "tight" clusters.
End of explanation
"""
compute_heterogeneity(data, 2, centroids, cluster_assignment)
"""
Explanation: Let's compute the cluster heterogeneity for the 2-cluster example we've been considering based on our current cluster assignments and centroids.
End of explanation
"""
# Fill in the blanks
def kmeans(data, k, initial_centroids, maxiter, record_heterogeneity=None, verbose=False):
'''This function runs k-means on given data and initial set of centroids.
maxiter: maximum number of iterations to run.
record_heterogeneity: (optional) a list, to store the history of heterogeneity as function of iterations
if None, do not store the history.
verbose: if True, print how many data points changed their cluster labels in each iteration'''
centroids = initial_centroids[:]
prev_cluster_assignment = None
for itr in xrange(maxiter):
if verbose:
print(itr)
# 1. Make cluster assignments using nearest centroids
# YOUR CODE HERE
cluster_assignment = assign_clusters(data, centroids)
# 2. Compute a new centroid for each of the k clusters, averaging all data points assigned to that cluster.
# YOUR CODE HERE
centroids = revise_centroids(data, k, cluster_assignment)
# Check for convergence: if none of the assignments changed, stop
if prev_cluster_assignment is not None and \
(prev_cluster_assignment==cluster_assignment).all():
break
# Print number of new assignments
if prev_cluster_assignment is not None:
num_changed = np.sum(prev_cluster_assignment!=cluster_assignment)
if verbose:
print(' {0:5d} elements changed their cluster assignment.'.format(num_changed))
# Record heterogeneity convergence metric
if record_heterogeneity is not None:
# YOUR CODE HERE
score = compute_heterogeneity(data, k, centroids, cluster_assignment)
record_heterogeneity.append(score)
prev_cluster_assignment = cluster_assignment[:]
return centroids, cluster_assignment
"""
Explanation: Combining into a single function
Once the two k-means steps have been implemented, as well as our heterogeneity metric we wish to monitor, it is only a matter of putting these functions together to write a k-means algorithm that
Repeatedly performs Steps 1 and 2
Tracks convergence metrics
Stops if either no assignment changed or we reach a certain number of iterations.
End of explanation
"""
def plot_heterogeneity(heterogeneity, k):
plt.figure(figsize=(7,4))
plt.plot(heterogeneity, linewidth=4)
plt.xlabel('# Iterations')
plt.ylabel('Heterogeneity')
plt.title('Heterogeneity of clustering over time, K={0:d}'.format(k))
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
"""
Explanation: Plotting convergence metric
We can use the above function to plot the convergence metric across iterations.
End of explanation
"""
k = 3
heterogeneity = []
initial_centroids = get_initial_centroids(tf_idf, k, seed=0)
centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,
record_heterogeneity=heterogeneity, verbose=True)
plot_heterogeneity(heterogeneity, k)
np.bincount(cluster_assignment)
"""
Explanation: Let's consider running k-means with K=3 clusters for a maximum of 400 iterations, recording cluster heterogeneity at every step. Then, let's plot the heterogeneity over iterations using the plotting function above.
End of explanation
"""
k = 10
heterogeneity = {}
import time
start = time.time()
for seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]:
initial_centroids = get_initial_centroids(tf_idf, k, seed)
centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,
record_heterogeneity=None, verbose=False)
# To save time, compute heterogeneity only once in the end
heterogeneity[seed] = compute_heterogeneity(tf_idf, k, centroids, cluster_assignment)
print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity[seed]))
print np.bincount(cluster_assignment)
sys.stdout.flush()
end = time.time()
print(end-start)
"""
Explanation: Quiz Question. (True/False) The clustering objective (heterogeneity) is non-increasing for this example.
Quiz Question. Let's step back from this particular example. If the clustering objective (heterogeneity) would ever increase when running k-means, that would indicate: (choose one)
k-means algorithm got stuck in a bad local minimum
There is a bug in the k-means code
All data points consist of exact duplicates
Nothing is wrong. The objective should generally go down sooner or later.
Quiz Question. Which of the clusters contains the greatest number of data points in the end? Hint: Use np.bincount() to count occurrences of each cluster label.
1. Cluster #0
2. Cluster #1
3. Cluster #2
Beware of local minima
One weakness of k-means is that it tends to get stuck in a local minimum. To see this, let us run k-means multiple times, with different initial centroids created using different random seeds.
Note: Again, in practice, you should set different seeds for every run. We give you a list of seeds for this assignment so that everyone gets the same answer.
This may take several minutes to run.
End of explanation
"""
def smart_initialize(data, k, seed=None):
'''Use k-means++ to initialize a good set of centroids'''
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
centroids = np.zeros((k, data.shape[1]))
# Randomly choose the first centroid.
# Since we have no prior knowledge, choose uniformly at random
idx = np.random.randint(data.shape[0])
centroids[0] = data[idx,:].toarray()
# Compute distances from the first centroid chosen to all the other data points
distances = pairwise_distances(data, centroids[0:1], metric='euclidean').flatten()
for i in xrange(1, k):
# Choose the next centroid randomly, so that the probability for each data point to be chosen
# is directly proportional to its squared distance from the nearest centroid.
        # Roughly speaking, a new centroid should be as far from the other centroids as possible.
        idx = np.random.choice(data.shape[0], 1, p=distances**2/np.sum(distances**2))
centroids[i] = data[idx,:].toarray()
# Now compute distances from the centroids to all data points
distances = np.min(pairwise_distances(data, centroids[0:i+1], metric='euclidean'),axis=1)
return centroids
"""
Explanation: Notice the variation in heterogeneity for different initializations. This indicates that k-means sometimes gets stuck at a bad local minimum.
Quiz Question. Another way to capture the effect of changing initialization is to look at the distribution of cluster assignments. Add a line to the code above to compute the size (# of member data points) of clusters for each run of k-means. Look at the size of the largest cluster (most # of member data points) across multiple runs, with seeds 0, 20000, ..., 120000. How much does this measure vary across the runs? What is the minimum and maximum values this quantity takes?
One effective way to counter this tendency is to use k-means++ to provide a smart initialization. This method tries to spread out the initial set of centroids so that they are not too close together. It is known to improve the quality of local optima and lower average runtime.
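Note that smart_initialize samples the next centroid with probability proportional to the raw distance; the textbook k-means++ rule weights by the squared distance D(x)². A pure-Python sketch of that rule on toy 1-D points (purely illustrative, not the assignment's implementation):

```python
import random

def kmeans_pp_seed(points, k, rng):
    """Pick k initial centroids, spread out k-means++-style (D(x)^2 weighting)."""
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # Squared distance from each point to its nearest chosen centroid
        d2 = [min((p - c) ** 2 for c in centroids) for p in points]
        # Sample the next centroid with probability proportional to d2
        r, acc = rng.random() * sum(d2), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(p)
                break
    return centroids

rng = random.Random(0)
seeds = kmeans_pp_seed([0.0, 0.1, 0.2, 9.0, 9.1], k=2, rng=rng)
# With overwhelming probability the two seeds come from opposite ends of the line
```

Either weighting spreads the centroids out; the squared version favors far-away points more aggressively.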
End of explanation
"""
k = 10
heterogeneity_smart = {}
start = time.time()
for seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]:
initial_centroids = smart_initialize(tf_idf, k, seed)
centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,
record_heterogeneity=None, verbose=False)
# To save time, compute heterogeneity only once in the end
heterogeneity_smart[seed] = compute_heterogeneity(tf_idf, k, centroids, cluster_assignment)
print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity_smart[seed]))
sys.stdout.flush()
end = time.time()
print(end-start)
"""
Explanation: Let's now rerun k-means with 10 clusters using the same set of seeds, but always using k-means++ to initialize the algorithm.
This may take several minutes to run.
End of explanation
"""
plt.figure(figsize=(8,5))
plt.boxplot([heterogeneity.values(), heterogeneity_smart.values()], vert=False)
plt.yticks([1, 2], ['k-means', 'k-means++'])
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
"""
Explanation: Let's compare the set of cluster heterogeneities we got from our 7 restarts of k-means using random initialization compared to the 7 restarts of k-means using k-means++ as a smart initialization.
The following code produces a box plot for each of these methods, indicating the spread of values produced by each method.
End of explanation
"""
def kmeans_multiple_runs(data, k, maxiter, num_runs, seed_list=None, verbose=False):
heterogeneity = {}
min_heterogeneity_achieved = float('inf')
best_seed = None
final_centroids = None
final_cluster_assignment = None
for i in xrange(num_runs):
# Use UTC time if no seeds are provided
if seed_list is not None:
seed = seed_list[i]
np.random.seed(seed)
else:
seed = int(time.time())
np.random.seed(seed)
# Use k-means++ initialization
# YOUR CODE HERE
initial_centroids = smart_initialize(data, k, seed=seed)
# Run k-means
# YOUR CODE HERE
centroids, cluster_assignment = kmeans(data, k, initial_centroids, maxiter=maxiter,
record_heterogeneity=None, verbose=verbose)
# To save time, compute heterogeneity only once in the end
# YOUR CODE HERE
heterogeneity[seed] = compute_heterogeneity(data, k, centroids, cluster_assignment)
if verbose:
print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity[seed]))
sys.stdout.flush()
# if current measurement of heterogeneity is lower than previously seen,
# update the minimum record of heterogeneity.
if heterogeneity[seed] < min_heterogeneity_achieved:
min_heterogeneity_achieved = heterogeneity[seed]
best_seed = seed
final_centroids = centroids
final_cluster_assignment = cluster_assignment
# Return the centroids and cluster assignments that minimize heterogeneity.
return final_centroids, final_cluster_assignment
"""
Explanation: A few things to notice from the box plot:
* Random initialization results in a worse clustering than k-means++ on average.
* The best result of k-means++ is better than the best result of random initialization.
In general, you should run k-means at least a few times with different initializations and then return the run resulting in the lowest heterogeneity. Let us write a function that runs k-means multiple times and picks the best run that minimizes heterogeneity. The function accepts an optional list of seed values to be used for the multiple runs; if no such list is provided, the current UTC time is used as seed values.
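The selection logic in kmeans_multiple_runs boils down to taking the run with the smallest heterogeneity. With made-up heterogeneity values (not real results), the step is just:

```python
# seed -> heterogeneity from each restart (illustrative numbers only)
runs = {0: 57457.52, 20000: 57533.20, 40000: 57441.81, 60000: 57473.83}
best_seed = min(runs, key=runs.get)
lowest = runs[best_seed]
# best_seed == 40000, the restart with the tightest clustering
```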
End of explanation
"""
#def plot_k_vs_heterogeneity(k_values, heterogeneity_values):
# plt.figure(figsize=(7,4))
# plt.plot(k_values, heterogeneity_values, linewidth=4)
# plt.xlabel('K')
# plt.ylabel('Heterogeneity')
# plt.title('K vs. Heterogeneity')
# plt.rcParams.update({'font.size': 16})
# plt.tight_layout()
#start = time.time()
#centroids = {}
#cluster_assignment = {}
#heterogeneity_values = []
#k_list = [2, 10, 25, 50, 100]
#seed_list = [0, 20000, 40000, 60000, 80000, 100000, 120000]
#for k in k_list:
# heterogeneity = []
# centroids[k], cluster_assignment[k] = kmeans_multiple_runs(tf_idf, k, maxiter=400,
# num_runs=len(seed_list),
# seed_list=seed_list,
# verbose=True)
# score = compute_heterogeneity(tf_idf, k, centroids[k], cluster_assignment[k])
# heterogeneity_values.append(score)
#plot_k_vs_heterogeneity(k_list, heterogeneity_values)
#end = time.time()
#print(end-start)
"""
Explanation: How to choose K
Since we are measuring the tightness of the clusters, a higher value of K reduces the possible heterogeneity metric by definition. For example, if we have N data points and set K=N clusters, then we could have 0 cluster heterogeneity by setting the N centroids equal to the values of the N data points. (Note: Not all runs for larger K will result in lower heterogeneity than a single run with smaller K due to local optima.) Let's explore this general trend for ourselves by performing the following analysis.
Use the kmeans_multiple_runs function to run k-means with five different values of K. For each K, use k-means++ and multiple runs to pick the best solution. In what follows, we consider K=2,10,25,50,100 and 7 restarts for each setting.
IMPORTANT: The code block below will take about one hour to finish. We highly suggest that you use the arrays that we have computed for you.
Side note: In practice, a good implementation of k-means would utilize parallelism to run multiple runs of k-means at once. For an example, see scikit-learn's KMeans.
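The downward trend of heterogeneity with K can be reproduced on a toy 1-D dataset with a bare-bones k-means (deterministic first-k initialization; this is a sketch, not the assignment's implementation):

```python
def kmeans_1d(xs, init, iters=20):
    """Bare-bones 1-D k-means: assign to nearest centroid, then recompute means."""
    centroids = list(init)
    assign = [0] * len(xs)
    for _ in range(iters):
        assign = [min(range(len(centroids)), key=lambda j: (x - centroids[j]) ** 2)
                  for x in xs]
        for j in range(len(centroids)):
            members = [x for x, a in zip(xs, assign) if a == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, assign

def heterogeneity(xs, centroids, assign):
    """Sum of squared distances of each point to its assigned centroid."""
    return sum((x - centroids[a]) ** 2 for x, a in zip(xs, assign))

xs = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0, 9.2]
h = {}
for k in (1, 2, 3):
    cents, assign = kmeans_1d(xs, xs[:k])
    h[k] = heterogeneity(xs, cents, assign)
# h[1] > h[2] > h[3]: more clusters give a tighter (or equal) fit
```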
End of explanation
"""
def plot_k_vs_heterogeneity(k_values, heterogeneity_values):
plt.figure(figsize=(7,4))
plt.plot(k_values, heterogeneity_values, linewidth=4)
plt.xlabel('K')
plt.ylabel('Heterogeneity')
plt.title('K vs. Heterogeneity')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
filename = 'kmeans-arrays.npz'
heterogeneity_values = []
k_list = [2, 10, 25, 50, 100]
if os.path.exists(filename):
arrays = np.load(filename)
centroids = {}
cluster_assignment = {}
for k in k_list:
print k
sys.stdout.flush()
'''To save memory space, do not load the arrays from the file right away. We use
a technique known as lazy evaluation, where some expressions are not evaluated
until later. Any expression appearing inside a lambda function doesn't get
evaluated until the function is called.
        Lazy evaluation is extremely important in memory-constrained settings, such as
an Amazon EC2 t2.micro instance.'''
centroids[k] = lambda k=k: arrays['centroids_{0:d}'.format(k)]
cluster_assignment[k] = lambda k=k: arrays['cluster_assignment_{0:d}'.format(k)]
score = compute_heterogeneity(tf_idf, k, centroids[k](), cluster_assignment[k]())
heterogeneity_values.append(score)
plot_k_vs_heterogeneity(k_list, heterogeneity_values)
else:
print('File not found. Skipping.')
"""
Explanation: To use the pre-computed NumPy arrays, first download kmeans-arrays.npz as mentioned in the reading for this assignment and load them with the following code. Make sure the downloaded file is in the same directory as this notebook.
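A side note on the lambda k=k idiom used when loading the arrays: Python closures look up names late, so without the default argument every lambda created in the loop would see the final value of k. A minimal demonstration:

```python
k_list = [2, 10, 25]
late = [lambda: k for k in k_list]        # every call returns the final k
bound = [lambda k=k: k for k in k_list]   # the default argument freezes each k

late_values = [f() for f in late]    # [25, 25, 25]
bound_values = [f() for f in bound]  # [2, 10, 25]
```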
End of explanation
"""
def visualize_document_clusters(wiki, tf_idf, centroids, cluster_assignment, k, map_index_to_word, display_content=True):
'''wiki: original dataframe
tf_idf: data matrix, sparse matrix format
    map_index_to_word: SFrame specifying the mapping between words and column indices
display_content: if True, display 8 nearest neighbors of each centroid'''
print('==========================================================')
# Visualize each cluster c
for c in xrange(k):
# Cluster heading
print('Cluster {0:d} '.format(c)),
# Print top 5 words with largest TF-IDF weights in the cluster
idx = centroids[c].argsort()[::-1]
for i in xrange(5): # Print each word along with the TF-IDF weight
print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroids[c,idx[i]])),
print('')
if display_content:
# Compute distances from the centroid to all data points in the cluster,
# and compute nearest neighbors of the centroids within the cluster.
distances = pairwise_distances(tf_idf, centroids[c].reshape(1, -1), metric='euclidean').flatten()
distances[cluster_assignment!=c] = float('inf') # remove non-members from consideration
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in xrange(8):
text = ' '.join(wiki[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('\n* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('==========================================================')
"""
Explanation: In the above plot we show that heterogeneity goes down as we increase the number of clusters. Does this mean we should always favor a higher K? Not at all! As we will see in the following section, setting K too high may end up separating data points that are actually pretty alike. At the extreme, we can set individual data points to be their own clusters (K=N) and achieve zero heterogeneity, but separating each data point into its own cluster is hardly a desirable outcome. In the following section, we will learn how to detect a K set "too large".
Visualize clusters of documents
Let's start visualizing some clustering results to see if we think the clustering makes sense. We can use such visualizations to help us assess whether we have set K too large or too small for a given application. Following the theme of this course, we will judge whether the clustering makes sense in the context of document analysis.
What are we looking for in a good clustering of documents?
* Documents in the same cluster should be similar.
* Documents from different clusters should be less similar.
So a bad clustering exhibits either of two symptoms:
* Documents in a cluster have mixed content.
* Documents with similar content are divided up and put into different clusters.
To help visualize the clustering, we do the following:
* Fetch nearest neighbors of each centroid from the set of documents assigned to that cluster. We will consider these documents as being representative of the cluster.
* Print titles and first sentences of those nearest neighbors.
* Print top 5 words that have highest tf-idf weights in each centroid.
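The nearest-neighbor step can be sketched in isolation (made-up 2-D document vectors; the real code uses pairwise_distances on sparse TF-IDF rows):

```python
import math

docs = {"doc_a": (0.0, 1.0), "doc_b": (0.9, 0.1), "doc_c": (0.2, 0.8)}
centroid = (0.15, 0.85)

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Sort documents by distance to the centroid; the closest ones are the
# most representative members of the cluster
nearest = sorted(docs, key=lambda name: distance(docs[name], centroid))
```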
End of explanation
"""
'''Notice the extra pairs of parentheses for centroids and cluster_assignment.
The centroid and cluster_assignment are still inside the npz file,
and we need to explicitly indicate when to load them into memory.'''
visualize_document_clusters(wiki, tf_idf, centroids[2](), cluster_assignment[2](), 2, map_index_to_word)
"""
Explanation: Let us first look at the 2 cluster case (K=2).
End of explanation
"""
k = 10
visualize_document_clusters(wiki, tf_idf, centroids[k](), cluster_assignment[k](), k, map_index_to_word)
"""
Explanation: Both clusters have mixed content, although cluster 1 is much purer than cluster 0:
* Cluster 0: artists, songwriters, professors, politicians, writers, etc.
* Cluster 1: baseball players, hockey players, football (soccer) players, etc.
Top words of cluster 1 are all related to sports, whereas top words of cluster 0 show no clear pattern.
Roughly speaking, the entire dataset was divided into athletes and non-athletes. It would be better if we sub-divided non-athletes into more categories. So let us use more clusters. How about K=10?
End of explanation
"""
np.bincount(cluster_assignment[10]())
"""
Explanation: Clusters 0 and 2 appear to be still mixed, but others are quite consistent in content.
* Cluster 0: artists, poets, writers, environmentalists
* Cluster 1: film directors
* Cluster 2: female figures from various fields
* Cluster 3: politicians
* Cluster 4: track and field athletes
* Cluster 5: composers, songwriters, singers, music producers
* Cluster 6: soccer (football) players
* Cluster 7: baseball players
* Cluster 8: professors, researchers, scholars
* Cluster 9: lawyers, judges, legal scholars
Clusters are now more pure, but some are qualitatively "bigger" than others. For instance, the category of scholars is more general than the category of baseball players. Increasing the number of clusters may split larger clusters. Another way to look at the size of the clusters is to count the number of articles in each cluster.
End of explanation
"""
visualize_document_clusters(wiki, tf_idf, centroids[25](), cluster_assignment[25](), 25,
map_index_to_word, display_content=False) # turn off text for brevity
"""
Explanation: Quiz Question. Which of the 10 clusters above contains the greatest number of articles?
Cluster 0: artists, poets, writers, environmentalists
Cluster 4: track and field athletes
Cluster 5: composers, songwriters, singers, music producers
Cluster 7: baseball players
Cluster 9: lawyers, judges, legal scholars
Quiz Question. Which of the 10 clusters contains the least number of articles?
Cluster 1: film directors
Cluster 3: politicians
Cluster 6: soccer (football) players
Cluster 7: baseball players
Cluster 9: lawyers, judges, legal scholars
There appears to be at least some connection between the topical consistency of a cluster and the number of its member data points.
Let us visualize the case for K=25. For the sake of brevity, we do not print the content of documents. It turns out that the top words with highest TF-IDF weights in each cluster are representative of the cluster.
End of explanation
"""
k=100
visualize_document_clusters(wiki, tf_idf, centroids[k](), cluster_assignment[k](), k,
map_index_to_word, display_content=False)
# turn off text for brevity -- turn it on if you are curious ;)
"""
Explanation: Looking at the representative examples and top words, we classify each cluster as follows. Notice the bolded items, which indicate the appearance of a new theme.
* Cluster 0: composers, songwriters, singers, music producers
* Cluster 1: poets
* Cluster 2: rugby players
* Cluster 3: baseball players
* Cluster 4: government officials
* Cluster 5: football players
* Cluster 6: radio hosts
* Cluster 7: actors, TV directors
* Cluster 8: professors, researchers, scholars
* Cluster 9: lawyers, judges, legal scholars
* Cluster 10: track and field athletes
* Cluster 11: (mixed; no clear theme)
* Cluster 12: car racers
* Cluster 13: priests, bishops, church leaders
* Cluster 14: painters, sculptors, artists
* Cluster 15: novelists
* Cluster 16: American football players
* Cluster 17: golfers
* Cluster 18: American politicians
* Cluster 19: basketball players
* Cluster 20: generals of U.S. Air Force
* Cluster 21: politicians
* Cluster 22: female figures of various fields
* Cluster 23: film directors
* Cluster 24: music directors, composers, conductors
Indeed, increasing K achieved the desired effect of breaking up large clusters. Depending on the application, this may or may not be preferable to the K=10 analysis.
Let's take it to the extreme and set K=100. We have a suspicion that this value is too large. Let us look at the top words from each cluster:
End of explanation
"""
print cluster_assignment[100]()
#np.bincount(cluster_assignment)
total = cluster_assignment[100]()
bc = np.bincount(total)
bc[bc < 236].size
"""
Explanation: The class of rugby players has been broken into two clusters (11 and 72). The same goes for soccer (football) players (clusters 6, 21, 40, and 87), although some may like the benefit of having a separate category for Australian Football League. The class of baseball players has also been broken into two clusters (18 and 95).
A high value of K encourages pure clusters, but we cannot keep increasing K. For large enough K, related documents end up going to different clusters.
That said, the result for K=100 is not entirely bad. After all, it gives us separate clusters for such categories as Scotland, Brazil, LGBT, computer science and the Mormon Church. If we set K somewhere between 25 and 100, we should be able to avoid breaking up clusters while discovering new ones.
Also, we should ask ourselves how much granularity we want in our clustering. If we wanted a rough sketch of Wikipedia, we don't want too detailed clusters. On the other hand, having many clusters can be valuable when we are zooming into a certain part of Wikipedia.
There is no golden rule for choosing K. It all depends on the particular application and domain we are in.
Another heuristic, which does not rely on so much visualization (which can be hard in many applications, including here!), is as follows. Track heterogeneity versus K and look for the "elbow" of the curve, where heterogeneity decreases rapidly before this value of K but only gradually for larger values of K. This naturally trades off minimizing heterogeneity against model complexity. In the heterogeneity versus K plot made above, we did not yet really see a flattening out of the heterogeneity, which might indicate that K=100 is indeed "reasonable" and we only see real overfitting for larger values of K (which are even harder to visualize using the methods we attempted above).
Quiz Question. Another sign of too large K is having lots of small clusters. Look at the distribution of cluster sizes (by number of member data points). How many of the 100 clusters have fewer than 236 articles, i.e. 0.4% of the dataset?
Hint: Use cluster_assignment[100](), with the extra pair of parentheses for delayed loading.
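The tally itself only needs a per-label member count; a toy sketch with the standard library (made-up cluster labels, with a threshold of 3 standing in for 236):

```python
from collections import Counter

toy_assignment = [0, 0, 0, 0, 1, 2, 2, 1, 3]  # made-up cluster labels
sizes = Counter(toy_assignment)               # {0: 4, 1: 2, 2: 2, 3: 1}
num_small = sum(1 for n in sizes.values() if n < 3)
# clusters 1, 2 and 3 fall under the threshold -> num_small == 3
```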
End of explanation
"""
repo: GoogleCloudPlatform/mlops-on-gcp | path: environments_setup/mlops-composer-mlflow/caip-training-test.ipynb | license: apache-2.0

import os
import re
from IPython.core.display import display, HTML
from datetime import datetime
import mlflow
import pymysql
# Jupyter magic template to create Python file with variable substitution
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writetemplate(line, cell):
with open(line, 'w') as f:
f.write(cell.format(**globals()))
experiment_name = "caipt-test"
mlflow.set_experiment(experiment_name)
mlflow_tracking_uri = mlflow.get_tracking_uri()
MLFLOW_EXPERIMENTS_URI = os.environ['MLFLOW_EXPERIMENTS_URI']
training_artifacts_uri = MLFLOW_EXPERIMENTS_URI+"/caip-training"
REGION=os.environ['MLOPS_REGION']
ML_IMAGE_URI=os.environ['ML_IMAGE_URI']
print(f"MLflow tracking server URI: {mlflow_tracking_uri}")
print(f"MLflow artifacts store root: {MLFLOW_EXPERIMENTS_URI}")
print(f"MLflow SQL connection name: {os.environ['MLFLOW_SQL_CONNECTION_NAME']}")
print(f"MLflow SQL connection string: {os.environ['MLFLOW_SQL_CONNECTION_STR']}")
display(HTML('<hr>You can check results of this test in MLflow and GCS folder:'))
display(HTML('<h4><a href="{}" rel="noopener noreferrer" target="_blank">Click to open MLflow UI</a></h4>'.format(os.environ['MLFLOW_TRACKING_EXTERNAL_URI'])))
display(HTML('<h4><a href="https://console.cloud.google.com/storage/browser/{}" rel="noopener noreferrer" target="_blank">Click to open GCS folder</a></h4>'.format(MLFLOW_EXPERIMENTS_URI.replace('gs://',''))))
!mkdir -p ./package/training
"""
Explanation: Verifying the MLOps environment on GCP with a Cloud AI Platform Training custom container
This notebook verifies the MLOps environment provisioned on GCP:
1. Create a trainer module and submit a Cloud AI Platform Training job using a custom container
2. Test using the training result log entries in Cloud SQL
1. Create and submit a Cloud AI Platform Training job
End of explanation
"""
%%writefile ./package/training/task.py
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression
import sys
import argparse
import os
def train_model(args):
print("Regularized logistic regression model train step started...")
with mlflow.start_run(nested=True):
X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
y = np.array([0, 0, 1, 1, 1, 0])
# args.epochs is a training job parameter
lr = LogisticRegression(max_iter=args.epochs)
lr.fit(X, y)
score = lr.score(X, y)
mlflow.log_metric("score", score)
mlflow.sklearn.log_model(lr, "model")
print("LogisticRegression training finished.")
def training_data(local_data):
dircontent = os.listdir(local_data)
print(f"Check local data @: {local_data} :\n{dircontent}")
def upload_data(local, job_dir):
print(f"Upload local data {local} to GCS: {job_dir}")
def main():
print(f'Training arguments: {" ".join(sys.argv[1:])}'.format())
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int)
parser.add_argument('--job-dir', type=str)
parser.add_argument('--local_data', type=str)
args, unknown_args = parser.parse_known_args()
    # CLOUD_ML_JOB contains other CAIP Training runtime parameters in a JSON object
# job = os.environ['CLOUD_ML_JOB']
# MLflow locally available
mlflow.set_tracking_uri('http://127.0.0.1:80')
mlflow.set_experiment("caipt-test")
# Data already downloaded from GCS to 'local_data' folder if --data_source argument provided
# in 'ai-platform jobs submit training' command
if args.local_data:
training_data(args.local_data)
print('Training main started')
train_model(args)
# if --job-dir provided in 'ai-platform jobs submit' command you can upload any training result to that
# if args.job_dir:
# upload_data(args.local_data, args.job_dir):
if __name__ == '__main__':
main()
"""
Explanation: 1.1. Create model trainer file
The following cells write out Python module files that will be sent as a training module to Cloud AI Platform Training.
First, we implement a simple scikit-learn model training routine.
End of explanation
"""
%%writefile ./package/training/__init__.py
"""
Explanation: Create an empty __init__.py file, which is needed for the training module.
End of explanation
"""
%%writefile ./package/setup.py
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['mlflow==1.13.1','PyMySQL==0.9.3']
setup(
name='trainer',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='Customer training setup.'
)
"""
Explanation: setup.py ensures the MLflow modules are installed
End of explanation
"""
submit_time = datetime.now().strftime("%Y%m%d_%H%M%S")
JOB_NAME=f"training_job_{submit_time}"
JOB_DIR=f"{training_artifacts_uri}/training_{submit_time}"
print(f"Training job name: '{JOB_NAME}' will run in {REGION} region using image from:\n {ML_IMAGE_URI}\n")
!gcloud ai-platform jobs submit training {JOB_NAME} \
--region {REGION} \
--scale-tier BASIC \
--job-dir {JOB_DIR} \
--package-path ./package/training/ \
--module-name training.task \
--master-image-uri {ML_IMAGE_URI} \
-- \
--mlflowuri {MLFLOW_EXPERIMENTS_URI} \
--epochs 2
"""
Explanation: 1.2. Submit training job
Note: Every run of this notebook cell creates a new training job!
End of explanation
"""
!gcloud ai-platform jobs describe {JOB_NAME}
"""
Explanation: 1.3. Wait for the job to finish
After you submit your job, you can monitor its status.
End of explanation
"""
!gcloud ai-platform jobs stream-logs {JOB_NAME}
"""
Explanation: Training logs
End of explanation
"""
sqlauth=re.search('mysql\\+pymysql://(?P<user>.*):(?P<psw>.*)@127.0.0.1:3306/mlflow', os.environ['MLFLOW_SQL_CONNECTION_STR'],re.DOTALL)
connection = pymysql.connect(
host='127.0.0.1',
port=3306,
database='mlflow',
user=sqlauth.group('user'),
passwd=sqlauth.group('psw')
)
cursor = connection.cursor()
"""
Explanation: 2.0. Cloud AI Platform Training test results
Examine the logged entries in Cloud SQL and the produced artifacts in Cloud Storage through MLflow tracking.
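A general DB-API note: the queries below interpolate values into SQL strings directly; with any DB-API driver (pymysql included, though it uses %s placeholders rather than ?), parameterized queries are the safer pattern. A self-contained sqlite3 illustration of the same experiment lookup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE experiments (experiment_id INTEGER, name TEXT)")
conn.execute("INSERT INTO experiments VALUES (7, 'caipt-test')")
# '?' placeholders let the driver escape values safely
cur = conn.execute(
    "SELECT experiment_id FROM experiments WHERE name = ? "
    "ORDER BY experiment_id DESC LIMIT 1",
    ("caipt-test",),
)
row = cur.fetchone()  # (7,)
```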
End of explanation
"""
cursor.execute(f"SELECT * FROM experiments where name='{experiment_name}' ORDER BY experiment_id desc LIMIT 1")
if cursor.rowcount == 0:
print("Experiment not found")
else:
experiment_id = list(cursor)[0][0]
print(f"'{experiment_name}' experiment ID: {experiment_id}")
"""
Explanation: 2.2. Retrieve experiment
End of explanation
"""
cursor.execute(f"SELECT * FROM runs where experiment_id={experiment_id} ORDER BY start_time desc LIMIT 1")
if cursor.rowcount == 0:
print("No runs found")
else:
entity=list(cursor)[0]
run_uuid = entity[0]
print(f"Last run id of '{experiment_name}' experiment is: {run_uuid}\n")
print(entity)
"""
Explanation: 2.3. Query runs
End of explanation
"""
cursor.execute(f"SELECT * FROM metrics where run_uuid = '{run_uuid}'")
if cursor.rowcount == 0:
print("No metrics found")
else:
for entry in cursor:
print(entry)
"""
Explanation: 2.4. Query metrics
End of explanation
"""
!gsutil ls {MLFLOW_EXPERIMENTS_URI}/{experiment_id}/{run_uuid}/artifacts/model
"""
Explanation: 2.5. List the artifacts in Cloud Storage
End of explanation
"""
COMPOSER_NAME=os.environ['MLOPS_COMPOSER_NAME']
REGION=os.environ['MLOPS_REGION']
submit_time = datetime.now().strftime("%Y%m%d_%H%M%S")
JOB_NAME=f"training_job_{submit_time}"
JOB_DIR=f"{training_artifacts_uri}/training_{submit_time}"
print(f"Training job name: '{JOB_NAME}' will run in {REGION} region using image from:\n {ML_IMAGE_URI}\n")
"""
Explanation: 3. Submitting a workflow to Composer to run training in Cloud AI Platform Training
This section tests a training job submitted from a Composer workflow by reusing the training
module created in section 1.1 earlier. Therefore the training metrics and artifacts will be stored in the
same 'caipt-test' MLflow experiment.
End of explanation
"""
!gcloud composer environments storage data import \
--environment {COMPOSER_NAME} \
--location {REGION} \
--source ./package \
--destination test-sklearn-mlflow-caipt
"""
Explanation: 3.1. Importing the existing training module
Upload the local /package training folder to Composer's GCS bucket.
See more details about data import and Composer's folder structure
End of explanation
"""
%%writetemplate test-sklearn-mlflow-caipt.py
from datetime import timedelta
import airflow
from airflow.operators.bash_operator import BashOperator
from airflow.operators.dummy_operator import DummyOperator
default_args = dict(retries=1,start_date=airflow.utils.dates.days_ago(0))
command="""gcloud ai-platform jobs submit training {JOB_NAME} \
--region {REGION} \
--scale-tier BASIC \
--job-dir {JOB_DIR} \
--package-path /home/airflow/gcs/data/test-sklearn-mlflow-caipt/package/training/ \
--module-name training.task \
--master-image-uri {ML_IMAGE_URI} \
-- \
--mlflowuri {MLFLOW_EXPERIMENTS_URI} \
--epochs 2"""
print (command)
with airflow.DAG(
"test_sklearn_mlflow_caipt",
default_args=default_args,
schedule_interval=None,
dagrun_timeout=timedelta(minutes=15)) as dag:
dummy_task = DummyOperator(task_id="dummy_task")
bash_task = BashOperator(
task_id="test_sklearn_mlflow_caipt",
bash_command=command
)
dummy_task >> bash_task
!gcloud composer environments storage dags import \
--environment {COMPOSER_NAME} \
--location {REGION} \
--source test-sklearn-mlflow-caipt.py
"""
Explanation: 3.2. Uploading the Airflow workflow
End of explanation
"""
!gcloud composer environments storage dags list \
--environment {COMPOSER_NAME} --location {REGION}
"""
Explanation: Check the imported DAG
End of explanation
"""
!gcloud composer environments run {COMPOSER_NAME} \
--location {REGION} unpause -- test_sklearn_mlflow_caipt
!gcloud composer environments run {COMPOSER_NAME} \
--location {REGION} trigger_dag -- test_sklearn_mlflow_caipt
"""
Explanation: 3.3. Triggering the workflow
Please wait 30-60 seconds after the first Airflow DAG import before triggering the workflow.
End of explanation
"""
cursor = connection.cursor()
"""
Explanation: 4. Cloud AI Platform Training through Cloud Composer test results
End of explanation
"""
experiment_name = "caipt-test"
cursor.execute("SELECT * FROM experiments where name='{}' ORDER BY experiment_id desc LIMIT 1".format(experiment_name))
if cursor.rowcount == 0:
print("Experiment not found")
else:
experiment_id = list(cursor)[0][0]
print(f"'{experiment_name}' experiment ID: {experiment_id}")
"""
Explanation: 4.1 Retrieve experiment
End of explanation
"""
cursor.execute("SELECT * FROM runs where experiment_id={} ORDER BY start_time desc LIMIT 1".format(experiment_id))
if cursor.rowcount == 0:
print("No runs found")
else:
entity=list(cursor)[0]
run_uuid = entity[0]
print(f"Last run id of '{experiment_name}' experiment is: {run_uuid}\n")
print(entity)
"""
Explanation: 4.2 Query runs
End of explanation
"""
cursor.execute("SELECT * FROM metrics where run_uuid = '{}'".format(run_uuid))
if cursor.rowcount == 0:
print("No metrics found")
else:
for entry in cursor:
print(entry)
"""
Explanation: 4.3 Query metrics
End of explanation
"""
!gsutil ls {MLFLOW_EXPERIMENTS_URI}/{experiment_id}/{run_uuid}/artifacts/model
"""
Explanation: 4.5. List the artifacts in Cloud Storage
End of explanation
"""
repo: bretthandrews/marvin | path: docs/sphinx/jupyter/my-first-query.ipynb | license: bsd-3-clause

# Python 2/3 compatibility
from __future__ import print_function, division, absolute_import
from marvin import config
config.mode = 'remote'
config.setRelease('MPL-4')
from marvin.tools.query import Query
"""
Explanation: My First Query
Explanation: My First Query
One of the most powerful features of Marvin 2.0 is the ability to query the newly created DRP and DAP databases. You can do this in two ways:
1. via the Marvin-web Search page or
2. via Python (in the terminal/notebook/script) with Marvin-tools.
The best part is that both interfaces use the same underlying query structure, so your input search will be the same. Here we will run a few queries with Marvin-tools to learn the basics of how to construct a query and also test drive some of the more advanced features that are unique to the Marvin-tools version of querying.
End of explanation
"""
myquery1 = 'nsa.sersic_mass > 3e11'
# or
myquery1 = 'nsa.sersic_logmass > 11.47'
q1 = Query(searchfilter=myquery1)
r1 = q1.run()
"""
Explanation: Let's search for galaxies with M$\star$ > 3 $\times$ 10$^{11}$ M$\odot$.
To specify our search parameter, M$_\star$, we must know the database table and name of the parameter. In this case, MaNGA uses the NASA-Sloan Atlas (NSA) for target selection, so we will use the Sersic profile determination for stellar mass, which is the sersic_mass parameter of the nsa table; our search parameter will therefore be nsa.sersic_mass. You can also use nsa.sersic_logmass.
Generically, the search parameter will take the form table.parameter.
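Because filters are plain strings of this form, they can also be assembled programmatically (hypothetical helper, not part of Marvin; Marvin itself just takes the finished string):

```python
def make_filter(table, parameter, op, value):
    """Hypothetical helper that builds a 'table.parameter <op> value' filter string."""
    return '{0}.{1} {2} {3}'.format(table, parameter, op, value)

searchfilter = make_filter('nsa', 'sersic_logmass', '>', 11.47)
# -> 'nsa.sersic_logmass > 11.47'
```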
End of explanation
"""
# show results
r1.results
"""
Explanation: Running the query produces a Results object (r1):
End of explanation
"""
myquery2 = 'nsa.sersic_mass > 3e11 AND nsa.z < 0.1'
q2 = Query(searchfilter=myquery2)
r2 = q2.run()
r2.results
"""
Explanation: We will learn how to use the features of our Results object a little bit later, but first let's revise our search to see how more complex search queries work.
Multiple Search Criteria
Let's add to our search to find only galaxies with a redshift less than 0.1.
Redshift is the z parameter and is also in the nsa table, so its full search parameter designation is nsa.z.
End of explanation
"""
myquery3 = '(nsa.sersic_mass > 3e11 AND nsa.z < 0.1) OR (ifu.name=127* AND nsa.ba90 >= 0.95)'
q3 = Query(searchfilter=myquery3)
r3 = q3.run()
r3.results
"""
Explanation: Compound Search Statements
We were hoping for a few more than 3 galaxies, so let's try to increase our search by broadening the criteria to also include galaxies with 127 fiber IFUs and a b/a ratio of at least 0.95.
To find 127 fiber IFUs, we'll use the name parameter of the ifu table, which means the full search parameter is ifu.name. However, ifu.name returns the IFU design name, such as 12701, so we need to set the value to 127*.
The b/a ratio is in the nsa table as the ba90 parameter.
We're also going to join this to our previous query with an OR operator and use parentheses to group our individual search statements into a compound search statement.
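The 127* value behaves like a shell-style wildcard; the standard library's fnmatch shows the matching semantics (toy IFU design names assumed):

```python
import fnmatch

ifu_names = ['12701', '12702', '6104', '3702', '12705']  # toy design names
matches = [name for name in ifu_names if fnmatch.fnmatch(name, '127*')]
# -> ['12701', '12702', '12705']
```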
End of explanation
"""
# Enter your search here
"""
Explanation: Design Your Own Search
OK, now it's your turn to try designing a search.
Exercise: Write a search filter that will find galaxies with a redshift less than 0.02 that were observed with the 1901 IFU.
End of explanation
"""
# You might have to do an svn update to get this to work (otherwise try the next cell)
q = Query()
q.get_available_params()
# try this if the previous cell didn't return a list of parameters
from marvin.api.api import Interaction
from pprint import pprint
url = config.urlmap['api']['getparams']['url']
ii = Interaction(route=url)
mykeys = ii.getData()
pprint(mykeys)
"""
Explanation: You should get 8 results:
[NamedTuple(mangaid='1-22438', plate=7992, name='1901', z=0.016383046284318),
NamedTuple(mangaid='1-113520', plate=7815, name='1901', z=0.0167652331292629),
NamedTuple(mangaid='1-113698', plate=8618, name='1901', z=0.0167444702237844),
NamedTuple(mangaid='1-134004', plate=8486, name='1901', z=0.0185601413249969),
NamedTuple(mangaid='1-155903', plate=8439, name='1901', z=0.0163660924881697),
NamedTuple(mangaid='1-167079', plate=8459, name='1901', z=0.0157109703868628),
NamedTuple(mangaid='1-209729', plate=8549, name='1901', z=0.0195561610162258),
NamedTuple(mangaid='1-277339', plate=8254, name='1901', z=0.0192211158573627)]
Finding the Available Parameters
Now you might want to go out and try all of the interesting queries that you've been saving up, but you don't know what the parameters are called or what database table they are in.
You can find all of the available parameters by:
1. clicking on the Return Parameters dropdown menu on the left side of the Marvin-web Search page,
2. reading the Marvin Docs page, or
3. via Marvin-tools (see next two cells)
End of explanation
"""
myquery5 = 'nsa.z > 0.1'
bonusparams5 = ['cube.ra', 'cube.dec']
# bonusparams5 = 'cube.ra' # This works too
q5 = Query(searchfilter=myquery5, returnparams=bonusparams5)
r5 = q5.run()
r5.results
"""
Explanation: Go ahead and try to create some new searches on your own from the parameter list. Please feel free to also try some of the same searches on the Marvin-web Search page.
Returning Bonus Parameters
Often you want to run a query and see the value of parameters that you didn't explicitly search on. For instance, you want to find galaxies above a redshift of 0.1 and would like to know their RA and DECs.
In Marvin-tools, this is as easy as specifying the returnparams option with either a string (for a single bonus parameter) or a list of strings (for multiple bonus parameters).
End of explanation
"""
|
samuelshaner/openmc | docs/source/pythonapi/examples/mgxs-part-ii.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import openmoc
from openmoc.opencg_compatible import get_openmoc_geometry
import openmc
import openmc.mgxs as mgxs
import openmc.data
%matplotlib inline
"""
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features:
Creation of multi-group cross sections on a heterogeneous geometry
Calculation of cross sections on a nuclide-by-nuclide basis
The use of tally precision triggers with multi-group cross sections
Built-in features for energy condensation in downstream data processing
The use of the openmc.data module to plot continuous-energy vs. multi-group cross sections
Validation of multi-group cross sections with OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. In order to run this Notebook in its entirety, you must have OpenMOC installed on your system, along with OpenCG to convert the OpenMC geometries into OpenMOC geometries. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
"""
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
zr90 = openmc.Nuclide('Zr90')
"""
Explanation: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
"""
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
"""
Explanation: With the nuclides we defined, we will now create three distinct materials for water, clad and fuel.
End of explanation
"""
# Instantiate a Materials collection
materials_file = openmc.Materials((fuel, water, zircaloy))
# Export to "materials.xml"
materials_file.export_to_xml()
"""
Explanation: With our materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.region = +min_x & -max_x & +min_y & -max_y
root_cell.fill = pin_cell_universe
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
"""
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 10,000 particles, and allow the tally triggers activated below to extend the run up to 200 total batches.
End of explanation
"""
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups()
coarse_groups.group_edges = np.array([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups()
fine_groups.group_edges = np.array([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
"""
Explanation: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
End of explanation
"""
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.NuFissionXS(groups=fine_groups)
xs_library[cell.id]['nu-scatter'] = mgxs.NuScatterMatrixXS(groups=fine_groups)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
"""
Explanation: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
End of explanation
"""
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1E-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
"""
Explanation: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
End of explanation
"""
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain to the cell
xs_library[cell.id][rxn_type].domain = cell
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
End of explanation
"""
# Run OpenMC
openmc.run(output=True)
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.074.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
"""
Explanation: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
"""
Explanation: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Extracting and Storing MGXS Data
Let's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
"""
Explanation: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
End of explanation
"""
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
"""
Explanation: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
End of explanation
"""
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
"""
Explanation: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarser group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
End of explanation
"""
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
"""
Explanation: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
End of explanation
"""
# Create an OpenMOC Geometry from the OpenCG Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.opencg_geometry)
"""
Explanation: Verification with OpenMOC
Now, let's verify our cross sections using OpenMOC. First, we use OpenCG to construct an equivalent OpenMOC geometry.
End of explanation
"""
# Get all OpenMOC cells in the geometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
"""
Explanation: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
End of explanation
"""
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
"""
Explanation: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
End of explanation
"""
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
"""
openmoc_geometry = get_openmoc_geometry(sp.summary.opencg_geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
End of explanation
"""
# Parse ACE data into memory
u235 = openmc.data.IncidentNeutron.from_ace('../../../../scripts/nndc/293.6K/U_235_293.6K.ace')
# Extract the continuous-energy U-235 fission cross section data
fission = u235[18]
"""
Explanation: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Visualizing MGXS Data
It is often insightful to generate visual depictions of multi-group cross sections. There are many different types of plots which may be useful for multi-group cross section visualization, only a few of which will be shown here for enrichment and inspiration.
One particularly useful visualization is a comparison of the continuous-energy and multi-group cross sections for a particular nuclide and reaction type. We illustrate one option for generating such plots with the use of the openmc.data module to parse continuous-energy cross sections from an openly available ACE cross section library distributed by NNDC. First, we instantiate an openmc.data.IncidentNeutron object for U-235 as follows.
End of explanation
"""
# Create a loglog plot of the U-235 continuous-energy fission cross section
plt.loglog(fission.xs['294K'].x, fission.xs['294K'].y, color='b', linewidth=1)
# Extract energy group bounds and MGXS values to plot
nufission = xs_library[fuel_cell.id]['fission']
energy_groups = nufission.energy_groups
x = energy_groups.group_edges
y = nufission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound to the value defined by the ACE library
x[0] = fission.xs['294K'].x[0]
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
plt.plot(x, y, drawstyle='steps', color='r', linewidth=3)
plt.title('U-235 Fission Cross Section')
plt.xlabel('Energy [eV]')
plt.ylabel('Micro Fission XS')
plt.legend(['Continuous', 'Multi-Group'])
plt.xlim((x.min(), x.max()))
"""
Explanation: Now, we use matplotlib and seaborn to plot the continuous-energy and multi-group cross sections on a single plot.
End of explanation
"""
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast the mean-value Series to NumPy arrays
# (Series.as_matrix was removed in pandas 1.0; to_numpy is the modern equivalent)
h1 = h1.to_numpy()
o16 = o16.to_numpy()
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
"""
Explanation: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
End of explanation
"""
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
plt.grid()
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
plt.grid()
# Show the plot on screen
plt.show()
"""
Explanation: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
End of explanation
"""
|
eecs445-f16/umich-eecs445-f16 | lecture11_info-theory-decision-trees/collocations/Collocations Example.ipynb | mit | # read text file
text_path = "data/crime-and-punishment.txt";
with open(text_path) as f:
text_raw = f.read().lower();
# remove punctuation
translate_table = dict((ord(char), None) for char in string.punctuation);
text_raw = text_raw.translate(translate_table);
# tokenize
tokens = nltk.word_tokenize(text_raw);
bigrams = nltk.bigrams(tokens);
# unigram/bigram frequencies
unigram_counts = nltk.FreqDist(tokens);
bigram_counts = nltk.FreqDist(bigrams);
# write to file
unigram_path = text_path + ".unigrams";
bigram_path = text_path + ".bigrams";
with open(unigram_path, "w") as f:
writer = csv.writer(f);
filtered = [ (w,c) for w,c in unigram_counts.items() if c > 1];
writer.writerows(filtered);
with open(bigram_path, "w") as f:
writer = csv.writer(f);
filtered = [ (b[0], b[1],c) for b,c in bigram_counts.items() if c > 3];
writer.writerows(filtered);
"""
Explanation: Collocations
Benjamin Bray
(this notebook requires NLTK)
Text Preprocessing
First, we preprocess the text document by
- converting to lowercase
- removing punctuation
- counting unigrams and bigrams
- saving unigram and bigram counts to a file
End of explanation
"""
unigram_counts.most_common(20)
"""
Explanation: Most Common Words & Phrases
Here are the top few most common words:
End of explanation
"""
bigram_counts.most_common(20)
"""
Explanation: Below are the most common word pairs. These aren't collocations!
End of explanation
"""
# compute pmi
pmi_bigrams = [];
for bigram,_ in bigram_counts.most_common(1000):
w1, w2 = bigram;
# compute pmi
actual = bigram_counts[bigram];
expected = unigram_counts[w1] * unigram_counts[w2];
pmi = math.log( actual / expected );
pmi_bigrams.append( (w1, w2, pmi) );
# sort pmi
pmi_sorted = sorted(pmi_bigrams, key=lambda x: x[2], reverse=True);
"""
Explanation: Collocations
To find collocations, we sort pairs of words by their pointwise mutual information,
$$
\mathrm{pmi}(x;y) = \log \frac{p(x,y)}{p(x)p(y)}
$$
End of explanation
"""
pmi_sorted[:30]
"""
Explanation: Here are the top 30 collocations according to PMI:
End of explanation
"""
pmi_sorted[-30:]
"""
Explanation: Just for fun, here are the bottom 30 collocations according to PMI. These are the word pairs that occur together less frequently than expected:
End of explanation
"""
unigram_path = "data/crime-and-punishment.txt.unigrams";
bigram_path = "data/crime-and-punishment.txt.bigrams";
with open(unigram_path) as f:
unigrams = { row[0] : int(row[1]) for row in csv.reader(f)}
with open(bigram_path) as f:
bigrams = { (row[0],row[1]) : int(row[2]) for row in csv.reader(f)}
bigrams
"""
Explanation: Reading from CSV
Here I'm just testing out reading from the CSV files I created:
End of explanation
"""
|
CompPhysics/MachineLearning | doc/src/week43/.ipynb_checkpoints/week43-checkpoint.ipynb | cc0-1.0 | %matplotlib inline
# Start importing packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, SimpleRNN, LSTM, GRU
from tensorflow.keras import optimizers
from tensorflow.keras import regularizers
from tensorflow.keras.utils import to_categorical
# convert into dataset matrix
def convertToMatrix(data, step):
X, Y =[], []
for i in range(len(data)-step):
d=i+step
X.append(data[i:d,])
Y.append(data[d,])
return np.array(X), np.array(Y)
step = 4
N = 1000
Tp = 800
t=np.arange(0,N)
x=np.sin(0.02*t)+2*np.random.rand(N)
df = pd.DataFrame(x)
df.head()
plt.plot(df)
plt.show()
values=df.values
train,test = values[0:Tp,:], values[Tp:N,:]
# add step elements into train and test
test = np.append(test,np.repeat(test[-1,],step))
train = np.append(train,np.repeat(train[-1,],step))
trainX,trainY =convertToMatrix(train,step)
testX,testY =convertToMatrix(test,step)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
model = Sequential()
model.add(SimpleRNN(units=32, input_shape=(1,step), activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
model.summary()
model.fit(trainX,trainY, epochs=100, batch_size=16, verbose=2)
trainPredict = model.predict(trainX)
testPredict= model.predict(testX)
predicted=np.concatenate((trainPredict,testPredict),axis=0)
trainScore = model.evaluate(trainX, trainY, verbose=0)
print(trainScore)
index = df.index.values
plt.plot(index,df)
plt.plot(index,predicted)
plt.axvline(df.index[Tp], c="r")
plt.show()
"""
Explanation: <!-- HTML file automatically generated from DocOnce source (https://github.com/doconce/doconce/)
doconce format html week43.do.txt -->
<!-- dom:TITLE: Week 43: Deep Learning: Recurrent Neural Networks and other Deep Learning Methods. Principal Component analysis -->
Week 43: Deep Learning: Recurrent Neural Networks and other Deep Learning Methods. Principal Component analysis
Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: Oct 28, 2021
Copyright 1999-2021, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
Plans for week 43
Thursday: Summary of Convolutional Neural Networks from week 42 and Recurrent Neural Networks
Friday: Recurrent Neural Networks and other Deep Learning methods such as Generalized Adversarial Neural Networks. Start discussing Principal component analysis
Excellent lectures on CNNs and RNNs.
Video on Convolutional Neural Networks from MIT
Video on Recurrent Neural Networks from MIT
Video on Deep Learning
More resources.
IN5400 at UiO Lecture
CS231 at Stanford Lecture
Reading Recommendations
Goodfellow et al, chapter 10 on Recurrent NNs, chapters 11 and 12 on various practicalities around deep learning are also recommended.
Aurelien Geron, chapter 14 on RNNs.
Summary on Deep Learning Methods
We have studied fully connected neural networks (also called artificial neural networks) and convolutional neural networks (CNNs).
The first type of deep learning network works very well on homogeneous and structured input data, while CNNs are normally tailored to recognizing images.
CNNs in brief
In summary:
A CNN architecture is in the simplest case a list of Layers that transform the image volume into an output volume (e.g. holding the class scores)
There are a few distinct types of Layers (e.g. CONV/FC/RELU/POOL are by far the most popular)
Each Layer accepts an input 3D volume and transforms it to an output 3D volume through a differentiable function
Each Layer may or may not have parameters (e.g. CONV/FC do, RELU/POOL don’t)
Each Layer may or may not have additional hyperparameters (e.g. CONV/FC/POOL do, RELU doesn’t)
For more material on convolutional networks, we strongly recommend
the course
IN5400 – Machine Learning for Image Analysis
and the slides of CS231 which is taught at Stanford University (consistently ranked as one of the top computer science programs in the world). Michael Nielsen's book is a must read, in particular chapter 6 which deals with CNNs.
However, neither standard feedforward networks nor CNNs cope well with input data of unknown or varying length.
This is where recurrent neural networks (RNNs) come to our rescue.
Recurrent neural networks: Overarching view
Till now our focus has been, including convolutional neural networks
as well, on feedforward neural networks. The output or the activations
flow only in one direction, from the input layer to the output layer.
A recurrent neural network (RNN) looks very much like a feedforward
neural network, except that it also has connections pointing
backward.
RNNs are used to analyze time series data such as stock prices, and
tell you when to buy or sell. In autonomous driving systems, they can
anticipate car trajectories and help avoid accidents. More generally,
they can work on sequences of arbitrary lengths, rather than on
fixed-sized inputs like all the nets we have discussed so far. For
example, they can take sentences, documents, or audio samples as
input, making them extremely useful for natural language processing
systems such as automatic translation and speech-to-text.
Set up of an RNN
More text to be added by Wednesday October 27.
A simple example
End of explanation
"""
# For matrices and calculations
import numpy as np
# For machine learning (backend for keras)
import tensorflow as tf
# User-friendly machine learning library
# Front end for TensorFlow
import tensorflow.keras
# Different methods from Keras needed to create an RNN
# This is not necessary but it shortened function calls
# that need to be used in the code.
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.layers import Input
from tensorflow.keras import regularizers
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, SimpleRNN, LSTM, GRU
# For timing the code
from timeit import default_timer as timer
# For plotting
import matplotlib.pyplot as plt
# The data set
datatype='VaryDimension'
X_tot = np.arange(2, 42, 2)
y_tot = np.array([-0.03077640549, -0.08336233266, -0.1446729567, -0.2116753732, -0.2830637392, -0.3581341341, -0.436462435, -0.5177783846,
-0.6019067271, -0.6887363571, -0.7782028952, -0.8702784034, -0.9649652536, -1.062292565, -1.16231451,
-1.265109911, -1.370782966, -1.479465113, -1.591317992, -1.70653767])
"""
Explanation: An extrapolation example
The following code provides an example of how recurrent neural
networks can be used to extrapolate to unknown values of physics data
sets. Specifically, the data sets used in this program come from
a quantum mechanical many-body calculation of energies as functions of the number of particles.
End of explanation
"""
# FORMAT_DATA
def format_data(data, length_of_sequence = 2):
"""
Inputs:
data(a numpy array): the data that will be the inputs to the recurrent neural
network
length_of_sequence (an int): the number of elements in one iteration of the
sequence pattern. For a function approximator use length_of_sequence = 2.
Returns:
rnn_input (a 3D numpy array): the input data for the recurrent neural network. Its
dimensions are length of data - length of sequence, length of sequence,
dimension of data
rnn_output (a numpy array): the training data for the neural network
Formats data to be used in a recurrent neural network.
"""
X, Y = [], []
for i in range(len(data)-length_of_sequence):
# Get the next length_of_sequence elements
a = data[i:i+length_of_sequence]
# Get the element that immediately follows that
b = data[i+length_of_sequence]
# Reshape so that each data point is contained in its own array
a = np.reshape (a, (len(a), 1))
X.append(a)
Y.append(b)
rnn_input = np.array(X)
rnn_output = np.array(Y)
return rnn_input, rnn_output
# ## Defining the Recurrent Neural Network Using Keras
#
# The following method defines a simple recurrent neural network in keras consisting of one input layer, one hidden layer, and one output layer.
def rnn(length_of_sequences, batch_size = None, stateful = False):
"""
Inputs:
length_of_sequences (an int): the number of y values in "x data". This is determined
when the data is formatted
batch_size (an int): Default value is None. See Keras documentation of SimpleRNN.
stateful (a boolean): Default value is False. See Keras documentation of SimpleRNN.
Returns:
model (a Keras model): The recurrent neural network that is built and compiled by this
method
Builds and compiles a recurrent neural network with one hidden layer and returns the model.
"""
# Number of neurons in the input and output layers
in_out_neurons = 1
# Number of neurons in the hidden layer
hidden_neurons = 200
# Define the input layer
inp = Input(batch_shape=(batch_size,
length_of_sequences,
in_out_neurons))
# Define the hidden layer as a simple RNN layer with a set number of neurons and add it to
# the network immediately after the input layer
rnn = SimpleRNN(hidden_neurons,
return_sequences=False,
stateful = stateful,
name="RNN")(inp)
# Define the output layer as a dense neural network layer (standard neural network layer)
#and add it to the network immediately after the hidden layer.
dens = Dense(in_out_neurons,name="dense")(rnn)
# Create the machine learning model starting with the input layer and ending with the
# output layer
model = Model(inputs=[inp],outputs=[dens])
# Compile the machine learning model using the mean squared error function as the loss
# function and an Adam optimizer.
model.compile(loss="mean_squared_error", optimizer="adam")
return model
"""
Explanation: Formatting the Data
The way the recurrent neural networks are trained in this program
differs from how machine learning algorithms are usually trained.
Typically a machine learning algorithm is trained by learning the
relationship between the x data and the y data. In this program, the
recurrent neural network will be trained to recognize the relationship
in a sequence of y values. This type of data formatting is
typically used in time series forecasting, but it can also be used in any
extrapolation (time series forecasting is just a specific type of
extrapolation along the time axis). This method of data formatting
does not use the x data and assumes that the y data are evenly spaced.
For a standard machine learning algorithm, the training data has the
form of (x,y) so the machine learning algorithm learns to associate a
y value with a given x value. This is useful when the test data has x
values within the same range as the training data. However, for this
application, the x values of the test data are outside of the x values
of the training data and the traditional method of training a machine
learning algorithm does not work as well. For this reason, the
recurrent neural network is trained on sequences of y values of the
form ((y1, y2), y3), so that the network is concerned with learning
the pattern of the y data and not the relation between the x and y
data. As long as the pattern of y data outside of the training region
stays relatively stable compared to what was inside the training
region, this method of training can produce accurate extrapolations to
y values far removed from the training data set.
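The pairing produced by this formatting can be sketched on a toy array (the values below are invented purely for illustration; the same sliding-window logic is what the format_data method implements):

```python
import numpy as np

# Toy y data; with a sequence length of 2, each input is a pair of
# consecutive y values and each target is the value that follows them.
y = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
seq_len = 2

pairs = [(y[i:i + seq_len].tolist(), float(y[i + seq_len]))
         for i in range(len(y) - seq_len)]
print(pairs)
# The first pair is ([0.1, 0.2], 0.3): the network sees (y1, y2) and is
# trained to predict y3, with no reference to the x values at all.
```

Note that a 5-point series yields only 3 training pairs, which is why longer sequence lengths shrink these small data sets quickly.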
<!-- -->
<!-- The idea behind formatting the data in this way comes from [this resource](https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/) and [this one](https://fairyonice.github.io/Understand-Keras%27s-RNN-behind-the-scenes-with-a-sin-wave-example.html). -->
<!-- -->
<!-- The following method takes in a y data set and formats it so the "x data" are of the form (y1, y2) and the "y data" are of the form y3, with extra brackets added in to make the resulting arrays compatible with both Keras and Tensorflow. -->
<!-- -->
<!-- Note: Using a sequence length of two is not required for time series forecasting so any length of sequence could be used (for example instead of ((y1, y2) y3) you could change the length of sequence to be 4 and the resulting data points would have the form ((y1, y2, y3, y4), y5)). While the following method can be used to create a data set of any sequence length, the remainder of the code expects the length of sequence to be 2. This is because the data sets are very small and the longer the sequence the fewer resulting data points. -->
End of explanation
"""
def test_rnn (x1, y_test, plot_min, plot_max):
"""
Inputs:
x1 (a list or numpy array): The complete x component of the data set
y_test (a list or numpy array): The complete y component of the data set
plot_min (an int or float): the smallest x value used in the training data
plot_max (an int or float): the largest x value used in the training data
Returns:
None.
Uses a trained recurrent neural network model to predict future points in the
series. Computes the MSE of the predicted data set from the true data set, saves
the predicted data set to a csv file, and plots the predicted and true data sets
while also displaying the data range used for training.
"""
# Add the training data as the first dim points in the predicted data array as these
# are known values.
y_pred = y_test[:dim].tolist()
# Generate the first input to the trained recurrent neural network using the last two
# points of the training data. Based on how the network was trained this means that it
# will predict the first point in the data set after the training data. All of the
# brackets are necessary for Tensorflow.
next_input = np.array([[[y_test[dim-2]], [y_test[dim-1]]]])
# Save the very last point in the training data set. This will be used later.
last = [y_test[dim-1]]
# Iterate until the complete data set is created.
for i in range (dim, len(y_test)):
# Predict the next point in the data set using the previous two points.
next_pt = model.predict(next_input)
# Append just the number from the predicted array
y_pred.append(next_pt[0][0])
# Create the input that will be used to predict the next data point in the data set.
next_input = np.array([[last, next_pt[0]]], dtype=np.float64)
# Keep only the predicted value (shape (1,)) so the next input stays well-formed
last = next_pt[0]
# Print the mean squared error between the known data set and the predicted data set.
print('MSE: ', np.square(np.subtract(y_test, y_pred)).mean())
# Save the predicted data set as a csv file for later use
name = datatype + 'Predicted'+str(dim)+'.csv'
np.savetxt(name, y_pred, delimiter=',')
# Plot the known data set and the predicted data set. The red box represents the region that was used
# for the training data.
fig, ax = plt.subplots()
ax.plot(x1, y_test, label="true", linewidth=3)
ax.plot(x1, y_pred, 'g-.',label="predicted", linewidth=4)
ax.legend()
# Created a red region to represent the points used in the training data.
ax.axvspan(plot_min, plot_max, alpha=0.25, color='red')
plt.show()
# Check to make sure the data set is complete
assert len(X_tot) == len(y_tot)
# This is the number of points that will be used in as the training data
dim=12
# Separate the training data from the whole data set
X_train = X_tot[:dim]
y_train = y_tot[:dim]
# Generate the training data for the RNN, using a sequence of 2
rnn_input, rnn_training = format_data(y_train, 2)
# Create a recurrent neural network in Keras and produce a summary of the
# machine learning model
model = rnn(length_of_sequences = rnn_input.shape[1])
model.summary()
# Start the timer. Want to time training+testing
start = timer()
# Fit the model using the training data generated above, with 150 training iterations and a 5%
# validation split. Setting verbose to True prints information about each training iteration.
hist = model.fit(rnn_input, rnn_training, batch_size=None, epochs=150,
verbose=True,validation_split=0.05)
for label in ["loss","val_loss"]:
plt.plot(hist.history[label],label=label)
plt.ylabel("loss")
plt.xlabel("epoch")
plt.title("The final validation loss: {}".format(hist.history["val_loss"][-1]))
plt.legend()
plt.show()
# Use the trained neural network to predict more points of the data set
test_rnn(X_tot, y_tot, X_tot[0], X_tot[dim-1])
# Stop the timer and calculate the total time needed.
end = timer()
print('Time: ', end-start)
"""
Explanation: Predicting New Points With A Trained Recurrent Neural Network
End of explanation
"""
def rnn_2layers(length_of_sequences, batch_size = None, stateful = False):
"""
Inputs:
length_of_sequences (an int): the number of y values in "x data". This is determined
when the data is formatted
batch_size (an int): Default value is None. See Keras documentation of SimpleRNN.
stateful (a boolean): Default value is False. See Keras documentation of SimpleRNN.
Returns:
model (a Keras model): The recurrent neural network that is built and compiled by this
method
Builds and compiles a recurrent neural network with two hidden layers and returns the model.
"""
# Number of neurons in the input and output layers
in_out_neurons = 1
# Number of neurons in the hidden layer, increased from the first network
hidden_neurons = 500
# Define the input layer
inp = Input(batch_shape=(batch_size,
length_of_sequences,
in_out_neurons))
# Create two hidden layers instead of one hidden layer. Explicitly set the activation
# function to be the sigmoid function (the default value is hyperbolic tangent)
rnn1 = SimpleRNN(hidden_neurons,
return_sequences=True, # This needs to be True if another hidden layer is to follow
stateful = stateful, activation = 'sigmoid',
name="RNN1")(inp)
rnn2 = SimpleRNN(hidden_neurons,
return_sequences=False, activation = 'sigmoid',
stateful = stateful,
name="RNN2")(rnn1)
# Define the output layer as a dense neural network layer (standard neural network layer)
#and add it to the network immediately after the hidden layer.
dens = Dense(in_out_neurons,name="dense")(rnn2)
# Create the machine learning model starting with the input layer and ending with the
# output layer
model = Model(inputs=[inp],outputs=[dens])
# Compile the machine learning model using the mean squared error function as the loss
# function and an Adam optimizer.
model.compile(loss="mean_squared_error", optimizer="adam")
return model
# Check to make sure the data set is complete
assert len(X_tot) == len(y_tot)
# This is the number of points that will be used in as the training data
dim=12
# Separate the training data from the whole data set
X_train = X_tot[:dim]
y_train = y_tot[:dim]
# Generate the training data for the RNN, using a sequence of 2
rnn_input, rnn_training = format_data(y_train, 2)
# Create a recurrent neural network in Keras and produce a summary of the
# machine learning model
model = rnn_2layers(length_of_sequences = 2)
model.summary()
# Start the timer. Want to time training+testing
start = timer()
# Fit the model using the training data generated above, with 150 training iterations and a 5%
# validation split. Setting verbose to True prints information about each training iteration.
hist = model.fit(rnn_input, rnn_training, batch_size=None, epochs=150,
verbose=True,validation_split=0.05)
# This section plots the training loss and the validation loss as a function of training iteration.
# This is not required for analyzing the coupled cluster data but can help determine if the network is
# being overtrained.
for label in ["loss","val_loss"]:
plt.plot(hist.history[label],label=label)
plt.ylabel("loss")
plt.xlabel("epoch")
plt.title("The final validation loss: {}".format(hist.history["val_loss"][-1]))
plt.legend()
plt.show()
# Use the trained neural network to predict more points of the data set
test_rnn(X_tot, y_tot, X_tot[0], X_tot[dim-1])
# Stop the timer and calculate the total time needed.
end = timer()
print('Time: ', end-start)
"""
Explanation: Other Things to Try
Changing the size of the recurrent neural network and its parameters
can drastically change the results you get from the model. The below
code takes the simple recurrent neural network from above and adds a
second hidden layer, changes the number of neurons in the hidden
layer, and explicitly declares the activation function of the hidden
layers to be a sigmoid function. The loss function and optimizer can
also be changed but are kept the same as the above network. These
parameters can be tuned to provide the optimal result from the
network, and experimenting with them is the first place to look when
trying to improve the performance of a recurrent neural network.
End of explanation
"""
def lstm_2layers(length_of_sequences, batch_size = None, stateful = False):
"""
Inputs:
length_of_sequences (an int): the number of y values in "x data". This is determined
when the data is formatted
batch_size (an int): Default value is None. See Keras documentation of SimpleRNN.
stateful (a boolean): Default value is False. See Keras documentation of SimpleRNN.
Returns:
model (a Keras model): The recurrent neural network that is built and compiled by this
method
Builds and compiles a recurrent neural network with two LSTM hidden layers and returns the model.
"""
# Number of neurons on the input/output layer and the number of neurons in the hidden layer
in_out_neurons = 1
hidden_neurons = 250
# Input Layer
inp = Input(batch_shape=(batch_size,
length_of_sequences,
in_out_neurons))
# Hidden layers (in this case they are LSTM layers instead of SimpleRNN layers)
rnn = LSTM(hidden_neurons,
return_sequences=True,
stateful = stateful,
name="RNN", use_bias=True, activation='tanh')(inp)
rnn1 = LSTM(hidden_neurons,
return_sequences=False,
stateful = stateful,
name="RNN1", use_bias=True, activation='tanh')(rnn)
# Output layer
dens = Dense(in_out_neurons,name="dense")(rnn1)
# Define the model
model = Model(inputs=[inp],outputs=[dens])
# Compile the model
model.compile(loss='mean_squared_error', optimizer='adam')
# Return the model
return model
def dnn2_gru2(length_of_sequences, batch_size = None, stateful = False):
"""
Inputs:
length_of_sequences (an int): the number of y values in "x data". This is determined
when the data is formatted
batch_size (an int): Default value is None. See Keras documentation of SimpleRNN.
stateful (a boolean): Default value is False. See Keras documentation of SimpleRNN.
Returns:
model (a Keras model): The recurrent neural network that is built and compiled by this
method
Builds and compiles a recurrent neural network with four hidden layers (two dense followed by
two GRU layers) and returns the model.
"""
# Number of neurons on the input/output layers and hidden layers
in_out_neurons = 1
hidden_neurons = 250
# Input layer
inp = Input(batch_shape=(batch_size,
length_of_sequences,
in_out_neurons))
# Hidden Dense (feedforward) layers
dnn = Dense(hidden_neurons//2, activation='relu', name='dnn')(inp)
dnn1 = Dense(hidden_neurons//2, activation='relu', name='dnn1')(dnn)
# Hidden GRU layers
rnn1 = GRU(hidden_neurons,
return_sequences=True,
stateful = stateful,
name="RNN1", use_bias=True)(dnn1)
rnn = GRU(hidden_neurons,
return_sequences=False,
stateful = stateful,
name="RNN", use_bias=True)(rnn1)
# Output layer
dens = Dense(in_out_neurons,name="dense")(rnn)
# Define the model
model = Model(inputs=[inp],outputs=[dens])
# Compile the model
model.compile(loss='mean_squared_error', optimizer='adam')
# Return the model
return model
# Check to make sure the data set is complete
assert len(X_tot) == len(y_tot)
# This is the number of points that will be used in as the training data
dim=12
# Separate the training data from the whole data set
X_train = X_tot[:dim]
y_train = y_tot[:dim]
# Generate the training data for the RNN, using a sequence of 2
rnn_input, rnn_training = format_data(y_train, 2)
# Create a recurrent neural network in Keras and produce a summary of the
# machine learning model
# Change the method name to reflect which network you want to use
model = dnn2_gru2(length_of_sequences = 2)
model.summary()
# Start the timer. Want to time training+testing
start = timer()
# Fit the model using the training data generated above, with 150 training iterations and a 5%
# validation split. Setting verbose to True prints information about each training iteration.
hist = model.fit(rnn_input, rnn_training, batch_size=None, epochs=150,
verbose=True,validation_split=0.05)
# This section plots the training loss and the validation loss as a function of training iteration.
# This is not required for analyzing the coupled cluster data but can help determine if the network is
# being overtrained.
for label in ["loss","val_loss"]:
plt.plot(hist.history[label],label=label)
plt.ylabel("loss")
plt.xlabel("epoch")
plt.title("The final validation loss: {}".format(hist.history["val_loss"][-1]))
plt.legend()
plt.show()
# Use the trained neural network to predict more points of the data set
test_rnn(X_tot, y_tot, X_tot[0], X_tot[dim-1])
# Stop the timer and calculate the total time needed.
end = timer()
print('Time: ', end-start)
# ### Training Recurrent Neural Networks in the Standard Way (i.e. learning the relationship between the X and Y data)
#
# Finally, comparing the performance of a recurrent neural network using the standard data formatting to the performance of the network with time sequence data formatting shows the benefit of this type of data formatting for extrapolation.
# Check to make sure the data set is complete
assert len(X_tot) == len(y_tot)
# This is the number of points that will be used in as the training data
dim=12
# Separate the training data from the whole data set
X_train = X_tot[:dim]
y_train = y_tot[:dim]
# Reshape the data for Keras specifications
X_train = X_train.reshape((dim, 1))
y_train = y_train.reshape((dim, 1))
# Create a recurrent neural network in Keras and produce a summary of the
# machine learning model
# Set the sequence length to 1 for regular data formatting
model = rnn(length_of_sequences = 1)
model.summary()
# Start the timer. Want to time training+testing
start = timer()
# Fit the model using the training data generated above, with 150 training iterations and a 5%
# validation split. Setting verbose to True prints information about each training iteration.
hist = model.fit(X_train, y_train, batch_size=None, epochs=150,
verbose=True,validation_split=0.05)
# This section plots the training loss and the validation loss as a function of training iteration.
# This is not required for analyzing the coupled cluster data but can help determine if the network is
# being overtrained.
for label in ["loss","val_loss"]:
plt.plot(hist.history[label],label=label)
plt.ylabel("loss")
plt.xlabel("epoch")
plt.title("The final validation loss: {}".format(hist.history["val_loss"][-1]))
plt.legend()
plt.show()
# Use the trained neural network to predict the remaining data points
X_pred = X_tot[dim:]
X_pred = X_pred.reshape((len(X_pred), 1))
y_model = model.predict(X_pred)
y_pred = np.concatenate((y_tot[:dim], y_model.flatten()))
# Plot the known data set and the predicted data set. The red box represents the region that was used
# for the training data.
fig, ax = plt.subplots()
ax.plot(X_tot, y_tot, label="true", linewidth=3)
ax.plot(X_tot, y_pred, 'g-.',label="predicted", linewidth=4)
ax.legend()
# Created a red region to represent the points used in the training data.
ax.axvspan(X_tot[0], X_tot[dim-1], alpha=0.25, color='red')
plt.show()
# Stop the timer and calculate the total time needed.
end = timer()
print('Time: ', end-start)
"""
Explanation: Other Types of Recurrent Neural Networks
Besides a simple recurrent neural network layer, there are two other
commonly used types of recurrent neural network layers: Long Short
Term Memory (LSTM) and Gated Recurrent Unit (GRU). For a short
introduction to these layers see https://medium.com/mindboard/lstm-vs-gru-experimental-comparison-955820c21e8b.
The first network created below is similar to the previous network,
but it replaces the SimpleRNN layers with LSTM layers. The second
network below has two hidden layers made up of GRUs, which are
preceeded by two dense (feeddorward) neural network layers. These
dense layers "preprocess" the data before it reaches the recurrent
layers. This architecture has been shown to improve the performance
of recurrent neural networks (see the link above and also
https://arxiv.org/pdf/1807.02857.pdf).
End of explanation
"""
import os
import time
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model
"""
Explanation: Generative Models
Generative models describe a class of statistical models that are a contrast
to discriminative models. Informally we say that generative models can
generate new data instances while discriminative models discriminate between
different kinds of data instances. A generative model could generate new photos
of animals that look like 'real' animals while a discriminative model could tell
a dog from a cat. More formally, given a data set $x$ and a set of labels /
targets $y$. Generative models capture the joint probability $p(x, y)$, or
just $p(x)$ if there are no labels, while discriminative models capture the
conditional probability $p(y | x)$. Discriminative models generally try to draw
boundaries in the data space (often high dimensional), while generative models
try to model how data is placed throughout the space.
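As a minimal numerical illustration of the difference between the two targets, consider an invented discrete joint distribution over a feature $x$ and a label $y$; the generative target $p(x, y)$ is the table itself, while the discriminative target $p(y \mid x)$ is derived from it:

```python
# A tiny, made-up joint distribution p(x, y) over (feature, label) pairs.
joint = {
    ("furry", "cat"): 0.30,
    ("furry", "dog"): 0.45,
    ("hairless", "cat"): 0.05,
    ("hairless", "dog"): 0.20,
}
# A generative model captures p(x, y) directly; a discriminative model
# only needs the conditional p(y | x) = p(x, y) / p(x).
p_x_furry = sum(p for (x, y), p in joint.items() if x == "furry")
p_cat_given_furry = joint[("furry", "cat")] / p_x_furry
print(p_cat_given_furry)  # 0.30 / 0.75, i.e. about 0.4
```

The generative model can also sample new $(x, y)$ pairs from the table, which the conditional alone cannot do.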
Note: this material is thanks to Linus Ekstrøm.
Generative Adversarial Networks
Generative Adversarial Networks are a type of unsupervised machine learning
algorithm proposed by Goodfellow et al.
in 2014 (short and good article).
The simplest formulation of
the model is based on a game theoretic approach, zero sum game, where we pit
two neural networks against one another. We define two rival networks, one
generator $g$, and one discriminator $d$. The generator directly produces
samples
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
x = g(z; \theta^{(g)})
\label{_auto1} \tag{1}
\end{equation}
$$
Discriminator
The discriminator attempts to distinguish between samples drawn from the
training data and samples drawn from the generator. In other words, it tries to
tell the difference between the fake data produced by $g$ and the actual data
samples we want to do prediction on. The discriminator outputs a probability
value given by
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
d(x; \theta^{(d)})
\label{_auto2} \tag{2}
\end{equation}
$$
indicating the probability that $x$ is a real training example rather than a
fake sample the generator has generated. The simplest way to formulate the
learning process in a generative adversarial network is a zero-sum game, in
which a function
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
v(\theta^{(g)}, \theta^{(d)})
\label{_auto3} \tag{3}
\end{equation}
$$
determines the reward for the discriminator, while the generator gets the
conjugate reward
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
-v(\theta^{(g)}, \theta^{(d)})
\label{_auto4} \tag{4}
\end{equation}
$$
Learning Process
During learning both of the networks maximize their own reward function, so that
the generator gets better and better at tricking the discriminator, while the
discriminator gets better and better at telling the difference between the fake
and real data. The generator and discriminator alternate on which one trains at
one time (i.e. for one epoch). In other words, we keep the generator constant
and train the discriminator, then we keep the discriminator constant to train
the generator and repeat. It is this back and forth dynamic which lets GANs
tackle otherwise intractable generative problems. As the generator improves with
training, the discriminator's performance gets worse because it cannot easily
tell the difference between real and fake. If the generator ends up succeeding
perfectly, then the discriminator will do no better than random guessing, i.e.
50\%. This progression in the training poses a problem for the convergence
criteria for GANs. The discriminator feedback gets less meaningful over time,
if we continue training after this point then the generator is effectively
training on junk data which can undo the learning up to that point. Therefore,
we stop training when the discriminator starts outputting $1/2$ everywhere.
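The alternation itself can be sketched without any deep learning framework. Below, the generator and discriminator are each reduced to a single scalar parameter with made-up quadratic losses — this is purely to show the hold-one-fixed-update-the-other pattern, not a working GAN:

```python
# Schematic of alternating GAN updates: the discriminator takes a gradient
# step while the generator is frozen, then vice versa. Losses are invented
# quadratics whose joint fixed point is theta_g = theta_d = 1.0.
theta_g, theta_d = 0.0, 0.0
lr, eps = 0.1, 1e-6

def d_loss(g, d):
    # stand-in for the discriminator objective: pull d toward the "data" (1.0)
    return (d - 1.0) ** 2 + 0.1 * (d - g) ** 2

def g_loss(g, d):
    # stand-in for the generator objective: chase the current discriminator
    return (g - d) ** 2

for epoch in range(100):
    # discriminator step (generator frozen), via a numerical gradient in d
    grad_d = (d_loss(theta_g, theta_d + eps) - d_loss(theta_g, theta_d - eps)) / (2 * eps)
    theta_d -= lr * grad_d
    # generator step (discriminator frozen), via a numerical gradient in g
    grad_g = (g_loss(theta_g + eps, theta_d) - g_loss(theta_g - eps, theta_d)) / (2 * eps)
    theta_g -= lr * grad_g

print(theta_g, theta_d)  # both parameters settle near 1.0
```

In a real GAN each "step" is a minibatch update of a full network, and — as discussed above — convergence of this chase is not guaranteed.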
More about the Learning Process
At convergence we have
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
g^* = \underset{g}{\mathrm{argmin}}\hspace{2pt}
\underset{d}{\mathrm{max}}v(\theta^{(g)}, \theta^{(d)})
\label{_auto5} \tag{5}
\end{equation}
$$
The default choice for $v$ is
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
v(\theta^{(g)}, \theta^{(d)}) = \mathbb{E}_{x\sim p_\mathrm{data}}\log d(x)
+ \mathbb{E}_{x\sim p_\mathrm{model}}
\log (1 - d(x))
\label{_auto6} \tag{6}
\end{equation}
$$
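To get a feel for this quantity, $v$ can be evaluated numerically for a hypothetical discriminator (the output probabilities below are invented for illustration):

```python
import math

# Sample averages standing in for the two expectations in v:
# log d(x) over "real" samples and log(1 - d(x)) over "fake" ones.
d_real = [0.9, 0.8, 0.95]   # assumed discriminator outputs on real data
d_fake = [0.2, 0.1, 0.15]   # assumed discriminator outputs on generated data

v = (sum(math.log(p) for p in d_real) / len(d_real)
     + sum(math.log(1 - p) for p in d_fake) / len(d_fake))
print(v)
# A well-performing discriminator (d near 1 on real data, near 0 on fakes)
# pushes v toward 0, its maximum; at the d = 1/2-everywhere equilibrium,
# v = 2*log(1/2), roughly -1.386.
```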
The main motivation for the design of GANs is that the learning process requires
neither approximate inference (variational autoencoders for example) nor
approximation of a partition function. In the case where
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
\underset{d}{\mathrm{max}}v(\theta^{(g)}, \theta^{(d)})
\label{_auto7} \tag{7}
\end{equation}
$$
is convex in $\theta^{(g)}$ then the procedure is guaranteed to converge and is
asymptotically consistent
(Seth Lloyd on QuGANs).
Additional References
This is in
general not the case and it is possible to get situations where the training
process never converges because the generator and discriminator chase one
another around in the parameter space indefinitely. A much deeper discussion on
the currently open research problem of GAN convergence is available
here. To
anyone interested in learning more about GANs it is a highly recommended read.
Direct quote: "In this best-performing formulation, the generator aims to
increase the log probability that the discriminator makes a mistake, rather than
aiming to decrease the log probability that the discriminator makes the correct
prediction." Another interesting read
Writing Our First Generative Adversarial Network
Let us now move on to actually implementing a GAN in tensorflow. We will study
the performance of our GAN on the MNIST dataset. This code is based on and
adapted from the
google tutorial
First we import our libraries
End of explanation
"""
BUFFER_SIZE = 60000
BATCH_SIZE = 256
EPOCHS = 30
data = tf.keras.datasets.mnist.load_data()
(train_images, train_labels), (test_images, test_labels) = data
train_images = np.reshape(train_images, (train_images.shape[0],
28,
28,
1)).astype('float32')
# we normalize between -1 and 1
train_images = (train_images - 127.5) / 127.5
training_dataset = tf.data.Dataset.from_tensor_slices(
train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
"""
Explanation: Next we define our hyperparameters and import our data the usual way
End of explanation
"""
plt.imshow(train_images[0], cmap='Greys')
plt.show()
"""
Explanation: MNIST and GANs
Let's have a quick look
End of explanation
"""
def generator_model():
"""
The generator uses upsampling layers tf.keras.layers.Conv2DTranspose() to
produce an image from a random seed. We start with a Dense layer taking this
random sample as an input and subsequently upsample through multiple
convolutional layers.
"""
# we define our model
model = tf.keras.Sequential()
# adding our input layer. Dense means that every neuron is connected and
# the input shape is the shape of our random noise. The units need to match
# in some sense the upsampling strides to reach our desired output shape.
# we are using 100 random numbers as our seed. Note that the 256 below is
# the number of feature maps after reshaping, not the batch size.
model.add(layers.Dense(units=7*7*256,
use_bias=False,
input_shape=(100, )))
# we normalize the output form the Dense layer
model.add(layers.BatchNormalization())
# and add an activation function to our 'layer'. LeakyReLU avoids vanishing
# gradient problem
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256)
# even though we just added four keras layers we think of everything above
# as 'one' layer
# next we add our upscaling convolutional layers
model.add(layers.Conv2DTranspose(filters=128,
kernel_size=(5, 5),
strides=(1, 1),
padding='same',
use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.Conv2DTranspose(filters=64,
kernel_size=(5, 5),
strides=(2, 2),
padding='same',
use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.Conv2DTranspose(filters=1,
kernel_size=(5, 5),
strides=(2, 2),
padding='same',
use_bias=False,
activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
"""
Explanation: Now we define our two models. This is where the 'magic' happens. There are a
huge amount of possible formulations for both models. A lot of engineering and
trial and error can be done here to try to produce better performing models. For
more advanced GANs this is by far the step where you can 'make or break' a
model.
We start with the generator. As stated in the introductory text the generator
$g$ upsamples from a random sample to the shape of what we want to predict. In
our case we are trying to predict MNIST images ($28\times 28$ pixels).
End of explanation
"""
def discriminator_model():
"""
The discriminator is a convolutional neural network based image classifier
"""
# we define our model
model = tf.keras.Sequential()
model.add(layers.Conv2D(filters=64,
kernel_size=(5, 5),
strides=(2, 2),
padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
# adding a dropout layer as you do in conv-nets
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(filters=128,
kernel_size=(5, 5),
strides=(2, 2),
padding='same'))
model.add(layers.LeakyReLU())
# adding a dropout layer as you do in conv-nets
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
return model
"""
Explanation: And there we have our 'simple' generator model. Now we move on to defining our
discriminator model $d$, which is a convolutional neural network based image
classifier.
End of explanation
"""
generator = generator_model()
plot_model(generator, show_shapes=True, rankdir='LR')
discriminator = discriminator_model()
plot_model(discriminator, show_shapes=True, rankdir='LR')
"""
Explanation: Other Models
Let us take a look at our models. Note: double click images for bigger view.
End of explanation
"""
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
"""
Explanation: Next we need a few helper objects we will use in training
End of explanation
"""
def generator_loss(fake_output):
loss = cross_entropy(tf.ones_like(fake_output), fake_output)
return loss
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
"""
Explanation: The first object, cross_entropy is our loss function and the two others are
our optimizers. Notice we use the same learning rate for both $g$ and $d$. This
is because they need to improve their accuracy at approximately equal speeds to
get convergence (not necessarily exactly equal). Now we define our loss
functions
End of explanation
"""
noise_dimension = 100
n_examples_to_generate = 16
seed_images = tf.random.normal([n_examples_to_generate, noise_dimension])
"""
Explanation: Next we define a kind of seed to help us compare the learning process over
multiple training epochs.
End of explanation
"""
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dimension])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss,
generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss,
discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator,
generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator,
discriminator.trainable_variables))
return gen_loss, disc_loss
"""
Explanation: Training Step
Now we have everything we need to define our training step, which we will apply
for every step in our training loop. Notice the @tf.function flag signifying
that the function is tensorflow 'compiled'. Removing this flag doubles the
computation time.
End of explanation
"""
def generate_and_save_images(model, epoch, test_input):
# we're making inferences here
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig(f'./images_from_seed_images/image_at_epoch_{str(epoch).zfill(3)}.png')
plt.close()
#plt.show()
"""
Explanation: Next we define a helper function to produce an output over our training epochs
to see the predictive progression of our generator model. Note: I am including
this code here, but comment it out in the training loop.
End of explanation
"""
# Setting up checkpoints to save model during training
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
"""
Explanation: Checkpoints
Setting up checkpoints to periodically save our model during training so that
everything is not lost even if the program were to somehow terminate while
training.
End of explanation
"""
def train(dataset, epochs):
generator_loss_list = []
discriminator_loss_list = []
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
gen_loss, disc_loss = train_step(image_batch)
generator_loss_list.append(gen_loss.numpy())
discriminator_loss_list.append(disc_loss.numpy())
#generate_and_save_images(generator, epoch + 1, seed_images)
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
        print(f'Time for epoch {epoch + 1} is {time.time() - start}')
#generate_and_save_images(generator, epochs, seed_images)
loss_file = './data/lossfile.txt'
with open(loss_file, 'w') as outfile:
outfile.write(str(generator_loss_list))
outfile.write('\n')
outfile.write('\n')
outfile.write(str(discriminator_loss_list))
outfile.write('\n')
outfile.write('\n')
"""
Explanation: Now we define our training loop
End of explanation
"""
train(train_dataset, EPOCHS)
"""
Explanation: To train simply call this function. Warning: this might take a long time so
there is a folder of a pretrained network already included in the repository.
End of explanation
"""
from IPython.display import HTML
_s = """
<embed src="images_from_seed_images/generation.gif" autoplay="false" loop="true"></embed>
<p><em></em></p>
"""
HTML(_s)
"""
Explanation: And here is the result of training our model for 100 epochs
<!-- dom:MOVIE: [images_from_seed_images/generation.gif] -->
<!-- begin movie -->
End of explanation
"""
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
restored_generator = checkpoint.generator
restored_discriminator = checkpoint.discriminator
print(restored_generator)
print(restored_discriminator)
"""
Explanation: <!-- end movie -->
Now, to avoid having to train everything, which will take a while depending
on your computer setup, we load in the model which produced the above gif.
End of explanation
"""
def generate_latent_points(number=100, scale_means=1, scale_stds=1):
latent_dim = 100
    means = scale_means * tf.linspace(-1.0, 1.0, num=latent_dim)
    stds = scale_stds * tf.linspace(-1.0, 1.0, num=latent_dim)
    latent_space_value_range = tf.random.normal([number, latent_dim],
                                                tf.cast(means, tf.float64),
                                                tf.cast(stds, tf.float64),
                                                dtype=tf.float64)
return latent_space_value_range
def generate_images(latent_points):
# notice we set training to false because we are making inferences
generated_images = restored_generator.predict(latent_points)
return generated_images
def plot_result(generated_images, number=100):
# obviously this assumes sqrt number is an int
fig, axs = plt.subplots(int(np.sqrt(number)), int(np.sqrt(number)),
figsize=(10, 10))
for i in range(int(np.sqrt(number))):
for j in range(int(np.sqrt(number))):
            axs[i, j].imshow(generated_images[i*int(np.sqrt(number)) + j], cmap='Greys')
axs[i, j].axis('off')
plt.show()
generated_images = generate_images(generate_latent_points())
plot_result(generated_images)
"""
Explanation: Exploring the Latent Space
We have successfully loaded in our latest model. Let us now play around a bit
and see what kind of things we can learn about this model. Our generator takes
an array of 100 numbers. One idea can be to try to systematically change our
input. Let us try and see what we get
End of explanation
"""
plot_number = 225
generated_images = generate_images(generate_latent_points(number=plot_number,
scale_means=5,
scale_stds=1))
plot_result(generated_images, number=plot_number)
generated_images = generate_images(generate_latent_points(number=plot_number,
scale_means=-5,
scale_stds=1))
plot_result(generated_images, number=plot_number)
generated_images = generate_images(generate_latent_points(number=plot_number,
scale_means=1,
scale_stds=5))
plot_result(generated_images, number=plot_number)
"""
Explanation: Getting Results
We see that the generator generates images that look like MNIST
numbers: $1, 4, 7, 9$. Let's try to tweak it a bit more to see if we are able
to generate a similar plot where we generate every MNIST number. Let us now try
to 'move' around a bit in the latent space. Note: decrease the plot number if
the following cells take too long to run on your computer.
End of explanation
"""
plot_number = 400
generated_images = generate_images(generate_latent_points(number=plot_number,
scale_means=1,
scale_stds=10))
plot_result(generated_images, number=plot_number)
"""
Explanation: Again, we have found something interesting. Moving around using our means
takes us from digit to digit, while moving around using our standard
deviations seems to increase the number of different digits! In the last image
above, we can barely make out every MNIST digit. Let us make one last plot using
this information by upping the standard deviation of our Gaussian noise.
End of explanation
"""
def interpolation(point_1, point_2, n_steps=10):
ratios = np.linspace(0, 1, num=n_steps)
vectors = []
for i, ratio in enumerate(ratios):
vectors.append(((1.0 - ratio) * point_1 + ratio * point_2))
return tf.stack(vectors)
"""
Explanation: A pretty cool result! We see that our generator indeed has learned a
distribution which qualitatively looks a whole lot like the MNIST dataset.
Interpolating Between MNIST Digits
Another interesting way to explore the latent space of our generator model is by
interpolating between the MNIST digits. This section is largely based on
this excellent blogpost
by Jason Brownlee.
So let us start by defining a function to interpolate between two points in the
latent space.
End of explanation
"""
plot_number = 100
latent_points = generate_latent_points(number=plot_number)
results = None
for i in range(0, int(2 * np.sqrt(plot_number)), 2):
    interpolated = interpolation(latent_points[i], latent_points[i+1])
    generated_images = generate_images(interpolated)
    if results is None:
        results = generated_images
    else:
        results = tf.concat((results, generated_images), axis=0)
plot_result(results, plot_number)
"""
Explanation: Now we have all we need to do our interpolation analysis.
End of explanation
"""
# Importing various packages
import numpy as np
n = 100
x = np.random.normal(size=n)
print(np.mean(x))
y = 4+3*x+np.random.normal(size=n)
print(np.mean(y))
W = np.vstack((x, y))
C = np.cov(W)
print(C)
"""
Explanation: Basic ideas of the Principal Component Analysis (PCA)
The principal component analysis deals with the problem of fitting a
low-dimensional affine subspace $S$ of dimension $d$ much smaller than
the total dimension $D$ of the problem at hand (our data
set). Mathematically it can be formulated as a statistical problem or
a geometric problem. In our discussion of the theorem for the
classical PCA, we will stay with a statistical approach.
Historically, the PCA was first formulated in a statistical setting in order to estimate the principal component of a multivariate random variable.
We have a data set defined by a design/feature matrix $\boldsymbol{X}$ (see below for its definition)
* Each data point is determined by $p$ extrinsic (measurement) variables
We may want to ask the following question: Are there fewer intrinsic variables (say $d << p$) that still approximately describe the data?
If so, these intrinsic variables may tell us something important and finding these intrinsic variables is what dimension reduction methods do.
A good read is for example Vidal, Ma and Sastry.
Introducing the Covariance and Correlation functions
Before we discuss the PCA theorem, we need to remind ourselves about
the definition of the covariance and the correlation function.
Suppose we have defined two vectors
$\hat{x}$ and $\hat{y}$ with $n$ elements each. The covariance matrix $\boldsymbol{C}$ is defined as
$$
\boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{cov}[\boldsymbol{x},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \
\mathrm{cov}[\boldsymbol{y},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{y},\boldsymbol{y}] \
\end{bmatrix},
$$
where for example
$$
\mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
$$
With this definition and recalling that the variance is defined as
$$
\mathrm{var}[\boldsymbol{x}]=\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})^2,
$$
we can rewrite the covariance matrix as
$$
\boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{var}[\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \
\mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] & \mathrm{var}[\boldsymbol{y}] \
\end{bmatrix}.
$$
The covariance can take values of arbitrary magnitude and may thus
lead to problems with loss of numerical precision for particularly
large values. It is therefore common to scale the covariance matrix by
introducing instead the correlation matrix, defined via the so-called
correlation function
$$
\mathrm{corr}[\boldsymbol{x},\boldsymbol{y}]=\frac{\mathrm{cov}[\boldsymbol{x},\boldsymbol{y}]}{\sqrt{\mathrm{var}[\boldsymbol{x}] \mathrm{var}[\boldsymbol{y}]}}.
$$
The correlation function is then given by values $\mathrm{corr}[\boldsymbol{x},\boldsymbol{y}]
\in [-1,1]$. This avoids possible problems with overly large values. We
can then define the correlation matrix for the two vectors $\boldsymbol{x}$
and $\boldsymbol{y}$ as
$$
\boldsymbol{K}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} 1 & \mathrm{corr}[\boldsymbol{x},\boldsymbol{y}] \
\mathrm{corr}[\boldsymbol{y},\boldsymbol{x}] & 1 \
\end{bmatrix},
$$
In the above example this is the function we constructed using pandas.
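A quick numerical check of these definitions (NumPy's own routines use a $1/(n-1)$ normalization, but the factor cancels in the correlation ratio, so the results agree):

```python
import numpy as np

rng = np.random.default_rng(2021)
n = 1000
x = rng.normal(size=n)
y = 4 + 3 * x + rng.normal(size=n)

# covariance and correlation with the 1/n definitions above
cov_xy = np.mean((x - np.mean(x)) * (y - np.mean(y)))
corr_xy = cov_xy / np.sqrt(np.var(x) * np.var(y))
print(-1.0 <= corr_xy <= 1.0)  # True by construction
```

The value matches `np.corrcoef(x, y)[0, 1]` to machine precision.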
In our derivation of the various regression algorithms like Ordinary Least Squares or Ridge regression
we defined the design/feature matrix $\boldsymbol{X}$ as
$$
\boldsymbol{X}=\begin{bmatrix}
x_{0,0} & x_{0,1} & x_{0,2}& \dots & \dots x_{0,p-1}\
x_{1,0} & x_{1,1} & x_{1,2}& \dots & \dots x_{1,p-1}\
x_{2,0} & x_{2,1} & x_{2,2}& \dots & \dots x_{2,p-1}\
\dots & \dots & \dots & \dots \dots & \dots \
x_{n-2,0} & x_{n-2,1} & x_{n-2,2}& \dots & \dots x_{n-2,p-1}\
x_{n-1,0} & x_{n-1,1} & x_{n-1,2}& \dots & \dots x_{n-1,p-1}\
\end{bmatrix},
$$
with $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors/features $p$ referring to the column numbers and the
entries $n$ being the row elements.
We can rewrite the design/feature matrix in terms of its column vectors as
$$
\boldsymbol{X}=\begin{bmatrix} \boldsymbol{x}_0 & \boldsymbol{x}_1 & \boldsymbol{x}_2 & \dots & \dots & \boldsymbol{x}_{p-1}\end{bmatrix},
$$
with a given vector
$$
\boldsymbol{x}_i^T = \begin{bmatrix}x_{0,i} & x_{1,i} & x_{2,i}& \dots & \dots x_{n-1,i}\end{bmatrix}.
$$
With these definitions, we can now rewrite our $2\times 2$
correlation/covariance matrix in terms of a more general design/feature
matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. This leads to a $p\times p$
covariance matrix for the vectors $\boldsymbol{x}_i$ with $i=0,1,\dots,p-1$
$$
\boldsymbol{C}[\boldsymbol{x}] = \begin{bmatrix}
\mathrm{var}[\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_{p-1}]\
\mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_0] & \mathrm{var}[\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_{p-1}]\
\mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}_1] & \mathrm{var}[\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}_{p-1}]\
\dots & \dots & \dots & \dots & \dots & \dots \
\dots & \dots & \dots & \dots & \dots & \dots \
\mathrm{cov}[\boldsymbol{x}_{p-1},\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_{p-1},\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_{p-1},\boldsymbol{x}_{2}] & \dots & \dots & \mathrm{var}[\boldsymbol{x}_{p-1}]\
\end{bmatrix},
$$
and the correlation matrix
$$
\boldsymbol{K}[\boldsymbol{x}] = \begin{bmatrix}
1 & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}_1] & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}_2] & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}_{p-1}]\
\mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}_0] & 1 & \mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}_2] & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}_{p-1}]\
\mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}_0] & \mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}_1] & 1 & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}_{p-1}]\
\dots & \dots & \dots & \dots & \dots & \dots \
\dots & \dots & \dots & \dots & \dots & \dots \
\mathrm{corr}[\boldsymbol{x}_{p-1},\boldsymbol{x}_0] & \mathrm{corr}[\boldsymbol{x}_{p-1},\boldsymbol{x}_1] & \mathrm{corr}[\boldsymbol{x}_{p-1},\boldsymbol{x}_{2}] & \dots & \dots & 1\
\end{bmatrix}.
$$
The Numpy function np.cov calculates the covariance elements using
the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have
the exact mean values. The following simple function uses the
np.vstack function which takes each vector of dimension $1\times n$
and produces a $2\times n$ matrix $\boldsymbol{W}$
$$
\boldsymbol{W} = \begin{bmatrix} x_0 & x_1 & x_2 & \dots & x_{n-2} & x_{n-1} \
y_0 & y_1 & y_2 & \dots & y_{n-2} & y_{n-1} \
\end{bmatrix},
$$
which in turn is converted into the $2\times 2$ covariance matrix
$\boldsymbol{C}$ via the Numpy function np.cov(). We note that we can also calculate
the mean value of each set of samples $\boldsymbol{x}$ etc using the Numpy
function np.mean(x). We can also extract the eigenvalues of the
covariance matrix through the np.linalg.eig() function.
End of explanation
"""
import numpy as np
n = 100
# define two vectors
x = np.random.random(size=n)
y = 4+3*x+np.random.normal(size=n)
#scaling the x and y vectors
x = x - np.mean(x)
y = y - np.mean(y)
variance_x = np.sum(x@x)/n
variance_y = np.sum(y@y)/n
print(variance_x)
print(variance_y)
cov_xy = np.sum(x@y)/n
cov_xx = np.sum(x@x)/n
cov_yy = np.sum(y@y)/n
C = np.zeros((2,2))
C[0,0]= cov_xx/variance_x
C[1,1]= cov_yy/variance_y
C[0,1]= cov_xy/np.sqrt(variance_y*variance_x)
C[1,0]= C[0,1]
print(C)
"""
Explanation: Correlation Matrix
The previous example can be converted into the correlation matrix by
simply scaling the matrix elements with the variances. We should also
subtract the mean values for each column. This leads to the following
code, which sets up the correlation matrix for the previous example in
a more brute force way. Here we subtract the mean values for each column of the design matrix, calculate the relevant variances and then finally set up the $2\times 2$ correlation matrix (since we have only two vectors).
End of explanation
"""
import numpy as np
import pandas as pd
n = 10
x = np.random.normal(size=n)
x = x - np.mean(x)
y = 4+3*x+np.random.normal(size=n)
y = y - np.mean(y)
X = (np.vstack((x, y))).T
print(X)
Xpd = pd.DataFrame(X)
print(Xpd)
correlation_matrix = Xpd.corr()
print(correlation_matrix)
"""
Explanation: We see that the matrix elements along the diagonal are one as they
should be and that the matrix is symmetric. Furthermore, diagonalizing
this matrix we easily see that it is a positive definite matrix.
The above procedure with numpy can be made more compact if we use pandas.
We show here how we can set up the correlation matrix using pandas, as done in this simple code
End of explanation
"""
# Common imports
import numpy as np
import pandas as pd
def FrankeFunction(x,y):
term1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2))
term2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1))
term3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2))
term4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2)
return term1 + term2 + term3 + term4
def create_X(x, y, n ):
if len(x.shape) > 1:
x = np.ravel(x)
y = np.ravel(y)
N = len(x)
l = int((n+1)*(n+2)/2) # Number of elements in beta
X = np.ones((N,l))
for i in range(1,n+1):
q = int((i)*(i+1)/2)
for k in range(i+1):
X[:,q+k] = (x**(i-k))*(y**k)
return X
# Making meshgrid of datapoints and compute Franke's function
n = 4
N = 100
x = np.sort(np.random.uniform(0, 1, N))
y = np.sort(np.random.uniform(0, 1, N))
z = FrankeFunction(x, y)
X = create_X(x, y, n=n)
Xpd = pd.DataFrame(X)
# subtract the mean values and set up the covariance matrix
Xpd = Xpd - Xpd.mean()
covariance_matrix = Xpd.cov()
print(covariance_matrix)
"""
Explanation: We expand this model to the Franke function discussed above.
End of explanation
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
n = 10000
mean = (-1, 2)
cov = [[4, 2], [2, 2]]
X = np.random.multivariate_normal(mean, cov, n)
"""
Explanation: We note here that the covariance is zero for the first row and
column since all elements of the first column in the design matrix are equal to one
(we are fitting the function in terms of a polynomial of degree $n$).
Since we would not include the intercept anyway, we can simply
drop these elements and construct a correlation
matrix without them.
We can rewrite the covariance matrix in a more compact form in terms of the design/feature matrix $\boldsymbol{X}$ as
$$
\boldsymbol{C}[\boldsymbol{x}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}= \mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}].
$$
To see this let us simply look at a design matrix $\boldsymbol{X}\in {\mathbb{R}}^{2\times 2}$
$$
\boldsymbol{X}=\begin{bmatrix}
x_{00} & x_{01}\
x_{10} & x_{11}\
\end{bmatrix}=\begin{bmatrix}
\boldsymbol{x}_{0} & \boldsymbol{x}_{1}\
\end{bmatrix}.
$$
If we then compute the expectation value
$$
\mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}=\frac{1}{2}\begin{bmatrix}
x_{00}^2+x_{10}^2 & x_{00}x_{01}+x_{10}x_{11}\
x_{01}x_{00}+x_{11}x_{10} & x_{01}^2+x_{11}^2\
\end{bmatrix},
$$
which is just
$$
\boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]=\begin{bmatrix} \mathrm{var}[\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_1] \
\mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_0] & \mathrm{var}[\boldsymbol{x}_1] \
\end{bmatrix},
$$
where we wrote $\boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]$ to indicate that this is the covariance of the vectors $\boldsymbol{x}$ of the design/feature matrix $\boldsymbol{X}$.
It is easy to generalize this to a matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$.
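We can verify this compact form numerically: for a mean-centered design matrix, the product $\boldsymbol{X}^T\boldsymbol{X}$ (scaled by the sample count) reproduces the covariance matrix. NumPy's `np.cov` uses $1/(n-1)$, so we match that factor here:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 500, 3
X = rng.normal(size=(n, p))
X_centered = X - X.mean(axis=0)

# compact matrix form of the covariance for centered data
C_compact = X_centered.T @ X_centered / (n - 1)
C_numpy = np.cov(X_centered.T)
print(np.allclose(C_compact, C_numpy))  # True
```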
Towards the PCA theorem
We have that the covariance matrix (the correlation matrix involves a simple rescaling) is given as
$$
\boldsymbol{C}[\boldsymbol{x}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}= \mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}].
$$
Let us now assume that we can perform a series of orthogonal transformations where we employ some orthogonal matrices $\boldsymbol{S}$.
These matrices are defined as $\boldsymbol{S}\in {\mathbb{R}}^{p\times p}$ and obey the orthogonality requirements $\boldsymbol{S}\boldsymbol{S}^T=\boldsymbol{S}^T\boldsymbol{S}=\boldsymbol{I}$. The matrix can be written out in terms of the column vectors $\boldsymbol{s}_i$ as $\boldsymbol{S}=[\boldsymbol{s}_0,\boldsymbol{s}_1,\dots,\boldsymbol{s}_{p-1}]$ with $\boldsymbol{s}_i \in {\mathbb{R}}^{p}$.
Assume also that there is a transformation $\boldsymbol{S}^T\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{S}=\boldsymbol{C}[\boldsymbol{y}]$ such that the new matrix $\boldsymbol{C}[\boldsymbol{y}]$ is diagonal with elements $[\lambda_0,\lambda_1,\lambda_2,\dots,\lambda_{p-1}]$.
That is we have
$$
\boldsymbol{C}[\boldsymbol{y}] = \mathbb{E}[\boldsymbol{S}^T\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{S}]=\boldsymbol{S}^T\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{S},
$$
since the matrix $\boldsymbol{S}$ is not a data dependent matrix. Multiplying with $\boldsymbol{S}$ from the left we have
$$
\boldsymbol{S}\boldsymbol{C}[\boldsymbol{y}] = \boldsymbol{C}[\boldsymbol{x}]\boldsymbol{S},
$$
and since $\boldsymbol{C}[\boldsymbol{y}]$ is diagonal we have for a given eigenvalue $i$ of the covariance matrix that
$$
\lambda_i\boldsymbol{s}_i = \boldsymbol{C}[\boldsymbol{x}]\boldsymbol{s}_i.
$$
In the derivation of the PCA theorem we will assume that the eigenvalues are ordered in descending order, that is
$\lambda_0 > \lambda_1 > \dots > \lambda_{p-1}$.
The eigenvalues tell us then how much we need to stretch the
corresponding eigenvectors. Dimensions with large eigenvalues have
thus large variations (large variance) and define therefore useful
dimensions. The data points are more spread out in the direction of
these eigenvectors. Smaller eigenvalues mean on the other hand that
the corresponding eigenvectors are shrunk accordingly and the data
points are tightly bunched together and there is not much variation in
these specific directions. Hopefully then we could leave out the
dimensions where the eigenvalues are very small. If $p$ is very large,
we could then aim at reducing $p$ to $l << p$ and handle only $l$
features/predictors.
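The point about small eigenvalues can be made concrete with synthetic data (the scales below are our own choices): when one direction dominates, almost all the variance sits in the corresponding eigenvector, so discarding the rest loses very little.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# one high-variance direction and one low-variance direction
X = np.column_stack((10.0 * rng.normal(size=n), 0.1 * rng.normal(size=n)))
X = X - X.mean(axis=0)

eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
order = np.argsort(eigvals)[::-1]   # sort eigenvalues in descending order
eigvals = eigvals[order]
explained = eigvals[0] / eigvals.sum()
print(explained > 0.99)  # True: the first component carries almost all variance
```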
The Algorithm before theorem
Here's how we would proceed in setting up the algorithm for the PCA, see also discussion below here.
* Set up the datapoints for the design/feature matrix $\boldsymbol{X}$ with $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors/features $p$ referring to the column numbers and the entries $n$ being the row elements.
$$
\boldsymbol{X}=\begin{bmatrix}
x_{0,0} & x_{0,1} & x_{0,2}& \dots & \dots x_{0,p-1}\
x_{1,0} & x_{1,1} & x_{1,2}& \dots & \dots x_{1,p-1}\
x_{2,0} & x_{2,1} & x_{2,2}& \dots & \dots x_{2,p-1}\
\dots & \dots & \dots & \dots \dots & \dots \
x_{n-2,0} & x_{n-2,1} & x_{n-2,2}& \dots & \dots x_{n-2,p-1}\
x_{n-1,0} & x_{n-1,1} & x_{n-1,2}& \dots & \dots x_{n-1,p-1}\
\end{bmatrix},
$$
Center the data by subtracting the mean value for each column. This leads to a new matrix $\boldsymbol{X}\rightarrow \overline{\boldsymbol{X}}$.
Compute then the covariance/correlation matrix $\mathbb{E}[\overline{\boldsymbol{X}}^T\overline{\boldsymbol{X}}]$.
Find the eigenpairs of $\boldsymbol{C}$ with eigenvalues $[\lambda_0,\lambda_1,\dots,\lambda_{p-1}]$ and eigenvectors $[\boldsymbol{s}_0,\boldsymbol{s}_1,\dots,\boldsymbol{s}_{p-1}]$.
Order the eigenvalues (and the eigenvectors accordingly) in decreasing order.
Keep only those $l$ eigenvalues larger than a selected threshold value, discarding thus $p-l$ features since we expect small variations in the data here.
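The steps above can be collected into one short function (a sketch, not the course's reference implementation):

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the covariance matrix.

    Follows the steps listed above: center, form the covariance,
    find and sort the eigenpairs, and project onto the top components.
    """
    X_centered = X - X.mean(axis=0)                    # center each column
    C = X_centered.T @ X_centered / (X.shape[0] - 1)   # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)               # eigenpairs (ascending)
    order = np.argsort(eigvals)[::-1]                  # reorder to descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    Z = X_centered @ eigvecs[:, :n_components]         # project onto top l
    return Z, eigvals[:n_components]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
Z, top = pca(X, 2)
print(Z.shape)  # (200, 2)
```

A nice sanity check is that the covariance of the projected data is diagonal, with the retained eigenvalues on the diagonal.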
Writing our own PCA code
We will use a simple example first with two-dimensional data
drawn from a multivariate normal distribution with the following mean and covariance matrix (we have fixed these quantities but will play around with them below):
$$
\mu = (-1,2) \qquad \Sigma = \begin{bmatrix} 4 & 2 \
2 & 2
\end{bmatrix}
$$
Note that the mean refers to each column of data.
We will generate $n = 10000$ points $X = { x_1, \ldots, x_n }$ from
this distribution, and store them in the $10000 \times 2$ matrix $\boldsymbol{X}$. This is our design matrix, where we have forced the covariance and mean values to take specific values.
The following Python code aids in setting up the data and writing out the design matrix.
Note that the function np.random.multivariate_normal generates the samples, and that the covariance returned by np.cov is defined by dividing by $n-1$ instead of $n$, as discussed above.
End of explanation
"""
df = pd.DataFrame(X)
# Pandas does the centering for us
df = df -df.mean()
# we center it ourselves
X_centered = X - X.mean(axis=0)
"""
Explanation: Now we are going to implement the PCA algorithm. We will break it down into various substeps.
The first step of PCA is to compute the sample mean of the data and use it to center the data. Recall that the sample mean is
$$
\mu_n = \frac{1}{n} \sum_{i=1}^n x_i
$$
and the mean-centered data $\bar{X} = { \bar{x}_1, \ldots, \bar{x}_n }$ takes the form
$$
\bar{x}_i = x_i - \mu_n.
$$
When you are done with these steps, print out $\mu_n$ to verify it is
close to $\mu$ and plot your mean centered data to verify it is
centered at the origin!
The following code elements perform these operations using pandas or using our own functionality for doing so. The latter, using numpy is rather simple through the mean() function.
End of explanation
"""
print(df.cov())
print(np.cov(X_centered.T))
"""
Explanation: Alternatively, we could use the functions we discussed
earlier for scaling the data set. That is, we could have used the
StandardScaler function in Scikit-Learn, a function which ensures
that for each feature/predictor we study the mean value is zero and
the variance is one (every column in the design/feature matrix). You
would then not get the same results, since we divide by the
variance. The diagonal covariance matrix elements will then be one,
while the non-diagonal ones need to be divided by $2\sqrt{2}$ for our
specific case.
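The effect of StandardScaler-style scaling can be reproduced directly in NumPy: dividing each centered column by its standard deviation turns the covariance matrix into the correlation matrix, with ones on the diagonal (a sketch using the same mean and covariance as above):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000
X = rng.multivariate_normal(mean=[-1, 2], cov=[[4, 2], [2, 2]], size=n)

# center and scale each column to unit variance (ddof=1 to match np.cov)
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
C = np.cov(X_scaled.T)
print(np.round(np.diag(C), 3))           # ones on the diagonal
print(np.allclose(C, np.corrcoef(X.T)))  # True
```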
Now we are going to use the mean centered data to compute the sample covariance of the data by using the following equation
$$
\Sigma_n = \frac{1}{n-1} \sum_{i=1}^n \bar{x}_i \bar{x}_i^T = \frac{1}{n-1} \sum_{i=1}^n (x_i - \mu_n) (x_i - \mu_n)^T
$$
where the data points $x_i \in \mathbb{R}^p$ (here in this example $p = 2$) are column vectors and $x^T$ is the transpose of $x$.
We can write our own code or simply use either the functionaly of numpy or that of pandas, as follows
End of explanation
"""
# extract the relevant columns from the centered design matrix of dim n x 2
x = X_centered[:,0]
y = X_centered[:,1]
Cov = np.zeros((2,2))
Cov[0,1] = np.sum(x.T@y)/(n-1.0)
Cov[0,0] = np.sum(x.T@x)/(n-1.0)
Cov[1,1] = np.sum(y.T@y)/(n-1.0)
Cov[1,0]= Cov[0,1]
print("Centered covariance using own code")
print(Cov)
plt.plot(x, y, 'x')
plt.axis('equal')
plt.show()
"""
Explanation: Note that the way we define the covariance matrix here has a factor $n-1$ instead of $n$. This is included in the cov() function by numpy and pandas.
Our own code here is not very elegant and asks for obvious improvements. It is tailored to this specific $2\times 2$ covariance matrix.
End of explanation
"""
# diagonalize and obtain eigenvalues, not necessarily sorted
EigValues, EigVectors = np.linalg.eig(Cov)
# sort eigenvectors and eigenvalues
#permute = EigValues.argsort()
#EigValues = EigValues[permute]
#EigVectors = EigVectors[:,permute]
print("Eigenvalues of Covariance matrix")
for i in range(2):
print(EigValues[i])
FirstEigvector = EigVectors[:,0]
SecondEigvector = EigVectors[:,1]
print("First eigenvector")
print(FirstEigvector)
print("Second eigenvector")
print(SecondEigvector)
#thereafter we do a PCA with Scikit-learn
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X2Dsl = pca.fit_transform(X)
print("Eigenvector of largest eigenvalue")
print(pca.components_.T[:, 0])
"""
Explanation: Depending on the number of points $n$, we will get results that are close to the covariance values defined above.
The plot shows how the data are clustered around a line with slope close to one. Is this expected? Try to change the covariance and the mean values. For example, try to make the variance of the first element much larger than that of the second diagonal element. Try also to shrink the covariance (the non-diagonal elements) and see how the data points are distributed.
Diagonalize the sample covariance matrix to obtain the principal components
Now we are ready to solve for the principal components! To do so we
diagonalize the sample covariance matrix $\Sigma$. We can use the
function np.linalg.eig to do so. It will return the eigenvalues and
eigenvectors of $\Sigma$. Once we have these we can perform the
following tasks:
We compute the percentage of the total variance captured by the first principal component
We plot the mean centered data and lines along the first and second principal components
Then we project the mean centered data onto the first and second principal components, and plot the projected data.
Finally, we approximate the data as
$$
x_i \approx \tilde{x}_i = \mu_n + \langle x_i - \mu_n, v_0 \rangle v_0
$$
where $v_0$ is the first principal component.
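The approximation above is a rank-one reconstruction, and we can check numerically how well it does (synthetic data with the same covariance as in this section; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
X = rng.multivariate_normal(mean=[-1, 2], cov=[[4, 2], [2, 2]], size=n)
mu = X.mean(axis=0)
Xc = X - mu

eigvals, eigvecs = np.linalg.eigh(np.cov(Xc.T))
v0 = eigvecs[:, np.argmax(eigvals)]     # first principal component

# rank-one approximation: mu + <x_i - mu, v0> v0 for every sample
X_tilde = mu + np.outer(Xc @ v0, v0)
residual = np.mean(np.sum((X - X_tilde) ** 2, axis=1))
captured = eigvals.max() / eigvals.sum()
print(captured)  # fraction of total variance captured by v0 (about 0.87 here)
```

The mean squared residual is close to the discarded eigenvalue, which is exactly what the PCA theorem predicts.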
Collecting all these steps we can write our own PCA function and
compare this with the functionality included in Scikit-Learn.
The code here outlines some of the elements we could include in the
analysis. Feel free to extend upon this in order to address the above
questions.
End of explanation
"""
import numpy as np
import pandas as pd
from IPython.display import display
np.random.seed(100)
# setting up a 10 x 5 vanilla matrix
rows = 10
cols = 5
X = np.random.randn(rows,cols)
df = pd.DataFrame(X)
# Pandas does the centering for us
df = df -df.mean()
display(df)
# we center it ourselves
X_centered = X - X.mean(axis=0)
# Then check the difference between pandas and our own set up
print(X_centered-df)
#Now we do an SVD
U, s, V = np.linalg.svd(X_centered)
c1 = V.T[:, 0]
c2 = V.T[:, 1]
W2 = V.T[:, :2]
X2D = X_centered.dot(W2)
print(X2D)
"""
Explanation: This code does not contain all the above elements, but it shows how we can use Scikit-Learn to extract the eigenvector which corresponds to the largest eigenvalue. Try to address the questions we pose before the above code. Try also to change the values of the covariance matrix by making one of the diagonal elements much larger than the other. What do you observe then?
Classical PCA Theorem
We assume now that we have a design matrix $\boldsymbol{X}$ which has been
centered as discussed above. For the sake of simplicity we skip the
overline symbol. The matrix is defined in terms of the various column
vectors $[\boldsymbol{x}_0,\boldsymbol{x}_1,\dots, \boldsymbol{x}_{p-1}]$, each with dimension
$\boldsymbol{x}\in {\mathbb{R}}^{n}$.
The PCA theorem states that minimizing the above reconstruction error
corresponds to setting $\boldsymbol{W}=\boldsymbol{S}$, the orthogonal matrix which
diagonalizes the empirical covariance (correlation) matrix. The optimal
low-dimensional encoding of the data is then given by a set of vectors
$\boldsymbol{z}_i$ with at most $l$ vectors, with $l \ll p$, defined by the
orthogonal projection of the data onto the columns spanned by the
eigenvectors of the covariance (correlation) matrix.
To show the PCA theorem let us start with the assumption that there is one vector $\boldsymbol{w}_0$ which corresponds to a solution that minimizes the reconstruction error $J$. This is an orthogonal unit vector. It means that we can approximate the reconstruction error in terms of $\boldsymbol{w}_0$ and $\boldsymbol{z}_0$.
We are almost there, we have obtained a relation between minimizing
the reconstruction error and the variance and the covariance
matrix. Minimizing the error is equivalent to maximizing the variance
of the projected data.
We could trivially maximize the variance of the projection (and
thereby minimize the error in the reconstruction function) by letting
the norm-2 of $\boldsymbol{w}_0$ go to infinity. However, since we want the
matrix $\boldsymbol{W}$ to be an orthogonal matrix, this norm is constrained by
$\vert\vert \boldsymbol{w}_0 \vert\vert_2^2=1$. Imposing this condition via a
Lagrange multiplier we can then in turn maximize
$$
J(\boldsymbol{w}_0)= \boldsymbol{w}_0^T\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0+\lambda_0(1-\boldsymbol{w}_0^T\boldsymbol{w}_0).
$$
Taking the derivative with respect to $\boldsymbol{w}_0$ we obtain
$$
\frac{\partial J(\boldsymbol{w}_0)}{\partial \boldsymbol{w}_0}= 2\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0-2\lambda_0\boldsymbol{w}_0=0,
$$
meaning that
$$
\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0=\lambda_0\boldsymbol{w}_0.
$$
The direction that maximizes the variance (or minimizes the reconstruction error) is an eigenvector of the covariance matrix! If we left multiply with $\boldsymbol{w}_0^T$ we see that the variance of the projected data is
$$
\boldsymbol{w}_0^T\boldsymbol{C}[\boldsymbol{x}]\boldsymbol{w}_0=\lambda_0.
$$
If we want to maximize the variance (minimize the reconstruction error)
we simply pick the eigenvector of the covariance matrix with the
largest eigenvalue. This establishes the link between the minimization
of the reconstruction function $J$ in terms of an orthogonal matrix
and the maximization of the variance and thereby the covariance of our
observations encoded in the design/feature matrix $\boldsymbol{X}$.
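This link between the largest eigenvalue and the direction of maximal variance can be checked numerically; a small sketch with synthetic two-dimensional data (all variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated two-dimensional data, mean centered
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[3.0, 1.2], [1.2, 1.0]], size=2000)
X = X - X.mean(axis=0)

# empirical covariance matrix and its eigendecomposition
C = X.T @ X / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eig(C)

# the eigenvector of the largest eigenvalue is the first principal direction
w0 = eigvecs[:, np.argmax(eigvals)]

# the variance of the projected data equals the largest eigenvalue
z0 = X @ w0
print(np.var(z0, ddof=1), eigvals.max())  # the two numbers agree
```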
The proof
for the other eigenvectors $\boldsymbol{w}_1,\boldsymbol{w}_2,\dots$ can be
established by applying the above arguments and using the fact that
our basis of eigenvectors is orthogonal, see Murphy chapter
12.2. The
discussion in chapter 12.2 of Murphy's text has also a nice link with
the Singular Value Decomposition theorem. For categorical data, see
chapter 12.4 and discussion therein.
For more details, see for example Vidal, Ma and Sastry, chapter 2.
Geometric Interpretation and link with Singular Value Decomposition
For a detailed demonstration of the geometric interpretation, see Vidal, Ma and Sastry, section 2.1.2.
Principal Component Analysis (PCA) is by far the most popular dimensionality reduction algorithm.
First it identifies the hyperplane that lies closest to the data, and then it projects the data onto it.
The following Python code uses NumPy’s svd() function to obtain all the principal components of the
training set, then extracts the first two principal components. First we center the data using either pandas or our own code
End of explanation
"""
W2 = V.T[:, :2]
X2D = X_centered.dot(W2)
"""
Explanation: PCA assumes that the dataset is centered around the origin. Scikit-Learn’s PCA classes take care of centering
the data for you. However, if you implement PCA yourself (as in the preceding example), or if you use other libraries, don’t
forget to center the data first.
Once you have identified all the principal components, you can reduce the dimensionality of the dataset
down to $d$ dimensions by projecting it onto the hyperplane defined by the first $d$ principal components.
Selecting this hyperplane ensures that the projection will preserve as much variance as possible.
End of explanation
"""
#thereafter we do a PCA with Scikit-learn
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X2D = pca.fit_transform(X)
print(X2D)
"""
Explanation: PCA and scikit-learn
Scikit-Learn’s PCA class implements PCA using SVD decomposition just like we did before. The
following code applies PCA to reduce the dimensionality of the dataset down to two dimensions (note
that it automatically takes care of centering the data):
End of explanation
"""
pca.components_.T[:, 0]
"""
Explanation: After fitting the PCA transformer to the dataset, you can access the principal components via the
`components_` attribute (note that it contains the PCs as horizontal vectors, so, for example, the first
principal component is given by `pca.components_.T[:, 0]`).
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print("Train set accuracy from Logistic Regression: {:.2f}".format(logreg.score(X_train,y_train)))
# We scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Then perform again a log reg fit
logreg.fit(X_train_scaled, y_train)
print("Train set accuracy scaled data: {:.2f}".format(logreg.score(X_train_scaled,y_train)))
#thereafter we do a PCA with Scikit-learn
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X2D_train = pca.fit_transform(X_train_scaled)
# and finally compute the log reg fit and the score on the training data
logreg.fit(X2D_train,y_train)
print("Train set accuracy scaled and PCA data: {:.2f}".format(logreg.score(X2D_train,y_train)))
"""
Explanation: Another very useful piece of information is the explained variance ratio of each principal component,
available via the `explained_variance_ratio_` attribute. It indicates the proportion of the dataset’s
variance that lies along the axis of each principal component.
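A brief illustration on synthetic data (the array sizes are arbitrary):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(42)
X = rng.randn(200, 5)

pca = PCA(n_components=2).fit(X)
# proportion of the total variance captured along each principal axis
print(pca.explained_variance_ratio_)
# over all five components the ratios sum to one
print(PCA().fit(X).explained_variance_ratio_.sum())
```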
Back to the Cancer Data
We can now repeat the above but applied to real data, in this case our breast cancer data.
Here we compute performance scores on the training data using logistic regression.
End of explanation
"""
pca = PCA()
pca.fit(X)
cumsum = np.cumsum(pca.explained_variance_ratio_)
d = np.argmax(cumsum >= 0.95) + 1
"""
Explanation: We see that our training data after the PCA decomposition has a performance similar to the non-scaled data.
Instead of arbitrarily choosing the number of dimensions to reduce down to, it is generally preferable to
choose the number of dimensions that add up to a sufficiently large portion of the variance (e.g., 95%).
Unless, of course, you are reducing dimensionality for data visualization — in that case you will
generally want to reduce the dimensionality down to 2 or 3.
The following code computes PCA without reducing dimensionality, then computes the minimum number
of dimensions required to preserve 95% of the training set’s variance:
End of explanation
"""
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
"""
Explanation: You could then set `n_components=d` and run PCA again. However, there is a much better option: instead
of specifying the number of principal components you want to preserve, you can set `n_components` to be
a float between 0.0 and 1.0, indicating the ratio of variance you wish to preserve:
End of explanation
"""
from sklearn.decomposition import KernelPCA
rbf_pca = KernelPCA(n_components = 2, kernel="rbf", gamma=0.04)
X_reduced = rbf_pca.fit_transform(X)
"""
Explanation: Incremental PCA
One problem with the preceding implementation of PCA is that it requires the whole training set to fit in
memory in order for the SVD algorithm to run. Fortunately, Incremental PCA (IPCA) algorithms have
been developed: you can split the training set into mini-batches and feed an IPCA algorithm one minibatch
at a time. This is useful for large training sets, and also to apply PCA online (i.e., on the fly, as new
instances arrive).
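A minimal sketch of this with Scikit-Learn's IncrementalPCA class (the batch count and array sizes are illustrative):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.randn(1000, 10)

inc_pca = IncrementalPCA(n_components=2)
for X_batch in np.array_split(X, 10):
    inc_pca.partial_fit(X_batch)  # feed one mini-batch at a time

X_reduced = inc_pca.transform(X)
print(X_reduced.shape)  # (1000, 2)
```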
Randomized PCA
Scikit-Learn offers yet another option to perform PCA, called Randomized PCA. This is a stochastic
algorithm that quickly finds an approximation of the first d principal components. Its computational
complexity is $O(m \times d^2)+O(d^3)$, instead of $O(m \times n^2) + O(n^3)$, so it is dramatically faster than the
previous algorithms when $d$ is much smaller than $n$.
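With Scikit-Learn this solver can be requested through the svd_solver argument of the PCA class; a short sketch:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(500, 50)

# stochastic approximation of the first two principal components
rnd_pca = PCA(n_components=2, svd_solver="randomized", random_state=0)
X_reduced = rnd_pca.fit_transform(X)
print(X_reduced.shape)  # (500, 2)
```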
Kernel PCA
The kernel trick is a mathematical technique that implicitly maps instances into a
very high-dimensional space (called the feature space), enabling nonlinear classification and regression
with Support Vector Machines. Recall that a linear decision boundary in the high-dimensional feature
space corresponds to a complex nonlinear decision boundary in the original space.
It turns out that the same trick can be applied to PCA, making it possible to perform complex nonlinear
projections for dimensionality reduction. This is called Kernel PCA (kPCA). It is often good at
preserving clusters of instances after projection, or sometimes even unrolling datasets that lie close to a
twisted manifold.
For example, the following code uses Scikit-Learn’s KernelPCA class to perform kPCA with an
End of explanation
"""
tensorflow/probability | tensorflow_probability/examples/statistical_rethinking/notebooks/02_small_worlds_and_large_worlds.ipynb | apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
#@title Install { display-mode: "form" }
TF_Installation = 'System' #@param ['TF Nightly', 'TF Stable', 'System']
if TF_Installation == 'TF Nightly':
!pip install -q --upgrade tf-nightly
print('Installation of `tf-nightly` complete.')
elif TF_Installation == 'TF Stable':
!pip install -q --upgrade tensorflow
print('Installation of `tensorflow` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "System" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
import numpy as np
import arviz as az
import pandas as pd
import tensorflow as tf
import tensorflow_probability as tfp
import scipy.stats as stats
# visualization
import matplotlib.pyplot as plt
# aliases
tfd = tfp.distributions
az.style.use('seaborn-colorblind')
"""
Explanation: Chapter 2 - Small Worlds and Large Worlds
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/statistical_rethinking/notebooks/02_small_worlds_and_large_worlds"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/statistical_rethinking/notebooks/02_small_worlds_and_large_worlds.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/statistical_rethinking/notebooks/02_small_worlds_and_large_worlds.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/examples/statistical_rethinking/notebooks/02_small_worlds_and_large_worlds.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Imports and utility functions
End of explanation
"""
# define a list of constants
# divide by ways each value can occur
ways = tf.constant([0., 3, 8, 9, 0])
new_ways = ways / tf.reduce_sum(ways)
new_ways
"""
Explanation: 2.1.3. From counts to probability
Suppose we have a bag of four marbles, each of which can be either blue or white. There are five possible compositions of the bag, ranging from all white marbles to all blue marbles. Now, suppose we make three draws with replacement from the bag, and end up with blue, then white, then blue. If we were to count the ways that each bag composition could produce this combination of samples, we would find the vector in ways below. For example, there are zero ways that four white marbles could lead to this sample, three ways that a bag composition of one blue marble and three white marbles could lead to the sample, and so on.
We can convert these counts to probabilities by simply dividing the counts by the sum of all the counts.
Code 2.1
End of explanation
"""
# probability of 6 successes in 9 trials with 0.5 probability
tfd.Binomial(total_count=9, probs=0.5).prob(6)
"""
Explanation: 2.3.2.1. Observed variables
Consider a globe of the earth that we toss into the air nine times. We want to compute the probability that our right index finger will land on water six times out of nine. We can use the binomial distribution to compute this, using a probability of 0.5 of landing on water on each toss.
Code 2.2
End of explanation
"""
# define grid
n_points = 20 # change to an odd number for Code 2.5 graphs to
# match book examples in Figure 2.6
p_grid = tf.linspace(start=0., stop=1., num=n_points)
#define prior
prior = tf.ones([n_points])
# compute likelihood at each value in grid
likelihood = tfd.Binomial(total_count=9, probs=p_grid).prob(6)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / tf.reduce_sum(unstd_posterior)
posterior
"""
Explanation: 2.4.3.Grid Approximation
Create a grid approximation for the globe tossing model, for the scenario described in the section above. In the grid approximation technique, we approximate a continuous posterior distribution by using a discrete grid of parameter values.
Code 2.3
End of explanation
"""
_, ax = plt.subplots(figsize=(9, 4))
ax.plot(p_grid, posterior, "-o")
ax.set(
xlabel="probability of water",
ylabel="posterior probability",
title="20 points");
"""
Explanation: Code 2.4
End of explanation
"""
first_prior = tf.where(condition=p_grid < 0.5, x=0., y=1)
second_prior = tf.exp(-5 * abs(p_grid - 0.5))
_, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 4),
constrained_layout=True)
axes[0, 0].plot(p_grid, first_prior)
axes[0, 0].set_title('First prior')
axes[1, 0].plot(p_grid, first_prior * likelihood)
axes[1, 0].set_title('First posterior')
axes[0, 1].plot(p_grid, second_prior)
axes[0, 1].set_title('Second prior')
axes[1, 1].plot(p_grid, second_prior * likelihood)
axes[1, 1].set_title('Second posterior');
"""
Explanation: Code 2.3 and 2.4 using TFP distributions (OPTIONAL)
To replicate the other two priors depicted in Figure 2.6 in the text, try using the following priors, one at time in the Code 2.3 sample above.
Code 2.5
End of explanation
"""
W = 6
L = 3
dist = tfd.JointDistributionNamed({
"water": lambda probability: tfd.Binomial(total_count=W + L,
probs=probability),
"probability": tfd.Uniform(low=0., high=1.)
})
def neg_log_prob(x):
return tfp.math.value_and_gradient(
lambda p: -dist.log_prob(
water=W,
probability=tf.clip_by_value(p[-1], 0., 1.)),
x,
)
results = tfp.optimizer.bfgs_minimize(neg_log_prob, initial_position=[0.5])
assert results.converged
"""
Explanation: 2.4.4. Quadratic Approximation
Code 2.6
TFP doesn't natively provide a quap function, like in the book. But it has a lot of optimization tools. We'll use bfgs, which gives us a somewhat straightforward way to get both the measurement and the standard error.
End of explanation
"""
approximate_posterior = tfd.Normal(
results.position,
tf.sqrt(results.inverse_hessian_estimate),
)
print(
"mean:", approximate_posterior.mean(),
"\nstandard deviation: ", approximate_posterior.stddev(),
)
"""
Explanation: The results object itself has a lot of information about the optimization process. The estimate is called the position, and we get the standard error from the inverse_hessian_estimate.
End of explanation
"""
_, ax = plt.subplots(figsize=(9, 4))
x = tf.linspace(0., 1., num=101)
ax.plot(x, tfd.Beta(W + 1, L + 1).prob(x), label='Analytic posterior')
# values obained from quadratic approximation
ax.plot(x, tf.squeeze(approximate_posterior.prob(x)), "--",
label='Quadratic approximation')
ax.set(
xlabel='Probability of water',
ylabel='Posterior probability',
title='Comparing quadratic approximation to analytic posterior'
)
ax.legend();
"""
Explanation: Let's compare the mean and standard deviation from the quadratic approximation with an analytical approach based on the Beta distribution.
Code 2.7
End of explanation
"""
@tf.function
def do_sampling():
def get_model_log_prob(probs):
return tfd.Binomial(total_count=W + L, probs=probs).log_prob(W)
sampling_kernel = tfp.mcmc.RandomWalkMetropolis(get_model_log_prob)
return tfp.mcmc.sample_chain(
num_results=5000,
current_state=.5,
kernel=sampling_kernel,
num_burnin_steps=500,
trace_fn=None,
)
samples = do_sampling()
"""
Explanation: 2.4.5. Markov chain Monte Carlo
We can estimate the posterior using a Markov chain Monte Carlo (MCMC) technique, which will be explained further in Chapter 9. An outline of the algorithm follows. It is written primarily in numpy to illustrate the steps you would take.
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = tfd.Normal(loc=p[i - 1], scale=0.1).sample(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = tfd.Binomial(total_count=W+L, probs=p[i - 1]).prob(W)
q1 = tfd.Binomial(total_count=W+L, probs=p_new).prob(W)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
But to actually use Metropolis Hastings in TFP, you can use the mcmc library. The version of the algorithm that we want is called RandomWalkMetropolis.
Code 2.8
End of explanation
"""
_, ax = plt.subplots(figsize=(9, 4))
az.plot_kde(samples, label="Metropolis approximation", ax=ax)
x = tf.linspace(0., 1., num=100)
ax.plot(x, tfd.Beta(W + 1, L + 1).prob(x), "C1", label="True posterior")
ax.legend();
"""
Explanation: Code 2.9
With the samples from the posterior distribution in p, we can compare the results of the MCMC approximation to the analytical posterior.
End of explanation
"""
mxbu/logbook | blog-notebooks/arctic_crypto_database.ipynb | mit
import urllib
import json
import time
import pandas as pd
import datetime
from arctic import Arctic
import arctic
import subprocess
import platform
import os
import krakenex
if platform.system() == "Darwin":
os.chdir('/users/'+os.getlogin()+'/MEGA/App')
if platform.system() == "Darwin":
subprocess.Popen(['/usr/local/bin/mongod', '--dbpath', '/users/'+os.getlogin()+'/MEGA/App/cryptodb', '--logpath', '/users/'+os.getlogin()+'/MEGA/App/cryptodb/krakendb.log', '--fork'])
k = krakenex.API()
k.load_key('kraken.key')
# Connect to Local MONGODB
krakendb = Arctic('localhost')
# Create the library - defaults to VersionStore
krakendb.initialize_library('Kraken')
# Access the library
kraken = krakendb['Kraken']
"""
Explanation: There are many reasons why you should create your own stock database. You have your data always accessible and you're able to conduct backtests (e.g. with the bt package, maybe I will write about it someday, it's really cool!), to name two of them. The arctic package, which is developed by Man AHL, provides everything we need for our purposes. Arctic is a high performance datastore for numeric data. It supports Pandas, numpy arrays and pickled objects out-of-the-box, with pluggable support for other data types and optional versioning. It can query millions of rows per second per client, achieves ~10x compression on network bandwidth, ~10x compression on disk, and scales to hundreds of millions of rows per second per MongoDB instance. As a Pythonist, what more do you need? Visit the arctic page for more details or the installation instructions. Now, I want to create a database with trade data of some of the most popular cryptocurrencies traded at the kraken exchange. So, first we import the necessary packages, connect to the kraken API and initialize the mongodb.
End of explanation
"""
def updateTickData(pairs, db):
s = '\n Begin import of '+', '.join(pairs[0:len(pairs)-1])+' and '+pairs[-1]+'\n This could take some time! \n'
print(s)
tickdata_collection = {}
for pair in pairs:
print(pair+': ')
tickdata_collection[pair]=get_all_kraken_trades(pair, db = kraken)
    s = '\nAll pairs are up to date now!\n'
print(s)
return tickdata_collection
def getInfo(db):
infolist = kraken.list_versions()
s = '\n Last updates: \n'
print(s)
for list in infolist:
s = list['symbol']+' updated at ' + list['date'].strftime('%Y-%m-%d %H:%M:%S')+ ', Version: '+ str(list['version'])+'\n'
print(s)
snapshots = kraken.list_snapshots()
s = '\n Last snapshots: \n'
print(s)
for list in snapshots:
s = list+'\n'
print(s)
def get_all_kraken_trades(pair, since = None, db = None):
"""
Input:
pair = pair name
since = unix datestamp, default is None (imports every trade from the beginning, this could take a long time)
Output:
Pandas DataFrame
"""
history = pd.DataFrame( columns = ['price', 'volume', 'time', 'buy/sell', 'market/limit'])
if pair in db.list_symbols():
since = db.read(pair).metadata['last']
elif since == None:
since = 0
try:
while True:
data = urllib.request.urlopen("https://api.kraken.com/0/public/Trades?pair="+pair+"&since="+str(since)).read()
data = data.decode()
data = json.loads(data)
last = int(data['result']['last'])
data = data['result'][pair]
data = pd.DataFrame(data)
if data.empty:
break
dates = [datetime.datetime.fromtimestamp(ts) for ts in (data[2].values)]
data.index = pd.DatetimeIndex(dates)
data = data.iloc[:,0:5]
data.iloc[:,0:3] = data.iloc[:,0:3].astype(float)
data.columns = ['price', 'volume', 'time', 'buy/sell', 'market/limit']
            history = history.append(data)
since = last
print('imported data until: '+history.index[-1].strftime('%Y-%m-%d %H:%M:%S'))
time.sleep(3)
except Exception as e:
print(str(e))
db.append(pair, history, metadata={'last': last, 'source': 'Kraken'})
time.sleep(2)
alltrades = db.read(pair).data
return alltrades
def get_kraken_balance(db = None):
balance = k.query_private('Balance')['result']
df = pd.DataFrame(list(balance.items()))
df = df.transpose()
df.columns = df.iloc[0]
df = df.reindex(df.index.drop(0))
df = df.astype(float)
last = datetime.datetime.now()
df.index = pd.DatetimeIndex([last])
if db:
db.append('Balance', df, metadata={'last': last, 'source': 'Kraken'})
allbalance = db.read('Balance').data
return df, allbalance
else:
return df
def add_kraken_order(pair, buysell, ordertype, volume, **kwargs ):
'''
Input:
pair = asset pair
buysell = type of order (buy/sell)
ordertype = order type:
market
limit (price = limit price)
stop-loss (price = stop loss price)
take-profit (price = take profit price)
stop-loss-profit (price = stop loss price, price2 = take profit price)
stop-loss-profit-limit (price = stop loss price, price2 = take profit price)
stop-loss-limit (price = stop loss trigger price, price2 = triggered limit price)
take-profit-limit (price = take profit trigger price, price2 = triggered limit price)
trailing-stop (price = trailing stop offset)
trailing-stop-limit (price = trailing stop offset, price2 = triggered limit offset)
stop-loss-and-limit (price = stop loss price, price2 = limit price)
settle-position
price = price (optional. dependent upon ordertype)
price2 = secondary price (optional. dependent upon ordertype)
volume = order volume in lots
leverage = amount of leverage desired (optional. default = none)
oflags = comma delimited list of order flags (optional):
viqc = volume in quote currency (not available for leveraged orders)
fcib = prefer fee in base currency
fciq = prefer fee in quote currency
nompp = no market price protection
post = post only order (available when ordertype = limit)
starttm = scheduled start time (optional):
0 = now (default)
+<n> = schedule start time <n> seconds from now
<n> = unix timestamp of start time
expiretm = expiration time (optional):
0 = no expiration (default)
+<n> = expire <n> seconds from now
<n> = unix timestamp of expiration time
userref = user reference id. 32-bit signed number. (optional)
validate = validate inputs only. do not submit order (optional)
optional closing order to add to system when order gets filled:
close[ordertype] = order type
close[price] = price
close[price2] = secondary price
Output:
descr = order description info
order = order description
close = conditional close order description (if conditional close set)
txid = array of transaction ids for order (if order was added successfully)
'''
orderinfo = k.query_private('AddOrder', {'pair': pair,'type' : buysell,
'ordertype' : ordertype, 'volume' : volume,
**kwargs })
if bool(orderinfo['error']):
raise Exception(orderinfo['error'])
return orderinfo['result']['txid'], orderinfo['result']['descr']
"""
Explanation: The kraken API can be installed via pip: pip install krakenex. It contains public commands like getting OHLC data, the order book or recent trades. There is also the possibility to execute private commands like adding standard orders. For that you need to create a key file with your API key and your secret. Visit the Kraken API page for more details.
Next we define the functions to import the data, get information about the data which is already in the database, or get the current account balance. Note here that I found it was about 2 times faster to access public data via https://api.kraken.com/0/public than via the krakenex API, that's why I didn't use it here. Another remark: The Kraken API has a built-in call rate limit in place to protect against DoS attacks and order book manipulation. So, it's only possible to make a call every 2 or 3 seconds. Therefore, there is this time.sleep(3) to pause the function which imports the trade data for 3 seconds. Actually, this is a pity because it slows the function down by a huge amount of time.
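One generic way to respect such a call rate limit, instead of scattering time.sleep calls around, is a small throttling decorator. This is just a sketch of the idea and not part of the krakenex API; the function names and the 0.2 s interval are illustrative:

```python
import time

def throttle(min_interval):
    """Decorator enforcing a minimum delay between successive calls."""
    def decorator(func):
        last_call = [0.0]  # mutable closure state holding the last call time
        def wrapper(*args, **kwargs):
            wait = min_interval - (time.time() - last_call[0])
            if wait > 0:
                time.sleep(wait)
            last_call[0] = time.time()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@throttle(min_interval=0.2)
def fetch(x):
    # stands in for a real API request
    return x * 2

start = time.time()
results = [fetch(i) for i in range(3)]
elapsed = time.time() - start
print(results)  # the three calls take at least ~0.4 s in total
```

A real importer would wrap the actual HTTP call with a 2-3 second interval instead of sleeping unconditionally inside the loop.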
End of explanation
"""
get_kraken_balance()
"""
Explanation: Let's now run the functions to see what happens:
End of explanation
"""
getInfo(kraken)
"""
Explanation: The function get_kraken_balance() returns a pandas DataFrame with the current balance. If you give the function your arctic database as input, the balance will be saved in the database and it also returns a DataFrame with all your past balances (if you've saved them in your database).
End of explanation
"""
pairs = ['XETHZEUR','XXBTZEUR', 'XZECZEUR', 'XXRPZEUR']
trades = updateTickData(pairs, kraken)
"""
Explanation: getInfo(db) prints, among other things, the latest version of all items in the database with the exact date. It's also possible to snapshot your database; this function will also print the snapshots you've created so far.
Now the main function in this post is updateTickData(pairs, db). Basically, for each pair in the list "pairs", it looks into the database for the last imported values and fetches the trades from there onward; if no starting value is passed, it begins from the very first trade. Besides the previously mentioned reason why it takes such a long time to import all the trades, there is another one which is quite time consuming: in every loop iteration you can only import at most 1000 trades. I think to get all the trades into our database we have to go through this, but if you have a quicker solution, I would be a happy recipient.
End of explanation
"""
trades['XETHZEUR'].head()
"""
Explanation: As I already have most of the data in the database, it was quite quick to update it. updateTickData(pairs, db) returns a collection of pandas DataFrames containing all trades of the desired pairs. Let's have a look at the first few trades of the pair ETH/EUR. For every trade we have the price, the volume, a unix timestamp, whether it was a buy or a sell, and whether it was a market or a limit order.
End of explanation
"""
%matplotlib inline
import matplotlib.dates as mdates
import numpy as np
from mpl_finance import candlestick_ohlc
import matplotlib
import matplotlib.pyplot as plt
stop = datetime.datetime.now()
start = stop-datetime.timedelta(days=90)
mask = (trades['XETHZEUR'].index > start) & (trades['XETHZEUR'].index <= datetime.datetime.now())
data = trades['XETHZEUR'].loc[mask]
ohlc = data.price.resample('1D').ohlc().dropna().iloc[1:,:]
volume = data.volume.resample('1D').sum().dropna().iloc[1:]
dateStamp = np.array(ohlc.index).astype('datetime64[s]')
dateStamp = dateStamp.tolist()
df = pd.DataFrame({'Datetime':dateStamp})
df['MPLDate'] = df['Datetime'].apply(lambda date: mdates.date2num(date.to_pydatetime()))
df.index=dateStamp
ohlc.insert(0,'MPLDate', df.MPLDate)
fig=plt.figure(figsize=(17, 8))
# Main Graph
a = plt.subplot2grid((10,8), (0,0), rowspan = 8, colspan = 8)
# Volume
a2 = plt.subplot2grid((10,8), (8,0), sharex = a, rowspan = 2, colspan = 8)
matplotlib.style.use("ggplot")
darkColor = "#183A54"
lightColor = "#00A3E0"
candlestick_ohlc(a, ohlc[['MPLDate', 'open', 'high', 'low', 'close']].astype(float).values, width=0.768, colorup=lightColor, colordown=darkColor)
a.set_ylabel("Price")
a.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(3))
a.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d %H:%M'))
a2.set_ylabel("Volume")
a2.fill_between(ohlc.MPLDate,0, volume.astype(float), facecolor='#183A54')
plt.setp(a.get_xticklabels(), visible=False)
sma=ohlc['close'].rolling(window=10).mean()
label = str(10)+" SMA"
a.plot(ohlc['MPLDate'],sma, label=label)
a.legend(loc=0)
"""
Explanation: What possibilities do we have, now that we can quickly load all the trades into the Python workspace? For example, we can create nice plots, or automate our trading strategies via the Kraken API. The two following cells illustrate simple examples of each.
End of explanation
"""
pairs = ['XXBTZEUR']
import math
import threading
def automatedTradingStrategy():
fastMA = {}
slowMA = {}
mask = {}
tickdata_collection = {}
for pair in pairs:
actbalance, allbalance = get_kraken_balance(db = kraken)
tickdata_collection[pair]=get_all_kraken_trades(pair, db = kraken)
tickdata_collection[pair]=tickdata_collection[pair].price.resample('1H').ohlc().dropna().iloc[-241:-1,:]
fastMA[pair] = tickdata_collection[pair].close.ewm(span=120).mean()
slowMA[pair] = tickdata_collection[pair].close.ewm(span=240).mean()
mask[pair] = fastMA[pair][-1]>slowMA[pair]
if bool((actbalance.ZEUR.values >= 100) & (mask[pair][-1] == True)):
txid, descr = add_kraken_order(pair, 'buy', 'market', math.floor(float(actbalance.ZEUR))/tickdata_collection[pair].close[-1])
print('New Order: ' +str(txid)+ '\nInfo: ')
print(descr)
elif bool((actbalance[pair[0:4]].values > 0) & (mask[pair][-1] == False)):
            txid, descr = add_kraken_order(pair, 'sell', 'market', int(actbalance[pair[0:4]].values) )
print('New Order: ' +str(txid)+ '\nInfo: ')
print(descr)
else:
print('Nothing new this hour!')
global t
t = threading.Timer(3600, automatedTradingStrategy)
t.start()
automatedTradingStrategy()
"""
Explanation: A nice thing is that we can customize our candlestick charts. We can choose the time frame (here the last 90 days), and with the pandas resample function we can create the desired OHLC data (here daily bars). The mpl_finance module then lets us plot nice candlestick charts. It is also possible to plot financial indicators, e.g. a simple moving average as done here; in the same way we could add a MACD indicator, for example. There is a library called TA-Lib which contains more than 200 indicators and is worth a look. Let's now look at one possibility to implement an automated trading strategy:
End of explanation
"""
t.cancel()
"""
Explanation: This is a simple strategy for the single pair XBT/EUR, but much more complex strategies covering several pairs can be coded the same way. The strategy says "buy/hold" if the 120-hour (5-day) exponential moving average is above the 240-hour (10-day) exponential moving average, and "sell" if it is below. With the threading module we can execute the function every hour, so that it first updates the latest trade data and then computes the relevant conditions. When the conditions are met it buys or sells at market, otherwise it holds the current positions. To stop the function, just call t.cancel(). There are many ways to implement automated trading; this is just one of them, and not necessarily a good one.
Be aware that the above strategy is very, very simple and will not guarantee any gains!
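To make the crossover rule concrete, here is a minimal, dependency-free sketch of the signal logic. The `ema` helper and the span values are illustrative assumptions; the notebook itself uses pandas' `ewm(span=...)`, which applies the same smoothing factor `alpha = 2 / (span + 1)`:

```python
def ema(prices, span):
    # exponential moving average with pandas-style smoothing: alpha = 2 / (span + 1)
    alpha = 2.0 / (span + 1)
    out = [float(prices[0])]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def crossover_signal(prices, fast_span=120, slow_span=240):
    # "buy/hold" while the fast EMA sits above the slow EMA, "sell" otherwise
    fast, slow = ema(prices, fast_span), ema(prices, slow_span)
    return "buy/hold" if fast[-1] > slow[-1] else "sell"
```

On a steadily rising price series the fast EMA tracks the latest (higher) prices more closely, so the signal is "buy/hold"; on a falling series it flips to "sell".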
End of explanation
"""
|
parrt/lolviz | examples.ipynb | bsd-3-clause | from lolviz import *
objviz([u'2016-08-12',107.779999,108.440002,107.779999,108.18])
table = [
['Date','Open','High','Low','Close','Volume'],
['2016-08-12',107.779999,108.440002,107.779999,108.18,18612300,108.18],
]
objviz(table)
d = dict([(c,chr(c)) for c in range(ord('a'),ord('f'))])
objviz(d)
tuplelist = d.items()
listviz(tuplelist)
tuplelist = d.items()
listviz(tuplelist, showassoc=False)
objviz(tuplelist)
T = ['11','12','13','14',['a','b','c'],'16']
lolviz(T)
objviz({'hi','mom'})
objviz({'superuser':True, 'mgr':False})
objviz(set(['elem%d'%i for i in range(20)])) # long set shown vertically
# test linked list node
class Node:
def __init__(self, value, next=None):
self.value = value
self.next = next
head = Node('tombu')
head = Node('parrt', head)
head = Node("xue", head)
objviz(head)
a = {Node('parrt'),Node('mary')}
objviz(a)
head2 = ('parrt',('mary',None))
objviz(head2)
data = [[]] * 5 # INCORRECT list of list init
lolviz(data)
data[0].append( ('a',4) )
data[2].append( ('b',9) ) # whoops! should be different list object
lolviz(data)
table = [ [] for i in range(5) ] # correct way to init
lolviz(table)
key = 'a'
value = 99
def hashcode(o): return ord(o) # assume keys are single-element strings
print("hashcode =", hashcode(key))
bucket_index = hashcode(key) % len(table)
print("bucket_index =", bucket_index)
bucket = table[bucket_index]
bucket.append( (key,value) ) # add association to the bucket
lolviz(table)
key = 'f'
value = 99
print("hashcode =", hashcode(key))
bucket_index = hashcode(key) % len(table)
print("bucket_index =", bucket_index)
bucket = table[bucket_index]
bucket.append( (key,value) ) # add association to the bucket
lolviz(table)
"""
Explanation: Examples for lolviz
Install
If on a mac, I had to do this:

```bash
$ brew install graphviz # had to upgrade graphviz on El Capitan
```

Then

```bash
$ pip install lolviz
```
Sample visualizations
End of explanation
"""
objviz(table)
courses = [
['msan501', 51],
['msan502', 32],
['msan692', 101]
]
mycourses = courses
print(id(mycourses), id(courses))
objviz(courses)
"""
Explanation: If we don't indicate we want a simple 2-level list of list with lolviz(), we get a generic object graph:
End of explanation
"""
strviz('New York')
class Tree:
def __init__(self, value, left=None, right=None):
self.value = value
self.left = left
self.right = right
root = Tree('parrt',
Tree('mary',
Tree('jim',
Tree('srinivasan'),
Tree('april'))),
Tree('xue',None,Tree('mike')))
treeviz(root)
from IPython.display import display
N = 100
def f(x):
a = ['hi','mom']
thestack = callsviz(varnames=['table','x','head','courses','N','a'])
display(thestack)
f(99)
"""
Explanation: You can also display strings as arrays in isolation (but not in other data structures as I figured it's not that useful in most cases):
End of explanation
"""
def f(x):
thestack = callsviz(varnames=['table','x','tree','head','courses'])
print(thestack.source[:100]) # show first 100 char of graphviz syntax
thestack.render("/tmp/t") # save as PDF
f(99)
"""
Explanation: If you'd like to save an image from jupyter, use render():
End of explanation
"""
import numpy as np
A = np.array([[1,2,8,9],[3,4,22,1]])
objviz(A)
B = np.ones((100,100))
for i in range(100):
for j in range(100):
B[i,j] = i+j
B
matrixviz(A)
matrixviz(B)
A = np.array(np.arange(-5.0,5.0,2.1))
B = A.reshape(-1,1)
matrices = [A,B]
def f():
w,h = 20,20
C = np.ones((w,h), dtype=int)
for i in range(w):
for j in range(h):
C[i,j] = i+j
display(callsviz(varnames=['matrices','A','C']))
f()
"""
Explanation: Numpy viz
End of explanation
"""
import pandas as pd
df = pd.DataFrame()
df["sqfeet"] = [750, 800, 850, 900,950]
df["rent"] = [1160, 1200, 1280, 1450,2000]
objviz(df)
objviz(df.rent)
"""
Explanation: Pandas dataframes, series
End of explanation
"""
|
computational-class/cjc2016 | code/0.common_questions.ipynb | mit | import graphlab as gl
from IPython.display import display
from IPython.display import Image
gl.canvas.set_target('ipynb')
"""
Explanation: Running Jupyter Notebook in an Anaconda environment
Common problems and their solutions
How to quickly find the home directory on a Mac
- 1. In Finder's Preferences, open the Sidebar tab and tick the house icon under Favorites; the user directory then appears in the sidebar.
2. In Finder's Preferences, open the General tab and tick Hard disks; the disk then appears on the desktop, which also makes it easy to reach the root directory and, from there, the user directory.
3. On the desktop, the Go menu in Finder also lets you jump straight to the home directory.
How to open Jupyter Notebook
Mac users: open Terminal (you can find it in Launchpad) and type: jupyter notebook
Windows users: type 'cmd' at the lower left of the screen to open a terminal, then type: jupyter notebook
A package installs successfully in the terminal, but cannot be imported in the notebook
This mostly happens to Mac users, because macOS ships with a system Python of its own, and the package ends up installed into that system Python. Make sure you use conda's own pip by spelling out its full path; for example, my pip lives at /Users/datalab/anaconda/bin/pip, so in the terminal I type:
/Users/datalab/anaconda/bin/pip install package_name
Alternatively, on the notebook start page choose New - Terminal at the top right, and in that terminal type
pip install package_name
or install the package through Spyder, which ships with Anaconda.
Common packages can also be installed directly with
conda install package_name
How to list the packages bundled with Anaconda and the ones already installed?
Open a terminal and type: conda list
Windows users installing graphlab-create get the error: unistall tornado, permission denied: tornado/speedup.pdy. Fix:
First, uninstall tornado:
conda remove tornado
Then rerun:
pip install -U graphlab-create
<del>Add a Chinese Anaconda mirror to install Python packages faster
Add the Tsinghua mirror
https://mirrors.tuna.tsinghua.edu.cn/help/anaconda/
```python
conda config --add channels https://mirrors.sjtug.sjtu.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.sjtug.sjtu.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.sjtug.sjtu.edu.cn/anaconda/cloud/conda-forge/
conda config --set show_channel_urls yes
```
Show the channel URLs when searching
conda config --set show_channel_urls yes
If adding the channels on the command line fails, you can add https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ to the .condarc file in your home directory:
If the file does not exist you can create it directly; on Windows it is C://Users/username/.condarc, on Linux/Mac it is ~/.condarc
To install a different version of Python without overwriting the current version
https://conda.io/docs/user-guide/tasks/manage-python.html
Create a new environment and install the second Python version into it. To create the new environment for Python 2.7, in your Terminal window or an Anaconda Prompt, run:
conda create -n py27 python=2.7 anaconda=4.0.0
Activate the new environment (switch into it)
On Linux/Mac use: source activate py27
On Windows use: activate py27
To exit the environment: source deactivate py27
You can also switch back to the root environment with activate root
Verify that the new environment is your current environment.
To verify that the current environment uses the new Python version, in your Terminal window or an Anaconda Prompt, run: python --version
An example session with the py27 environment:
Activate the py27 environment: source activate py27
Open the notebook: jupyter notebook
Close the py27 environment: source deactivate py27
How to make graphlab display all results inline in the notebook (without opening a separate window)
Run the following code
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(2, 2)
plt.text(2, 2, '汉字', fontsize = 300)
plt.show()
"""
Explanation: How to uninstall a package
conda remove package_name
roll back to a specific commit
open the terminal, and cd to your github repo, e.g.,
cd github/cjc2016
git reset --hard <old-commit-id>, and if your old-commit-id is 3808166
git reset --hard 3808166
git push origin HEAD --force
http://stackoverflow.com/questions/4372435/how-can-i-rollback-a-github-repository-to-a-specific-commit
How to fix Chinese characters not displaying properly in matplotlib plots
Reason: matplotlib's default font is not a Chinese font.
Fix: set a Chinese font as the default preferred font; here we make Microsoft YaHei the default.
Environment: Windows
Steps:
Find the config file in your Python installation: %Python_Home%\Lib\site-packages\matplotlib\mpl-data\matplotlibrc and open it with any text editor. (Best to back it up first.)
Go to line 139, #font.family: remove the comment marker and change the value after the colon to Microsoft YaHei.
Go to line 151, #font.sans-serif: remove the comment marker and add Microsoft YaHei at the very front of the list after the colon, followed by an English comma (,).
To be safe, find the Microsoft YaHei font file msyh.ttf in C:\Windows\Fonts\ and copy it to D:\Python32\Lib\site-packages\matplotlib\mpl-data\fonts\ttf\
The same fix on a Mac:
Environment: Mac
Steps:
Download the Microsoft YaHei font file msyh.ttf, then double-click it to install.
Find the config file in your Python installation: %Python_Home%\Lib\site-packages\matplotlib\mpl-data\matplotlibrc and open it with any text editor. (Best to back it up first.)
> Users/datalab/Applications/anaconda/lib/python3.5/site-packages/matplotlib/mpl-data/
Go to line 139, #font.family: remove the comment marker and change the value after the colon to Microsoft YaHei.
Go to line 151, #font.sans-serif: remove the comment marker and add Microsoft YaHei at the very front of the list after the colon, followed by an English comma (,).
To be safe, copy msyh.ttf to the %Python_Home%\Lib\site-packages\matplotlib\mpl-data\fonts\ttf\ directory
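Besides editing matplotlibrc, the same fonts can be set at runtime via rcParams, which avoids touching the installation directory. This is a minimal config sketch; the font names are assumptions and must match fonts actually installed on your system:

```python
import matplotlib

# assumes a Chinese-capable font (e.g. Microsoft YaHei or SimHei) is installed on the system
matplotlib.rcParams['font.sans-serif'] = ['Microsoft YaHei', 'SimHei']
matplotlib.rcParams['axes.unicode_minus'] = False  # render minus signs correctly with CJK fonts
```

Put these lines before any plotting call in the notebook; they only affect the current session.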
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/guide/checkpoint.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
class Net(tf.keras.Model):
"""A simple linear model."""
def __init__(self):
super(Net, self).__init__()
self.l1 = tf.keras.layers.Dense(5)
def call(self, x):
return self.l1(x)
net = Net()
"""
Explanation: Training checkpoints
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/checkpoint"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Since community translations are best-effort, there is no guarantee
that this is an accurate and up-to-date reflection of the official English documentation.
If you have suggestions to improve this translation, please send a pull request
to the tensorflow/docs-l10n GitHub repository.
To volunteer to write or review translations, email
docs-ko@tensorflow.org.
The phrase "Saving a TensorFlow model" typically means one of two things:
Checkpoints, or
SavedModel.
Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model, so they are typically only useful when source code that will use the saved parameter values is available.
The SavedModel format, on the other hand, includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created them. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C#, etc. TensorFlow APIs).
This guide covers APIs for writing and reading checkpoints.
Setup
End of explanation
"""
net.save_weights('easy_checkpoint')
"""
Explanation: Saving from tf.keras training APIs
See the tf.keras guide on saving and
restoring.
tf.keras.Model.save_weights saves a TensorFlow checkpoint.
End of explanation
"""
def toy_dataset():
inputs = tf.range(10.)[:, None]
labels = inputs * 5. + tf.range(5.)[None, :]
return tf.data.Dataset.from_tensor_slices(
dict(x=inputs, y=labels)).repeat(10).batch(2)
def train_step(net, example, optimizer):
"""Trains `net` on `example` using `optimizer`."""
with tf.GradientTape() as tape:
output = net(example['x'])
loss = tf.reduce_mean(tf.abs(output - example['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
"""
Explanation: Writing checkpoints
The persistent state of a TensorFlow model is stored in tf.Variable objects. These can be constructed directly, but are often created through high-level APIs like tf.keras.layers or tf.keras.Model.
The easiest way to manage variables is by attaching them to Python objects, then referencing those objects.
Subclasses of tf.train.Checkpoint, tf.keras.layers.Layer, and tf.keras.Model automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.
You can easily save a model-checkpoint with Model.save_weights.
Manual checkpointing
Setup
To help demonstrate all the features of tf.train.Checkpoint, define a toy dataset and an optimization step.
End of explanation
"""
opt = tf.keras.optimizers.Adam(0.1)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
"""
Explanation: Create the checkpoint objects
To manually make a checkpoint you need a tf.train.Checkpoint object, where the objects you want to checkpoint are set as attributes on the object.
A tf.train.CheckpointManager is also helpful for managing multiple checkpoints.
End of explanation
"""
def train_and_checkpoint(net, manager):
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
print("Restored from {}".format(manager.latest_checkpoint))
else:
print("Initializing from scratch.")
for example in toy_dataset():
loss = train_step(net, example, opt)
ckpt.step.assign_add(1)
if int(ckpt.step) % 10 == 0:
save_path = manager.save()
print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
print("loss {:1.2f}".format(loss.numpy()))
train_and_checkpoint(net, manager)
"""
Explanation: Train and checkpoint the model
The following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.
End of explanation
"""
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
train_and_checkpoint(net, manager)
"""
Explanation: Restore and continue training
After the first training run you can pass a new model and manager, and training picks up exactly where you left off:
End of explanation
"""
print(manager.checkpoints)  # list the remaining checkpoints
"""
Explanation: The tf.train.CheckpointManager object deletes old checkpoints. Above it is configured to keep only the three most recent checkpoints.
End of explanation
"""
!ls ./tf_ckpts
"""
Explanation: These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state.
End of explanation
"""
to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy())  # all zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy())  # we get the restored value now
"""
Explanation: <a id="loading_mechanics"/>
Loading mechanics
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the "l1" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkpoint uses its keyword argument names, as in the "step" in tf.train.Checkpoint(step=...).
The dependency graph from the example above looks like this:
The optimizer is in red, regular variables are in blue, and optimizer slot variables are in orange. The other nodes, for example the tf.train.Checkpoint itself, are in black.
Slot variables are part of the optimizer's state, but are created for a specific variable. For example the 'm' edges above correspond to momentum, which the Adam optimizer tracks for each variable. Slot variables are only saved in a checkpoint if both the variable and the optimizer would be saved, hence the dashed edges.
Calling restore() on a tf.train.Checkpoint object queues the requested restorations, restoring variable values as soon as there is a matching path from the Checkpoint object. For example, we can load just the bias from the model we defined above by reconstructing one path to it through the network and the layer.
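As an illustration of how attribute paths become checkpoint keys, here is a small, hypothetical pure-Python sketch. It mimics only the slash-joined naming; real checkpoint keys also carry suffixes such as `.ATTRIBUTES/VARIABLE_VALUE`:

```python
def checkpoint_keys(obj, prefix=""):
    """Flatten a nested dict of attributes into slash-joined key paths."""
    keys = {}
    for name, child in obj.items():
        path = prefix + name
        if isinstance(child, dict):
            keys.update(checkpoint_keys(child, path + "/"))
        else:
            keys[path] = child
    return keys
```

For example, `checkpoint_keys({"net": {"l1": {"bias": 0.5}}, "step": 10})` yields `{"net/l1/bias": 0.5, "step": 10}`, mirroring the edge names in the dependency graph above.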
End of explanation
"""
status.assert_existing_objects_matched()
"""
Explanation: The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint we wrote above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.
restore() returns a status object, which has optional assertions. All of the objects created in our new Checkpoint have been restored, so status.assert_existing_objects_matched() passes.
End of explanation
"""
delayed_restore = tf.Variable(tf.zeros([1, 5]))
print(delayed_restore.numpy())  # not restored; still zeros
fake_layer.kernel = delayed_restore
print(delayed_restore.numpy())  # restored
"""
Explanation: There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed() only passes if the checkpoint and the program match exactly, and would throw an exception here.
Delayed restorations
Layer objects in TensorFlow may delay the creation of variables to their first call, when input shapes are available. For example, the shape of a Dense layer's kernel depends on both the layer's input and output shapes, so the output shape required as a constructor argument is not enough information to create the variable on its own.
To support this idiom, tf.train.Checkpoint queues restores which don't yet have a matching variable.
"""
tf.train.list_variables(tf.train.latest_checkpoint('./tf_ckpts/'))
"""
Explanation: Manually inspecting checkpoints
tf.train.list_variables lists the checkpoint keys and shapes of the variables in a checkpoint. Checkpoint keys are paths in the graph displayed above.
End of explanation
"""
save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')
restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy()  # not restored yet
restore.mapped = {'two': v2}
restore.restore(save_path)
assert 2. == v2.numpy()
"""
Explanation: List and dictionary tracking
As with direct attribute assignments like self.l1 = tf.keras.layers.Dense(5), assigning lists and dictionaries to attributes will track their contents.
End of explanation
"""
restore.listed = []
print(restore.listed)  # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1)  # restores v1, from restore() in the previous cell
assert 1. == v1.numpy()
"""
Explanation: You may notice wrapper objects around the lists and dictionaries. These wrappers are checkpointable versions of the underlying data structures. Just like attribute-based loading, these wrappers restore a variable's value as soon as it is added to the container.
End of explanation
"""
import tensorflow.compat.v1 as tf_compat
def model_fn(features, labels, mode):
net = Net()
opt = tf.keras.optimizers.Adam(0.1)
ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(),
optimizer=opt, net=net)
with tf.GradientTape() as tape:
output = net(features['x'])
loss = tf.reduce_mean(tf.abs(output - features['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
return tf.estimator.EstimatorSpec(
mode,
loss=loss,
train_op=tf.group(opt.apply_gradients(zip(gradients, variables)),
ckpt.step.assign_add(1)),
      # Tell the Estimator to save "ckpt" in an object-based format.
scaffold=tf_compat.train.Scaffold(saver=ckpt))
tf.keras.backend.clear_session()
est = tf.estimator.Estimator(model_fn, './tf_estimator_example/')
est.train(toy_dataset, steps=10)
"""
Explanation: The same tracking is automatically applied to subclasses of tf.keras.Model, and it may be used, for example, to track lists of layers.
Saving object-based checkpoints with Estimator
See the Estimator guide.
Estimators by default save checkpoints with variable names rather than the object graph described in the previous sections. tf.train.Checkpoint will accept name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's model_fn. Saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one.
End of explanation
"""
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
ckpt = tf.train.Checkpoint(
step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net)
ckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/'))
ckpt.step.numpy()  # from est.train(..., steps=10)
"""
Explanation: tf.train.Checkpoint can then load the Estimator's checkpoints from its model_dir.
End of explanation
"""
|
mdeff/ntds_2016 | algorithms/06_sol_recurrent_nn.ipynb | mit | # Import libraries
import tensorflow as tf
import numpy as np
import collections
import os
# Load text data
data = open(os.path.join('datasets', 'text_ass_6.txt'), 'r').read() # must be simple plain text file
print('Text data:',data)
chars = list(set(data))
print('\nSingle characters:',chars)
data_len, vocab_size = len(data), len(chars)
print('\nText data has %d characters, %d unique.' % (data_len, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
print('\nMapping characters to numbers:',char_to_ix)
print('\nMapping numbers to characters:',ix_to_char)
"""
Explanation: A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Assignment 3 : Recurrent Neural Networks
End of explanation
"""
# hyperparameters of RNN
batch_size = 3 # batch size
batch_len = data_len // batch_size # batch length
T = 5 # temporal length
epoch_size = (batch_len - 1) // T # nb of iterations to get one epoch
D = vocab_size # data dimension = nb of unique characters
H = 5*D # size of hidden state, the memory layer
print('data_len=',data_len,' batch_size=',batch_size,' batch_len=',
batch_len,' T=',T,' epoch_size=',epoch_size,' D=',D)
"""
Explanation: Goal
The goal is to define with TensorFlow a vanilla recurrent neural network (RNN) model:
$$
\begin{aligned}
h_t &= \textrm{tanh}(W_h h_{t-1} + W_x x_t + b_h)\
y_t &= W_y h_t + b_y
\end{aligned}
$$
to predict a sequence of characters. $x_t \in \mathbb{R}^D$ is the input character of the RNN in a dictionary of size $D$. $y_t \in \mathbb{R}^D$ is the character predicted (through a distribution function) by the RNN system. $h_t \in \mathbb{R}^H$ is the memory of the RNN, called the hidden state at time $t$. Its dimensionality is arbitrarily chosen as $H$. The variables of the system are $W_h \in \mathbb{R}^{H\times H}$, $W_x \in \mathbb{R}^{H\times D}$, $W_y \in \mathbb{R}^{D\times H}$, $b_h \in \mathbb{R}^H$, and $b_y \in \mathbb{R}^D$. <br>
The number of time steps of the RNN is $T$, that is we will learn a sequence of data of length $T$: $x_t$ for $t=0,...,T-1$.
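The unrolled recurrence can be sketched in plain NumPy before building it in TensorFlow. This is a minimal illustration following the shapes defined above (the helper name and test values are assumptions, not part of the assignment):

```python
import numpy as np

def vanilla_rnn(X, h0, Wh, Wx, Wy, bh, by):
    # X: (T, D) input sequence, h0: (H,) initial hidden state
    h, ys = h0, []
    for x_t in X:
        h = np.tanh(Wh @ h + Wx @ x_t + bh)   # h_t = tanh(Wh h_{t-1} + Wx x_t + bh)
        ys.append(Wy @ h + by)                # y_t = Wy h_t + by
    return np.stack(ys)                       # (T, D) predicted logits
```

Running it on random data of shape (T, D) returns one D-dimensional output per time step.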
End of explanation
"""
# input variables of computational graph (CG)
Xin = tf.placeholder(tf.float32, [batch_size,T,D]); #print('Xin=',Xin) # Input
Ytarget = tf.placeholder(tf.int64, [batch_size,T]); #print('Y_=',Y_) # target
hin = tf.placeholder(tf.float32, [batch_size,H]); #print('hin=',hin.get_shape())
"""
Explanation: Step 1
Initialize input variables of the computational graph:<br>
(1) Xin of size batch_size x T x D and type tf.float32. Each input character is encoded on a vector of size D.<br>
(2) Ytarget of size batch_size x T and type tf.int64. Each target character is encoded by a value in {0,...,D-1}.<br>
(3) hin of size batch_size x H and type tf.float32<br>
End of explanation
"""
# Model variables
Wx = tf.Variable(tf.random_normal([D,H], stddev=tf.sqrt(6./tf.to_float(D+H)))); print('Wx=',Wx.get_shape())
Wh = tf.Variable(0.01*np.identity(H, np.float32)); print('Wh=',Wh.get_shape())
Wy = tf.Variable(tf.random_normal([H,D], stddev=tf.sqrt(6./tf.to_float(H+D)))); print('Wy=',Wy.get_shape())
bh = tf.Variable(tf.zeros([H])); print('bh=',bh.get_shape())
by = tf.Variable(tf.zeros([D])); print('by=',by.get_shape())
"""
Explanation: Step 2
Define the variables of the computational graph:<br>
(1) $W_x$ is a random variable of shape D x H with normal distribution of variance $\frac{6}{D+H}$<br>
(2) $W_h$ is an identity matrix multiplies by constant $0.01$<br>
(3) $W_y$ is a random variable of shape H x D with normal distribution of variance $\frac{6}{D+H}$<br>
(4) $b_h$, $b_y$ are zero vectors of size H, and D<br>
End of explanation
"""
# Vanilla RNN implementation
Y = []
ht = hin
for t, xt in enumerate(tf.split(1, T, Xin)):
if batch_size>1:
xt = tf.squeeze(xt); #print('xt=',xt)
else:
xt = tf.squeeze(xt)[None,:]
ht = tf.matmul(ht, Wh); #print('ht1=',ht)
ht += tf.matmul(xt, Wx); #print('ht2=',ht)
ht += bh; #print('ht3=',ht)
ht = tf.tanh(ht); #print('ht4=',ht)
yt = tf.matmul(ht, Wy); #print('yt1=',yt)
yt += by; #print('yt2=',yt)
Y.append(yt)
#print('Y=',Y)
Y = tf.pack(Y);
if batch_size>1:
Y = tf.squeeze(Y);
Y = tf.transpose(Y, [1, 0, 2])
print('Y=',Y.get_shape())
print('Ytarget=',Ytarget.get_shape())
"""
Explanation: Step 3
Implement the recursive formula:
$$
\begin{aligned}
h_t &= \textrm{tanh}(W_h h_{t-1} + W_x x_t + b_h)\
y_t &= W_y h_t + b_y
\end{aligned}
$$
with $h_{t=0}=hin$.<br>
Hints: <br>
(1) You may use functions tf.split(), enumerate(), tf.squeeze(), tf.matmul(), tf.tanh(), tf.transpose(), append(), pack().<br>
(2) You may use a matrix Y of shape batch_size x T x D. We recall that Ytarget should have the shape batch_size x T.<br>
End of explanation
"""
# perplexity
logits = tf.reshape(Y,[batch_size*T,D])
weights = tf.ones([batch_size*T])
cross_entropy_perplexity = tf.nn.seq2seq.sequence_loss_by_example([logits],[Ytarget],[weights])
cross_entropy_perplexity = tf.reduce_sum(cross_entropy_perplexity) / batch_size
loss = cross_entropy_perplexity
"""
Explanation: Step 4
Perplexity loss is implemented as:
End of explanation
"""
# Optimization
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
"""
Explanation: Step 5
Implement the optimization of the loss function.
Hint: You may use function tf.train.GradientDescentOptimizer().
End of explanation
"""
# Predict
idx_pred = tf.placeholder(tf.int64) # input seed
xtp = tf.one_hot(idx_pred,depth=D); #print('xtp1=',xtp.get_shape())
htp = tf.zeros([1,H])
Ypred = []
for t in range(T):
htp = tf.matmul(htp, Wh); #print('htp1=',htp)
htp += tf.matmul(xtp, Wx); #print('htp2=',htp)
htp += bh; #print('htp3=',htp) # (1, 100)
htp = tf.tanh(htp); #print('htp4=',htp) # (1, 100)
ytp = tf.matmul(htp, Wy); #print('ytp1=',ytp)
ytp += by; #print('ytp2=',ytp)
ytp = tf.nn.softmax(ytp); #print('yt1=',ytp)
ytp = tf.squeeze(ytp); #print('yt2=',ytp)
seed_idx = tf.argmax(ytp,dimension=0); #print('seed_idx=',seed_idx)
xtp = tf.one_hot(seed_idx,depth=D)[None,:]; #print('xtp2=',xtp.get_shape())
Ypred.append(seed_idx)
Ypred = tf.convert_to_tensor(Ypred)
# Prepare train data matrix of size "batch_size x batch_len"
data_ix = [char_to_ix[ch] for ch in data[:data_len]]
train_data = np.array(data_ix)
print('original train set shape',train_data.shape)
train_data = np.reshape(train_data[:batch_size*batch_len], [batch_size,batch_len])
print('pre-processed train set shape',train_data.shape)
# The following function transforms an integer value d in {0,...,D-1} into a one-hot vector, that is a
# vector of dimension D which has value 1 at index d, and 0 otherwise
from scipy.sparse import coo_matrix
def convert_to_one_hot(a,max_val=None):
N = a.size
data = np.ones(N,dtype=int)
sparse_out = coo_matrix((data,(np.arange(N),a.ravel())), shape=(N,max_val))
return np.array(sparse_out.todense())
"""
Explanation: Step 6
Implement the prediction scheme: from an input character e.g. "h" then the RNN should predict "ello". <br>
Hints: <br>
(1) You should use the learned RNN.<br>
(2) You may use functions tf.one_hot(), tf.nn.softmax(), tf.argmax().
End of explanation
"""
# Run CG
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
h0 = np.zeros([batch_size,H])
indices = collections.deque()
costs = 0.0; epoch_iters = 0
for n in range(50):
# Batch extraction
if len(indices) < 1:
indices.extend(range(epoch_size))
costs = 0.0; epoch_iters = 0
i = indices.popleft()
batch_x = train_data[:,i*T:(i+1)*T]
batch_x = convert_to_one_hot(batch_x,D); batch_x = np.reshape(batch_x,[batch_size,T,D])
batch_y = train_data[:,i*T+1:(i+1)*T+1]
#print(batch_x.shape,batch_y.shape)
# Train
idx = char_to_ix['h'];
loss_value,_,Ypredicted = sess.run([loss,train_step,Ypred], feed_dict={Xin: batch_x, Ytarget: batch_y, hin: h0, idx_pred: [idx]})
# Perplexity
costs += loss_value
epoch_iters += T
perplexity = np.exp(costs/epoch_iters)
if not n%1:
idx_char = Ypredicted
txt = ''.join(ix_to_char[ix] for ix in list(idx_char))
print('\nn=',n,', perplexity value=',perplexity)
print('starting char=',ix_to_char[idx], ', predicted sequences=',txt)
sess.close()
"""
Explanation: Step 7
Run the computational graph with batches of training data.<br>
Predict the sequence of characters starting from the character "h".<br>
Hints:<br>
(1) Initial memory is $h_{t=0}$ is 0.<br>
(2) Run the computational graph to optimize the perplexity loss, and to predict the the sequence of characters starting from the character "h".<br>
End of explanation
"""
|
elenduuche/deep-learning | gan_mnist/Intro_to_GANs_Solution.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. The discriminator, in turn, is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
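As a quick illustration (a standalone NumPy sketch — the network below applies tf.maximum directly rather than a helper like this):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """f(x) = max(alpha*x, x): identity for positive x, small slope alpha for negative x."""
    return np.maximum(alpha * x, x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # negative input is scaled down by alpha
```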
Tanh Output
The generator has been found to perform best with a $tanh$ output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
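In miniature, the rescale from $[0, 1]$ to $[-1, 1]$ applied in the training loop below looks like this (a standalone NumPy sketch):

```python
import numpy as np

pixels = np.array([0.0, 0.25, 0.5, 1.0])  # MNIST intensities in [0, 1]
rescaled = pixels * 2 - 1                 # now in [-1, 1], matching tanh's output range
print(rescaled)
```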
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
End of explanation
"""
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                            labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean over all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
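To see label smoothing in a concrete calculation, here is a small NumPy sketch of sigmoid cross-entropy (the naive formula, fine for small logits; TensorFlow's op uses a numerically stable form; the logit values are made up for illustration):

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # Naive form of -y*log(p) - (1-y)*log(1-p); fine for small logits
    p = 1.0 / (1.0 + np.exp(-logits))
    return -labels * np.log(p) - (1 - labels) * np.log(1 - p)

logits = np.array([2.0, -1.0])  # hypothetical discriminator outputs for real images
smooth = 0.1
hard = sigmoid_cross_entropy(logits, np.ones_like(logits))                     # labels = 1.0
smoothed = sigmoid_cross_entropy(logits, np.ones_like(logits) * (1 - smooth))  # labels = 0.9
print(hard, smoothed)
```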
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build separate optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
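The name-prefix filtering itself is plain Python; a minimal sketch using strings as stand-ins for real variable objects (the specific names here are hypothetical, invented for illustration):

```python
# Hypothetical stand-ins for the var.name strings from tf.trainable_variables()
var_names = ['generator/dense/kernel:0',
             'generator/dense_1/bias:0',
             'discriminator/dense/kernel:0']

g_names = [n for n in var_names if n.startswith('generator')]
d_names = [n for n in var_names if n.startswith('discriminator')]
print(g_names)
print(d_names)
```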
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures, like 1s and 9s, appear out of the noise.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
rvperry/phys202-2015-work | assignments/assignment05/InteractEx03.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 3
Imports
End of explanation
"""
# Note: NumPy has no sech function; sech(x) is computed as 1/np.cosh(x) below
def soliton(x, t, c, a):
"""Return phi(x, t) for a soliton wave with constants c and a."""
    phi = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - a)) ** 2
return phi
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
"""
Explanation: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t is a NumPy array, in which case it should return a NumPy array itself.
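One way to spot-check an implementation: at $x = ct + a$ the sech argument vanishes, so $\phi$ should peak at exactly $c/2$. A standalone sketch mirroring the definition above (NumPy has no sech, so $\mathrm{sech}(u) = 1/\cosh(u)$ is used):

```python
import numpy as np

def soliton_check(x, t, c, a):
    """phi(x, t) = 0.5*c * sech^2(0.5*sqrt(c)*(x - c*t - a))."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - a)) ** 2

# At x = c*t + a the sech argument is 0, so the peak equals c/2
print(soliton_check(np.array([0.0]), 0.0, 1.0, 0.0))  # [0.5]
```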
End of explanation
"""
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
"""
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
"""
q=[1,2,3,4]
b=[4,5,6]
B,A=np.meshgrid(b,q)
phi=soliton(A,B,c,a)
np.shape(phi)
T,X=np.meshgrid(t,x)
phi=soliton(X,T,c,a)
np.shape(phi)
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
"""
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
"""
def plot_soliton_data(i=0):
"""Plot the soliton data at t[i] versus x."""
    plt.plot(x, phi[:, i])
plt.tick_params(direction='out')
plt.xlabel('x')
plt.ylabel('phi')
plt.ylim(0,.6)
plt.title('Soliton Data')
    print('t =', t[i])
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
"""
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
"""
interact(plot_soliton_data, i=(0, tpoints - 1))
assert True # leave this for grading the interact with plot_soliton_data cell
"""
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation
"""
|
jacobdein/alpine-soundscapes | Calculate elevation range.ipynb | mit | from geo.models import Raster
from geo.models import Boundary
import rasterio
from shapely.geometry import shape
import numpy
import numpy.ma
import rasterio.mask
from matplotlib import cm as colormaps
from matplotlib import pyplot
%matplotlib inline
"""
Explanation: Calculate elevation range
This notebook calculates the minimum and maximum elevation within a study area, defined by a vector polygon boundary, from a raster digital elevation model.
import statements
End of explanation
"""
dem_record = Raster.objects.get(name='dem')
"""
Explanation: load raster information from database
End of explanation
"""
boundary_record = Boundary.objects.get(name='study area')
boundary = [eval(boundary_record.geometry.geojson)]
"""
Explanation: load study area boundary
End of explanation
"""
with rasterio.open(path=dem_record.filepath, mode='r') as source:
dem, transform = rasterio.mask.mask(source, boundary, crop=True, nodata=-9999)
dem = numpy.ma.masked_equal(dem.data, -9999)
"""
Explanation: mask raster with boundary
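In miniature, the nodata masking works like the following toy-array sketch (the elevation values are made up; -9999 matches the nodata value used here):

```python
import numpy as np
import numpy.ma as ma

# Toy "DEM" with made-up elevations; -9999 marks cells outside the boundary
dem = np.array([[-9999,   570,   610],
                [  590,   820, -9999],
                [-9999, -9999,  2350]])

masked = ma.masked_equal(dem, -9999)
print(masked.min(), masked.max())  # nodata cells are ignored
```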
End of explanation
"""
dem.min()
dem.max()
"""
Explanation: calculate the minimum and maximum value
End of explanation
"""
pyplot.imshow(dem[0], cmap='viridis', vmin=dem.min(), vmax=dem.max())
pyplot.tick_params(bottom=False, labelbottom=False,
left=False, labelleft=False,
top=False, right=False)
pyplot.axes().set_frame_on(False)
cb = pyplot.colorbar()
"""
Explanation: plot the masked raster
End of explanation
"""
|
InsightSoftwareConsortium/ITKExamples | src/Core/Transform/MutualInformationAffine/MutualInformationAffine.ipynb | apache-2.0 | import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from urllib.request import urlretrieve
import itk
from itkwidgets import compare, checkerboard
"""
Explanation: Mutual Information Metric
The MutualInformationImageToImageMetric class computes the mutual information between two images, i.e. the degree to which information content in one image is dependent on the other image. This example shows how MutualInformationImageToImageMetric can be used to map affine transformation parameters and register two images using a gradient ascent algorithm.
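To build intuition for what the metric measures, mutual information can be estimated from a joint intensity histogram. The following is a NumPy sketch of that general idea — not ITK's Parzen-windowed implementation:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Estimate mutual information (in nats) from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y), shape (1, bins)
    nz = pxy > 0                         # skip log(0) terms
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img))                   # high: an image fully predicts itself
print(mutual_information(img, rng.random((64, 64))))  # near zero: independent noise
```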
End of explanation
"""
fixed_image_path = 'fixed.png'
moving_image_path = 'moving.png'
if not os.path.exists(fixed_image_path):
url = 'https://data.kitware.com/api/v1/file/602c10a22fa25629b97d2896/download'
urlretrieve(url, fixed_image_path)
if not os.path.exists(moving_image_path):
url = 'https://data.kitware.com/api/v1/file/602c10a32fa25629b97d28a0/download'
urlretrieve(url, moving_image_path)
fixed_image = itk.imread(fixed_image_path, itk.F)
moving_image = itk.imread(moving_image_path, itk.F)
checkerboard(fixed_image, moving_image)
"""
Explanation: Retrieve fixed and moving images for registration
We aim to register two slice images, one of which has an arbitrary offset and rotation. We seek to use an affine transform to appropriately rotate and translate the moving image to register with the fixed image.
End of explanation
"""
ImageType = type(fixed_image)
fixed_normalized_image = itk.normalize_image_filter(fixed_image)
fixed_smoothed_image = itk.discrete_gaussian_image_filter(fixed_normalized_image, variance=2.0)
moving_normalized_image = itk.normalize_image_filter(moving_image)
moving_smoothed_image = itk.discrete_gaussian_image_filter(moving_normalized_image, variance=2.0)
compare(fixed_smoothed_image, moving_smoothed_image)
"""
Explanation: Prepare images for registration
End of explanation
"""
X_INDEX = 4 # Translation in the X direction
Y_INDEX = 5 # Translation in the Y direction
# Move at most 20 pixels away from the initial position
window_size = [0] * 6
window_size[X_INDEX] = 20 # Set lower if visualizing elements 0-3
window_size[Y_INDEX] = 20 # Set lower if visualizing elements 0-3
# Collect 50 steps of data along each axis
n_steps = [0] * 6
n_steps[X_INDEX] = 50
n_steps[Y_INDEX] = 50
dim = fixed_image.GetImageDimension()
TransformType = itk.AffineTransform[itk.D,dim]
transform = TransformType.New()
InterpolatorType = itk.LinearInterpolateImageFunction[ImageType, itk.D]
interpolator = InterpolatorType.New()
MetricType = itk.MutualInformationImageToImageMetric[ImageType, ImageType]
metric = MetricType.New()
metric.SetNumberOfSpatialSamples(100)
metric.SetFixedImageStandardDeviation(5.0)
metric.SetMovingImageStandardDeviation(5.0)
metric.ReinitializeSeed(121212)
ExhaustiveOptimizerType = itk.ExhaustiveOptimizer
optimizer = ExhaustiveOptimizerType.New()
# Map out [n_steps] in each direction
optimizer.SetNumberOfSteps(n_steps)
# Move [window_size / n_steps] units with every step
scales = optimizer.GetScales()
scales.SetSize(6)
for i in range(0,6):
scales.SetElement(i, (window_size[i] / n_steps[i]) if n_steps[i] != 0 else 1)
optimizer.SetScales(scales)
# Collect data describing the parametric surface with an observer
surface = dict()
def print_iteration():
surface[tuple(optimizer.GetCurrentPosition())] = optimizer.GetCurrentValue()
optimizer.AddObserver(itk.IterationEvent(), print_iteration)
RegistrationType = itk.ImageRegistrationMethod[ImageType, ImageType]
registrar = RegistrationType.New()
registrar.SetFixedImage(fixed_smoothed_image)
registrar.SetMovingImage(moving_smoothed_image)
registrar.SetOptimizer(optimizer)
registrar.SetTransform(transform)
registrar.SetInterpolator(interpolator)
registrar.SetMetric(metric)
registrar.SetFixedImageRegion(fixed_image.GetBufferedRegion())
registrar.SetInitialTransformParameters(transform.GetParameters())
registrar.Update()
# Check the extreme positions within the observed window
max_position = list(optimizer.GetMaximumMetricValuePosition())
min_position = list(optimizer.GetMinimumMetricValuePosition())
max_val = optimizer.GetMaximumMetricValue()
min_val = optimizer.GetMinimumMetricValue()
print(max_position)
print(min_position)
# Set up values for the plot
x_vals = [list(set([x[i]
for x in surface.keys()])) for i in range(0,transform.GetNumberOfParameters())]
for i in range(0, transform.GetNumberOfParameters()):
x_vals[i].sort()
X, Y = np.meshgrid(x_vals[X_INDEX], x_vals[Y_INDEX])
Z = np.array([[surface[(1,0,0,1,x0,x1)] for x1 in x_vals[X_INDEX]]for x0 in x_vals[Y_INDEX]])
# Plot the surface as a 2D heat map
fig = plt.figure()
# Invert the y-axis to represent the image coordinate system
plt.gca().invert_yaxis()
ax = plt.gca()
surf = ax.scatter(X, Y, c=Z, cmap=cm.coolwarm)
# Mark extremes on the plot
ax.plot(max_position[X_INDEX],max_position[Y_INDEX],'k^')
ax.plot(min_position[X_INDEX],min_position[Y_INDEX],'kv')
# Plot the surface as a 3D scatter plot
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X,Y,Z,cmap=cm.coolwarm)
"""
Explanation: Plot the MutualInformationImageToImageMetric surface
For this relatively simple example we map out only the x- and y-translation parameters of the affine transform applied to the moving image. We can acquire MutualInformationImageToImageMetric values comparing the two images at many different possible offset pairs with ExhaustiveOptimizer and visualize this data set as a surface with matplotlib.
The affine transform contains six parameters representing each element in an affine matrix A which will dictate how the moving image is sampled. We know that the moving image has been translated so we will visualize the two translation parameters, but we could set X_INDEX and Y_INDEX to visualize any pair of parameters. See https://en.wikipedia.org/wiki/Affine_transformation#Image_transformation for more information on affine transformations.
End of explanation
"""
transform = TransformType.New()
interpolator = InterpolatorType.New()
metric = MetricType.New()
metric.SetNumberOfSpatialSamples(100)
metric.SetFixedImageStandardDeviation(5.0)
metric.SetMovingImageStandardDeviation(5.0)
metric.ReinitializeSeed(121212)
n_iterations = 200
optimizer = itk.GradientDescentOptimizer.New()
optimizer.SetLearningRate(1.0)
optimizer.SetNumberOfIterations(n_iterations)
optimizer.MaximizeOn()
# Set scales so that the optimizer can take
# large steps along translation parameters,
# moderate steps along rotational parameters, and
# small steps along scale parameters
optimizer.SetScales([100,0.5,0.5,100,0.0001,0.0001])
descent_data = dict()
descent_data[0] = (1,0,0,1,0,0)
def log_iteration():
descent_data[optimizer.GetCurrentIteration() + 1] = tuple(optimizer.GetCurrentPosition())
optimizer.AddObserver(itk.IterationEvent(), log_iteration)
registrar = RegistrationType.New()
registrar.SetFixedImage(fixed_smoothed_image)
registrar.SetMovingImage(moving_smoothed_image)
registrar.SetTransform(transform)
registrar.SetInterpolator(interpolator)
registrar.SetMetric(metric)
registrar.SetOptimizer(optimizer)
registrar.SetFixedImageRegion(fixed_image.GetBufferedRegion())
registrar.SetInitialTransformParameters(transform.GetParameters())
registrar.Update()
print(f'Its: {optimizer.GetCurrentIteration()}')
print(f'Final Value: {optimizer.GetValue()}')
print(f'Final Position: {list(registrar.GetLastTransformParameters())}')
descent_data
x_vals = [descent_data[i][X_INDEX] for i in range(0,n_iterations)]
y_vals = [descent_data[i][Y_INDEX] for i in range(0,n_iterations)]
"""
Explanation: Follow gradient ascent
Once we understand the shape of the parametric surface it is easier to visualize the gradient ascent algorithm. We see that there is some roughness to the surface, but it has a clear slope upwards. We want to maximize the mutual information between the two images in order to optimize registration. The results of gradient ascent optimization can be superimposed onto the matplotlib plot.
End of explanation
"""
fig = plt.figure()
# Note: We invert the y-axis to represent the image coordinate system
plt.gca().invert_yaxis()
ax = plt.gca()
surf = ax.scatter(X, Y, c=Z, cmap=cm.coolwarm)
for i in range(0,n_iterations-1):
plt.plot(x_vals[i:i+2],y_vals[i:i+2],'wx-')
plt.plot(descent_data[0][X_INDEX], descent_data[0][Y_INDEX],'bo')
plt.plot(descent_data[n_iterations-1][X_INDEX],descent_data[n_iterations-1][Y_INDEX],'ro')
plt.plot(max_position[X_INDEX], max_position[Y_INDEX], 'k^')
plt.plot(min_position[X_INDEX], min_position[Y_INDEX], 'kv')
"""
Explanation: We see in the plot that the metric generally improves as transformation parameters are updated with each iteration, but the final position may not align with the maximum position on the plot. This is one case in which it is difficult to visualize gradient ascent over a hyperdimensional space, where the optimizer is stepping through six parameter dimensions but the 2D plot we collected with ExhaustiveOptimizer represents a 'slice' in space with x[0:4] fixed at (1,0,0,1). Here it may be more useful to directly compare the two images after registration to evaluate fitness.
End of explanation
"""
ResampleFilterType = itk.ResampleImageFilter[ImageType,ImageType]
resample = ResampleFilterType.New(
Transform=transform,
Input=moving_image,
Size=fixed_image.GetLargestPossibleRegion().GetSize(),
OutputOrigin=fixed_image.GetOrigin(),
OutputSpacing=fixed_image.GetSpacing(),
OutputDirection=fixed_image.GetDirection(),
DefaultPixelValue=100)
resample.Update()
checkerboard(fixed_image, resample.GetOutput())
"""
Explanation: Resample the moving image
In order to apply the results of gradient ascent we must resample the moving image into the domain of the fixed image. The AffineTransform whose parameters have been selected through gradient ascent is used to dictate how the moving image is sampled from the fixed image domain. We can compare the two images with itkwidgets to verify that registration is successful.
End of explanation
"""
os.remove(fixed_image_path)
os.remove(moving_image_path)
"""
Explanation: The image comparison shows that the images were successfully translated to overlap, but were not fully rotated to exactly align. If we were to explore further we could use a different optimizer with the metric, such as the LBFGSBOptimizer class, which may be more successful in optimizing over a rough parametric surface. We can also explore different metrics such as the MattesMutualInformationImageToImageMetricv4 class to take advantage of the ITK v4+ registration framework, in contrast with the MutualInformationImageToImageMetric used in this example as part of the v3 framework.
Clean up
End of explanation
"""
|
cosmicBboy/mLearn | 02. Linear Regression.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
# X is the explanatory variable data structure
X = [[6], [8], [10], [14], [18]]
# Y is the response variable data structure
y = [[7], [9], [13], [17.5], [18]]
# instantiate a pyplot figure object
plt.figure()
plt.title('Figure 1. Pizza price plotted against diameter')
plt.xlabel('Diameter in inches')
plt.ylabel('Price in dollars')
plt.plot(X, y, 'k.')
plt.axis([0, 25, 0, 25])
plt.grid(True)
plt.show()
"""
Explanation: Linear Regression
In this chapter, you will learn:
Simple Linear Regression: A model that maps the relationship from a single explanatory variable to a continuous response variable with a linear model.
Multiple Linear Regression: A generalization of simple linear regression that maps the relationship from more than one explanatory variable to a continuous response variable.
Polynomial Regression: A special case of multiple linear regression that models nonlinear relationships.
Linear Regression Model Training: finding the parameter values for the linear regression model by minimizing a cost function.
Simple Linear Regression
Assumption: A linear relationship exists between the response variable and the explanatory variable. SLR models this relationship with a linear surface called a hyperplane. A hyperplane is a subspace that has one dimension less than the ambient space that contains it.
Task: Predict the price of a pizza
Explanatory Variable: Pizza size
Response Variable: Price
Data
| Training Instance | Diameter (inches) | Price (dollars) |
|-------------------|-------------------|-----------------|
| 1 | 6 | 7 |
| 2 | 8 | 9 |
| 3 | 10 | 13 |
| 4 | 14 | 17.5 |
| 5 | 18 | 18 |
Visualizing the Data
We can use matplotlib to visualize our training data
End of explanation
"""
from sklearn.linear_model import LinearRegression
# Training Data
# X is the explanatory variable data structure
X = [[6], [8], [10], [14], [18]]
# Y is the response variable data structure
y = [[7], [9], [13], [17.5], [18]]
# Create the model
model = LinearRegression()
# Fit the model to the training data
model.fit(X, y)
# Make a prediction about how much a 12 inch pizza should cost
test_X = [[12]]
prediction = model.predict(test_X)
print 'A 12\" pizza should cost: $%.2f' % prediction[0]
"""
Explanation: Based on the visualization about, we can see that there is a positive relationship between pizza diameter and price.
Training a Simple Linear Regression Model
We use scikit-learn to train our first model
End of explanation
"""
# instantiate a pyplot figure object
plt.figure()
# re-plot a scatter plot
plt.title('Figure 2. Pizza price plotted against diameter')
plt.xlabel('Diameter in inches')
plt.ylabel('Price in dollars')
plt.plot(X, y, 'k.')
plt.axis([0, 25, 0, 25])
plt.grid(True)
# create the line of fit
line_X = [[i] for i in np.arange(0, 25)]
line_y = model.predict(line_X)
plt.plot(line_X, line_y, '-b')
plt.show()
"""
Explanation: The sklearn.linear_model.LinearRegression class is an estimator. Given a new value of the explanatory variable, estimators predict a response value. All estimators have the fit() and predict() methods
fit() is used to learn the parameters of a model, while predict() predicts the value of a response variable given an explanatory variable value.
The mathematical specification of a simple regression model is the following:
$${y} = \alpha+ \beta{x}$$
Where:
- ${y}$: The predicted value of the response variable. In this case, the price of the pizza.
- ${x}$: The explanatory variable. In this case, the diameter of the pizza in inches.
- $\alpha$: The y-intercept term.
- $\beta$: The coefficient term (i.e. the slope of the line).
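For this single-variable case the parameters can also be computed directly from the closed-form least-squares solution, $\beta = \mathrm{cov}(x, y)/\mathrm{var}(x)$ and $\alpha = \bar{y} - \beta\bar{x}$ — a NumPy sketch on the pizza data:

```python
import numpy as np

x = np.array([6, 8, 10, 14, 18], dtype=float)  # diameters
y = np.array([7, 9, 13, 17.5, 18])             # prices

beta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # slope
alpha = y.mean() - beta * x.mean()                     # intercept
print('alpha = %.4f, beta = %.4f' % (alpha, beta))
print('A 12" pizza should cost: $%.2f' % (alpha + beta * 12))
```

This reproduces the $13.68 prediction from the fitted scikit-learn model above. Note that np.var defaults to ddof=0 while np.cov defaults to ddof=1, so ddof must be set explicitly for the ratio to be consistent.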
End of explanation
"""
# instantiate a pyplot figure object
plt.figure()
# re-plot a scatter plot
plt.title('Figure 3. Pizza price plotted against diameter')
plt.xlabel('Diameter in inches')
plt.ylabel('Price in dollars')
plt.plot(X, y, 'k.')
plt.axis([0, 25, 0, 25])
plt.grid(True)
# create the line of fit
line_X = [[i] for i in np.arange(0, 25)]
line_y = model.predict(line_X)
plt.plot(line_X, line_y, '-b')
# create residual lines
for x_i, y_i in zip(X, y):
plt.vlines(x_i[0], y_i[0], model.predict(x_i), colors='r')
plt.show()
"""
Explanation: Training a model to learn the values of the parameters for simple linear regression to create the best unbiased estimator is called ordinary least squares or linear least squares. To get a better idea of what "best unbiased estimator" is estimating in the first place, let's define what is needed to fit a model to training data.
Evaluating the Fitness of a Model with a Cost Function
How do we know whether the parameter values specified by a particular model are doing well or poorly? In other words, how can we assess which parameters produced the best-fitting regression line?
Cost Function / Loss Function
The cost function or loss function provides a function that measures the error of a model. In order to find the best-fitting regression line, the goal is to minimize the sum of the squared differences between the predicted prices and the corresponding observed prices of the pizzas in the training set; these differences are known as residuals or training errors.
We can visualize the residuals by drawing a vertical line from the observed price to the predicted price. Fortunately, matplotlib provides the vlines() function, which takes x, ymin, and ymax arguments to draw a vertical line on a plot. We re-create Figure 2, but with the residuals this time.
End of explanation
"""
import numpy as np
rss = np.sum((model.predict(X) - y) ** 2)
mse = np.mean((model.predict(X) - y) ** 2)
print 'Residual sum of squares: %.2f' % rss
print 'Mean squared error: %.2f' % mse
"""
Explanation: Now that we can clearly see the prediction error (in red) made by our model (in blue), it's important to quantify the overall error through a formal definition of residual sum of squares.
We do this by summing the squared residuals for all of our training examples (we square the residuals because we don't care whether the error is in the positive or negative direction).
$$RSS = \sum_{i=1}^n\big(y_{i} - f(x_{i})\big)^2 $$
Where:
- $y_{i}$ is the observed value
- $f(x_{i})$ is the predicted value.
A related measure of model error is mean squared error, which is simply the mean of the squared residuals:
$$MSE = \dfrac{1}{n}\sum_{i=1}^n\big(y_{i} - f(x_{i})\big)^2 $$
Let's go ahead and implement RSS and MSE using numpy:
End of explanation
"""
from __future__ import division
# calculate the mean
n = len(X)
xbar = sum([x[0] for x in X]) / n
# calculate the variance
variance = sum([(x[0] - xbar) ** 2 for x in X]) / (n - 1)
print 'Variance: %.2f' % variance
"""
Explanation: Now that we've defined the cost function, we can find the set of parameters that minimize the RSS or MSE.
Solving Ordinary Least Squares for Simple Linear Regression
Recall the equation for simple linear regression:
$$y = \alpha + \beta{x}$$
Goal:
Solve the values of $\beta$ and $\alpha$ such that they minimize the RSS cost function.
Solving for $\beta$
Step 1: Calculate the variance of $x$
Variance is a summary statistic that represents how spread apart a set of values is. Intuitively, the variance of set A = {0, 5, 10, 15, 20} is greater than the variance of set B = {5, 5, 5, 5, 5}. The formal definition of variance is:
$$var(x) = \dfrac{\sum_{i=1}^{n}\big(x_{i} - \bar{x}\big)^2}{n - 1}$$
Where:
- $\bar{x}$ is the mean of $x$
- $x_{i}$ is the value of $x$ for the $i^{th}$ training instance
- $n$ is the number of training instances
Let's implement variance in Python.
End of explanation
"""
|
flsantos/startup_acquisition_forecast | dataset_preparation.ipynb | mit | #All imports here
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from datetime import datetime
from dateutil import relativedelta
%matplotlib inline
#Let's start by importing our csv files into dataframes
df_companies = pd.read_csv('data/companies.csv')
df_acquisitions = pd.read_csv('data/acquisitions.csv')
df_investments = pd.read_csv('data/investments.csv')
df_rounds = pd.read_csv('data/rounds.csv')
"""
Explanation: 1. Dataset Preparation
Overview
In this phase, a startups dataset will be properly created and prepared for further feature analysis. Different features will be created here by combining information from the CSV files we have available: acquisitions.csv, investments.csv, rounds.csv and companies.csv.
Load all available data from CSV general files
End of explanation
"""
#Our final database will be stored in 'startups_USA'
startups_USA = df_companies[df_companies['country_code'] == 'USA']
startups_USA.head()
"""
Explanation: Start the main dataset with USA companies from companies.csv
We'll be using only USA-based companies in the analysis, since companies from other countries have a large amount of missing data
End of explanation
"""
from operator import methodcaller
def split_categories(categories):
#get a unique list of the categories
splitted_categories = list(categories.astype('str').unique())
#split each category by |
splitted_categories = map(methodcaller("split", "|"), splitted_categories)
#flatten the list of sub categories
splitted_categories = [item for sublist in splitted_categories for item in sublist]
return splitted_categories
def explore_categories(categories, top_n_categories):
cat = split_categories(categories)
print 'There are in total {} different categories'.format(len(cat))
prob = pd.Series(cat).value_counts()
print prob.head()
#select first <top_n_categories>
mask = prob > prob[top_n_categories]
head_prob = prob.loc[mask].sum()
tail_prob = prob.loc[~mask].sum()
total_sum = prob.sum()
prob = prob.loc[mask]
prob2 = pd.DataFrame({'top '+str(top_n_categories)+' categories': head_prob, 'others': tail_prob},index=[0])
fig, axs = plt.subplots(2,1, figsize=(15,6))
prob.plot(kind='bar', ax=axs[0])
prob2.plot(kind='bar', ax=axs[1])
for bar in axs[1].patches:
height = bar.get_height()
axs[1].text(bar.get_x() + bar.get_width()/2., 0.50*height, '%.2f' % (float(height)/float(total_sum)*100) + "%", ha='center', va='top')
fig.tight_layout()
plt.xticks(rotation=90)
plt.show()
explore_categories(startups_USA['category_list'], top_n_categories=50)
"""
Explanation: Extract company category features
Now that we have a first version of our dataset, we'll expand the category_list attribute into dummy variables for categories.
End of explanation
"""
def expand_top_categories_into_dummy_variables(df):
cat = df['category_list'].astype('str')
cat_count = cat.str.split('|').apply(lambda x: pd.Series(x).value_counts()).sum()
#Get a dummy dataset for categories
dummies = cat.str.get_dummies(sep='|')
#Count of categories splitted first 50)
top50categories = list(cat_count.sort_values(ascending=False).index[:50])
#Create a dataframe with the 50 top categories to be concatenated later to the complete dataframe
categories_df = dummies[top50categories]
categories_df = categories_df.add_prefix('Category_')
return pd.concat([df, categories_df], axis=1, ignore_index=False)
startups_USA = expand_top_categories_into_dummy_variables(startups_USA)
startups_USA.head()
"""
Explanation: Since there are too many categories, we'll be selecting the top 50 most frequent ones.
We see from the chart above that with these 50 (out of 60813) categories we cover 46% of the companies.
End of explanation
"""
startups_USA['funding_rounds'].hist(bins=range(1,10))
plt.title("Histogram of the number of funding rounds")
plt.ylabel('Number of companies')
plt.xlabel('Number of funding rounds')
#funding_total_usd
#funding_rounds
plt.subplot()
startups_USA[startups_USA['funding_total_usd'] != '-']. \
set_index('name')['funding_total_usd'] \
.astype(float) \
.sort_values(ascending=False)\
[:30].plot(kind='barh', figsize=(5,7))
plt.gca().invert_yaxis()
plt.title('Companies with highest total funding')
plt.ylabel('Companies')
plt.xlabel('Total amount of funding (USD)')
"""
Explanation: So now we have added 50 more categories to our dataset.
Analyzing total funding and funding round features
End of explanation
"""
# Investment types
df_rounds['funding_round_type'].value_counts()
import warnings
warnings.filterwarnings('ignore')
#Iterate over each kind of funding type, and add two new features for each into the dataframe
def add_dummy_for_funding_type(df, aggr_rounds, funding_type):
funding_df = aggr_rounds.iloc[aggr_rounds.index.get_level_values('funding_round_type') == funding_type].reset_index()
funding_df.columns = funding_df.columns.droplevel()
funding_df.columns = ['company_permalink', funding_type, funding_type+'_funding_total_usd', funding_type+'_funding_rounds']
funding_df = funding_df.drop(funding_type,1)
new_df = pd.merge(df, funding_df, on='company_permalink', how='left')
new_df = new_df.fillna(0)
return new_df
def expand_investment_rounds(df, df_rounds):
#Prepare an aggregated rounds dataframe grouped by company and funding type
rounds_agg = df_rounds.groupby(['company_permalink', 'funding_round_type'])['raised_amount_usd'].agg({'amount': [ pd.Series.sum, pd.Series.count]})
#Get available unique funding types
funding_types = list(rounds_agg.index.levels[1])
#Prepare the dataframe where all the dummy features for each funding type will be added (number of rounds and total sum for each type)
rounds_df = df[['permalink']]
rounds_df = rounds_df.rename(columns = {'permalink':'company_permalink'})
#For each funding type, add two more columns to rounds_df
for funding_type in funding_types:
rounds_df = add_dummy_for_funding_type(rounds_df, rounds_agg, funding_type)
#remove the company_permalink variable, since it's already available in the companies dataframe
rounds_df = rounds_df.drop('company_permalink', 1)
#set rounds_df to have the same index of the other dataframes
rounds_df.index = df.index
return pd.concat([df, rounds_df], axis=1, ignore_index=False)
startups_USA = expand_investment_rounds(startups_USA, df_rounds)
startups_USA.head()
"""
Explanation: Analyzing date variables
Extract investment rounds features
Here, we'll extract from the rounds.csv file the number of rounds and total amount invested for each different type of investment.
End of explanation
"""
startups_USA = startups_USA.set_index('permalink')
"""
Explanation: Change dataset index
We'll set the company id (permalink attribute) as the index for the dataset. This simple change will make it easier to attach new features to the dataset.
End of explanation
"""
import warnings
warnings.filterwarnings('ignore')
def extract_feature_number_of_acquisitions(df, df_acquisitions):
number_of_acquisitions = df_acquisitions.groupby(['acquirer_permalink'])['acquirer_permalink'].agg({'amount': [ pd.Series.count]}).reset_index()
number_of_acquisitions.columns = number_of_acquisitions.columns.droplevel()
number_of_acquisitions.columns = ['permalink', 'number_of_acquisitions']
number_of_acquisitions = number_of_acquisitions.set_index('permalink')
number_of_acquisitions = number_of_acquisitions.fillna(0)
new_df = df.join(number_of_acquisitions)
new_df['number_of_acquisitions'] = new_df['number_of_acquisitions'].fillna(0)
return new_df
startups_USA = extract_feature_number_of_acquisitions(startups_USA, df_acquisitions)
"""
Explanation: Extract acquisitions features
Here, we'll extract the number of acquisitions that were made by each company in our dataset.
End of explanation
"""
import warnings
warnings.filterwarnings('ignore')
def extract_feature_number_of_investments(df, df_investments):
number_of_investments = df_investments.groupby(['investor_permalink'])['investor_permalink'].agg({'amount': [ pd.Series.count]}).reset_index()
number_of_investments.columns = number_of_investments.columns.droplevel()
number_of_investments.columns = ['permalink', 'number_of_investments']
number_of_investments = number_of_investments.set_index('permalink')
number_of_unique_investments = df_investments.groupby(['investor_permalink'])['company_permalink'].agg({'amount': [ pd.Series.nunique]}).reset_index()
number_of_unique_investments.columns = number_of_unique_investments.columns.droplevel()
number_of_unique_investments.columns = ['permalink', 'number_of_unique_investments']
number_of_unique_investments = number_of_unique_investments.set_index('permalink')
new_df = df.join(number_of_investments)
new_df['number_of_investments'] = new_df['number_of_investments'].fillna(0)
new_df = new_df.join(number_of_unique_investments)
new_df['number_of_unique_investments'] = new_df['number_of_unique_investments'].fillna(0)
return new_df
startups_USA = extract_feature_number_of_investments(startups_USA, df_investments)
"""
Explanation: Extract investments feature
Here, we'll extract the number of investments made by each company in our dataset.
Note: This is not the number of times in which someone invested in the startup. It is the number of times each startup has made an investment in another company.
End of explanation
"""
import warnings
warnings.filterwarnings('ignore')
def extract_feature_avg_investors_per_round(df, investments):
number_of_investors_per_round = investments.groupby(['company_permalink', 'funding_round_permalink'])['investor_permalink'].agg({'investor_permalink': [ pd.Series.count]}).reset_index()
number_of_investors_per_round.columns = number_of_investors_per_round.columns.droplevel(0)
number_of_investors_per_round.columns = ['company_permalink', 'funding_round_permalink', 'count']
number_of_investors_per_round = number_of_investors_per_round.groupby(['company_permalink']).agg({'count': [ pd.Series.mean]}).reset_index()
number_of_investors_per_round.columns = number_of_investors_per_round.columns.droplevel(0)
number_of_investors_per_round.columns = ['company_permalink', 'number_of_investors_per_round']
number_of_investors_per_round = number_of_investors_per_round.set_index('company_permalink')
new_df = df.join(number_of_investors_per_round)
new_df['number_of_investors_per_round'] = new_df['number_of_investors_per_round'].fillna(-1)
return new_df
def extract_feature_avg_amount_invested_per_round(df, investments):
investmentsdf = investments.copy()
investmentsdf['raised_amount_usd'] = investmentsdf['raised_amount_usd'].astype(float)
avg_amount_invested_per_round = investmentsdf.groupby(['company_permalink', 'funding_round_permalink'])['raised_amount_usd'].agg({'raised_amount_usd': [ pd.Series.mean]}).reset_index()
avg_amount_invested_per_round.columns = avg_amount_invested_per_round.columns.droplevel(0)
avg_amount_invested_per_round.columns = ['company_permalink', 'funding_round_permalink', 'mean']
avg_amount_invested_per_round = avg_amount_invested_per_round.groupby(['company_permalink']).agg({'mean': [ pd.Series.mean]}).reset_index()
avg_amount_invested_per_round.columns = avg_amount_invested_per_round.columns.droplevel(0)
avg_amount_invested_per_round.columns = ['company_permalink', 'avg_amount_invested_per_round']
avg_amount_invested_per_round = avg_amount_invested_per_round.set_index('company_permalink')
new_df = df.join(avg_amount_invested_per_round)
new_df['avg_amount_invested_per_round'] = new_df['avg_amount_invested_per_round'].fillna(-1)
return new_df
startups_USA = extract_feature_avg_investors_per_round(startups_USA, df_investments)
startups_USA = extract_feature_avg_amount_invested_per_round(startups_USA, df_investments)
startups_USA.head()
"""
Explanation: Extract average number of investors and amount invested per round
Here we'll extract two more features:
The average number of investors that participated in each round of investment
The average amount invested across all the investment rounds a startup had
End of explanation
"""
#drop features
startups_USA = startups_USA.drop(['name','homepage_url', 'category_list', 'region', 'city', 'country_code'], 1)
#move status to the end of the dataframe
cols = list(startups_USA)
cols.append(cols.pop(cols.index('status')))
startups_USA = startups_USA.ix[:, cols]
"""
Explanation: Drop useless features
Here we'll drop name, homepage_url, category_list, region, city, and country_code. We'll also move status to the end of the dataframe
End of explanation
"""
def normalize_numeric_features(df, columns_to_scale = None):
min_max_scaler = preprocessing.MinMaxScaler()
startups_normalized = df.copy()
#Convert '-' to zeros in funding_total_usd
startups_normalized['funding_total_usd'] = startups_normalized['funding_total_usd'].replace('-', 0)
#scale numeric features
startups_normalized[columns_to_scale] = min_max_scaler.fit_transform(startups_normalized[columns_to_scale])
return startups_normalized
columns_to_scale = list(startups_USA.filter(regex=(".*(funding_rounds|funding_total_usd)|(number_of|avg_).*")).columns)
startups_USA = normalize_numeric_features(startups_USA, columns_to_scale)
"""
Explanation: Normalize numeric variables
Here we'll scale all the numeric variables onto the same range (0 to 1)
End of explanation
"""
def date_to_age_in_months(date):
if date != date or date == 0: #is NaN
return 0
date1 = datetime.strptime(date, '%Y-%m-%d')
date2 = datetime.strptime('2017-01-01', '%Y-%m-%d') #get age until 01/01/2017
delta = relativedelta.relativedelta(date2, date1)
return delta.years * 12 + delta.months
def normalize_date_variables(df):
date_vars = ['founded_at', 'first_funding_at', 'last_funding_at']
for var in date_vars:
df[var] = df[var].map(date_to_age_in_months)
df = normalize_numeric_features(df, date_vars)
return df
startups_USA = normalize_date_variables(startups_USA)
"""
Explanation: Normalize date variables
Here we'll convert dates to ages in months up to the first day of 2017
End of explanation
"""
def explore_states(states, top_n_states):
print 'There are in total {} different states'.format(len(states.unique()))
prob = pd.Series(states).value_counts()
print prob.head()
#select first <top_n_categories>
mask = prob > prob[top_n_states]
head_prob = prob.loc[mask].sum()
tail_prob = prob.loc[~mask].sum()
total_sum = prob.sum()
prob = prob.loc[mask]
prob2 = pd.DataFrame({'top '+str(top_n_states)+' states': head_prob, 'others': tail_prob},index=[0])
fig, axs = plt.subplots(2,1, figsize=(15,6))
prob.plot(kind='bar', ax=axs[0])
prob2.plot(kind='bar', ax=axs[1])
for bar in axs[1].patches:
height = bar.get_height()
axs[1].text(bar.get_x() + bar.get_width()/2., 0.50*height, '%.2f' % (float(height)/float(total_sum)*100) + "%", ha='center', va='top')
fig.tight_layout()
plt.xticks(rotation=90)
plt.show()
explore_states(startups_USA['state_code'], top_n_states=15)
"""
Explanation: Extract state_code features
End of explanation
"""
def expand_top_states_into_dummy_variables(df):
states = df['state_code'].astype('str')
#Get a dummy dataset for categories
dummies = pd.get_dummies(states)
#select top most frequent states
top15states = list(states.value_counts().sort_values(ascending=False).index[:15])
#Create a dataframe with the 15 top states to be concatenated later to the complete dataframe
states_df = dummies[top15states]
states_df = states_df.add_prefix('State_')
new_df = pd.concat([df, states_df], axis=1, ignore_index=False)
new_df = new_df.drop(['state_code'], axis=1)
return new_df
startups_USA = expand_top_states_into_dummy_variables(startups_USA)
"""
Explanation: As we did for the categories variable, in order to decrease the number of features in our dataset, let's just select the top 15 most frequent states (which already cover 82% of our companies)
End of explanation
"""
cols = list(startups_USA)
cols.append(cols.pop(cols.index('status')))
startups_USA = startups_USA.ix[:, cols]
startups_USA.to_csv('data/startups_pre_processed.csv')
startups_USA.head()
"""
Explanation: Move status to the end of dataframe and save to file
End of explanation
"""
|
sandeep-n/incubator-systemml | samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb | apache-2.0 | !pip show systemml
"""
Explanation: Linear Regression Algorithms using Apache SystemML
This notebook shows:
- Install SystemML Python package and jar file
- pip
- SystemML 'Hello World'
- Example 1: Matrix Multiplication
- SystemML script to generate a random matrix, perform matrix multiplication, and compute the sum of the output
- Examine execution plans, and increase data size to observe changed execution plans
- Load diabetes dataset from scikit-learn
- Example 2: Implement three different algorithms to train linear regression model
- Algorithm 1: Linear Regression - Direct Solve (no regularization)
- Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization)
- Algorithm 3: Linear Regression - Conjugate Gradient (no regularization)
- Example 3: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API
- Example 4: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API
- Uninstall/Clean up SystemML Python package and jar file
This notebook is supported with SystemML 0.14.0 and above.
End of explanation
"""
from systemml import MLContext, dml, dmlFromResource
ml = MLContext(sc)
print ("Spark Version:" + sc.version)
print ("SystemML Version:" + ml.version())
print ("SystemML Built-Time:"+ ml.buildTime())
ml.execute(dml("""s = 'Hello World!'""").output("s")).get("s")
"""
Explanation: Import SystemML API
End of explanation
"""
import sys, os, glob, subprocess
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
plt.switch_backend('agg')
def printLastLogLines(n):
fname = max(glob.iglob(os.sep.join([os.environ["HOME"],'/logs/notebook/kernel-pyspark-*.log'])), key=os.path.getctime)
print(subprocess.check_output(['tail', '-' + str(n), fname]))
"""
Explanation: Import numpy, sklearn, and define some helper functions
End of explanation
"""
script = """
X = rand(rows=$nr, cols=1000, sparsity=0.5)
A = t(X) %*% X
s = sum(A)
"""
prog = dml(script).input('$nr', 1e5).output('s')
s = ml.execute(prog).get('s')
print (s)
"""
Explanation: Example 1: Matrix Multiplication
SystemML script to generate a random matrix, perform matrix multiplication, and compute the sum of the output
End of explanation
"""
ml = MLContext(sc)
ml = ml.setStatistics(True)
# re-execute ML program
# printLastLogLines(22)
prog = dml(script).input('$nr', 1e6).output('s')
out = ml.execute(prog).get('s')
print (out)
ml = MLContext(sc)
ml = ml.setStatistics(False)
"""
Explanation: Examine execution plans, and increase data size to observe changed execution plans
End of explanation
"""
%matplotlib inline
diabetes = datasets.load_diabetes()
diabetes_X = diabetes.data[:, np.newaxis, 2]
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
diabetes_y_train = diabetes.target[:-20].reshape(-1,1)
diabetes_y_test = diabetes.target[-20:].reshape(-1,1)
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
diabetes.data.shape
"""
Explanation: Load diabetes dataset from scikit-learn
End of explanation
"""
script = """
# add constant feature to X to model intercept
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
A = t(X) %*% X
b = t(X) %*% y
w = solve(A, b)
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', linestyle ='dotted')
"""
Explanation: Example 2: Implement three different algorithms to train linear regression model
Algorithm 1: Linear Regression - Direct Solve (no regularization)
Least squares formulation
$$ w^* = \operatorname{argmin}_w \|Xw-y\|^2 = \operatorname{argmin}_w (y - Xw)'(y - Xw) = \operatorname{argmin}_w \tfrac{1}{2}w'(X'X)w - w'(X'y) $$
(the last step drops the constant term $\tfrac{1}{2}y'y$ and rescales by $\tfrac{1}{2}$, neither of which changes the minimizer).
Setting the gradient
$$ dw = (X'X)w - (X'y) $$
to 0 gives $w = (X'X)^{-1}(X'y) = \texttt{solve}(X'X, X'y)$.
End of explanation
"""
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
max_iter = 100
w = matrix(0, rows=ncol(X), cols=1)
for(i in 1:max_iter){
XtX = t(X) %*% X
dw = XtX %*%w - t(X) %*% y
alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw)
w = w + dw*alpha
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w').output('bias')
w, bias = ml.execute(prog).get('w', 'bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
"""
Explanation: Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization)
Algorithm
Step 1: Start with an initial point
while(not converged) {
Step 2: Compute gradient dw.
Step 3: Compute stepsize alpha.
Step 4: Update: w_new = w_old + alpha*dw
}
Gradient formula
dw = r = (X'X)w - (X'y)
Step size formula
Find the number alpha that minimizes f(w + alpha*r)
alpha = -(r'r)/(r'X'Xr)
End of explanation
"""
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
m = ncol(X); i = 1;
max_iter = 20;
w = matrix (0, rows = m, cols = 1); # initialize weights to 0
dw = - t(X) %*% y; p = - dw; # dw = (X'X)w - (X'y)
norm_r2 = sum (dw ^ 2);
for(i in 1:max_iter) {
q = t(X) %*% (X %*% p)
alpha = norm_r2 / sum (p * q); # Minimizes f(w - alpha*r)
w = w + alpha * p; # update weights
dw = dw + alpha * q;
old_norm_r2 = norm_r2; norm_r2 = sum (dw ^ 2);
p = -dw + (norm_r2 / old_norm_r2) * p; # next direction - conjugacy to previous direction
i = i + 1;
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w').output('bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
"""
Explanation: Algorithm 3: Linear Regression - Conjugate Gradient (no regularization)
Problem with gradient descent: Takes very similar directions many times
Solution: Enforce conjugacy
Step 1: Start with an initial point
while(not converged) {
Step 2: Compute gradient dw.
Step 3: Compute stepsize alpha.
Step 4: Compute next direction p by enforcing conjugacy with previous direction.
Step 5: Update: w_new = w_old + alpha*p
}
End of explanation
"""
import os
from subprocess import call
dirName = os.path.dirname(os.path.realpath("~")) + "/scripts"
call(["mkdir", "-p", dirName])
call(["wget", "-N", "-q", "-P", dirName, "https://raw.githubusercontent.com/apache/systemml/master/scripts/algorithms/LinearRegDS.dml"])
scriptName = dirName + "/LinearRegDS.dml"
dml_script = dmlFromResource(scriptName)
prog = dml_script.input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out')
w = ml.execute(prog).get('beta_out')
w = w.toNumPy()
bias=w[1]
print (bias)
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w[0]*diabetes_X_test)+bias, color='red', linestyle ='dashed')
"""
Explanation: Example 3: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API
End of explanation
"""
from pyspark.sql import SQLContext
from systemml.mllearn import LinearRegression
sqlCtx = SQLContext(sc)
regr = LinearRegression(sqlCtx)
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
predictions = regr.predict(diabetes_X_test)
# Use the trained model to perform prediction
%matplotlib inline
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, predictions, color='black')
"""
Explanation: Example 4: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API
The mllearn API allows a Python programmer to invoke SystemML's algorithms using a scikit-learn-like API as well as Spark's MLPipeline API.
End of explanation
"""
|
karlstroetmann/Artificial-Intelligence | Python/4 Automatic Theorem Proving/Knuth-Bendix-Algorithm-KBO.ipynb | gpl-2.0 | %run Parser.ipynb
!cat Examples/quasigroups.eqn || type Examples\quasigroups.eqn
def test():
t = parse_term('x * y * z')
print(t)
print(to_str(t))
eq = parse_equation('i(x) * x = 1')
print(eq)
print(to_str(parse_file('Examples/quasigroups.eqn')))
test()
"""
Explanation: The Knuth-Bendix Completion Algorithm
This notebook presents the Knuth-Bendix completion algorithm for transforming a set of equations into a confluent term rewriting system. This notebook is divided into eight sections.
- Parsing
- Matching
- Term Rewriting
- Unification
- Knuth-Bendix Ordering
- Critical Pairs
- The Completion Algorithm
- Examples
Parsing
To begin, we need a parser that is capable of parsing terms and equations. This parser is implemented in the notebook Parser.ipynb and can parse equations of terms that use the binary operators +, -, *, /, \, %, and ^. The precedences of these operators are as follows:
1. + and - have the precedence $1$, which is the lowest precedence.
Furthermore, they are left-associative.
2. *, /, \, % have the precedence $2$ and are also left associative.
3. ^ has the precedence $3$ and is right associative.
Furthermore, function symbols and variables are supported. Every string consisting of letters, digits, and underscores that starts with a letter is considered a function symbol if it is followed by an opening parenthesis. Otherwise, it is taken to be a variable. Terms are defined inductively:
- Every variable is a term.
- If $f$ is a function symbol and $t_1$, $\cdots$, $t_n$ are terms, then $f(t_1,\cdots,t_n)$ is a term.
- If $s$ and $t$ are terms and $o$ is an operator, then $s\; o\; t$ is a term.
The notebook Parser.ipynb also provides the function to_str for turning terms or equations into strings. All together, the notebook provides the following functions:
- parse_file(file_name) parses a file containing equations between terms.
It returns a list of the equations that have been parsed.
- parse_equation(s) converts the string s into an equation.
- parse_term(s) converts the string s into a term.
- to_str(o) converts an object o into a string. The object o either is
* a term,
* an equation,
* a list of equations,
* a set of equations, or
* a dictionary representing a substitution.
Terms and equations are represented as nested tuples. These are defined recursively:
- a string is a nested tuple,
- a tuple t is a nested tuple iff t[0] is a string and for all
$i \in {1,\cdots,\texttt{len}(t)-1}$ we have that t[i] is a nested tuple.
The parser is implemented using the parser generator Ply.
End of explanation
"""
def is_var(t):
return t[0] == '$var'
"""
Explanation: Back to top
Matching
The substitution $\sigma$ maps variables to terms. It is represented as a dictionary. If $t$ is a term and $\sigma$ is the substitution
$$ \sigma = { x_1: s_1, \cdots, x_n:s_n }, $$
then applying the substitution $\sigma$ to the term $t$ replaces the variables $x_i$ with the terms $s_i$. The application of $\sigma$ to $t$ is written as $t\sigma$ and is defined by induction on $t$:
- $x_i\sigma := s_i$,
- $v\sigma := v$ if $v$ is a variable and $v \not\in {x_1,\cdots,x_n}$,
- $f(t_1,\cdots,t_n)\sigma := f(t_1\sigma, \cdots, t_n\sigma)$.
A term $p$ matches a term $t$ iff there exists a substitution $\sigma$ such that $p\sigma = t$.
The function is_var(t) checks whether the term t is interpreted as a variable. Variables are represented as nested tuples of the form ('$var', name), where name is the name of the variable.
End of explanation
"""
def make_var(x):
return ('$var', x)
"""
Explanation: Given a string x, the function make_var(x) creates a variable with name x.
End of explanation
"""
def match_pattern(pattern, term, σ):
match pattern:
case '$var', var:
if var in σ:
return σ[var] == term
else:
σ[var] = term # extend σ
return True
case _:
if pattern[0] == term[0] and len(pattern) == len(term):
return all(match_pattern(pattern[i], term[i], σ) for i in range(1, len(pattern)))
else:
return False
def test():
p = parse_term('i(x) * z')
t = parse_term('i(i(y)) * i(y)')
σ = {}
match_pattern(p, t, σ)
print(to_str(σ))
test()
"""
Explanation: Given a term p, a term t, and a substitution σ, the function match_pattern(p, t, σ) tries to extend the
substitution σ so that the equation
$$ p \sigma = t $$
is satisfied. If this is possible, the function returns True and updates the substitution σ so that
$p \sigma = t$ holds. Otherwise, the function returns False.
End of explanation
"""
def find_variables(t):
if isinstance(t, set) or isinstance(t, list):
return { var for term in t
for var in find_variables(term)
}
if is_var(t):
_, var = t
return { var }
_, *L = t
return find_variables(L)
def test():
eq = parse_equation('(x * y) * z = x * (y * z)')
print(find_variables(eq))
test()
"""
Explanation: Given a term t, the function find_variables(t) computes the set of all variables occurring in t. If, instead, $t$ is a list of terms or a set of terms, then find_variables(t) computes the set of those variables that occur in any of the terms of $t$.
End of explanation
"""
def apply(t, σ):
"Apply the substitution σ to the term t."
if is_var(t):
_, var = t
if var in σ:
return σ[var]
else:
return t
else:
f, *Ts = t
return (f,) + tuple(apply(s, σ) for s in Ts)
def test():
p = parse_term('i(x) * x')
t = parse_term('i(i(y)) * i(y)')
σ = {}
match_pattern(p, t, σ)
print(f'apply({to_str(p)}, {to_str(σ)}) = {to_str(apply(p, σ))}')
test()
"""
Explanation: Given a term t and a substitution σ that is represented as a dictionary of the form
$$ \sigma = \{ x_1: s_1, \cdots, x_n: s_n \}, $$
the function apply(t, σ) computes the term that results from replacing the variables $x_i$ with the terms $s_i$ in t for all $i=1,\cdots,n$. This term is written as $t\sigma$ and if $\sigma = \{ x_1: s_1, \cdots, x_n: s_n \}$, then $t\sigma$ is defined by induction on t as follows:
- $x_i\sigma := s_i$,
- $v\sigma := v$ if $v$ is a variable and $v \not\in \{x_1,\cdots,x_n\}$,
- $f(t_1,\cdots,t_m)\sigma := f(t_1\sigma, \cdots, t_m\sigma)$.
End of explanation
"""
def apply_set(Ts, σ):
return { apply(t, σ) for t in Ts }
"""
Explanation: Given a set of terms or equations Ts and a substitution σ, the function apply_set(Ts, σ) applies the substitution σ to all elements in Ts.
End of explanation
"""
def compose(σ, τ):
Result = { x: apply(s, τ) for (x, s) in σ.items() }
Result.update(τ)
return Result
def test():
t1 = parse_term('i(y)')
t2 = parse_term('a * b')
t3 = parse_term('i(b)')
σ = { 'x': t1 }
τ = { 'y': t2, 'z': t3 }
print(f'compose({to_str(σ)}, {to_str(τ)}) = {to_str(compose(σ, τ))}')
test()
"""
Explanation: If $\sigma = \big\{ x_1 \mapsto s_1, \cdots, x_m \mapsto s_m \big\}$ and
$\tau = \big\{ y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big\}$
are two substitutions that are non-overlapping, i.e. such that $\{x_1,\cdots, x_m\} \cap \{y_1,\cdots,y_n\} = \{\}$ holds,
then we define the composition $\sigma\tau$ of $\sigma$ and $\tau$ as follows:
$$\sigma\tau := \big\{ x_1 \mapsto s_1\tau, \cdots, x_m \mapsto s_m\tau,\; y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big\}$$
This definition implies that the following associative law is valid:
$$ s(\sigma\tau) = (s\sigma)\tau $$
The function $\texttt{compose}(\sigma, \tau)$ takes two non-overlapping substitutions and computes their composition $\sigma\tau$.
End of explanation
"""
from string import ascii_lowercase
ascii_lowercase
"""
Explanation: Back to top
Term Rewriting
End of explanation
"""
def rename_variables(s, Vars):
assert len(Vars) <= 13, f'Error: too many variables in {Vars}.'
NewVars = set(ascii_lowercase) - Vars
NewVars = sorted(list(NewVars))
σ = { x: make_var(NewVars[i]) for (i, x) in enumerate(Vars) }
return apply(s, σ)
def test():
t = parse_equation('x * y * z = x * (y * z)')
V = find_variables(t)
print(f'rename_variables({to_str(t)}, {V}) = {to_str(rename_variables(t, V))}')
test()
"""
Explanation: Given a term s and a set of variables V, the function rename_variables(s, V) renames the variables in s so that they differ from the variables in the set V. This only works if twice the number of variables in V does not exceed the number of letters in the Latin alphabet, i.e. 26. Therefore, the set V must contain at most 13 variables. For our examples, this is not a restriction.
End of explanation
"""
def simplify_step(t, Equations):
if is_var(t):
return None # variables can't be simplified
for eq in Equations:
_, lhs, rhs = eq
σ = {}
if match_pattern(lhs, t, σ):
return apply(rhs, σ)
f, *args = t
simpleArgs = []
change = False
for arg in args:
simple = simplify_step(arg, Equations)
if simple != None:
simpleArgs += [simple]
change = True
else:
simpleArgs += [arg]
if change:
return (f,) + tuple(simpleArgs)
return None
def test():
E = { parse_equation('(x * y) * z = x * (y * z)') }
t = parse_term('(a * b) * i(b)')
print(f'simplify_step({to_str(t)}, {to_str(E)}) = {to_str(simplify_step(t, E))}')
test()
"""
Explanation: The function simplify_step(t, E) takes two arguments:
- t is a term,
- E is a set of equations of the form ('=', l, r).
The function tries to find an equation l = r in E and a subterm s of the term t such that the left-hand side l of the equation matches the subterm s under some substitution $\sigma$, i.e. we have $s = l\sigma$. Then the term t is simplified by replacing the subterm s in t by $r\sigma$. More formally, if u is the position of s in t, i.e. t/u = s, then t is simplified into the term
$$ t = t[u \mapsto l\sigma] \rightarrow_{\{l=r\}} t[u \mapsto r\sigma]. $$
If an appropriate subterm s is found, the simplified term is returned. Otherwise, the function returns None.
If multiple subterms of t can be simplified, then the function simplify_step(t, E) simplifies all of them.
End of explanation
"""
def normal_form(t, E):
Vars = find_variables(t) | find_variables(E)
NewE = []
for eq in E:
NewE += [ rename_variables(eq, Vars) ]
while True:
s = simplify_step(t, NewE)
if s == None:
return t
t = s
!cat Examples/group-theory-1.eqn || type Examples\group-theory-1.eqn
def test():
E = parse_file('Examples/group-theory-1.eqn')
t = parse_term('1 * (b * i(a)) * a')
print(f'E = {to_str(E)}')
print(f'normal_form({to_str(t)}, E) = {to_str(normal_form(t, E))}')
test()
"""
Explanation: The function normal_form(t, E) takes a term t and a list (or set) of equations E and tries to simplify the term t as much as possible using the equations from E.
In the implementation, we have to be careful to rename the variables occurring in E so that they are different from the variables occurring in t. Furthermore, we have to take care that we don't identify different variables in E by accident. Therefore, we rename the variables in E so that they are both different from the variables in t and from the old variables occurring in E.
End of explanation
"""
def occurs(x, t):
if is_var(t):
_, var = t
return x == var
return any(occurs(x, arg) for arg in t[1:])
"""
Explanation: Back to top
Unification
In this section, we implement the unification algorithm of Martelli and Montanari.
Given a variable name x and a term t, the function occurs(x, t) checks whether x occurs in t.
End of explanation
"""
def unify(s, t):
return solve({('≐', s, t)}, {})
"""
Explanation: The algorithm implemented below takes a pair (E, σ) as its input. Here E is a set of syntactical equations that need to be solved and σ is a substitution that is initially empty. The pair (E, σ) is then transformed using the rules of Martelli and Montanari. The transformation is successful if the pair (E, σ) can be transformed into a pair of the form ({}, μ). Then μ is the solution to the system of equations E and hence μ is a most general unifier of E.
The rules that can be used to solve a system of syntactical equations are as follows:
- If $y\in\mathcal{V}$ is a variable that does not occur in the term $t$,
  then we perform the following reduction:
  $$ \Big\langle E \cup \big\{ y \doteq t \big\}, \sigma \Big\rangle \quad\leadsto \quad
  \Big\langle E[y \mapsto t], \sigma\big[ y \mapsto t \big] \Big\rangle
  $$
- If the variable $y$ occurs in the term $t$ and $y$ is different from $t$, then the system of
  syntactical equations
  $E \cup \big\{ y \doteq t \big\}$ is not solvable:
  $$ \Big\langle E \cup \big\{ y \doteq t \big\}, \sigma \Big\rangle\;\leadsto\; \texttt{None} \quad
  \mbox{if $y \in \textrm{Var}(t)$ and $y \not=t$.}$$
- If $y\in\mathcal{V}$ is a variable and $t$ is not a variable, then we use the following rule:
  $$ \Big\langle E \cup \big\{ t \doteq y \big\}, \sigma \Big\rangle \quad\leadsto \quad
  \Big\langle E \cup \big\{ y \doteq t \big\}, \sigma \Big\rangle.
  $$
- Trivial syntactical equations of variables can be dropped:
  $$ \Big\langle E \cup \big\{ x \doteq x \big\}, \sigma \Big\rangle \quad\leadsto \quad
  \Big\langle E, \sigma \Big\rangle.
  $$
- If $f$ is an $n$-ary function symbol, then we have:
  $$ \Big\langle E \cup \big\{ f(s_1,\cdots,s_n) \doteq f(t_1,\cdots,t_n) \big\}, \sigma \Big\rangle
  \;\leadsto\;
  \Big\langle E \cup \big\{ s_1 \doteq t_1, \cdots, s_n \doteq t_n \big\}, \sigma \Big\rangle.
  $$
- The system of syntactical equations $E \cup \big\{ f(s_1,\cdots,s_m) \doteq g(t_1,\cdots,t_n) \big\}$
  has no solution if the function symbols $f$ and $g$ are different:
  $$ \Big\langle E \cup \big\{ f(s_1,\cdots,s_m) \doteq g(t_1,\cdots,t_n) \big\},
  \sigma \Big\rangle \;\leadsto\; \texttt{None} \qquad \mbox{if $f \not= g$}.
  $$
Given two terms $s$ and $t$, the function $\texttt{unify}(s, t)$ computes the <em style="color:blue;">most general unifier</em> of $s$ and $t$.
End of explanation
"""
def solve(E, σ):
while E != set():
_, s, t = E.pop()
if s == t: # remove trivial equations
continue
if is_var(s):
_, x = s
if occurs(x, t):
return None
else: # set x to t
E = apply_set(E, { x: t })
σ = compose(σ, { x: t })
elif is_var(t):
E.add(('≐', t, s))
else:
f , g = s[0] , t[0]
sArgs, tArgs = s[1:] , t[1:]
m , n = len(sArgs), len(tArgs)
if f != g or m != n:
return None
else:
E |= { ('≐', sArgs[i], tArgs[i]) for i in range(m) }
return σ
def test():
s = parse_term('x * i(x) * (y * z)')
t = parse_term('a * i(1) * b')
print(f'unify({to_str(s)}, {to_str(t)}) = {to_str(unify(s, t))}')
test()
"""
Explanation: Given a set of <em style="color:blue;">syntactical equations</em> $E$ and a substitution $\sigma$, the function $\texttt{solve}(E, \sigma)$ applies the rules of Martelli and Montanari to solve $E$.
End of explanation
"""
def count(t, x):
match t:
case '$var', y:
return 1 if x == y else 0
case _, *Ts:
return sum(count(arg, x) for arg in Ts)
def test():
t = parse_term('x * (i(x) * y)')
print(f'count({to_str(t)}, "x") = {count(t, "x")}')
test()
"""
Explanation: Back to top
The Knuth-Bendix Ordering
In order to turn an equation $s = t$ into a rewrite rule, we have to check whether the term $s$ is more complex than the term $t$, so that $s$ should be simplified to $t$, or whether $t$ is more complex than $s$ and we should rewrite $t$ into $s$. To this end, we implement the Knuth-Bendix ordering, which is a method to compare terms.
Given a term t and a variable name x, the function count(t, x) computes the number of times that x occurs in t.
End of explanation
"""
WEIGHT = { '1': 1, '*': 1, '/': 1, '\\': 1, 'i': 0 }
ORDERING = { '1': 0, '*': 1, '/': 2, '\\': 3, 'i': 5 }
max_fct = lambda: 'i'
"""
Explanation: In order to define the Knuth-Bendix ordering on terms, three prerequisites need to be satisfied:
1. We need to assign a weight $w(f)$ to every function symbol $f$. These weights are
natural numbers. There must be at most one function symbol $g$ such that $w(g) = 0$.
Furthermore, if $w(g) = 0$, then $g$ has to be unary.
We define the weights via the dictionary WEIGHT, i.e. we have $w(f) = \texttt{WEIGHT}[f]$.
2. We need to define a strict order $<$ on the set of function symbols.
This ordering is implemented via the dictionary ORDERING. We define
$$ f < g \;\stackrel{_\textrm{def}}{\Longleftrightarrow}\; \texttt{ORDERING}[f] < \texttt{ORDERING}[g]. $$
3. The order $<$ on the function symbols has to be admissible with respect to the weight function $w$, i.e. the following
condition needs to be satisfied:
$$ w(f) = 0 \rightarrow \forall g: \bigl(g \not=f \rightarrow g < f\bigr). $$
To put this in words: If the function symbol $f$ has a weight of $0$, then
all other function symbols $g$ have to be smaller than $f$ w.r.t. the strict order $<$.
Note that this implies that there can be at most one function symbol $f$ such that $w(f) = 0$.
This function symbol $f$ is then the maximum w.r.t. the order $<$.
Below, for efficiency reasons, the function max_fct returns the function symbol $f$ that is maximal w.r.t. the strict order $<$.
End of explanation
"""
def weight(t):
match t:
case '$var', _:
return 1
case f, *Ts:
return WEIGHT[f] + sum(weight(arg) for arg in Ts)
def test():
t = parse_term('x * (i(x) * 1)')
print(f'weight({to_str(t)}) = {weight(t)}')
test()
"""
Explanation: Given a term t the function weight(t) computes the weight $w(t)$, where $w(t)$ is defined by induction on $t$:
- $w(x) := 1$ for all variables $x$,
- $w\bigl(f(t_1,\cdots,t_n)\bigr) := \texttt{WEIGHT}[f] + \sum\limits_{i=1}^n w(t_i)$.
End of explanation
"""
def is_tower(s, t):
if len(t) != 2: # f is not unary
return False
f, t1 = t
if f != max_fct():
return False
if t1 == s:
return True
return is_tower(s, t1)
def test():
t = parse_term('i(a)')
s = parse_term('i(i(a))')
print(f'is_tower({to_str(s)}, {to_str(t)}) = {is_tower(s, t)}')
test()
"""
Explanation: Given a term s and a term t, the function is_tower(s, t) returns True iff the following is true:
$$ \exists n\in\mathbb{N}:\bigl( n > 0 \wedge t = f^{n}(s) \wedge f = \texttt{max\_fct}()\bigr). $$
Here the expression $f^n(s)$ is the $n$-fold application of $f$ to $s$, e.g. we have $f^1(s) = f(s)$, $f^2(s) = f(f(s))$, and in general $f^{n+1}(s) = f\bigl(f^{n}(s)\bigr)$.
End of explanation
"""
def is_simpler(s, t):
if is_var(t):
return False
if is_var(s):
_, x = s
return occurs(x, t)
Vs = find_variables(s)
for x in Vs:
if count(t, x) < count(s, x):
return False
ws = weight(s)
wt = weight(t)
if ws < wt:
return True
if ws > wt:
return False
# ws == wt
if is_tower(s, t):
return True
f, *Ss = s
g, *Ts = t
if ORDERING[f] < ORDERING[g]:
return True
if ORDERING[f] > ORDERING[g]:
return False
return is_simpler_list(Ss, Ts)
"""
Explanation: The Knuth-Bendix order $s \prec_{\textrm{kbo}} t$ is defined for terms $s$ and $t$. We have $s \prec_{\textrm{kbo}} t$ iff one of the following two conditions hold:
1. $w(s) < w(t)$ and $\texttt{count}(s, x) \leq \texttt{count}(t, x)$ for all variables $x$ occurring in $s$ .
2. $w(s) = w(t)$, $\texttt{count}(s, x) \leq \texttt{count}(t, x)$ for all variables $x$ occurring in $s$, and
one of the following subconditions holds:
* $t = f^n(s)$ where $n \geq 1$ and $f$ is the maximum w.r.t. the order $<$ on function symbols,
i.e. we have $f = \texttt{max\_fct}()$.
* $s = f(s_1,\cdots,s_m)$, $t=g(t_1,\cdots,t_n)$, and $f<g$.
* $s = f(s_1,\cdots,s_m)$, $t=f(t_1,\cdots,t_m)$, and
$[s_1,\cdots,s_m] \prec_{\textrm{lex}} [t_1,\cdots,t_m]$.
Here, $\prec_{\textrm{lex}}$ denotes the *lexicographic extension* of the ordering $\prec_{\textrm{kbo}}$ to
lists of terms. It is defined as follows:
$$ [x] + R_1 \prec_{\textrm{lex}} [y] + R_2 \;\stackrel{_\textrm{def}}{\Longleftrightarrow}\;
x \prec_{\textrm{kbo}} y \,\vee\, \bigl(x = y \wedge R_1 \prec_{\textrm{lex}} R_2\bigr)
$$
Given two terms s and t the function is_simpler(s, t) returns True if $s \prec_{\textrm{kbo}} t$.
End of explanation
"""
def is_simpler_list(S, T):
if S == [] == T:
return False
if is_simpler(S[0], T[0]):
return True
if S[0] == T[0]:
return is_simpler_list(S[1:], T[1:])
return False
def test():
#l = parse_term('(x * y) * z')
#r = parse_term('x * (y * z)')
l = parse_term('i(a)')
r = parse_term('i(i(a))')
print(f'is_simpler({to_str(r)}, {to_str(l)}) = {is_simpler(r, l)}')
print(f'is_simpler({to_str(l)}, {to_str(r)}) = {is_simpler(l, r)}')
test()
"""
Explanation: Given two lists S and T of terms, the function is_simpler_list(S, T) checks whether S is lexicographically simpler than T if the elements of S and T are compared with the Knuth-Bendix ordering $\prec_{\textrm{kbo}}$. It is assumed that S and T have the same length.
End of explanation
"""
class OrderException(Exception):
pass
"""
Explanation: We define the class OrderException to be able to deal with equations that can't be ordered into a rewrite rule.
End of explanation
"""
def order_equation(eq):
_, s, t = eq
if is_simpler(t, s):
return ('=', s, t)
elif is_simpler(s, t):
return ('=', t, s)
else:
Msg = f'Knuth-Bendix algorithm failed: Could not order {to_str(s)} = {to_str(t)}'
raise OrderException(Msg)
def test():
equation = 'i(i(a)) = i(i(i(i(a))))'
eq = parse_equation(equation)
print(f'order_equation({to_str(eq)}) = {to_str(order_equation(eq))}')
test()
"""
Explanation: Given an equation eq, the function order_equation orders eq with respect to the Knuth-Bendix ordering induced by the global dictionary ORDERING of the function symbols, i.e. in the ordered equation the right-hand side is simpler than the left-hand side. If the two sides are incomparable, the function raises an OrderException.
End of explanation
"""
def non_triv_positions(t):
if is_var(t):
return set()
_, *args = t
Result = { () }
for i, arg in enumerate(args):
Result |= { (i,) + a for a in non_triv_positions(arg) }
return Result
def test():
t = parse_term('x * i(x) * 1')
print(f'non_triv_positions({to_str(t)}) = {non_triv_positions(t)}')
test()
"""
Explanation: Back to top
Critical Pairs
The central notion of the Knuth-Bendix algorithm is the notion of a critical pair.
Given two equations lhs1 = rhs1 and lhs2 = rhs2, a pair of terms (s, t) is a critical pair of these equations if we have the following:
- u is a non-trivial position in lhs1, i.e. lhs1/u is not a variable,
- The subterm lhs1/u is unifiable with lhs2, i.e.
$$\mu = \texttt{mgu}(\texttt{lhs}_1 / \texttt{u}, \texttt{lhs}_2) \not= \texttt{None},$$
- $s = \texttt{lhs}_1\mu[\texttt{u} \mapsto \texttt{rhs}_2\mu]$ and $t = \texttt{rhs}_1\mu$.
The idea is then that the term $\texttt{lhs}_1\mu$ can be rewritten in two different ways:
- $\texttt{lhs}_1\mu \rightarrow \texttt{rhs}_1\mu = t$,
- $\texttt{lhs}_1\mu \rightarrow \texttt{lhs}_1\mu[\texttt{u} \mapsto \texttt{rhs}_2\mu] = s$.
The function critical_pairs implemented in this section computes the critical pairs between two rewrite rules.
Given a term t, the function non_triv_positions computes the set $\mathcal{P}os(t)$ of all positions in t that do not point to variables. Such positions are called non-trivial positions. Given a term t, the set $\mathcal{P}os(t)$ of all positions in $t$ is defined by induction on t.
1. $\mathcal{P}os(v) := \bigl\{()\bigr\} \quad \mbox{if $v$ is a variable} $
2. $\mathcal{P}os\bigl(f(t_0,\cdots,t_{n-1})\bigr) :=
   \bigl\{()\bigr\} \cup
   \bigl\{ (i,) + u \mid i \in \{0,\cdots,n-1\} \wedge u \in \mathcal{P}os(t_i) \bigr\}
   $
Note that since we are programming in Python, positions are zero-based. Given a position $v$ in a term $t$, we define $t/v$ as the subterm of $t$ at position $v$ by induction on $t$:
1. $t/() := t$,
2. $f(t_0,\cdots,t_{n-1})/u := t_{u\texttt{[0]}}/u\texttt{[1:]}$.
Given a term $s$, a term $t$, and a position $u \in \mathcal{P}os(t)$, we also define the replacement of the subterm at position $u$ by $t$, written $s[u \mapsto t]$ by induction on $u$:
1. $s\bigl[() \mapsto t\bigr] := t$.
2. $f(s_0,\cdots,s_{n-1})\bigl[\bigl((i,) + u\bigr) \mapsto t\bigr] := f\bigl(s_0,\cdots,s_i[u \mapsto t],\cdots,s_{n-1}\bigr)$.
End of explanation
"""
def subterm(t, u):
if len(u) == 0:
return t
_, *args = t
i, *ur = u
return subterm(args[i], ur)
def test():
t = parse_term('(x * i(x)) * 1')
print(f'subterm({to_str(t)}, (0,1)) = {to_str(subterm(t, (0,1)))}')
test()
"""
Explanation: Given a term t and a position u in t, the function subterm(t, u) extracts the subterm that is located at position u, i.e. it computes t/u. The position u is zero-based.
End of explanation
"""
def replace_at(t, u, s):
if len(u) == 0:
return s
i, *ur = u
f, *Args = t
NewArgs = []
for j, arg in enumerate(Args):
if j == i:
NewArgs.append(replace_at(arg, ur, s))
else:
NewArgs.append(arg)
return (f,) + tuple(NewArgs)
def test():
t = parse_term('(x * i(x)) * 1')
s = parse_term('a * b')
print(f'replace_at({to_str(t)}, (0,1), {to_str(s)}) = {to_str(replace_at(t, (0,1), s))}')
test()
"""
Explanation: Given a term t, a position u in t, and a term s, the function replace_at(t, u, s) replaces the subterm of t at position u with s. The position u uses zero-based indexing. Hence it returns the term
$$ t[u \mapsto s]. $$
End of explanation
"""
def critical_pairs(eq1, eq2):
Vars = find_variables(eq1) | find_variables(eq2)
eq2 = rename_variables(eq2, Vars)
_, lhs1, rhs1 = eq1
_, lhs2, rhs2 = eq2
Result = set()
Positions = non_triv_positions(lhs1)
for u in Positions:
𝜇 = unify(subterm(lhs1, u), lhs2)
if 𝜇 != None:
lhs1_new = apply(replace_at(lhs1, u, rhs2), 𝜇)
rhs1_new = apply(rhs1, 𝜇)
Result.add( (('=', lhs1_new, rhs1_new), eq1, eq2))
return Result
def test():
eq1 = parse_equation('(x * y) * z = x * (y * z)')
eq2 = parse_equation('i(x) * x = 1')
for ((_, s, t), _, _) in critical_pairs(eq1, eq2):
print(f'critical_pairs({to_str(eq1)}, {to_str(eq2)}) = ' + '{' + f'{to_str(s)} = {to_str(t)}' + '}')
test()
"""
Explanation: Given two equations eq1 and eq2, the function critical_pairs(eq1, eq2) computes the set of all critical pairs between these equations. A pair of terms (s, t) is a critical pair of eq1 and eq2 if we have
- eq1 has the form lhs1 = rhs1,
- eq2 has the form lhs2 = rhs2,
- u is a non-trivial position in lhs1,
- $\mu = \texttt{mgu}(\texttt{lhs}_1/u, \texttt{lhs}_2) \not= \texttt{None}$,
- $s = \texttt{lhs}_1\mu[u \mapsto \texttt{rhs}_2\mu]$ and $t = \texttt{rhs}_1\mu$.
End of explanation
"""
def simplify_rules(RewriteRules, rule):
UnusedRules = [ rule ]
while UnusedRules != []:
UnchangedRules = set()
r = UnusedRules.pop()
for eq in RewriteRules:
simple = normal_form(eq, { r })
if simple != eq:
simple = normal_form(simple, RewriteRules | { r })
if simple[1] != simple[2]:
simple = order_equation(simple)
UnusedRules.append(simple)
print('simplified:')
print(f'old: {to_str(eq)}')
print(f'new: {to_str(simple)}')
else:
print(f'removed: {to_str(eq)}')
else:
UnchangedRules.add(eq)
RewriteRules = UnchangedRules | { r }
return RewriteRules
"""
Explanation: Back to top
The Completion Algorithm
Given a set of RewriteRules and a newly derived rewrite rule, the function simplify_rules(RewriteRules, rule) adds rule to the set RewriteRules. When the function returns, every equation in the set RewriteRules is in normal form with respect to all other equations in RewriteRules.
End of explanation
"""
def print_equations(Equations):
cnt = 1
for _, l, r in Equations:
print(f'{cnt}. {to_str(l)} = {to_str(r)}')
cnt += 1
"""
Explanation: The function print_equations prints the set of Equations one by one and numbers them.
End of explanation
"""
def complexity(eq):
return len(to_str(eq))
"""
Explanation: Given an equation eq of the form eq = ('=', lhs, rhs), the function complexity(eq) computes a measure of complexity for the equation, namely the length of the string that represents it. This measure is later used to choose between equations: less complex equations are more interesting and should be considered first when computing critical pairs.
End of explanation
"""
def all_critical_pairs(RewriteRules, eq):
Result = set()
for eq1 in RewriteRules:
Result |= { cp for cp in critical_pairs(eq1, eq) }
Result |= { cp for cp in critical_pairs(eq, eq1) }
return Result
"""
Explanation: Given a set of equations RewriteRules and a single rewrite rule eq, the function all_critical_pairs(RewriteRules, eq) computes the set of all critical pairs that can be built between an equation from RewriteRules and the equation eq. It is assumed that eq is already an element of RewriteRules.
End of explanation
"""
import heapq as hq
"""
Explanation: The module heapq provides heap-based priority queues, which are implemented as lists.
End of explanation
"""
def knuth_bendix_algorithm(file):
Equations = set()
Axioms = set(parse_file(file))
RewriteRules = set()
try:
for eq in Axioms:
ordered_eq = order_equation(eq)
Equations.add(ordered_eq)
print(f'given: {to_str(ordered_eq)}')
EquationQueue = []
for eq in Equations:
hq.heappush(EquationQueue, (complexity(eq), eq))
while EquationQueue != []:
_, eq = hq.heappop(EquationQueue)
eq = normal_form(eq, RewriteRules)
if eq[1] != eq[2]:
lr = order_equation(eq)
print(f'added: {to_str(lr)}')
Pairs = all_critical_pairs(RewriteRules | { lr }, lr)
for eq, r1, r2 in Pairs:
new_eq = normal_form(eq, RewriteRules)
if new_eq[1] != new_eq[2]:
print(f'found: {to_str(eq)} from {to_str(r1)}, {to_str(r2)}')
hq.heappush(EquationQueue, (complexity(new_eq), new_eq))
RewriteRules = simplify_rules(RewriteRules, lr)
except OrderException as e:
print(e)
print()
print_equations(RewriteRules)
return RewriteRules
"""
Explanation: Given the name of a file that contains a set of equations, the function knuth_bendix_algorithm implements the Knuth-Bendix completion algorithm, using the global dictionaries WEIGHT and ORDERING to orient the equations:
1. The equations read from the file are oriented into rewrite rules.
2. These oriented equations are pushed onto the priority queue EquationQueue according to their complexity.
3. The set RewriteRules is initialized as the empty set. The idea is that all critical pairs between
equations in RewriteRules have already been computed and that the resulting new equations have been added
to the priority queue EquationQueue.
4. As long as the priority queue EquationQueue is not empty, the least complex equation eq is removed from the
priority queue and simplified using the known RewriteRules.
5. If the simplified version of eq is not trivial, all critical pairs between eq and the
existing RewriteRules are computed. The resulting equations are pushed onto the priority queue EquationQueue.
6. When no new critical pairs can be found, the set of RewriteRules is returned.
This set is then guaranteed to be a confluent set of rewrite rules.
End of explanation
"""
!cat Examples/group-theory-1.eqn || type Examples\group-theory-1.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/group-theory-1.eqn')
"""
Explanation: Back to top
Examples
In this section we present a number of examples where the Knuth-Bendix completion algorithm is able to produce a confluent system of equations. In detail, we discuss the following examples:
1. Group Theory
2. Central Groupoids
3. Quasigroups
4. Quasigroups with Idempotence
5. Quasigroups with Unipotence
6. Loops
Group Theory
A structure $\mathcal{G} = \langle G, 1, *, i \rangle$ is a group iff
1. $G$ is a set.
2. $1 \in G$,
where $1$ is called the left-neutral element.
3. $*: G \times G \rightarrow G$,
where $*$ is called the *multiplication* of $\mathcal{G}$.
4. $i: G \rightarrow G$,
where for any $x \in G$ the element $i(x)$ is called the left-inverse of $x$.
5. The following equations hold for all $x,y,z \in G$:
* $1 * x = x$, i.e. $1$ is a left-neutral element.
* $i(x) * x = 1$, i.e. $i(x)$ is a left-inverse of $x$.
* $(x * y) * z = x * (y * z)$, i.e. the multiplication is associative.
A typical example of a group is the set of invertible $n \times n$ matrices.
Given the axioms defining a group, the Knuth-Bendix completion algorithm is able to prove the following:
1. The left neutral element is also a right neutral element, we have:
$$ x * 1 = x \quad \mbox{for all $x\in G$.} $$
2. The left inverse is also a right inverse, we have:
$$ x * i(x) = 1 \quad \mbox{for all $x\in G$.} $$
3. The operations $i$ and $*$ commute as follows:
$$ i(x * y) = i(y) * i(x) \quad \mbox{for all $x,y\in G$.}$$
End of explanation
"""
!cat Examples/group-theory-2.eqn || type Examples\group-theory-2.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/group-theory-2.eqn')
"""
Explanation: It is natural to ask whether the axiom describing the left neutral element and the axiom describing the left inverse can be replaced by corresponding axioms that require $1$ to be a right neutral element and $i(x)$ to be a right inverse. The Knuth-Bendix completion algorithm shows that this is indeed the case.
End of explanation
"""
!cat Examples/lr-system.eqn || type Examples\lr-system.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/lr-system.eqn')
"""
Explanation: LR Systems
Next, it is natural to ask what happens if we have a left-neutral element and a right inverse. Algebraic structures of this kind are called *LR systems*. The Knuth-Bendix completion algorithm shows that, in general, LR systems are different from groups.
End of explanation
"""
!cat Examples/rl-system.eqn || type Examples\rl-system.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/rl-system.eqn')
"""
Explanation: RL Systems
Similarly, if we have a right-neutral element and a left inverse, the resulting structure need not be a group. Systems of this kind are called *RL systems*.
End of explanation
"""
!cat Examples/central-groupoid.eqn || type Examples\central-groupoid.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/central-groupoid.eqn')
"""
Explanation: Central Groupoids
A structure $\mathcal{G} = \langle G, * \rangle$ is a central groupoid iff
1. $G$ is a non-empty set.
2. $*: G \times G \rightarrow G$,
3. The following equation holds for all $x,y,z \in G$:
$$ (x * y) * (y * z) = y $$
Central groupoids were defined by Trevor Evans in his paper *Products of Points—Some Simple Algebras and Their Identities* and are also discussed by Donald E. Knuth in his paper *Notes on Central Groupoids*.
End of explanation
"""
!cat Examples/quasigroups.eqn || type Examples\quasigroups.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/quasigroups.eqn')
"""
Explanation: Back to top
Quasigroups
A structure $\mathcal{G} = \langle G, *, /, \backslash \rangle$ is a quasigroup iff
1. $G$ is a non-empty set.
2. $*: G \times G \rightarrow G$,
where $*$ is called the *multiplication* of $\mathcal{G}$.
3. $/: G \times G \rightarrow G$,
where $/$ is called the left division of $\mathcal{G}$.
4. $\backslash: G \times G \rightarrow G$,
where $\backslash$ is called the right division of $\mathcal{G}$.
5. The following equations hold for all $x,y \in G$:
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$.
End of explanation
"""
!cat Examples/quasigroup-idempotence.eqn || type Examples\quasigroup-idempotence.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/quasigroup-idempotence.eqn')
"""
Explanation: Quasigroups with Idempotence
A quasigroup with idempotence is a quasigroup that additionally satisfies the identity $x * x = x$. Therefore, a structure $\mathcal{G} = \langle G, *, /, \backslash \rangle$ is a quasigroup with idempotence iff
1. $G$ is a set.
2. $*: G \times G \rightarrow G$,
where $*$ is called the *multiplication* of $\mathcal{G}$.
3. $/: G \times G \rightarrow G$,
where $/$ is called the left division of $\mathcal{G}$.
4. $\backslash: G \times G \rightarrow G$,
where $\backslash$ is called the right division of $\mathcal{G}$.
5. The following equations hold for all $x,y \in G$:
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$,
* $x * x = x$.
End of explanation
"""
!cat Examples/quasigroup-unipotence.eqn || type Examples\quasigroup-unipotence.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/quasigroup-unipotence.eqn')
"""
Explanation: Quasigroups with Unipotence
A quasigroup with unipotence is a quasigroup that additionally satisfies the identity $x * x = 1$,
where $1$ is a constant symbol. Therefore, a structure $\mathcal{G} = \langle G, 1, *, /, \backslash \rangle$ is a quasigroup with unipotence iff
1. $G$ is a set.
2. $1 \in G$.
3. $*: G \times G \rightarrow G$,
where $*$ is called the *multiplication* of $\mathcal{G}$.
4. $/: G \times G \rightarrow G$,
where $/$ is called the left division of $\mathcal{G}$.
5. $\backslash: G \times G \rightarrow G$,
where $\backslash$ is called the right division of $\mathcal{G}$.
6. The following equations hold for all $x,y \in G$:
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$,
* $x * x = 1$.
End of explanation
"""
!cat Examples/loops.eqn || type Examples\loops.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/loops.eqn')
"""
Explanation: Loops
A loop is a quasigroup that additionally has an identity element. Therefore, a structure $\mathcal{G} = \langle G, 1, *, /, \backslash \rangle$ is a loop iff
1. $G$ is a set.
2. $1 \in G$.
3. $*: G \times G \rightarrow G$,
where $*$ is called the multiplication of $\mathcal{G}$.
4. $/: G \times G \rightarrow G$,
where $/$ is called the left division of $\mathcal{G}$.
5. $\backslash: G \times G \rightarrow G$,
where $\backslash$ is called the right division of $\mathcal{G}$.
6. The following equations hold for all $x,y \in G$:
* $1 * x = x$,
* $x * 1 = x$,
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$.
End of explanation
"""
|
darrenxyli/deeplearning | lessons/handwritten/handwritten-digit-recognition-with-tflearn-exercise.ipynb | apache-2.0 | # Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
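Both transformations are easy to sketch in NumPy; the helper names below are illustrative, not part of the mnist loader:

```python
import numpy as np

def one_hot(label, n_classes=10):
    # All zeros except a single 1 at the index of the label
    vec = np.zeros(n_classes, dtype=int)
    vec[label] = 1
    return vec

def flatten(image):
    # Collapse a 28x28 image into a 784-element vector
    return image.reshape(-1)

print(one_hot(4))                         # [0 0 0 0 1 0 0 0 0 0]
print(flatten(np.zeros((28, 28))).shape)  # (784,)
```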
End of explanation
"""
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 400, activation='ReLU')
net = tflearn.fully_connected(net, 100, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=64)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
test_accuracy = np.mean(predictions == testY.argmax(axis=1))
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.
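With one-hot labels, accuracy reduces to comparing the argmaxes of the predicted probabilities and the true labels. A small NumPy sketch with made-up arrays (not actual model output):

```python
import numpy as np

# Each row: predicted class probabilities / one-hot true label
pred = np.array([[0.1, 0.8, 0.1],
                 [0.7, 0.2, 0.1],
                 [0.2, 0.3, 0.5]])
true = np.array([[0, 1, 0],
                 [1, 0, 0],
                 [0, 0, 1]])

# Pick the most probable class per row and compare to the true class
accuracy = np.mean(pred.argmax(axis=1) == true.argmax(axis=1))
print(accuracy)  # 1.0
```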
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/a68128275cc59b074b8c9782296d1d4a/decoding_rsa.ipynb | bsd-3-clause | # Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.manifold import MDS
import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
print(__doc__)
data_path = visual_92_categories.data_path()
# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
print(conds.head(5))
"""
Explanation: Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists of characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations [1]_ [2]_ [3]_.
In this example, we perform RSA on responses to 24 object images (among
a list of 92 images). Subjects were presented with images of human, animal
and inanimate objects [4]_. Here we use the 24 unique images of faces
and body parts.
<div class="alert alert-info"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not
build the images below.</p></div>
References
.. [1] Shepard, R. "Multidimensional scaling, tree-fitting, and clustering."
Science 210.4468 (1980): 390-398.
.. [2] Laakso, A. & Cottrell, G.. "Content and cluster analysis:
assessing representational similarity in neural systems." Philosophical
psychology 13.1 (2000): 47-76.
.. [3] Kriegeskorte, N., Marieke, M., & Bandettini. P. "Representational
similarity analysis-connecting the branches of systems neuroscience."
Frontiers in systems neuroscience 2 (2008): 4.
.. [4] Cichy, R. M., Pantazis, D., & Oliva, A. "Resolving human object
recognition in space and time." Nature neuroscience (2014): 17(3),
455-462.
End of explanation
"""
max_trigger = 24
conds = conds[:max_trigger] # take only the first 24 rows
"""
Explanation: Let's restrict the number of conditions to speed up computation
End of explanation
"""
conditions = []
for c in conds.values:
cond_tags = list(c[:2])
cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
for k, i in enumerate(c[2:], 2)]
conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
"""
Explanation: Define stimulus - trigger mapping
End of explanation
"""
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
"""
Explanation: Let's make the event_id dictionary
End of explanation
"""
n_runs = 4 # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block, verbose='error')
for block in range(n_runs)] # ignore filename warnings
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
"""
Explanation: Read MEG data
End of explanation
"""
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
picks=picks, tmin=-.1, tmax=.500, preload=True)
"""
Explanation: Epoch data
End of explanation
"""
epochs['face'].average().plot()
epochs['not-face'].average().plot()
"""
Explanation: Let's plot some conditions
End of explanation
"""
# Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
LogisticRegression(C=1, solver='liblinear',
multi_class='auto'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]
classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
# Fit
clf.fit(X[train], y[train])
# Probabilistic prediction (necessary for ROC-AUC scoring metric)
y_pred[test] = clf.predict_proba(X[test])
"""
Explanation: Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation for statistics applied to the confusion matrix,
which is also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach from Cichy et al. we'll use a multiclass
classifier (Multinomial Logistic Regression) while the paper uses
all pairwise binary classification tasks to make the RDM.
Also we use here the ROC-AUC as performance metric while the
paper uses accuracy. Finally here for the sake of time we use
RSA on a window of data while Cichy et al. did it for all time
instants separately.
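As a reminder of the metric, binary ROC-AUC equals the probability that a randomly drawn positive sample is scored above a randomly drawn negative one (ties counting one half). A plain-Python sketch of that rank statistic (illustrative only, not the sklearn implementation):

```python
def roc_auc(labels, scores):
    # Fraction of (positive, negative) pairs ranked correctly; ties count 1/2
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0
print(roc_auc([1, 0, 1, 0], [0.2, 0.2, 0.2, 0.2]))  # 0.5
```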
End of explanation
"""
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
for jj in range(ii, len(classes)):
confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
confusion[jj, ii] = confusion[ii, jj]
"""
Explanation: Compute confusion matrix using ROC-AUC
End of explanation
"""
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show()
"""
Explanation: Plot
End of explanation
"""
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
sel = np.where([this_name == name for this_name in names])[0]
size = 500 if name == 'human face' else 100
ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show()
"""
Explanation: Confusion matrices related to mental representations have historically been
summarized with dimensionality reduction using multi-dimensional scaling [1]_.
See how the face samples cluster together.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/2_dataset_api.ipynb | apache-2.0 | # Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0
import json
import math
import os
from pprint import pprint
import numpy as np
import tensorflow as tf
print(tf.version.VERSION)
"""
Explanation: TensorFlow Dataset API
Learning Objectives
1. Learn how use tf.data to read data from memory
1. Learn how to use tf.data in a training loop
1. Learn how use tf.data to read data from disk
1. Learn how to write production input pipelines with feature engineering (batching, shuffling, etc.)
In this notebook, we will start by refactoring the linear regression we implemented in the previous lab so that it takes its data from a tf.data.Dataset, and we will learn how to implement stochastic gradient descent with it. In this case, the original dataset will be synthetic and read by the tf.data API directly from memory.
In a second part, we will learn how to load a dataset with the tf.data API when the dataset resides on disk.
End of explanation
"""
N_POINTS = 10
X = tf.constant(range(N_POINTS), dtype=tf.float32)
Y = 2 * X + 10
"""
Explanation: Loading data from memory
Creating the dataset
Let's consider the synthetic dataset of the previous section:
End of explanation
"""
# TODO 1
def create_dataset(X, Y, epochs, batch_size):
dataset = # TODO: Your code goes here.
dataset = # TODO: Your code goes here.
return dataset
"""
Explanation: We begin with implementing a function that takes as input
our $X$ and $Y$ vectors of synthetic data generated by the linear function $y= 2x + 10$
the number of passes over the dataset we want to train on (epochs)
the size of the batches the dataset (batch_size)
and returns a tf.data.Dataset:
Remark: Note that the last batch may not contain the exact number of elements you specified because the dataset was exhausted.
If you want batches with the exact same number of elements per batch, we will have to discard the last batch by
setting:
python
dataset = dataset.batch(batch_size, drop_remainder=True)
We will do that here.
Lab Task #1: Complete the code below to
1. instantiate a tf.data dataset using tf.data.Dataset.from_tensor_slices.
2. Set up the dataset to
* repeat epochs times,
* create a batch of size batch_size, ignoring extra elements when the batch does not divide the number of input elements evenly.
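To make the repeat/batch semantics concrete, here is a plain-Python sketch of what the two calls do (this mimics tf.data behavior; the function name is illustrative and it is not tf.data itself):

```python
def batches(data, epochs, batch_size, drop_remainder=True):
    # Repeat the data `epochs` times, then cut it into fixed-size batches
    repeated = list(data) * epochs
    out = []
    for start in range(0, len(repeated), batch_size):
        batch = repeated[start:start + batch_size]
        if drop_remainder and len(batch) < batch_size:
            break  # discard the incomplete final batch
        out.append(batch)
    return out

print(batches(range(10), epochs=1, batch_size=3))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8]] -- the leftover [9] is dropped
```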
End of explanation
"""
BATCH_SIZE = 3
EPOCH = 2
dataset = create_dataset(X, Y, epochs=EPOCH, batch_size=BATCH_SIZE)
for i, (x, y) in enumerate(dataset):
print("x:", x.numpy(), "y:", y.numpy())
assert len(x) == BATCH_SIZE
assert len(y) == BATCH_SIZE
assert EPOCH
"""
Explanation: Let's test our function by iterating twice over our dataset in batches of 3 datapoints:
End of explanation
"""
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
"""
Explanation: Loss function and gradients
The loss function and the function that computes the gradients are the same as before:
End of explanation
"""
# TODO 2
EPOCHS = 250
BATCH_SIZE = 2
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dataset = # TODO: Your code goes here.
for step, (X_batch, Y_batch) in # TODO: Your code goes here.
dw0, dw1 = #TODO: Your code goes here.
#TODO: Your code goes here.
#TODO: Your code goes here.
if step % 100 == 0:
loss = #TODO: Your code goes here.
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
assert loss < 0.0001
assert abs(w0 - 2) < 0.001
assert abs(w1 - 10) < 0.001
"""
Explanation: Training loop
The main difference is that now, in the training loop, we will iterate directly on the tf.data.Dataset generated by our create_dataset function.
We will configure the dataset so that it iterates 250 times over our synthetic dataset in batches of 2.
Lab Task #2: Complete the code in the cell below to call your dataset above when training the model. Note that the step, (X_batch, Y_batch) iterates over the dataset. The inside of the for loop should be exactly as in the previous lab.
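For intuition, here is the same mini-batch descent written directly in NumPy on the synthetic y = 2x + 10 data — a sketch of the dynamics, not the TensorFlow solution:

```python
import numpy as np

X = np.arange(10, dtype=float)
Y = 2 * X + 10
w0, w1, lr = 0.0, 0.0, 0.02

for epoch in range(250):
    for i in range(0, len(X), 2):          # batches of size 2
        x, y = X[i:i + 2], Y[i:i + 2]
        err = w0 * x + w1 - y              # prediction error per point
        dw0, dw1 = 2 * np.mean(err * x), 2 * np.mean(err)
        w0, w1 = w0 - lr * dw0, w1 - lr * dw1

print(round(w0, 2), round(w1, 2))  # close to 2.0 and 10.0
```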
End of explanation
"""
!ls -l ../data/taxi*.csv
"""
Explanation: Loading data from disk
Locating the CSV files
We will start with the taxifare dataset CSV files that we wrote out in a previous lab.
The taxifare dataset files have been saved into ../data.
Check that this is the case in the cell below and, if not, regenerate the taxifare
dataset by running the previous lab notebook:
End of explanation
"""
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
"""
Explanation: Use tf.data to read the CSV files
The tf.data API can easily read csv files using the helper function
tf.data.experimental.make_csv_dataset
If you have TFRecords (which is recommended), you may use
tf.data.experimental.make_batched_features_dataset
The first step is to define
the feature names into a list CSV_COLUMNS
their default values into a list DEFAULTS
End of explanation
"""
# TODO 3
def create_dataset(pattern):
# TODO: Your code goes here.
return dataset
tempds = create_dataset('../data/taxi-train*')
print(tempds)
"""
Explanation: Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located:
Lab Task #3: Complete the code in the create_dataset(...) function below to return a tf.data dataset made from the make_csv_dataset. Have a look at the documentation here. The pattern will be given as an argument of the function but you should set the batch_size, column_names and column_defaults.
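To see the kind of work make_csv_dataset does per row, here is a stdlib-only sketch of parsing one CSV line against per-column defaults (illustrative; the scalar defaults stand in for the [[0.0]]-style defaults above, and TensorFlow's actual parser is more involved):

```python
import csv, io

COLUMNS = ['fare_amount', 'pickup_datetime', 'pickup_longitude',
           'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude',
           'passenger_count', 'key']
DEFAULTS = [0.0, 'na', 0.0, 0.0, 0.0, 0.0, 0.0, 'na']

def parse_row(line):
    # Split the line; use each default's value (and type) for empty fields
    values = next(csv.reader(io.StringIO(line)))
    row = {}
    for name, default, value in zip(COLUMNS, DEFAULTS, values):
        row[name] = type(default)(value) if value != '' else default
    return row

row = parse_row('12.0,2010-01-01 00:00:00 UTC,-73.9,40.7,-74.0,40.7,,abc')
print(row['fare_amount'], row['passenger_count'])  # 12.0 0.0
```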
End of explanation
"""
for data in tempds.take(2):
pprint({k: v.numpy() for k, v in data.items()})
print("\n")
"""
Explanation: Note that this is a prefetched dataset, where each element is an OrderedDict whose keys are the feature names and whose values are tensors of shape (1,) (i.e. vectors).
Let's iterate over the first two elements of this dataset using dataset.take(2) and convert them to ordinary Python dictionaries with numpy arrays as values for more readability:
End of explanation
"""
UNWANTED_COLS = ['pickup_datetime', 'key']
# TODO 4a
def features_and_labels(row_data):
label = # TODO: Your code goes here.
features = # TODO: Your code goes here.
# TODO: Your code goes here.
return features, label
"""
Explanation: Transforming the features
What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary:
Remove the unwanted column "key"
Keep the label separate from the features
Let's first implement a function that takes as input a row (represented as an OrderedDict in our tf.data.Dataset as above) and then returns a tuple with two elements:
The first element being the same OrderedDict with the label dropped
The second element being the label itself (fare_amount)
Note that we will also need to remove the key and pickup_datetime columns, which we won't use.
Lab Task #4a: Complete the code in the features_and_labels(...) function below. Your function should return a dictionary of features and a label. Keep in mind row_data is already a dictionary and you will need to remove the pickup_datetime and key from row_data as well.
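The intended behavior is easy to state on a plain Python dict; the helper name below is illustrative and independent of tf.data:

```python
def split_features_and_label(row, label_col, unwanted_cols):
    # Copy so the original row is left untouched
    features = dict(row)
    label = features.pop(label_col)
    for col in unwanted_cols:
        features.pop(col)
    return features, label

row = {'fare_amount': 12.0, 'pickup_datetime': 't', 'key': 'k',
       'passenger_count': 1.0}
features, label = split_features_and_label(
    row, 'fare_amount', ['pickup_datetime', 'key'])
print(features, label)  # {'passenger_count': 1.0} 12.0
```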
End of explanation
"""
for row_data in tempds.take(2):
features, label = features_and_labels(row_data)
pprint(features)
print(label, "\n")
assert UNWANTED_COLS[0] not in features.keys()
assert UNWANTED_COLS[1] not in features.keys()
assert label.shape == [1]
"""
Explanation: Let's iterate over 2 examples from our tempds dataset and apply our feature_and_labels
function to each of the examples to make sure it's working:
End of explanation
"""
# TODO 4b
def create_dataset(pattern, batch_size):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = # TODO: Your code goes here.
return dataset
"""
Explanation: Batching
Let's now refactor our create_dataset function so that it takes an additional argument batch_size and batch the data correspondingly. We will also use the features_and_labels function we implemented in order for our dataset to produce tuples of features and labels.
Lab Task #4b: Complete the code in the create_dataset(...) function below to return a tf.data dataset made from the make_csv_dataset. Now, the pattern and batch_size will be given as an arguments of the function but you should set the column_names and column_defaults as before. You will also apply a .map(...) method to create features and labels from each example.
End of explanation
"""
BATCH_SIZE = 2
tempds = create_dataset('../data/taxi-train*', batch_size=2)
for X_batch, Y_batch in tempds.take(2):
pprint({k: v.numpy() for k, v in X_batch.items()})
print(Y_batch.numpy(), "\n")
assert len(Y_batch) == BATCH_SIZE
"""
Explanation: Let's test that our batches are of the right size:
End of explanation
"""
# TODO 4c
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = # TODO: Your code goes here.
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = # TODO: Your code goes here.
# take advantage of multi-threading; 1=AUTOTUNE
dataset = # TODO: Your code goes here.
return dataset
"""
Explanation: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
Let's refactor our create_dataset function so that it shuffles the data, when the dataset is used for training.
We will introduce an additional argument mode to our function to allow the function body to distinguish the case
when it needs to shuffle the data (mode == tf.estimator.ModeKeys.TRAIN) from when it shouldn't (mode == tf.estimator.ModeKeys.EVAL).
Also, before returning we will want to prefetch 1 data point ahead of time (dataset.prefetch(1)) to speedup training:
Lab Task #4c: The last step of our tf.data dataset will specify shuffling and repeating of our dataset pipeline. Complete the code below to add these three steps to the Dataset pipeline
1. follow the .map(...) operation which extracts features and labels with a call to .cache() the result.
2. during training, use .shuffle(...) and .repeat() to shuffle batches and repeat the dataset
3. use .prefetch(...) to take advantage of multi-threading and speedup training.
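tf.data's shuffle uses a fixed-size buffer rather than shuffling the whole dataset in memory; a plain-Python sketch of that idea (illustrative only, not tf.data's implementation):

```python
import random

def shuffle_buffer(stream, buffer_size, seed=0):
    # Keep up to `buffer_size` elements in memory; emit a random one as each
    # new element arrives, then drain the buffer at the end.
    rng = random.Random(seed)
    buffer = []
    for item in stream:
        buffer.append(item)
        if len(buffer) > buffer_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

shuffled = list(shuffle_buffer(range(10), buffer_size=4))
print(shuffled)  # the same 10 elements, in a locally shuffled order
```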
End of explanation
"""
tempds = create_dataset('../data/taxi-train*', 2, tf.estimator.ModeKeys.TRAIN)
print(list(tempds.take(1)))
tempds = create_dataset('../data/taxi-valid*', 2, tf.estimator.ModeKeys.EVAL)
print(list(tempds.take(1)))
"""
Explanation: Let's check that our function works well in both modes:
End of explanation
"""
|
marcelomiky/PythonCodes | Coursera/CICCP2/Curso Introdução à Ciência da Computação com Python - Parte 2.ipynb | mit | def cria_matriz(tot_lin, tot_col, valor):
matriz = [] #lista vazia
for i in range(tot_lin):
linha = []
for j in range(tot_col):
linha.append(valor)
matriz.append(linha)
return matriz
x = cria_matriz(2, 3, 99)
x
def cria_matriz(tot_lin, tot_col, valor):
matriz = [] #lista vazia
for i in range(tot_lin):
linha = []
for j in range(tot_col):
linha.append(valor)
matriz.append(linha)
return matriz
x = cria_matriz(2, 3, 99)
x
"""
Explanation: Week 1
End of explanation
"""
def cria_matriz(num_linhas, num_colunas):
matriz = [] # empty list
for i in range(num_linhas):
linha = []
for j in range(num_colunas):
linha.append(0)
matriz.append(linha)
for i in range(num_colunas):
for j in range(num_linhas):
matriz[j][i] = int(input("Enter element [" + str(j) + "][" + str(i) + "]: "))
return matriz
x = cria_matriz(2, 3)
x
def tarefa(mat):
dim = len(mat)
for i in range(dim):
print(mat[i][dim-1-i], end=" ")
mat = [[1,2,3],[4,5,6],[7,8,9]]
tarefa(mat)
# Note: the end=" " argument changes print's default line ending, which is to
# jump to the next line. With this change, the cursor stays on the same line
# waiting for the next print.
"""
Explanation: This code fills the entire first row first, then the second, and so on. If we wanted the first column to be filled first, then the second column, and so forth, what would the code look like?
An example: if the user typed the command "x = cria_matriz(2,3)" and then entered the six numbers to be stored in the matrix in the order 1, 2, 3, 4, 5, 6, then at the end of the function x would hold the matrix [[1, 3, 5], [2, 4, 6]].
End of explanation
"""
def dimensoes(A):
'''Receives a matrix as a parameter and prints the dimensions of the received matrix, in the format iXj,
where i is the number of rows and j the number of columns.
Example:
>>> minha_matriz = [[1],
[2],
[3]
]
>>> dimensoes(minha_matriz)
3X1
'''
lin = len(A)
col = len(A[0])
return print("%dX%d" % (lin, col))
matriz1 = [[1], [2], [3]]
dimensoes(matriz1)
matriz2 = [[1, 2, 3], [4, 5, 6]]
dimensoes(matriz2)
"""
Explanation: Exercise 1: Matrix size
Write a function dimensoes(matriz) that receives a matrix as a parameter and prints the dimensions of the received matrix, in the format iXj.
Examples:
minha_matriz = [[1], [2], [3]]
dimensoes(minha_matriz)
3X1
minha_matriz = [[1, 2, 3], [4, 5, 6]]
dimensoes(minha_matriz)
2X3
End of explanation
"""
def soma_matrizes(m1, m2):
def dimensoes(A):
lin = len(A)
col = len(A[0])
return ((lin, col))
if dimensoes(m1) != dimensoes(m2):
return False
else:
matriz = []
for i in range(len(m1)):
linha = []
for j in range(len(m1[0])):
linha.append(m1[i][j] + m2[i][j])
matriz.append(linha)
return matriz
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2)
m1 = [[1], [2], [3]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2)
"""
Explanation: Exercise 2: Matrix addition
Write the function soma_matrizes(m1, m2) that receives 2 matrices and returns a matrix representing their sum if the matrices have the same dimensions. Otherwise, the function must return False.
Examples:
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2) => [[3, 5, 7], [9, 11, 13]]
m1 = [[1], [2], [3]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2) => False
End of explanation
"""
def imprime_matriz(A):
    for i in range(len(A)):
        for j in range(len(A[i])):
            # Print a space between elements, but not after the last one
            if j < len(A[i]) - 1:
                print(A[i][j], end=' ')
            else:
                print(A[i][j])
minha_matriz = [[1], [2], [3]]
imprime_matriz(minha_matriz)
minha_matriz = [[1, 2, 3], [4, 5, 6]]
imprime_matriz(minha_matriz)
"""
Explanation: Programming practice task: Additional exercises (optional)
Exercise 1: Printing matrices
As proposed in the first video lecture of the week, write a function imprime_matriz(matriz) that receives a matrix as a parameter and prints the matrix, row by row. Note that you must NOT print spaces after the last element of each row!
Examples:
minha_matriz = [[1], [2], [3]]
imprime_matriz(minha_matriz)
1
2
3
minha_matriz = [[1, 2, 3], [4, 5, 6]]
imprime_matriz(minha_matriz)
1 2 3
4 5 6
End of explanation
"""
def sao_multiplicaveis(m1, m2):
'''Receives two matrices as parameters and returns True if the matrices are multiplicable
(the number of columns of the first equals the number of rows of the second). False otherwise.
'''
if len(m1[0]) == len(m2):
return True
else:
return False
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
sao_multiplicaveis(m1, m2)
m1 = [[1], [2], [3]]
m2 = [[1, 2, 3]]
sao_multiplicaveis(m1, m2)
"""
Explanation: Exercise 2: Multiplicable matrices
Two matrices are multiplicable if the number of columns of the first equals the number of rows of the second. Write the function sao_multiplicaveis(m1, m2) that receives two matrices as parameters and returns True if the matrices are multiplicable (in the given order) and False otherwise.
Examples:
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
sao_multiplicaveis(m1, m2) => False
m1 = [[1], [2], [3]]
m2 = [[1, 2, 3]]
sao_multiplicaveis(m1, m2) => True
End of explanation
"""
"áurea gosta de coentro".capitalize()
"AQUI".capitalize()
# method to remove surrounding whitespace
" email@company.com ".strip()
"o abecedário da Xuxa é didático".count("a")
"o abecedário da Xuxa é didático".count("á")
"o abecedário da Xuxa é didático".count("X")
"o abecedário da Xuxa é didático".count("x")
"o abecedário da Xuxa é didático".count("z")
"A vida como ela seje".replace("seje", "é")
"áurea gosta de coentro".capitalize().center(80) # 80 characters wide, with this text in the center
texto = "Ao que se percebe, só há o agora"
texto
texto.find("q")
texto.find('se')
texto[7] + texto[8]
texto.find('w')
fruta = 'amora'
fruta[:4] # from the beginning up to position THREE!
fruta[1:] # from position 1 (indices start at zero) to the end
fruta[2:4] # from position 2 up to position 3
"""
Explanation: Week 2
End of explanation
"""
def mais_curto(lista_de_nomes):
    menor = lista_de_nomes[0].strip()  # assume the shortest name comes first
    for i in lista_de_nomes:
        i = i.strip()  # the exercise asks to ignore surrounding spaces
        if len(i) < len(menor):
            menor = i
    return menor.capitalize()
lista = ['carlos', 'césar', 'ana', 'vicente', 'maicon', 'washington']
mais_curto(lista)
ord('a')
ord('A')
ord('b')
ord('m')
ord('M')
ord('AA')
'maçã' > 'banana'
'Maçã' > 'banana'
'Maçã'.lower() > 'banana'.lower()
txt = 'José'
txt = txt.lower()
txt
lista = ['ana', 'maria', 'José', 'Valdemar']
len(lista)
lista[3].lower()
lista[2]
lista[2] = lista[2].lower()
lista
for i in lista:
print(i)
lista[0][0]
"""
Explanation: Exercise
Write a function that receives a list of strings containing people's names as a parameter and returns the shortest name. The function must ignore spaces before and after the name and must return the name with the first letter capitalized.
End of explanation
"""
def menor_string(array_string):
    for i in range(len(array_string)):
        array_string[i] = array_string[i].lower()
    menor = array_string[0]  # assume the first one is the answer
    for i in array_string:
        if i < menor:  # full lexicographic comparison, not just the first character
            menor = i
    return menor
lista = ['maria', 'José', 'Valdemar']
menor_string(lista)
# Code to reverse a string and convert it to upper case
def fazAlgo(string):
pos = len(string)-1
string = string.upper()
while pos >= 0:
print(string[pos],end = "")
pos = pos - 1
fazAlgo("paralelepipedo")
# Code that upper-cases the characters at odd positions (even indices):
def fazAlgo(string):
pos = 0
string1 = ""
string = string.lower()
stringMa = string.upper()
while pos < len(string):
if pos % 2 == 0:
string1 = string1 + stringMa[pos]
else:
string1 = string1 + string[pos]
pos = pos + 1
return string1
print(fazAlgo("paralelepipedo"))
# Code that removes the blank spaces
def fazAlgo(string):
pos = 0
string1 = ""
while pos < len(string):
if string[pos] != " ":
string1 = string1 + string[pos]
pos = pos + 1
return string1
print(fazAlgo("ISTO É UM TESTE"))
# and to return "Istoéumteste", i.e., keep only the first letter upper-case...
def fazAlgo(string):
pos = 0
string1 = ""
while pos < len(string):
if string[pos] != " ":
string1 = string1 + string[pos]
pos = pos + 1
string1 = string1.capitalize()
return string1
print(fazAlgo("ISTO É UM TESTE"))
x, y = 10, 20
x, y
x
y
def peso_altura():
return 77, 1.83
peso_altura()
peso, altura = peso_altura()
peso
altura
# Multiple assignment in C (the painful way...)
'''
int a, b, temp;
a = 10;
b = 20;
temp = a;
a = b;
b = temp;
'''
a, b = 10, 20
a, b = b, a
a, b
# Augmented assignment
x = 10
x = x + 10
x
x = 10
x += 10
x
x = 3
x *= 2
x
x = 2
x **= 10
x
x = 100
x /= 3
x
def pagamento_semanal(valor_por_hora, num_horas = 40):
return valor_por_hora * num_horas
pagamento_semanal(10)
pagamento_semanal(10, 20) # the second argument is still accepted.
# Invariant assertions
def pagamento_semanal(valor_por_hora, num_horas = 40):
assert valor_por_hora >= 0 and num_horas > 0
return valor_por_hora * num_horas
pagamento_semanal(30, 10)
pagamento_semanal(10, -10)  # raises AssertionError: num_horas must be positive
x, y = 10, 12
x, y = y, x
print("x = ",x,"e y = ",y)
x = 10
x += 10
x /= 2
x //= 3
x %= 2
x *= 9
print(x)
def calculo(x, y = 10, z = 5):
return x + y * z;
calculo(1, 2, 3)
calculo(1, 2) # 2 is bound to y.
def calculo(x, y = 10, z = 5):
return x + y * z;
print(calculo(1, 2, 3))
calculo()  # TypeError: x has no default value and was not supplied
# calculo( ,12, 10)  # SyntaxError: positional arguments cannot be skipped
def horario_em_segundos(h, m, s):
assert h >= 0 and m >= 0 and s >= 0
return h * 3600 + m * 60 + s
print(horario_em_segundos (3,0,50))
print(horario_em_segundos(1,2,3))
print(horario_em_segundos(-1, 20, 30))  # raises AssertionError
# Modules in Python
def fib(n): # prints the Fibonacci series up to n
a, b = 0, 1
while b < n:
print(b, end = ' ')
a, b = b, a + b
print()
def fib2(n):
result = []
a, b = 0, 1
while b < n:
result.append(b)
a, b = b, a + b
return result
'''
And in the Python shell (started from the folder that contains fibo.py):
>>> import fibo
>>> fibo.fib(100)
1 1 2 3 5 8 13 21 34 55 89
>>> fibo.fib2(100)
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> fibo.fib2(1000)
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
>>> meuFib = fibo.fib
>>> meuFib(20)
1 1 2 3 5 8 13
'''
"""
Explanation: Exercise
Write a function that receives an array of strings as a parameter and returns the first string in lexicographic order, ignoring upper and lower case
End of explanation
"""
def fazAlgo(string): # reverses the string and upper-cases the vowels
    pos = len(string)-1 # index of the last position in the string
    stringMi = string.lower() # all lower-case here
    string = string.upper() # all upper-case here
    stringRe = "" # return string
while pos >= 0:
if string[pos] == 'A' or string[pos] == 'E' or string[pos] == 'I' or string[pos] == 'O' or string[pos] == 'U':
stringRe = stringRe + string[pos]
else:
stringRe = stringRe + stringMi[pos]
pos = pos - 1
return stringRe
if __name__ == "__main__":
print(fazAlgo("teste"))
print(fazAlgo("o ovo do avestruz"))
print(fazAlgo("A CASA MUITO ENGRAÇADA"))
print(fazAlgo("A TELEvisão queBROU"))
print(fazAlgo("A Vaca Amarela"))
"""
Explanation: Adding <pre>print(__name__)</pre> as the last line of fibo.py and then running import fibo in the Python shell prints 'fibo', which is the name of the module.
By adding
<pre>
if __name__ == "__main__":
    import sys
    fib(int(sys.argv[1]))
</pre>
we can tell whether the file is being run as a script (the if as written) or imported as a module inside other code (if the name is not main, it is being imported so that some function defined there can be used).
End of explanation
"""
maiusculas('Programamos em python 2?')
# should return 'P'
maiusculas('Programamos em Python 3.')
# should return 'PP'
maiusculas('PrOgRaMaMoS em python!')
# should return 'PORMMS'
def maiusculas(frase):
    listRe = []  # empty return list
    stringRe = ''  # empty return string
    for ch in frase:
        if ord(ch) >= 65 and ord(ch) <= 90:  # 'A' is 65 and 'Z' is 90 in ASCII
            listRe.append(ch)
    # join the list back into a string
    stringRe = ''.join(listRe)
    return stringRe
maiusculas('Programamos em python 2?')
maiusculas('Programamos em Python 3.')
maiusculas('PrOgRaMaMoS em python!')
x = ord('A')
y = ord('a')
x, y
ord('B')
ord('Z')
"""
Explanation: Exercise 1: Upper-case letters
Write the function maiusculas(frase), which receives a sentence (a string) as a parameter and returns a string with the upper-case letters that exist in the sentence, in the order in which they appear.
To solve this exercise it may be useful to consult an ASCII table, which lists the value of each character. See http://equipe.nce.ufrj.br/adriano/c/apostila/tabascii.htm
Note that, to simplify the solution, the sentences passed to your function will not contain characters outside the ASCII table shown, such as ç, á, É, ã, etc.
Hint: the values shown in the table are the same ones returned by the ord function presented in the lectures.
Examples:
End of explanation
"""
menor_nome(['maria', 'josé', 'PAULO', 'Catarina'])
# should return 'José'
menor_nome(['maria', ' josé ', ' PAULO', 'Catarina '])
# should return 'José'
menor_nome(['Bárbara', 'JOSÉ ', 'Bill'])
# should return 'José'
def menor_nome(nomes):
    menor = ''  # variable that will hold the shortest name
    lista_limpa = []  # list of names without surrounding whitespace
    # ignore leading/trailing whitespace
    for nome in nomes:
        lista_limpa.append(nome.strip())
    # find the shortest name
    menor = lista_limpa[0]  # assume the first one is the shortest
    for nome in lista_limpa:
        if len(nome) < len(menor):  # not <=, so ties keep the first shortest name
            menor = nome
    return menor.capitalize()  # upper-case the first letter
menor_nome(['maria', 'josé', 'PAULO', 'Catarina'])
# should return 'José'
menor_nome(['maria', ' josé ', ' PAULO', 'Catarina '])
# should return 'José'
menor_nome(['Bárbara', 'JOSÉ ', 'Bill'])
# should return 'José'
menor_nome(['Bárbara', 'JOSÉ ', 'Bill', ' aDa '])
"""
Explanation: Exercise 2: Shortest name
As requested in the first video of this week, write a function menor_nome(nomes) that receives a list of strings with people's names as a parameter and returns the shortest name in the list.
The function must ignore spaces before and after the name and must return the shortest name in the list. This name must be returned with its first letter upper-case and the remaining characters lower-case, regardless of how it appears in the list passed to the function.
When more than one name has the shortest length, the function must return the first name with that length in the list.
Examples:
End of explanation
"""
def conta_letras(frase, contar = 'vogais'):
    pos = len(frase) - 1  # index of the last position in the string
    count = 0  # vowel counter
    while pos >= 0:  # count the vowels
        if frase[pos] == 'a' or frase[pos] == 'e' or frase[pos] == 'i' or frase[pos] == 'o' or frase[pos] == 'u':
            count += 1
        pos = pos - 1
    if contar == 'consoantes':
        frase = frase.replace(' ', '')  # remove the blank spaces
        return len(frase) - count  # subtract the vowels from the total
    else:
        return count
conta_letras('programamos em python')
conta_letras('programamos em python', 'vogais')
conta_letras('programamos em python', 'consoantes')
conta_letras('bcdfghjklmnpqrstvxywz', 'consoantes')
len('programamos em python')
frase = 'programamos em python'
frase.replace(' ', '')
frase
"""
Explanation: Additional exercises
Exercise 1: Counting vowels or consonants
Write the function conta_letras(frase, contar="vogais"), which receives as first parameter a string containing a sentence and as second parameter another string. The second parameter must be optional.
When the second parameter is set to "vogais", the function must return the number of vowels in the sentence. When it is set to "consoantes", the function must return the number of consonants. If the parameter is not passed to the function, the value "vogais" must be assumed.
Examples:
conta_letras('programamos em python')
6
conta_letras('programamos em python', 'vogais')
6
conta_letras('programamos em python', 'consoantes')
13
End of explanation
"""
def primeiro_lex(lista):
    resposta = lista[0]  # assume the first item is the answer... then check the rest.
    for s in lista:
        if s < resposta:  # full lexicographic comparison (upper-case sorts before lower-case)
            resposta = s
    return resposta
assert primeiro_lex(['oĺá', 'A', 'a', 'casa']) == 'A'
assert primeiro_lex(['AAAAAA', 'b']) == 'AAAAAA'
primeiro_lex(['casa', 'a', 'Z', 'A'])
primeiro_lex(['AAAAAA', 'b'])
"""
Explanation: Exercise 2: Lexicographic order
As requested in the second video of the week, write the function primeiro_lex(lista), which receives a list of strings as a parameter and returns the first string in lexicographic order. In this exercise, take upper and lower case into account.
Hint: review the second video lecture of this week.
Examples:
primeiro_lex(['oĺá', 'A', 'a', 'casa'])
'A'
primeiro_lex(['AAAAAA', 'b'])
'AAAAAA'
End of explanation
"""
def cria_matriz(tot_lin, tot_col, valor):
    matriz = [] # empty list
for i in range(tot_lin):
linha = []
for j in range(tot_col):
linha.append(valor)
matriz.append(linha)
return matriz
# import matriz  # uncomment only in the .py file
def soma_matrizes(A, B):
    num_lin = len(A)
    num_col = len(A[0])
    C = cria_matriz(num_lin, num_col, 0)  # matrix of zeros
    for lin in range(num_lin):  # iterate over the rows of the matrix
        for col in range(num_col):  # iterate over the columns of the matrix
C[lin][col] = A[lin][col] + B[lin][col]
return C
if __name__ == '__main__':
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
print(soma_matrizes(A, B))
# In the file matriz.py
def cria_matriz(tot_lin, tot_col, valor):
    matriz = [] # empty list
for i in range(tot_lin):
linha = []
for j in range(tot_col):
linha.append(valor)
matriz.append(linha)
return matriz
# And in the file soma_matrizes.py
import matriz
def soma_matrizes(A, B):
num_lin = len(A)
num_col = len(A[0])
    C = matriz.cria_matriz(num_lin, num_col, 0)  # matrix of zeros
    for lin in range(num_lin):  # iterate over the rows of the matrix
        for col in range(num_col):  # iterate over the columns of the matrix
C[lin][col] = A[lin][col] + B[lin][col]
return C
if __name__ == '__main__':
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
print(soma_matrizes(A, B))
'''
Matrix multiplication:
1 2 3     1 2     22 28
4 5 6  *  3 4  =  49 64
          5 6
1*1 + 2*3 + 3*5 = 22
1*2 + 2*4 + 3*6 = 28
4*1 + 5*3 + 6*5 = 49
4*2 + 5*4 + 6*6 = 64
c11 = a11*b11 + a12*b21 + a13*b31
c12 = a11*b12 + a12*b22 + a13*b32
c21 = a21*b11 + a22*b21 + a23*b31
c22 = a21*b12 + a22*b22 + a23*b32
'''
def multiplica_matrizes (A, B):
num_linA, num_colA = len(A), len(A[0])
num_linB, num_colB = len(B), len(B[0])
assert num_colA == num_linB
C = []
    for lin in range(num_linA):  # iterate over the rows of matrix A
        # start a new row
        C.append([])
        for col in range(num_colB):  # iterate over the columns of matrix B
            # append a new column to the current row
            C[lin].append(0)
for k in range(num_colA):
C[lin][col] += A[lin][k] * B[k][col]
return C
if __name__ == '__main__':
A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 2], [3, 4], [5, 6]]
print(multiplica_matrizes(A, B))
"""
Explanation: Week 3 - OOP – Object-Oriented Programming
End of explanation
"""
class Carro:
pass
meu_carro = Carro()
meu_carro
carro_do_trabalho = Carro()
carro_do_trabalho
meu_carro.ano = 1968
meu_carro.modelo = 'Fusca'
meu_carro.cor = 'azul'
meu_carro.ano
meu_carro.cor
carro_do_trabalho.ano = 1981
carro_do_trabalho.modelo = 'Brasília'
carro_do_trabalho.cor = 'amarela'
carro_do_trabalho.ano
novo_fusca = meu_carro # two variables pointing to the same object
novo_fusca # note that it is the same memory address
novo_fusca.ano += 10
novo_fusca.ano
novo_fusca
"""
Explanation: OOP
End of explanation
"""
class Pato:
pass
pato = Pato()
patinho = Pato()
if pato == patinho:
    print("We are at the same address!")
else:
    print("We are at different addresses!")
class Carro:
    def __init__(self, modelo, ano, cor): # __init__ is the class constructor
self.modelo = modelo
self.ano = ano
self.cor = cor
carro_do_meu_avo = Carro('Ferrari', 1980, 'vermelha')
carro_do_meu_avo
carro_do_meu_avo.cor
"""
Explanation: Practice tests
End of explanation
"""
def main():
carro1 = Carro('Brasília', 1968, 'amarela', 80)
carro2 = Carro('Fuscão', 1981, 'preto', 95)
carro1.acelere(40)
carro2.acelere(50)
carro1.acelere(80)
carro1.pare()
carro2.acelere(100)
class Carro:
def __init__(self, modelo, ano, cor, vel_max):
self.modelo = modelo
self.ano = ano
self.cor = cor
self.vel = 0
        self.maxV = vel_max  # maximum speed
    def imprima(self):
        if self.vel == 0:  # while stopped, the year can be read
            print('%s %s %d' % (self.modelo, self.cor, self.ano))
        elif self.vel < self.maxV:
            print('%s %s going at %d km/h' % (self.modelo, self.cor, self.vel))
        else:
            print('%s %s going too fast!' % (self.modelo, self.cor))
def acelere(self, velocidade):
self.vel = velocidade
if self.vel > self.maxV:
self.vel = self.maxV
self.imprima()
def pare(self):
self.vel = 0
self.imprima()
main()
"""
Explanation: OOP – Object-Oriented Programming – Part 2
End of explanation
"""
class Cafeteira:
def __init__(self, marca, tipo, tamanho, cor):
self.marca = marca
self.tipo = tipo
self.tamanho = tamanho
self.cor = cor
class Cachorro:
def __init__(self, raça, idade, nome, cor):
self.raça = raça
self.idade = idade
self.nome = nome
self.cor = cor
rex = Cachorro('vira-lata', 2, 'Bobby', 'marrom')
'vira-lata' == rex.raça
rex.idade > 2
rex.idade == '2'
rex.nome == 'rex'
# Bobby.cor == 'marrom'  # NameError: the object is referenced by the variable rex, not Bobby
rex.cor == 'marrom'
class Lista:
def append(self, elemento):
return "Oops! Este objeto não é uma lista"
lista = []
a = Lista()
b = a.append(7)
lista.append(b)
a
b
lista
"""
Explanation: PRACTICE TEST – OOP – Object-Oriented Programming – Part 2
End of explanation
"""
import math
class Bhaskara:
def delta(self, a, b, c):
return b ** 2 - 4 * a * c
    def main(self):
        a_digitado = float(input("Enter the value of a:"))
        b_digitado = float(input("Enter the value of b:"))
        c_digitado = float(input("Enter the value of c:"))
        print(self.calcula_raizes(a_digitado, b_digitado, c_digitado))
    def calcula_raizes(self, a, b, c):
        d = self.delta(a, b, c)  # self is passed implicitly; self.delta(self, ...) would be a bug
        if d == 0:
            raiz1 = (-b + math.sqrt(d)) / (2 * a)
            return 1, raiz1  # indicates there is one root, followed by its value
else:
if d < 0:
return 0
else:
raiz1 = (-b + math.sqrt(d)) / (2 * a)
raiz2 = (-b - math.sqrt(d)) / (2 * a)
return 2, raiz1, raiz2
b = Bhaskara()
b.main()  # main is a method, so it must be called on an instance
import Bhaskara
class TestBhaskara:
def testa_uma_raiz(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(1, 0, 0) == (1, 0)
def testa_duas_raizes(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(1, -5, 6) == (2, 3, 2)
def testa_zero_raizes(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(10, 10, 10) == 0
def testa_raiz_negativa(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(10, 20, 10) == (1, -1)
"""
Explanation: Testable Code
End of explanation
"""
# In the course notes this file is pytest_bhaskara.py
import Bhaskara
import pytest
class TestBhaskara:
@pytest.fixture
def b(self):
return Bhaskara.Bhaskara()
def testa_uma_raiz(self, b):
assert b.calcula_raizes(1, 0, 0) == (1, 0)
def testa_duas_raizes(self, b):
assert b.calcula_raizes(1, -5, 6) == (2, 3, 2)
def testa_zero_raizes(self, b):
assert b.calcula_raizes(10, 10, 10) == 0
def testa_raiz_negativa(self, b):
assert b.calcula_raizes(10, 20, 10) == (1, -1)
"""
Explanation: Fixture: a fixed value shared by a set of tests
@pytest.fixture
End of explanation
"""
def fatorial(n):
if n < 0:
return 0
i = fat = 1
while i <= n:
fat = fat * i
i += 1
return fat
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 1),
(1, 1),
(-10, 0),
(4, 24),
(5, 120)
])
def testa_fatorial(entrada, esperado):
assert fatorial(entrada) == esperado
"""
Explanation: Parametrization
End of explanation
"""
class Triangulo:
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
def perimetro(self):
return self.a + self.b + self.c
t = Triangulo(1, 1, 1)
t.a
t.b
t.c
t.perimetro()
"""
Explanation: Exercises
Write a version of TestBhaskara using @pytest.mark.parametrize
Write a battery of tests for your favorite piece of code
Programming assignment: Exercise list 3
Exercise 1: A class for triangles
Define the class Triangulo whose constructor receives 3 integer values corresponding to the sides a, b and c of a triangle.
The Triangulo class must also have a method perimetro, which takes no parameters and returns an integer corresponding to the triangle's perimeter.
t = Triangulo(1, 1, 1)
must assign to the variable t a reference to a triangle with sides 1, 1 and 1
An object of this class must respond to the following calls:
t.a
must return the value of side a of the triangle
t.b
must return the value of side b of the triangle
t.c
must return the value of side c of the triangle
t.perimetro()
must return an integer corresponding to the value of the triangle's perimeter.
End of explanation
"""
class Triangulo:
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
def tipo_lado(self):
if self.a == self.b and self.a == self.c:
return 'equilátero'
elif self.a != self.b and self.a != self.c and self.b != self.c:
return 'escaleno'
else:
return 'isósceles'
t = Triangulo(4, 4, 4)
t.tipo_lado()
u = Triangulo(3, 4, 5)
u.tipo_lado()
v = Triangulo(1, 3, 3)
v.tipo_lado()
t = Triangulo(5, 8, 5)
t.tipo_lado()
t = Triangulo(5, 5, 6)
t.tipo_lado()
'''
Exercise 1: Right triangles
In the Triangulo class, write the method retangulo(), which returns
True if the triangle is a right triangle, and False otherwise.
Examples:
t = Triangulo(1, 3, 5)
t.retangulo()
# should return False
u = Triangulo(3, 4, 5)
u.retangulo()
# should return True
'''
class Triangulo:
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
    def retangulo(self):
        # sort the sides so the hypotenuse candidate is always the last one;
        # this replaces the three-way comparison of the longest side
        a, b, c = sorted([self.a, self.b, self.c])
        return c ** 2 == a ** 2 + b ** 2
t = Triangulo(1, 3, 5)
t.retangulo()
t = Triangulo(3, 1, 5)
t.retangulo()
t = Triangulo(5, 1, 3)
t.retangulo()
u = Triangulo(3, 4, 5)
u.retangulo()
u = Triangulo(4, 5, 3)
u.retangulo()
u = Triangulo(5, 3, 4)
u.retangulo()
"""
Explanation: Exercise 2: Triangle types
In the Triangulo class defined in Question 1, write the method tipo_lado(), which returns a string saying whether the triangle is:
isósceles (two equal sides)
equilátero (all sides equal)
escaleno (all sides different)
Note that if the triangle is equilateral, the function must not return isósceles.
Examples:
t = Triangulo(4, 4, 4)
t.tipo_lado()
should return 'equilátero'
u = Triangulo(3, 4, 5)
u.tipo_lado()
should return 'escaleno'
End of explanation
"""
class Triangulo:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
    def semelhantes(self, outro):
        # a Triangulo object is not iterable (the earlier attempt failed with
        # TypeError: 'Triangulo' object is not iterable), so collect the sides
        # explicitly, sort both lists and compare the side ratios
        lados1 = sorted([self.a, self.b, self.c])
        lados2 = sorted([outro.a, outro.b, outro.c])
        return (lados1[0] * lados2[1] == lados2[0] * lados1[1] and
                lados1[1] * lados2[2] == lados2[1] * lados1[2])
t1 = Triangulo(2, 2, 2)
t2 = Triangulo(4, 4, 4)
t1.semelhantes(t2)
"""
Explanation: Exercise 2: Similar triangles
Still in the Triangulo class, write a method semelhantes(triangulo)
that receives an object of type Triangulo as a parameter and checks
whether the current triangle is similar to the one passed as a parameter.
If so, the method must return True; otherwise,
it must return False.
Check the similarity of the triangles through the lengths
of their sides.
Hint: you can put the sides of each triangle in a separate
list and sort the lists.
Example:
t1 = Triangulo(2, 2, 2)
t2 = Triangulo(4, 4, 4)
t1.semelhantes(t2)
should return True
End of explanation
"""
def busca_sequencial(seq, x):
    '''(list, element) -> bool'''
    for i in range(len(seq)):
        if seq[i] == x:
            return True
    return False
# C-flavored code =\  (also, avoid naming a variable "list": it shadows the built-in)
lista = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
busca_sequencial(lista, 3)
lista = ['casa', 'texto', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
busca_sequencial(lista, 'texto')
class Musica:
def __init__(self, titulo, interprete, compositor, ano):
self.titulo = titulo
self.interprete = interprete
self.compositor = compositor
self.ano = ano
class Buscador:
def busca_por_titulo(self, playlist, titulo):
for i in range(len(playlist)):
if playlist[i].titulo == titulo:
return i
return -1
def vamos_buscar(self):
playlist = [Musica("Ponta de Areia", "Milton Nascimento", "Milton Nascimento", 1975),
Musica("Podres Poderes", "Caetano Veloso", "Caetano Veloso", 1984),
Musica("Baby", "Gal Costa", "Caetano Veloso", 1969)]
onde_achou = self.busca_por_titulo(playlist, "Baby")
if onde_achou == -1:
print("A música buscada não está na playlist")
else:
preferida = playlist[onde_achou]
print(preferida.titulo, preferida.interprete, preferida.compositor, preferida.ano, sep = ', ')
b = Buscador()
b.vamos_buscar()
"""
Explanation: Week 4
Busca Sequencial
End of explanation
"""
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
            # initially the smallest element seen so far is the i-th
            posicao_do_minimo = i
            for j in range(i + 1, fim):
                if lista[j] < lista[posicao_do_minimo]:  # found a smaller element...
                    posicao_do_minimo = j  # ...so take note of its position.
            # put the smallest element found at the start of the sub-list by
            # swapping the elements at positions i and posicao_do_minimo
            lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
lista = [10, 3, 8, -10, 200, 17, 32]
o = Ordenador()
o.selecao_direta(lista)
lista
lista_nomes = ['maria', 'carlos', 'wilson', 'ana']
o.selecao_direta(lista_nomes)
lista_nomes
import random
print(random.randint(1, 10))
from random import shuffle
x = [i for i in range(100)]
shuffle(x)
x
o.selecao_direta(x)
x
def comprova_ordem(lista):
    # returns True if the list is in non-decreasing order
    for i in range(len(lista) - 1):
        if lista[i] > lista[i + 1]:
            return False  # early exit: a pair out of order was found
    return True
comprova_ordem(x)
lista = [1, 2, 3, 4, 5]
lista2 = [1, 3, 2, 4, 5]
comprova_ordem(lista)
comprova_ordem(lista2)
def busca_sequencial(seq, x):
for i in range(len(seq)):
if seq[i] == x:
return True
return False
def selecao_direta(lista):
fim = len(lista)
for i in range(fim-1):
pos_menor = i
for j in range(i+1,fim):
if lista[j] < lista[pos_menor]:
pos_menor = j
lista[i],lista[pos_menor] = lista[pos_menor],lista[i]
return lista
numeros = [55,33,0,900,-432,10,77,2,11]
"""
Explanation: Computational Complexity
Mathematical analysis of an algorithm's performance
An analytical study of:
How many operations an algorithm requires in order to run
How long it will take to run
How much memory it will use
Analysis of Sequential Search
Example:
The São Paulo phone book, assuming 2 million landline numbers.
Assuming each iteration of the for loop (one string comparison) takes 1 millisecond.
Worst case: 2000 s = 33.3 minutes
Average case (1 million): 1000 s = 16.6 minutes
Computational Complexity of Sequential Search
Given a list of size n
The computational complexity of sequential search is:
n, in the worst case
n/2, in the average case
Conclusion
Sequential search is good because it is very simple
It works well when the search is done over a small volume of data
Its computational complexity is very high
It is very slow when the volume of data is large
We therefore say it is an inefficient algorithm
Selection Sort Algorithm (Seleção Direta)
Selection sort
At each step, it looks for the smallest element in the still-unsorted part of the list and places it at the beginning of that part
In the 1st step, it finds the smallest element of all and places it in the first position of the list.
In the 2nd step, it finds the 2nd smallest element of the list and places it in the 2nd position.
In the 3rd step, it finds the 3rd smallest element of the list and places it in the 3rd position.
It repeats this until the list ends
End of explanation
"""
def ordenada(lista):
    for i in range(len(lista) - 1):
        if lista[i] > lista[i + 1]:
            return False  # early exit: found a pair out of order
    return True
"""
Explanation: Programming assignment: Exercise list 4
Exercise 1: Sorted list
Write the function ordenada(lista), which receives a list of integers as a parameter and returns the boolean True if the list is sorted and False if it is not.
End of explanation
"""
def busca(lista, elemento):
for i in range(len(lista)):
if lista[i] == elemento:
return i
return False
busca(['a', 'e', 'i'], 'e')
busca([12, 13, 14], 15)
"""
Explanation: Exercise 2: Sequential search
Implement the function busca(lista, elemento), which searches for a given element in a list and returns the index corresponding to the position where the element was found. Use the sequential search algorithm. When the element does not exist in the list, the function must return the boolean False.
busca(['a', 'e', 'i'], 'e')
should return => 1
busca([12, 13, 14], 15)
should return => False
End of explanation
"""
def lista_grande(n):
    import random
    # note: random.sample draws without repetition, so this version requires n <= 999
    return random.sample(range(1, 1000), n)
lista_grande(10)
"""
Explanation: Programming practice: Additional (optional) exercises
Exercise 1: Generating big lists
Write the function lista_grande(n), which receives an integer n as a parameter and returns a list containing n random integers.
End of explanation
"""
def ordena(lista):
    fim = len(lista)
    for i in range(fim - 1):
        pos_menor = i  # renamed from "min" to avoid shadowing the built-in
        for j in range(i + 1, fim):
            if lista[j] < lista[pos_menor]:
                pos_menor = j
        lista[i], lista[pos_menor] = lista[pos_menor], lista[i]
    return lista
lista = [10, 3, 8, -10, 200, 17, 32]
ordena(lista)
lista
"""
Explanation: Exercise 2: Sorting with selection sort
Implement the function ordena(lista), which receives a list of integers as a parameter and returns this list sorted. Use the selection sort algorithm.
End of explanation
"""
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
            # initially the smallest element seen so far is the i-th
            posicao_do_minimo = i
            for j in range(i + 1, fim):
                if lista[j] < lista[posicao_do_minimo]:  # found a smaller element...
                    posicao_do_minimo = j  # ...so take note of its position.
            # put the smallest element found at the start of the sub-list by
            # swapping the elements at positions i and posicao_do_minimo
            lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
"""
Explanation: Week 5 - The Bubble Sort Algorithm
Think of the list as a vertical test tube: the lighter elements rise to the surface like bubbles, while the heavier ones sink.
The algorithm traverses the list multiple times; on each pass it compares all adjacent elements and swaps the ones that are out of order
End of explanation
"""
lista = [10, 3, 8, -10, 200, 17, 32]
o = Ordenador()
o.bolha(lista)
lista
"""
Explanation: Example of the bubblesort algorithm in action:
Initial:
5 1 7 3 2
1 5 7 3 2
1 5 3 7 2
1 5 3 2 7 (end of the first pass)
1 3 5 2 7
1 3 2 5 7 (end of the second pass)
1 2 3 5 7
End of explanation
"""
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
            # initially the smallest element seen so far is the i-th
            posicao_do_minimo = i
            for j in range(i + 1, fim):
                if lista[j] < lista[posicao_do_minimo]:  # found a smaller element...
                    posicao_do_minimo = j  # ...so take note of its position.
            # put the smallest element found at the start of the sub-list by
            # swapping the elements at positions i and posicao_do_minimo
            lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
import random
import time
class ContaTempos:
    def lista_aleatoria(self, n):  # n = number of elements in the list
        lista = [0 for x in range(n)]  # list with n elements, all zero
        for i in range(n):
            lista[i] = random.randrange(1000)  # integers between 0 and 999
        return lista
    def compara(self, n):
        lista1 = self.lista_aleatoria(n)
        lista2 = lista1[:]  # a copy! plain assignment would alias the same list object
        o = Ordenador()
        antes = time.time()
        o.bolha(lista1)
        depois = time.time()
        print("Bubble sort took", depois - antes, "seconds")
        antes = time.time()
        o.selecao_direta(lista2)
        depois = time.time()
        print("Selection sort took", depois - antes, "seconds")
c = ContaTempos()
c.compara(1000)
print("Diferença de", 0.16308164596557617 - 0.05245494842529297)
c.compara(5000)
"""
Explanation: Performance Comparison
The time module:
the time() function
returns the time elapsed (in seconds) since 1/1/1970 (on Unix)
To measure a time interval:
import time
antes = time.time()
algoritmo_a_ser_cronometrado()
depois = time.time()
print("The algorithm took", depois - antes, "seconds to run")
"""
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
            # initially the smallest element seen so far is the i-th
            posicao_do_minimo = i
            for j in range(i + 1, fim):
                if lista[j] < lista[posicao_do_minimo]:  # found a smaller element...
                    posicao_do_minimo = j  # ...so take note of its position.
            # put the smallest element found at the start of the sub-list by
            # swapping the elements at positions i and posicao_do_minimo
            lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
def bolha_curta(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
trocou = False
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
trocou = True
            if not trocou:  # equivalent to: if trocou == False
                return
import random
import time
class ContaTempos:
    def lista_aleatoria(self, n):  # n = number of elements in the list
        return [random.randrange(1000) for x in range(n)]  # n random integers from 0 to 999
    def lista_quase_ordenada(self, n):
        lista = [x for x in range(n)]  # sorted list
        lista[n//10] = -500  # place -500 in the first tenth of the list
        return lista
    def compara(self, n):
        lista1 = self.lista_aleatoria(n)
        lista2 = lista1[:]  # independent copies: plain assignment would alias the
        lista3 = lista1[:]  # same list and invalidate the timing comparison
o = Ordenador()
print("Comparando lista aleatórias")
antes = time.time()
o.bolha(lista1)
depois = time.time()
print("Bolha demorou", depois - antes, "segundos")
antes = time.time()
o.selecao_direta(lista2)
depois = time.time()
print("Seleção direta demorou", depois - antes, "segundos")
antes = time.time()
o.bolha_curta(lista3)
depois = time.time()
print("Bolha otimizada", depois - antes, "segundos")
print("\nComparando lista quase ordenadas")
lista1 = self.lista_quase_ordenada(n)
lista2 = lista1
lista3 = lista2
antes = time.time()
o.bolha(lista1)
depois = time.time()
print("Bolha demorou", depois - antes, "segundos")
antes = time.time()
o.selecao_direta(lista2)
depois = time.time()
print("Seleção direta demorou", depois - antes, "segundos")
antes = time.time()
o.bolha_curta(lista3)
depois = time.time()
print("Bolha otimizada", depois - antes, "segundos")
c = ContaTempos()
c.compara(1000)
c.compara(5000)
"""
Explanation: An Improvement to the Bubble Sort Algorithm
It traverses the list multiple times; on each pass, it compares all adjacent elements and swaps those that are out of order.
Improvement: if no swap is performed during one of the iterations, the list is already sorted and we can stop the algorithm.
End of explanation
"""
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
posicao_do_minimo = i
for j in range(i + 1, fim):
if lista[j] < lista[posicao_do_minimo]:
posicao_do_minimo = j
lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
def bolha_curta(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
trocou = False
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
trocou = True
if not trocou:
return
import random
import time
class ContaTempos:
def lista_aleatoria(self, n):
from random import randrange
lista = [random.randrange(1000) for x in range(n)]
return lista
def lista_quase_ordenada(self, n):
lista = [x for x in range(n)]
lista[n//10] = -500
return lista
import pytest
class TestaOrdenador:
@pytest.fixture
def o(self):
return Ordenador()
@pytest.fixture
def l_quase(self):
c = ContaTempos()
return c.lista_quase_ordenada(100)
@pytest.fixture
def l_aleatoria(self):
c = ContaTempos()
return c.lista_aleatoria(100)
def esta_ordenada(self, l):
for i in range(len(l) - 1):
if l[i] > l[i+1]:
return False
return True
def test_bolha_curta_aleatoria(self, o, l_aleatoria):
o.bolha_curta(l_aleatoria)
assert self.esta_ordenada(l_aleatoria)
def test_selecao_direta_aleatoria(self, o, l_aleatoria):
o.selecao_direta(l_aleatoria)
assert self.esta_ordenada(l_aleatoria)
def test_bolha_curta_quase(self, o, l_quase):
o.bolha_curta(l_quase)
assert self.esta_ordenada(l_quase)
def test_selecao_direta_quase(self, o, l_quase):
o.selecao_direta(l_quase)
assert self.esta_ordenada(l_quase)
[5, 2, 1, 3, 4]
2 5 1 3 4
2 1 5 3 4
2 1 3 5 4
2 1 3 4 5
[2, 3, 4, 5, 1]
2 3 4 1 5
2 3 1 4 5
2 1 3 4 5
1 2 3 4 5
"""
Explanation: A site with sorting-algorithm animations: http://nicholasandre.com.br/sorting/
Automated tests for the sorting algorithms
End of explanation
"""
class Buscador:
def busca_por_titulo(self, playlist, titulo):
for i in range(len(playlist)):
if playlist[i].titulo == titulo:
return i
return -1
def busca_binaria(self, lista, x):
primeiro = 0
ultimo = len(lista) - 1
while primeiro <= ultimo:
meio = (primeiro + ultimo) // 2
if lista[meio] == x:
return meio
else:
                if x < lista[meio]:  # search the first (left) half of the list
                    ultimo = meio - 1  # position meio was already checked, so go one below it
else:
primeiro = meio + 1
return -1
lista = [-100, 0, 20, 30, 50, 100, 3000, 5000]
b = Buscador()
b.busca_binaria(lista, 30)
"""
Explanation: Binary Search
Goal: locate the element x in a list
Consider the element m in the middle of the list:
if x == m ==> found it!
if x < m ==> search only the 1st (left) half
if x > m ==> search only the 2nd (right) half
Repeat the process until x is found or the sub-list in question is empty
End of explanation
"""
def busca(lista, elemento):
primeiro = 0
ultimo = len(lista) - 1
while primeiro <= ultimo:
meio = (primeiro + ultimo) // 2
if lista[meio] == elemento:
print(meio)
return meio
else:
            if elemento < lista[meio]:  # search the first half of the list
                ultimo = meio - 1  # position meio was already checked, so go one below it
                print(meio)  # the function must print every index the algorithm tests
else:
primeiro = meio + 1
print(meio)
return False
busca(['a', 'e', 'i'], 'e')
busca([1, 2, 3, 4, 5], 6)
busca([1, 2, 3, 4, 5, 6], 4)
"""
Explanation: Complexity of Binary Search
Given a list of n elements,
in the worst case we have to perform
$$log_2n$$ comparisons.
In the phone-book example (with 2 million numbers):
$$log_2(2\ million) = 20.9$$
So: an answer in under 21 milliseconds!
Conclusion
Binary search is a very efficient algorithm.
When studying the efficiency of an algorithm, it is worth:
Analyzing its computational complexity
Running experiments that measure its performance
Programming assignment: Exercise list 5
Exercise 1: Binary search
Implement the function busca(lista, elemento), which searches for a given element in a list and returns the index corresponding to the position of the element found. Use the binary search algorithm. When the element does not exist in the list, the function must return the boolean False.
Besides returning the index corresponding to the position of the element found, your function must print each index tested by the algorithm.
Example:
busca(['a', 'e', 'i'], 'e')
1
must return => 1
busca([1, 2, 3, 4, 5], 6)
2
3
4
must return => False
busca([1, 2, 3, 4, 5, 6], 4)
2
4
3
must return => 3
End of explanation
"""
def bubble_sort(lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
print(lista)
print(lista)
return lista
bubble_sort([5, 1, 4, 2, 8])
#[1, 4, 2, 5, 8]
#[1, 2, 4, 5, 8]
#[1, 2, 4, 5, 8]
#must return [1, 2, 4, 5, 8]
bubble_sort([1, 3, 4, 2, 0, 5])
#Expected:
#[1, 3, 2, 0, 4, 5]
#[1, 2, 0, 3, 4, 5]
#[1, 0, 2, 3, 4, 5]
#[0, 1, 2, 3, 4, 5]
#[0, 1, 2, 3, 4, 5]
#The test results for your program were:
#***** [0.6 points]: Checking bubble sort behavior - Failed *****
#AssertionError: Expected
#[1, 3, 4, 2, 0, 5]
#[1, 3, 2, 0, 4, 5]
#[1, 2, 0, 3, 4, 5]
#[1, 0, 2, 3, 4, 5]
#[0, 1, 2, 3, 4, 5]
#[0, 1, 2, 3, 4, 5]
# but got
#[1, 3, 4, 2, 0, 5]
#[1, 3, 2, 0, 4, 5]
#[1, 2, 0, 3, 4, 5]
#[1, 0, 2, 3, 4, 5]
#[0, 1, 2, 3, 4, 5]
"""
Explanation: Exercise 2: Sorting with bubble sort
Implement the function bubble_sort(lista), which receives a list of integers as a parameter and returns this list sorted. Use the bubble sort algorithm.
Besides returning a sorted list, your function must print the partial results of the sort at the end of each iteration of the algorithm along the list. Note that, since the last iteration of the algorithm only verifies that the list is sorted, the last result must be printed twice. Therefore, if your algorithm needs two passes to sort the list, plus a third to verify that it is sorted, 3 partial results must be printed.
bubble_sort([5, 1, 4, 2, 8])
[1, 4, 2, 5, 8]
[1, 2, 4, 5, 8]
[1, 2, 4, 5, 8]
must return [1, 2, 4, 5, 8]
End of explanation
"""
def insertion_sort(lista):
    # Insertion sort: grow a sorted prefix one element at a time,
    # inserting each new element into its correct place in the prefix.
    for i in range(1, len(lista)):
        valor = lista[i]
        j = i - 1
        while j >= 0 and lista[j] > valor:  # shift larger elements one slot right
            lista[j + 1] = lista[j]
            j -= 1
        lista[j + 1] = valor
    return lista
"""
Explanation: Programming practice: Additional exercise (optional)
Exercise 1: Sorting with insertion sort
Implement the function insertion_sort(lista), which receives a list of integers as a parameter and returns this list sorted. Use the insertion sort algorithm.
End of explanation
"""
def fatorial(n):
    if n <= 1:  # base case of the recursion
        return 1
    else:
        return n * fatorial(n - 1)  # recursive call
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 1),
(1, 1),
(2, 2),
(3, 6),
(4, 24),
(5, 120)
])
def testa_fatorial(entrada, esperado):
assert fatorial(entrada) == esperado
#fatorial.py
def fatorial(n):
    if n <= 1:  # base case of the recursion
        return 1
    else:
        return n * fatorial(n - 1)  # recursive call
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 1),
(1, 1),
(2, 2),
(3, 6),
(4, 24),
(5, 120)
])
def testa_fatorial(entrada, esperado):
assert fatorial(entrada) == esperado
# fibonacci.py
# Fn = 0 if n = 0
# Fn = 1 if n = 1
# Fn = Fn-1 + Fn-2 if n > 1
def fibonacci(n):
if n < 2:
return n
else:
return fibonacci(n - 1) + fibonacci(n - 2)
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 0),
(1, 1),
(2, 1),
(3, 2),
(4, 3),
(5, 5),
(6, 8),
(7, 13)
])
def testa_fibonacci(entrada, esperado):
assert fibonacci(entrada) == esperado
# busca binária
def busca_binaria(lista, elemento, min = 0, max = None):
    if max is None:  # if nothing is passed, the upper bound is the end of the list
        max = len(lista) - 1
    if max < min:  # the element was not found
        return False
else:
meio = min + (max - min) // 2
if lista[meio] > elemento:
return busca_binaria(lista, elemento, min, meio - 1)
elif lista[meio] < elemento:
return busca_binaria(lista, elemento, meio + 1, max)
else:
return meio
a = [10, 20, 30, 40, 50, 60]
import pytest
@pytest.mark.parametrize("lista, valor, esperado", [
(a, 10, 0),
(a, 20, 1),
(a, 30, 2),
(a, 40, 3),
(a, 50, 4),
(a, 60, 5),
    (a, 70, False),
(a, 15, False),
(a, -10, False)
])
def testa_busca_binaria(lista, valor, esperado):
assert busca_binaria(lista, valor) == esperado
"""
Explanation: Week 6
Recursion (Definition. How to solve a recursive problem. Examples. Implementations.)
End of explanation
"""
def merge_sort(lista):
if len(lista) <= 1:
return lista
meio = len(lista) // 2
lado_esquerdo = merge_sort(lista[:meio])
lado_direito = merge_sort(lista[meio:])
    return merge(lado_esquerdo, lado_direito)  # merge the two halves
def merge(lado_esquerdo, lado_direito):
    if not lado_esquerdo:  # if the left side is an empty list...
        return lado_direito
    if not lado_direito:  # if the right side is an empty list...
        return lado_esquerdo
    if lado_esquerdo[0] < lado_direito[0]:  # compare the first element of the left side with the first of the right side
        return [lado_esquerdo[0]] + merge(lado_esquerdo[1:], lado_direito)  # merge(lado_esquerdo[1:]) ==> takes the left side, minus its first element
    return [lado_direito[0]] + merge(lado_esquerdo, lado_direito[1:])
"""
Explanation: Mergesort
Sorting by merging:
Recursively split the list in half until each sub-list contains only 1 element (and is therefore already sorted).
Repeatedly merge the sub-lists to produce new sorted lists.
Repeat until only 1 list remains at the end (which will be sorted).
Ex:
6 5 3 1 8 7 2 4
5 6 1 3 7 8 2 4
1 3 5 6 2 4 7 8
1 2 3 4 5 6 7 8
End of explanation
"""
def x(n):
    if n == 0:
        #<space A>
        print(n)
    else:
        #<space B>
        x(n-1)
        print(n)
        #<space C>
    #<space D>
#<space E>
x(10)
def x(n):
if n >= 0 or n <= 2:
print(n)
# return n
else:
print(n-1)
print(n-2)
print(n-3)
#return x(n-1) + x(n-2) + x(n-3)
print(x(6))
def busca_binaria(lista, elemento, min=0, max=None):
    if max is None:
max = len(lista)-1
if max < min:
return False
else:
meio = min + (max-min)//2
print(lista[meio])
if lista[meio] > elemento:
return busca_binaria(lista, elemento, min, meio - 1)
elif lista[meio] < elemento:
return busca_binaria(lista, elemento, meio + 1, max)
else:
return meio
a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]
a
busca_binaria(a, 99)
"""
Explanation: The base case of the recursion is the condition under which the problem is definitively solved. If that condition, the base case, is not satisfied, the problem keeps being reduced to smaller instances until the condition becomes satisfied.
The recursive call is the line where the function calls itself.
A recursive function is a function that calls itself.
Line 2 holds the condition that is the base case of the recursion
Line 5 holds the recursive call
For the algorithm to work correctly, line 3 must be replaced by "return 1"
if (n < 2):
if (n <= 1):
In <space A> and in <space C>
infinite loop
Result: 6. Recursive calls: none.
Result: 20. Recursive calls: 24
1
End of explanation
"""
def soma_lista_tradicional_way(lista):
soma = 0
for i in range(len(lista)):
soma += lista[i]
return soma
a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]
soma_lista_tradicional_way(a)
b = [-10, -2, 0, 5]
soma_lista_tradicional_way(b)
def soma_lista(lista):
if len(lista) == 1:
return lista[0]
else:
return lista[0] + soma_lista(lista[1:])
a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]
soma_lista(a)  # returns 2952
b = [-10, -2, 0, 5]
soma_lista(b)
"""
Explanation: Programming assignment: Exercise list 6
Exercise 1: Sum of the elements of a list
Implement the function soma_lista(lista), which receives a list of integers as a parameter and returns an integer corresponding to the sum of the elements of this list.
Your solution must be implemented using recursion.
End of explanation
"""
def encontra_impares_tradicional_way(lista):
lista_impares = []
for i in lista:
        if i % 2 != 0:  # it's odd!
lista_impares.append(i)
return lista_impares
a = [5, 66, 77, 99, 102, 239, 567, 875, 934]
encontra_impares_tradicional_way(a)
b = [2, 5, 34, 66, 100, 102, 999]
encontra_impares_tradicional_way(b)
stack = ['a','b']
stack.extend(['g','h'])
stack
def encontra_impares(lista):
if len(lista) == 0:
return []
    if lista[0] % 2 != 0:  # if the element is odd
return [lista[0]] + encontra_impares(lista[1:])
else:
return encontra_impares(lista[1:])
a = [5, 66, 77, 99, 102, 239, 567, 875, 934]
encontra_impares(a)
encontra_impares([5])
encontra_impares([1, 2, 3])
encontra_impares([2, 4, 6, 8])
encontra_impares([9])
encontra_impares([4, 11])
encontra_impares([2, 10, 20, 7, 30, 12, 6, 6])
encontra_impares([])
encontra_impares([4, 331, 1001, 4])
"""
Explanation: Exercise 2: Finding odd numbers in a list
Implement the function encontra_impares(lista), which receives a list of integers as a parameter and returns another list containing only the odd numbers from the given list.
Your solution must be implemented using recursion.
Hint: you will need the extend() method that lists have.
End of explanation
"""
def incomodam(n):
if type(n) != int or n <= 0:
return ''
else:
s1 = 'incomodam '
return s1 + incomodam(n - 1)
incomodam('-1')
incomodam(2)
incomodam(3)
incomodam(8)
incomodam(-3)
incomodam(1)
incomodam(7)
def incomodam(n):
    if type(n) != int or n <= 0:
        return ''
    else:
        return 'incomodam ' + incomodam(n - 1)
def elefantes(n):
    if type(n) != int or n <= 1:  # per the statement, n must be greater than 1
        return ''
    if n == 2:  # base case: the first two verses
        return 'Um elefante incomoda muita gente\n2 elefantes ' + incomodam(2) + 'muito mais'
    return (elefantes(n - 1)
            + '\n' + str(n - 1) + ' elefantes ' + incomodam(n - 1) + 'muita gente'
            + '\n' + str(n) + ' elefantes ' + incomodam(n) + 'muito mais')
elefantes(1)
print(elefantes(3))
elefantes(2)
elefantes(3)
print(elefantes(4))
type(str(3))
def incomodam(n):
    if type(n) != int or n <= 0:
        return ''
    return 'incomodam ' * n  # iterative variant, for comparison with the recursive one
def elefantes(n):
    # Iterative variant of the same song, for comparison (the exercise asks for recursion)
    if type(n) != int or n <= 1:
        return ''
    texto = 'Um elefante incomoda muita gente'
    for i in range(2, n + 1):
        if i > 2:
            texto += '\n' + str(i - 1) + ' elefantes ' + incomodam(i - 1) + 'muita gente'
        texto += '\n' + str(i) + ' elefantes ' + incomodam(i) + 'muito mais'
    return texto
elefantes(1)
elefantes(2)
"""
Explanation: Exercise 3: Elephants
This exercise has two parts:
Implement the function incomodam(n), which returns a string containing "incomodam " (the word followed by a space) n times. If n is not a strictly positive integer, the function must return an empty string. This function must be implemented using recursion.
Using the function above, implement the function elefantes(n), which returns a string containing the lyrics of "Um elefante incomoda muita gente..." from 1 up to n elephants. If n is not greater than 1, the function must return an empty string. This function must also be implemented using recursion.
Note that, for one elephant, you must spell it out in the singular ("Um elefante..."); for the others, use digits and the plural ("2 elefantes...").
Hint: remember that strings can be joined with the "+" operator. Also remember that numbers can be turned into strings with the str() function.
Hint: could the base case of the recursion be different from n == 1 here?
For example, a call to elefantes(4) must return a string containing:
Um elefante incomoda muita gente
2 elefantes incomodam incomodam muito mais
2 elefantes incomodam incomodam muita gente
3 elefantes incomodam incomodam incomodam muito mais
3 elefantes incomodam incomodam incomodam muita gente
4 elefantes incomodam incomodam incomodam incomodam muito mais
End of explanation
"""
def fib(n):  # prints the Fibonacci series up to n
    a, b = 0, 1
    while b < n:
        print(b, end = ' ')
        a, b = b, a + b
    print('\n\nThe last term is:', a)
fib(10)
def fib2(n):
result = []
a, b = 0, 1
while b < n:
result.append(b)
a, b = b, a + b
return result
fib2(60)
## Example 2: Using recursion
def fibR(n):
if n==1 or n==2:
return 1
return fibR(n-1)+fibR(n-2)
print(fibR(4))
def F(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return F(n-1)+F(n-2)
F(2)
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n - 1) + fibonacci(n - 2)
fibonacci(4)
fibonacci(2)
"""
Explanation: Programming practice: Additional exercises (optional)
Exercise 1: Fibonacci
Implement the function fibonacci(n), which receives an integer as a parameter and returns the integer corresponding to the n-th element of the Fibonacci sequence. Your solution must be implemented using recursion.
Example:
fibonacci(4)
must return => 3
fibonacci(2)
must return => 1
End of explanation
"""
def fatorial(x):
if x == 0 or x == 1:
return 1
else:
return x * fatorial(x - 1)
fatorial(4)
fatorial(5)
fatorial(3)
"""
Explanation: Exercise 2: Factorial
Implement the function fatorial(x), which receives an integer as a parameter and returns the integer corresponding to the factorial of x.
Your solution must be implemented using recursion.
End of explanation
"""
repo_name: quoniammm/happy-machine-learning | path: Udacity-ML/boston_housing-master_1/boston_housing.ipynb | license: mit
# 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
# Pretty display for notebooks
# 让结果在notebook中显示
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
"""
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation and Validation
Project 1: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! Some example code is already provided in this file, but you will need to implement additional functionality to make the project run successfully. Unless explicitly requested, you do not need to modify any of the given code. Sections beginning with 'Exercise' indicate content where you must implement functionality. Each section comes with detailed instructions, and the parts to implement are marked with 'TODO' in the comments. Please read all the hints carefully!
Besides implementing code, you must also answer some questions related to the project and your implementation. Each question you need to answer is titled 'Question X'. Please read each question carefully and write a complete answer in the 'Answer' text box following the question. Your project will be graded on your answers to the questions and the functionality your code implements.
Hint: Code and Markdown cells can be run with the Shift + Enter shortcut. Markdown cells can also be edited by double-clicking.
Getting Started
In this project, you will train and test a model on data collected from homes in suburbs of Boston, Massachusetts, and evaluate its performance and predictive power. A model trained well on this data can be used to make certain predictions about a home, in particular its value. For someone like a real-estate agent, such a prediction model proves very valuable in daily work.
The dataset for this project comes from the UCI Machine Learning Repository. The Boston housing data was collected starting in 1978, with 506 data points covering 14 features of homes in different suburbs of Boston. The original dataset was preprocessed as follows for this project:
- 16 data points with a 'MEDV' value of 50.0 were removed. They most likely contain missing or censored values.
- 1 data point with an 'RM' value of 8.78 was removed as an outlier.
- For this project, only the 'RM', 'LSTAT', 'PTRATIO' and 'MEDV' features are essential; the remaining irrelevant features were removed.
- The 'MEDV' feature was rescaled as needed to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with some Python libraries required for this project. If the dataset size is returned successfully, the dataset has loaded correctly.
End of explanation
"""
# TODO: Minimum price of the data
minimum_price = prices.min()
# TODO: Maximum price of the data
maximum_price = prices.max()
# TODO: Mean price of the data
mean_price = prices.mean()
# TODO: Median price of the data
median_price = prices.median()
# TODO: Standard deviation of prices of the data
std_price = prices.std()
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
"""
Explanation: Data Exploration
In the first section of this project, you will make a cursory investigation of the Boston housing data and provide your observations. Familiarizing yourself with the data through an exploratory process helps you better understand and explain your results.
Since the ultimate goal of this project is to build a model that predicts house values, we need to separate the dataset into features and the target variable. The features 'RM', 'LSTAT' and 'PTRATIO' give us quantitative information about each data point. The target variable, 'MEDV', is the variable we want to predict. They are stored in the variables features and prices, respectively.
Exercise: Calculate Statistics
Your first programming exercise is to compute descriptive statistics about the Boston housing prices. numpy has already been imported for you; use this library to perform the necessary calculations. These statistics are very important for analyzing the model's prediction results.
In the code below, you need to:
- Compute the minimum, maximum, mean, median and standard deviation of 'MEDV' in prices;
- Store each result in its corresponding variable.
End of explanation
"""
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
"""
Explanation: Question 1 - Feature Observation
As stated earlier, we focus on three values in this project: 'RM', 'LSTAT' and 'PTRATIO'. For each data point:
- 'RM' is the average number of rooms per home in the neighborhood;
- 'LSTAT' is the percentage of homeowners in the neighborhood considered lower class (working poor);
- 'PTRATIO' is the ratio of students to teachers in the neighborhood's primary and secondary schools (students/teacher).
Using your intuition, for each of the three features above, do you think an increase in its value would cause 'MEDV' to increase or decrease? Each answer needs a justification.
Hint: Would you expect a home with an 'RM' value of 6 to be worth more or less than a home with an 'RM' value of 7?
Answer:
As 'RM' increases, 'MEDV' increases, because the homes are larger;
As 'LSTAT' increases, 'MEDV' decreases, because there are more low-income residents;
As 'PTRATIO' increases, 'MEDV' decreases, because each teacher has to serve more students, so educational resources become scarcer.
Developing a Model
In the second section of the project, you will develop the tools and techniques needed for your model to make predictions. Using these tools and techniques to measure each model's performance accurately greatly strengthens confidence in your predictions.
Exercise: Define a Performance Metric
It is hard to measure the quality of a model without quantitatively evaluating its performance on training and testing. This is usually done with some metric, computed from some kind of error or goodness of fit. In this project you will quantify model performance by computing the coefficient of determination, R<sup>2</sup>. The coefficient of determination is a very common statistic in regression analysis and is often used as a standard measure of a model's predictive power.
R<sup>2</sup> ranges from 0 to 1 and captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than always predicting the mean, while a model with an R<sup>2</sup> of 1 predicts the target variable perfectly. Values between 0 and 1 indicate what percentage of the target variable's variation can be explained by the features. A model can also have a negative R<sup>2</sup>, meaning its predictions are sometimes far worse than simply using the mean of the target variable.
In the performance_metric function in the code below, you need to:
- Use r2_score from sklearn.metrics to compute the R<sup>2</sup> between y_true and y_predict as a measure of performance;
- Store the score in the variable score.
End of explanation
"""
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
"""
Explanation: Question 2 - Goodness of Fit
Assume a dataset contains five data points and a model makes the following predictions for the target variable:

| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |

Would you consider this model to have successfully captured the variation of the target variable? If so, explain why; if not, give your reasons.
Run the code below to compute this model's coefficient of determination with the performance_metric function.
End of explanation
"""
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
"""
Explanation: Answer: I think it did. The coefficient of determination ranges from 0 to 1, and the closer it is to 1, the better the model can predict the target variable. The computed coefficient is 0.923, which indicates the model describes the variation of the target variable well.
Exercise: Shuffle and Split Data
Next, you need to split the Boston housing dataset into training and testing subsets. Typically the data is also shuffled in the process, to remove any bias due to the ordering of the dataset.
In the code below, you need to:
- Use train_test_split from sklearn.model_selection to split both features and prices into training and testing subsets;
- Use 80% of the data for training and 20% for testing;
- Pick a number for random_state in train_test_split, which ensures reproducible results;
- The resulting subsets should be X_train, X_test, y_train and y_test.
End of explanation
"""
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
"""
Explanation: Question 3 - Training and Testing
What is the benefit of splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong if we had no data to test the model on?
Answer: Doing so lets us use the test set to estimate the model's generalization error and assess how good the model is.
Analyzing Model Performance
In the third section of the project, you will look at several models' learning and testing performance on different datasets. Additionally, you will focus on one particular algorithm, increasing its 'max_depth' parameter while training on the full training set, and observe how this parameter change affects model performance. Plotting your model's performance is very useful during analysis; visualization lets us see behaviors that are not apparent from the raw results alone.
Learning Curves
The code cell below produces four plots of a decision tree model's performance at different maximum depths. Each curve visually shows how the training and testing scores of the model's learning curve change as the amount of training data increases. Note that the shaded region of a curve denotes its uncertainty (measured by the standard deviation). The model is scored on both training and testing with the coefficient of determination, R<sup>2</sup>.
Run the code cell below and use the plots to answer the question.
End of explanation
"""
vs.ModelComplexity(X_train, y_train)
"""
Explanation: Question 4 - Learning the Data
Choose one of the plots above and state its maximum depth. As the amount of training data increases, how does the score of the training curve change? The testing curve? If given more training data, would the model's performance effectively improve?
Hint: Do the learning curves eventually converge to particular values?
Answer: The second plot, with a maximum depth of 3. The training curve's score gradually decreases and the testing curve's score gradually increases, but both eventually level off, so more training data would not effectively improve the model.
Complexity Curves
The code cell below produces a plot of a decision tree model that has been trained and validated at different maximum depths. The plot contains two curves, one for training and one for testing. Like the learning curves, the shaded regions denote the uncertainty in the curves, and both the training and testing scores are computed with the performance_metric function.
Run the code cell below and use the plot to answer the two questions that follow.
End of explanation
"""
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
"""
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does its prediction suffer from high bias or high variance? What about a maximum depth of 10? Which features of the plot support your conclusions?
Hint: How do you know whether a model suffers from high bias or high variance?
Answer: At depth 1, it suffers from high bias: both the testing and training scores are low and the gap between them is small, meaning the model cannot predict the data well.
At depth 10, it suffers from high variance: the gap between the testing and training scores is large, which indicates overfitting.
Question 6 - Best-Guess Optimal Model
What maximum depth do you think results in a model that best predicts unseen data? What is your basis for this answer?
Answer: 3. At that depth the gap between the testing and training scores is smallest, and the testing score reaches its highest value.
Evaluating Model Performance
In this final section of the project, you will construct a model yourself and, using the optimized fit_model function, predict a home's value based on the client's home features.
Question 7 - Grid Search
What is the grid search technique, and how can it be used to optimize a learning algorithm?
Answer:
It is a technique that lays the parameter values out as a grid.
It automatically generates a "grid" of combinations of the different parameter values:
===================================
('param1', param3) | ('param1', param4)
('param2', param3) | ('param2', param4)
==================================
By trying every parameter combination in the "grid" and selecting the best combination, it optimizes the learning algorithm.
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? How does grid search, combined with cross-validation, select the best parameter combination?
Hint: Much like the reasoning behind needing a testing set, what could go wrong with grid search without cross-validation? What does the 'cv_results' attribute of GridSearchCV tell us?
Answer:
K-fold cross-validation splits the training data evenly into k bins; each bin serves as the test data once while the rest serve as training data, and after k rounds the training results are averaged, which yields higher accuracy.
It makes grid search's evaluation of each candidate more accurate; without cross-validation, the model's generalization-error estimate would be noisier and would degrade the effectiveness of grid search.
Grid search makes the fitting function try every parameter combination, and returns a suitable estimator automatically tuned to the best parameter combination.
Exercise: Fit a Model
In this final exercise, you will bring together everything you have learned and train a model with the decision tree algorithm. To make sure you produce an optimized model, you will train it with grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a supervised learning algorithm.
In addition, you will find your implementation uses ShuffleSplit(). It is also a form of cross-validation (see the variable 'cv_sets'). Although it is not the k-fold cross-validation described in Question 8, this validation method is just as useful! Here ShuffleSplit() creates 10 ('n_splits') shuffled sets, and in each set 20% ('test_size') of the data is used as the validation set. While you implement this, think about its similarities to and differences from k-fold cross-validation.
In the fit_model function below, you need to:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor;
- Store the regressor in the variable 'regressor';
- Create a dictionary for 'max_depth' whose value is an array from 1 to 10, and store it in the variable 'params';
- Use make_scorer from sklearn.metrics to create a scoring function;
- Pass performance_metric as an argument to that function;
- Store the scoring function in the variable 'scoring_fnc';
- Use GridSearchCV from sklearn.model_selection to create a grid search object;
- Pass the variables 'regressor', 'params', 'scoring_fnc' and 'cv_sets' as arguments to this object;
- Store the GridSearchCV object in the variable 'grid'.
If you are unfamiliar with how Python functions take multiple parameters, you may refer to this MIT course video.
End of explanation
"""
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
"""
Explanation: Making Predictions
Once we have trained a model on data, it can be used to make predictions on new data. With a decision tree regressor, the model has learned what questions to ask about newly input data and returns a prediction for the target variable. You can use these predictions to gain information about data whose target variable is unknown, provided that data was not part of the training set.
Question 9 - Optimal Model
What maximum depth does the optimal model have? Is this the same as your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and obtain the optimized model.
End of explanation
"""
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
"""
Explanation: Answer: 4. Different from the guess, which was 3.
Question 10 - Predicting Selling Prices
Imagine you are a real-estate agent in the Boston area hoping to use this model to help your clients price the homes they want to sell. You have collected the following information from three clients:

| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as % considered lower class) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |

What price would you recommend each client sell their home at? Judging from the feature values, do these prices seem reasonable?
Hint: Use the statistics you computed in the Data Exploration section to help justify your answer.
Run the code block below to use your optimized model to predict each client's home value.
End of explanation
"""
vs.PredictTrials(features, prices, fit_model, client_data)
"""
Explanation: Answer:
Client 1: $403,025.00.
Client 2: $237,478.72.
Client 3: $931,636.36.
These prices are reasonable. Taking Client 3 as an example: that home has the most rooms, the lowest neighborhood poverty level, and the richest educational resources, so it is the most expensive. By the same reasoning, the predictions for Clients 1 and 2 are also reasonable.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize to new data; sometimes the learning algorithm is not suited to the particular structure of the data; and sometimes the sample itself may be too noisy or too small for the model to predict the target variable accurately. In these cases we say the model is underfitted. Run the code cell below to execute the fit_model function ten times with different training and testing sets. Observe how the prediction for a particular client changes as the training data changes.
End of explanation
"""
### Your code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
# Pretty display for notebooks
%matplotlib inline
# Load the Beijing housing dataset
data = pd.read_csv('bj_housing.csv')
prices = data['Value']
features = data.drop('Value', axis = 1)
print features.head()
print prices.head()
# Success
# print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
client_data = [[128, 3, 2, 0, 2005, 13], [150, 3, 2, 0, 2005, 13]]
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ¥{:,.2f}".format(i+1, price)
"""
Explanation: Question 11 - Applicability
In a few sentences, discuss whether the constructed model can be used in a real-world setting.
Hint: Answer the following questions and give reasons for your conclusions:
- Is data that was collected in 1978 still relevant today?
- Are the features present in the data sufficient to describe a home?
- Is the model robust enough to make consistent predictions?
- Would data collected in an urban city like Boston be applicable to other, rural towns?
Answer: No. First, these are prices for Boston only and are not representative; the data is also dated, and house prices depend on other characteristics not captured here, such as the quality of the interior finish.
Optional Question - Predicting Beijing Housing Prices
(The result of this question does not affect whether the project passes.) Through the practice above you should have a good grasp of some common machine-learning concepts, but modeling 1970s Boston housing prices is of limited practical interest. Now apply what you have learned to the Beijing housing dataset bj_housing.csv.
Disclaimer: since Beijing housing prices are directly affected by many factors such as macroeconomic conditions and policy adjustments, the predictions are for reference only.
The features of this dataset are:
- Area: floor area of the home, in square meters
- Room: number of bedrooms
- Living: number of living rooms
- School: whether the home is in a school district, 0 or 1
- Year: year of construction
- Floor: floor the home is on
Target variable:
- Value: selling price of the home, in 10,000 RMB
Following what you learned above, you can use this dataset to practice splitting and shuffling the data, defining a performance metric, training a model, evaluating model performance, tuning parameters with grid search and cross-validation to pick the best parameters, comparing the tuned and untuned models, and finally reporting the best model's score on the validation set.
End of explanation
"""
dcavar/python-tutorial-for-ipython | notebooks/Python Scikit-Learn for Computational Linguists.ipynb | apache-2.0

from sklearn import datasets
"""
Explanation: Python Scikit-Learn for Computational Linguists
(C) 2017 by Damir Cavar
Version: 1.0, January 2017
License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0)
This tutorial was developed as part of my course material for the course Machine Learning for Computational Linguistics in the Computational Linguistics Program of the Department of Linguistics at Indiana University.
This material is based on various other tutorials, including:
An introduction to machine learning with scikit-learn
Introduction
One of the problems or issues that Machine Learning aims to solve is to make predictions from previous experience. This can be achieved by extracting features from existing data collections. Scikit-Learn comes with some sample datasets. The datasets are the Iris flower data (classification), the Pen-Based Recognition of Handwritten Digits Data Set (classification), and the Boston Housing Data Set (regression). The datasets are part of the Scikit and do not have to be downloads. We can load these datasets by loading the datasets module from sklearn and then loading the individual datasets.
End of explanation
"""
diabetes = datasets.load_diabetes()
"""
Explanation: We can load a dataset using the following function:
End of explanation
"""
iris = datasets.load_iris()
print(iris.DESCR)
"""
Explanation: Some datasets provide a description in the DESCR field:
End of explanation
"""
digits = datasets.load_digits()
print(digits)
"""
Explanation: We can see the content of the datasets by printing them out:
End of explanation
"""
print(digits.data)
"""
Explanation: The data of the digits dataset is stored in the data member. This data represents the features of the digit image.
End of explanation
"""
print(digits.target)
print(digits.DESCR)
"""
Explanation: The target member contains the real target labels or values of the feature sets, that is the numbers that the feature sets represent.
End of explanation
"""
print(0, '\n', digits.images[0])
print()
print(1, '\n', digits.images[1])
"""
Explanation: In the case of the digits dataset, the 2D shapes of the images are mapped onto an 8x8 matrix. You can print them out using the images member:
End of explanation
"""
from sklearn import svm
"""
Explanation: The digits dataset is a set of images of digits that can be used to train a classifier and test the classification on unseen images. To use a Support Vector Classifier we import the svm module:
End of explanation
"""
classifier = svm.SVC(gamma=0.001, C=100.)
"""
Explanation: We create a classifier instance with manually set parameters. The parameters can be automatically set using various methods.
End of explanation
"""
classifier.fit(digits.data[:-1], digits.target[:-1])
"""
Explanation: The classifier instance has to be trained on the data. The fit method of the instance requires two parameters, the features and the array with the corresponding classes or labels. The features are stored in the data member. The labels are stored in the target member. We use all but the last data and target element for training or fitting.
End of explanation
"""
print("Prediction:", classifier.predict(digits.data[-1:]))
print("Image:\n", digits.images[-1])
print("Label:", digits.target[-1])
"""
Explanation: We can use the predict method to request a guess about the last element in the data member:
End of explanation
"""
classifier.fit(iris.data, iris.target)
"""
Explanation: Storing Models
We can train a new model from the Iris data using the fit method:
End of explanation
"""
import pickle
"""
Explanation: To store the model in a file, we can use the pickle module:
End of explanation
"""
s = pickle.dumps(classifier)
"""
Explanation: We can serialize the classifier to a variable that we can process or save to disk:
End of explanation
"""
ofp = open("irisModel.dat", mode='bw')
ofp.write(s)
ofp.close()
"""
Explanation: We will save the model to a file irisModel.dat.
End of explanation
"""
ifp = open("irisModel.dat", mode='br')
model = ifp.read()
ifp.close()
classifier2 = pickle.loads(model)
"""
Explanation: The model can be read back into memory using the following code:
End of explanation
"""
print("Prediction:", classifier2.predict(iris.data[0:1]))
print("Target:", iris.target[0])
"""
Explanation: We can use this unpickled classifier2 in the same way as shown above:
End of explanation
"""
import numpy
"""
Explanation: Nearest Neighbor Classification
We will use the numpy module for arrays and operations on those.
End of explanation
"""
print(iris.target)
print(numpy.unique(iris.target))
"""
Explanation: We can print out the unique list (or array) of classes (or targets) from the iris dataset using the following code:
End of explanation
"""
numpy.random.seed(0)
indices = numpy.random.permutation(len(iris.data))
print(indices)
indices = numpy.random.permutation(len(iris.data))
print(indices)
text = "Hello"
for i in range(len(text)):
print(i, ':', text[i])
irisTrain_data = iris.data[indices[:-10]]
irisTrain_target = iris.target[indices[:-10]]
irisTest_data = iris.data[indices[-10:]]
irisTest_target = iris.target[indices[-10:]]
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(irisTrain_data, irisTrain_target)
knn.predict(irisTest_data)
irisTest_target
"""
Explanation: We can split the iris dataset into a training and a testing dataset using random permutations.
End of explanation
"""
from sklearn import cluster
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(iris.data)
print(k_means.labels_[::10])
print(iris.target[::10])
"""
Explanation: Clustering
K-means Clustering
End of explanation
"""
svc = svm.SVC(kernel='linear', gamma=0.001, C=100.)
svc.fit(digits.data[:-1], digits.target[:-1])
print(svc.predict(digits.data[-1:]))
print(digits.target[-1:])
"""
Explanation: Classification
Using Kernels
Linear kernel
End of explanation
"""
svc = svm.SVC(kernel='poly', degree=3, gamma=0.001, C=100.)
svc.fit(digits.data[:-1], digits.target[:-1])
print(svc.predict(digits.data[-1:]))
print(digits.target[-1:])
"""
Explanation: Polynomial kernel:
The degree parameter specifies the degree of the polynomial.
End of explanation
"""
svc = svm.SVC(kernel='rbf', gamma=0.001, C=100.)
svc.fit(digits.data[:-1], digits.target[:-1])
print(svc.predict(digits.data[-1:]))
print(digits.target[-1:])
"""
Explanation: RBF kernel (Radial Basis Function):
End of explanation
"""
from sklearn import linear_model
logistic = linear_model.LogisticRegression(C=1e5)
logistic.fit(irisTrain_data, irisTrain_target)
logistic.predict(irisTest_data)
irisTest_target
from sklearn import ensemble
rfc = ensemble.RandomForestClassifier()
rfc.fit(irisTrain_data, irisTrain_target)
rfc.predict(irisTest_data)
irisTest_target
text_s1 = """
User (computing)
A user is a person who uses a computer or network service. Users generally use a system or a software product[1] without the technical expertise required to fully understand it.[1] Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration.[2][3]
A user often has a user account and is identified to the system by a username (or user name). Other terms for username include login name, screenname (or screen name), nickname (or nick) and handle, which is derived from the identical Citizen's Band radio term.
Some software products provide services to other systems and have no direct end users.
End user
See also: End user
End users are the ultimate human users (also referred to as operators) of a software product. The term is used to abstract and distinguish those who only use the software from the developers of the system, who enhance the software for end users.[4] In user-centered design, it also distinguishes the software operator from the client who pays for its development and other stakeholders who may not directly use the software, but help establish its requirements.[5][6] This abstraction is primarily useful in designing the user interface, and refers to a relevant subset of characteristics that most expected users would have in common.
In user-centered design, personas are created to represent the types of users. It is sometimes specified for each persona which types of user interfaces it is comfortable with (due to previous experience or the interface's inherent simplicity), and what technical expertise and degree of knowledge it has in specific fields or disciplines. When few constraints are imposed on the end-user category, especially when designing programs for use by the general public, it is common practice to expect minimal technical expertise or previous training in end users.[7] In this context, graphical user interfaces (GUIs) are usually preferred to command-line interfaces (CLIs) for the sake of usability.[8]
The end-user development discipline blurs the typical distinction between users and developers. It designates activities or techniques in which people who are not professional developers create automated behavior and complex data objects without significant knowledge of a programming language.
Systems whose actor is another system or a software agent have no direct end users.
User account
A user's account allows a user to authenticate to a system and potentially to receive authorization to access resources provided by or connected to that system; however, authentication does not imply authorization. To log in to an account, a user is typically required to authenticate oneself with a password or other credentials for the purposes of accounting, security, logging, and resource management.
Once the user has logged on, the operating system will often use an identifier such as an integer to refer to them, rather than their username, through a process known as identity correlation. In Unix systems, the username is correlated with a user identifier or user id.
Computer systems operate in one of two types based on what kind of users they have:
Single-user systems do not have a concept of several user accounts.
Multi-user systems have such a concept, and require users to identify themselves before using the system.
Each user account on a multi-user system typically has a home directory, in which to store files pertaining exclusively to that user's activities, which is protected from access by other users (though a system administrator may have access). User accounts often contain a public user profile, which contains basic information provided by the account's owner. The files stored in the home directory (and all other directories in the system) have file system permissions which are inspected by the operating system to determine which users are granted access to read or execute a file, or to store a new file in that directory.
While systems expect most user accounts to be used by only a single person, many systems have a special account intended to allow anyone to use the system, such as the username "anonymous" for anonymous FTP and the username "guest" for a guest account.
Usernames
Various computer operating-systems and applications expect/enforce different rules for the formats of user names.
In Microsoft Windows environments, for example, note the potential use of:[9]
User Principal Name (UPN) format - for example: UserName@orgName.com
Down-Level Logon Name format - for example: DOMAIN\accountName
Some online communities use usernames as nicknames for the account holders. In some cases, a user may be better known by their username than by their real name, such as CmdrTaco (Rob Malda), founder of the website Slashdot.
Terminology
Some usability professionals have expressed their dislike of the term "user", proposing it to be changed.[10] Don Norman stated that "One of the horrible words we use is 'users'. I am on a crusade to get rid of the word 'users'. I would prefer to call them 'people'."[11]
See also
Information technology portal iconSoftware portal
1% rule (Internet culture)
Anonymous post
Pseudonym
End-user computing, systems in which non-programmers can create working applications.
End-user database, a collection of data developed by individual end-users.
End-user development, a technique that allows people who are not professional developers to perform programming tasks, i.e. to create or modify software.
End-User License Agreement (EULA), a contract between a supplier of software and its purchaser, granting the right to use it.
User error
User agent
User experience
User space
"""
text_s2 = """
Personal account
A personal account is an account for use by an individual for that person's own needs. It is a relative term to differentiate them from those accounts for corporate or business use. The term "personal account" may be used generically for financial accounts at banks and for service accounts such as accounts with the phone company, or even for e-mail accounts.
Banking
In banking "personal account" refers to one's account at the bank that is used for non-business purposes. Most likely, the service at the bank consists of one of two kinds of accounts or sometimes both: a savings account and a current account.
Banks differentiate their services for personal accounts from business accounts by setting lower minimum balance requirements, lower fees, free checks, free ATM usage, free debit card (Check card) usage, etc. The term does not apply to any one service or limit the banks from providing the same services to non-individuals. Personal account can be classified into three categories: 1. Persons of Nature, 2. Persons of Artificial Relationship, 3. Persons of Representation.
At the turn of the 21st century, many banks started offering free checking, a checking account with no minimum balance, a free check book, and no hidden fees. This encouraged Americans who would otherwise live from check to check to open their "personal" account at financial institutions. For businesses that issue corporate checks to employees, this enables reduction in the amount of paperwork.
Finance
In the financial industry, 'personal account' (usually "PA") refers to trading or investing for yourself, rather than the company one is working for. There are often restrictions on what may be done with a PA, to avoid conflict of interest.
"""
test_text = """
A user account is a location on a network server used to store a computer username, password, and other information. A user account allows or does not allow a user to connect to a network, another computer, or other share. Any network that has multiple users requires user accounts.
"""
from nltk import word_tokenize, sent_tokenize
sentences_s1 = sent_tokenize(text_s1)
#print(sentences_s1)
toksentences_s1 = [ word_tokenize(sentence) for sentence in sentences_s1 ]
#print(toksentences_s1)
tokens_s1 = set(word_tokenize(text_s1))
tokens_s2 = set(word_tokenize(text_s2))
#print(set.intersection(tokens_s1, tokens_s2))
unique_s1 = tokens_s1 - tokens_s2
unique_s2 = tokens_s2 - tokens_s1
#print(unique_s1)
#print(unique_s2)
testTokens = set(word_tokenize(test_text))
print(len(set.intersection(testTokens, unique_s1)))
print(len(set.intersection(testTokens, unique_s2)))
"""
Explanation: Logistic Regression
End of explanation
"""
mathemage/h2o-3 | examples/deeplearning/notebooks/deeplearning_anomaly_detection.ipynb | apache-2.0

import h2o
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
h2o.init()
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import os.path
PATH = os.path.expanduser("~/h2o-3/")
train_ecg = h2o.import_file(PATH + "smalldata/anomaly/ecg_discord_train.csv")
test_ecg = h2o.import_file(PATH + "smalldata/anomaly/ecg_discord_test.csv")
"""
Explanation: Deep Autoencoder Networks
High-dimensional data can be converted to low-dimensional codes by training a multilayer neural
network with a small central layer to reconstruct high-dimensional input vectors.
This kind of neural network is called an autoencoder.
Autoencoders are a nonlinear dimensionality-reduction technique (Hinton et al., 2006) used for unsupervised learning of features, and they can learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
Anomaly Heart Beats Detection
If enough training data
resembling some underlying pattern is provided, we can train the network to learn the patterns in the data.
An
anomalous test point is a point that does not match the typical data patterns. The autoencoder will
likely have a high error rate in reconstructing this data, indicating the anomaly.
This framework is used to develop an anomaly detection demonstration using a
deep autoencoder. The dataset is an ECG time series of heartbeats and the goal
is to determine which heartbeats are outliers. The training data (20 “good”
heartbeats) and the test data (training data with 3 “bad” heartbeats appended
for simplicity) can be downloaded directly into the H2O cluster, as shown below.
Each row represents a single heartbeat.
End of explanation
"""
train_ecg.shape
# transpose the frame to have the time series as a single column to plot
train_ecg.as_data_frame().T.plot(legend=False, title="ECG Train Data", color='blue'); # don't display the legend
"""
Explanation: let's explore the dataset.
End of explanation
"""
model = H2OAutoEncoderEstimator(
activation="Tanh",
hidden=[50],
l1=1e-5,
score_interval=0,
epochs=100
)
model.train(x=train_ecg.names, training_frame=train_ecg)
model
"""
Explanation: In the training data we have 20 time series, each with 210 data points. Notice that all the lines are compact and follow a similar shape. It is important to remember that when training autoencoders you want to use only valid data; all anomalies should be removed.
Now let's use the AutoEncoderEstimator to train our neural network.
End of explanation
"""
reconstruction_error = model.anomaly(test_ecg)
"""
Explanation: Our neural network is now able to encode the time series.
Next we compute the reconstruction error with the anomaly detection function.
This is the mean squared error between the output and input layers.
A low error means that the neural network is able to encode the input well, which means it is a "known" case.
A high error means that the neural network has not seen that example before, so it is an anomaly.
End of explanation
"""
df = reconstruction_error.as_data_frame()
df['Rank'] = df['Reconstruction.MSE'].rank(ascending=False)
df_sorted = df.sort_values('Rank')
df_sorted
anomalies = df_sorted[ df_sorted['Reconstruction.MSE'] > 1.0 ]
anomalies
data = test_ecg.as_data_frame()
data.T.plot(legend=False, title="ECG Test Data", color='blue')
ax = data.T.plot(legend=False, color='blue')
data.T[anomalies.index].plot(legend=False, title="ECG Anomalies in the Data", color='red', ax=ax);
"""
Explanation: Now the question is: which of the test_ecg time series are most likely anomalies?
We can select the top N rows with the highest reconstruction error.
End of explanation
"""
mne-tools/mne-tools.github.io | dev/_downloads/70e603ce6ceb1fd2cb094ccee99a1920/resolution_metrics_eegmeg.ipynb | bsd-3-clause

# Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm.resolution_matrix import make_inverse_resolution_matrix
from mne.minimum_norm.spatial_resolution import resolution_metrics
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects/'
meg_path = data_path / 'MEG' / 'sample'
fname_fwd_emeg = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = meg_path / 'sample_audvis-cov.fif'
fname_evo = meg_path / 'sample_audvis-ave.fif'
# read forward solution with EEG and MEG
forward_emeg = mne.read_forward_solution(fname_fwd_emeg)
# forward operator with fixed source orientations
forward_emeg = mne.convert_forward_solution(forward_emeg, surf_ori=True,
force_fixed=True)
# create a forward solution with MEG only
forward_meg = mne.pick_types_forward(forward_emeg, meg=True, eeg=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operator from forward solution for MEG and EEGMEG
inv_emeg = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward_emeg, noise_cov=noise_cov, loose=0.,
depth=None)
inv_meg = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward_meg, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
"""
Explanation: Compute spatial resolution metrics to compare MEG with EEG+MEG
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference of
distributions. This example mimics some results from :footcite:HaukEtAl2019,
namely Figure 3 (peak localisation error for PSFs, L2-MNE vs dSPM) and Figure 4
(spatial deviation for PSFs, L2-MNE vs dSPM). It shows that combining MEG with
EEG reduces the point-spread function and increases the spatial resolution of
source imaging, especially for deeper sources.
End of explanation
"""
rm_emeg = make_inverse_resolution_matrix(forward_emeg, inv_emeg,
method='MNE', lambda2=lambda2)
ple_psf_emeg = resolution_metrics(rm_emeg, inv_emeg['src'],
function='psf', metric='peak_err')
sd_psf_emeg = resolution_metrics(rm_emeg, inv_emeg['src'],
function='psf', metric='sd_ext')
del rm_emeg
"""
Explanation: EEGMEG
Compute resolution matrices, localization error, and spatial deviations
for MNE:
End of explanation
"""
rm_meg = make_inverse_resolution_matrix(forward_meg, inv_meg,
method='MNE', lambda2=lambda2)
ple_psf_meg = resolution_metrics(rm_meg, inv_meg['src'],
function='psf', metric='peak_err')
sd_psf_meg = resolution_metrics(rm_meg, inv_meg['src'],
function='psf', metric='sd_ext')
del rm_meg
"""
Explanation: MEG
Do the same for MEG:
End of explanation
"""
brain_ple_emeg = ple_psf_emeg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=1,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_emeg.add_text(0.1, 0.9, 'PLE PSF EMEG', 'title', font_size=16)
"""
Explanation: Visualization
Look at peak localisation error (PLE) across the whole cortex for PSF:
End of explanation
"""
brain_ple_meg = ple_psf_meg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=2,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_meg.add_text(0.1, 0.9, 'PLE PSF MEG', 'title', font_size=16)
"""
Explanation: For MEG only:
End of explanation
"""
diff_ple = ple_psf_emeg - ple_psf_meg
brain_ple_diff = diff_ple.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=3,
clim=dict(kind='value', pos_lims=(0., .5, 1.)),
smoothing_steps=20)
brain_ple_diff.add_text(0.1, 0.9, 'PLE EMEG-MEG', 'title', font_size=16)
"""
Explanation: Subtract the two distributions and plot this difference:
End of explanation
"""
brain_sd_emeg = sd_psf_emeg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=4,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_emeg.add_text(0.1, 0.9, 'SD PSF EMEG', 'title', font_size=16)
"""
Explanation: These plots show that with respect to peak localization error, adding EEG to
MEG does not bring much benefit. Next let's visualise spatial deviation (SD)
across the whole cortex for PSF:
End of explanation
"""
brain_sd_meg = sd_psf_meg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=5,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_meg.add_text(0.1, 0.9, 'SD PSF MEG', 'title', font_size=16)
"""
Explanation: For MEG only:
End of explanation
"""
diff_sd = sd_psf_emeg - sd_psf_meg
brain_sd_diff = diff_sd.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=6,
clim=dict(kind='value', pos_lims=(0., .5, 1.)),
smoothing_steps=20)
brain_sd_diff.add_text(0.1, 0.9, 'SD EMEG-MEG', 'title', font_size=16)
"""
Explanation: Subtract the two distributions and plot this difference:
End of explanation
"""
llclave/Springboard-Mini-Projects | Heights and Weights Using Logistic Regression/Mini_Project_Logistic_Regression.ipynb | mit

%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
import sklearn.model_selection
c0=sns.color_palette()[0]
c1=sns.color_palette()[1]
c2=sns.color_palette()[2]
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
def points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light,
cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False, predicted=False):
h = .02
X=np.concatenate((Xtr, Xte))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
#plt.figure(figsize=(10,6))
if zfunc:
p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z=zfunc(p0, p1)
else:
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
ZZ = Z.reshape(xx.shape)
if mesh:
plt.pcolormesh(xx, yy, ZZ, cmap=cmap_light, alpha=alpha, axes=ax)
if predicted:
showtr = clf.predict(Xtr)
showte = clf.predict(Xte)
else:
showtr = ytr
showte = yte
ax.scatter(Xtr[:, 0], Xtr[:, 1], c=showtr-1, cmap=cmap_bold,
s=psize, alpha=alpha,edgecolor="k")
# and testing points
ax.scatter(Xte[:, 0], Xte[:, 1], c=showte-1, cmap=cmap_bold,
alpha=alpha, marker="s", s=psize+10)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
return ax,xx,yy
def points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light,
cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):
ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False,
colorscale=colorscale, cdiscrete=cdiscrete,
psize=psize, alpha=alpha, predicted=True)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)
cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)
plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)
return ax
"""
Explanation: Classification
$$
\renewcommand{\like}{{\cal L}}
\renewcommand{\loglike}{{\ell}}
\renewcommand{\err}{{\cal E}}
\renewcommand{\dat}{{\cal D}}
\renewcommand{\hyp}{{\cal H}}
\renewcommand{\Ex}[2]{E_{#1}[#2]}
\renewcommand{\x}{{\mathbf x}}
\renewcommand{\v}[1]{{\mathbf #1}}
$$
Note: We've adapted this Mini Project from Lab 5 in the CS109 course. Please feel free to check out the original lab, both for more exercises, as well as solutions.
We turn our attention to classification. Classification tries to predict which of a small set of classes an observation belongs to. Mathematically, the aim is to find $y$, a label, based on knowing a feature vector $\x$. For instance, consider predicting gender from seeing a person's face, something we do fairly well as humans. To have a machine do this well, we would typically feed the machine a bunch of images of people which have been labelled "male" or "female" (the training set), and have it learn the gender of the person in the image from the labels and the features used to determine gender. Then, given a new photo, the trained algorithm returns us the gender of the person in the photo.
There are different ways of making classifications. One idea is shown schematically in the image below, where we find a line that divides "things" of two different types in a 2-dimensional feature space. The classifier shown in the figure below is an example of a maximum-margin classifier, where we construct a decision boundary that is as far away as possible from both classes of points. The fact that a line can be drawn to separate the two classes makes the problem linearly separable. Support Vector Machines (SVM) are an example of a maximum-margin classifier.
End of explanation
"""
dflog = pd.read_csv("data/01_heights_weights_genders.csv")
dflog.head()
"""
Explanation: A Motivating Example Using sklearn: Heights and Weights
We'll use a dataset of heights and weights of males and females to hone our understanding of classifiers. We load the data into a dataframe and plot it.
End of explanation
"""
# your turn
plt.scatter(dflog.Weight[dflog.Gender == "Male"], dflog.Height[dflog.Gender == "Male"], alpha=0.2, c="red")
plt.scatter(dflog.Weight[dflog.Gender == "Female"], dflog.Height[dflog.Gender == "Female"], alpha=0.2, c="blue")
plt.xlabel('Weight')
plt.ylabel('Height')
plt.title('Weight vs Height')
plt.show()
"""
Explanation: Remember that the form of data we will use is always a feature matrix together with the "response" or "label" $y$ as a plain array of 0s and 1s for binary classification, e.g.
y = [1,1,0,0,0,1,0,1,0....]
Sometimes we will also see -1 and +1 instead. There are also multiclass classifiers that can assign an observation to one of $K > 2$ classes, and the label may then be an integer, but we will not be discussing those here.
<div class="span5 alert alert-info">
<h3>Checkup Exercise Set I</h3>
<ul>
<li> <b>Exercise:</b> Create a scatter plot of Weight vs. Height
<li> <b>Exercise:</b> Color the points differently by Gender
</ul>
</div>
End of explanation
"""
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# Split the data into a training and test set.
Xlr, Xtestlr, ylr, ytestlr = train_test_split(dflog[['Height','Weight']].values,
(dflog.Gender == "Male").values,random_state=5)
clf = LogisticRegression()
# Fit the model on the training data.
clf.fit(Xlr, ylr)
# Print the accuracy from the testing data.
print(accuracy_score(clf.predict(Xtestlr), ytestlr))
"""
Explanation: Training and Test Datasets
When fitting models, we would like to ensure two things:
We have found the best model (in terms of model parameters).
The model is highly likely to generalize i.e. perform well on unseen data.
<br/>
<div class="span5 alert alert-success">
<h4>Purpose of splitting data into Training/testing sets</h4>
<ul>
<li> We built our model with the requirement that the model fit the data well. </li>
<li> As a side-effect, the model will fit <b>THIS</b> dataset well. What about new data? </li>
<ul>
<li> We wanted the model for predictions, right?</li>
</ul>
<li> One simple solution, leave out some data (for <b>testing</b>) and <b>train</b> the model on the rest </li>
<li> This also leads directly to the idea of cross-validation, next section. </li>
</ul>
</div>
First, we try a basic Logistic Regression:
Split the data into a training and test (hold-out) set
Train on the training set, and test for accuracy on the testing set
End of explanation
"""
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
def cv_score(clf, x, y, score_func=accuracy_score):
result = 0
nfold = 5
for train, test in KFold(nfold).split(x): # split data into train/test groups, 5 times
clf.fit(x[train], y[train]) # fit
result += score_func(clf.predict(x[test]), y[test]) # evaluate score function on held-out data
return result / nfold # average
"""
Explanation: Tuning the Model
The model has some hyperparameters we can tune for hopefully better performance. For tuning the parameters of your model, you will use a mix of cross-validation and grid search. In Logistic Regression, the most important parameter to tune is the regularization parameter C. Note that the regularization parameter is not always part of the logistic regression model.
The regularization parameter is used to guard against unreasonably large regression coefficients, and in other cases can be used when data is sparse, as a method of feature selection.
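One way to see the regularization at work is to compare coefficient magnitudes across values of C. A sketch on synthetic data (smaller C means stronger regularization, so the weights should shrink):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

for C in [0.01, 1, 100]:
    clf = LogisticRegression(C=C).fit(X, y)
    # Smaller C pulls the weights toward zero
    print(C, np.abs(clf.coef_).sum())
```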
You will now implement some code to perform model tuning and selecting the regularization parameter $C$.
We use the following cv_score function to perform K-fold cross-validation and apply a scoring function to each test fold. In this incarnation we use accuracy score as the default scoring function.
End of explanation
"""
clf = LogisticRegression()
score = cv_score(clf, Xlr, ylr)
print(score)
"""
Explanation: Below is an example of using the cv_score function for a basic logistic regression model without regularization.
End of explanation
"""
#the grid of parameters to search over
Cs = [0.001, 0.1, 1, 10, 100]
# your turn
best_score = 0
best_C = 0
for C in Cs:
model = LogisticRegression(C=C)
score = cv_score(model, Xlr, ylr)
print("C =", C, " Score =", score)
if score > best_score:
best_score = score
best_C = C
print("\nThe best C is", best_C, "with a score of", best_score)
"""
Explanation: <div class="span5 alert alert-info">
<h3>Checkup Exercise Set II</h3>
<b>Exercise:</b> Implement the following search procedure to find a good model
<ul>
<li> You are given a list of possible values of `C` below
<li> For each C:
<ol>
<li> Create a logistic regression model with that value of C
<li> Find the average score for this model using the `cv_score` function **only on the training set** `(Xlr, ylr)`
</ol>
<li> Pick the C with the highest average score
</ul>
Your goal is to find the best model parameters based *only* on the training set, without showing the model test set at all (which is why the test set is also called a *hold-out* set).
</div>
End of explanation
"""
# your turn
model = LogisticRegression(C=best_C)
model.fit(Xlr, ylr)
print("The score is", accuracy_score(model.predict(Xtestlr), ytestlr))
"""
Explanation: <div class="span5 alert alert-info">
<h3>Checkup Exercise Set III</h3>
**Exercise:** Now you want to estimate how this model will predict on unseen data in the following way:
<ol>
<li> Use the C you obtained from the procedure earlier and train a Logistic Regression on the training data
<li> Calculate the accuracy on the test data
</ol>
<p>You may notice that this particular value of `C` may or may not do as well as simply running the default model on a random train-test split. </p>
<ul>
<li> Do you think that's a problem?
<li> Why do we need to do this whole cross-validation and grid search stuff anyway?
</ul>
</div>
End of explanation
"""
# your turn
from sklearn.model_selection import GridSearchCV
model2 = LogisticRegression()
params = {'C': Cs}
model_cv = GridSearchCV(model2, param_grid=params, cv=5, scoring="accuracy")
model_cv.fit(Xlr, ylr)
print("Best params =", model_cv.best_params_)
print("Best score =", model_cv.best_score_)
model2 = model_cv.best_estimator_
model2.fit(Xlr, ylr)
print("The score is", accuracy_score(model2.predict(Xtestlr), ytestlr))
"""
Explanation: This value of C scored high and performed better than the default model. Cross-validation and grid search are necessary in order to optimize your model. You improve your prediction results and also increase the probability that your model will perform as well with new, unseen data. You lower the probability that your model only did well by chance on a particular set of data.
Black Box Grid Search in sklearn
Scikit-learn, as with many other Python packages, provides utilities to perform common operations so you do not have to do it manually. It is important to understand the mechanics of each operation, but at a certain point, you will want to use the utility instead to save time...
<div class="span5 alert alert-info">
<h3>Checkup Exercise Set IV</h3>
<b>Exercise:</b> Use scikit-learn's [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html) tool to perform cross validation and grid search.
* Instead of writing your own loops above to iterate over the model parameters, can you use GridSearchCV to find the best model over the training set?
* Does it give you the same best value of `C`?
* How does this model you've obtained perform on the test set?
End of explanation
"""
def cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5):
gs = sklearn.model_selection.GridSearchCV(clf, param_grid=parameters, cv=n_folds)
gs.fit(Xtrain, ytrain)
print("BEST PARAMS", gs.best_params_)
best = gs.best_estimator_
return best
"""
Explanation: Similar values were obtained using built-in tools.
A Walkthrough of the Math Behind Logistic Regression
Setting up Some Demo Code
Let's first set some code up for classification that we will need for further discussion on the math. We first set up a function cv_optimize which takes a classifier clf, a grid of hyperparameters (such as a complexity parameter or regularization parameter) implemented as a dictionary parameters, a training set (as a samples x features array) Xtrain, and a set of labels ytrain. The code takes the traning set, splits it into n_folds parts, sets up n_folds folds, and carries out a cross-validation by splitting the training set into a training and validation section for each foldfor us. It prints the best value of the parameters, and retuens the best classifier to us.
End of explanation
"""
from sklearn.model_selection import train_test_split
def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8):
subdf=indf[featurenames]
if standardize:
subdfstd=(subdf - subdf.mean())/subdf.std()
else:
subdfstd=subdf
X=subdfstd.values
y=(indf[targetname].values==target1val)*1
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)
clf = cv_optimize(clf, parameters, Xtrain, ytrain)
clf=clf.fit(Xtrain, ytrain)
training_accuracy = clf.score(Xtrain, ytrain)
test_accuracy = clf.score(Xtest, ytest)
print("Accuracy on training data: {:0.2f}".format(training_accuracy))
print("Accuracy on test data: {:0.2f}".format(test_accuracy))
return clf, Xtrain, ytrain, Xtest, ytest
"""
Explanation: We then use this best classifier to fit the entire training set. This is done inside the do_classify function, which takes a dataframe indf as input. It takes the columns in the list featurenames as the features used to train the classifier. The column targetname sets the target. The classification is done by setting those samples for which targetname has value target1val to the value 1, and all others to 0. We split the dataframe into 80% training and 20% testing by default, standardizing the dataset if desired. (Standardizing a data set involves scaling the data so that it has 0 mean and is described in units of its standard deviation.) We then train the model on the training set using cross-validation. Having obtained the best classifier using cv_optimize, we retrain on the entire training set and calculate the training and testing accuracy, which we print. We return the split data and the trained classifier.
End of explanation
"""
h = lambda z: 1. / (1 + np.exp(-z))
zs=np.arange(-5, 5, 0.1)
plt.plot(zs, h(zs), alpha=0.5);
"""
Explanation: Logistic Regression: The Math
We could approach classification as linear regression, there the class, 0 or 1, is the target variable $y$. But this ignores the fact that our output $y$ is discrete valued, and futhermore, the $y$ predicted by linear regression will in general take on values less than 0 and greater than 1. Additionally, the residuals from the linear regression model will not be normally distributed. This violation means we should not use linear regression.
But what if we could change the form of our hypotheses $h(x)$ instead?
The idea behind logistic regression is very simple. We want to draw a line in feature space that divides the '1' samples from the '0' samples, just like in the diagram above. In other words, we wish to find the "regression" line which divides the samples. Now, a line has the form $w_1 x_1 + w_2 x_2 + w_0 = 0$ in 2-dimensions. On one side of this line we have
$$w_1 x_1 + w_2 x_2 + w_0 \ge 0,$$
and on the other side we have
$$w_1 x_1 + w_2 x_2 + w_0 < 0.$$
Our classification rule then becomes:
\begin{eqnarray}
y = 1 &\mbox{if}& \v{w}\cdot\v{x} \ge 0\
y = 0 &\mbox{if}& \v{w}\cdot\v{x} < 0
\end{eqnarray}
where $\v{x}$ is the vector ${1,x_1, x_2,...,x_n}$ where we have also generalized to more than 2 features.
What hypotheses $h$ can we use to achieve this? One way to do so is to use the sigmoid function:
$$h(z) = \frac{1}{1 + e^{-z}}.$$
Notice that at $z=0$ this function has the value 0.5. If $z > 0$, $h > 0.5$ and as $z \to \infty$, $h \to 1$. If $z < 0$, $h < 0.5$ and as $z \to -\infty$, $h \to 0$. As long as we identify any value of $y > 0.5$ as 1, and any $y < 0.5$ as 0, we can achieve what we wished above.
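A quick numeric check of those limits (a sketch assuming numpy, as used in the plotting cell):

```python
import numpy as np

# Sigmoid: maps any real z into (0, 1)
h = lambda z: 1. / (1 + np.exp(-z))

print(h(0))    # 0.5
print(h(10))   # very close to 1
print(h(-10))  # very close to 0
```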
This function is plotted below:
End of explanation
"""
dflog.head()
clf_l, Xtrain_l, ytrain_l, Xtest_l, ytest_l = do_classify(LogisticRegression(),
{"C": [0.01, 0.1, 1, 10, 100]},
dflog, ['Weight', 'Height'], 'Gender','Male')
plt.figure()
ax=plt.gca()
points_plot(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, alpha=0.2);
"""
Explanation: So we then come up with our rule by identifying:
$$z = \v{w}\cdot\v{x}.$$
Then $h(\v{w}\cdot\v{x}) \ge 0.5$ if $\v{w}\cdot\v{x} \ge 0$ and $h(\v{w}\cdot\v{x}) \lt 0.5$ if $\v{w}\cdot\v{x} \lt 0$, and:
\begin{eqnarray}
y = 1 &if& h(\v{w}\cdot\v{x}) \ge 0.5\
y = 0 &if& h(\v{w}\cdot\v{x}) \lt 0.5.
\end{eqnarray}
We will show soon that this identification can be achieved by minimizing a loss in the ERM framework called the log loss :
$$ R_{\cal{D}}(\v{w}) = - \sum_{y_i \in \cal{D}} \left ( y_i \log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) \log(1 - h(\v{w}\cdot\v{x})) \right )$$
We will also add a regularization term:
$$ R_{\cal{D}}(\v{w}) = - \sum_{y_i \in \cal{D}} \left ( y_i \log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) \log(1 - h(\v{w}\cdot\v{x})) \right ) + \frac{1}{C} \v{w}\cdot\v{w},$$
where $C$ is the regularization strength (equivalent to $1/\alpha$ from the Ridge case), and smaller values of $C$ mean stronger regularization. As before, the regularization tries to prevent features from having terribly high weights, thus implementing a form of feature selection.
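The unregularized log loss above can be computed by hand. A sketch comparing the written-out sum against scikit-learn's log_loss (which reports the per-sample mean, so we multiply by the sample count); the labels and probabilities here are made-up illustration values:

```python
import numpy as np
from sklearn.metrics import log_loss

y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.7, 0.6])   # stand-ins for h(w.x) at each sample

# The summed log loss from the formula above
manual = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
print(manual, log_loss(y, p) * len(y))  # the two agree
```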
How did we come up with this loss? We'll come back to that, but let us see how logistic regression works out.
End of explanation
"""
clf_l.predict_proba(Xtest_l)
"""
Explanation: In the figure here showing the results of the logistic regression, we plot the actual labels of both the training (circles) and test (squares) samples. The 0's (females) are plotted in red, the 1's (males) in blue. We also show the classification boundary, a line (to the resolution of a grid square). Every sample on the red background side of the line will be classified female, and every sample on the blue side, male. Notice that most of the samples are classified well, but there are misclassified people on both sides, as evidenced by leakage of dots or squares of one color onto the side of the other color. Both test and training accuracy are about 92%.
The Probabilistic Interpretation
Remember we said earlier that if $h > 0.5$ we ought to identify the sample with $y=1$? One way of thinking about this is to identify $h(\v{w}\cdot\v{x})$ with the probability that the sample is a '1' ($y=1$). Then we have the intuitive notion that we should identify a sample as 1 if we find that the probability of being a '1' is $\ge 0.5$.
So suppose we say then that the probability of $y=1$ for a given $\v{x}$ is given by $h(\v{w}\cdot\v{x})$?
Then, the conditional probabilities of $y=1$ or $y=0$ given a particular sample's features $\v{x}$ are:
\begin{eqnarray}
P(y=1 | \v{x}) &=& h(\v{w}\cdot\v{x}) \
P(y=0 | \v{x}) &=& 1 - h(\v{w}\cdot\v{x}).
\end{eqnarray}
These two can be written together as
$$P(y|\v{x}, \v{w}) = h(\v{w}\cdot\v{x})^y \left(1 - h(\v{w}\cdot\v{x}) \right)^{(1-y)} $$
Then multiplying over the samples we get the probability of the training $y$ given $\v{w}$ and the $\v{x}$:
$$P(y|\v{x},\v{w}) = P(\{y_i\} | \{\v{x}_i\}, \v{w}) = \prod_{y_i \in \cal{D}} P(y_i|\v{x_i}, \v{w}) = \prod_{y_i \in \cal{D}} h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}$$
Why use probabilities? Earlier, we talked about how the regression function $f(x)$ never gives us the $y$ exactly, because of noise. This holds for classification too. Even with identical features, a different sample may be classified differently.
We said that another way to think about a noisy $y$ is to imagine that our data $\dat$ was generated from a joint probability distribution $P(x,y)$. Thus we need to model $y$ at a given $x$, written as $P(y|x)$, and since $P(x)$ is also a probability distribution, we have:
$$P(x,y) = P(y | x) P(x)$$
and can obtain our joint probability $P(x, y)$.
Indeed it's important to realize that a particular training set can be thought of as a draw from some "true" probability distribution (just as we did when showing the hairy variance diagram). If for example the probability of classifying a test sample as a '0' was 0.1, and it turns out that the test sample was a '0', it does not mean that this model was necessarily wrong. After all, in roughly a tenth of the draws, this new sample would be classified as a '0'! But, of course, it's more unlikely than likely, and having good probabilities means that we'll likely be right most of the time, which is what we want to achieve in classification. And furthermore, we can quantify this accuracy.
Thus it's desirable to have probabilistic, or at the very least ranked, models of classification, where you can tell which sample is more likely to be classified as a '1'. There are business reasons for this too. Consider the example of customer "churn": you are a cell-phone company and want to know, based on some of my purchasing habits and characteristic "features", if I am a likely defector. If so, you'll offer me an incentive not to defect. In this scenario, you might want to know which customers are most likely to defect, or even more precisely, which are most likely to respond to incentives. Based on these probabilities, you could then spend a finite marketing budget wisely.
Maximizing the Probability of the Training Set
Now if we maximize $P(y|\v{x},\v{w})$, we will maximize the chance that each point is classified correctly, which is what we want to do. While this is not exactly the same thing as maximizing the 1-0 training risk, it is a principled way of obtaining the highest probability classification. This process is called maximum likelihood estimation since we are maximizing the likelihood of the training data y,
$$\like = P(y|\v{x},\v{w}).$$
Maximum likelihood is one of the cornerstone methods in statistics, and is used to estimate probabilities of data.
We can equivalently maximize
$$\loglike = \log{P(y|\v{x},\v{w})}$$
since the natural logarithm $\log$ is a monotonic function. This is known as maximizing the log-likelihood. Thus we can equivalently minimize a risk that is the negative of $\log(P(y|\v{x},\v{w}))$:
$$R_{\cal{D}}(h(x)) = -\loglike = -\log \like = -\log{P(y|\v{x},\v{w})}.$$
Thus
\begin{eqnarray}
R_{\cal{D}}(h(x)) &=& -\log\left(\prod_{y_i \in \cal{D}} h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\right)\
&=& -\sum_{y_i \in \cal{D}} \log\left(h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\right)\
&=& -\sum_{y_i \in \cal{D}} \log\,h(\v{w}\cdot\v{x_i})^{y_i} + \log\,\left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\
&=& - \sum_{y_i \in \cal{D}} \left ( y_i \log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) \log(1 - h(\v{w}\cdot\v{x})) \right )
\end{eqnarray}
This is exactly the risk we had above, leaving out the regularization term (which we shall return to later) and was the reason we chose it over the 1-0 risk.
Notice that this little process we carried out above tells us something very interesting: Probabilistic estimation using maximum likelihood is equivalent to Empiricial Risk Minimization using the negative log-likelihood, since all we did was to minimize the negative log-likelihood over the training samples.
sklearn will return the probabilities for our samples, or for that matter, for any input vector set ${\v{x}_i}$, i.e. $P(y_i | \v{x}_i, \v{w})$:
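Thresholding the positive-class column of predict_proba at 0.5 reproduces predict. A self-contained sketch on synthetic data (not the heights/weights model above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(1)
X = rng.randn(100, 2)
y = (X[:, 0] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba(X)[:, 1]             # P(y=1 | x) for each sample
labels = (proba >= 0.5).astype(int)            # apply the 0.5 decision rule
print(np.array_equal(labels, clf.predict(X)))  # the rule matches predict
```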
End of explanation
"""
plt.figure()
ax = plt.gca()
points_plot_prob(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, psize=20, alpha=0.1);
"""
Explanation: Discriminative vs Generative Classifier
Logistic regression is what is known as a discriminative classifier as we learn a soft boundary between/among classes. Another paradigm is the generative classifier where we learn the distribution of each class. For more examples of generative classifiers, look here.
Let us plot the probabilities obtained from predict_proba, overlayed on the samples with their true labels:
End of explanation
"""
ES-DOC/esdoc-jupyterhub | notebooks/mpi-m/cmip6/models/icon-esm-lr/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'icon-esm-lr', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MPI-M
Source ID: ICON-ESM-LR
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
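When choosing this value, a common sanity check (an aside for context, not part of the es-doc questionnaire) is the advective CFL limit, dt <= dx / u. A hedged sketch with purely illustrative numbers:

```python
def advective_cfl_limit(dx_m, max_speed_ms):
    """Largest stable advective time step (in seconds) for grid spacing dx_m
    and maximum current speed max_speed_ms (CFL number of 1 assumed)."""
    return dx_m / max_speed_ms

# Illustrative values (assumptions): 25 km grid spacing, 2 m/s peak current:
print(advective_cfl_limit(25_000.0, 2.0))  # 12500.0
```

Actual tracer time steps are typically chosen well below this limit and also constrained by the scheme's own stability properties.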
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s; may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s; may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s; may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
AllenDowney/ModSimPy | soln/rabbits3soln.ipynb | mit | %matplotlib inline
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Rabbit example
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
system = System(t0 = 0,
t_end = 20,
juvenile_pop0 = 0,
adult_pop0 = 10,
birth_rate = 0.9,
mature_rate = 0.33,
death_rate = 0.5)
system
"""
Explanation: Rabbit is Rich
This notebook starts with a version of the rabbit population growth model. You will modify it using some of the tools in Chapter 5. Before you attempt this diagnostic, you should have a good understanding of State objects, as presented in Section 5.4. And you should understand the version of run_simulation in Section 5.7.
Separating the State from the System
Here's the System object from the previous diagnostic. Notice that it includes system parameters, which don't change while the simulation is running, and population variables, which do. We're going to improve that by pulling the population variables into a State object.
End of explanation
"""
# Solution
init = State(juveniles=0, adults=10)
init
# Solution
system = System(t0 = 0,
t_end = 20,
init = init,
birth_rate = 0.9,
mature_rate = 0.33,
death_rate = 0.5)
system
"""
Explanation: In the following cells, define a State object named init that contains two state variables, juveniles and adults, with initial values 0 and 10. Make a version of the System object that does NOT contain juvenile_pop0 and adult_pop0, but DOES contain init.
End of explanation
"""
def run_simulation(system):
"""Runs a proportional growth model.
Adds TimeSeries objects to `system` as `juveniles` and `adults`.
system: System object
"""
juveniles = TimeSeries()
juveniles[system.t0] = system.juvenile_pop0
adults = TimeSeries()
adults[system.t0] = system.adult_pop0
for t in linrange(system.t0, system.t_end):
maturations = system.mature_rate * juveniles[t]
births = system.birth_rate * adults[t]
deaths = system.death_rate * adults[t]
if adults[t] > 30:
market = adults[t] - 30
else:
market = 0
juveniles[t+1] = juveniles[t] + births - maturations
adults[t+1] = adults[t] + maturations - deaths - market
system.adults = adults
system.juveniles = juveniles
"""
Explanation: Updating run_simulation
Here's the version of run_simulation from last time:
End of explanation
"""
# Solution
def run_simulation(system):
"""Runs a proportional growth model.
Adds TimeSeries objects to `system` as `juveniles` and `adults`.
system: System object
"""
juveniles = TimeSeries()
juveniles[system.t0] = system.init.juveniles
adults = TimeSeries()
adults[system.t0] = system.init.adults
for t in linrange(system.t0, system.t_end):
maturations = system.mature_rate * juveniles[t]
births = system.birth_rate * adults[t]
deaths = system.death_rate * adults[t]
if adults[t] > 30:
market = adults[t] - 30
else:
market = 0
juveniles[t+1] = juveniles[t] + births - maturations
adults[t+1] = adults[t] + maturations - deaths - market
system.adults = adults
system.juveniles = juveniles
"""
Explanation: In the cell below, write a version of run_simulation that works with the new System object (the one that contains a State object named init).
Hint: you only have to change two lines.
End of explanation
"""
run_simulation(system)
system.adults
"""
Explanation: Test your changes in run_simulation:
End of explanation
"""
def plot_results(system, title=None):
"""Plot the estimates and the model.
system: System object with `results`
"""
newfig()
plot(system.adults, 'bo-', label='adults')
plot(system.juveniles, 'gs-', label='juveniles')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
"""
Explanation: Plotting the results
Here's a version of plot_results that plots both the adult and juvenile TimeSeries.
End of explanation
"""
plot_results(system, title='Proportional growth model')
"""
Explanation: If your changes in the previous section were successful, you should be able to run this new version of plot_results.
End of explanation
"""
# Solution
def run_simulation(system):
"""Runs a proportional growth model.
Adds a TimeFrame to `system` as `results`.
system: System object
"""
results = TimeFrame(columns = system.init.index)
results.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
juveniles, adults = results.loc[t]
maturations = system.mature_rate * juveniles
births = system.birth_rate * adults
deaths = system.death_rate * adults
if adults > 30:
market = adults - 30
else:
market = 0
juveniles += births - maturations
adults += maturations - deaths - market
results.loc[t+1] = juveniles, adults
system.results = results
run_simulation(system)
# Solution
def plot_results(system, title=None):
"""Plot the estimates and the model.
system: System object with `results`
"""
newfig()
plot(system.results.adults, 'bo-', label='adults')
plot(system.results.juveniles, 'gs-', label='juveniles')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
plot_results(system)
"""
Explanation: That's the end of the diagnostic. If you were able to get it done quickly, and you would like a challenge, here are two bonus questions:
Bonus question #1
Write a version of run_simulation that puts the results into a single TimeFrame named results, rather than two TimeSeries objects.
Write a version of plot_results that can plot the results in this form.
WARNING: This question is substantially harder, and requires you to have a good understanding of everything in Chapter 5. We don't expect most people to be able to do this exercise at this point.
End of explanation
"""
# Solution
def update(state, system):
"""Compute the state of the system after one time step.
state: State object with juveniles and adults
system: System object
returns: State object
"""
juveniles, adults = state
maturations = system.mature_rate * juveniles
births = system.birth_rate * adults
deaths = system.death_rate * adults
if adults > 30:
market = adults - 30
else:
market = 0
juveniles += births - maturations
adults += maturations - deaths - market
return State(juveniles=juveniles, adults=adults)
def run_simulation(system, update_func):
"""Runs a proportional growth model.
Adds a TimeFrame to `system` as `results`.
system: System object
"""
results = TimeFrame(columns = system.init.index)
results.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
results.loc[t+1] = update_func(results.loc[t], system)
system.results = results
run_simulation(system, update)
plot_results(system)
"""
Explanation: Bonus question #2
Factor out the update function.
Write a function called update that takes a State object and a System object and returns a new State object that represents the state of the system after one time step.
Write a version of run_simulation that takes an update function as a parameter and uses it to compute the update.
Run your new version of run_simulation and plot the results.
WARNING: This question is substantially harder, and requires you to have a good understanding of everything in Chapter 5. We don't expect most people to be able to do this exercise at this point.
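The same factoring works without the modsim helpers: the minimal sketch below threads a plain tuple of (juveniles, adults) through a pure update function, with a dict standing in for the System object. All names and structures here are simplifications for illustration, not the book's API.

```python
def update(state, system):
    """Advance the rabbit population by one season."""
    juveniles, adults = state
    maturations = system['mature_rate'] * juveniles
    births = system['birth_rate'] * adults
    deaths = system['death_rate'] * adults
    # Adults above the cap of 30 are sold at market.
    market = max(adults - 30, 0)
    return (juveniles + births - maturations,
            adults + maturations - deaths - market)

def run_simulation(init, system, num_steps):
    """Thread the state through `update`, recording every step."""
    state = init
    history = [state]
    for _ in range(num_steps):
        state = update(state, system)
        history.append(state)
    return history

system = {'birth_rate': 0.9, 'mature_rate': 0.33, 'death_rate': 0.5}
history = run_simulation((0, 10), system, 20)
```

Separating update from run_simulation this way means the loop never has to change when the population model does.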
End of explanation
"""
|
quantumlib/Cirq | docs/tutorials/basics.ipynb | apache-2.0 | # @title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2022 The Cirq Developers
End of explanation
"""
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
import cirq_google
"""
Explanation: Cirq basics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/basics"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/basics.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/basics.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/basics.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This tutorial will teach the basics of how to use Cirq. It will walk through how to use qubits, gates, and operations to create and simulate your first quantum circuit using Cirq. It will briefly introduce devices, unitary matrices, decompositions, and transformers as well.
This tutorial isn’t a quantum computing 101 tutorial: it assumes familiarity of quantum computing at about the level of the textbook “Quantum Computation and Quantum Information” by Nielsen and Chuang.
For more in-depth examples of quantum algorithms and experiments, see Experiments.
To begin, please follow the instructions for installing Cirq.
End of explanation
"""
# Using named qubits can be useful for abstract algorithms
# as well as algorithms not yet mapped onto hardware.
q0 = cirq.NamedQubit('source')
q1 = cirq.NamedQubit('target')
# Line qubits can be created individually
q3 = cirq.LineQubit(3)
# Or created in a range
# This will create LineQubit(0), LineQubit(1), LineQubit(2)
q0, q1, q2 = cirq.LineQubit.range(3)
# Grid Qubits can also be referenced individually
q4_5 = cirq.GridQubit(4, 5)
# Or created in bulk in a square
# This will create 16 qubits from (0,0) to (3,3)
qubits = cirq.GridQubit.square(4)
"""
Explanation: Qubits
The first part of creating a quantum circuit is to define a set of qubits (also known as a quantum register) to act on.
Cirq has three main ways of defining qubits:
cirq.NamedQubit: used to label qubits by an abstract name
cirq.LineQubit: qubits labelled by number in a linear array
cirq.GridQubit: qubits labelled by two numbers in a rectangular lattice.
Here are some examples of defining each type of qubit.
End of explanation
"""
print(cirq_google.Sycamore)
"""
Explanation: There are also pre-packaged sets of qubits called Devices. These are qubits along with a set of rules for how they can be used. A cirq.Device can be used to ensure that two-qubit gates are only applied to qubits that are adjacent in the hardware, and other constraints. The following example will use the cirq_google.Sycamore device that comes with cirq. It is a diamond-shaped grid with 54 qubits that mimics early hardware released by Google.
End of explanation
"""
# Example gates
cnot_gate = cirq.CNOT
pauli_z = cirq.Z
# Use exponentiation to get square root gates.
sqrt_x_gate = cirq.X**0.5
# Some gates can also take parameters
sqrt_sqrt_y = cirq.YPowGate(exponent=0.25)
# Create two qubits at once, in a line.
q0, q1 = cirq.LineQubit.range(2)
# Example operations
z_op = cirq.Z(q0)
not_op = cirq.CNOT(q0, q1)
sqrt_iswap_op = cirq.SQRT_ISWAP(q0, q1)
# You can also use the gates you specified earlier.
cnot_op = cnot_gate(q0, q1)
pauli_z_op = pauli_z(q0)
sqrt_x_op = sqrt_x_gate(q0)
sqrt_sqrt_y_op = sqrt_sqrt_y(q0)
"""
Explanation: Gates and operations
The next step is to use the qubits to create operations that can be used in the circuit. Cirq has two concepts that are important to understand here:
A Gate is an effect that can be applied to a set of qubits.
An Operation is a gate applied to a set of qubits.
For instance, cirq.H is the quantum Hadamard gate and is a Gate object. cirq.H(cirq.LineQubit(1)) is an Operation object and is the Hadamard gate applied to a specific qubit (line qubit number 1).
Many textbook gates are included within cirq. cirq.X, cirq.Y, and cirq.Z refer to the single-qubit Pauli gates. cirq.CZ, cirq.CNOT, cirq.SWAP are a few of the common two-qubit gates. cirq.measure is a macro to apply a MeasurementGate to a set of qubits. You can find more, as well as instructions on how to create your own custom gates, on the Gates documentation page.
Here are some examples of defining gates and constructing operations from them:
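As a concrete example of the exponentiation rule, the square-root-of-X gate cirq.X**0.5 used here corresponds to the unitary (including Cirq's global-phase convention for XPowGate):

$$\sqrt{X} \;=\; \frac{1}{2}\begin{pmatrix} 1+i & 1-i \\ 1-i & 1+i \end{pmatrix},
\qquad
\sqrt{X}\,\sqrt{X} \;=\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \;=\; X.$$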
End of explanation
"""
circuit = cirq.Circuit()
qubits = cirq.LineQubit.range(3)
circuit.append(cirq.H(qubits[0]))
circuit.append(cirq.H(qubits[1]))
circuit.append(cirq.H(qubits[2]))
print(circuit)
"""
Explanation: Circuits and moments
You are now ready to construct a quantum circuit. A Circuit is a collection of Moments. A Moment is a collection of Operations that all act during the same abstract time slice. Each Operation must be applied to a disjoint set of qubits compared to each of the other Operations in the Moment. A Moment can be thought of as a vertical slice of a quantum circuit diagram.
Circuits can be constructed in several different ways. By default, cirq will attempt to slide your operation into the earliest possible Moment when you insert it. You can use the append function in two ways:
By appending each operation one-by-one:
End of explanation
"""
circuit = cirq.Circuit()
ops = [cirq.H(q) for q in cirq.LineQubit.range(3)]
circuit.append(ops)
print(circuit)
"""
Explanation: Or by appending some iterable of operations. A preconstructed list works:
End of explanation
"""
# Append with generator
circuit = cirq.Circuit()
circuit.append(cirq.H(q) for q in cirq.LineQubit.range(3))
print(circuit)
# Initializer with generator
print(cirq.Circuit(cirq.H(q) for q in cirq.LineQubit.range(3)))
"""
Explanation: A generator that yields operations also works. This syntax will be used often in documentation, and works both with the cirq.Circuit() initializer and the cirq.Circuit.append() function.
End of explanation
"""
print(cirq.Circuit(cirq.SWAP(q, q + 1) for q in cirq.LineQubit.range(3)))
"""
Explanation: Note that all of the Hadamard gates are pushed as far left as possible, and put into the same Moment since none overlap.
If your operations are applied to the same qubits, they will be put in sequential, insertion-ordered moments. In the following example, the two-qubit gates overlap, and are placed in consecutive moments.
End of explanation
"""
# Creates each gate in a separate moment by passing an iterable of Moments instead of Operations.
print(cirq.Circuit(cirq.Moment([cirq.H(q)]) for q in cirq.LineQubit.range(3)))
"""
Explanation: Sometimes, you may not want cirq to automatically shift operations all the way to the left. To construct a circuit without doing this, you can create the circuit moment-by-moment or use a different InsertStrategy, explained more in the Circuit documentation.
End of explanation
"""
# Create some qubits.
q0 = cirq.GridQubit(5, 6)
q1 = cirq.GridQubit(5, 5)
q2 = cirq.GridQubit(4, 5)
# Create operations using the Sycamore gate, which is supported by the Sycamore device.
# However, create operations for both adjacent and non-adjacent qubit pairs.
adjacent_op = cirq_google.SYC(q0, q1)
nonadjacent_op = cirq_google.SYC(q0, q2)
# A working circuit for the Sycamore device raises no issues.
working_circuit = cirq.Circuit()
working_circuit.append(adjacent_op)
valid = cirq_google.Sycamore.validate_circuit(working_circuit)
# A circuit using invalid operations.
bad_circuit = cirq.Circuit()
bad_circuit.append(nonadjacent_op)
try:
cirq_google.Sycamore.validate_circuit(bad_circuit)
except ValueError as e:
print(e)
"""
Explanation: Circuits and devices
One important consideration when using real quantum devices is that there are often constraints on circuits that are able to be run on the hardware. Device objects specify these constraints and can be used to validate your circuit to make sure that it contains no illegal operations. For more information on what constraints Device objects can specify and how to use them, see the Devices page.
The following example demonstrates this with the Sycamore Device:
End of explanation
"""
# Create a circuit to generate a Bell State:
# 1/sqrt(2) * ( |00⟩ + |11⟩ )
bell_circuit = cirq.Circuit()
q0, q1 = cirq.LineQubit.range(2)
bell_circuit.append(cirq.H(q0))
bell_circuit.append(cirq.CNOT(q0, q1))
# Initialize Simulator
s = cirq.Simulator()
print('Simulate the circuit:')
results = s.simulate(bell_circuit)
print(results)
# For sampling, we need to add a measurement at the end
bell_circuit.append(cirq.measure(q0, q1, key='result'))
# Sample the circuit
samples = s.run(bell_circuit, repetitions=1000)
"""
Explanation: Simulation
The results of the application of a quantum circuit can be calculated by a Simulator. Cirq comes bundled with a simulator that can calculate the results of circuits of up to about 20 qubits. It can be initialized with cirq.Simulator().
There are two different approaches to using a simulator:
simulate(): When classically simulating a circuit, a simulator can directly access and view the resulting wave function. This is useful for debugging, learning, and understanding how circuits will function.
run(): When using actual quantum devices, we can only access the end result of a computation and must sample the results to get a distribution of results. Running the simulator as a sampler mimics this behavior and only returns bit strings as output.
Next simulate a 2-qubit "Bell State":
End of explanation
"""
import matplotlib.pyplot as plt
cirq.plot_state_histogram(samples, plt.subplot())
plt.show()
"""
Explanation: Visualizing Results
When you use run() to get a sample distribution of measurements, you can directly graph the simulated samples as a histogram with cirq.plot_state_histogram.
End of explanation
"""
# Pull the histogram counts from the result data structure
counts = samples.histogram(key='result')
print(counts)
# Graph the histogram counts instead of the results
cirq.plot_state_histogram(counts, plt.subplot())
plt.show()
"""
Explanation: However, this histogram has some empty qubit states, which may become problematic if you work with more qubits. To graph sparse sampled data, first get the Counts from your results with its histogram() function, and pass that to cirq.plot_state_histogram. By collecting the results into counts, all the qubit states that were never seen are ignored.
End of explanation
"""
import sympy
# Perform an X gate with variable exponent
q = cirq.GridQubit(1, 1)
circuit = cirq.Circuit(cirq.X(q) ** sympy.Symbol('t'), cirq.measure(q, key='m'))
# Sweep exponent from zero (off) to one (on) and back to two (off)
param_sweep = cirq.Linspace('t', start=0, stop=2, length=200)
# Simulate the sweep
s = cirq.Simulator()
trials = s.run_sweep(circuit, param_sweep, repetitions=1000)
# Plot all the results
x_data = [trial.params['t'] for trial in trials]
y_data = [trial.histogram(key='m')[1] / 1000.0 for trial in trials]
plt.scatter('t', 'p', data={'t': x_data, 'p': y_data})
plt.xlabel("trials")
plt.ylabel("frequency of qubit measured to be one")
plt.show()
"""
Explanation: A histogram over the states that were actually observed can often be more useful when analyzing results. To learn more about the available options for creating result histograms, see the State Histograms page.
Using parameter sweeps
Cirq circuits allow for gates to have symbols as free parameters within the circuit. This is especially useful for variational algorithms, which vary parameters within the circuit in order to optimize a cost function, but it can be useful in a variety of circumstances.
For parameters, cirq uses the sympy library: a sympy.Symbol can be passed as a parameter to gates and operations.
Once the circuit is complete, you can fill in the possible values of each of these parameters with a Sweep. There are several possibilities that can be used as a sweep:
cirq.Points: A list of manually specified values for one specific symbol as a sequence of floats
cirq.Linspace: A linear sweep from a starting value to an ending value.
cirq.ListSweep: A list of manually specified values for several different symbols, specified as a list of dictionaries.
cirq.Zip and cirq.Product: Sweeps can be combined list-wise by zipping them together or through their Cartesian product.
A parameterized circuit and sweep together can be run using the simulator or other sampler by changing run() to run_sweep() and adding the sweep as a parameter.
Here is an example of sweeping the exponent of an X gate:
End of explanation
"""
print('Unitary of the X gate')
print(cirq.unitary(cirq.X))
print('Unitary of SWAP operator on two qubits.')
q0, q1 = cirq.LineQubit.range(2)
print(cirq.unitary(cirq.SWAP(q0, q1)))
print('Unitary of a sample circuit')
print(cirq.unitary(cirq.Circuit(cirq.X(q0), cirq.SWAP(q0, q1))))
"""
Explanation: Unitary matrices and decompositions
Many quantum operations have unitary matrix representations. This matrix can be accessed by applying cirq.unitary(operation) to that operation. This can be applied to gates, operations, and circuits that support this protocol and will return the unitary matrix that represents the object. See Protocols for more about this and other protocols.
End of explanation
"""
print(cirq.decompose(cirq.H(cirq.LineQubit(0))))
"""
Explanation: Decompositions
Many gates can be decomposed into an equivalent circuit with simpler operations and gates. This is called decomposition and can be accomplished with the cirq.decompose protocol.
For instance, a Hadamard H gate can be decomposed into X and Y gates:
End of explanation
"""
q0, q1, q2 = cirq.LineQubit.range(3)
print(cirq.Circuit(cirq.decompose(cirq.TOFFOLI(q0, q1, q2))))
"""
Explanation: Another example is the 3-qubit Toffoli gate, which is equivalent to a controlled-controlled-X gate. Many devices do not support a three qubit gate, so it is important
End of explanation
"""
q = cirq.GridQubit(1, 1)
c = cirq.Circuit(cirq.X(q) ** 0.25, cirq.Y(q) ** 0.25, cirq.Z(q) ** 0.25)
print(c)
c = cirq.merge_single_qubit_gates_to_phxz(c)
print(c)
"""
Explanation: The above decomposes the Toffoli into a simpler set of one-qubit gates and two-qubit CZ gates at the cost of lengthening the circuit considerably.
Transformers
The last concept in this tutorial is the transformer. A transformer can take a circuit and modify it. Usually, this will entail combining or modifying operations to make it more efficient and shorter, though a transformer can, in theory, do any sort of circuit manipulation.
For example, the cirq.merge_single_qubit_gates_to_phxz transformer will take consecutive single-qubit operations and merge them into a single PhasedXZ operation.
End of explanation
"""
|
martinggww/lucasenlights | MachineLearning/DataScience-Python3/MatPlotLib.ipynb | cc0-1.0 | %matplotlib inline
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-3, 3, 0.01)
plt.plot(x, norm.pdf(x))
plt.show()
"""
Explanation: MatPlotLib Basics
Draw a line graph
End of explanation
"""
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
"""
Explanation: Multiple Plots on One Graph
End of explanation
"""
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.savefig('C:\\Users\\Frank\\MyPlot.png', format='png')
"""
Explanation: Save it to a File
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
"""
Explanation: Adjust the Axes
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
"""
Explanation: Add a Grid
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.show()
"""
Explanation: Change Line Types and Colors
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.xlabel('Greebles')
plt.ylabel('Probability')
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.legend(['Sneetches', 'Gacks'], loc=4)
plt.show()
"""
Explanation: Labeling Axes and Adding a Legend
End of explanation
"""
plt.xkcd()
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(30)
plt.annotate(
'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))
plt.plot(data)
plt.xlabel('time')
plt.ylabel('my overall health')
"""
Explanation: XKCD Style :)
End of explanation
"""
# Remove XKCD mode:
plt.rcdefaults()
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
explode = [0, 0, 0.2, 0, 0]
labels = ['India', 'United States', 'Russia', 'China', 'Europe']
plt.pie(values, colors=colors, labels=labels, explode=explode)
plt.title('Student Locations')
plt.show()
"""
Explanation: Pie Chart
End of explanation
"""
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
plt.bar(range(0, 5), values, color=colors)
plt.show()
"""
Explanation: Bar Chart
End of explanation
"""
from pylab import randn
X = randn(500)
Y = randn(500)
plt.scatter(X,Y)
plt.show()
"""
Explanation: Scatter Plot
End of explanation
"""
incomes = np.random.normal(27000, 15000, 10000)
plt.hist(incomes, 50)
plt.show()
"""
Explanation: Histogram
End of explanation
"""
uniformSkewed = np.random.rand(100) * 100 - 40
high_outliers = np.random.rand(10) * 50 + 100
low_outliers = np.random.rand(10) * -50 - 100
data = np.concatenate((uniformSkewed, high_outliers, low_outliers))
plt.boxplot(data)
plt.show()
"""
Explanation: Box & Whisker Plot
Useful for visualizing the spread & skew of data.
The red line represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles.
So, half of the data exists within the box.
The dotted-line "whiskers" indicate the range of the data - except for outliers, which are plotted outside the whiskers. Outliers are 1.5X or more the interquartile range.
This example below creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100:
End of explanation
"""
|
lknelson/DH-Institute-2017 | 01-Intro to NLP/.ipynb_checkpoints/Intro to NLP-checkpoint.ipynb | bsd-2-clause | print("For me it has to do with the work that gets done at the crossroads of digital media and traditional humanistic study. And that happens in two different ways. On the one hand, it's bringing the tools and techniques of digital media to bear on traditional humanistic questions; on the other, it's also bringing humanistic modes of inquiry to bear on digital media.")
# Assign the quote to a variable, so we can refer back to it later
# We get to make up the name of our variable, so let's give it a descriptive label: "sentence"
sentence = "For me it has to do with the work that gets done at the crossroads of digital media and traditional humanistic study. And that happens in two different ways. On the one hand, it's bringing the tools and techniques of digital media to bear on traditional humanistic questions; on the other, it's also bringing humanistic modes of inquiry to bear on digital media."
# Oh, also: anything on a line starting with a hashtag is called a comment,
# and is meant to clarify code for human readers. The computer ignores these lines.
# Print the contents of the variable 'sentence'
print(sentence)
"""
Explanation: Introduction to Natural Language Processing (NLP)
Generally speaking, <i>Computational Text Analysis</i> is a set of interpretive methods which seek to understand patterns in human discourse, in part through statistics. More familiar methods, such as close reading, are exceptionally well-suited to the analysis of individual texts; however, our research questions typically compel us to look for relationships across texts, sometimes numbering in the thousands or even millions. We have to zoom out in order to perform so-called <i>distant reading</i>. Fortunately for us, computers are well-suited to identifying the kinds of textual relationships that exist at scale.
We will spend the week exploring research questions that computational methods can help to answer and thinking about how these complement -- rather than displace -- other interpretive methods. Before moving to that conceptual level, however, we will familiarize ourselves with the basic tools of the trade.
<i>Natural Language Processing</i> is an umbrella term for the methods by which a computer handles human language text. This includes transforming the text into a numerical form that the computer manipulates natively, as well as the measurements that researchers often perform. In the parlance, <i>natural language</i> refers to a language spoken by humans, as opposed to a <i>formal language</i>, such as Python, which comprises a set of logical operations.
The goal of this lesson is to jump right in to text analysis and natural language processing. Rather than starting with the nitty gritty of programming in Python, this lesson will demonstrate some neat things you can do with a minimal amount of coding. Today, we aim to build intuition about how computers read human text and learn some of the basic operations we'll perform with them.
Lesson Outline
Jargon
Text in Python
Tokenization & Term Frequency
Pre-Processing:
Changing words to lowercase
Removing stop words
Removing punctuation
Part-of-Speech Tagging
Tagging tokens
Counting tagged tokens
Demonstration: Guess the Novel!
Concordance
0. Key Jargon
General
programming (or coding)
A program is a sequence of instructions given to the computer, in order to perform a specific task. Those instructions are written in a specific programming language, in our case, Python. Writing these instructions can be an art as much as a science.
Python
A general-use programming language that is popular for NLP and statistics.
script
A block of executable code.
Jupyter Notebook
Jupyter is a popular interface in which Python scripts can be written and executed. Stand-alone scripts are saved in Notebooks. The script can be sub-divided into units called <i>cells</i> and executed individually. Cells can also contain discursive text and html formatting (such as in this cell!)
package (or module)
Python offers a basic set of functions that can be used off-the-shelf. However, we often wish to go beyond the basics. To that end, <i>packages</i> are collections of python files that contain pre-made functions. These functions are made available to our program when we <i>import</i> the package that contains them.
Anaconda
Anaconda is a <i>platform</i> for programming in Python. A platform constitutes a closed environment on your computer that has been standardized for functionality. For example, Anaconda contains common packages and programming interfaces for Python, and its developers ensure compatibility among the moving parts.
When Programming
variable
A variable is a generic container that stores a value, such as a number or series of letters. This is not like a variable from high-school algebra, which had a single "correct" value that must be solved. Rather, the user <i>assigns</i> values to the variable in order to perform operations on it later.
string
A type of object consisting of a single sequence of alpha-numeric characters. In Python, a string is indicated by quotation marks around the sequence.
list
A type of object that consists of a sequence of elements.
Natural Language Processing
pre-processing
Transforming a human language text into computer-manipulable format. A typical pre-processing workflow includes <i>stop-word</i> removal, setting text in lower case, and <i>term frequency</i> counting.
token
An individual word unit within a sentence.
stop words
The function words in a natural language, such as <i>the</i>, <i>of</i>, <i>it</i>, etc. These are typically the most common words.
term frequency
The number of times a term appears in a given text. This is either reported as a raw tally or <i>normalized</i> by dividing by the total number of words in a text.
POS tagging
One common task in NLP is the determination of a word's part-of-speech (POS). The label that describes a word's POS is called its <i>tag</i>. Specialized functions that make these determinations are called <i>POS Taggers</i>.
concordance
Index of instances of a given word (or other linguistic feature) in a text. Typically, each instance is presented within a contextual window for human readability.
NLTK (Natural Language Tool Kit)
A common Python package that contains many NLP-related functions
Further Resources:
Check out the full range of techniques included in Python's nltk package here: http://www.nltk.org/book/
1. Text in Python
First, a quote about what digital humanities means, from digital humanist Kathleen Fitzpatrick. Source: "On Scholarly Communication and the Digital Humanities: An Interview with Kathleen Fitzpatrick", In the Library with the Lead Pipe
End of explanation
"""
# Import the NLTK (Natural Language Tool Kit) package
import nltk
# Tokenize our sentence!
nltk.word_tokenize(sentence)
# Create new variable that contains our tokenized sentence
sentence_tokens = nltk.word_tokenize(sentence)
# Inspect our new variable
# Note the square braces at the beginning and end that indicate we are looking at a list-type object
print(sentence_tokens)
"""
Explanation: 2. Tokenizing Text and Counting Words
The above output is how a human would read that sentence. Next we look the main way in which a computer "reads", or parses, that sentence.
The first step is typically to <i>tokenize</i> it, or to change it into a series of <i>tokens</i>. Each token roughly corresponds to either a word or punctuation mark. These smaller units are more straight-forward for the computer to handle for tasks like counting.
End of explanation
"""
# How many tokens are in our list?
len(sentence_tokens)
# How often does each token appear in our list?
import collections
collections.Counter(sentence_tokens)
# Assign those token counts to a variable
token_frequency = collections.Counter(sentence_tokens)
# Get an ordered list of the most frequent tokens
token_frequency.most_common(10)
"""
Explanation: Note on Tokenization
While seemingly simple, tokenization is a non-trivial task.
For example, notice how the tokenizer has handled contractions: a contracted word is divided into two separate tokens! What do you think is the motivation for this? How else might you tokenize them?
Also notice each token is either a word or punctuation mark. In practice, it is sometimes useful to remove punctuation marks and at other times to include them, depending on the situation.
In the coming days, we will see other tokenizers and have opportunities to explore their reasoning. For now, we will look at a few examples of NLP tasks that tokenization enables.
End of explanation
"""
# Let's revisit our original sentence
sentence
# And now transform it to lower case, all at once
sentence.lower()
# Okay, let's set our list of tokens to lower case, one at a time
# The syntax of the line below is tricky. Don't worry about it for now.
# We'll spend plenty of time on it tomorrow!
lower_case_tokens = [ word.lower() for word in sentence_tokens ]
# Inspect
print(lower_case_tokens)
"""
Explanation: Note on Term Frequency
Some of the most frequent words appear to summarize the sentence: in particular the words "humanistic", "digital", and "media". However, most of these terms seem to add noise to the summary: "the", "it", "to", ".", etc.
There are many strategies for identifying the most important words in a text, and we will cover the most popular ones over the next week. Today, we will look at two of them. In the first, we will simply remove the noisy tokens. In the second, we will identify important words using their parts of speech.
3. Pre-Processing: Lower Case, Remove Stop Words and Punctuation
Typically, a text goes through a number of pre-processing steps before beginning to the actual analysis. We have already seen the tokenization step. Typically, pre-processing includes transforming tokens to lower case and removing stop words and punctuation marks.
Again, pre-processing is a non-trivial process that can have large impacts on the analysis that follows. For instance, what will be the most common token in our example sentence, once we set all tokens to lower case?
Lower Case
End of explanation
"""
# Import the stopwords list
from nltk.corpus import stopwords
# Take a look at what stop words are included
print(stopwords.words('english'))
# Try another language
print(stopwords.words('spanish'))
# Create a new variable that contains the sentence tokens but NOT the stopwords
tokens_nostops = [ word for word in lower_case_tokens if word not in stopwords.words('english') ]
# Inspect
print(tokens_nostops)
"""
Explanation: Stop Words
End of explanation
"""
# Import a list of punctuation marks
import string
# Inspect
string.punctuation
# Remove punctuation marks from token list
tokens_clean = [word for word in tokens_nostops if word not in string.punctuation]
# See what's left
print(tokens_clean)
"""
Explanation: Punctuation
End of explanation
"""
# Count the new token list
word_frequency_clean = collections.Counter(tokens_clean)
# Most common words
word_frequency_clean.most_common(10)
"""
Explanation: Re-count the Most Frequent Words
End of explanation
"""
# Let's revisit our original list of tokens
print(sentence_tokens)
# Use the NLTK POS tagger
nltk.pos_tag(sentence_tokens)
# Assign POS-tagged list to a variable
tagged_tokens = nltk.pos_tag(sentence_tokens)
"""
Explanation: Better! The ten most frequent words now give us a pretty good sense of the substance of this sentence. But we still have problems. For example, the token "'s" sneaked in there. One solution is to keep adding stop words to our list, but this could go on forever and is not a good solution when processing lots of text.
There's another way of identifying content words, and it involves identifying the part of speech of each word.
4. Part-of-Speech Tagging
You may have noticed that stop words are typically short function words, like conjunctions and prepositions. Intuitively, if we could identify the part of speech of a word, we would have another way of identifying which contribute to the text's subject matter. NLTK can do that too!
NLTK has a <i>POS Tagger</i>, which identifies and labels the part-of-speech (POS) for every token in a text. The particular labels that NLTK uses come from the Penn Treebank corpus, a major resource from corpus linguistics.
You can find a list of all Penn POS tags here: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
Note that, from this point on, the code is going to get a little more complex. Don't worry about the particularities of each line. For now, we will focus on the NLP tasks themselves and the textual patterns they identify.
End of explanation
"""
# We'll tread lightly here, and just say that we're counting POS tags
tag_frequency = collections.Counter( [ tag for (word, tag) in tagged_tokens ])
# POS Tags sorted by frequency
tag_frequency.most_common()
"""
Explanation: Most Frequent POS Tags
End of explanation
"""
# Let's filter our list, so it only keeps adjectives
adjectives = [word for word,pos in tagged_tokens if pos == 'JJ' or pos=='JJR' or pos=='JJS']
# Inspect
print( adjectives )
# Tally the frequency of each adjective
adj_frequency = collections.Counter(adjectives)
# Most frequent adjectives
adj_frequency.most_common(5)
# Let's do the same for nouns.
nouns = [word for word,pos in tagged_tokens if pos=='NN' or pos=='NNS']
# Inspect
print(nouns)
# Tally the frequency of the nouns
noun_frequency = collections.Counter(nouns)
# Most Frequent Nouns
print(noun_frequency.most_common(5))
"""
Explanation: Now it's getting interesting
The "IN" tag refers to prepositions, so it's no surprise that it should be the most common. However, we can see at a glance now that the sentence contains a lot of adjectives, "JJ". This feels like it tells us something about the rhetorical style or structure of the sentence: certain qualifiers seem to be important to the meaning of the sentence.
Let's dig in to see what those adjectives are.
End of explanation
"""
# And we'll do the verbs in one fell swoop
verbs = [word for word,pos in tagged_tokens if pos == 'VB' or pos=='VBD' or pos=='VBG' or pos=='VBN' or pos=='VBP' or pos=='VBZ']
verb_frequency = collections.Counter(verbs)
print(verb_frequency.most_common(5))
# If we bring all of this together we get a pretty good summary of the sentence
print(adj_frequency.most_common(3))
print(noun_frequency.most_common(3))
print(verb_frequency.most_common(3))
"""
Explanation: And now verbs.
End of explanation
"""
# Read the two text files from your hard drive
# Assign first mystery text to variable 'text1' and second to 'text2'
text1 = open('text1.txt').read()
text2 = open('text2.txt').read()
# Tokenize both texts
text1_tokens = nltk.word_tokenize(text1)
text2_tokens = nltk.word_tokenize(text2)
# Set to lower case
text1_tokens_lc = [word.lower() for word in text1_tokens]
text2_tokens_lc = [word.lower() for word in text2_tokens]
# Remove stopwords
text1_tokens_nostops = [word for word in text1_tokens_lc if word not in stopwords.words('english')]
text2_tokens_nostops = [word for word in text2_tokens_lc if word not in stopwords.words('english')]
# Remove punctuation using the list of punctuation from the string package
text1_tokens_clean = [word for word in text1_tokens_nostops if word not in string.punctuation]
text2_tokens_clean = [word for word in text2_tokens_nostops if word not in string.punctuation]
# Frequency distribution
text1_word_frequency = collections.Counter(text1_tokens_clean)
text2_word_frequency = collections.Counter(text2_tokens_clean)
# Guess the novel!
text1_word_frequency.most_common(20)
# Guess the novel!
text2_word_frequency.most_common(20)
"""
Explanation: 5. Demonstration: Guess the Novel
To illustrate this process on a slightly larger scale, we will do exactly what we did above, but on two unknown novels. Your challenge: guess the novels from the most frequent words.
We will do this in one chunk of code, so another challenge for you during breaks or the next few weeks is to see how much of the following code you can follow (or, in computer science terms, how much of the code you can parse). If the answer is none, not to worry! Tomorrow we will take a step back and work on the nitty gritty of programming.
End of explanation
"""
# Transform our raw token lists in NLTK Text-objects
text1_nltk = nltk.Text(text1_tokens)
text2_nltk = nltk.Text(text2_tokens)
# Really they're no different from the raw text, but they have additional useful functions
print(text1_nltk)
print(text2_nltk)
# Like a concordancer!
text1_nltk.concordance("monstrous")
text2_nltk.concordance("monstrous")
"""
Explanation: Computational Text Analysis is not simply the processing of texts through computers, but involves reflection on the part of human interpreters. How were you able to tell what each novel was? Do you notice any differences between each novel's list of frequent words?
The patterns that we notice in our computational model often enrich and extend our research questions -- sometimes in surprising ways! What next steps would you take to investigate these novels?
6. Concordances and Similar Words using NLTK
Tallying word frequencies gives us a bird's-eye view of our text, but we lose one important aspect: context. As the dictum goes: "You shall know a word by the company it keeps."
Concordances show us every occurrence of a given word in a text, inside a window of context words that appear before and after it. This is helpful for close reading to get at a word's meaning by seeing how it is used. We can also use the logic of shared context in order to identify which words have similar meanings. To illustrate this, we can compare the way the word "monstrous" is used in our two novels.
Concordance
End of explanation
"""
# Get words that appear in a similar context to "monstrous"
text1_nltk.similar("monstrous")
text2_nltk.similar("monstrous")
"""
Explanation: Contextual Similarity
End of explanation
"""
|
kit-cel/lecture-examples | qc/quantization/Uniform_Quantization_Sine.ipynb | gpl-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import librosa
import librosa.display
import IPython.display as ipd
"""
Explanation: Illustration of Uniform Quantization
This code is provided as supplementary material of the lecture Quellencodierung.
This code illustrates
* Uniform scalar quantization with midrise characteristic
End of explanation
"""
sr = 22050 # sample rate
T = 1.0 # seconds
t = np.linspace(0, T, int(T*sr), endpoint=False) # time variable
x = np.sin(2*np.pi*2*t) # pure sine wave at 2 Hz
"""
Explanation: Generate artificial signal
$$
x[k] = \sin\left(2\pi\frac{2k}{f_s}\right),\qquad k = 0,\ldots,f_s-1
$$
End of explanation
"""
# Quantize to 4 bits, i.e., 16 quantization levels
w = 4
# fix x_max based on the current signal, leave some tiny room
x_max = np.max(x) + 1e-10
Delta_x = x_max / (2**(w-1))
xh_max = (2**w-1)*Delta_x/2
# Quantize
xh_uniform_midrise = np.sign(x)*Delta_x*(np.floor(np.abs(x)/Delta_x)+0.5)
font = {'size' : 12}
plt.rc('font', **font)
plt.rc('text', usetex=True)
plt.figure(figsize=(6, 6))
plt.subplot(3,1,1)
plt.plot(range(len(t)),x, c=(0,0.59,0.51))
plt.autoscale(enable=True, axis='x', tight=True)
#plt.title('Original')
plt.xlabel('Sample index $k$', fontsize=14)
plt.ylabel('$x[k]$', fontsize=14)
plt.ylim((-1.1,+1.1))
plt.subplot(3,1,2)
plt.plot(range(len(t)),xh_uniform_midrise, c=(0,0.59,0.51))
plt.autoscale(enable=True, axis='x', tight=True)
#plt.title('Quantized')
plt.xlabel('Sample index $k$', fontsize=14)
plt.ylabel('$\hat{x}[k]$', fontsize=14)
plt.ylim((-1.1,+1.1))
plt.subplot(3,1,3)
plt.plot(range(len(t)),xh_uniform_midrise-x,c=(0,0.59,0.51))
plt.autoscale(enable=True, axis='x', tight=True)
#plt.title('Quantization error signal')
plt.xlabel('Sample index $k$', fontsize=14)
plt.ylabel('$e[k]$', fontsize=14)
plt.ylim((-1.1,+1.1))
plt.tight_layout()
#plt.savefig('figure_DST_7.2c.pdf',bbox_inches='tight')
"""
Explanation: Uniform Quantization
End of explanation
"""
|
amirziai/learning | python/Using reindex for adding missing columns to a dataframe.ipynb | mit | import pandas as pd
df = pd.DataFrame([
{
'a': 1,
'b': 2,
'd': 4
}
])
df
"""
Explanation: Use reindex for adding missing columns to a dataframe
End of explanation
"""
columns = ['a', 'b', 'c', 'd']
df.reindex(columns=columns, fill_value=0)
"""
Explanation: Using reindex to add missing columns to a dataframe
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
End of explanation
"""
columns_subset = columns[:2]
columns_subset
df.reindex(columns=columns_subset, fill_value=0)
"""
Explanation: This can also be used to get a subset of the columns
End of explanation
"""
df[columns_subset]
"""
Explanation: Which is probably better done this way
End of explanation
"""
|
Mynti207/cs207project | docs/demo.ipynb | mit | # you must specify the length of the time series when loading the database
ts_length = 100
# when running from the terminal
# python go_server_persistent.py --ts_length 100 --db_name 'demo'
# here we load the server as a subprocess for demonstration purposes
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(ts_length), '--data_dir', '../db_files', '--db_name', 'demo'])
time.sleep(5) # make sure it loads completely
"""
Explanation: Time Series Database
Summary
This package implements a
time series database with the following functionality:
* Insert time series data. May be followed by running a pre-defined function (trigger), if previously specified.
* Upsert (insert/update) time series metadata.
* Delete time series data and all associated metadata.
* Perform select (query) of time series data and/or metadata.
* Perform augmented select (query, followed by a pre-defined function) of time series data and/or metadata.
* Add a trigger that will cause a pre-defined function to be run upon execution of a particular database operation (e.g. calculate metadata fields after adding a new time series).
* Remove a trigger associated with a database operation and a pre-defined function.
* Add a vantage point (necessary to run vantage point similarity searches).
* Remove a vantage point and all associated data.
* Run a vantage point similarity search, to find the closest (most similar) time series in the database.
* Run an iSAX tree-based similarity search, to find the closest (most similar) time series in the database. This is a faster search technique, but it only returns an approximate answer and may not always find a match.
* Visualize the iSAX tree.
Initialization
The time series database can be accessed through a web interface, which directly executes database operations via the webserver (REST API).
Before running any database operations, you must:
Load the database server. You may pass the following arguments when loading the server.
--ts_length: Specifies the length of the time series, which must be consistent for all time series loaded into the database.
--data_dir: Specifies the directory where the database files are stored (optional).
--db_name: Specifies the database name (optional, but strongly recommended!).
End of explanation
"""
# when running from the terminal
# python go_webserver.py
# here we load the server as a subprocess for demonstration purposes
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # make sure it loads completely
"""
Explanation: Load the database webserver.
End of explanation
"""
from webserver import *
web_interface = WebInterface()
"""
Explanation: Import the web interface and initialize it.
End of explanation
"""
import numpy as np
from scipy.stats import norm

from timeseries import *
def tsmaker(m, s, j):
'''
Helper function: randomly generates a time series for testing.
Parameters
----------
m : float
Mean value for generating time series data
s : float
Standard deviation value for generating time series data
j : float
Quantifies the "jitter" to add to the time series data
Returns
-------
A time series and associated meta data.
'''
# generate metadata
meta = {}
meta['order'] = int(np.random.choice(
[-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]))
meta['blarg'] = int(np.random.choice([1, 2]))
# generate time series data
t = np.arange(0.0, 1.0, 0.01)
v = norm.pdf(t, m, s) + j * np.random.randn(ts_length)
# return time series and metadata
return meta, TimeSeries(t, v)
# generate sample time series
num_ts = 50
mus = np.random.uniform(low=0.0, high=1.0, size=num_ts)
sigs = np.random.uniform(low=0.05, high=0.4, size=num_ts)
jits = np.random.uniform(low=0.05, high=0.2, size=num_ts)
# initialize dictionaries for time series and their metadata
primary_keys = []
tsdict = {}
metadict = {}
# fill dictionaries with randomly generated entries for database
for i, m, s, j in zip(range(num_ts), mus, sigs, jits):
meta, tsrs = tsmaker(m, s, j) # generate data
pk = "ts-{}".format(i) # generate primary key
primary_keys.append(pk) # keep track of all primary keys
tsdict[pk] = tsrs # store time series data
metadict[pk] = meta # store metadata
"""
Explanation: The instructions below assume that these three steps have been carried out.
Database Operations
Let's create some dummy data to aid in our demonstration. You will need to import the timeseries package to work with the TimeSeries format.
Note: the database is persistent, so it can store data between sessions, but we will start with an empty database here for demonstration purposes.
End of explanation
"""
# insert all the time series
for k in primary_keys:
web_interface.insert_ts(pk=k, ts=tsdict[k])
# check what is in the database
web_interface.select(fields=None, additional={'sort_by': '+pk', 'limit': 10})
# successfully inserting data will yield a success code
web_interface.insert_ts(pk='sample1', ts=tsdict[primary_keys[0]])
# errors will yield an error code (e.g. attempting to insert the same primary key twice)
web_interface.insert_ts(pk='sample1', ts=tsdict[primary_keys[0]])
# let's remove the test time series
web_interface.delete_ts('sample1')
"""
Explanation: Insert Time Series
Inserts a new time series into the database. If any triggers are associated with time series insertion, then these are run and the results of their operations are also stored in the database.
Function signature:
insert_ts(pk, ts)
Parameters
----------
pk : any hashable type
Primary key for the new database entry
ts : TimeSeries
Time series to be inserted into the database
Returns
-------
Result of the database operation (or error message).
Examples:
End of explanation
"""
# upsert the metadata
for k in primary_keys:
web_interface.upsert_meta(k, metadict[k])
# let's check the first five entries in the database - they should include metadata
web_interface.select(fields=[], additional={'sort_by': '+pk', 'limit': 5})
"""
Explanation: Upsert Metadata
Inserts or updates metadata associated with a time series. Non-specified fields will be assigned a default value.
Function signature:
upsert_meta(pk, md)
Parameters
----------
pk : any hashable type
Primary key for the database entry
md : dictionary
Metadata to be upserted into the database
Returns
-------
Result of the database operation (or error message).
Examples:
End of explanation
"""
# example primary key to delete
primary_keys[0]
# delete an existing time series
web_interface.delete_ts(primary_keys[0])
# check what is in the database - should not include the deleted key
# note: select operations return dictionaries, so you can use the keys(), values(), and items() methods
web_interface.select(additional={'sort_by': '+pk'}).keys()
# double-check!
primary_keys[0] in web_interface.select(additional={'sort_by': '+pk'}).keys()
# add the time series and metadata back in
web_interface.insert_ts(primary_keys[0], tsdict[primary_keys[0]])
web_interface.upsert_meta(primary_keys[0], metadict[primary_keys[0]])
# check what is in the database - should include the newly added key
web_interface.select(additional={'sort_by': '+pk'}).keys()
"""
Explanation: Delete Time Series
Deletes a time series and all associated metadata from the database.
Function signature:
delete_ts(pk)
Parameters
----------
pk : any hashable type
Primary key for the database entry to be deleted
Returns
-------
Result of the database operation (or error message).
Examples:
End of explanation
"""
# select all database entries; no metadata fields
web_interface.select(additional={'sort_by': '+pk', 'limit': 10})
# select all database entries; all metadata fields
web_interface.select(fields=[], additional={'sort_by': '+pk', 'limit': 10})
# select a specific time series; all metadata fields
web_interface.select(md={'pk': 'ts-0'}, fields=[])
"""
Explanation: Select
Queries the database for time series and/or associated metadata.
Function signature:
select(md={}, fields=None, additional=None)
Parameters
----------
md : dictionary (default={})
Criteria to apply to metadata
fields : list (default=None)
List of fields to return
additional : dictionary (default=None)
Additional criteria (e.g. 'sort_by' and 'limit')
Returns
-------
Query results (or error message).
Additional search criteria:
sort_by: Sorts the query results in either ascending or descending order. Use + to denote ascending order and - to denote descending order.
e.g. {'sort_by': '+pk'} will sort by primary key in ascending order; {'sort_by': '-order'} will sort by the order metadata field in descending order.
limit: Caps the number of fields that are returned when used in conjunction with sort_by.
e.g. {'sort_by': '+pk', 'limit': 5} for the top 5 primary keys
Examples:
End of explanation
"""
# return a specific time series and the result of the 'stats' function (mean and standard deviation)
web_interface.augmented_select(
proc='stats', target=['mean', 'std'], arg=None, md={'pk': 'ts-0'}, additional=None)
"""
Explanation: Augmented Select
Queries the database for time series and/or associated metadata, then executes a pre-specified function on the data that is returned.
Note: the result of the function is not stored in the database.
Function signature:
augmented_select(proc, target, arg=None, md={}, additional=None)
Parameters
----------
proc : string
Name of the function to run when the trigger is met
target : string
Field names used to identify the results of the function.
arg : string (default=None)
Possible additional arguments (e.g. time series for similarity search)
md : dictionary (default={})
Criteria to apply to metadata
additional : dictionary (default=None)
Additional criteria ('sort_by' and 'order')
Returns
-------
Query results (or error message).
Additional search criteria:
sort_by: Sorts the query results in either ascending or descending order. Use + to denote ascending order and - to denote descending order.
e.g. {'sort_by': '+pk'} will sort by primary key in ascending order; {'sort_by': '-order'} will sort by the order metadata field in descending order.
limit: Caps the number of fields that are returned when used in conjunction with sort_by.
e.g. {'sort_by': '+pk', 'limit': 5} for the top 5 primary keys
Available trigger functions:
corr: Calculates the distance between two time series, using the normalized kernelized cross-correlation metric. Required argument: a TimeSeries object.
stats: Calculates the mean and standard deviation of time series values. No arguments required.
Examples:
End of explanation
"""
# add trigger
web_interface.add_trigger('stats', 'insert_ts', ['mean', 'std'], None)
# add a new time series with the trigger (note: not adding metadata)
web_interface.insert_ts('test', tsdict[primary_keys[0]])
# inspect the results of the trigger - should include mean and std fields
web_interface.select(md={'pk': 'test'}, fields=[])
# delete back out
web_interface.delete_ts('test')
"""
Explanation: Add Trigger
Adds a trigger that will cause a pre-defined function to be run upon execution of a particular database operation. For example, additional metadata fields may be calculated upon insertion of new time series data.
Function signature:
add_trigger(proc, onwhat, target, arg=None)
Parameters
----------
proc : string
Name of the function to run when the trigger is hit
onwhat : string
Operation that triggers the function (e.g. 'insert_ts')
target : string
Array of field names to which to apply the results of the function
arg : string (default=None)
Possible additional arguments for the function
Returns
-------
Result of the database operation (or error message).
Available trigger functions:
corr: Calculates the distance between two time series, using the normalized kernelized cross-correlation metric. Required argument: a TimeSeries object.
stats: Calculates the mean and standard deviation of time series values. No arguments required.
Examples:
End of explanation
"""
# remove trigger
web_interface.remove_trigger('stats', 'insert_ts')
# add a new time series without the trigger (note: not adding metadata)
web_interface.insert_ts('sample2', tsdict[primary_keys[0]])
# inspect the results of the trigger - should not include mean and std fields
web_interface.select(md={'pk': 'sample2'}, fields=[])
# delete back out
web_interface.delete_ts('sample2')
"""
Explanation: Remove Trigger
Removes a trigger associated with a database operation and a pre-defined function.
Function signature:
remove_trigger(proc, onwhat, target=None)
Parameters
----------
proc : string
Name of the function that is run when the trigger is hit
onwhat : string
Operation that triggers the function (e.g. 'insert_ts')
target : string
Array of field names to which the results are applied. If not provided, all triggers associated with the database operation and function will be removed.
Returns
-------
Result of the database operation (or error message).
Examples:
End of explanation
"""
# randomly choose time series as vantage points
num_vps = 5
random_vps = np.random.choice(range(num_ts), size=num_vps, replace=False)
vpkeys = ['ts-{}'.format(i) for i in random_vps]
# add the time series as vantage points
for i in range(num_vps):
web_interface.insert_vp(vpkeys[i])
"""
Explanation: Add Vantage Point
Marks a time series as a vantage point. Vantage points are necessary to carry out vantage point similarity searches.
Function signature:
insert_vp(pk)
Parameters
----------
pk : any hashable type
Primary key for the time series to be marked as a vantage point
Returns
-------
Result of the database operation (or error message).
Examples:
End of explanation
"""
# delete one of the vantage points
web_interface.delete_vp(vpkeys[0])
# add it back in
web_interface.insert_vp(vpkeys[0])
"""
Explanation: Delete Vantage Point
Unmarks a time series as a vantage point.
Function signature:
delete_vp(pk)
Parameters
----------
pk : any hashable type
Primary key for the time series to be unmarked as a vantage point
Returns
-------
Result of the database operation (or error message).
Examples:
End of explanation
"""
# run similarity search on a time series already in the database
# should return the same time series
primary_keys[0], web_interface.vp_similarity_search(tsdict[primary_keys[0]], 1)
# create dummy time series for demonstration purposes
_, query = tsmaker(np.random.uniform(low=0.0, high=1.0),
np.random.uniform(low=0.05, high=0.4),
np.random.uniform(low=0.05, high=0.2))
results = web_interface.vp_similarity_search(query, 1)
results
# visualize the results
import matplotlib.pyplot as plt

plt.plot(query, label='Query TS')
plt.plot(tsdict[list(results.keys())[0]], label='Closest TS')
plt.legend(loc='best')
plt.xticks([])
plt.show()
"""
Explanation: Vantage Point Similarity Search
Runs a vantage point similarity search, to find the closest (most similar) time series in the database.
Function signature:
similarity_search(self, query, top=1)
Parameters
----------
query : TimeSeries
The time series being compared to those in the database
top : int
The number of closest time series to return (default=1)
Returns
-------
Primary key and distance to the closest time series (or error message if database operation fails).
Examples:
End of explanation
"""
# run similarity search on a time series already in the database
# should return the same time series
primary_keys[0], web_interface.isax_similarity_search(tsdict[primary_keys[0]])
# create dummy time series for demonstration purposes
_, query = tsmaker(np.random.uniform(low=0.0, high=1.0),
np.random.uniform(low=0.05, high=0.4),
np.random.uniform(low=0.05, high=0.2))
# note: because this is an approximate search, it will not be able
# to find a match for all query time series
results = web_interface.isax_similarity_search(query)
results
# visualize the results
plt.plot(query, label='Query TS')
plt.plot(tsdict[list(results.keys())[0]], label='Closest TS')
plt.legend(loc='best')
plt.xticks([])
plt.show()
"""
Explanation: iSAX Tree Similarity Search
Runs an iSAX tree-based similarity search, which is faster but only returns an approximate result.
Function signature:
isax_similarity_search(query)
Parameters
----------
query : TimeSeries
The time series being compared to those in the database
Returns
-------
Primary key of the closest time series (or error message if database operation fails).
Examples:
End of explanation
"""
# note: print() is required to visualize the tree correctly with carriage returns
print(web_interface.isax_tree())
"""
Explanation: iSAX Tree Representation
Returns a visual representation of the current contents of the iSAX tree.
Function signature:
isax_tree()
Parameters
----------
None
Returns
-------
Result of the database operation (or error message).
Examples:
End of explanation
"""
# insert all the time series
for k in primary_keys:
web_interface.delete_ts(pk=k)
# check that no data is left
web_interface.select()
"""
Explanation: Termination
Let's delete all the data before closing, so that we can start again from scratch in future demonstrations.
End of explanation
"""
# terminate processes before exiting
import os
import signal

os.kill(server.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
web_interface = None
webserver.terminate()
"""
Explanation: Remember to terminate any outstanding processes!
End of explanation
"""
|
rainyear/pytips | Tips/2016-03-08-Functional-Programming-in-Python.ipynb | mit | # map 函数的模拟实现
def myMap(func, iterable):
for arg in iterable:
yield func(arg)
names = ["ana", "bob", "dogge"]
print(map(lambda x: x.capitalize(), names)) # in Python 2.7 this returns a list directly
for name in myMap(lambda x: x.capitalize(), names):
print(name)
# a simulated implementation of filter
def myFilter(func, iterable):
for arg in iterable:
if func(arg):
yield arg
print(filter(lambda x: x % 2 == 0, range(10))) # in Python 2.7 this returns a list directly
for i in myFilter(lambda x: x % 2 == 0, range(10)):
print(i)
"""
Explanation: Functional Programming in Python
Functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids program state and mutable objects. Its most important foundation is the lambda calculus, whose functions can take functions as input (arguments) and produce them as output (return values). (Wikipedia: Functional programming)
A programming paradigm is a style, method, or pattern of programming, e.g. procedural programming (C), object-oriented programming (C++), or functional programming (Haskell). A given language is not necessarily tied to one paradigm; Python, for instance, is a multi-paradigm language.
Functional programming
Functional programming has the following characteristics:
Avoid state variables
Functions are themselves values (first-class citizens)
Higher-order functions
Describe the problem rather than the steps to solve it
It is worth mentioning that in practice these traits may not be all that Pythonic, and may even run against The Zen of Python mentioned in 0x00. For example, the problem-describing style of functional programming can let you write more concise code faster, but readability may suffer badly (see this piece of Haskell code for an example). That said, although being Pythonic matters, it is not the only criterion. The Choice Is Yours.
map(function, iterable, ...)/filter(function, iterable)
End of explanation
"""
from functools import reduce
print(reduce(lambda a, b: a*b, range(1,5)))
"""
Explanation: functools.reduce(function, iterable[, initializer])
In Python 3, reduce was demoted from a built-in to the functools standard-library module. reduce likewise walks over the elements of an iterable, feeding each one to the function along with the accumulated result:
End of explanation
"""
from functools import partial
add = lambda a, b: a + b
add1024 = partial(add, 1024)
add1024(1)
"""
Explanation: functools.partial(func, *args, **keywords)
Partial application lets us fix some of a function's arguments in advance:
End of explanation
"""
|
CalPolyPat/phys202-2015-work | assignments/assignment06/InteractEx05.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import display, SVG
"""
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
s = """
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
"""
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
"""
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
"""Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
"""
s="""<svg width="%s" height="%s">
<circle cx="%s" cy="%s" r="%s" fill="%s" />
</svg>"""%(str(width), str(height), str(cx), str(cy), str(r), str(fill))
display(SVG(s))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
"""
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
"""
w=interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0,300, 1), cy=(0,300,1), r=(0,50,1), fill='red')
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
"""
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
"""
display(w)
assert True # leave this to grade the display of the widget
"""
Explanation: Use the display function to show the widgets created by interactive:
End of explanation
"""
|
ethen8181/machine-learning | time_series/fft/fft.ipynb | mit | # code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,matplotlib
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#First-Foray-Into-Discrete/Fast-Fourier-Transformation" data-toc-modified-id="First-Foray-Into-Discrete/Fast-Fourier-Transformation-1"><span class="toc-item-num">1 </span>First Foray Into Discrete/Fast Fourier Transformation</a></span><ul class="toc-item"><li><span><a href="#Correlation" data-toc-modified-id="Correlation-1.1"><span class="toc-item-num">1.1 </span>Correlation</a></span></li><li><span><a href="#Fourier-Transformation" data-toc-modified-id="Fourier-Transformation-1.2"><span class="toc-item-num">1.2 </span>Fourier Transformation</a></span></li><li><span><a href="#DFT-In-Action" data-toc-modified-id="DFT-In-Action-1.3"><span class="toc-item-num">1.3 </span>DFT In Action</a></span></li><li><span><a href="#Fast-Fourier-Transformation-(FFT)" data-toc-modified-id="Fast-Fourier-Transformation-(FFT)-1.4"><span class="toc-item-num">1.4 </span>Fast Fourier Transformation (FFT)</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
# create examples of two signals that are dissimilar
# and two that are similar to illustrate the concept
def create_signal(sample_duration, sample_freq, signal_type, signal_freq):
"""
Create some signals to work with, e.g. if we were to sample at 100 Hz
(100 times per second) and collect the data for 10 seconds, resulting
in 1000 samples in total. Then we would specify sample_duration = 10,
sample_freq = 100.
Apart from that, we will also give the option of generating sine or cosine
wave and the frequencies of these signals
"""
raw_value = 2 * np.pi * signal_freq * np.arange(0, sample_duration, 1. / sample_freq)
if signal_type == 'cos':
return np.cos(raw_value)
elif signal_type == 'sin':
return np.sin(raw_value)
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
plt.style.use('fivethirtyeight')
# dissimilar signals have low correlation
signal1 = create_signal(10, 100, 'sin', 0.1)
signal2 = create_signal(10, 100, 'cos', 0.1)
plt.plot(signal1, label='Sine')
plt.plot(signal2, label='Cosine')
plt.title('Correlation={:.1f}'.format(np.dot(signal1, signal2)))
plt.legend()
plt.show()
# similar signals have high correlation
signal1 = create_signal(10, 100, 'sin', 0.1)
signal2 = create_signal(10, 100, 'sin', 0.1)
plt.plot(signal1, label='Sine 1')
plt.plot(signal2, label='Sine 2', linestyle='--')
plt.title('Correlation={}'.format(np.dot(signal1, signal2)))
plt.legend()
plt.show()
"""
Explanation: First Foray Into Discrete/Fast Fourier Transformation
In many real-world applications, signals are typically represented as a time-dependent sequence of numbers. A digital audio signal is one common example; the hourly temperature in California is another. In order to extract meaningful characteristics from this kind of data, many transformation techniques have been developed to decompose it into simpler individual pieces that are much easier and more compact to reason with.
Discrete Fourier Transformation (DFT) is one of these algorithms: it takes a signal as input and breaks it down into many individual frequency components, giving us, the end users, easier pieces to work with. For a digital audio signal, applying the DFT tells us which tones are represented in the sound and at what energies.
Some basic familiarity with digital signal processing is assumed. The following link contains an excellent primer to get people up to speed, and I highly recommend going through all of it if the reader is not pressed for time. Blog: Seeing Circles, Sines, And Signals a Compact Primer On Digital Signal Processing
Correlation
Correlation is a widely used concept in signal processing. It must be noted that the definition of correlation here is slightly different from the one we encounter in statistics. In the context of signal processing, correlation measures how similar two signals are by computing the dot product between the two, i.e. given two signals $x$ and $y$, their correlation can be computed as:
\begin{align}
\sum_{n=0}^{N-1} x_n \cdot y_n
\end{align}
The intuition behind this is that if the two signals are indeed similar, then whenever $x_n$ is positive/negative, $y_n$ should also be positive/negative. Hence, when the two signals' signs often match, the resulting correlation number will also be large, indicating that the two signals are similar to one another. It is worth noting that correlation can also take on negative values: a large negative correlation means that the signals are also similar to each other, but one is inverted with respect to the other.
End of explanation
"""
# reminder:
# sample_duration means we're collecting the data for x seconds
# sample_freq means we're sampling x times per second
sample_duration = 10
sample_freq = 100
signal_type = 'sin'
num_samples = sample_freq * sample_duration
num_components = 4
components = np.zeros((num_components, num_samples))
components[0] = np.ones(num_samples)
components[1] = create_signal(sample_duration, sample_freq, signal_type, 10)
components[2] = create_signal(sample_duration, sample_freq, signal_type, 2)
components[3] = create_signal(sample_duration, sample_freq, signal_type, 0.5)
fig, ax = plt.subplots(nrows=num_components, sharex=True, figsize=(12,8))
for i in range(num_components):
ax[i].plot(components[i])
ax[i].set_ylim((-1.1, 1.1))
ax[i].set_title('Component {}'.format(i))
ax[i].set_ylabel('Amplitude')
ax[num_components - 1].set_xlabel('Samples')
plt.tight_layout()
"""
Explanation: Correlation is one of the key concepts behind DFT because, as we'll soon see, our goal in DFT is to find frequencies that give a high correlation with the signal at hand; a high amplitude of this correlation indicates the presence of that frequency in our signal.
Fourier Transformation
Fourier Transformation takes a time-based signal as input, measures every possible cycle, and returns the overall cycle components (by cycle, we're essentially referring to circles). Each cycle component stores the following information:
Amplitude: How big is the circle?
Frequency: How fast is it moving? The faster the cycle component is moving, the higher the frequency of the wave.
Phase: Where does it start, i.e. at what angle does it begin?
This cycle component is also referred to as a phasor. The following gif aims to turn this seemingly abstract description into a concrete process that we can visualize.
<img src="img/fft_decompose.gif">
After applying DFT to our signal shown on the right, we realized that it can be decomposed into five different phasors. Here, the center of the first phasor/cycle component is placed at the origin, and the center of each subsequent phasor is "attached" to the tip of the previous phasor. Once the chain of phasors is built, we begin rotating the phasor. We can then reconstruct the time domain signal by tracing the vertical distance from the origin to the tip of the last phasor.
Let's now take a look at DFT's formula:
\begin{align}
X_k = \sum_{n=0}^{N-1} x_n \cdot e^{ -\varphi \mathrm{i} }
\end{align}
$x_n$: The signal's value at time $n$.
$e^{-\varphi\mathrm{i}}$: Is a compact way of describing a pair of sine and cosine waves.
$\varphi = \frac{n}{N} 2\pi k$: Records the phase and frequency of our cycle components, where $N$ is the number of samples we have, $n$ is the current sample we're considering, and $k$ is the current frequency we're considering. The $2\pi k$ part represents the cycle component's speed measured in radians, and $n / N$ measures the percentage of time that our cycle component has traveled.
$X_k$: Amount of the cycle component with frequency $k$.
Side Note: If the reader is a bit rusty on trigonometry (sine and cosine) or complex numbers, there are already many excellent materials out there that cover these concepts, e.g. Blog: Trigonometry Review and Blog: Complex Numbers
From the formula, we notice that it's taking the dot product between the original signal $x_n$ and $e^{ -\varphi \mathrm{i} }$. If we expand $e^{ -\varphi \mathrm{i} }$ using Euler's formula, $e^{ -\varphi \mathrm{i} } = cos(\varphi) - sin(\varphi)i$, we end up with:
\begin{align}
X_k &= \sum_{n=0}^{N-1} x_n \cdot \big( cos(\varphi) - sin(\varphi)i \big) \\
&= \sum_{n=0}^{N-1} x_n \cdot cos(\varphi) - i \sum_{n=0}^{N-1} x_n \cdot sin(\varphi)
\end{align}
By breaking down the formula a little bit, we can see that underneath the hood, what Fourier transformation is doing is taking the input signal and performing two correlation calculations: one with the sine wave (which gives us the y coordinates of the circle) and one with the cosine wave (which gives us the x coordinates of the circle). The following succinct, colour-coded one-sentence explanation also makes a great quick reference.
<img src="img/fft_one_sentence.png" width="50%" height="50%">
DFT In Action
To see DFT in action, we will create a dummy signal that will be composed of four sinusoidal waves of different frequencies. 0, 10, 2 and 0.5 Hz respectively.
End of explanation
"""
signal = -0.5 * components[0] + 0.1 * components[1] + 0.2 * components[2] - 0.6 * components[3]
plt.plot(signal)
plt.xlabel('Samples')
plt.ylabel('Amplitude')
plt.show()
"""
Explanation: Then we will combine these individual signals together with some weights assigned to each signal.
End of explanation
"""
fft_result = np.fft.fft(signal)
print('length of fft result: ', len(fft_result))
fft_result[:5]
"""
Explanation: By looking at the dummy signal we've created visually, we might be able to notice the presence of a signal which shows 5 periods in the sampling duration of 10 seconds. In other words, after applying DFT to our signal, we should expect the presence a signal with the frequency of 0.5 HZ.
Here, we will leverage numpy's implementation to check whether the result makes intuitive sense or not. The implementation is called fft, but let's not worry about that for the moment.
End of explanation
"""
plt.plot(np.abs(fft_result))
plt.xlim((-5, 120)) # notice that we limited the x-axis to 120 to focus on the interesting part
plt.ylim((-5, 520))
plt.xlabel('K')
plt.ylabel('|DFT(K)|')
plt.show()
"""
Explanation: The fft routine returns an array of length 1000, which equals the number of samples. Looking at each individual element of the array, we'll notice that these are the DFT coefficients. Each has two components: the real part, which corresponds to the cosine waves, and the imaginary part, which comes from the sine waves. In general, though, we don't really care whether a cosine or a sine wave is present; we are only concerned with which frequency pattern has a higher correlation with our original signal. This can be assessed by considering the absolute values of these coefficients.
End of explanation
"""
t = np.linspace(0, sample_freq, len(fft_result))
plt.plot(t, np.abs(fft_result))
plt.xlim((-1, 15))
plt.ylim((-5, 520))
plt.xlabel('Frequency (Hz)')
plt.ylabel('|DFT(K)|')
plt.show()
"""
Explanation: If we plot the absolute values of the fft result, we can clearly see spikes at K = 0, 5, 20 and 100 in the graph above. However, we are often more interested in the energy of each frequency. Frequency Resolution is the distance in Hz between two adjacent data points in the DFT, which is defined as:
\begin{align}
\Delta f = \frac{f_s}{N}
\end{align}
Where $f_s$ is the sampling rate and $N$ is the number of data points. The denominator can be expressed in terms of the sampling rate and time, $N = f_s \cdot t$, so that $\Delta f = 1/t$. Looking closely at the formula, it tells us that the only thing that improves frequency resolution is a longer sampling time.
In our case, the sample_duration we specified above was 10 seconds, so the frequencies corresponding to these K values are 0 Hz, 0.5 Hz, 2 Hz and 10 Hz respectively (remember that these were exactly the frequency components used to build our dummy signal). Based on the graph depicted below, we can see that by passing our signal through a DFT, we were able to retrieve its underlying frequency information.
End of explanation
"""
def dft(x):
"""Compute the Discrete Fourier Transform of the 1d ndarray x."""
N = x.size
n = np.arange(N)
k = n.reshape((N, 1))
# complex number in python are denoted by the j symbol,
# instead of i that we're showing in the formula
e = np.exp(-2j * np.pi * k * n / N)
return np.dot(e, x)
# apply dft to our original signal and confirm
# the results looks the same
dft_result = dft(signal)
print('result matches:', np.allclose(dft_result, fft_result))
plt.plot(np.abs(dft_result))
plt.xlim((-5, 120))
plt.ylim((-5, 520))
plt.xlabel('K')
plt.ylabel('|DFT(K)|')
plt.show()
"""
Explanation: Fast Fourier Transformation (FFT)
Recall that the formula for Discrete Fourier Transformation was:
\begin{align}
X_k = \sum_{n=0}^{N-1} x_n \cdot e^{ -\frac{n}{N} 2\pi k \mathrm{i} }
\end{align}
Since we now know that it's computing the dot product between the original signal and a cycle component at every frequency, we can implement this ourselves.
End of explanation
"""
%timeit dft(signal)
%timeit np.fft.fft(signal)
"""
Explanation: However, if we compare the timing of our simplistic implementation against the one from numpy, we can see a dramatic difference.
End of explanation
"""
def fft(x):
N = x.shape[0]
if N % 2 > 0:
raise ValueError('size of x must be a power of 2')
elif N <= 32: # this cutoff should be enough to start using the non-recursive version
return dft(x)
else:
fft_even = fft(x[0::2])
fft_odd = fft(x[1::2])
factor = np.exp(-2j * np.pi * np.arange(N) / N)
return np.concatenate([fft_even + factor[:N // 2] * fft_odd,
fft_even + factor[N // 2:] * fft_odd])
# here, we assume the input data length is a power of two;
# if it isn't, we could zero-pad the input signal
x = np.random.random(1024)
np.allclose(fft(x), np.fft.fft(x))
%timeit dft(x)
%timeit fft(x)
%timeit np.fft.fft(x)
"""
Explanation: Leaving aside the fact that one version is implemented in Python with numpy while the other is most likely implemented in optimized C++, the time difference actually comes from the fact that, in practice, people use a more optimized version of the Fourier transform called the Fast Fourier Transform (how unexpected ...). The algorithm accomplishes a significant speedup by exploiting a symmetry property: if we can devise an algorithm that decomposes a 1024-point DFT into two 512-point DFTs, then we essentially halve our computational cost. Let's take a look at how we can achieve this with an example of 8 data points.
\begin{align}
X_k = x_0 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 0 } + x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } + \dots + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 7 }
\end{align}
Our goal is to examine the possibility of rewriting this eight-point DFT in terms of two DFTs of smaller length. Let's first examine choosing all the terms with an even sample index, i.e. $x_0$, $x_2$, $x_4$, and $x_6$. Giving us:
\begin{align}
G_k &= x_0 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 0 } + x_2 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 2 } + x_4 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 4 } + x_6 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 6 } \
&= x_0 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 0 } + x_2 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 1 } + x_4 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 2 } + x_6 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 3 }
\end{align}
After plugging the values for the even sample index and simplifying the fractions in the complex exponentials, we can observe that our $G_k$ is a 4 samples DFT with $x_0$, $x_2$, $x_4$, $x_6$ as our input signal. Now that we've shown that we can decompose the even index samples, let's see if we can simplify the remaining terms, the odd-index samples, are given by:
\begin{align}
Q_k &= x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } + x_3 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 3 } + x_5 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 5 } + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 7 } \
&= e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot \big( x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 0 } + x_3 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 2 } + x_5 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 4 } + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 6 } \big) \
&= e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot \big( x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 0 } + x_3 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 1 } + x_5 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 2 } + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 3 } \big) \
&= e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot H_k
\end{align}
After the derivation, we can see our $Q_k$ is obtained by multiplying $e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 }$ by the four point DFT with the odd index samples of $x_1$, $x_3$, $x_5$, $x_7$, which we'll denote as $H_k$. Hence, we have achieved the goal of decomposing an eight-point DFT into two four-point ones:
\begin{align}
X_k &= G_k + e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot H_k
\end{align}
We have only rearranged the terms a bit so far; next we'll introduce a symmetry trick that allows us to compute each sub-result only once and save computational cost.
The question we'll ask ourselves is: what is the value of $X_{N+k}$? From the definition of the DFT:
\begin{align}
X_{N + k} &= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~(N + k)~n~/~N}\
&= \sum_{n=0}^{N-1} x_n \cdot e^{- i~2\pi~n} \cdot e^{-i~2\pi~k~n~/~N}\
&= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N}
\end{align}
Here we've used the property that $e^{2\pi \mathrm{i} n} = 1$ for any integer $n$: since $e^{2\pi \mathrm{i}}$ corresponds to going one full circle, raising it to any integer power $n$ just means spinning around $n$ full circles. The last line shows a nice symmetry property of the DFT: $X_{N+k}=X_k$. This means that when we break our eight-point DFT into two four-point DFTs, the symmetry allows us to re-use the sub-results for both $X_k$ and $X_{k + 4}$ and significantly reduce the number of calculations:
\begin{align}
X_{k + 4} &= G_{k + 4} + e^{ -\mathrm{i} \frac{2\pi}{8} (k + 4) ~\times~ 1 } \cdot H_{k + 4} \
&= G_k + e^{ -\mathrm{i} \frac{2\pi}{8} (k + 4) ~\times~ 1 } \cdot H_k
\end{align}
We saw that the starting point of the algorithm was an even DFT length $N$, which let us halve the computation by splitting it into two DFTs of length $N/2$. Following the same procedure, each of the $N/2$-point DFTs can again be decomposed into two $N/4$-point DFTs, and so on. This recursion turns the original $\mathcal{O}[N^2]$ DFT computation into an $\mathcal{O}[N\log N]$ algorithm.
End of explanation
"""
|
rvernagus/data-science-notebooks | Data Science From Scratch/7 - Hypothesis And Inference.ipynb | mit
def normal_approximation_to_binomial(n, p):
"""return mu and sigma corresponding to Binomial(n, p)"""
mu = p * n
sigma = math.sqrt(p * (1 - p) * n)
return mu, sigma
normal_probability_below = normal_cdf
def normal_probability_above(lo, mu=0, sigma=1):
return 1 - normal_cdf(lo, mu, sigma)
def normal_probability_between(lo, hi, mu=0, sigma=1):
return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
def normal_probability_outside(lo, hi, mu=0, sigma=1):
return 1 - normal_probability_between(lo, hi, mu, sigma)
def normal_upper_bound(probability, mu=0, sigma=1):
"""returns the z for which P(Z <= z) = probability"""
return inverse_normal_cdf(probability, mu, sigma)
def normal_lower_bound(probability, mu=0, sigma=1):
"""returns the z for which P(Z >= z) = probability"""
return inverse_normal_cdf(1 - probability, mu, sigma)
def normal_two_sided_bounds(probability, mu=0, sigma=1):
"""returns the symmetric (about the mean) bounds
that contain the specified probability"""
tail_probability = (1 - probability) / 2
# upper bound should have tail_probability above it
upper_bound = normal_lower_bound(tail_probability, mu, sigma)
# lower bound should have tail_probability below it
lower_bound = normal_upper_bound(tail_probability, mu, sigma)
return lower_bound, upper_bound
"""
Explanation: Statistical Hypothesis Testing
Classical testing involves a null hypothesis $H_0$ that represents the default and an alternative hypothesis $H_1$ to test. Statistics helps us to determine whether $H_0$ should be considered false or not.
Example: Flipping A Coin
Null hypothesis = the coin is fair = $p = 0.5$
Alternative hypothesis = the coin is not fair = $p \not = 0.5$
To test this, we will collect $n$ samples. Each toss is a Bernoulli trial with parameter $p$, so the total number of heads $X$ is a Binomial(n, p) random variable.
End of explanation
"""
mu_0, sigma_0 = normal_approximation_to_binomial(1000, 0.5)
mu_0, sigma_0
"""
Explanation: Now we flip the coin 1,000 times and see if our null hypothesis is true. If so, $X$ will be approximately normally distributed with a mean of 500 and a standard deviation of 15.8:
End of explanation
"""
normal_two_sided_bounds(0.95, mu_0, sigma_0)
"""
Explanation: A decision must be made with respect to significance, i.e., how willing are we to accept "false positives" (type 1 errors) by rejecting $H_0$ even though it is true? This is often set to 5% or 1%. We will use 5%:
End of explanation
"""
# 95% bounds based on assumption p is 0.5
lo, hi = normal_two_sided_bounds(0.95, mu_0, sigma_0)
# actual mu and sigma based on p = 0.55
mu_1, sigma_1 = normal_approximation_to_binomial(1000, 0.55)
# a type 2 error means we fail to reject the null hypothesis
# which will happen when X is still in our original interval
type_2_probability = normal_probability_between(lo, hi, mu_1, sigma_1)
power = 1 - type_2_probability # 0.887
"""
Explanation: If $H_0$ is true, $p$ should equal 0.5. The interval above contains 95% of the probability when the null hypothesis holds, so observing a value outside it happens only 5% of the time by chance. Now we can determine the power of a test, i.e., how good the test is at avoiding "false negatives" (type 2 errors), which occur when we fail to reject $H_0$ even though it is false.
End of explanation
"""
hi = normal_upper_bound(0.95, mu_0, sigma_0)
# is 526 (< 531, since we need more probability in the upper tail)
type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability # 0.936
def two_sided_p_value(x, mu=0, sigma=1):
if x >= mu:
# if x is greater than the mean, the tail is what's greater than x
return 2 * normal_probability_above(x, mu, sigma)
else:
# if x is less than the mean, the tail is what's less than x
return 2 * normal_probability_below(x, mu, sigma)
two_sided_p_value(529.5, mu_0, sigma_0) # 0.062
extreme_value_count = 0
for _ in range(100000):
num_heads = sum(1 if random.random() < 0.5 else 0 # count # of heads
for _ in range(1000)) # in 1000 flips
if num_heads >= 530 or num_heads <= 470: # and count how often
extreme_value_count += 1 # the # is 'extreme'
print(extreme_value_count / 100000) # 0.062
two_sided_p_value(531.5, mu_0, sigma_0) # 0.0463
"""
Explanation: Run a 5% significance test to find the cutoff below which 95% of the probability lies:
End of explanation
"""
# note: p here is the unknown true probability, so this line is illustrative;
# below we substitute the estimate p_hat for p
math.sqrt(p * (1 - p) / 1000)
"""
Explanation: Make sure your data is roughly normally distributed before using normal_probability_above to compute
p-values. The annals of bad data science are filled with examples of people opining that the chance of some
observed event occurring at random is one in a million, when what they really mean is “the chance,
assuming the data is distributed normally,” which is pretty meaningless if the data isn’t.
There are various statistical tests for normality, but even plotting the data is a good start.
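One crude plotting-free sanity check (a sketch, not from the book) is to compare the fraction of the data within one and two standard deviations of the mean against the roughly 68% and 95% expected under normality:

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(10000)]

mu = statistics.mean(data)
sigma = statistics.pstdev(data)

within_1 = sum(abs(x - mu) <= sigma for x in data) / len(data)
within_2 = sum(abs(x - mu) <= 2 * sigma for x in data) / len(data)

# roughly 0.68 and 0.95 for normally distributed data
print(within_1, within_2)
```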
Confidence Intervals
We can construct a confidence interval around an observed value of a parameter. We can do this for our assumption of an unfair coin (biased towards heads in that we observed 525 of 1,000 flips giving heads):
End of explanation
"""
p_hat = 525 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)
sigma
"""
Explanation: Not knowing $p$ we use our estimate:
End of explanation
"""
normal_two_sided_bounds(0.95, mu, sigma)
"""
Explanation: Assuming a normal distribution, we conclude that we are 95% confident that the interval below includes the true $p$:
End of explanation
"""
p_hat = 540 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000) # 0.0158
normal_two_sided_bounds(0.95, mu, sigma) # [0.5091, 0.5709]
"""
Explanation: 0.5 falls within our interval so we do not conclude that the coin is unfair.
What if we observe 540 heads though?
End of explanation
"""
def run_experiment():
"""flip a fair coin 1000 times, True = heads, False = tails"""
return [random.random() < 0.5 for _ in range(1000)]
def reject_fairness(experiment):
"""using the 5% significance levels"""
num_heads = len([flip for flip in experiment if flip])
return num_heads < 469 or num_heads > 531
random.seed(0)
experiments = [run_experiment() for _ in range(1000)]
num_rejections = len([experiment for experiment in experiments if reject_fairness(experiment)])
num_rejections # 46
"""
Explanation: In this scenario, 0.5 falls outside of our interval so the "fair coin" hypothesis is not confirmed.
P-hacking
P-hacking involves using various "hacks" to get a $p$ value to go below 0.05: creating a superfluous number of hypotheses, selectively removing outliers, etc.
End of explanation
"""
def estimated_parameters(N, n):
p = n / N
sigma = math.sqrt(p * (1 - p) / N)
return p, sigma
def a_b_test_statistic(N_A, n_A, N_B, n_B):
p_A, sigma_A = estimated_parameters(N_A, n_A)
p_B, sigma_B = estimated_parameters(N_B, n_B)
return (p_B - p_A) / math.sqrt(sigma_A ** 2 + sigma_B ** 2)
z = a_b_test_statistic(1000, 200, 1000, 180)
z
two_sided_p_value(z)
z = a_b_test_statistic(1000, 200, 1000, 150)
two_sided_p_value(z)
"""
Explanation: Valid inferences come from a priori hypotheses (hypotheses created before collecting any data) and data cleansing without reference to the hypotheses.
Example: Running An A/B Test
End of explanation
"""
def B(alpha, beta):
"""a normalizing constant so that the total probability is 1"""
return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
def beta_pdf(x, alpha, beta):
if x < 0 or x > 1: # no weight outside of [0, 1]
return 0
return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B(alpha, beta)
"""
Explanation: Bayesian Inference
In Bayesian Inference we start with a prior distribution for the parameters and then use the actual observations to get the posterior distribution of the same parameters. So instead of judging the probability of hypotheses, we make probability judgments about the parameters.
We will use the Beta distribution, which places all of its probability on values between 0 and 1, as a prior for the unknown parameter.
End of explanation
"""
|
tbphu/fachkurs_bachelor | tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb | mit
Repressilator = urllib2.urlopen('http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt').read()
"""
Explanation: RoadRunner Methods
Querying an Antimony model from a model database:
Use urllib2 to download the Antimony model of the "Repressilator". Use the urllib2 methods urlopen() and read() for this.
The URL for the Repressilator is:
http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt
Elowitz, M. B., & Leibler, S. (2000). A synthetic oscillatory network of transcriptional regulators. Nature, 403(6767), 335-338.
End of explanation
"""
rr = te.loada(Repressilator)
"""
Explanation: Create an instance of roadrunner, loading the Repressilator as the model at the same time. Use tellurium's loada() for this.
End of explanation
"""
print rr.getAntimony()
print rr.getSBML()
"""
Explanation: In the following part we want to try out some of the methods of tellurium's roadrunner.
To start, display the model as Antimony or SBML. You can do this with getAntimony() or getSBML().
End of explanation
"""
rr = te.loada(Repressilator)
print rr.getIntegrator()
"""
Explanation: Solver Methods
Caution: Although resetToOrigin() resets the model to its original state, solver-specific settings are retained. It is therefore best to always use te.loada() as a complete reset!
With getIntegrator() you can display the solver and its current settings.
End of explanation
"""
rr = te.loada(Repressilator)
rr.setIntegrator('rk45')
print rr.getIntegrator()
"""
Explanation: Change the solver in use from 'CVODE' to Runge-Kutta 'rk45' and display the settings again.
Use setIntegrator() and getIntegrator() for this.
What do you notice?
End of explanation
"""
rr = te.loada(Repressilator)
rr.simulate(0,1000,1000)
rr.plot()
"""
Explanation: Simulate the Repressilator from 0 s to 1000 s and plot the results for different values of the steps argument in the simulate method (e.g. steps = 10 or 10000). What does the steps argument do?
End of explanation
"""
rr = te.loada(Repressilator)
rr.getIntegrator().setValue('relative_tolerance',0.0000001)
rr.getIntegrator().setValue('relative_tolerance',1)
rr.simulate(0,1000,1000)
rr.plot()
"""
Explanation: Keep using 'CVODE' and vary the solver parameter 'relative_tolerance' (e.g. 1 or 10).
Use steps = 10000 in simulate().
What do you notice?
Hint - the required method is roadrunner.getIntegrator().setValue().
End of explanation
"""
rr = te.loada(Repressilator)
print type(rr)
print type(rr.model)
"""
Explanation: The ODE model as an object in Python
Above we saw that tellurium creates an instance of RoadRunner when a model is read in.
In addition, the underlying model itself is accessible. Via .model there are additional methods for manipulating the actual model:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
fig_phase = plt.figure(figsize=(5,5))
rr = te.loada(Repressilator)
for l,i in enumerate([1.0,1.7,3.0,10.]):
fig_phase.add_subplot(2,2,l+1)
rr.n = i
rr.reset()
result = rr.simulate(0,500,500,selections=['time','X','PX'])
plt.plot(result['X'],result['PX'],label='n = %s' %i)
plt.xlabel('X')
plt.ylabel('PX')
plt.legend()
plt.tight_layout()
fig_timecourse= plt.figure(figsize=(5,5))
rr = te.loada(Repressilator)
for l,i in enumerate([1.0,1.7,3.0,10.]):
rr.n = i
rr.reset()
result = rr.simulate(0,500,500,selections=['time','X','PX'])
plt.plot(result['time'],result['PX'],label='PX; n = %s' %i)
plt.xlabel('time')
plt.ylabel('Species amounts')
plt.legend()
plt.tight_layout()
"""
Explanation: Exercise 1 - Parameter scan:
A) Look at the implementation of the 'Repressilator' model. Which parameters does it have?
B) Create a parameter scan that changes the value of the parameter named 'n' in the Repressilator.
(For example n=1, n=2, n=3, ...)
Simulate the model for each chosen 'n'.
Then answer the following questions:
a) What purpose does 'n' serve in the model with respect to the reaction in which it appears?
b) In contrast, what effect does 'n' have on the model's behaviour?
c) Can you find a value of 'n' at which the model's behaviour changes qualitatively?
C) Visualize the simulations. Which kind of plot is well suited to presenting the model simulation? There are several suitable options, but limit the number of graphs per plot (e.g. pick one species and plot it).
Hints:
Use the Python notebook's autocompletion as well as the official RoadRunner documentation to find the methods needed to implement a parameter scan. Of course, you can also use the notebook from the Tellurium introduction for help.
Keep in mind that the model may need one or more resets. Think about where in your implementation, and which reset method, you should ideally use.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
rr = te.loada(Repressilator)
print rr.model.getFloatingSpeciesInitAmountIds()
print rr.model.getFloatingSpeciesInitAmounts()
for l,i in enumerate([1,5,10,20]):
    # a selection of variants (there are more possibilities...)
    # Variant 1 - wrong
    #rr.Y = i
    # Variant 2 - wrong
    #rr.Y = i
    #rr.reset()
    # Variant 3 - correct
    rr.model["init(Y)"] = i
rr.reset()
result = rr.simulate(0,10,1000,selections=['Y','PY'])
#plt.plot(result[:,0],result['PY'],label='n = %s' %i)
plt.plot(result['Y'],label='initial Y = %s' %i)
plt.xlabel('time')
plt.ylabel('Species in amounts')
plt.axhline(y=i,linestyle = ':',color='black')
plt.legend()
"""
Explanation: Exercise 2 - Initial-value scan:
Create a "scan" that changes the initial value of the species Y.
The model behaviour itself is less interesting here.
Instead, make sure to place the resets such that 'Y' actually starts the simulation at the value you set.
End of explanation
"""
|
dipanjanS/text-analytics-with-python | New-Second-Edition/Ch03 - Processing and Understanding Text/Ch03c - BONUS - Text Parsing with Stanford CoreNLP.ipynb | apache-2.0
# set java path
import os
java_path = r'C:\Program Files\Java\jre1.8.0_192\bin\java.exe'
os.environ['JAVAHOME'] = java_path
from nltk.parse.stanford import StanfordParser
scp = StanfordParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
scp
sentence = 'This NLP Workshop is being organized by Analytics Vidhya as part of the DataHack Summit 2018'
sentence
result = list(scp.raw_parse(sentence))[0]
print(result)
os.environ['PATH'] = os.environ['PATH']+r';C:\Program Files\gs\gs9.25\bin'
result
result.pretty_print()
"""
Explanation: Constituency Parsing with Stanford NLP
Source: https://towardsdatascience.com/a-practitioners-guide-to-natural-language-processing-part-i-processing-understanding-text-9f4abfd13e72
Constituent-based grammars are used to analyze and determine the constituents of a sentence. These grammars can be used to model or represent the internal structure of sentences in terms of a hierarchically ordered structure of their constituents. Each and every word usually belongs to a specific lexical category in the case and forms the head word of different phrases. These phrases are formed based on rules called phrase structure rules.
Phrase structure rules form the core of constituency grammars, because they talk about syntax and rules that govern the hierarchy and ordering of the various constituents in the sentences. These rules cater to two things primarily.
They determine what words are used to construct the phrases or constituents.
They determine how we need to order these constituents together.
The generic representation of a phrase structure rule is S → AB , which depicts that the structure S consists of constituents A and B , and the ordering is A followed by B. While there are several rules (refer to Chapter 1, Page 19: Text Analytics with Python, if you want to dive deeper), the most important rule describes how to divide a sentence or a clause.
The phrase structure rule denotes a binary division for a sentence or a clause as S → NP VP where S is the sentence or clause, and it is divided into the subject, denoted by the noun phrase (NP) and the predicate, denoted by the verb phrase (VP).
A constituency parser can be built based on such grammars/rules, which are usually collectively available as context-free grammar (CFG) or phrase-structured grammar. The parser will process input sentences according to these rules, and help in building a parse tree.
We will be using nltk and the StanfordParser here to generate parse trees.
Prerequisites: Download the official Stanford Parser from here, which seems to work quite well. You can try out a later version by going to this website and checking the Release History section. After downloading, unzip it to a known location in your filesystem. Once done, you are now ready to use the parser from nltk , which we will be exploring soon.
The Stanford parser generally uses a PCFG (probabilistic context-free grammar) parser. A PCFG is a context-free grammar that associates a probability with each of its production rules. The probability of a parse tree generated from a PCFG is simply the product of the individual probabilities of the productions used to generate it.
You might need to download and install Java in case you don't have it already.
End of explanation
"""
from nltk.parse import CoreNLPParser
cnp = CoreNLPParser()
cnp
result = list(cnp.raw_parse(sentence))[0]
print(result)
result
result.pretty_print()
"""
Explanation: We can see the nested hierarchical structure of the constituents in the preceding output as compared to the flat structure in shallow parsing. Refer to the Penn Treebank reference as needed to lookup other tags.
Constituency Parsing with Stanford CoreNLP
You may have seen in the above messages that they are deprecating the old Stanford Parsers in favor of the more active Stanford Core NLP Project. It might even get removed after nltk version 3.4 so best to stay updated.
Details: https://github.com/nltk/nltk/issues/1839
Step by Step Tutorial here: https://github.com/nltk/nltk/wiki/Stanford-CoreNLP-API-in-NLTK
Sadly a lot of things have changed in the process so we need to do some extra effort to make it work!
Get CoreNLP from here
After you download, go to the folder and spin up a terminal and start the Core NLP Server locally
E:\> java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000
If it runs successfully you should see the following messages on the terminal
E:\stanford\stanford-corenlp-full-2018-02-27>java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000
[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---
[main] INFO CoreNLP - setting default constituency parser
[main] INFO CoreNLP - warning: cannot find edu/stanford/nlp/models/srparser/englishSR.ser.gz
[main] INFO CoreNLP - using: edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz instead
[main] INFO CoreNLP - to use shift reduce parser download English models jar from:
[main] INFO CoreNLP - http://stanfordnlp.github.io/CoreNLP/download.html
[main] INFO CoreNLP - Threads: 4
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - No tokenizer type provided. Defaulting to PTBTokenizer.
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[main] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.4 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.9 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [2.0 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.8 sec].
[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
[main] INFO edu.stanford.nlp.time.TimeExpressionExtractorImpl - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 580641 unique entries out of 581790 from edu/stanford/nlp/models/kbp/regexner_caseless.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 4857 unique entries out of 4868 from edu/stanford/nlp/models/kbp/regexner_cased.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 585498 unique entries from 2 files
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[main] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [4.6 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
[main] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Loading depparse model: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...
[main] INFO edu.stanford.nlp.parser.nndep.Classifier - PreComputed 99996, Elapsed Time: 22.43 (s)
[main] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Initializing dependency parser ... done [24.4 sec].
[main] INFO CoreNLP - Starting server...
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
End of explanation
"""
import spacy
nlp = spacy.load('en', parse=False, tag=False, entity=False)
dependency_pattern = '{left}<---{word}[{w_type}]--->{right}\n--------'
sentence_nlp = nlp(sentence)
for token in sentence_nlp:
print(dependency_pattern.format(word=token.orth_,
w_type=token.dep_,
left=[t.orth_
for t
in token.lefts],
right=[t.orth_
for t
in token.rights]))
from spacy import displacy
displacy.render(sentence_nlp, jupyter=True,
options={'distance': 110,
'arrow_stroke': 2,
'arrow_width': 8})
"""
Explanation: Dependency Parsing with Spacy
In dependency parsing, we try to use dependency-based grammars to analyze and infer both the structure and the semantic dependencies and relationships between tokens in a sentence. The basic principle behind a dependency grammar is that in any sentence in the language, all words except one have some relationship with, or dependency on, other words in the sentence. The word that has no dependency is called the root of the sentence; in most cases the verb is taken as the root. All the other words are directly or indirectly linked to the root verb using links, which are the dependencies.
Considering our sentence “The brown fox is quick and he is jumping over the lazy dog” , if we wanted to draw the dependency syntax tree for this, we would have the following structure.
These dependency relationships each have their own meaning and are a part of a list of universal dependency types. This is discussed in an original paper, Universal Stanford Dependencies: A Cross-Linguistic Typology by de Marneffe et al, 2014. You can check out the exhaustive list of dependency types and their meanings here.
If we observe some of these dependencies, it is not too hard to understand them.
The dependency tag det is pretty intuitive — it denotes the determiner relationship between a nominal head and the determiner. Usually, the word with POS tag DET will also have the det dependency tag relation. Examples include fox → the and dog → the.
The dependency tag amod stands for adjectival modifier and denotes any adjective that modifies the meaning of a noun. Examples include fox → brown and dog → lazy.
The dependency tag nsubj stands for an entity that acts as a subject or agent in a clause. Examples include is → fox and jumping → he.
The dependencies cc and conj have more to do with linkages between words connected by coordinating conjunctions. Examples include is → and and is → jumping.
The dependency tag aux indicates the auxiliary or secondary verb in the clause. Example: jumping → is.
The dependency tag acomp stands for adjective complement and acts as the complement or object to a verb in the sentence. Example: is → quick.
The dependency tag prep denotes a prepositional modifier, which usually modifies the meaning of a noun, verb, adjective, or preposition. Usually, this representation is used for prepositions having a noun or noun phrase complement. Example: jumping → over.
The dependency tag pobj is used to denote the object of a preposition. This is usually the head of a noun phrase following a preposition in the sentence. Example: over → dog.
spaCy has two types of English dependency parsers depending on which language model you use; you can find more details here. Based on the language model, you can use the Universal Dependencies scheme or the CLEAR Style dependency scheme, which is also now available in NLP4J.
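The core invariant described above, that every word except the root depends on exactly one head, can be illustrated without any NLP library. The sketch below hand-encodes the arcs of the example sentence as (word, relation, head index) triples using the relations listed above; the encoding and helper names are just for illustration, not parser output:

```python
# Hand-encoded parse of "The brown fox is quick and he is jumping over the lazy dog",
# following the relations discussed above. Each token is (word, relation, head_index);
# head_index -1 marks the root.
parse = [
    ("The", "det", 2), ("brown", "amod", 2), ("fox", "nsubj", 3),
    ("is", "ROOT", -1), ("quick", "acomp", 3), ("and", "cc", 3),
    ("he", "nsubj", 8), ("is", "aux", 8), ("jumping", "conj", 3),
    ("over", "prep", 8), ("the", "det", 12), ("lazy", "amod", 12),
    ("dog", "pobj", 9),
]

def root(parse):
    """Return the one word with no head -- the root of the sentence."""
    return next(word for word, rel, head in parse if head == -1)

def dependents(parse, head_index):
    """Return the words directly depending on the token at head_index."""
    return [word for word, rel, head in parse if head == head_index]

print(root(parse))            # the main verb "is"
print(dependents(parse, 8))   # the words hanging off "jumping"
```

Note that finding the root is just a lookup for the headless token, while walking any token's dependents recursively would reproduce the whole tree.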
End of explanation
"""
from nltk.parse.stanford import StanfordDependencyParser
sdp = StanfordDependencyParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
sdp
result = list(sdp.raw_parse(sentence))[0]
# print the dependency tree
print(result.tree())
result.tree()
result
"""
Explanation: Dependency Parsing with Stanford NLP
End of explanation
"""
from nltk.parse.corenlp import CoreNLPDependencyParser
dep_parser = CoreNLPDependencyParser()
dep_parser
result = list(dep_parser.raw_parse(sentence))[0]
print(result.tree())
result.tree()
result
list(result.triples())
print(result.to_conll(4))
"""
Explanation: Dependency Parsing with Stanford Core NLP
End of explanation
"""