aamirg/athena-hacks-honeywell | modeling.ipynb | apache-2.0
import pandas as pd

data = pd.read_csv("./formatted_data.csv", header=0, index_col=False)
data.head()
"""
Explanation: Supervised Learning
Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples. Each example is a pair consisting of an input object (typically a vector) and a desired output value. A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples.
Classification - Identifying the category to which an object belongs.
Regression - Predicting a continuous-valued attribute associated with an object.
Understanding the data
This dataset contains 13910 measurements from 16 chemical sensors utilized in simulations for drift compensation in a discrimination task of 6 gases at various levels of concentrations.
The dataset comprises recordings from six distinct pure gaseous substances, namely Ammonia, Acetaldehyde, Acetone, Ethylene, Ethanol, and Toluene, each dosed at a wide variety of concentration values ranging from 5 to 1000 ppmv.
Read the csv data into a Pandas dataframe and print the first 5 rows
End of explanation
"""
drop_cols = ['Sensor_'+x+'1' for x in map(chr,range(65,81))]
drop_cols.append('Batch_No')
data = data.drop(drop_cols, axis=1)
data.head()
"""
Explanation: For each sensor the second column is the normalized form of the first column, so to avoid duplicates we drop the first column (A1,B1...P1) for each sensor.
End of explanation
"""
data.describe()
"""
Explanation: Summarize the data to better understand its distribution and decide on the appropriate preprocessing steps
End of explanation
"""
from sklearn import preprocessing
target = data['Label']
data = data.drop('Label', axis=1)
min_max_scaler = preprocessing.MinMaxScaler(feature_range=(-1,1))
data_scaled = min_max_scaler.fit_transform(data)
"""
Explanation: Data preprocessing
Dealing with missing values - Real world datasets often contain missing values, represented by blanks, NaNs, etc.
Discard rows and/or columns containing missing values at the risk of losing valuable data
Impute missing values by replacing them with the mean value of the column. An advanced way is to build a regression model to impute the missing values
Encoding categorical features - Using a label encoder helps us transform non-numerical labels to numerical labels. Another approach is Dummy encoding, where you convert an attribute by creating dummy variables, each of which represents one level of a categorical variable. Presence of a level is represented by 1 and absence is represented by 0. For every level present, one dummy variable will be created. <br> In this dataset our target labels are categorical values that have already been encoded as numerical <br>
[1: Ethanol; 2: Ethylene; 3:Ammonia; 4: Acetaldehyde; 5: Acetone; 6: Toluene]
Feature scaling - We standardize the features so that features with a larger magnitude do not dominate the model as the main predictors. Feature scaling also helps reduce the training time of models and keeps the optimization from getting stuck in local optima.
Min-Max Scaling - Involves rescaling the range of features to scale the range in [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data.
Standardizing a dataset involves rescaling the distribution of values so that the mean of observed values is 0 and the standard deviation is 1.
Scaling to unit length - the vector magnitude is used to obtain a vector of unit length. This usually means dividing each component by the Euclidean length of the vector.
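The three schemes above can be sketched in plain Python. The helper names below are hypothetical (this notebook itself uses scikit-learn's MinMaxScaler); the sketch only illustrates the arithmetic:

```python
import math

def min_max_scale(xs, lo=-1.0, hi=1.0):
    # Rescale values linearly so min(xs) maps to lo and max(xs) maps to hi.
    x_min, x_max = min(xs), max(xs)
    return [lo + (x - x_min) * (hi - lo) / (x_max - x_min) for x in xs]

def standardize(xs):
    # Rescale so the result has mean 0 and (population) standard deviation 1.
    mu = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return [(x - mu) / sd for x in xs]

def unit_length(xs):
    # Divide each component by the Euclidean norm of the vector.
    norm = math.sqrt(sum(x * x for x in xs))
    return [x / norm for x in xs]
```

For example, `min_max_scale([0.0, 5.0, 10.0])` yields `[-1.0, 0.0, 1.0]`, matching the `feature_range=(-1, 1)` used in this notebook.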
Separate the data into input and output components and perform feature scaling on the input
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_scaled, target, test_size=0.25, random_state=0)
"""
Explanation: Split the dataset into a training set and test set.
If we use the entire dataset to train our model, it will end up modeling random error/noise present in the data, and will have poor predictive performance on unseen future data. This situation is known as Overfitting. To avoid this we hold out part of the available data as a test set and use the remaining for training. Some common splits are 90/10, 80/20, 75/25.
There is a risk of overfitting on the test set as you try to optimize the hyperparameters of parametric models to achieve optimal performance. To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”. Thus training is carried out on the training set, evaluation is done on the validation set and once the parameters have been tuned, final evaluation is carried out on the "unseen" test set.
The drawback of this approach is that we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets. To solve this we use a procedure called Cross-validation which is discussed later.
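The holdout idea itself is simple to sketch in plain Python (hypothetical helper, independent of the scikit-learn train_test_split used in this notebook):

```python
import random

def holdout_split(rows, test_size=0.25, seed=0):
    # Shuffle indices reproducibly, then hold out a fraction as the test set.
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = int(round(len(rows) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return [rows[i] for i in train_idx], [rows[i] for i in test_idx]

train, test = holdout_split(list(range(100)), test_size=0.25)
```

Fixing the seed plays the same role as random_state in scikit-learn: the split is reproducible across runs.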
End of explanation
"""
from sklearn import tree
dt_classifier = tree.DecisionTreeClassifier()
dt_classifier = dt_classifier.fit(X_train, y_train)
y_pred = dt_classifier.predict(X_test)
print("Accuracy: %0.2f" % dt_classifier.score(X_test, y_test))
"""
Explanation: Binomial v/s Multinomial classification
Binomial classification problem is one where the dataset has 2 target classes in the dataset. We are dealing with a Multinomial classification problem, as we have more than 2 target classes in our dataset. To leverage binary classifiers for multinomial classification we can use one of the following strategies-
1. One-vs-All : It involves training a single classifier per class, with the samples of that class as positive samples and all other samples as negatives.
2. One-vs-One : It involves training K(K-1)/2 binary classifiers for a K-class problem; each receives the samples of a pair of classes from the original training set, and must learn to distinguish these two classes. At prediction time, a voting scheme is applied: all K(K-1)/2 classifiers are applied to an unseen sample and the class that receives the highest number of "+1" predictions is predicted by the combined classifier.
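The classifier counts implied by the two strategies can be checked directly for the six gas classes in this dataset:

```python
from itertools import combinations

K = 6  # number of gas classes in this dataset

# One-vs-All trains one binary classifier per class.
one_vs_all = K

# One-vs-One trains one classifier per unordered pair of classes: K*(K-1)/2.
one_vs_one = len(list(combinations(range(K), 2)))
```

For K = 6 this gives 6 one-vs-all classifiers versus 15 one-vs-one classifiers.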
Training a model
Decision Tree -
Decision Trees are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable based on several input variables. It is a flow-chart-like structure, where each internal (non-leaf) node denotes a test on an attribute, each branch represents the outcome of a test, and each leaf (or terminal) node holds a class label. The topmost node in a tree is the root node.
Non-parametric models (can) become more and more complex with an increasing amount of data.
End of explanation
"""
from sklearn.metrics import confusion_matrix
import numpy as np
import matplotlib.pyplot as plt
import itertools
%matplotlib inline
def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j], horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
return
#plt.figure()
plot_confusion_matrix(confusion_matrix(y_test, y_pred), classes=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"],
title='Confusion matrix')
"""
Explanation: In the above code snippet we fit a decision tree to our dataset, used it to make predictions on our test set, and calculated its accuracy as the fraction of correct predictions out of all predictions made. Accuracy is a starting point but is not a sufficient measure for evaluating a model's predictive power, due to a phenomenon known as the Accuracy Paradox: it yields misleading results if the data set is unbalanced.
Model evaluation metrics
A clean and unambiguous way to visualize the performance of a classifier is to use a Confusion matrix
|                         | Predicted class - Positive | Predicted class - Negative |
|-------------------------|----------------------------|----------------------------|
| Actual class - Positive | True Positive (TP)         | False Negative (FN)        |
| Actual class - Negative | False Positive (FP)        | True Negative (TN)         |
True Positives (TP): number of positive examples, labeled as such.
False Positives (FP): number of negative examples, labeled as positive.
True Negatives (TN): number of negative examples, labeled as such.
False Negatives (FN): number of positive examples, labeled as negative.
We use these values to calculate Precision and Recall-
Precision answers the following question: out of all the examples the classifier labeled as positive, what fraction were actually positive?
$$Precision = \frac{TP}{TP + FP}$$
Recall answers: out of all the positive examples there were, what fraction did the classifier pick up? It is calculated as -
$$Recall = \frac{TP}{TP + FN}$$
The harmonic mean of Precision and Recall is known as the F1 Score. It conveys the balance between the precision and the recall.
$$ F_1\ \textrm{score} = \frac{2 \times Precision \times Recall}{Precision + Recall}$$
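These formulas can be computed directly from confusion-matrix counts; a minimal sketch with made-up counts (the helper name is hypothetical — classification_report below does this per class):

```python
def precision_recall_f1(tp, fp, fn):
    # Compute precision, recall and F1 from confusion-matrix counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
```

With 8 true positives, 2 false positives and 2 false negatives, precision, recall and F1 all come out to 0.8.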
Let's visualize the confusion matrix for the decision tree. The scikit-learn method just returns a nested array without any labels, so we plot it for easier interpretation
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred, target_names=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"]))
"""
Explanation: We use scikit-learn's classification_report to compute Precision, Recall and F-1 score for each class.
End of explanation
"""
from sklearn.model_selection import cross_val_score
def cv_score(clf,k):
f1_scores = cross_val_score(clf, data_scaled, target, cv=k, scoring='f1_macro')
print(f1_scores)
print("F1 score: %0.2f (+/- %0.2f)" % (f1_scores.mean(), f1_scores.std() * 2))
return
"""
Explanation: Cross-Validation
Cross validation is a method for estimating the prediction accuracy of a model on an unseen dataset without using a validation set. Instead of just holding out one part of the data to train on, you hold out different parts. For each part, you train on the rest, and evaluate the set you held out. Now you have effectively used all of your data for testing & training, without testing on data you trained on.
The different methods are -
k-fold CV - The training set is split into k smaller sets and the model is trained using k-1 folds. The resulting model is validated on the remaining part of the data
Leave One out - Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different test sets.
Leave P Out - Similar to Leave One out as it creates all the possible training/test sets by removing p samples from the complete set.
Random Shuffle & Split - It will generate a user defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets.
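The index bookkeeping behind k-fold CV can be sketched in plain Python (hypothetical helper; scikit-learn's KFold does this, with shuffling options on top):

```python
def k_fold_indices(n, k):
    # Split range(n) into k contiguous folds; yield (train_idx, test_idx) pairs.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        folds.append((train, test))
        start += size
    return folds

splits = k_fold_indices(10, 5)
```

Each sample appears in exactly one test fold, so every point is used for both training and evaluation across the k rounds.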
End of explanation
"""
cv_score(dt_classifier,10)
"""
Explanation: By default, the score computed at each CV iteration is the score method of the estimator.
It is possible to change this by using the scoring parameter
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
rf_classifier = RandomForestClassifier(n_estimators=5)
#rf_classifier = rf_classifier.fit(X_train, y_train)
#y_pred_rf = classifier.predict(X_test)
cv_score(rf_classifier,10)
"""
Explanation: Ensemble learning
Ensemble methods are a divide-and-conquer approach used to improve performance. The main principle behind ensemble methods is that a group of “weak learners” can come together to form a “strong learner”. Ensemble learning methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. One such method is Random Forests.
Random Forests
Decision trees are a popular & easy to interpret method, but trees that are grown very deep tend to learn highly irregular patterns (noise). They tend to overfit their training sets, i.e. they have low bias but very high variance.
Random Forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance. This comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance of the final model, as the individual decision trees are less correlated.
So how are the trees different? Well,
1. We used random samples of the observations to train them (they've each seen only part of the data), and
2. We used a subset of the features for each tree.
End of explanation
"""
#plot_confusion_matrix(confusion_matrix(y_test, y_pred_rf), classes=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"],
# title='Confusion matrix')
#print classification_report(y_test, y_pred_rf, target_names=["Ethanol", "Ethylene", "Ammonia", "Acetaldehyde", "Acetone", "Toluene"])
"""
Explanation: Uncomment the following snippets of code to view the confusion matrix and classification report for the Random forest model.
End of explanation
"""
from sklearn import svm
svm_classifier = svm.SVC(C=1.0, kernel='rbf', gamma='auto', cache_size=9000, decision_function_shape = 'ovr')
cv_score(svm_classifier,10)
"""
Explanation: Bias - Variance Trade off
The bias–variance tradeoff is the problem of simultaneously minimizing two sources of error that prevent supervised learning algorithms from generalizing beyond their training set.
Bias : error from erroneous assumptions in the learning algorithm. High bias can cause underfitting : algorithm misses the relevant relations between features and target outputs.
Variance : error from sensitivity to small fluctuations in the training set. High variance can cause overfitting: modeling the random noise in the training data, rather than the intended outputs.
Support Vector Machines
An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall. It is a parametric learner, hence we have a finite number of parameters.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedShuffleSplit
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
grid = GridSearchCV(svm.SVC(), param_grid=param_grid, cv=cv)
grid.fit(data, target)
print("The best parameters are %s with a score of %0.2f"
% (grid.best_params_, grid.best_score_))
"""
Explanation: Hyperparameter optimization
Hyperparameter optimization is the problem of choosing a set of hyperparameters for a learning algorithm, usually with the goal of optimizing a measure of the algorithm's performance.
Grid Search
Grid search, or a parameter sweep, is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set or evaluation on a held-out validation set.
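The sweep itself is just a Cartesian product over each grid block. A sketch that enumerates the candidate settings for the grid used in this notebook (the expand_grid helper is hypothetical; GridSearchCV does this internally):

```python
from itertools import product

param_grid = [
    {'C': [1, 10, 100, 1000], 'kernel': ['linear']},
    {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]

def expand_grid(grid):
    # Enumerate every parameter combination described by a list of grid blocks.
    combos = []
    for block in grid:
        keys = sorted(block)
        for values in product(*(block[k] for k in keys)):
            combos.append(dict(zip(keys, values)))
    return combos

candidates = expand_grid(param_grid)
```

Here the linear block contributes 4 settings and the rbf block 4 × 2 = 8, so 12 models are trained per CV split.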
End of explanation
"""
adit-chandra/tensorflow | tensorflow/lite/experimental/examples/lstm/TensorFlowLite_LSTM_Keras_Tutorial.ipynb | apache-2.0
!pip install tf-nightly
"""
Explanation: Overview
This codelab demonstrates how to build an LSTM model for MNIST recognition using Keras, and how to convert the model to TensorFlow Lite.
End of explanation
"""
# This is important!
import os
os.environ['TF_ENABLE_CONTROL_FLOW_V2'] = '1'
import tensorflow as tf
import numpy as np
"""
Explanation: Prerequisites
We're going to set the environment variable TF_ENABLE_CONTROL_FLOW_V2, since it is required for TensorFlow Lite control flow support.
End of explanation
"""
# Step 1: Build the MNIST LSTM model.
def buildLstmLayer(inputs, num_layers, num_units):
"""Build the lstm layer.
Args:
inputs: The input data.
num_layers: How many LSTM layers do we want.
num_units: The number of hidden units in the LSTM cell.
"""
lstm_cells = []
for i in range(num_layers):
lstm_cells.append(
tf.lite.experimental.nn.TFLiteLSTMCell(
num_units, forget_bias=0, name='rnn{}'.format(i)))
lstm_layers = tf.keras.layers.StackedRNNCells(lstm_cells)
# Assume the input is sized as [batch, time, input_size], then we're going
# to transpose to be time-majored.
transposed_inputs = tf.transpose(
inputs, perm=[1, 0, 2])
outputs, _ = tf.lite.experimental.nn.dynamic_rnn(
lstm_layers,
transposed_inputs,
dtype='float32',
time_major=True)
unstacked_outputs = tf.unstack(outputs, axis=0)
return unstacked_outputs[-1]
tf.reset_default_graph()
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(28, 28), name='input'),
tf.keras.layers.Lambda(buildLstmLayer, arguments={'num_layers' : 2, 'num_units' : 64}),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
"""
Explanation: Step 1 Build the MNIST LSTM model.
Note we will be using tf.lite.experimental.nn.TFLiteLSTMCell & tf.lite.experimental.nn.dynamic_rnn in the tutorial.
Also note that here we're not trying to build a model for a real world application, but only to demonstrate how to use TensorFlow Lite. You could build a much better model using CNNs.
For a more canonical LSTM codelab, please see here.
End of explanation
"""
# Step 2: Train & Evaluate the model.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Cast x_train & x_test to float32.
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
"""
Explanation: Step 2: Train & Evaluate the model.
We will train the model using MNIST data.
End of explanation
"""
# Step 3: Convert the Keras model to TensorFlow Lite model.
sess = tf.keras.backend.get_session()
input_tensor = sess.graph.get_tensor_by_name('input:0')
output_tensor = sess.graph.get_tensor_by_name('output/Softmax:0')
converter = tf.lite.TFLiteConverter.from_session(
sess, [input_tensor], [output_tensor])
tflite = converter.convert()
print('Model converted successfully!')
"""
Explanation: Step 3: Convert the Keras model to TensorFlow Lite model.
Note here: we just convert to TensorFlow Lite model as usual.
End of explanation
"""
# Step 4: Check the converted TensorFlow Lite model.
interpreter = tf.lite.Interpreter(model_content=tflite)
try:
interpreter.allocate_tensors()
except ValueError:
assert False
MINI_BATCH_SIZE = 1
correct_case = 0
for i in range(len(x_test)):
input_index = (interpreter.get_input_details()[0]['index'])
interpreter.set_tensor(input_index, x_test[i * MINI_BATCH_SIZE: (i + 1) * MINI_BATCH_SIZE])
interpreter.invoke()
output_index = (interpreter.get_output_details()[0]['index'])
result = interpreter.get_tensor(output_index)
# Reset all variables so it will not pollute other inferences.
interpreter.reset_all_variables()
# Evaluate.
prediction = np.argmax(result)
if prediction == y_test[i]:
correct_case += 1
print('TensorFlow Lite Evaluation result is {}'.format(correct_case * 1.0 / len(x_test)))
"""
Explanation: Step 4: Check the converted TensorFlow Lite model.
We're just going to load the TensorFlow Lite model and use the TensorFlow Lite python interpreter to verify the results.
End of explanation
"""
csaladenes/csaladenes.github.io | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/04.3-Density-GMM.ipynb | mit
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
"""
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Density Estimation: Gaussian Mixture Models
Here we'll explore Gaussian Mixture Models, which is an unsupervised clustering & density estimation technique.
We'll start with our standard set of initial imports
End of explanation
"""
np.random.seed(2)
a=np.random.normal(0, 2, 2000)
b=np.random.normal(5, 5, 2000)
c=np.random.normal(3, 0.5, 600)
x = np.concatenate([a,
b,
c])
ax=plt.figure().gca()
ax.hist(x, 80, density=True,color='r')
# ax.hist(c, 80, density=True,color='g')
ax.set_xlim(-10, 20);
"""
Explanation: Introducing Gaussian Mixture Models
We previously saw an example of K-Means, a clustering algorithm that is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both clustering and density estimation.
For example, imagine we have some one-dimensional data in a particular distribution:
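Such a distribution is a weighted sum of Gaussian densities. A plain-Python sketch of the mixture density (the component weights here are taken proportional to the sample sizes above — an assumption for illustration, not a fitted model):

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of the normal distribution N(mu, sigma) at x.
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    # components: list of (weight, mu, sigma) triples; weights must sum to 1.
    return sum(w * normal_pdf(x, mu, sigma) for w, mu, sigma in components)

# Weights proportional to the sample sizes used above: 2000, 2000 and 600.
total = 2000 + 2000 + 600
components = [(2000 / total, 0, 2), (2000 / total, 5, 5), (600 / total, 3, 0.5)]
```

Evaluating mixture_pdf on a grid reproduces the bumpy shape of the histogram: the narrow N(3, 0.5) component creates the sharp peak near 3.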
End of explanation
"""
from sklearn.mixture import GaussianMixture as GMM
X = x[:, np.newaxis]
clf = GMM(5, max_iter=500, random_state=3).fit(X)
xpdf = np.linspace(-10, 20, 1000)
density = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
"""
Explanation: Gaussian mixture models will allow us to approximate this density:
End of explanation
"""
clf.means_
clf.covariances_
clf.weights_
plt.hist(x, 80, density=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covariances_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
"""
Explanation: Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covariances_, and weights_ attributes:
End of explanation
"""
print(clf.bic(X))
print(clf.aic(X))
"""
Explanation: These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).
How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
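Both criteria are simple formulas in the maximized log-likelihood $\ln\hat{L}$, the number of free parameters $k$, and (for BIC) the sample size $n$: $AIC = 2k - 2\ln\hat{L}$ and $BIC = k\ln n - 2\ln\hat{L}$. A minimal sketch (hypothetical helpers; the GMM methods below compute these from the fitted model):

```python
import math

def aic(log_likelihood, n_params):
    # Akaike Information Criterion: 2k - 2 ln(L); lower is better.
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_samples):
    # Bayesian Information Criterion: k ln(n) - 2 ln(L); penalizes parameters harder.
    return n_params * math.log(n_samples) - 2 * log_likelihood
```

For n > e² ≈ 7.4 samples, the BIC penalty per parameter exceeds the AIC penalty, so BIC tends to prefer smaller models.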
End of explanation
"""
n_estimators = np.arange(1, 10)
clfs = [GMM(n, max_iter=1000).fit(X) for n in n_estimators]
bics = [clf.bic(X) for clf in clfs]
aics = [clf.aic(X) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
"""
Explanation: Let's take a look at these as a function of the number of gaussians:
End of explanation
"""
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, max_iter=500, random_state=0).fit(y[:, np.newaxis])
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(y, 80, density=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
plt.xlim(-15, 30);
"""
Explanation: It appears that for both the AIC and BIC, 4 components is preferred.
Example: GMM For Outlier Detection
GMM is what's known as a Generative Model: it's a probabilistic model from which a dataset can be generated.
One thing that generative models can be useful for is outlier detection: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
Let's take a look at this by defining a new dataset with some outliers:
End of explanation
"""
log_likelihood = np.array([clf.score_samples([[yy]]) for yy in y])
# log_likelihood = clf.score_samples(y[:, np.newaxis])[0]
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
"""
Explanation: Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y:
End of explanation
"""
set(true_outliers) - set(detected_outliers)
"""
Explanation: The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed:
End of explanation
"""
set(detected_outliers) - set(true_outliers)
"""
Explanation: And here are the non-outliers which were spuriously labeled outliers:
End of explanation
"""
from sklearn.neighbors import KernelDensity
kde = KernelDensity(0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
"""
Explanation: Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
Other Density Estimators
The other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point!
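A minimal plain-Python sketch of the idea (hypothetical kde_pdf helper; sklearn.neighbors.KernelDensity is the real implementation, with more kernels and a tree-based search):

```python
import math

def kde_pdf(x, samples, bandwidth):
    # Average of Gaussian kernels, one centered at every training point.
    norm = bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-(x - s) ** 2 / (2 * bandwidth ** 2)) / norm
               for s in samples) / len(samples)

samples = [-1.0, 0.0, 1.0]
```

With a symmetric sample, the estimated density is highest near the center and symmetric about it, just as expected.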
End of explanation
"""
Danghor/Formal-Languages | Python/Regexp-2-NFA.ipynb | gpl-2.0
class RegExp2NFA:
def __init__(self, Sigma):
self.Sigma = Sigma
self.StateCount = 0
"""
Explanation: From Regular Expressions to <span style="font-variant:small-caps;">Fsm</span>s
This notebook shows how a given regular expression $r$ can be transformed into an equivalent finite state machine.
It implements the theory outlined in Section 4.4 of the lecture notes.
The class RegExp2NFA administers two member variables:
- Sigma is the <em style="color:blue">alphabet</em>, i.e. the set of characters used.
- StateCount is a counter that is needed to create <em style="color:blue">unique</em> state names.
End of explanation
"""
def toNFA(self, r):
if r == 0:
return self.genEmptyNFA()
if r == '':
return self.genEpsilonNFA()
if isinstance(r, str) and len(r) == 1:
return self.genCharNFA(r)
if r[0] == 'cat':
return self.catenate(self.toNFA(r[1]), self.toNFA(r[2]))
if r[0] == 'or':
return self.disjunction(self.toNFA(r[1]), self.toNFA(r[2]))
if r[0] == 'star':
return self.kleene(self.toNFA(r[1]))
raise ValueError(f'{r} is not a proper regular expression.')
RegExp2NFA.toNFA = toNFA
del toNFA
"""
Explanation: The member function toNFA takes an object self of class RegExp2NFA and a regular expression r and returns a finite state machine
that accepts the same language as described by r. The regular expression is represented in Python as follows:
- The regular expression $\emptyset$ is represented as the number 0.
- The regular expression $\varepsilon$ is represented as the empty string ''.
- The regular expression $c$ that matches the character $c$ is represented by the character $c$.
- The regular expression $r_1 \cdot r_2$ is represented by the triple $\bigl(\texttt{'cat'}, \texttt{repr}(r_1), \texttt{repr}(r_2)\bigr)$.
Here, and in the following, for a given regular expression $r$ the expression $\texttt{repr}(r)$ denotes the Python representation of the regular
expressions $r$.
- The regular expression $r_1 + r_2$ is represented by the triple $\bigl(\texttt{'or'}, \texttt{repr}(r_1), \texttt{repr}(r_2)\bigr)$.
- The regular expression $r^*$ is represented by the pair $\bigl(\texttt{'star'}, \texttt{repr}(r)\bigr)$.
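As a small sanity check of this representation, here is a hypothetical pretty-printer (not part of the RegExp2NFA class) that maps it back to conventional notation, using '0' for $\emptyset$ and 'eps' for $\varepsilon$:

```python
def re_to_str(r):
    # Pretty-print the Python representation of a regular expression.
    if r == 0:
        return '0'            # the empty language
    if r == '':
        return 'eps'          # the regular expression epsilon
    if isinstance(r, str) and len(r) == 1:
        return r
    if r[0] == 'cat':
        return re_to_str(r[1]) + re_to_str(r[2])
    if r[0] == 'or':
        return '(' + re_to_str(r[1]) + '+' + re_to_str(r[2]) + ')'
    if r[0] == 'star':
        return '(' + re_to_str(r[1]) + ')*'
    raise ValueError(f'{r} is not a proper regular expression.')
```

For example, the tuple ('cat', 'a', ('star', 'b')) represents the regular expression $a \cdot b^*$ and prints as a(b)*.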
End of explanation
"""
def genEmptyNFA(self):
q0 = self.getNewState()
q1 = self.getNewState()
delta = {}
return {q0, q1}, self.Sigma, delta, q0, { q1 }
RegExp2NFA.genEmptyNFA = genEmptyNFA
del genEmptyNFA
"""
Explanation: The <span style="font-variant:small-caps;">Fsm</span> genEmptyNFA() is defined as
$$\bigl\langle { q_0, q_1 }, \Sigma, {}, q_0, { q_1 } \bigr\rangle. $$
Note that this <span style="font-variant:small-caps;">Fsm</span> has no transitions at all.
End of explanation
"""
def genEpsilonNFA(self):
q0 = self.getNewState()
q1 = self.getNewState()
delta = { (q0, ''): {q1} }
return {q0, q1}, self.Sigma, delta, q0, { q1 }
RegExp2NFA.genEpsilonNFA = genEpsilonNFA
del genEpsilonNFA
"""
Explanation: The <span style="font-variant:small-caps;">Fsm</span> genEpsilonNFA is defined as
$$ \bigl\langle { q_0, q_1 }, \Sigma,
\bigl{ \langle q_0, \varepsilon\rangle \mapsto {q_1} \bigr}, q_0, { q_1 } \bigr\rangle.
$$
End of explanation
"""
def genCharNFA(self, c):
q0 = self.getNewState()
q1 = self.getNewState()
delta = { (q0, c): {q1} }
return {q0, q1}, self.Sigma, delta, q0, { q1 }
RegExp2NFA.genCharNFA = genCharNFA
del genCharNFA
"""
Explanation: For a letter $c \in \Sigma$ the <span style="font-variant:small-caps;">Fsm</span> genCharNFA$(c)$ is defined as
$$ A(c) =
\bigl\langle { q_0, q_1 }, \Sigma,
\bigl{ \langle q_0, c \rangle \mapsto {q_1}\bigr}, q_0, { q_1 } \bigr\rangle.
$$
End of explanation
"""
def catenate(self, f1, f2):
M1, Sigma, delta1, q1, A1 = f1
M2, Sigma, delta2, q3, A2 = f2
q2, = A1
delta = delta1 | delta2
delta[q2, ''] = {q3}
return M1 | M2, Sigma, delta, q1, A2
RegExp2NFA.catenate = catenate
del catenate
"""
Explanation: Given two <span style="font-variant:small-caps;">Fsm</span>s f1 and f2, the function catenate(f1, f2)
creates an <span style="font-variant:small-caps;">Fsm</span> that recognizes a string $s$ if it can be written
in the form
$$ s = s_1s_2 $$
and $s_1$ is recognized by f1 and $s_2$ is recognized by f2.
Assume that $f_1$ and $f_2$ have the following form:
- $f_1 = \langle Q_1, \Sigma, \delta_1, q_1, { q_2 }\rangle$,
- $f_2 = \langle Q_2, \Sigma, \delta_2, q_3, { q_4 }\rangle$,
- $Q_1 \cap Q_2 = {}$.
Then $\texttt{catenate}(f_1, f_2)$ is defined as:
$$ \bigl\langle Q_1 \cup Q_2, \Sigma,
\bigl{ \langle q_2,\varepsilon\rangle \mapsto {q_3} \bigr}
\cup \delta_1 \cup \delta_2, q_1, { q_4 } \bigr\rangle.
$$
End of explanation
"""
def disjunction(self, f1, f2):
M1, Sigma, delta1, q1, A1 = f1
M2, Sigma, delta2, q2, A2 = f2
q3, = A1
q4, = A2
q0 = self.getNewState()
q5 = self.getNewState()
delta = delta1 | delta2
delta[q0, ''] = { q1, q2 }
delta[q3, ''] = { q5 }
delta[q4, ''] = { q5 }
return { q0, q5 } | M1 | M2, Sigma, delta, q0, { q5 }
RegExp2NFA.disjunction = disjunction
del disjunction
"""
Explanation: Given two <span style="font-variant:small-caps;">Fsm</span>s f1 and f2, the function disjunction(f1, f2)
creates an <span style="font-variant:small-caps;">Fsm</span> that recognizes a string $s$ if it is either
is recognized by f1 or by f2.
Assume again that the states of
$f_1$ and $f_2$ are different and that $f_1$ and $f_2$ have the following form:
- $f_1 = \langle Q_1, \Sigma, \delta_1, q_1, \{ q_3 \}\rangle$,
- $f_2 = \langle Q_2, \Sigma, \delta_2, q_2, \{ q_4 \}\rangle$,
- $Q_1 \cap Q_2 = \{\}$.
Then $\texttt{disjunction}(f_1, f_2)$ is defined as follows:
$$ \bigl\langle \{ q_0, q_5 \} \cup Q_1 \cup Q_2, \Sigma,
   \bigl\{ \langle q_0,\varepsilon\rangle \mapsto \{q_1, q_2\},
           \langle q_3,\varepsilon\rangle \mapsto \{q_5\},
           \langle q_4,\varepsilon\rangle \mapsto \{q_5\} \bigr\}
   \cup \delta_1 \cup \delta_2, q_0, \{ q_5 \} \bigr\rangle
$$
End of explanation
"""
def kleene(self, f):
M, Sigma, delta0, q1, A = f
q2, = A
q0 = self.getNewState()
q3 = self.getNewState()
delta = delta0
delta[q0, ''] = { q1, q3 }
delta[q2, ''] = { q1, q3 }
return { q0, q3 } | M, Sigma, delta, q0, { q3 }
RegExp2NFA.kleene = kleene
del kleene
"""
Explanation: Given an <span style="font-variant:small-caps;">Fsm</span> f, the function kleene(f)
creates an <span style="font-variant:small-caps;">Fsm</span> that recognizes a string $s$ if it can be written as
$$ s = s_1 s_2 \cdots s_n $$
and all $s_i$ are recognized by f. Note that $n$ might be $0$.
If f is defined as
$$ f = \langle Q, \Sigma, \delta, q_1, \{ q_2 \} \rangle,
$$
then kleene(f) is defined as follows:
$$ \bigl\langle \{ q_0, q_3 \} \cup Q, \Sigma,
   \bigl\{ \langle q_0,\varepsilon\rangle \mapsto \{q_1, q_3\},
           \langle q_2,\varepsilon\rangle \mapsto \{q_1, q_3\} \bigr\}
   \cup \delta, q_0, \{ q_3 \} \bigr\rangle.
$$
End of explanation
"""
def getNewState(self):
self.StateCount += 1
return self.StateCount
RegExp2NFA.getNewState = getNewState
del getNewState
"""
Explanation: The function getNewState returns a new number that has not yet been used as a state.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_linear_model_patterns.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Romain Trachel <trachelr@gmail.com>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import Vectorizer, get_coef
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# import a linear classifier from mne.decoding
from mne.decoding import LinearModel
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Linear classifier on sensor data with plot patterns and filters
Decoding, a.k.a MVPA or supervised machine learning applied to MEG and EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable [1]_ than the classifier filters (weight vectors).
The patterns explain how the MEG and EEG data were generated from the
discriminant neural sources which are extracted by the filters.
Note that the patterns and filters are more similar to each other in MEG data
than in EEG data, because the noise is less spatially correlated in MEG than in EEG.
References
.. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,
Blankertz, B., & Bießmann, F. (2014). On the interpretation of
weight vectors of linear models in multivariate neuroimaging.
NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.4
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(.5, 25)
events = mne.read_events(event_fname)
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
decim=4, baseline=None, preload=True)
labels = epochs.events[:, -1]
# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
"""
Explanation: Set parameters
End of explanation
"""
clf = LogisticRegression()
scaler = StandardScaler()
# create a linear model with LogisticRegression
model = LinearModel(clf)
# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)
# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):
# We fitted the linear model onto Z-scored data. To make the filters
# interpretable, we must reverse this normalization step
coef = scaler.inverse_transform([coef])[0]
# The data was vectorized to fit a single model across all time points and
# all channels. We thus reshape it:
coef = coef.reshape(len(meg_epochs.ch_names), -1)
# Plot
evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='MEG %s' % name)
"""
Explanation: Decoding in sensor space using a LogisticRegression classifier
End of explanation
"""
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]
# Define a unique pipeline to sequentially:
clf = make_pipeline(
Vectorizer(), # 1) vectorize across time and channels
StandardScaler(), # 2) normalize features across trials
LinearModel(LogisticRegression())) # 3) fits a logistic regression
clf.fit(X, y)
# Extract and plot patterns and filters
for name in ('patterns_', 'filters_'):
# The `inverse_transform` parameter will call this method on any estimator
# contained in the pipeline, in reverse order.
coef = get_coef(clf, name, inverse_transform=True)
evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='EEG %s' % name[:-1])
"""
Explanation: Let's do the same on EEG data using a scikit-learn pipeline
End of explanation
"""
|
flsantos/startup_acquisition_forecast | .ipynb_checkpoints/1_dataset_creation-checkpoint.ipynb | mit | import numpy as np
import pandas as pd
"""
Explanation: Loading Companies...
End of explanation
"""
companies = pd.read_csv('data/companies.csv')
#Having a look to the companies data structure
companies[:3]
#Let's first remove non USA companies, since they usually have a lot of missing data
companies_USA = companies[companies['country_code'] == 'USA']
#Check if there are any missing data for state_code in USA based companies
companies_USA['state_code'].unique()
#companies_USA['state_code'].value_counts()
# No nan values for state_code
#Let's maintain region and city in the dataset but probably they are not gonna be used
#companies_USA['city'].value_counts()
"""
Explanation: Let's import the data
End of explanation
"""
from operator import methodcaller
#Let's analyze category_list and probably expand it as dummy variables
#get a unique list of the categories
categories = list(companies_USA['category_list'].astype('str').unique())
#split each categori by |
categories = map(methodcaller("split", "|"), categories)
#flatten the list of sub categories
categories = [item for sublist in categories for item in sublist]
#total of 60k different categories
#categories
len(categories)
#We'll need to select the most important categories (that appears most of the times, and use Other for the rest)
companies_series = companies_USA['category_list'].astype('str')
categories_splitted_count = companies_series.str.split('|').apply(lambda x: pd.Series(x).value_counts()).sum()
#dummies
dummies = companies_series.str.get_dummies(sep='|')
########### Count of categories splitted first 50)###########
top50categories = list(categories_splitted_count.sort_values(ascending=False).index[:50])
##### Create a dataframe with the 50 top categories to be concatenated later to the complete dataframe
categories_df = dummies[top50categories]
categories_df = categories_df.add_prefix('Category_')
"""
Explanation: Converting categories to dummy variables (selecting top 50)
End of explanation
"""
#Let's start by comparing and understanding the difference between investments.csv and rounds.csv
df_investments = pd.read_csv('data/investments.csv')
df_investments = df_investments[df_investments['company_country_code'] == 'USA']
df_rounds = pd.read_csv('data/rounds.csv')
df_rounds = df_rounds[df_rounds['company_country_code'] == 'USA']
#companies_USA[companies_USA['permalink'] == '/organization/0xdata']
#df_investments[df_investments['company_permalink'] == '/organization/0xdata' ]
#df_rounds[df_rounds['company_permalink'] == '/organization/0xdata' ]
"""
Explanation: Comparing investments.csv and rounds.csv
End of explanation
"""
#df_rounds
#Prepare an aggregated rounds dataframe grouped by company and funding type
rounds_agg = df_rounds.groupby(['company_permalink', 'funding_round_type'])['raised_amount_usd'].agg({'amount': [ pd.Series.sum, pd.Series.count]})
#Get available unique funding types
funding_types = list(rounds_agg.index.levels[1])
funding_types
#Prepare the dataframe where all the dummy features for each funding type will be added (number of rounds and total sum for each type)
rounds_df = companies_USA[['permalink']]
rounds_df = rounds_df.rename(columns = {'permalink':'company_permalink'})
#Iterate over each kind of funding type, and add two new features for each into the dataframe
def add_dummy_for_funding_type(df, aggr_rounds, funding_type):
funding_df = aggr_rounds.iloc[aggr_rounds.index.get_level_values('funding_round_type') == funding_type].reset_index()
funding_df.columns = funding_df.columns.droplevel()
funding_df.columns = ['company_permalink', funding_type, funding_type+'_funding_total_usd', funding_type+'_funding_rounds']
funding_df = funding_df.drop(funding_type,1)
new_df = pd.merge(df, funding_df, on='company_permalink', how='left')
new_df = new_df.fillna(0)
return new_df
#rounds_agg was generated a few steps above
for funding_type in funding_types:
rounds_df = add_dummy_for_funding_type(rounds_df, rounds_agg, funding_type)
#remove the company_permalink variable, since it's already available in the companies dataframe
rounds_df = rounds_df.drop('company_permalink', 1)
#set rounds_df to have the same index of the other dataframes
rounds_df.index = companies_USA.index
rounds_df[:3]
"""
Explanation: The difference between investments and rounds is that investments records where the money came from: it lists which investors funded each round. rounds, on the other hand, groups and totals that information by round.
Analyzing rounds.csv
End of explanation
"""
startups_df = pd.concat([companies_USA, categories_df, rounds_df], axis=1, ignore_index=False)
startups_df[:3]
"""
Explanation: Merging 3 dataframes (companies, categories and rounds)
End of explanation
"""
startups_df.index = list(startups_df['permalink'])
startups_df = startups_df.drop('permalink', 1)
startups_df.to_csv('data/startups_1.csv')
#startups_df
"""
Explanation: Write resulting dataframe to csv file
End of explanation
"""
|
StingraySoftware/notebooks | DataQuickLook/Quicklook NuSTAR data with Stingray.ipynb | mit | %load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from stingray.powerspectrum import AveragedPowerspectrum, DynamicalPowerspectrum
from stingray.crossspectrum import AveragedCrossspectrum
from stingray.events import EventList
from stingray.lightcurve import Lightcurve
from stingray.gti import create_gti_from_condition
"""
Explanation: In this notebook, we will analyze a NuSTAR data of the black hole X-ray binary H1743-322 with Stingray. Here we assume that the user has already reduced the data with the official pipeline and ran barycorr or other tools to refer the event times to the solar system barycenter.
End of explanation
"""
evA = EventList.read('nustar_A_src.evt', 'hea')
evB = EventList.read('nustar_B_src.evt', 'hea')
"""
Explanation: Quicklook NuSTAR data with Stingray
Let us load the data from two event lists corresponding to the two detectors onboard NuSTAR. fmt='hea' indicates event data produced by HEAsoft tools or compatible (e.g. XMM-Newton).
End of explanation
"""
all_ev = evA.join(evB)
"""
Explanation: For the sake of a quicklook, let us join the two event lists
End of explanation
"""
lc = all_ev.to_lc(100)
plt.figure(figsize=(12, 7))
plt.plot(lc.time, lc.counts)
bad_time_intervals = list(zip(lc.gti[:-1, 1], lc.gti[1:, 0]))
for b in bad_time_intervals:
plt.axvspan(b[0], b[1], color='r', alpha=0.5, zorder=10)
plt.ylim([5000, 6500])
"""
Explanation: Let us calculate the light curve and plot it.
In red, we show the bad time intervals, when the satellite was not acquiring valid data due to Earth occultation, SAA passages, or other issues.
End of explanation
"""
all_ev.energy
"""
Explanation: The light curve shows some long-term variability. Let us look at the colors. First of all, let us check that the events contain the energy of each photon. This should be the case, because NuSTAR data, together with XMM and NICER, are very well understood by Stingray and the calibration is done straight away.
End of explanation
"""
new_gti = create_gti_from_condition(lc.time, lc.counts > 5200)
all_ev.gti = new_gti
evA.gti = new_gti
evB.gti = new_gti
lc.gti = new_gti
hard = (all_ev.energy > 10) & (all_ev.energy < 79)
soft = (all_ev.energy > 3) & (all_ev.energy < 5)
hard_ev = all_ev.apply_mask(hard)
soft_ev = all_ev.apply_mask(soft)
hard_lc = hard_ev.to_lc(200)
soft_lc = soft_ev.to_lc(200)
hard_lc.apply_gtis()
soft_lc.apply_gtis()
hardness_ratio = hard_lc.counts / soft_lc.counts
intensity = hard_lc.counts + soft_lc.counts
plt.figure()
plt.scatter(hardness_ratio, intensity)
plt.xlabel("Hardness")
plt.ylabel("Counts")
"""
Explanation: Other missions might have all_ev.energy set to None, in which case one needs to use all_ev.pi and express the energy through the PI channels (see the HENDRICS documentation for more advanced calibration using the rmf files).
Also, we notice that some GTIs do not catch all bad intervals (see how the light curve drops close to the GTI borders), so we now apply a more aggressive GTI filtering:
End of explanation
"""
pds = AveragedPowerspectrum.from_events(all_ev, segment_size=256, dt=0.001, norm='leahy')
plt.figure(figsize=(10,7))
plt.loglog(pds.freq, pds.power)
plt.xlabel("Frequency")
plt.ylabel("Power (Leahy)")
"""
Explanation: Despite some light curve variability, the hardness ratio seems pretty stable during the observation.
Let us now look at the power density spectrum. Notice that we are using a sampling time of 0.001 s, meaning that we will investigate the power spectrum up to 500 Hz
End of explanation
"""
cs = AveragedCrossspectrum.from_events(evA, evB, segment_size=256, dt=0.001, norm='leahy')
plt.figure(figsize=(10,7))
plt.semilogx(cs.freq, cs.power.real)
"""
Explanation: Nice Quasi-periodic oscillations there! Note that at high frequencies the white noise level increases. This is not real variability, but an effect of dead time. The easiest way to get a flat periodogram at high frequencies is using the cospectrum instead of the power density spectrum. For this, we use separately the events from the two detectors. The cospectrum calculation is slightly slower than the power spectrum.
For an accurate way to correct the power density spectrum from dead time, see the documentation of stingray.deadtime and the Frequency Amplitude Difference (FAD) correction.
End of explanation
"""
cs_reb = cs.rebin_log(0.02)
plt.figure(figsize=(10,7))
plt.loglog(cs_reb.freq, cs_reb.power.real)
plt.ylim([1e-3, None])
plt.xlabel("Frequency")
plt.ylabel("Cospectrum Power")
"""
Explanation: To improve the plot, we can rebin the data logarithmically
End of explanation
"""
|
zhuanxuhit/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True) # descending order
vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
'.':"||Period||",
',':"||Comma||",
'"':"||Quotation_Mark||",
';':"||Semicolon||",
'!':"||Exclamation_mark||",
'?':"||Question_mark||",
'(':"||Left_Parentheses||",
')':"||Right_Parentheses||",
'--':"||Dash||",
'\n':"||Return||"
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
input_ = tf.placeholder(shape=[None,None],name='input',dtype=tf.int32) # input shape = [batch_size, seq_size]
targets = tf.placeholder(shape=[None,None],name='targets',dtype=tf.int32)
learning_rate = tf.placeholder(dtype=tf.float32)
return input_, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
# Build a fresh BasicLSTMCell per layer; reusing a single cell object
# across layers ([cell] * n) raises an error in TensorFlow >= 1.1.
# Dropout could be added per layer with tf.contrib.rnn.DropoutWrapper.
lstm_layers = 5
cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layers)])
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, tf.identity(initial_state,name='initial_state')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,dtype=tf.float32)
return outputs, tf.identity(final_state,name="final_state")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed_dim = 300 # override the passed-in embed_dim with a fixed embedding size
embed = get_embed(input_data,vocab_size,embed_dim)
outputs, final_state = build_rnn(cell,embed)
# print(outputs) # Tensor("rnn/transpose:0", shape=(128, 5, 256), dtype=float32)
# print(final_state) # Tensor("final_state:0", shape=(2, 2, ?, 256), dtype=float32)
# !!! it is really import to have a good weigh init
logits = tf.contrib.layers.fully_connected(outputs,vocab_size,activation_fn=None, #tf.nn.relu
weights_initializer = tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
# print(logits) # Tensor("fully_connected/Relu:0", shape=(128, 5, 27), dtype=float32)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
# Batching approach revised following the review suggestion; much cleaner!
def get_batches(int_text, batch_size, seq_length):
n_batches = int(len(int_text) / (batch_size * seq_length))
x_data = np.array(int_text[: n_batches * batch_size * seq_length])
y_data = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x, y)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
# def get_batches(int_text, batch_size, seq_length):
# """
# Return batches of input and target
# :param int_text: Text with the words replaced by their ids
# :param batch_size: The size of batch
# :param seq_length: The length of sequence
# :return: Batches as a Numpy array
# """
# # TODO: Implement Function
# batches = []
# n_batchs = (len(int_text)-1) // (batch_size * seq_length)
# # int_text = int_text[:n_batchs*batch_size * seq_length+1]
# for i in range(0,n_batchs*seq_length,seq_length):
# x = []
# y = []
# for j in range(i,i+batch_size * seq_length,seq_length):
# x.append(int_text[j:j+seq_length])
# y.append(int_text[j+1:j+1+seq_length])
# batches.append([x,y])
# return np.array(batches)
# #print(get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3))
# """
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
# """
# tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# ~4257 lines, ~11 words per line on average
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 200
# RNN Size (a reasonable default; tune as needed)
rnn_size = 256
# Embedding Dimension Size (note: build_nn above overrides this with 300)
embed_dim = 300
# Sequence Length
seq_length = 10 # !!! increasing seq_length from 5 to 10 really helped; what happens if we increase it further?
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 40
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
# input_data_shape[0] batch size
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
import random
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
r = random.uniform(0,1)
# running total of probability mass seen so far
s = 0
# default to the last index (valid indices run 0..len-1)
char_id = len(probabilities) - 1
# for each word's predicted probability
for i in range(len(probabilities)):
# accumulate it into the running total
s += probabilities[i]
# check if the cumulative total exceeds our random draw
if s >= r:
# if it does, that's the chosen next word
char_id = i
break
return int_to_vocab[char_id]
# An alternative, simpler approach (unreachable after the return above,
# kept for reference); for the reasoning behind this sampling choice, see:
# http://yanyiwu.com/work/2014/01/30/simhash-shi-xian-xiang-jie.html
rand = np.sum(probabilities) * np.random.rand(1)
pred_word = int_to_vocab[int(np.searchsorted(np.cumsum(probabilities), rand))]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
deg = np.arange(0.0, 90.01, 0.01)
def deg2dist(deg): return 10.29 * np.cos(np.pi / 180 * deg)
dist = deg2dist(deg)
# Note that using plt.subplots below is equivalent to using
# fig = plt.figure and then ax = fig.add_subplot(111)
fig, ax = plt.subplots()
ax.plot(deg, dist)
ax.set(xlabel='degrees', ylabel='Distance',
title='Distance between lines of longitude 1/3 arc second apart')
ax.grid()
fig.savefig("degrees_and_dist.png")
plt.show()
print("Notable Latitudes")
print("49th parallel (US-Canada border): ", deg2dist(49))
print("25.9 deg N (Brownsville, TX): ", deg2dist(25.9))
print("35.0 deg (LA): ", deg2dist(35.0))
print("40.7 deg (NYC): ", deg2dist(40.7))
"""
Explanation: Sick Slopes
Finding routes that maximize speed in unpowered personal transport
The hobby I've spent the most time doing is longboarding, where my board is basically a large skateboard optimized for cruising around. I have spent many days and nights cruising in Atlanta, Houston, Dallas, and Barcelona. Often, I want to find places where I can fly downhill fast, especially in routes uninterrupted by stop signs, stoplights, and other barriers where I can slow down before encountering anything interesting. Basically, I just want to find the routes where I can go fast.
Existing Solutions
Findhills is the state-of-the-art hillfinding technology. In the site, users can link together points on Google Maps to create a route that follows roads. In a sidebar, a gorgeous, interactive, color-coded graph shows steepness and elevation change.
This tool is best used in conjunction with a topo map such as MyTopo or Google Maps terrain view.
Browsing the web
There are several YouTube videos that reveal locations. Searching for specific cities can yield surprisingly good results.
A couple of people have started spot databases. Check out BoardSpots and Longboard Mapp.
Longboarding forums like Silverfish (rip) and /r/longboarding are very stingy about posting spots since longboarding is illegal in so many places.
Data Sources: OpenStreetMap
Since Google Maps doesn't allow for direct download of street data, OpenStreetMap (OSM) is the source from which we'll get street locations. OSM's API takes 2 points that represent a bounding box (rectangle of coordinates) and returns a list of elements, of which OSM has defined 3 types, all of which can have tags (attributes):
* Nodes, which are points with a specific coordinate pair
* Ways, which are made up of nodes, typically roads, rivers or boundaries. The tag prefix 'highway' indicates any type of road or path.
* Relations, which are made up of nodes, ways, and other relations.
For this project we process this data by creating an adjacency list of all nodes found in ways with desirable attributes, all of which were originally defined in British terms. We eliminate roads marked as 'highway=motorway' (i.e. limited access freeway), highway=trunk, and highway=primary. Some tags such as 'highway=steps' can definitely be eliminated. Although the current implementation eliminates ways with 'surface=unpaved' and similar ways, this should be an editable attribute so bikers can enable unpaved surfaces and longboarders can disable them. We also eliminate bridges, since USGS typically provides elevation data for land under bridges rather than the bridge itself. With further testing, it might be advantageous to assume all bridges are flat. For one-way streets, we only complete the adjacency list for the legal direction.
We also generate from OSM a list of stop signs and traffic signals. Stop signs (and other equivalent markers) are unfortunately marked poorly by OSM: while there are over 1,000,000 signals marked, there are fewer than 400,000 stop signs marked, and I often see areas with no stop signs marked. While this may be because of the difficulty seeing stop signs from aerial pictures, it might also be due to the policy on the marker for stop signs, 'highway=stop', for intersections where only some of the roads have stop signs (not an all-way stop):
Since the stop line on the approach applies to only one travel direction, that direction can usually be deduced by finding the shorter distance to the priority intersection. However, a few stop signs are on 2-way streets between closely-spaced junctions, making it necessary to identify the travel direction that stops. Where needed, this can be done using direction=forward or direction=backward, relating stop direction to OSM forward/backward of the way that contains the highway=stop node.
This is quite annoying for routing software that wants to account for stop signs. It shouldn't cause too many issues with my program though.
Data Sources: USGS Elevation Data
USGS publishes elevation data in its National Elevation Dataset (NED). While the entire US besides Alaska has been mapped at the resolution of 1/3 arc second, some of the US based on what I assume is state and municipal funding has been mapped at resolutions of 1/9 arc second and 1 meter.
So how large is 1/3 of an arc second? The question is complicated if you're trying to deal with the earth's shape of an ellipsoid with varying elevation, but we can approximate it as a sphere for our purposes since we're not trying to point satellites or anything. Lines of latitude 1/3 arc second apart are about 10.29 meters apart, as are lines of longitude at the equator. However, lines of longitude become closer together farther away from the equator: 10.29 * cos(latitude) gives the distance between 2 lines of longitude 1/3 arc second apart.
End of explanation
"""
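The one-way handling described above can be sketched roughly as follows. This is an illustrative helper, not the notebook's actual graph code; it assumes OSM-style way dicts and the common `oneway` tag values (`yes`, `no`, and `-1` for ways mapped against the travel direction):

```python
from collections import defaultdict

def add_way_edges(graph, way):
    # Add directed edges for one OSM way, respecting the 'oneway' tag.
    node_ids = way['nd']                        # ordered node ids along the way
    oneway = way.get('tag', {}).get('oneway', 'no')
    for a, b in zip(node_ids, node_ids[1:]):
        if oneway != '-1':                      # forward travel is legal
            graph[a].add(b)
        if oneway in ('no', '-1'):              # backward travel is legal
            graph[b].add(a)
    return graph

g = add_way_edges(defaultdict(set), {'nd': [1, 2, 3], 'tag': {'oneway': 'yes'}})
```

Here `g` only contains the forward edges 1→2 and 2→3, matching the "legal direction only" rule.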
g = -9.81 # acceleration due to gravity, m/s^2
drag_c = .6 # drag coefficient of human body
cross_a = .68 # cross-sectional area of human body, m^2
mass = 80 # kg
frict_c = .03 # coefficient of friction
import math
def acceleration_due_to_wind(v):
return -v**2 * (1.225 * drag_c * cross_a) / (2 * mass)
def acceleration_due_to_slope_no_friction(theta):
return g * math.sin(theta)
"""
Explanation: So how should we query points?
I think I've given this problem a bit more thought than it deserves for my purposes, but I'll type it out here. I'm not sure how the USGS decides what single elevation should represent an area of land, but if I were them, I would either go for the median elevation that appears to represent land in their point cloud, or I would try and represent a point towards the middle of the block. In either case, most slopes in life are rather continuous, so the two should be similar.
Defining the problem
To calculate the acceleration of a wheeled object on an incline (or lack thereof), we must consider its mass and the forces acting upon it, namely, the acceleration due to gravity, the wheel's internal bearing friction, the rolling friction, and air resistance. Gravity contributes a force proportional to the sine of the incline and the friction terms to its cosine; air resistance is the daily work of thousands of engineers.
Reducing the problem's complexity
A core part of any routing algorithm here is taking a node and checking all adjacent nodes to see if an adjacent node's speed can be improved by travelling through a road segment. Unfortunately, our formula to calculate the speed after going through a segment requires a numerical integration, so it's kinda slow. We can approximate a solution by instead looking at the energy gained or lost going through the segment if air did not exist. We use these to find the optimal paths, then we use these paths and get a more precise simulation using our fancy formula.
How much does air matter? Drag scales with v0 ** 2
End of explanation
"""
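Setting the net acceleration to zero gives a terminal velocity for a fixed grade, which is a handy sanity check on how much drag matters. A sketch using the same assumed rider constants as above (the `terminal_velocity` helper itself is illustrative, not part of the notebook's pipeline):

```python
import math

# Assumed rider constants, matching the values used in this notebook
g = -9.81        # acceleration due to gravity, m/s^2 (negative by convention here)
drag_c = 0.6     # drag coefficient of a standing human
cross_a = 0.68   # cross-sectional area, m^2
mass = 80        # kg
frict_c = 0.03   # rolling-friction coefficient

def terminal_velocity(dh, dist):
    """Speed where gravity, rolling friction, and drag balance on a slope."""
    theta = math.atan2(dh, dist)                    # dh < 0 means downhill
    c1 = (1.225 * drag_c * cross_a) / (2 * mass)    # drag deceleration per v^2
    a0 = g * math.sin(theta) + g * frict_c * math.cos(theta)
    return math.sqrt(a0 / c1) if a0 > 0 else 0.0

v = terminal_velocity(-2, 30)   # roughly 10.7 m/s (~24 mph) on a ~6.7% grade
```

Any flat or uphill segment has no positive balance point, so the helper returns 0 there.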
def equivalent_slope(v):
return math.tan(math.asin(acceleration_due_to_wind(v) / g))
t = np.arange(0.0, 15.0, 0.01)
s = [equivalent_slope(v) for v in t]
plt.plot(t, s)
plt.xlabel('Speed (m/s)')
plt.ylabel('Equivalent Slope')
plt.title('Comparing deceleration due to wind to accelerations due to upward slopes')
plt.show()
t = np.arange(0.0, 15.0, 0.01)
s = [equivalent_slope(v) for v in t]
t = t * 2.237
plt.plot(t, s)
plt.xlabel('Speed (mph)')
plt.ylabel('Equivalent Slope')
plt.title('Comparing deceleration due to wind to accelerations due to upward slopes')
plt.show()
t = np.arange(0.0, 15.0, 0.01)
s = [acceleration_due_to_wind(v)/(g) for v in t]
t = t * 2.237
plt.plot(t, s)
plt.xlabel('Speed (mph)')
plt.ylabel('Equivalent Friction coefficient')
plt.title('Comparing deceleration due to wind to accelerations due to friction coefficients on flat')
plt.show()
"""
Explanation: Solving for theta and finding the equivalent slope of the angle...
End of explanation
"""
data = {
'north': 33.7874,
'west': -84.4203,
'south': 33.7677,
'east': -84.3812,
}
import wget
import zipfile
import urllib.error
from scan_product_links import urls
import math
import os
us_urls = urls("elevationproductslinks/13secondplots.csv")
mx_ca_urls = urls("elevationproductslinks/1secondplots.csv")
def download_coords(data, country='United States'):
# TODO check if request is gucci
# TODO Remove the following block of code in production
if country == 'United States':
path_suffix = '_13'
useful_urls = us_urls
else:
path_suffix = '_1'
useful_urls = mx_ca_urls
for lat in range(
math.ceil(float(data['south'])), math.ceil(float(data['north'])) + 1
# Eg N 87.7 to N 86.
):
for lng in range(
math.floor(float(data['west'])), math.floor(float(data['east'])) + 1
):
fname = ('grd' + ('n' if lat>0 else 's')
+ str(abs(math.ceil(lat))).zfill(2)
+ ('e' if lng>=0 else 'w')
+ str(abs(math.floor(lng))).zfill(3))
database_path = ('elevationdata/'
+ fname
+ path_suffix + '/w001001.adf'
)
if not os.path.exists(database_path):
try:
print("downloading " + useful_urls[(lat, lng)] + "\n")
wget.download(useful_urls[(lat, lng)])
print("\n")
file_name = useful_urls[(lat, lng)].split('/')[-1]
archive = zipfile.ZipFile(file_name)
for file in archive.namelist():
if file.startswith("grd" + fname[3:] + path_suffix + "/"):
archive.extract(file, "elevationdata")
os.remove(file_name)
except (urllib.error.HTTPError):
print("Could not download data for", (lat, lng))
except KeyError:
print("Thing not found in urls:", (lat, lng))
download_coords(data)
mapsize = (
data['west'],
data['south'],
data['east'],
data['north']
)
"""
Explanation: It's actually pretty bad lol
Solving the problem
I've thought a lot about this problem, first talking about doing DFS and BFS based approaches from points of local maxima (and programming it), then talking about doing Dijkstra-inspired algorithms with steeper slopes or fastest route expansions first. After taking Algorithms, I realize that the key to solving this is to minimize duplicate work! I present 2 algorithms.
The first looks at the highest elevation nodes first.
We can label each node with the speed at which we start, typically 1 m/s for longboards, then set each node's previous node prev to None.
Consider the highest elevation node on our map. It would make sense for routes to start here, since all routes to this point are uphill. For each adjacent node, we can then edit the node's prev and speed with the new highest speed if it is indeed higher.
If we change the speed of a node that is uphill from another node, we must re-update all of its adjacent nodes' speeds, then recurse on all nodes updated above the original node.
We can also do more simple depth-first and breadth-first search approaches, where the first paths start from the highest node on the graph, then subsequent paths start from lower nodes on the graph which have not been incorporated into another path yet. Basically, we must keep a set of nodes that still need to be expanded, and we can also keep some data structure to sort those nodes, which may possibly contain duplicates. One option for this data structure is to have a priority queue with the highest elevation nodes first.
Implementing
Let's start by downloading elevation data for a query. We query by using a simple dict.
End of explanation
"""
import subprocess
import math
import osmapi
import os.path
import pickle
def get_map_data(mapsize):
mapfilepath = 'maps/map'+str(mapsize)+'.dat'
# TODO Allow spanning countries
# (west, south, east, north), string
api_link = osmapi.OsmApi(#username='evanxq1@gmail.com',
#password='hrVQ*DO9aD9q'#,
#api="api06.dev.openstreetmap.org"
)
try:
if os.path.exists(mapfilepath):
print('loading local map...')
with open(mapfilepath, 'rb') as f:
map_data = pickle.load(f)
else:
print('requesting map...')
map_data = api_link.Map(mapsize[0], mapsize[1],
mapsize[2], mapsize[3])
with open(mapfilepath, 'wb') as f:
pickle.dump(map_data, f) # TODO delete this entire try block tbh
except IOError as e:
print("Couldn't write map data!", e.errno, e.strerror)
# except Error as e: # osmapi.OsmApi.MaximumRetryLimitReachedError: #TODO: handle errors
# print(e.errorno, e.strerror)
# print("Could not get map data!")
# return False, [], [], [], [], [], []
return map_data
get_map_data(mapsize)
map_data = get_map_data(mapsize)
"""
Explanation: Next, we query OpenStreetMap for the map data!
End of explanation
"""
# TODO: add to node initialization
class Node:
def __init__(self, node_id, lat, lng, is_stoplight, adj):
self.node_id = node_id
self.lat = lat
self.lng = lng
self.is_stoplight = is_stoplight
self.adj = adj
self.edge_coords = None
self.edge_elevations = []
self.edge_work = []
def __lt__(self, other):
return False
def __gt__(self, other):
return False
def create_adj_node_ptrs(self):
self.adj_node_ptrs = list(nodes[adj_node_id] for adj_node_id in self.adj)
# TODO support bridges
data['allow_bridges'] = False
data['banned_highway_types'] = [
'motorway', 'trunk', 'service', 'steps', 'footway', 'pedestrian', 'sidewalk', 'path'
]
# Also may include primary, secondary
# Highway type descriptions https://wiki.openstreetmap.org/wiki/Key:highway
# motorway: interstate
# trunk: mostly grade-separated state/us highways, always with medians
# primary: major roads
# footway: exclusively pedestrians
# sidewalk: always on side of road
# pedestrian: pedestrian oriented path, but not sidewalk
# steps: stairsteps
# path: trail
# bridleway: horse trail
from collections import defaultdict
def map_to_graph(map_data):
banned_types = set(data['banned_highway_types'])
graph = defaultdict(set)
for entry in map_data:
if (entry['type'] == 'way'
and 'data' in entry.keys()
and 'tag' in entry['data'].keys()
and 'highway' in entry['data']['tag'].keys()
):
highway_type = entry['data']['tag']['highway']
else: # Is not labelled highway
continue
if (True
and highway_type not in banned_types
and (data['allow_bridges'] or ('bridge' not in entry['data']['tag']))
):
road_nodes = entry['data']['nd']
for i in range(len(road_nodes) - 1):
graph[road_nodes[i]].add(road_nodes[i+1])
graph[road_nodes[i+1]].add(road_nodes[i])
return graph
adj_list = map_to_graph(map_data)
def get_node_entries(target_nodes, map_data):
for item in map_data:
item_id = item["data"]["id"]
if item_id in target_nodes:
yield (item_id, item)
node_lat_lng = []
datapts_per_degree = 10800
def create_node_list_with_elevations(adj_list, map_data):
nodes = dict()
node_heights, node_latlons = dict(), dict()
stoplights = set()
for node_id, node_info in get_node_entries(adj_list.keys(), map_data):
nodes[node_id] = Node(
node_id,
lat=float(node_info['data']['lat']),
lng=float(node_info['data']['lon']),
is_stoplight =
('tag' in node_info['data'] and 'highway' in node_info['data']['tag']
and node_info['data']['tag']['highway'] == 'traffic_signals'
),
adj = list(adj_list[node_id])
)
return nodes
nodes = create_node_list_with_elevations(adj_list, map_data)
for node_id, node in nodes.items():
node.create_adj_node_ptrs()
import numpy as np
datapts_per_degree = 10800
def add_edges_return_queries(nodes):
large_query = set()
for node_id, node in nodes.items():
edge_coords = []
for adj_node in node.adj_node_ptrs:
# Degrees aren't squares, so this isn't super valid, but it's not important.
dist_in_degs = np.sqrt((node.lat - adj_node.lat)**2 + (node.lng - adj_node.lng)**2)
n_steps = max(int(dist_in_degs * datapts_per_degree), 2)
lat_steps = np.linspace(node.lat, adj_node.lat, num=n_steps, endpoint=True)
lng_steps = np.linspace(node.lng, adj_node.lng, num=n_steps, endpoint=True)
coords = list(zip(lat_steps, lng_steps))
edge_coords.append(coords)
large_query.update(coords)
node.edge_coords = edge_coords
return large_query
large_query = add_edges_return_queries(nodes)
# Make sure each node's edges start with the same coordinates
def test_edge_coords_start(nodes):
for node_id, node in nodes.items():
first = (node.lat, node.lng)
for edge in node.edge_coords:
assert edge[0] == first
test_edge_coords_start(nodes)
# Make sure each node's edges end with the same coordinates
# as its adjacent node's start with
def test_edge_coords_end(nodes):
for node_id, node in nodes.items():
assert len(node.edge_coords) == len(node.adj) == len(node.adj_node_ptrs)
for i in range(len(node.adj_node_ptrs)):
assert (node.adj_node_ptrs[i].lat, node.adj_node_ptrs[i].lng) == node.edge_coords[i][-1]
test_edge_coords_end(nodes)
"""
Explanation: We need to turn the map into a usable graph structure now!
Possible optimization: right now I have each node store its neighbors both as IDs and as pointers. Since pointer dereferencing is slow, what we really want to do is assign a zero-indexed ID to each node so it can be referenced in a list.
End of explanation
"""
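That remapping could be sketched roughly like this. The `densify` helper is hypothetical, not part of the notebook; it just shows the idea of turning an id-keyed adjacency dict into parallel lists indexed 0..n-1:

```python
def densify(adj_list):
    # Fix an ordering of node ids, then rewrite every neighbor set as
    # a list of dense integer indices into that ordering.
    ids = list(adj_list)
    index_of = {node_id: i for i, node_id in enumerate(ids)}
    adj_idx = [[index_of[n] for n in adj_list[node_id] if n in index_of]
               for node_id in ids]
    return ids, adj_idx

ids, adj_idx = densify({10: {20}, 20: {10, 30}, 30: {20}})
```

Lookups then become `adj_idx[i]` on a list rather than chasing `Node` pointers, and `ids[i]` recovers the original OSM id when needed.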
from collections import defaultdict
def build_query_text(large_query, country="United States"):
queries = defaultdict(str)
latlng_order = defaultdict(list)
for lat_lng in large_query:
lat, lng = lat_lng
fname = ('grd' + ('n' if lat>0 else 's')
+ str(abs(math.ceil(lat))).zfill(2)
+ ('e' if lng>=0 else 'w') # lng = 0 block is all east I guess
+ str(abs(math.floor(lng))).zfill(3)
)
s = str(lng) + ' ' + str(lat) + '\n'
queries[fname] += s
latlng_order[fname].append(lat_lng)
return queries, latlng_order
def query_elevations(queries, latlng_order, country="United States"):
points = []
elevations = []
for fname in queries.keys():
if country == 'United States': # TODO deal with AK
database_path = 'elevationdata/' + fname + '_13/w001001.adf'
if country == 'Mexico' or country == 'Canada' or country == None:
# TODO deal with country == None which would be sorta weird
database_path = 'elevationdata/' + fname + '_1/w001001.adf'
proc = subprocess.Popen(
['gdallocationinfo', database_path, '-valonly', '-geoloc'],
stdin=subprocess.PIPE, stdout=subprocess.PIPE,
universal_newlines=True
)
output, err = proc.communicate(queries[fname])
elevations += [float(s) for s in output.splitlines()]
points += latlng_order[fname]
if len(points) != len(elevations):
raise Exception("Error querying points: " + str(len(points)) + " points, " + str(len(elevations)) + " elevations")
ret = dict()
for i in range(len(points)):
ret[points[i]] = elevations[i]
return ret
queries, latlng_order = build_query_text(large_query, country="United States")
elevations = query_elevations(queries, latlng_order, country="United States")
def set_node_elevations(nodes, elevations):
for node_id, node in nodes.items():
for edge in node.edge_coords:
elevation_list = []
for coord_pair in edge:
elevation_list.append(elevations[coord_pair])
node.edge_elevations.append(elevation_list)
set_node_elevations(nodes, elevations)
"""
Explanation: Possible optimization here: instead of +=ing a bunch of strings, we could use a stringbuilder type object
End of explanation
"""
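The `+=` concern above can be addressed by collecting the per-tile lines in lists and joining once at the end. A sketch of that variant, reusing the same tile-naming scheme as `build_query_text` (the `_fast` name is just illustrative):

```python
import math
from collections import defaultdict

def build_query_text_fast(large_query):
    # Collect query lines per elevation tile, then join each list once
    # instead of repeatedly concatenating strings.
    parts = defaultdict(list)
    for lat, lng in large_query:
        fname = ('grd' + ('n' if lat > 0 else 's')
                 + str(abs(math.ceil(lat))).zfill(2)
                 + ('e' if lng >= 0 else 'w')
                 + str(abs(math.floor(lng))).zfill(3))
        parts[fname].append(str(lng) + ' ' + str(lat))
    return {f: '\n'.join(lines) + '\n' for f, lines in parts.items()}

queries = build_query_text_fast([(33.78, -84.4), (33.77, -84.41)])
```

Each join is linear in the total text length, so the whole pass stays O(n) even for large query sets.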
deviations = []
high_deviations = []
low_deviations = []
for node_id, node in nodes.items():
for i, edge in enumerate(node.edge_elevations):
bottom_end = min(edge[0], edge[-1])
top_end = max(edge[0], edge[-1])
top = max(edge)
bottom = min(edge)
deviations.append(max(top-top_end, bottom_end-bottom))
high_deviations.append(top-top_end)
low_deviations.append(bottom_end-bottom)
print(0, len(deviations))
print(.01, sum([i >= .01 for i in deviations]))
print(.1, sum([i >= .1 for i in deviations]))
print(.3, sum([i >= .3 for i in deviations]))
print(1, sum([i >= 1 for i in deviations]))
print(3, sum([i >= 3 for i in deviations]))
print(5, sum([i >= 5 for i in deviations]))
"""
Explanation: How nice! Now we need to decide what kind of edges we need to split on due to elevation differences. Let's test some stuff out.
End of explanation
"""
edges_to_scan = 0
for node_id, node in nodes.items():
for i, edge in enumerate(node.edge_elevations):
bottom_end = min(edge[0], edge[-1])
top_end = max(edge[0], edge[-1])
top = max(edge)
bottom = min(edge)
edges_to_scan += len(node.edge_elevations)
print(edges_to_scan/4) # over 4 rather than 2 since we only have to look at "opposite" nodes from the ones we split
"""
Explanation: Note that all values are even since our graph is directed, so there are edges going in each direction. Here's a question: since we've been storing the adjacent edges of each node as an arbitrarily ordered list, what is the expected runtime of finding the opposite edge to split if we split edges that deviate over 1 meter?
End of explanation
"""
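With unordered neighbor lists, finding the opposite edge of (u, v) is a linear scan of v's list. A hypothetical restructuring (names here are illustrative, not notebook code) that keys each node's outgoing edges by neighbor id would make it a constant-time lookup:

```python
def build_edge_index(adj):
    # Map each node to a dict of neighbor -> edge payload (here just the
    # endpoint pair, but it could hold elevations, work, etc.).
    return {u: {v: (u, v) for v in nbrs} for u, nbrs in adj.items()}

def opposite_edge(edges, u, v):
    # The reverse edge of (u, v) is an O(1) dict lookup on node v.
    return edges[v][u]

edges = build_edge_index({1: [2, 3], 2: [1], 3: [1]})
```

This would turn the edge-splitting pass from an expected O(degree) scan per split into O(1) per split.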
data['mass'] = 80 #kg
data['init_speed'] = 1.0
data['use_stoplights'] = True
def prep_graph(nodes):
init_energy = .5 *data['mass'] * data['init_speed'] ** 2
for node_id, node in nodes.items():
node.energy = init_energy
node.speed = data['init_speed']
node.prev_node = None
node.next_nodes = set()
node.path_start = node
# TODO add to some other method
node.elevation = node.edge_elevations[0][0]
prep_graph(nodes)
"""
Explanation: I've seen better, but it's not worth doing anything about now. Road graphs are always kind of sparse, so I imagine this would increase somewhat linearly with edge total. Programming splitting sounds like a pain, though. I'll do it later (tm)
End of explanation
"""
sorted_nodes = sorted(nodes.values(), key=lambda n: -n.elevation)
"""
Explanation: Here comes that big dramatic |V|log|V| sort that all CS classes have been preparing me for!!
End of explanation
"""
g = -9.81 # acceleration due to gravity, m/s^2
data['drag_c'] = .6 # drag coefficient of human body
data['cross_a'] = .68 # cross-sectional area of human body, m^2
data['mass'] = 80 # kg
data['frict_c'] = .03 # coefficient of friction
import math
def prep_data_constants(data):
data['c1'] = (1.225 * data['drag_c'] * data['cross_a']) / (2 * data['mass'])
data['c2'] = g * data['frict_c']
def new_velocity(v0, dh, dist): # for small changes in V; dist is horizontal dist
if v0 == 0:
return 0
theta = math.atan2(dh, dist)
# Original implementation
# a = ((g * math.sin(theta))
# - (1.225 * drag_c * cross_a * v0 ** 2) / (2 * mass)
# + (g * frict_c * math.cos(theta))
# )
# Prematurely optimized (tm) implementation
a = ((g * math.sin(theta))
- v0 ** 2 * data['c1']
+ math.cos(theta) * data['c2']
)
# Total Acceleration = grav, air resistance, rolling friction resistance
# Assumes final velocity causes about the amount of air resistance as
# inital velocity
vel_sqr = 2 * a * math.sqrt(dist**2 + dh**2) + v0 ** 2
if vel_sqr > 0:
return math.sqrt(vel_sqr)
else:
return 0
prep_data_constants(data)
new_velocity(1.0, -2, 30)
new_velocity(1.0, -1, 30)
"""
Explanation: Well that was quick. Let's get to the «physics»
End of explanation
"""
def new_velocity(v0, dh, dist, integrations=1): # for small changes in V; dist is horizontal dist
if v0 == 0:
return 0
theta = math.atan2(dh, dist)
# Original implementation
# a = ((g * math.sin(theta))
# - (1.225 * drag_c * cross_a * v0 ** 2) / (2 * mass)
# + (g * frict_c * math.cos(theta))
# )
# Prematurely optimized (tm) implementation
v = v0
dist_per_i = dist/integrations
dh_per_i = dh/integrations
for i in range(integrations):
a = ((g * math.sin(theta))
- v ** 2 * data['c1']
+ math.cos(theta) * data['c2']
)
# Total Acceleration = grav, air resistance, rolling friction resistance
# Assumes the final velocity causes about the same air resistance as the
# initial velocity
vel_sqr = 2 * a * math.sqrt(dist_per_i**2 + dh_per_i**2) + v ** 2
if vel_sqr > 0:
v = math.sqrt(vel_sqr)
else:
return 0
return v
def new_velocity_no_friction(v0, dh, dist): # for small changes in V; dist is horizontal dist
integrations=1
if v0 == 0:
return 0
theta = math.atan2(dh, dist)
# Original implementation
# a = ((g * math.sin(theta))
# - (1.225 * drag_c * cross_a * v0 ** 2) / (2 * mass)
# + (g * frict_c * math.cos(theta))
# )
# Prematurely optimized (tm) implementation
v = v0
dist_per_i = dist/integrations
dh_per_i = dh/integrations
for i in range(integrations):
a = ((g * math.sin(theta))
+ math.cos(theta) * data['c2']
)
# Total Acceleration = grav, air resistance, rolling friction resistance
# Assumes the final velocity causes about the same air resistance as the
# initial velocity
vel_sqr = 2 * a * math.sqrt(dist_per_i**2 + dh_per_i**2) + v ** 2
if vel_sqr > 0:
v = math.sqrt(vel_sqr)
else:
return 0
return v
new_velocity_no_friction(1.0, -1, 30)
new_velocity(1.0, -1, 30, 1)
new_velocity(1.0, -1, 30, 10)
new_velocity(1.0, -1, 30, 100)
new_velocity(1.0, -1, 30, 1000)
new_velocity(1.0, -1, 30, 10000)
new_velocity_no_friction(1.0, -2, 30)
new_velocity(1.0, -2, 30, 1)
new_velocity(1.0, -2, 30, 10)
new_velocity(1.0, -2, 30, 100)
new_velocity(1.0, -2, 30, 1000)
new_velocity(1.0, -2, 30, 10000)
def latlong_dist(lat1_raw, lon1_raw, lat2_raw, lon2_raw):
lat1 = math.radians(float(lat1_raw))
lon1 = math.radians(float(lon1_raw))
lat2 = math.radians(float(lat2_raw))
lon2 = math.radians(float(lon2_raw))
# approximate radius of earth in m
R = 6373000.0
dlon = lon2 - lon1
dlat = lat2 - lat1
a = (math.sin(dlat / 2)**2 + math.cos(lat1) * math.cos(lat2)
* math.sin(dlon / 2)**2)
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
distance = R * c
return distance
data['approx_frict_c'] = .03
def calculate_work(dist, dh): # Work done by gravity and rolling friction
theta = math.atan2(dh, dist)
a = ((g * math.sin(theta))
+ math.cos(theta) * g * data['approx_frict_c']
)
real_dist = math.sqrt(dist**2 + dh**2)
return real_dist * a * data['mass']
def find_work_all_edges(sorted_nodes):
for node in sorted_nodes:
node.edge_work = []
for i in range(len(node.adj)):
edge_coords = node.edge_coords[i]
edge_elevations = node.edge_elevations[i]
work = 0
horiz_dist = latlong_dist(edge_coords[0][0], edge_coords[0][1],
edge_coords[1][0], edge_coords[1][1])
for j in range(len(edge_coords) - 1):
dh = edge_elevations[j+1] - edge_elevations[j]
# horiz dist is actually same for each part of an edge
# horiz_dist = latlong_dist(edge_coords[j][0], edge_coords[j][1], edge_coords[j+1][0], edge_coords[j+1][1])
work += calculate_work(horiz_dist, dh)
node.edge_work.append(work)
find_work_all_edges(sorted_nodes)
import numpy as np
import pylab as pl
from matplotlib import collections as mc
# lines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]]
# c = np.array([(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)])
def graph_paths(sorted_nodes):
lines = []
colors = []
max_lat = -90
min_lat = 90
max_lng = -180
min_lng = 180
done = set()
for node in sorted_nodes:
max_lat = max(node.lat, max_lat)
min_lat = min(node.lat, min_lat)
max_lng = max(node.lng, max_lng)
min_lng = min(node.lng, min_lng)
for adj in node.next_nodes:
if ((adj.lng,adj.lat), (node.lng, node.lat)) not in done:
lines.append([(node.lng, node.lat),(adj.lng,adj.lat)])
done.add(((node.lng, node.lat),(adj.lng,adj.lat)))
colors = []
for line in lines:
beginning, end = line
x1, y1 = beginning
x2, y2 = end
angle = math.atan2(x2-x1, y2-y1)
colors.append((math.cos(angle) * .5 + .5, math.sin(angle) * .5 + .5, 0, 1))
lc = mc.LineCollection(lines, colors=colors, linewidths=1)
fig, ax = pl.subplots(figsize=(16,10))
ax.add_collection(lc)
# ax.autoscale()
# ax.margins(0.001)
ax.set_xlim(min_lng, max_lng)
ax.set_ylim(min_lat, max_lat)
plt.show()
import numpy as np
import pylab as pl
from matplotlib import collections as mc
# lines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]]
# c = np.array([(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)])
def generate_compass():
lines = []
for x in [-1, -.5, 0, .5, 1]:
for y in [-1, -.5, 0, .5, 1]:
if (x, y) != (0,0):
lines.append([(0,0), ((x,y))])
colors = []
for line in lines:
beginning, end = line
x1, y1 = beginning
x2, y2 = end
angle = math.atan2(x2-x1, y2-y1)
colors.append((math.cos(angle) * .5 + .5, math.sin(angle) * .5 + .5, 0, 1))
lc = mc.LineCollection(lines, colors=colors, linewidths=1)
fig, ax = pl.subplots()
ax.add_collection(lc)
ax.autoscale()
# ax.margins(0.001)
plt.show()
generate_compass()
# Adding this for reference
# class Node:
# def __init__(self, node_id, lat, lng, is_stoplight, adj):
# self.node_id = node_id
# self.lat = lat
# self.lng = lng
# self.is_stoplight = is_stoplight
# self.adj = adj
# self.edge_coords = None
# self.edge_elevations = []
# def create_adj_node_ptrs(self):
# self.adj_node_ptrs = list(nodes[adj_node_id] for adj_node_id in self.adj)
# node.speed = data['init_speed']
# node.prev_node = None
# node.path_start = node
# # TODO add to some other method
# node.elevation = node.edge_elevations[0][0]
# g = -9.81 #acceleration due to gravity, m/s^2
# data['drag_c'] = .6 #drag coefficient of human body
# data['cross_a'] = .68 #Cross-sectional area of human body
# data['mass'] = 80 #kg
# data['frict_c'] = .03 #Coefficient of friction
# def prep_data_constants(data):
# data['c1'] = (1.225 * data['drag_c'] * data['cross_a']) / (2 * data['mass'])
# data['c2'] = g * data['frict_c']
"""
Explanation: Let's see what happens when we try to "integrate" this more precisely.
End of explanation
"""
from heapq import heappush, heappop
def algo_1(sorted_nodes):
edges_explored = 0
for top_node in sorted_nodes:
if top_node.prev_node != None: # Already part of a path
continue
need_to_explore = set([top_node])
heap = [(-top_node.elevation, top_node)]
while need_to_explore:
_, node = heappop(heap)
if node not in need_to_explore:
continue
need_to_explore.remove(node)
node_energy = node.energy
for i in range(len(node.adj)):
adj = node.adj_node_ptrs[i]
edge_work = node.edge_work[i]
# For air resistance version, check first if
# (node.speed > adj.speed or node.elevation > adj.elevaton)
# then ride down nodes
if edge_work + node_energy > adj.energy:
adj.energy = edge_work + node_energy
if adj.prev_node is not None:
prev = adj.prev_node
next_nodes = prev.next_nodes
next_nodes.remove(adj)
adj.prev_node = node
node.next_nodes.add(adj)
adj.path_start = top_node
need_to_explore.add(adj)
heappush(heap, (-adj.elevation, adj))
edges_explored += 1
return edges_explored
prep_graph(nodes) # reset graph
algo_1(sorted_nodes)
graph_paths(sorted_nodes)
def test_nodes(sorted_nodes):
for node in sorted_nodes:
for adj in node.adj_node_ptrs:
if adj in node.next_nodes:
assert adj.prev_node == node
else:
assert adj.prev_node != node
test_nodes(sorted_nodes)
"""
Explanation: Attempt #1: Dijkstra inspired
End of explanation
"""
|
npdoty/bigbang | examples/Single Word Trend.ipynb | agpl-3.0 | %matplotlib inline
from bigbang.archive import Archive
import bigbang.parse as parse
import bigbang.graph as graph
import bigbang.mailman as mailman
import bigbang.process as process
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
from pprint import pprint as pp
import pytz
import numpy as np
import math
import nltk
from itertools import repeat
from nltk.stem.lancaster import LancasterStemmer
st = LancasterStemmer()
from nltk.corpus import stopwords
import re
urls = ["http://mail.scipy.org/pipermail/ipython-dev/"]#,
#"http://mail.scipy.org/pipermail/ipython-user/"],
#"http://mail.scipy.org/pipermail/scipy-dev/",
#"http://mail.scipy.org/pipermail/scipy-user/",
#"http://mail.scipy.org/pipermail/numpy-discussion/"]
archives= [Archive(url,archive_dir="../archives") for url in urls]
checkword = "python" #can change words, should be lower case
"""
Explanation: This notebook plots the trend of a single word in a single mailing list.
End of explanation
"""
df = pd.DataFrame(columns=["MessageId","Date","From","In-Reply-To","Count"])
for row in archives[0].data.iterrows():
try:
w = row[1]["Body"].replace("'", "")
k = re.sub(r'[^\w]', ' ', w)
k = k.lower()
t = nltk.tokenize.word_tokenize(k)
subdict = {}
count = 0
for g in t:
try:
word = st.stem(g)
except:
print g
pass
if word == checkword:
count += 1
if count == 0:
continue
else:
subdict["MessageId"] = row[0]
subdict["Date"] = row[1]["Date"]
subdict["From"] = row[1]["From"]
subdict["In-Reply-To"] = row[1]["In-Reply-To"]
subdict["Count"] = count
df = df.append(subdict,ignore_index=True)
except:
if row[1]["Body"] is None:
print '!!! Detected an email with an empty Body field...'
else: print 'error'
df[:5] # dataframe with the per-message information for the particular word.
"""
Explanation: You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models.
To download, from an interactive Python shell, run:
import nltk
nltk.download()
And in the graphical UI that appears, choose "punkt" from the All Packages tab and Download.
End of explanation
"""
df.groupby([df.Date.dt.year, df.Date.dt.month]).agg({'Count':np.sum}).plot(y='Count')
"""
Explanation: Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time.
End of explanation
"""
|
ML4DS/ML4all | R2.kNN_Regression/regression_knn_professor.ipynb | mit | # Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pylab
# Packages used to read datasets
import scipy.io # To read matlab files
import pandas as pd # To read datasets in csv format
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
# That's default image size for this interactive session
pylab.rcParams['figure.figsize'] = 9, 6
"""
Explanation: The k-nearest neighbors (kNN) regression algorithm
Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Notebook version: 2.2 (Sep 08, 2017)
Changes: v.1.0 - First version
Changes: v.1.1 - Stock dataset included.
Changes: v.2.0 - Notebook for UTAD course. Advertising data incorporated
Changes: v.2.1 - Text and code revisited. General introduction removed.
Changes: v.2.2 - Compatibility with python 2 and 3.
End of explanation
"""
# SELECT dataset
# Available options are 'stock', 'concrete' or 'advertising'
ds_name = 'stock'
# Let us start by loading the data into the workspace, and visualizing the dimensions of all matrices
if ds_name == 'stock':
# STOCK DATASET
data = scipy.io.loadmat('datasets/stock.mat')
X_tr = data['xTrain']
S_tr = data['sTrain']
X_tst = data['xTest']
S_tst = data['sTest']
elif ds_name == 'concrete':
# CONCRETE DATASET.
data = scipy.io.loadmat('datasets/concrete.mat')
X_tr = data['X_tr']
S_tr = data['S_tr']
X_tst = data['X_tst']
S_tst = data['S_tst']
elif ds_name == 'advertising':
# ADVERTISING DATASET
df = pd.read_csv('datasets/Advertising.csv', header=0)
X_tr = df.values[:150, 1:4]
S_tr = df.values[:150, [-1]] # The brackets around -1 is to make sure S_tr is a column vector, as in the other datasets
X_tst = df.values[150:, 1:4]
S_tst = df.values[150:, [-1]]
else:
print('Unknown dataset')
# Print the data dimension and the dataset sizes
print("SELECTED DATASET: " + ds_name)
print("---- The size of the training set is {0}, that is: {1} samples with dimension {2}.".format(
X_tr.shape, X_tr.shape[0], X_tr.shape[1]))
print("---- The target variable of the training set contains {0} samples with dimension {1}".format(
S_tr.shape[0], S_tr.shape[1]))
print("---- The size of the test set is {0}, that is: {1} samples with dimension {2}.".format(
X_tst.shape, X_tst.shape[0], X_tst.shape[1]))
print("---- The target variable of the test set contains {0} samples with dimension {1}".format(
S_tst.shape[0], S_tst.shape[1]))
"""
Explanation: 1. The dataset
We describe next the regression task that we will use in the session. The dataset is an adaptation of the <a href=http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html> STOCK dataset</a>, taken originally from the <a href=http://lib.stat.cmu.edu/> StatLib Repository</a>. The goal of this problem is to predict the values of the stocks of a given airplane company, given the values of another 9 companies in the same day.
<small> If you are reading this text from the python notebook with its full functionality, you can explore the results of the regression experiments using two alternative datasets:
The
<a href=https://archive.ics.uci.edu/ml/datasets/Concrete+Compressive+Strength>CONCRETE dataset</a>, taken from the <a href=https://archive.ics.uci.edu/ml/index.html>Machine Learning Repository at the University of California Irvine</a>. The goal of the CONCRETE dataset task is to predict the compressive strength of cement mixtures based on eight observed variables related to the composition of the mixture and the age of the material.
The Advertising dataset, taken from the book <a href= http://www-bcf.usc.edu/~gareth/ISL/data.html> An Introduction to Statistical Learning with applications in R</a>, with permission from the authors: G. James, D. Witten, T. Hastie and R. Tibshirani. The goal of this problem is to predict the sales of a given product, knowing the investment in different advertising sectors. More specifically, the input and output variables can be described as follows:
Input features:
TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
Radio: advertising dollars spent on Radio
Newspaper: advertising dollars spent on Newspaper
Response variable:
Sales: sales of a single product in a given market (in thousands of widgets)
To do so, just replace stock by concrete or advertising in the next cell. Remember that you must run the cells again to see the changes.
</small>
End of explanation
"""
pylab.subplots_adjust(hspace=0.2)
for idx in range(X_tr.shape[1]):
ax1 = plt.subplot(3,3,idx+1)
ax1.plot(X_tr[:,idx],S_tr,'.')
ax1.get_xaxis().set_ticks([])
ax1.get_yaxis().set_ticks([])
plt.show()
"""
Explanation: 1.1. Scatter plots
We can get a first rough idea about the regression task representing the scatter plot of each of the one-dimensional variables against the target data.
End of explanation
"""
# Mean of all target values in the training set
s_hat = np.mean(S_tr)
print(s_hat)
"""
Explanation: 2. Baseline estimation. Using the average of the training set labels
A first very simple method to build the regression model is to use the average of all the target values in the training set as the output of the model, discarding the value of the observation input vector.
This approach can be considered as a baseline, given that any other method making an effective use of the observation variables, statistically related to $s$, should improve the performance of this method.
The prediction is thus given by the mean of all the training targets, $$\hat{s} = \frac{1}{K}\sum_{k=1}^{K} s^{(k)},$$
End of explanation
"""
# We start by defining a function that calculates the average square error
def square_error(s, s_est):
# Squeeze is used to make sure that s and s_est have the appropriate dimensions.
y = np.mean(np.power((s - s_est), 2))
# y = np.mean(np.power((np.squeeze(s) - np.squeeze(s_est)), 2))
return y
# Mean square error of the baseline prediction over the training data
# MSE_tr = <FILL IN>
MSE_tr = square_error(S_tr, s_hat)
# Mean square error of the baseline prediction over the test data
# MSE_tst = <FILL IN>
MSE_tst = square_error(S_tst, s_hat)
print('Average square error in the training set (baseline method): {0}'.format(MSE_tr))
print('Average square error in the test set (baseline method): {0}'.format(MSE_tst))
"""
Explanation: for any input ${\bf x}$.
Exercise 1
Compute the mean square error over training and test sets, for the baseline estimation method.
End of explanation
"""
if sys.version_info.major == 2:
Test.assertTrue(np.isclose(MSE_tr, square_error(S_tr, s_hat)),'Incorrect value for MSE_tr')
Test.assertTrue(np.isclose(MSE_tst, square_error(S_tst, s_hat)),'Incorrect value for MSE_tst')
"""
Explanation: Note that in the previous piece of code, function 'square_error' can be used when the second argument is a number instead of a vector with the same length as the first argument. The value will be subtracted from each of the components of the vector provided as the first argument.
End of explanation
"""
# We implement unidimensional regression using the k-nn method
# In other words, the estimations are to be made using only one variable at a time
from scipy import spatial
var = 0 # pick a variable (e.g., any value from 0 to 8 for the STOCK dataset)
k = 1 # Number of neighbors
n_points = 1000 # Number of points in the 'x' axis (for representational purposes)
# For representational purposes, we will compute the output of the regression model
# in a series of equally spaced-points along the x-axis
grid_min = np.min([np.min(X_tr[:,var]), np.min(X_tst[:,var])])
grid_max = np.max([np.max(X_tr[:,var]), np.max(X_tst[:,var])])
X_grid = np.linspace(grid_min,grid_max,num=n_points)
def knn_regression(X1, S1, X2, k):
""" Compute the k-NN regression estimate for the observations contained in
the rows of X2, for the training set given by the rows in X1 and the
components of S1. k is the number of neighbours of the k-NN algorithm
"""
if X1.ndim == 1:
X1 = np.asmatrix(X1).T
if X2.ndim == 1:
X2 = np.asmatrix(X2).T
distances = spatial.distance.cdist(X1,X2,'euclidean')
neighbors = np.argsort(distances, axis=0, kind='quicksort', order=None)
closest = neighbors[range(k),:]
est_values = np.zeros([X2.shape[0],1])
for idx in range(X2.shape[0]):
est_values[idx] = np.mean(S1[closest[:,idx]])
return est_values
est_tst = knn_regression(X_tr[:,var], S_tr, X_tst[:,var], k)
est_grid = knn_regression(X_tr[:,var], S_tr, X_grid, k)
plt.plot(X_tr[:,var], S_tr,'b.',label='Training points')
plt.plot(X_tst[:,var], S_tst,'rx',label='Test points')
plt.plot(X_grid, est_grid,'g-',label='Regression model')
plt.axis('tight')
plt.legend(loc='best')
plt.show()
"""
Explanation: 3. Unidimensional regression with the $k$-nn method
The principles of the $k$-nn method are the following:
For each point where a prediction is to be made, find the $k$ closest neighbors to that point (in the training set)
Obtain the estimation averaging the labels corresponding to the selected neighbors
The number of neighbors is a hyperparameter that plays an important role in the performance of the method. You can test its influence by changing $k$ in the following piece of code. In particular, you can start with $k=1$ and observe the effect of increasing the value of $k$.
End of explanation
"""
var = 0
k_max = 60
k_max = np.minimum(k_max, X_tr.shape[0]) # k_max cannot be larger than the number of samples
#Be careful with the use of range, e.g., range(3) = [0,1,2] and range(1,3) = [1,2]
MSEk_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:,var],k))
for k in range(1, k_max+1)]
MSEk_tst = [square_error(S_tst,knn_regression(X_tr[:,var], S_tr, X_tst[:,var],k))
for k in range(1, k_max+1)]
kgrid = np.arange(1, k_max+1)
plt.plot(kgrid, MSEk_tr,'bo', label='Training square error')
plt.plot(kgrid, MSEk_tst,'ro', label='Test square error')
plt.xlabel('$k$')
plt.ylabel('Square Error')
plt.axis('tight')
plt.legend(loc='best')
plt.show()
"""
Explanation: 3.1. Evolution of the error with the number of neighbors ($k$)
We see that a small $k$ results in a regression curve that exhibits many and large oscillations. The curve is capturing any noise that may be present in the training data, and <i>overfits</i> the training set. On the other hand, picking a too large $k$ (e.g., 200) the regression curve becomes too smooth, averaging out the values of the labels in the training set over large intervals of the observation variable.
The next code illustrates this effect by plotting the average training and test square errors as a function of $k$.
End of explanation
"""
k_max = 20
var_performance = []
k_values = []
for var in range(X_tr.shape[1]):
MSE_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:, var], k))
for k in range(1, k_max+1)]
MSE_tst = [square_error(S_tst, knn_regression(X_tr[:,var], S_tr, X_tst[:, var], k))
for k in range(1, k_max+1)]
MSE_tr = np.asarray(MSE_tr)
MSE_tst = np.asarray(MSE_tst)
# We select the variable associated to the value of k for which the training error is minimum
pos = np.argmin(MSE_tr)
k_values.append(pos + 1)
var_performance.append(MSE_tst[pos])
plt.stem(range(X_tr.shape[1]), var_performance, use_line_collection=True)
plt.title('Results of unidimensional regression ($k$NN)')
plt.xlabel('Variable')
plt.ylabel('Test MSE')
plt.figure(2)
plt.stem(range(X_tr.shape[1]), k_values, use_line_collection=True)
plt.xlabel('Variable')
plt.ylabel('$k$')
plt.title('Selection of the hyperparameter')
plt.show()
"""
Explanation: As we can see, the error initially decreases, achieving a minimum (in the test set) for some finite value of $k$ ($k\approx 10$ for the STOCK dataset). Increasing the value of $k$ beyond that value results in poorer performance.
Exercise 2
Analize the training MSE for $k=1$. Why is it smaller than for any other $k$? Under which conditions will it be exactly zero?
Exercise 3
Modify the code above to visualize the square error from $k=1$ up to $k$ equal to the number of training instances. Can you relate the square error of the $k$-NN method with that of the baseline method for certain value of $k$?
3.1. Influence of the input variable
Having a look at the scatter plots, we can observe that some observation variables seem to have a more clear relationship with the target value. Thus, we can expect that not all variables are equally useful for the regression task. In the following plot, we carry out a study of the performance that can be achieved with each variable.
Note that, in practice, the test labels are not available for the selection of hyperparameter
$k$, so we should be careful about the conclusions of this experiment. A more realistic approach will be studied later when we introduce the concept of model validation.
End of explanation
"""
k_max = 20
MSE_tr = [square_error(S_tr, knn_regression(X_tr, S_tr, X_tr, k)) for k in range(1, k_max+1)]
MSE_tst = [square_error(S_tst, knn_regression(X_tr, S_tr, X_tst, k)) for k in range(1, k_max+1)]
plt.plot(np.arange(k_max)+1, MSE_tr,'bo',label='Training square error')
plt.plot(np.arange(k_max)+1, MSE_tst,'ro',label='Test square error')
plt.xlabel('k')
plt.ylabel('Square error')
plt.legend(loc='best')
plt.show()
"""
Explanation: 4. Multidimensional regression with the $k$-nn method
In the previous subsection, we have studied the performance of the $k$-nn method when using only one variable. Doing so was convenient, because it allowed us to plot the regression curves in a 2-D plot, and to get some insight about the consequences of modifying the number of neighbors.
For completeness, we evaluate now the performance of the $k$-nn method in this dataset when using all variables together. In fact, when designing a regression model, we should proceed in this manner, using all available information to make as accurate an estimation as possible. In this way, we can also account for correlations that might be present among the different observation variables, and that may carry very relevant information for the regression task.
For instance, in the STOCK dataset, it may be that the combination of the stock values of two airplane companies is more informative about the price of the target company, while the value for a single company is not enough.
<small> Also, in the CONCRETE dataset, it may be that for the particular problem at hand the combination of a large proportion of water and a small proportion of coarse grain is a clear indication of certain compressive strength of the material, while the proportion of water or coarse grain alone are not enough to get to that result.</small>
End of explanation
"""
### This fragment of code runs k-nn with M-fold cross validation
# Parameters:
M = 5 # Number of folds for M-cv
k_max = 40 # Maximum value of the k-nn hyperparameter to explore
# First we compute the train error curve, that will be useful for comparative visualization.
MSE_tr = [square_error(S_tr, knn_regression(X_tr, S_tr, X_tr, k)) for k in range(1, k_max+1)]
## M-CV
# Obtain the indices for the different folds
n_tr = X_tr.shape[0]
permutation = np.random.permutation(n_tr)
# Split the indices in M subsets with (almost) the same size.
set_indices = {i: [] for i in range(M)}
i = 0
for pos in range(n_tr):
set_indices[i].append(permutation[pos])
i = (i+1) % M
# Obtain the validation errors
MSE_val = np.zeros((1,k_max))
for i in range(M):
val_indices = set_indices[i]
# Take out the val_indices from the set of indices.
tr_indices = list(set(permutation) - set(val_indices))
MSE_val_iter = [square_error(S_tr[val_indices],
knn_regression(X_tr[tr_indices, :], S_tr[tr_indices],
X_tr[val_indices, :], k))
for k in range(1, k_max+1)]
MSE_val = MSE_val + np.asarray(MSE_val_iter).T
MSE_val = MSE_val/M
# Select the best k based on the validation error
k_best = np.argmin(MSE_val) + 1
# Compute the final test MSE for the selected k
MSE_tst = square_error(S_tst, knn_regression(X_tr, S_tr, X_tst, k_best))
plt.plot(np.arange(k_max)+1, MSE_tr, 'bo', label='Training square error')
plt.plot(np.arange(k_max)+1, MSE_val.T, 'go', label='Validation square error')
plt.plot([k_best, k_best], [0, MSE_tst],'r-')
plt.plot(k_best, MSE_tst,'ro',label='Test error')
plt.legend(loc='best')
plt.show()
"""
Explanation: In this case, we can check that the average test square error is much lower than the error that was achieved when using only one variable, and also far better than the baseline method. It is also interesting to note that in this particular case the best performance is achieved for a small value of $k$, with the error increasing for larger values of the hyperparameter.
Nevertheless, as we discussed previously, these results should be taken carefully. How would we select the value of $k$, if test labels are (obviously) not available for model validation?
5. Hyperparameter selection via cross-validation
5.1. Generalization
An inconvenience of the $k$-nn method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we kept the value of $k$ that minimized the square error on the training set. However, we also noticed that the location of the minimum is not necessarily the same from the perspective of the test data. Ideally, we would like the designed regression model to work as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as <b>generalization</b>.
Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are some strategies that try to guarantee a correct generalization of the model. One of such approaches is known as <b>cross-validation</b>
5.2. Cross-validation
Since using the test labels during the training phase is not allowed (they should be kept aside to simulate the future application of the regression model on unseen patterns), we need to figure out some way to improve our estimation of the hyperparameter that requires only training data. Cross-validation allows us to do so by taking the following steps:
Split the training data into several (generally non-overlapping) subsets. If we use $M$ subsets, the method is referred to as $M$-fold cross-validation. If we consider each pattern a different subset, the method is usually referred to as leave-one-out (LOO) cross-validation.
Carry out the training of the system $M$ times. For each run, use a different partition as a <i>validation</i> set, and use the remaining partitions as the training set. Evaluate the performance for different choices of the hyperparameter (i.e., for different values of $k$ for the $k$-NN method).
Average the validation error over all partitions, and pick the hyperparameter that provided the minimum validation error.
Rerun the algorithm using all the training data, keeping the value of the parameter that came out of the cross-validation process.
<img src="https://chrisjmccormick.files.wordpress.com/2013/07/10_fold_cv.png">
End of explanation
"""
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Fabian Pedregosa <fabian.pedregosa@inria.fr>
#
# License: BSD 3 clause (C) INRIA
###############################################################################
# Generate sample data
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors
np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
y = np.sin(X).ravel()
# Add noise to targets
y[::5] += 1 * (0.5 - np.random.rand(8))
###############################################################################
# Fit regression model
n_neighbors = 5
for i, weights in enumerate(['uniform', 'distance']):
knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
y_ = knn.fit(X, y).predict(T)
plt.subplot(2, 1, i + 1)
plt.scatter(X, y, c='k', label='data')
plt.plot(T, y_, c='g', label='prediction')
plt.axis('tight')
plt.legend()
plt.title("KNeighborsRegressor (k = %i, weights = '%s')" % (n_neighbors,
weights))
plt.show()
"""
Explanation: Exercise 4
Modify the previous code to use only one of the variables in the input dataset
- Following a cross-validation approach, select the best value of $k$ for the $k$-nn based in variable 0 only.
- Compute the test error for the selected value of $k$.
6. Scikit-learn implementation
In practice, most well-known machine learning methods are implemented and available for python. Probably, the most complete module for machine learning tools is <a href=http://scikit-learn.org/stable/>Scikit-learn</a>. The following piece of code uses the method
KNeighborsRegressor
available in Scikit-learn. The example has been taken from <a href=http://scikit-learn.org/stable/auto_examples/neighbors/plot_regression.html>here</a>. As you can check, this routine allows us to build the estimation for a particular point using a weighted average of the targets of the neighbors:
To obtain the estimation at a point ${\bf x}$:
Find $k$ closest points to ${\bf x}$ in the training set
Average the corresponding targets, weighting each value according to the distance of each point to ${\bf x}$, so that closer points have a larger influence in the estimation.
End of explanation
"""
|
eneskemalergin/Data_Structures_and_Algorithms | Chapter2/2-Arrays.ipynb | gpl-3.0 | from array_class import Array1D
import random
# Array valueList created with size of 100
valueList = Array1D(100)
# Filling the array with random floating-point values
for i in range(len(valueList)):
valueList[i] = random.random()
# Print the values, one per line
for value in valueList:
print(value)
"""
Explanation: Arrays
The array structure is the most basic structure for storing data. In this section we will learn the basics of arrays and implement a one-dimensional array structure. Then we will create a two-dimensional array.
The Array Structure
A one-dimensional array is composed of multiple sequential elements stored in contiguous bytes of memory, and it allows random access to the individual elements.
Elements in the array can be accessed directly by the index number assigned when the array is created.
The array structure is very similar to Python's list structure; however, there are two major differences between the array and the list.
First, an array supports a limited number of operations, whereas a list offers a large collection of operations.
Second, an array's size cannot be changed after it is created, but a list is flexible.
An array is the best choice when the number of elements is known up front.
The Array Abstract Data Type
The array structure is found in most programming languages as a primitive type. Now we will define an Array ADT to represent a one-dimensional array for use in Python that works similarly to arrays found in other languages.
Array ADT Definition:
Array(size) : Create one-dimensional array consisting of size elements with each element initially set to None. size must be greater than zero.
length() : Returns the length or number of elements in the array
getitem(index) : Returns the value stored in the array at element position index.(Must be in valid range.)
setitem(index, value) : Modifies the contents of the array element at position index to contain value.
clear(value) : Clears the array by setting every element to value
iterator() : Creates and returns an iterator that can be used to traverse the elements of the array.
In our ADT definition we start from the basic hardware-level array and make it more abstract by adding an iterator and operations to obtain the size, set values, and so on.
Now that we have created our ADT in the file array_class.py, let's use it to fill our array with random values and print them, one per line.
End of explanation
"""
from array_class import Array1D
# Array theCounters created with size of 127 (ASCII characters)
theCounters = Array1D(127)
# theCounters elements initialized to 0
theCounters.clear(0)
# Open the text file for reading and extract each line from the file
# and iterate over each character in the line.
theFile = open('textfile.txt', 'r')
for line in theFile:
for letter in line:
code = ord(letter)
theCounters[code] += 1
# Close the file
theFile.close()
# Print the results. The uppercase letters have ASCII values in the range 65..90
# the lowercase letters are in the range 97..122.
for i in range(26):
print("%c - %4d %c - %4d" % (chr(65+i), theCounters[65+i], chr(97+i), theCounters[97+i]))
"""
Explanation: Now that our Array ADT is working like a charm, let's use it in a somewhat more realistic application: counting the number of occurrences of each letter in a text file using the Array ADT:
End of explanation
"""
import ctypes
ArrayType = ctypes.py_object * 5
slots = ArrayType()
slots[0]
"""
Explanation: To implement our array ADT we will use the built-in module ctypes, which gives us the opportunity to implement a hardware-supported array structure.
The ctypes module provides a technique for creating arrays that can store references to Python objects.
End of explanation
"""
slots[1] = 12
slots[3] = 44
slots[4] = 59
slots[3] = None
print(slots[1])
print(slots[3])
print(slots[2])
"""
Explanation: Now if we try to print the value of the first item in the recently created array slots, we get a ValueError, which says that no value has been stored at the referenced position. But if we assign values...
End of explanation
"""
from array_class import Array2D
filename = "StudentGrades.txt"
# Open the text file for reading.
gradeFile = open(filename, "r")
# Extract the first two values which indicate the size of the array.
numStudents = int(gradeFile.readline())
numExams = int(gradeFile.readline())
# Create the 2-D array to store the grades.
examGrades = Array2D(numStudents, numExams)
# Extract the grades from the remaining lines.
i = 0
for student in gradeFile:
grades = student.split()
# print grades
for j in range(numExams):
examGrades[i,j] = int(grades[j])
i += 1
# Close the text file.
gradeFile.close()
# Compute each student's average exam grade.
for i in range(numStudents):
# Tally the exam grades for the ith student.
total = 0
for j in range(numExams):
total += examGrades[i,j]
    # Compute the average for the ith student.
examAvg = total / numExams
print("%2d: %6.2f" % (i+1, examAvg))
"""
Explanation: The size of the array never changes, which is really useful when you know up front the size of the array you will need. For the class definition of the array ADT, check array_class.py.
Two-Dimensional Arrays
Some problems require more than one dimension; in such cases, two-dimensional arrays are handy.
We will define an Array2D ADT for creating 2-D arrays, reusing some features directly from the Array1D ADT.
You can find the implementation in the array_class.py file.
Now let's talk about the usage of 2-D arrays. The following snippet shows how to read a text file and store the grades of students in a 2-D array.
End of explanation
"""
|
google/starthinker | colabs/bucket.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: Storage Bucket
Create and permission a bucket in Storage.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes; this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'bucket_bucket':'', # Name of Google Cloud Bucket to create.
'bucket_emails':'', # Comma separated emails.
'bucket_groups':'', # Comma separated groups.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter Storage Bucket Recipe Parameters
Specify the name of the bucket and who will have owner permissions.
Existing buckets are preserved.
Adding a permission to the list will update the permissions but removing them will not.
You have to manually remove grants.
Modify the values below for your use case (this can be done multiple times), then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bucket':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'bucket':{'field':{'name':'bucket_bucket','kind':'string','order':2,'default':'','description':'Name of Google Cloud Bucket to create.'}},
'emails':{'field':{'name':'bucket_emails','kind':'string_list','order':3,'default':'','description':'Comma separated emails.'}},
'groups':{'field':{'name':'bucket_groups','kind':'string_list','order':4,'default':'','description':'Comma separated groups.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute Storage Bucket
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
tritemio/multispot_paper | out_notebooks/usALEX-5samples-PR-raw-AND-gate-out-12d.ipynb | mit | ph_sel_name = "None"
data_id = "12d"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:36:27 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through the 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the alternation period, and the donor and acceptor excitation windows:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurements duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using an automatic threshold:
End of explanation
"""
d_orig = d
d = bext.burst_search_and_gate(d, m=10, F=7)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
bandwidth = 0.03
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_fret
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
"""
Explanation: Burst search and selection
End of explanation
"""
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
# ds_fret.add(E_fitter = E_fitter)
# dplot(ds_fret, hist_fret_kde, weights='size', bins=np.r_[-0.2:1.2:bandwidth], bandwidth=bandwidth);
# plt.axvline(E_pr_fret_kde, ls='--', color='r')
# print(ds_fret.ph_sel, E_pr_fret_kde)
"""
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
ds_fret.fit_E_m(weights='size')
"""
Explanation: Weighted mean of $E$ of each burst:
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
"""
Explanation: Gaussian fit (no weights):
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
"""
Explanation: Gaussian fit (using burst size as weights):
End of explanation
"""
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
"""
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
"""
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
"""
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
"""
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
"""
sample = data_id
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_all n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'nt_mean\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-AND-gate.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
|
JamesSample/enviro_mod_notes | notebooks/07_GLUE.ipynb | mit | # Choose true params
a_true = 3
b_true = 6
sigma_true = 2
n = 100 # Length of data series
# For the independent variable, x, we will choose n values equally spaced
# between 0 and 10
x = np.linspace(0, 10, n)
# Calculate the dependent (observed) values, y
y = a_true*x + b_true + np.random.normal(loc=0, scale=sigma_true, size=n)
# Plot
plt.plot(x, y, 'ro')
plt.plot(x, a_true*x + b_true, 'k-')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Observed data')
plt.show()
"""
Explanation: Generalised Likelihood Uncertainty Estimation (GLUE)
GLUE is a framework for model calibration and uncertainty estimation that has become popular in recent years, especially within the UK hydrological community. The approach is well documented in the academic literature (Beven, 2006, for example, provides a comprehensive overview) but it is also controversial, in the sense that many authors consider the method to be both statistically incoherent and computationally inefficient.
The more I learn, the more I'm inclined to agree with those who feel GLUE is not an appropriate tool for model calibration and uncertainty estimation. For anyone who has yet to make a decision, I strongly recommend reading the literature on the subject, including the exchanges between leading proponents on both sides of the argument. For example:
Mantovan & Todino (2006) then Beven et al. (2007) then Mantovan & Todino (2007) <br><br>
Clark et al. (2011) then Beven et al. (2012) then Clark et al. (2012)
Two of the reasons GLUE has become so popular are that it is conceptually simple and easy to code. Such advantages are not easily ignored, especially among environmental scientists who are typically neither professional statisticians nor computer programmers. Although most would-be modellers are aware of some debate in the literature, many lack the statistical background to be able to follow the arguments in detail. What's more, many understandably take the view that, if the issue is still a matter for discussion between statisticians, either method will probably be adequate for a first foray into environmental modelling.
The aim of this notebook is to provide an introduction to some of the key issues, and to make it easier to follow the more detailed assessments in the academic literature. We will begin by comparing the frequentist, Bayesian and GLUE approaches to simple linear regression.
I will assume familiarity with frequentist Ordinary Least Squares (OLS) regression, and if you've worked through the previous notebooks you should also have a basic understanding of formal Bayesian inference and the differences between e.g. Monte Carlo and MCMC sampling. I'll try to provide a reasonable overview of GLUE, but if you're not familiar with the technique already I'd recommend reading e.g. Beven (2006) for a more complete summary.
A much more comprehensive and detailed investigation of the limitations of GLUE is provided by Stedinger et al. (2008).
Three approaches compared
We will consider the following:
Frequentist OLS regression. This is just the usual approach to linear regression that most people are familiar with. <br><br>
Bayesian MCMC. A formal Bayesian approach, exactly the same as introduced in section 7 of notebook 4. <br><br>
Monte Carlo GLUE. A "limits of acceptability" approach using an informal (or pseudo-) likelihood function. The most common implementation of GLUE uses Monte Carlo sampling, similar to some of the techniques described in notebook 3.
It's worth emphasising straight away that using numerical simulation approaches such as Bayesian MCMC or GLUE to solve a simple linear regression problem is a case of using a very large sledgehammer to crack a very small nut. It is extremely unlikely that you would ever use either of these techniques for this kind of analysis in practice. However, if an approach is going to generalise well to more complex problems, it's often a good idea to check it works for simple problems too.
Simple linear regression is just a basic form of parameter inference: we want to infer the slope and intercept of our regression line, subject to a particular error model. The simplest form of linear regression assumes independent and identically distributed Guassian erros with mean zero.
$$y = ax + b + \epsilon \qquad where \qquad \epsilon \sim \mathcal N(0, \sigma_\epsilon)$$
We will start by generating some synthetic data based on the equation above and we'll then use the three methods to estimate the regression parameters and associated confidence intervals. The reason for doing this is to check that the two more complicated approaches gives results that are broadly consistent with the simple frequentist method (which is very well established).
1. Generate synthetic data
End of explanation
"""
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
# Add intercept for model
X = sm.add_constant(x)
# Fit
model = sm.OLS(y,X)
result = model.fit()
# Regression summary
print result.summary()
print '\n'
# Key results as dataframe
freq_df = pd.DataFrame(data={'a_freq':[result.conf_int()[1,0],
result.params[1],
result.conf_int()[1,1]],
'b_freq':[result.conf_int()[0,0],
result.params[0],
result.conf_int()[0,1]],
'sigma_freq':[np.nan,
(result.scale)**0.5,
np.nan]},
index=['2.5%', '50%', '97.5%'])
print freq_df.T
"""
Explanation: 2. Frequentist linear regression
There are several ways of performing simple linear regression, but the most commonly used is the OLS approach, which minimises the sum of squared model residuals. OLS regrssion under the assumption of independent and identically distributed (iid) Gaussian errors is so widely used that many software packages make the analysis very easy - so easy, in fact, that people often forget to check whether the iid assumption has actually been satisfied. In the examples below we won't check either, but that's because we know our test data was generated using iid errors, so we don't need to.
2.1. Fit the model
We'll use statsmodels to perform the regression in Python, including estimating 95% confidence intervals for the slope and intercept ($a$ and $b$, respectively). We must also estimate the error standard deviation, $\sigma_\epsilon$ (we'll ignore the confidence interval for this for now, because it's not provided by statsmodels by default).
End of explanation
"""
# Plot predicted
prstd, low, up = wls_prediction_std(result, alpha=0.05) # 95% interval
plt.fill_between(x, low, up, color='r', alpha=0.3)
plt.plot(x, result.fittedvalues, 'r-', label='Estimated')
plt.title('Frequentist')
# Plot true
plt.plot(x, y, 'bo')
plt.plot(x, a_true*x+b_true, 'b--', label='True')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
"""
Explanation: 2.2. Plot the result
We can now plot the median regression line plus the 95% confidence interval around it.
End of explanation
"""
# Data frame of lower CI, upper CI and observations
cov_df = pd.DataFrame({'low':low,
'obs':y,
'up':up})
# Are obs within CI?
cov_df['In_CI'] = ((cov_df['low'] < cov_df['obs']) &
(cov_df['up'] > cov_df['obs']))
# Coverage
cov = 100.*cov_df['In_CI'].sum()/len(cov_df)
print 'Coverage: %.1f%%' % cov
"""
Explanation: The estimated "best-fit" line is very close to the true one. Also, if our 95% confidence interval is correct, we should expect roughly 95% of the observations to lie within the shaded area. This proportion is often called the "coverage".
2.3. Estimate coverage
End of explanation
"""
from scipy.stats import norm
def log_likelihood(params, x, obs):
""" Calculate log likelihood assuming iid Gaussian errors.
"""
# Extract parameter values
a_est, b_est, sigma_est = params
# Calculate deterministic results with these parameters
sim = a_est*x + b_est
# Calculate log likelihood
ll = np.sum(norm(sim, sigma_est).logpdf(obs))
return ll
def log_prior(params):
""" Calculate log prior.
"""
# Extract parameter values
a_est, b_est, sigma_est = params
# If all parameters are within allowed ranges, return a constant
# (anything will do - I've used 0 here)
if ((a_min <= a_est < a_max) and
(b_min <= b_est < b_max) and
(sigma_min <= sigma_est < sigma_max)):
return 0
# Else the parameter set is invalid (probability = 0; log prob = -inf)
else:
return -np.inf
def log_posterior(params, x, obs):
""" Calculate log posterior.
"""
# Get log prior prob
log_pri = log_prior(params)
# Evaluate log likelihood if necessary
if np.isfinite(log_pri):
log_like = log_likelihood(params, x, obs)
# Calculate log posterior
return log_pri + log_like
else:
# Log prior is -inf, so log posterior is -inf too
return -np.inf
"""
Explanation: The coverage from the frequentist approach is correct, as expected.
3. Bayesian linear regression
For this problem, the Bayesian approach is significantly more complicated than the frequentist one. One of the real benefits of the Bayesian method, though, is its generality, i.e. it doesn't necessarily become any more complicated when applied to challenging problems. As demonstrated in notebooks 4 and 6, the Bayesian approach is essentially the same regardless of whether you're performing simple linear regression or calibrating a hydrological model. It's worth bearing this in mind when working through the following sections.
3.1. Define the likelihood, prior and posterior
The likelihood, prior and posterior are defined in exactly the same way as in section 7 of notebook 4. Note that for the likelihood function we're required to explicitly define an error structure. This was not necessary for the frequentist approach above because statsmodels.api.OLS implicitly assumes iid Gaussian errors. For more complex error schemes, we'd need to specify the error structure for the frequentist analysis too.
End of explanation
"""
a_min, a_max = -10, 10
b_min, b_max = -10, 10
sigma_min, sigma_max = 0, 10
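As a quick sanity check (not part of the original notebook), the scipy-based log-likelihood defined earlier can be verified against the analytic iid-Gaussian form $\ell = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_i (y_i - \hat y_i)^2$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x_chk = np.linspace(0, 10, 50)
obs = 3 * x_chk + 6 + rng.normal(0, 2, size=50)

a_est, b_est, sigma_est = 3.0, 6.0, 2.0
sim = a_est * x_chk + b_est

# Sum of per-point Gaussian log densities, as in log_likelihood above
ll_scipy = np.sum(norm(sim, sigma_est).logpdf(obs))

# Analytic iid-Gaussian log likelihood
n_obs = len(obs)
ll_analytic = (-0.5 * n_obs * np.log(2 * np.pi * sigma_est**2)
               - np.sum((obs - sim)**2) / (2 * sigma_est**2))
```

The two values agree to floating-point precision, confirming that summing `logpdf` terms is exactly the Gaussian log likelihood being maximised later by `find_map`.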
"""
Explanation: 3.2. Define limits for uniform priors
In the log_prior function above we've assumed uniform priors, just as we have in all the previous notebooks (with the finite ranges set below, these priors are in fact proper). Below we set allowable prior ranges for $a$, $b$ and $\sigma_\epsilon$.
End of explanation
"""
from scipy import optimize
def neg_log_posterior(params, x, obs):
""" Negative of log posterior.
"""
return -log_posterior(params, x, obs)
def find_map(init_guess, x, obs):
""" Find max of posterior.
init_guess [a, b, sigma]
"""
# Run optimiser
param_est = optimize.fmin(neg_log_posterior,
init_guess,
args=(x, obs))
return param_est
# Guess some starting values for [a, b, sigma]
param_guess = [1, 1, 1]
# Run optimiser
param_est = find_map(param_guess, x, y)
# Print results
print '\n'
for idx, param in enumerate(['a', 'b', 'sigma',]):
print 'Estimated %s: %.2f.' % (param, param_est[idx])
"""
Explanation: 3.3. Find the MAP
The MAP is the maximum of the posterior distribution. It gives the most likely values for the model parameters ($a$, $b$ and $\sigma_\epsilon$) given our piors and the data. It also provides a good starting point for our MCMC analysis.
End of explanation
"""
import emcee, corner
# emcee parameters
n_dim = 3 # Number of parameters being calibrated
n_walk = 20 # Number of "walkers"/chains
n_steps = 200 # Number of steps per chain
n_burn = 100 # Length of burn-in to discard
def run_mcmc(n_dim, n_walk, n_steps, n_burn, param_opt, truths=None):
""" Sample posterior using emcee.
n_dim Number of parameters being calibrated
n_walk Number of walkers/chains (must be even)
n_steps Number of steps taken by each walker
n_burn Number of steps to discard as "burn-in"
param_opt Optimised parameter set from find_map()
truths True values (if known) for plotting
Produces plots of the chains and a 'corner plot' of the
marginal posterior distribution.
Returns an array of samples (with the burn-in discarded).
"""
# Generate starting locations for the chains by adding a small
# amount of Gaussian noise to optimised MAP
starting_guesses = [param_opt + 1e-4*np.random.randn(n_dim)
for i in range(n_walk)]
# Prepare to sample. The params are automatically passed to log_posterior
# as part of n_dim. "args" lists the other params that are also necessary
sampler = emcee.EnsembleSampler(n_walk, n_dim, log_posterior,
args=[x, y])
# Run sampler
pos, prob, state = sampler.run_mcmc(starting_guesses, n_steps)
# Print some stats. based on run properties
print '\n'
print 'Average acceptance fraction: ', np.mean(sampler.acceptance_fraction)
print 'Autocorrelation time: ', sampler.acor
# Get results
# Plot traces, including burn-in
param_labels = ['a', 'b', 'sigma']
fig, axes = plt.subplots(nrows=n_dim, ncols=1, figsize=(10, 10))
for idx, title in enumerate(param_labels):
axes[idx].plot(sampler.chain[:,:,idx].T, '-', color='k', alpha=0.3)
axes[idx].set_title(title, fontsize=20)
plt.subplots_adjust(hspace=0.5)
plt.show()
# Discard burn-in
samples = sampler.chain[:, n_burn:, :].reshape((-1, n_dim))
# Triangle plot
tri = corner.corner(samples,
labels=param_labels,
truths=truths,
quantiles=[0.025, 0.5, 0.975],
show_titles=True,
title_args={'fontsize': 24},
label_kwargs={'fontsize': 20})
return samples
samples = run_mcmc(n_dim, n_walk, n_steps, n_burn, param_est,
[a_true, b_true, sigma_true])
"""
Explanation: It's reassuring to see the MAP estimates are close to the true values. However, as we've discussed previously, these numbers aren't much use without an indication of uncertainty i.e. how well-constrained are these values, given our priors and the data? For a simple problem like this, there are much simpler ways of estimating uncertainty using a Bayesian approach than by running an MCMC analysis (see notebook 8, for example). Nevertheless, the MCMC approach is very general and we've used it a number of times previously, so for consistency we'll apply it here as well.
3.4. Run the MCMC
As before, we'll use emcee to draw samples from the posterior.
End of explanation
"""
# Print estimates and confidence intervals
mcmc_df = pd.DataFrame(data=samples, columns=['a_mcmc', 'b_mcmc', 'sigma_mcmc'])
print mcmc_df.describe(percentiles=[0.025, 0.5, 0.975]).ix[['2.5%', '50%', '97.5%']].T
print '\n'
print freq_df.T
"""
Explanation: Blue solid lines on the "corner plot" above indicate the true values, while the vertical dotted lines on the histograms mark the 2.5%, 50% and 97.5% quantiles for the parameter estimates. In all cases, the true values lie well within the 95% credible intervals (a "credible interval" is the Bayesian equivalent of a frequentist "confidence interval").
3.6. Get the confidence intervals
As with the frequentist analysis, we can also plot our median simulation and the 95% credible interval on top of the observed data. First, we'll extract some key values into a data frame that we can compare with the frequentist results.
End of explanation
"""
# Store output data in lists
conf = []
# Pick parameter sets at random from the converged chains
for a, b, sigma in samples[np.random.randint(len(samples), size=1000)]:
# Simulate values
sim = a*x + b + norm.rvs(loc=0, scale=sigma, size=n)
df = pd.DataFrame(data={'Sim':sim})
# Add to conf
conf.append(df)
# Concatenate results
conf = pd.concat(conf, axis=1)
# Get 2.5 and 97.5 percentiles for plotting
conf = conf.T.describe(percentiles=[0.025, 0.5, 0.975]).T[['2.5%', '50%', '97.5%']]
# Plot predicted
plt.fill_between(x, conf['2.5%'], conf['97.5%'], color='r', alpha=0.3)
plt.plot(x, conf['50%'], 'r-', label='Estimated')
plt.title('Bayesian')
# Plot true line
plt.plot(x, y, 'bo')
plt.plot(x, a_true*x+b_true, 'b--', label='True')
plt.legend(loc='best')
plt.show()
"""
Explanation: The Bayesian and frequentist results are very similar. We can also sample from our MCMC simulations to derive credible intervals for plotting.
End of explanation
"""
# Add observations to df
conf['obs'] = y
# Are obs within CI?
conf['In_CI'] = ((conf['2.5%'] < conf['obs']) &
(conf['97.5%'] > conf['obs']))
# Coverage
cov = 100.*conf['In_CI'].sum()/len(conf)
print 'Coverage: %.1f%%' % cov
"""
Explanation: The edges of the credible interval are a little jagged due to our limited numerical sampling, but if we ran the chains for longer and used more samples to construct the intervals, we could get a smoother result. Nonetheless, it's pretty obvious that this interval is essentially identical to the one from the frequentist analysis.
3.7. Get the coverage
As above, we can also calculate the coverage, which should be roughly 95%.
End of explanation
"""
def nash_sutcliffe(params, x, obs):
""" Nash-Sutcliffe efficiency.
"""
# Extract parameter values
a_est, b_est = params
# Run simulation
sim = a_est*x + b_est
# NS
num = np.sum((sim - obs)**2)
denom = np.sum((obs - obs.mean())**2)
ns = 1 - (num/denom)
return [ns, sim]
"""
Explanation: 4. GLUE
The GLUE methodology is a little different. First of all, GLUE typically makes use of informal (or pseudo-) likelihood functions, which do not explicitly consider the error structure between the model output and the observations. Within the GLUE framework, it is permissible to use any scoring metric (or combination of metrics) to evaluate model performance, with the emphasis focusing less on what is statistically rigorous and more on what is physically meaningful. For example, it is very common to see GLUE analyses using the Nash-Sutcliffe efficiency as an indicator of model performance. GLUE also takes what is often called a "limits of acceptability" approach, requiring the user to define a threshold for their chosen metric that distinguishes between plausible and implausible model simulations.
The methodology usually goes something like this:
Choose a metric (or metrics) to indicate model performance. Skill scores such as Nash-Sutcliffe are very commonly used. <br><br>
Set a threshold for the chosen skill score above which model simulations will be deemed to be plausible. These plausible simulations are usually termed "behavioural" within the GLUE framework. <br><br>
Define prior distributions for the model's parameters. These are usually (but not necessarily) taken to be uniform, just like the ones we used above for the Bayesian analsysis. <br><br>
Sample from the pseudo-posterior
$$P_p(\theta|D) \propto P_p(D|\theta)P(\theta)$$
where the likelihood term is replaced by the pseudo-likelihood. Just like the Bayesian approach, the sampling strategy can be any of those described in previous notebooks (e.g. Monte Carlo, MCMC etc.). However, the vast majority of GLUE analyses make use of simple Monte Carlo sampling i.e. draw a large random sample from the prior, then evaluate the pseudo-likelihood for each parameter set. <br><br>
Any parameter sets scoring below the threshold defined in step 2 are discarded; those scoring above the threshold are labelled "behavioural" and kept for further analysis. <br><br>
The behavioural parameter sets are weighted according to their skill score. The model simulations are then ranked from lowest to highest, and the normalised weights are accumulated to produce a cumulative distribution function (CDF). <br><br>
The CDF is used to define a 95% uncertainty interval or prediction limit for the model output.
Some key points to note are that:
The use of a pseudo-likelihood function means the pseudo-posterior is not a true probability distribution, so GLUE cannot be used to generate a marginal posterior distribution for each model parameter. The basic unit of consideration in GLUE is the parameter set. <br><br>
The prediction limits (or uncertainty intervals) identified by GLUE are subjective and have no clear statistical meaning. For example, they are not confidence bounds in any true statistical sense: the 95% confidence interval is not expected to include 95% of the observations.
We will discuss the strengths and limitations of GLUE below, but first we'll apply the method to solve our simple linear regression problem.
4.1. Define the pseudo-likelihood
The range of possible metrics for the pseudo-likelihood is huge. In this example we'll use the Nash-Sutcliffe efficiency, which is very commonly used with GLUE. Note that other metrics may perform better (see below), but a key "selling point" of the GLUE approach is that we shouldn't have to worry too much about our choice of goodness-of-fit measure.
End of explanation
"""
ns_min = 0.7
n_samp = 4000
"""
Explanation: 4.2. Set the behavioural threshold and sample size
We next need to set a behavioural threshold to separate plausible from implausible parameter sets. Choosing an appropriate threshold can be difficult, as it is rare for our skill score to have any direct physical relevance for our problem of interest (i.e. what is a "good" Nash-Sutcliffe score in the context of linear regression? What about for hydrology? etc.).
If we set our threshold too high, we will identify very few behavioural parameter sets; set it too low, and we risk classifying some poor simulations as "behavioural" and biasing our results. In practice, many published studies start off with a stringent behavioural threshold, but are then forced to relax it in order to find enough behavioural parameter sets to continue the analysis. This is sometimes argued to be an advantage, in the sense that GLUE allows rejection of all available models if none of them meet the pre-defined performance criteria.
For now, we'll try a threshold of $0.7$ and we'll investigate the effects of changing it later.
We also need to decide how many samples to draw from our prior. For this simple 2D example, Monte Carlo sampling should actually work OK, so we'll choose the same total number of samples as we used above in our MCMC analysis. Note, however, that for problems in a larger parameter space, we might need to draw a very large number of samples indeed using Monte Carlo methods to get a reasonable representation of the posterior.
End of explanation
"""
a_s = np.random.uniform(low=a_min, high=a_max, size=n_samp)
b_s = np.random.uniform(low=b_min, high=b_max, size=n_samp)
"""
Explanation: 4.3. Sample from the prior
One of the main advantages of Monte Carlo GLUE is that it is usually very easy to code (and to parallelise). Here we're drawing 4000 independent samples from our priors.
End of explanation
"""
def run_glue(a_s, b_s, n_samp, ns_min):
""" Run GLUE analysis.
Uses nash_sutcliffe() to estimate performance and returns
dataframes containing all "behavioural" parameter sets and
associated model output.
"""
# Store output
out_params = []
out_sims = []
# Loop over param sets
for idx in range(n_samp):
params = [a_s[idx], b_s[idx]]
# Calculate Nash-Sutcliffe
ns, sim = nash_sutcliffe(params, x, y)
# Store if "behavioural"
if ns >= ns_min:
params.append(ns)
out_params.append(params)
out_sims.append(sim)
# Build df
params_df = pd.DataFrame(data=out_params,
columns=['a', 'b', 'ns'])
assert len(params_df) > 0, 'No behavioural parameter sets found.'
# Number of behavioural sets
    print('Found %s behavioural sets out of %s runs.' % (len(params_df), n_samp))
# DF of behavioural simulations
sims_df = pd.DataFrame(data=out_sims)
return params_df, sims_df
params_df, sims_df = run_glue(a_s, b_s, n_samp, ns_min)
"""
Explanation: 4.4. Run GLUE
For each of the parameter sets drawn above, we run the model and calculate the Nash-Sutcliffe efficiency. If it's above the behavioural threshold we'll store that parameter set and the associated model output, otherwise we'll discard both.
End of explanation
"""
def weighted_quantiles(values, quantiles, sample_weight=None):
""" Modified from
http://stackoverflow.com/questions/21844024/weighted-percentile-using-numpy
NOTE: quantiles should be in [0, 1]
values array with data
quantiles array with desired quantiles
sample_weight array of weights (the same length as `values`)
Returns array with computed quantiles.
"""
# Convert to arrays
values = np.array(values)
quantiles = np.array(quantiles)
# Assign equal weights if necessary
if sample_weight is None:
sample_weight = np.ones(len(values))
# Otherwise use specified weights
sample_weight = np.array(sample_weight)
# Check quantiles specified OK
assert np.all(quantiles >= 0) and np.all(quantiles <= 1), 'quantiles should be in [0, 1]'
# Sort
sorter = np.argsort(values)
values = values[sorter]
sample_weight = sample_weight[sorter]
# Compute weighted quantiles
weighted_quantiles = np.cumsum(sample_weight) - 0.5 * sample_weight
weighted_quantiles /= np.sum(sample_weight)
return np.interp(quantiles, weighted_quantiles, values)
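A quick sanity check of the function above (restated here in condensed form so the snippet runs stand-alone): with equal weights it reproduces the ordinary median, while skewed weights pull the quantile toward the heavily weighted values:

```python
import numpy as np

def weighted_quantiles(values, quantiles, sample_weight=None):
    # Condensed restatement of the function defined above
    values = np.asarray(values, dtype=float)
    quantiles = np.asarray(quantiles, dtype=float)
    if sample_weight is None:
        sample_weight = np.ones(len(values))
    sample_weight = np.asarray(sample_weight, dtype=float)
    sorter = np.argsort(values)
    values, sample_weight = values[sorter], sample_weight[sorter]
    wq = (np.cumsum(sample_weight) - 0.5 * sample_weight) / np.sum(sample_weight)
    return np.interp(quantiles, wq, values)

# Equal weights: the weighted median equals the ordinary median
print(weighted_quantiles([1, 2, 3, 4], [0.5]))  # → [2.5]

# A heavy weight on the value 4 drags the median toward it
print(weighted_quantiles([1, 2, 3, 4], [0.5], sample_weight=[1, 1, 1, 10]))
```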
def plot_glue(params_df, sims_df):
""" Plot median simulation and confidence intervals for GLUE.
"""
# Get weighted quantiles for each point in x from behavioural simulations
weights = params_df['ns']
quants = [0.025, 0.5, 0.975]
# List to store output
out = []
# Loop over points in x
for col in sims_df.columns:
values = sims_df[col]
out.append(weighted_quantiles(values, quants, sample_weight=weights))
# Build df
glue_df = pd.DataFrame(data=out, columns=['2.5%', '50%', '97.5%'])
# Plot predicted
plt.fill_between(x, glue_df['2.5%'], glue_df['97.5%'], color='r', alpha=0.3)
plt.plot(x, glue_df['50%'], 'r-', label='Estimated')
plt.title('GLUE')
# Plot true line
plt.plot(x, y, 'bo')
plt.plot(x, a_true*x+b_true, 'b--', label='True')
plt.legend(loc='best')
plt.show()
return glue_df
glue_df = plot_glue(params_df, sims_df)
"""
Explanation: Note that with a two-dimensional parameter space and a Nash-Sutcliffe cut-off of $0.7$, only about $\frac{1}{20}$ of the model runs are classified as "behavioural". This fraction would decrease very rapidly if the parameter space became larger.
4.5. Estimate confidence intervals
Using just the behavioural parameter sets, we rank the model output and calculate weighted quantiles to produce the desired CDF.
End of explanation
"""
def glue_coverage(glue_df):
""" Prints coverage from GLUE analysis.
"""
# Add observations to df
glue_df['obs'] = y
# Are obs within CI?
glue_df['In_CI'] = ((glue_df['2.5%'] < glue_df['obs']) &
(glue_df['97.5%'] > glue_df['obs']))
# Coverage
cov = 100.*glue_df['In_CI'].sum()/len(glue_df)
    print('Coverage: %.1f%%' % cov)
glue_coverage(glue_df)
"""
Explanation: These results are clearly a bit different to the output from the Bayesian and frequentist analyses presented above. The predicted line is not as good a fit to the true data and the confidence interval is wider at the extremes than it is towards the middle. Nevertheless, this result seems superficially reasonable in the sense that it does not obviously contradict the output obtained from the other methods. Overall it is likely that, in a decision-making context, all these approaches would lead to broadly the same actions being taken.
4.6. Coverage
For consistency, we'll also calculate the coverage for GLUE, but note that GLUE confidence intervals are not expected to bracket the stated proportion of the observations (see above).
End of explanation
"""
ns_min = 0
params_df, sims_df = run_glue(a_s, b_s, n_samp, ns_min)
glue_df = plot_glue(params_df, sims_df)
glue_coverage(glue_df)
"""
Explanation: Based on the results so far, you might be thinking there's not much to choose between any of these approaches, but let's see what happens to the GLUE output if the behavioural threshold is adjusted.
4.7. Changing the behavioural threshold
The Nash-Sutcliffe score can take any value from $-\infty$ to $1$, with $0$ implying the model output is no better than taking the mean of the observations. What happens if we relax the behavioural threshold by setting it to $0$?
End of explanation
"""
ns_min = 0.9
params_df, sims_df = run_glue(a_s, b_s, n_samp, ns_min)
glue_df = plot_glue(params_df, sims_df)
glue_coverage(glue_df)
"""
Explanation: And what if we make the behavioural threshold more stringent, by setting it to $0.9$?
End of explanation
"""
import warnings
warnings.filterwarnings('ignore')
import os
import numpy as np
import xarray as xr
import dask
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
import holoviews as hv
hv.notebook_extension('matplotlib')
from landlab import RasterModelGrid
from landlab.components import FlowAccumulator, FastscapeEroder, LinearDiffuser, Lithology, LithoLayers
from landlab.plot import imshow_grid
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Introduction to the Lithology and LithoLayers objects
Lithology and LithoLayers are two Landlab components meant to make it easier to work with spatially variable lithology that produces spatially variable parameter values (e.g. stream power erodability or diffusivity).
This tutorial is meant for users who have some experience using Landlab components.
In this tutorial we will explore the creation of spatially variable lithology and its impact on the evolution of topography. After an introductory example that will let you see how LithoLayers works, we will work through two more complicated examples. In the first example, we use LithoLayers to erode either dipping layers or an anticline. Then we will use Lithology to create inverted topography.
We will use xarray to store and annotate our model output. While we won't extensively discuss the use of xarray, some background will be provided.
To start, we will import the necessary modules. A note: this tutorial uses the HoloViews package for visualization. This package is a great tool for dealing with multidimensional annotated data (e.g. an xarray dataset). If you get an error on import, consider updating dask (this is what the author needed to do in April 2018). You will also need to have the Bokeh and Matplotlib packages installed.
In testing we've seen some users have a warning raised related to the Matplotlib backend. In our testing it was OK to ignore these errors.
End of explanation
"""
mg = RasterModelGrid((10, 15))
z = mg.add_zeros('topographic__elevation', at='node')
"""
Explanation: Part 1: Creating layered rock
First we will create an instance of a LithoLayers to learn how this component works. Both LithoLayers and Lithology work closely with a Landlab ModelGrid, storing information about rock type at each grid node.
To create LithoLayers you need the following information:
A model grid that has the field 'topographic__elevation' already created.
A list of elevations, called 'layer_elevations', that the bottoms of your layers will pass through at a specified plan-view anchor point (the default anchor point is (x, y) = (0, 0)), and a list of rock type IDs that indicate the rock type of each layer. When a value in 'layer_elevations' is negative, that layer passes through the anchor point above the topographic surface. These layers will be created only where they extend below the topographic surface.
A dictionary of rock property attributes that maps a rock ID type to property values.
A functional form in x and y that defines the shape of your surface.
The use of this functional form makes it possible for any function of x and y to be passed to LithoLayers.
Both the Lithology and LithoLayers components then know the rock type ID of all the material in the 'block of rock' you have specified. This can be used to continuously know the value of specified rock properties at the topographic surface, even as the rock is eroded, uplifted, or new rock is deposited.
In this tutorial we will first make an example to help build intuition and then do two more complex examples. Most of the functionality of Lithology and LithoLayers is shown in this tutorial, but if you want to read the full component documentation for LithoLayers, it can be found here. Links to both components documentation can be found at the bottom of the tutorial.
First, we create a small RasterModelGrid with topography.
End of explanation
"""
layer_elevations = 5. * np.arange(-10, 10)
# we create a bottom layer that is very thick.
layer_elevations[-1] = layer_elevations[-2] + 100
"""
Explanation: Next we make our layer elevations. We will make 20 layers that are 5 meters thick. Note that here, as with most Landlab components, there are no default units. At the anchor point, half of the layers will be above the ground ('layer_elevations' will have negative values) and half will be below the ground ('layer_elevations' have positive values).
We will make this with the np.arange function. We will also make the bottom layer very thick so that we won't be able to erode through it.
End of explanation
"""
layer_ids = np.tile([0, 1, 2, 3], 5)
"""
Explanation: Next we create an array that represents our rock type ID values. We will create alternating layers of four rock types by making an array of repeating 0s, 1s, 2s, and 3s with the np.tile function.
End of explanation
"""
attrs = {'K_sp': {0: 0.0003, 1: 0.0001, 2: 0.0002, 3: 0.0004}}
"""
Explanation: Our dictionary containing rock property attributes has the following form:
End of explanation
"""
func = lambda x, y: x + (2. * y)
"""
Explanation: 'K_sp' is the property that we want to track through the layered rock; 0, 1, 2, and 3 are the rock type IDs; and 0.0003, 0.0001, 0.0002, and 0.0004 are the corresponding values of 'K_sp' for each rock type.
The rock type IDs are unique identifiers for each type of rock. A particular rock type may have many properties (e.g. 'K_sp', 'diffusivity', and more). You can either specify all the possible rock types and attributes when you instantiate the LithoLayers component, or you can add new ones with the lith.add_rock_type or lith.add_property built-in methods.
Finally, we define our function. Here we will use a lambda expression to create a small anonymous function. In this case we define a function of x and y that returns the value x + (2. * y). The LithoLayers component will check that this function is a function of two variables and that when passed two arrays of size number-of-nodes it returns an array of size number-of-nodes.
This means that planar rock layers will dip into the ground to the North-North-East. By changing this functional form, we can make more complicated rock layers.
End of explanation
"""
lith = LithoLayers(mg, layer_elevations, layer_ids, function=func, attrs=attrs)
"""
Explanation: Finally we construct our LithoLayers component by passing the correct arguments.
End of explanation
"""
imshow_grid(mg, 'rock_type__id', cmap='viridis')
"""
Explanation: LithoLayers will make sure that the model grid has at-node grid fields with the layer attribute names. In this case, this means that the model grid will now include a grid field called 'K_sp' and a field called 'rock_type__id'. We can plot these with the Landlab imshow_grid function.
End of explanation
"""
z -= 1.
dz_ad = 0.
lith.dz_advection=dz_ad
lith.run_one_step()
"""
Explanation: As you can see, we have layers that strike East-South-East. Since we can only see the surface expression of the layers, we can't infer the dip direction or magnitude from the plot alone.
If the topographic surface erodes, then you will want to update LithoLayers. Like most Landlab components, LithoLayers uses a run_one_step method to update.
Next we will erode the topography by decrementing the variable z, which points to the topographic elevation of our model grid, by an amount 1. In a landscape evolution model, this would typically be done by running the run_one_step method for each of the process components in the model. If the rock mass is being advected up or down by an external force (e.g. tectonic rock uplift), then the advection must be specified. The dz_advection argument can be a single value or an array of size number-of-nodes.
End of explanation
"""
imshow_grid(mg, 'rock_type__id', cmap='viridis')
"""
Explanation: We can re-plot the value of 'K_sp'. We will see that the location of the surface expression of the rock layers has changed. As we expect, the location has changed in a way that is consistent with layers dipping to the NNE.
End of explanation
"""
z += 1.
dz_ad = 0.
lith.dz_advection=dz_ad
lith.rock_id=0
lith.run_one_step()
"""
Explanation: Anytime material is added, LithoLayers or Lithology needs to know the type of rock that has been added. LithoLayers and Lithology cannot assume the correct rock type ID and thus require that the user specify it with the rock_id keyword argument. In the run_one_step function, both components will check to see if any deposition has occurred. If deposition occurs and this argument is not passed, then an error will be raised.
For example here we add 1 m of topographic elevation and do not advect the block of rock up or down. When we run lith.run_one_step we specify that the type of rock has id 0.
End of explanation
"""
imshow_grid(mg, 'rock_type__id', cmap='viridis', vmin=0, vmax=3)
"""
Explanation: When we plot the value of the rock type ID at the surface, we find that it is now all purple, the color of rock type zero.
End of explanation
"""
z += 2.
dz_ad = 0.
spatially_variable_rock_id = mg.ones('node')
spatially_variable_rock_id[mg.x_of_node > 6] = 2
lith.dz_advection=dz_ad
lith.rock_id=spatially_variable_rock_id
lith.run_one_step()
imshow_grid(mg, 'rock_type__id', cmap='viridis', vmin=0, vmax=3)
"""
Explanation: The value passed to the rock_id keyword argument can be either a single value (as in the second to last example) or an array of length number-of-nodes. This option permits a user to indicate that more than one type of rock is deposited in a single time step.
Next we will add a 2 m thick layer that is type 1 for x values less than or equal to 6 and type 2 for all other locations.
End of explanation
"""
ds = lith.rock_cube_to_xarray(np.arange(30))
hvds_rock = hv.Dataset(ds.rock_type__id)
%opts Image style(cmap='viridis') plot[colorbar=True]
hvds_rock.to(hv.Image, ['x', 'y'])
"""
Explanation: As you can see this results in the value of rock type at the surface being about half rock type 1 and about half rock type 2. Next we will create an xarray dataset that has 3D information about our Lithology to help visualize the layers in space. We will use the rock_cube_to_xarray method of the LithoLayers component.
We will then convert this xarray dataset into a HoloViews dataset so we can visualize the result.
As you can see the LithoLayers has a value of rock types 1 and 2 at the surface, then a layer of 0 below, and finally changes to alternating layers.
End of explanation
"""
%opts Image style(cmap='viridis') plot[colorbar=True, invert_yaxis=True]
hvds_rock.to(hv.Image, ['x', 'z'])
"""
Explanation: The slider allows us to change the depth below the topographic surface.
We can also plot the cube of rock created with LithoLayers as a cross section. In the cross section we can see the top two layers we made by depositing rock and then dipping layers of alternating rock types.
End of explanation
"""
# Parameters that control the size and shape of the model grid
number_of_rows = 50
number_of_columns = 50
dx = 1
# Parameters that control the LithoLayers
# the layer shape function
func = lambda x, y: (0.5 * x)**2 + (0.5 * y)**2
# the layer thicknesses
layer_thickness = 50.
# the location of the anchor point
x0 = 25
y0 = 25
# the resolution at which you sample to create the plan-view and cross-section view figures.
sample_depths = np.arange(0, 30, 1)
# create the model grid
mg = RasterModelGrid((number_of_rows, number_of_columns), dx)
z = mg.add_zeros('topographic__elevation', at='node')
# set up LithoLayers inputs
layer_ids = np.tile([0, 1, 2, 3], 5)
layer_elevations = layer_thickness * np.arange(-10, 10)
layer_elevations[-1] = layer_elevations[-2] + 100
attrs = {'K_sp': {0: 0.0003, 1: 0.0001, 2: 0.0002, 3: 0.0004}}
# create LithoLayers
lith = LithoLayers(mg,
layer_elevations,
layer_ids,
x0=x0,
y0=y0,
function=func,
attrs=attrs)
# deposit and erode
dz_ad = 0.
z -= 1.
lith.dz_advection=dz_ad
lith.run_one_step()
z += 1.
lith.dz_advection=dz_ad
lith.rock_id=0
lith.run_one_step()
z += 2.
spatially_variable_rock_id = mg.ones('node')
spatially_variable_rock_id[mg.x_of_node > 6] = 2
lith.dz_advection=dz_ad
lith.rock_id=spatially_variable_rock_id
lith.run_one_step()
# get the rock-cube data structure and plot
ds = lith.rock_cube_to_xarray(sample_depths)
hvds_rock = hv.Dataset(ds.rock_type__id)
# make a plan view image
%opts Image style(cmap='viridis') plot[colorbar=True]
hvds_rock.to(hv.Image, ['x', 'y'])
"""
Explanation: Hopefully this gives you a sense of how LithoLayers works. The next two blocks of code have all the steps we just worked through in one place.
Try modifying the layer thicknesses, the size of the grid, the function used to create the form of the layers, the layers deposited and eroded, and the location of the anchor point to gain intuition for how you can use LithoLayers to create different types of layered rock.
End of explanation
"""
%opts Image style(cmap='viridis') plot[colorbar=True, invert_yaxis=True]
hvds_rock.to(hv.Image, ['x', 'z'])
"""
Explanation: You can also make a cross section of this new LithoLayers component.
End of explanation
"""
mg = RasterModelGrid((50, 30), 400)
z = mg.add_zeros('topographic__elevation', at='node')
random_field = 0.01 * np.random.randn(mg.size('node'))
z += random_field - random_field.min()
"""
Explanation: Part 2: Creation of a landscape evolution model with LithoLayers
In this next section, we will run LithoLayers with components used for a simple Landscape Evolution Model.
We will start by creating the grid.
End of explanation
"""
attrs = {'K_sp': {0: 0.0003, 1: 0.0001}}
z0s = 50 * np.arange(-20, 20)
z0s[-1] = z0s[-2] + 10000
ids = np.tile([0, 1], 20)
"""
Explanation: Next we set all the parameters for LithoLayers. Here we have two types of rock with different erodabilities.
End of explanation
"""
# Anticline
anticline_func = lambda x, y: ((0.002 * x)**2 + (0.001 * y)**2)
# Shallow dips
shallow_func = lambda x, y: ((0.001 * x) + (0.003 * y))
# Steeper dips
steep_func = lambda x, y: ((0.01 * x) + (0.01 * y))
"""
Explanation: There are three functional forms that you can choose between. Here we define each of them.
End of explanation
"""
# Anticline
lith = LithoLayers(mg,
z0s,
ids,
x0=6000,
y0=10000,
function=anticline_func,
attrs=attrs)
# Shallow dips
#lith = LithoLayers(mg, z0s, ids, function=shallow_func, attrs=attrs)
# Steeper dips
#lith = LithoLayers(mg, z0s, ids, function=steep_func, attrs=attrs)
"""
Explanation: The default option is to make an anticline, but you can comment/uncomment lines to choose a different functional form.
End of explanation
"""
imshow_grid(mg, 'K_sp')
"""
Explanation: Now that we've created LithoLayers, model grid fields for each of the LithoLayers attributes exist and have been set to the values of the rock exposed at the surface.
Here we plot the value of 'K_sp' as a function of the model grid.
End of explanation
"""
nts = 300
U = 0.001
dt = 1000
fa = FlowAccumulator(mg)
sp = FastscapeEroder(mg, K_sp='K_sp')
"""
Explanation: As you can see (in the default anticline option) we have concentric ellipses of stronger and weaker rock.
Next, let's instantiate a FlowAccumulator and a FastscapeEroder to create a simple landscape evolution model.
We will point the FastscapeEroder to the model grid field 'K_sp' so that it will respond to the spatially variable erodabilities created by LithoLayers.
End of explanation
"""
ds = xr.Dataset(
data_vars={
'topographic__elevation': (
('time', 'y', 'x'), # tuple of dimensions
np.empty((nts, mg.shape[0], mg.shape[1])), # n-d array of data
{
'units': 'meters', # dictionary with data attributes
'long_name': 'Topographic Elevation'
}),
'rock_type__id':
(('time', 'y', 'x'), np.empty((nts, mg.shape[0], mg.shape[1])), {
'units': '-',
'long_name': 'Rock Type ID Code'
})
},
coords={
'x': (
('x'), # tuple of dimensions
mg.x_of_node.reshape(
mg.shape)[0, :], # 1-d array of coordinate data
{
'units': 'meters'
}), # dictionary with data attributes
'y': (('y'), mg.y_of_node.reshape(mg.shape)[:, 1], {
'units': 'meters'
}),
'time': (('time'), dt * np.arange(nts) / 1e6, {
'units': 'millions of years since model start',
'standard_name': 'time'
})
})
"""
Explanation: Before we run the model we will also instantiate an xarray dataset used to store the output of our model through time for visualization.
The next block may look intimidating, but I'll try and walk you through what it does.
xarray allows us to create a container for our data and label it with information like units, dimensions, short and long names, etc. xarray combines the tools for dealing with N-dimensional data provided by Python packages such as numpy, the labeling and named-indexing power of the pandas package, and the data model of the NetCDF file.
This means that we can use xarray to make a "self-referential" dataset that contains all of the variables and attributes that describe what each part is and how it was made. In this application, we won't make a fully self-referential dataset, but if you are interested in this, check out the NetCDF best practices.
Important for our application is that later on we will use the HoloViews package for visualization. This package is a great tool for dealing with multidimensional annotated data and will do things like automatically create nice axis labels with units. However, in order for it to work, we must first annotate our data to include this information.
Here we create an xarray Dataset with two variables 'topographic__elevation' and 'rock_type__id' and three dimensions 'x', 'y', and 'time'.
We pass xarray two dictionaries, one with information about the data variables (data_vars) and one with information about the coordinate system (coords). For each data variable or coordinate, we pass a tuple of three items: (dims, data, attrs). The first element is a tuple of the dimension names, the second element is the data, and the third is a dictionary of attributes.
End of explanation
"""
print(ds)
"""
Explanation: We can print the data set to get some basic information about it.
End of explanation
"""
ds.topographic__elevation
"""
Explanation: We can also print a single variable to get more detailed information about it.
Since we initialized the dataset with empty (uninitialized) arrays for the two data variables, the values shown are just placeholders.
End of explanation
"""
out_fields = ['topographic__elevation', 'rock_type__id']
for i in range(nts):
fa.run_one_step()
sp.run_one_step(dt=dt)
dz_ad = np.zeros(mg.size('node'))
dz_ad[mg.core_nodes] = U * dt
z += dz_ad
lith.dz_advection=dz_ad
lith.run_one_step()
for of in out_fields:
ds[of][i, :, :] = mg['node'][of].reshape(mg.shape)
"""
Explanation: Next, we run the model. In each time step we first run the FlowAccumulator to direct flow and accumulate drainage area. Then the FastscapeEroder erodes the topography based on the stream power equation using the erodability value in the field 'K_sp'. We create an uplift field that uplifts only the model grid's core nodes. After uplifting these core nodes, we update LithoLayers. Importantly, we must tell LithoLayers how much it has been advected upward by uplift using the dz_advection keyword argument.
As we discussed in the introductory example, the built-in function lith.run_one_step has an optional keyword argument rock_id to use when some material may be deposited. The LithoLayers component needs to know what type of rock exists everywhere and it will raise an error if material is deposited and no rock type is specified. However, here we are using the FastscapeEroder which is fully detachment limited, and thus we know that no material will be deposited at any time. Thus we can ignore this keyword argument. Later in the tutorial we will use the LinearDiffuser which can deposit sediment and we will need to set this keyword argument correctly.
Within each timestep we save information about the model for plotting.
End of explanation
"""
imshow_grid(mg, 'topographic__elevation', cmap='viridis')
"""
Explanation: Now that the model has run, let's start by plotting the resulting topography.
End of explanation
"""
hvds_topo = hv.Dataset(ds.topographic__elevation)
hvds_rock = hv.Dataset(ds.rock_type__id)
hvds_topo
"""
Explanation: The layers of rock clearly influence the form of topography.
Next we will use HoloViews to visualize the topography and rock type together.
To start, we create a HoloViewDataset from our xarray datastructure.
End of explanation
"""
%opts Image style(interpolation='bilinear', cmap='viridis') plot[colorbar=True]
topo = hvds_topo.to(hv.Image, ['x', 'y'])
rock = hvds_rock.to(hv.Image, ['x', 'y'])
topo + rock
"""
Explanation: Next we specify that we want two images, one showing rock type and one showing topographic elevation. A slider bar shows us model time in millions of years.
Be patient. Running this next block may take a moment. HoloViews is rendering an image of all time slices so you can see an animated slider. This is pretty magical (but not instantaneous).
End of explanation
"""
mg2 = RasterModelGrid((30, 30), 200)
mg2.set_closed_boundaries_at_grid_edges(False, True, False, True)
z2 = mg2.add_zeros('topographic__elevation', at='node')
random_field = 0.01 * np.random.randn(mg2.size('node'))
z2 += random_field - random_field.min()
thicknesses2 = [10000]
ids2 = [0]
attrs2 = {'K_sp': {0: 0.0001, 1: 0.00001}, 'D': {0: 0.4, 1: 0.001}}
lith2 = Lithology(mg2, thicknesses2, ids2, attrs=attrs2)
fa2 = FlowAccumulator(mg2)
sp2 = FastscapeEroder(mg2, K_sp='K_sp')
ld2 = LinearDiffuser(mg2, linear_diffusivity='D')
out_fields = ['topographic__elevation', 'rock_type__id']
nts = 200
U = 0.001
dt = 1000
ds2 = xr.Dataset(data_vars={
'topographic__elevation':
(('time', 'y', 'x'), np.empty((nts, mg2.shape[0], mg2.shape[1])), {
'units': 'meters',
'long_name': 'Topographic Elevation'
}),
'rock_type__id':
(('time', 'y', 'x'), np.empty((nts, mg2.shape[0], mg2.shape[1])), {
'units': '-',
'long_name': 'Rock Type ID Code'
})
},
coords={
'x': (('x'), mg2.x_of_node.reshape(mg2.shape)[0, :], {
'units': 'meters'
}),
'y': (('y'), mg2.y_of_node.reshape(mg2.shape)[:, 1], {
'units': 'meters'
}),
'time': (('time'), dt * np.arange(nts) / 1e6, {
'units': 'millions of years since model start',
'standard_name': 'time'
})
})
half_nts = int(nts / 2)
dz_ad2 = np.zeros(mg2.size('node'))
dz_ad2[mg2.core_nodes] = U * dt
lith2.dz_advection=dz_ad2
lith2.rock_id=0
for i in range(half_nts):
fa2.run_one_step()
sp2.run_one_step(dt=dt)
ld2.run_one_step(dt=dt)
z2 += dz_ad2
lith2.run_one_step()
for of in out_fields:
ds2[of][i, :, :] = mg2['node'][of].reshape(mg2.shape)
"""
Explanation: We can see the form of the anticline advecting through the topography. Cool!
Part 3: Creation of Inverted Topography
Here we will explore making inverted topography by eroding Lithology with constant properties for half of the model evaluation time, and then filling Lithology in with resistant material only where the drainage area is large. This is meant as a simple example of filling in valleys with volcanic material.
All of the details of the options for creating a Lithology can be found here.
In the next code block we make a new model and run it. There are a few important differences between this next example and the one we just worked through in Part 2.
Here we will have two rock types. Type 0 that represents non-volcanic material. It will have a higher diffusivity and erodability than the volcanic material, which is type 1.
Recall that in Part 2 we did not specify a rock_id keyword argument to the lith.run_one_step method. This was because we used only the FastscapeEroder component, which is fully detachment limited and thus never deposits material. In this example we will also use the LinearDiffuser component, which may deposit material. The Lithology component needs to know the rock type everywhere and thus we must indicate the rock type of the newly deposited rock. This is done by passing either a single value or a number-of-nodes sized array of rock type values to the run_one_step method.
We also are handling the model grid boundary conditions differently than in the last example, setting the boundaries on the top and bottom to closed.
End of explanation
"""
imshow_grid(mg2, 'topographic__elevation', cmap='viridis')
"""
Explanation: After the first half of run time, let's look at the topography.
End of explanation
"""
volcanic_deposits = np.zeros(mg2.size('node'))
da_big_enough = mg2['node']['drainage_area'] > 5e4
topo_difference_from_top = mg2['node']['topographic__elevation'].max(
) - mg2['node']['topographic__elevation']
volcanic_deposits[
da_big_enough] = 0.25 * topo_difference_from_top[da_big_enough]
volcanic_deposits[mg2.boundary_nodes] = 0.0
z2 += volcanic_deposits
lith2.rock_id=1
lith2.run_one_step()
imshow_grid(mg2, volcanic_deposits)
"""
Explanation: We can see that we have developed ridges and valleys as we'd expect from a model with stream power erosion and linear diffusion.
Next we will create some volcanic deposits that fill the channels in our model.
End of explanation
"""
for i in range(half_nts, nts):
fa2.run_one_step()
sp2.run_one_step(dt=dt)
ld2.run_one_step(dt=dt)
dz_ad2 = np.zeros(mg2.size('node'))
dz_ad2[mg2.core_nodes] = U * dt
z2 += dz_ad2
lith2.dz_advection=dz_ad2
lith2.rock_id=0
lith2.run_one_step()
for of in out_fields:
ds2[of][i, :, :] = mg2['node'][of].reshape(mg2.shape)
"""
Explanation: We should expect that the locations of our valleys and ridges change as the river system encounters the much stronger volcanic rock.
End of explanation
"""
imshow_grid(mg2, 'topographic__elevation', cmap='viridis')
"""
Explanation: Now that the model has run, let's plot the final elevation
End of explanation
"""
hvds_topo2 = hv.Dataset(ds2.topographic__elevation)
hvds_rock2 = hv.Dataset(ds2.rock_type__id)
%opts Image style(interpolation='bilinear', cmap='viridis') plot[colorbar=True]
topo2 = hvds_topo2.to(hv.Image, ['x', 'y'])
rock2 = hvds_rock2.to(hv.Image, ['x', 'y'])
topo2 + rock2
# if you wanted to output to visualize in something like ParaView, the following commands can be used
#ds.to_netcdf('anticline.nc')
#ds2.to_netcdf('inversion.nc')
"""
Explanation: And now a HoloViews plot that lets us explore the time evolution of the topography
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/dev/n07_market_simulator.ipynb | mit | # Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
from utils import analysis
"""
Explanation: This is a notebook to aid in the development of the market simulator. One initial version was created as part of the Machine Learning for Trading course. It has to be adapted for use in the Capstone project.
End of explanation
"""
from utils import marketsim as msim
orders_path = '../../data/orders/orders-my-leverage.csv'
orders_df = pd.read_csv(orders_path, index_col='Date', parse_dates=True, na_values=['nan'])
orders_df
data_df = pd.read_pickle('../../data/data_df.pkl')
port_vals_df, values = msim.simulate_orders(orders_df, data_df)
port_vals_df.plot()
values
analysis.value_eval(port_vals_df, graph=True, verbose=True, data_df=data_df)
"""
Explanation: To use the market simulator with the q-learning agent it must be possible to call it with custom data, stored in RAM. Let's try that.
End of explanation
"""
'AAPL' in data_df.columns.tolist()
data_df.index.get_level_values(0)[0]
symbols = data_df.columns.get_level_values(0).tolist()
symbols.append('CASH')
positions_df = pd.DataFrame(index=symbols, columns=['shares', 'value'])
positions_df
close_df = data_df.xs('Close', level='feature')
close_df.head()
current_date = close_df.index[-1]
current_date
positions_df['shares'] = np.zeros(positions_df.shape[0])
positions_df.loc['CASH','shares'] = 1000
positions_df
SHARES = 'shares'
VALUE = 'value'
CASH = 'CASH'
prices = close_df.loc[current_date]
prices[CASH] = 1.0
positions_df[VALUE] = positions_df[SHARES] * prices
positions_df
ORDER_SYMBOL = 'symbol'
ORDER_ORDER = 'order'
ORDER_SHARES = 'shares'
BUY = 'BUY'
SELL = 'SELL'
NOTHING = 'NOTHING'
order = pd.Series(['AAPL', BUY, 200], index=[ORDER_SYMBOL, ORDER_ORDER, ORDER_SHARES])
order
if order[ORDER_ORDER] == 'BUY':
positions_df.loc[order[ORDER_SYMBOL], SHARES] += order[ORDER_SHARES]
positions_df.loc[CASH, SHARES] -= order[ORDER_SHARES] * close_df.loc[current_date, order[ORDER_SYMBOL]]
if order[ORDER_ORDER] == 'SELL':
positions_df.loc[order[ORDER_SYMBOL], SHARES] -= order[ORDER_SHARES]
positions_df.loc[CASH, SHARES] += order[ORDER_SHARES] * close_df.loc[current_date, order[ORDER_SYMBOL]]
positions_df[VALUE] = positions_df[SHARES] * prices
positions_df.loc['AAPL']
positions_df.loc[CASH]
close_df.loc[current_date, 'AAPL']
116*200
positions_df[VALUE].iloc[:-1]
values = positions_df[VALUE]
leverage = np.sum(np.abs(values.iloc[:-1])) / (np.sum(values))
leverage
"""
Explanation: That function has many of the desired characteristics, but doesn't follow the dynamics necessary for the interaction with the agent. The solution will be to implement a new class, called Portfolio, that will accept orders, keep track of the positions and return their values when asked for.
End of explanation
"""
from recommender.portfolio import Portfolio
p = Portfolio(data_df)
from recommender.order import Order
o1 = Order(['AAPL', BUY, 150])
print(o1)
p.positions_df
p.positions_df.loc['AAPL']
p.execute_order(o1)
p.positions_df.loc[['AAPL','CASH']]
p.add_market_days(1)
p.current_date
p.positions_df.loc[['AAPL', CASH]]
p.add_market_days(1)
p.current_date
p.positions_df.loc[['AAPL', CASH]]
p.positions_df[VALUE].sum()
p.execute_order(Order(['AAPL',SELL,100]))
p.positions_df[p.positions_df[SHARES] != 0]
"""
Explanation: Let's test the Portfolio class
End of explanation
"""
p.execute_order(Order(['MSFT',BUY,120]))
p.get_positions()
p.leverage_limit = 2
"""
Explanation: Let's add a leverage limit of 2
End of explanation
"""
p.execute_order(Order(['AAPL',BUY, 10]))
p.get_positions()
"""
Explanation: Let's buy less than the limit
End of explanation
"""
p.execute_order(Order(['AAPL',BUY, 5000]))
p.get_positions()
"""
Explanation: Now, let's buy more than the limit
End of explanation
"""
p.execute_order(Order(['AAPL',SELL, 300]))
p.get_positions()
"""
Explanation: The last order wasn't executed because the leverage limit was reached. That's good.
Let's now go short on AAPL, but less than the limit
End of explanation
"""
p.execute_order(Order(['AAPL',SELL, 3000]))
p.get_positions()
"""
Explanation: Now, the same, but this time let's pass the limit.
End of explanation
"""
pos = p.get_positions()
pos[VALUE].sum()
p.add_market_days(1000)
p.get_positions()
p.add_market_days(6000)
p.get_positions()
p.get_positions()[VALUE].sum()
p.add_market_days(-7000) # Back in time...
p.get_positions()
p.current_date
"""
Explanation: Nothing happened because the leverage limit was reached. That's ok.
End of explanation
"""
p.close_df.loc[p.current_date, 'GOOG']
p.execute_order(Order(['GOOG', BUY, 100]))
p.get_positions()
"""
Explanation: Let's try to buy GOOG before it entered the market...
End of explanation
"""
# I need to add some cash, because I lost a lot of money shorting AAPL in the last 20 years, and I need to meet the leverage limits.
p.positions_df.loc[CASH, SHARES] = 100000
p.update_values()
p.add_market_days(7200)
p.execute_order(Order(['GOOG', BUY, 100]))
p.get_positions()
"""
Explanation: Ok, nothing happened. That's correct.
Now, let's add some years and try to buy GOOG again...
End of explanation
"""
p.leverage_limit
p.my_leverage_reached()
p.get_leverage()
"""
Explanation: Good. This time GOOG was bought!
What about the leverage?
End of explanation
"""
|
jsignell/MpalaTower | inspection/.ipynb_checkpoints/meta_data-checkpoint.ipynb | mit | from __future__ import print_function
import pandas as pd
import datetime as dt
import numpy as np
import os
import xray
from posixpath import join
from flask.ext.mongoengine import MongoEngine
db = MongoEngine()
ROOTDIR = 'C:/Users/Julia/Documents/GitHub/MpalaTower/raw_netcdf_output/'
data = 'Table1'
datas = ['upper', 'Table1', 'lws', 'licor6262', 'WVIA',
'Manifold', 'flux', 'ts_data', 'Table1Rain']
non_static_attrs = ['instrument', 'source', 'program', 'logger']
static_attrs = ['station_name', 'lat', 'lon', 'elevation',
'Year', 'Month', 'DOM', 'Minute', 'Hour',
'Day_of_Year', 'Second', 'uSecond', 'WeekDay']
# Setting expected ranges for units. It is ok to include multiple ways of writing
# the same unit, just put all the units in a list
flag_by_units = {}
temp_min = 0
temp_max = 40
temp = ['Deg C', 'C']
for unit in temp:
flag_by_units.update({unit : {'min' : temp_min, 'max' : temp_max}})
percent_min = 0
percent_max = 100
percent = ['percent', '%']
for unit in percent:
flag_by_units.update({unit : {'min' : percent_min, 'max' : percent_max}})
shf_min = ''
shf_max = ''
shf = ['W/m^2']
shf_cal_min = ''
shf_cal_max = ''
shf_cal = ['W/(m^2 mV)']
batt_min = 11
batt_max = 240
batt = ['Volts', 'V']
for unit in batt:
flag_by_units.update({unit : {'min' : batt_min, 'max' : batt_max}})
PA_min = 15
PA_max = 25
PA = ['uSec']
def process_netcdf(input_dir, data, f, static_attrs):
ds = xray.Dataset()
ds = xray.open_dataset(join(input_dir, data, f),
decode_cf=True, decode_times=True)
df = ds.to_dataframe()
# drop from df, columns that don't change with time
exclude = [var for var in static_attrs if var in df.columns]
df_var = df.drop(exclude, axis=1) # dropping vars like lat, lon
# get some descriptive statistics on each of the variables
df_summ = df_var.describe()
return ds, df_summ
"""
Explanation: Meta Data
This notebook contains everything you need to create a nice neat list of meta data dictionaries out of netcdf files. In this case we have made one meta data dictionary for each day in a five year span. The dictionaries are only created when there is data available on the given day, and there are up to 8 datafiles represented on each day. Each file contains data from various sensors, reported out in a whole slew of variables. Each variable has attributes associated with it in the netcdf file. These attributes are carried over into the dict and other attributes are added, such as a flag variable that can be raised for various problematic data situations (missing data, unreasonable data, ...)
Overview of Data Dict Structure
What we want is the output to be a dict for each of the following: Variable, File, Metadata. The functions that generate each of these will be called from the parse_netcdf function. Then the dicts will be fed into the classes and output to the database.
Setup
End of explanation
"""
class Variable(db.EmbeddedDocument):
timestep_count = db.IntField()
flags = db.ListField(db.StringField())
name = db.StringField(db_field='var')
units = db.StringField()
count = db.IntField()
avg_val = db.FloatField(db_field='mean') # Avoid function names
std_val = db.FloatField(db_field='std')
min_val = db.FloatField(db_field='min')
max_val = db.FloatField(db_field='max')
p25th = db.FloatField(db_field='25%')
p75th = db.FloatField(db_field='75%')
content_type = db.StringField(db_field='content_coverage_type')
coordinates = db.StringField()
comment = db.StringField()
def generate_flags(var, flag_by_units):
# check status of data and raise flags
flags = []
if var.timestep_count*11/12 < var.count < var.timestep_count:
flags.append('missing a little data')
    elif var.timestep_count/2 < var.count <= var.timestep_count*11/12:
        flags.append('missing some data')
    elif 0 < var.count <= var.timestep_count/2:
        flags.append('missing lots of data')
elif var.count == 0:
flags.append('no data')
try:
if var.name.startswith('del'):
pass
elif var.comment == 'Std': # don't check std_dev
pass
else:
if var.max_val > flag_by_units[var.units]['max']:
flags.append('contains high values')
if var.min_val < flag_by_units[var.units]['min']:
flags.append('contains low values')
except:
pass
return flags
def generate_variable(ds, df_summ, var, flag_by_units):
if df_summ[var]['count'] != 0:
this_variable = Variable(
name=var,
timestep_count=len(ds['time']),
count=df_summ[var]['count'],
avg_val=df_summ[var]['mean'],
std_val=df_summ[var]['std'],
min_val=df_summ[var]['min'],
p25th=df_summ[var]['25%'],
p75th=df_summ[var]['75%'],
max_val=df_summ[var]['max'],
units=ds[var].attrs['units'],
comment=ds[var].attrs['comment'],
coordinates=ds[var].attrs['coordinates'],
content_type=ds[var].attrs['content_coverage_type'],
)
else:
this_variable = Variable(
name=var,
timestep_count=len(ds['time']),
count=df_summ[var]['count']
)
this_variable.flags = generate_flags(this_variable, flag_by_units)
return this_variable
"""
Explanation: Variable
End of explanation
"""
class File(db.EmbeddedDocument):
source = db.StringField()
instrument = db.StringField()
datafile = db.StringField()
filename = db.StringField()
frequency = db.FloatField()
frequency_flag = db.StringField()
# The File object contains a list of Variables:
variables = db.EmbeddedDocumentListField(Variable)
def convert_to_sec(num, units):
if units.startswith(('Min','min')):
out = int(num)*60
elif units.startswith(('ms', 'mS')):
out = float(num)/1000
elif units.startswith(('s','S')):
out = int(num)
else:
print('couldn\'t parse units')
return (num, units)
return out
def programmed_frequency(f, this_file):
data = f['data']
program = this_file.source.split('CPU:')[1].split(',')[0]
try:
prog = open(join(f['dir'], 'programs', program))
except:
freq_flag = 'program: %s not found' % program
freq = float('nan')
return freq_flag, freq
lines = prog.readlines()
i= 0
k = 0
interval = None
DT = 'DataTable'
DI = 'DataInterval'
CT = 'CallTable'
for i in range(len(lines)):
if lines[i].split()[0:] == [DT, data]:
k = i
        if lines[i].split()[0:1] == [DI] and i <= (k+2):
interval = lines[i].split(',')[1]
units = lines[i].split(',')[2]
i +=1
    if interval is None:
i = 0
for i in range(len(lines)):
            if lines[i].split()[0:1] == ['Scan']:
interval = lines[i].split('(')[1].split(',')[0]
units = lines[i].split(',')[1]
if lines[i].split()[0:2] == [CT, data] and i <= (k+7):
interval = interval
units = units
else:
interval = None
units = None
i +=1
    if interval is None:
freq_flag = 'could not find interval in %s' % program
freq = 'nan'
return freq_flag, freq
try:
num = int(interval)
except:
for line in lines:
if line.startswith('Const '+interval):
a = line.split('=')[1]
b = a.split()[0]
num = int(b)
freq = convert_to_sec(num, units)
freq_flag = 'found frequency'
return freq_flag, freq
def generate_file(f, ds, df_summ, flag_by_units):
this_file = File(
source=ds.attrs['source'],
instrument=ds.attrs['instrument'],
filename=f['filename']
)
freq_flag, freq = programmed_frequency(f, this_file)
this_file.frequency = float(freq)
this_file.frequency_flag = freq_flag
variables = []
for var in df_summ:
variables.append(generate_variable(ds, df_summ, var, flag_by_units))
this_file.variables = variables
return this_file
"""
Explanation: Datafile attribute dict
End of explanation
"""
class Metadata(db.Document):
license = db.StringField()
title = db.StringField()
creator = db.StringField(db_field='creator_name', default='Kelly Caylor')
creator_email = db.EmailField()
institution = db.StringField()
aknowledgements = db.StringField()
feature_type = db.StringField(db_field='featureType')
year = db.IntField(required=True)
month = db.IntField(required=True)
doy = db.IntField(required=True)
date = db.DateTimeField(required=True)
summary = db.StringField()
conventions = db.StringField()
naming_authority = db.StringField() # or URLField?
# The Metadata object contains a list of Files:
files = db.EmbeddedDocumentListField(File)
meta = {
'collection': 'metadata',
'ordering': ['-date'],
'index_background': True,
'indexes': [
'year',
'month',
'doy',
]
}
def generate_metadata(self, input_dir, ds):
self.license = ds.attrs['license']
self.title = ds.attrs['title']
self.creator=ds.attrs['creator_name']
self.creator_email=ds.attrs['creator_email']
self.institution=ds.attrs['institution']
self.aknowledgements=ds.attrs['acknowledgement']
self.feature_type=ds.attrs['featureType']
self.summary=ds.attrs['summary']
self.conventions=ds.attrs['Conventions']
self.naming_authority=ds.attrs['naming_authority']
return self
"""
Explanation: Metadata
End of explanation
"""
def find_dates(self, input_dir, datas):
    data_list = []
    start = '2010-01-01'
    end = dt.datetime.utcnow()
    rng = pd.date_range(start, end, freq='D')
    for date in rng:
        y = date.year
        m = date.month
        d = date.dayofyear
        f = 'raw_MpalaTower_%i_%03d.nc' % (y, d)
        if any(f in os.listdir(join(input_dir, data)) for data in datas):
            data_dict = {'year': y, 'month': m, 'doy': d, 'date': date, 'files': []}
            data_list.append(data_dict)
    return data_list
def find_files(this_metadata, datas):
    f = 'raw_MpalaTower_%i_%03d.nc' % (this_metadata.year, this_metadata.doy)
    for data in datas:
        if f in os.listdir(join(input_dir, data)):
            this_file = File(datafile=data, filename=f)
            this_metadata.files.append(this_file)
    return this_metadata
"""
Explanation: Process data into a list of daily data dicts
End of explanation
"""
from flask.ext.mongoengine import MongoEngine
db = MongoEngine()
db.connect(host='mongodb://joey:joejoe@dogen.mongohq.com:10097/mpala_tower_metadata')
ds, df_summ = process_netcdf(data_dict['files'][0]['dir'],
data_dict['files'][0]['data'],
data_dict['files'][0]['filename'],
static_attrs)
this_metadata = generate_metadata(data_dict, ds)
for f in data_dict['files']:
print(f['filename'],f['data'])
ds, df_summ = process_netcdf(f['dir'], f['data'], f['filename'], static_attrs)
this_file = generate_file(f, ds, df_summ, flag_by_units)
this_metadata.files.append(this_file)
this_metadata.save()
"""
Explanation: Send to internet
End of explanation
"""
|
4DGenome/Chromosomal-Conformation-Course | Notebooks/A4-Align_and_compare_TADs.ipynb | gpl-3.0 | from pytadbit import load_chromosome
"""
Explanation: Table of Contents
Comparing TAD borders between experiments
Alignment of TAD borders
Significance
Playing with borders
Get a given column
Search for aligned TADs with specific features
Strongly conserved borders
Borders specific to one experiment
Comparing TAD borders between experiments
End of explanation
"""
crm = load_chromosome('results/fragment/crm18.tdb')
print crm
"""
Explanation: We load Chromosome objects, previously saved. These objects usually only contain "metadata": chromosome size, position of TADs for each experiment.
Note: the interaction matrices associated to the experiment can also be saved, but by default they are not, as they can be loaded afterwards directly from the matrix.
End of explanation
"""
ali = crm.align_experiments(['T0', 'T60'])
ali.draw()
"""
Explanation: Alignment of TAD borders
End of explanation
"""
ali, stats = crm.align_experiments(['T0', 'T60'], randomize=True)
print ali
stats
print 'Alignment score: %.3f, p-value: %.4f\n proportion of borders of T0 found in T60: %.3f, of T60 in T0 %.3f' % stats
"""
Explanation: Significance
End of explanation
"""
ali.get_column(3)
cols = ali.get_column(3)
col = cols[0]
border1, border2 = col[1]
border1['score']
border2['score']
ali.draw(focus=(1, 30))
"""
Explanation: Playing with borders
Get a given column
End of explanation
"""
ali.get_column(lambda x: x['score']>5, min_num=2)
"""
Explanation: Search for aligned TADs with specific features
Strongly conserved broders
End of explanation
"""
ali.get_column(lambda x: x['score']==0.0, lambda x: x['exp'].name=='T0', min_num=1)
"""
Explanation: Borders specific to one experiment
End of explanation
"""
|
morningc/wwconnect-2016-spark4everyone | python/Apache Spark for Everyone | PySpark + Python + Jupyter.ipynb | mit | # set your working directory if you want less pathy things later
WORK_DIR = '/Users/amcasari/repos/wwconnect-2016-spark4everyone/'
# create an RDD from bikes data
# sc is an existing SparkContext (initialized when PySpark starts)
bikes = sc.textFile(WORK_DIR + "data/bikes/p*")
bikes.count()
# import SQLContext
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
# since we are familiar with pandas dataframes, let's convert the RDD to a Spark DataFrame
# we'll try to infer the schema from the files
bikes_df = sqlContext.createDataFrame(bikes)
# whoops a daisy, let's remove the header, split out the Rows + we can programmatically specify the schema
names = bikes.first().replace('"','')
names
# remove the header using subtract
bikesHeader = bikes.filter(lambda x: "instant" in x)
bikesFiltered = bikes.subtract(bikesHeader)
bikesFiltered.count()
# programmatically specify the schema using a StructField
from pyspark.sql.types import *
fields = [StructField(field_name, StringType(), False) for field_name in names.split(',')]
fields
schema = StructType(fields)
schema
# convert each line in the csv to a tuple
parts = bikesFiltered.map(lambda l: l.split(","))
bikesSplit = parts.map(lambda p: (p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7], p[8], p[9], p[10],
p[11], p[12], p[13], p[14], p[15], p[16]))
# Apply the schema to the RDD.
bikes_df = sqlContext.createDataFrame(bikesSplit, schema)
bikes_df.show()
bikes_df.printSchema()
# now we can look for trends + data quality questions...
# total # of rows in the DataFrame
num_rows = bikes_df.count()
# number of distinct rows in the DataFrame
num_distinct = bikes_df.distinct().count()
# and we can start to see where pySpark returning python objects can be used locally
print "count() returns a python object of type " + str(type(num_rows))
print "number of duplicate rows in the DataFrame: " + str(num_rows - num_distinct)
# check out some more df methods
bikes_df.groupBy('holiday').count().show()
# let's look at trips in July
july_trips = bikes_df.filter(bikes_df['mnth'] == 7)
# since we'll be working over the DAG quite a bit, let's persist the RDD in memory
july_trips.persist()
july_trips.count()
july_trips.show()
# what else would you examine here?
# more functions can be found here in documentation (listed in refs)
# when we are done working with data, remove from memory
july_trips.unpersist()
"""
Explanation: Apache Spark for Everyone - PySpark + Python
Markdown blocks communicate text, images + whatever other useful HTML bits you want to share.
Like TODO lists:
~~get bikes data set~~
~~import csv~~
~~do some things with pyspark~~
~~do some things with python~~
show a python vis?
save out file
And code bits:
```
from pyspark.sql import SQLContext
```
And where you can check on your local Spark cluster
Great Markdown cheatsheet on github here
End of explanation
"""
# create an RDD from music lyrics + perform Classic WordCount()
from operator import add
lines = sc.textFile(WORK_DIR + "/data/music/machete - amanda palmer")
counts = lines.flatMap(lambda x: x.split(' ')) \
.map(lambda x: (x, 1)) \
.reduceByKey(add)
output = counts.collect()
for (word, count) in output:
print "%s: %i" % (word, count)
"""
Explanation: Markdown is useful for analysis notes, directions, and making jokes...
You can also reference songs you like, which are more fun for WordCount() than README.md
End of explanation
"""
|
liufuyang/deep_learning_tutorial | course-deeplearning.ai/course4-cnn/week2-ResNets/ResNets/Residual+Networks+-+v2.ipynb | mit | import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
import tensorflow as tf
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
"""
Explanation: Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by He et al., allow you to train much deeper networks than were previously practically feasible.
In this assignment, you will:
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
End of explanation
"""
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
# X = X + X_shortcut
X = Add()([X,X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
"""
Explanation: 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Vanishing gradient <br> The speed of learning decreases very rapidly for the early layers as the network trains </center></caption>
You are now going to solve this problem by building a Residual Network!
2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : A ResNet block showing a skip-connection <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Identity block. Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 4 </u><font color='purple'> : Identity block. Skip connection "skips over" 3 layers.</center></caption>
Here're the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2a'. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be conv_name_base + '2b'. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2c'. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Exercise: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: See reference
- To implement BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: Activation('relu')(X)
- To add the value passed forward by the shortcut: See reference
End of explanation
"""
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(F2, (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3, (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3, (1, 1), strides = (s,s), padding = 'valid', name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
# X = X + X_shortcut
X = Add()([X,X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
2.2 - The convolutional block
You've implemented the ResNet identity block. Next, the ResNet "convolutional block" is the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 4 </u><font color='purple'> : Convolutional block </center></caption>
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
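The halving claim follows from the usual convolution output-size formula, $n_{out} = \lfloor (n + 2p - f)/s \rfloor + 1$. A small helper (illustrative only, not part of the assignment) makes this concrete:

```python
def conv_output_size(n, f, p, s):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# A 1x1 "valid" convolution with stride 2 halves a 64x64 activation:
print(conv_output_size(64, f=1, p=0, s=2))  # 32

# A 3x3 "same" convolution with stride 1 preserves the size:
print(conv_output_size(7, f=3, p=1, s=1))   # 7
```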
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be conv_name_base + '2a'.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be conv_name_base + '2b'.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2c'.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be conv_name_base + '1'.
- The BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '1'.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Exercise: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- Conv Hint
- BatchNorm Hint (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: Activation('relu')(X)
- Addition Hint
End of explanation
"""
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
# The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
# The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
X = convolutional_block(X, f = 3, filters = [128,128,512], stage = 3, block='a', s = 2)
X = identity_block(X, 3, [128,128,512], stage=3, block='b')
X = identity_block(X, 3, [128,128,512], stage=3, block='c')
X = identity_block(X, 3, [128,128,512], stage=3, block='d')
# Stage 4 (≈6 lines)
# The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
# The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5 (≈3 lines)
# The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
# The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
# The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
X = AveragePooling2D(pool_size=(2, 2), name='avg_pool')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 5 </u><font color='purple'> : ResNet-50 model </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
- The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
- The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
- The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
- The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The flatten doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be 'fc' + str(classes).
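As a sanity check on the name "ResNet-50", the weighted layers can be counted directly from the stage description above: each convolutional or identity block contains 3 CONV layers (the shortcut CONV is conventionally not counted), plus the stage-1 CONV and the final Dense layer.

```python
# Weighted layers in ResNet-50: conv1, then 3 CONV layers per block,
# then the fully connected output layer. Shortcut CONVs are
# conventionally excluded from the count.
blocks_per_stage = {2: 3, 3: 4, 4: 6, 5: 3}  # conv block + identity blocks
conv_layers = sum(3 * n for n in blocks_per_stage.values())  # 48
total = 1 + conv_layers + 1                   # conv1 + stages + fc
print(total)  # 50
```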
Exercise: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling see reference
Here are some other functions we used in the code below:
- Conv2D: See reference
- BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: See reference
- Max pooling: See reference
- Fully connected layer: See reference
- Addition: See reference
End of explanation
"""
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
"""
Explanation: Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running model.fit(...) below.
End of explanation
"""
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
"""
Explanation: As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
End of explanation
"""
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
"""
Explanation: The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 6 </u><font color='purple'> : SIGNS dataset </center></caption>
End of explanation
"""
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
"""
Explanation: Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
End of explanation
"""
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
** Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
** Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
End of explanation
"""
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model only for two epochs. You can see that it achieves poor performances. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
End of explanation
"""
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
"""
Explanation: ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
End of explanation
"""
model.summary()
"""
Explanation: You can also print a summary of your model by running the following code.
End of explanation
"""
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
"""
Explanation: Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
End of explanation
"""
|
liganega/Gongsu-DataSci | previous/notes2017/old/NB-06-Loops.ipynb | gpl-3.0 | animals = ['cat', 'dog', 'mouse']
for x in animals:
print("This is the {}.".format(x))
"""
Explanation: Loops
Combining sequence types with for or while statements lets you write simple but powerful loops. The range (and xrange) functions are especially useful here.
for loops
Using lists
End of explanation
"""
for x in animals:
print("{}!, this is the {}.".format("Hi", x))
"""
Explanation: Formatted print statements
The code above uses a formatted print statement: place curly braces ({}) where you want values to appear, then pass as many arguments to the format method as there are braces.
End of explanation
"""
for x in animals:
print("{1}!, you are {0}.".format("animals", x))
"""
Explanation: You can also use indexing inside the braces, as shown below. We will look at more varied examples of formatted print statements later.
End of explanation
"""
for letter in "Hello World":
print(letter)
"""
Explanation: Using strings
Running a for loop over a string yields its characters one at a time.
End of explanation
"""
a = range(10)
a
"""
Explanation: The range function
Lists with regularly spaced elements can be created with the range function.
End of explanation
"""
b = xrange(10)
b
a[5]
b[5]
"""
Explanation: Python 2.x also provides xrange, which does almost the same job as range but does not display the full list.
The elements of the list returned by xrange(n) can only be inspected through indexing.
When you do not need all the elements and only need a counter, xrange accesses elements faster than range, so it can be used to speed up a program.
Note: xrange no longer exists in Python 3.x; we recommend using only range.
End of explanation
"""
a[2:6]
"""
Explanation: Note: slicing cannot be applied to a list created with xrange; only indexing can be used.
End of explanation
"""
c0 = range(4)
c0
c1 = range(1, 4)
c1
c2 = range(1, 10, 2)
c2
"""
Explanation: range function arguments
range accepts up to three arguments. Each plays the same role as the corresponding one of the three arguments used in slicing.
range([start,] stop [, step])
If start is not given, it defaults to 0.
If step is not given, it defaults to 1.
End of explanation
"""
for i in range(6):
print("the square of {} is {}").format(i, i ** 2)
"""
Explanation: The range function is very handy in for loops.
End of explanation
"""
for i in range(5):
print("printing five times")
"""
Explanation: You can also use the range function purely as a counter.
End of explanation
"""
i
"""
Explanation: Note that, unlike in C or Java, the variable used in a Python for loop is not local to the loop.
End of explanation
"""
def range_double(x):
z = []
for y in range(x):
z.append(y*2)
return z
range_double(4)
"""
Explanation: Applications of the range function
Exercise: using range
The function range_double does something similar to the range function, except that each element of the generated list is doubled.
>>> range_double (4)
[0, 2, 4, 6]
>>> range_double (10)
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
End of explanation
"""
def skip13(a, b):
result = []
for k in range (a,b):
if k == 13:
pass # do nothing
else: result.append(k)
return result
skip13(1, 20)
"""
Explanation: Exercise: using range
In the West there is a superstition about the number 13. Just as some Eastern buildings have no 4th floor, some Western buildings therefore have no 13th floor. The function below builds and returns a list that skips 13.
End of explanation
"""
q = []
def add(name):
q.append(name)
def next():
return q.pop(0)
def show():
for i in q:
print(i)
def length():
return len(q)
add("SeungMin")
q
add("Jisung")
q.pop(1)
q
next()
add("Park")
add("Kim")
q
show()
length()
next()
add("Hwang")
show()
next()
show()
"""
Explanation: Exercise: a bank waiting-line and call program using a queue
A queue data structure can be implemented with a list. Implementing a queue usually means implementing the following functions together:
add(name): when a new customer arrives, append the customer's name to the queue.
next(): return the name of the customer who arrived first among those waiting.
show(): print the list of waiting customers.
length(): return the number of waiting customers.
Preparation:
Declare q = [] as a global variable and use it.
You should understand how the queue data structure is used.
It follows the first-in, first-out (FIFO) discipline.
End of explanation
"""
def iterate(f, x, n):
a = x
for i in range(n):
a = f(a)
return a
# To repeatedly halve a number, pass the divide_2 function below as the f argument.
def divide_2(x):
return x/2.0
iterate(divide_2, 64, 3)
"""
Explanation: Exercise: repeating a function call with range
In Python, a function can be passed as an argument to another function.
In C or Java this is only possible by using pointers.
Languages in which functions can be passed as arguments are called higher-order languages.
Example: the iterate function below applies a given function f to the argument x repeatedly, n times, and returns the result.
End of explanation
"""
x = 64
while x > 1:
x = x/2
print(x)
"""
Explanation: while loops
Differences between for and while
A for loop repeats work while moving through a given range (usually expressed as a sequence type).
A while loop repeats work as long as a given condition (expressed as a boolean value) is satisfied.
End of explanation
"""
eps = 1.0
while eps + 1 > 1:
eps = eps / 2.0
eps + 1 > 1
# print("A very small epsilon is {}.".format(eps))
"""
Explanation: Example
Approximating the floating-point number with the smallest absolute value a computer can handle
We usually have some idea of the largest number a computer can handle.
Likewise, there is a smallest positive real number a computer can handle.
A computer cannot deal with real numbers arbitrarily close to 0.
Just as with very large numbers, this means you must be careful when handling real numbers with very small absolute values.
This is a limitation of the computer, not of Python itself; every programming language has the same limitation.
End of explanation
"""
def n_divide(n):
a = []
for i in range(n+1):
d = 1-(float(n-i)/n)
a.append(d)
return a
n_divide(10)
"""
Explanation: Exercise
Write a function n_divide that takes a positive integer n and returns a list of the numbers that divide the interval from 0 to 1 evenly into n equal parts.
Sample solution 1
End of explanation
"""
def n_divide1(n):
a = []
for i in range(n+1):
d = (float(i)/n)
a.append(d)
return a
n_divide1(10)
"""
Explanation: We expected [0.0, 0.1, 0.2, 0.3, ..., 0.9, 1.0], but the result came out differently. Let's try coding the n_divide function as below instead.
Sample solution 2
End of explanation
"""
|
sadahanu/Capstone | NLP/nlp eda.ipynb | mit | # source1: web
df_breed = pd.read_csv("breed_nick_names.txt",names=['breed_info'])
df_breed.head()
df_breed.shape
breeds_info = df_breed['breed_info'].values
breed_dict = {}
for breed in breeds_info:
temp = breed.lower()
temp = re.findall('\d.\s+(\D*)', temp)[0]
temp = temp.strip().split('=')
breed_dict[temp[0].strip()] = temp[1].strip()
# 1. different nicek names are separated with 'or'
for k, v in breed_dict.iteritems():
breed_dict[k] = map(lambda x:x.strip(), v.split(' or '))
# 2. get n-gram and stemmed words breed_dict
for k, v in breed_dict.iteritems():
breed_dict[k] = set(v)
breed_dict[k].add(k)
temp_set = set([snowball.stem(x) for x in breed_dict[k]])
breed_dict[k] = breed_dict[k]|temp_set
for word in word_tokenize(k):
breed_dict[k].add(word)
breed_dict[k].add(snowball.stem(word))
breed_dict[k] = breed_dict[k] - {'dog', 'dogs'} - stopword_set
print breed_dict['chow chows']
breed_lookup = defaultdict(set)
for k, v in breed_dict.iteritems():
for word in v:
breed_lookup[word].add(k)
breed_lookup.keys()
del_list = ['toy','blue','great','duck','coat','wire','st.','white','grey',
'black','old','smooth','west','soft']
for w in del_list:
breed_lookup.pop(w, None)
print len(breed_lookup)
# polish the look up tables based on 52 base classes
breed_classes = pd.read_csv("s3://dogfaces/tensor_model/output_labels_20170907.txt",names=['breed'])
base_breeds = breed_classes['breed'].values
not_found_breed = []
for breed in base_breeds:
if breed not in breed_dict:
if breed in breed_lookup:
if len(breed_lookup[breed])==1:
breed_in_dict = list(breed_lookup[breed])[0]
breed_dict[breed] = breed_dict[breed_in_dict]
breed_dict[breed].add(breed_in_dict)
breed_dict.pop(breed_in_dict, None)
print "replace the key {} with {}".format(breed_in_dict, breed)
else:
print breed, breed_lookup[breed]
elif snowball.stem(breed) in breed_lookup:
breed_stem = snowball.stem(breed)
if len(breed_lookup[breed_stem])==1:
breed_in_dict = list(breed_lookup[breed_stem])[0]
breed_dict[breed] = breed_dict[breed_in_dict]
breed_dict[breed].add(breed_in_dict)
breed_dict.pop(breed_in_dict, None)
else:
print breed,breed_stem, breed_lookup[breed_stem]
else:
not_found_breed.append(breed)
print "not found these breeds:"
print not_found_breed
"""
Explanation: Read dog breed information
End of explanation
"""
# poodles:
for breed in not_found_breed:
if breed.endswith('poodle') or breed=='wheaten terrier':
breed_dict[breed] = set(breed.split())|set([snowball.stem(w) for w in breed.split()])
breed_dict.pop('poodle', None)
# bullmastiff
if 'bull mastiff' in not_found_breed:
breed_dict['bull mastiff'] = breed_dict['bullmastiffs']
breed_dict.pop('bullmastiffs', None)
# english springer
if 'english springer' in not_found_breed:
breed_dict['english springer'] = breed_dict['english springer spaniels']
breed_dict.pop('english springer spaniels', None)
# german short haired, german shepherd and 'american bulldog'
name = 'american bulldog'
if name in not_found_breed:
breed_dict[name] = breed_dict['bulldog'] | set(name.split()) | set([snowball.stem(w) for w in name.split()])
breed_dict.pop('bulldog', None)
name = 'german shorthaired'
if name in not_found_breed:
breed_dict[name] = breed_dict['german shorthaired pointers']
breed_dict.pop('german shorthaired pointers', None)
name = 'german shepherd'
if name in not_found_breed:
breed_dict[name] = breed_dict['german shepherd dog']
breed_dict.pop('german shepherd dog', None)
# basset dog
breed_dict['basset'] = breed_dict['basset hound']|breed_dict['petits bassets griffons vendeens']
'basset' in base_breeds
sorted(breed_dict.keys())
ind = np.random.randint(df_reviews.shape[0])
text_review = df_reviews['review_content'][ind].lower()
print text_review
puncs = string.punctuation
reduced_set = set([snowball.stem(x) for x in (set(filter(lambda x: x not in puncs, word_tokenize(text_review)))
- stopword_set)])
po_breeds = []
for w in reduced_set:
if w in breed_lookup:
po_breeds.extend(breed_lookup[w])
print po_breeds
df_reviews.columns
def getReviewBreed(text):
ntext = text.decode('utf-8')
reduced_set = set([snowball.stem(x) for x in
(set(filter(lambda x: x not in string.punctuation,
word_tokenize(ntext.lower()))) - stopword_set)])
po_breeds = []
for w in reduced_set:
if w in breed_lookup:
po_breeds.extend(breed_lookup[w])
return po_breeds
def getBreedTable(df):
N = df.shape[0]
breed = []
review_id = []
toy_id = []
for ind, row in df.iterrows():
breed.append(getReviewBreed(row['review_content']))
review_id.append(row['review_id'])
toy_id.append(row['toy_id'])
return pd.DataFrame({'review_id':review_id, 'toy_id':toy_id, 'breed_extract':breed})
test_df = df_reviews.copy()
start_time = time.time()
new_df = getBreedTable(test_df)
print time.time() - start_time
new_df.head()
df_reviews['review_content'][1]
new_df.shape
df_extract = pd.merge(df_reviews, new_df, on=['review_id', 'toy_id'])
df_extract.pop('review_content')
print df_extract.shape
df_extract.head()
#ind = np.random.randint(df_extract.shape[0])
ind = 4
print df_reviews['review_content'][ind]
print df_extract['breed_extract'][ind]
df_extract['breed_extract'] = df_extract['breed_extract'].apply(lambda row:','.join(row))
df_extract.head()
np.sum(df_extract['breed_extract'].isnull())
breed_lookup['poodle']
"""
Explanation: For poodles: create a separate lookup entry for each type of poodle and delete the original 'poodle' entry.
To add: 'american bulldog', merged with 'bulldog' here.
To add: 'bull mastiff' = 'bullmastiffs'.
To add: 'english springer' = 'english springer spaniels'.
'german shorthaired' = 'german shorthaired pointers'.
'german shepherd' = 'german shepherd dog'.
To add 'basset' and merge ['basset hound', 'petits bassets griffons vendeens'].
End of explanation
"""
save_data = df_extract.to_csv(index=False)
s3_res = boto3.resource('s3')
s3_res.Bucket('dogfaces').put_object(Key='reviews/extract_breed_review.csv', Body=save_data)
# save breed_lookup
# save breed_dict
with open('breed_lookup.pickle', 'wb') as handle:
pickle.dump(breed_lookup, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('breed_dict.pickle', 'wb') as handle:
pickle.dump(breed_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
# source 2: classified dog names
breed_classes = pd.read_csv("s3://dogfaces/tensor_model/output_labels_20170907.txt",names=['breed'])
breed_classes.head()
"""
Explanation: Save intermediate dictionaries and results
End of explanation
"""
# generate a data frame, review_id, toy_id, breed
len(df_extract['review_id'].unique())
"""
Explanation: Get breed scores
End of explanation
"""
|
mdiaz236/DeepLearningFoundations | transfer-learning/Transfer_Learning.ipynb | mit | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training modern ConvNets on huge datasets like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
labels_vecs = # Your one-hot encoded labels array here
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
train_x, train_y =
val_x, val_y =
test_x, test_y =
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
logits = # output layer logits
cost = # cross entropy loss
optimizer = # training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
Kaggle/learntools | notebooks/data_viz_to_coder/raw/ex2.ipynb | apache-2.0 | import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
"""
Explanation: In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate line charts to understand patterns in the data.
Scenario
You have recently been hired to manage the museums in the City of Los Angeles. Your first project focuses on the four museums pictured in the images below.
You will leverage data from the Los Angeles Data Portal that tracks monthly visitors to each museum.
Setup
Run the next cell to import and configure the Python libraries that you need to complete the exercise.
End of explanation
"""
# Set up code checking
import os
if not os.path.exists("../input/museum_visitors.csv"):
os.symlink("../input/data-for-datavis/museum_visitors.csv", "../input/museum_visitors.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex2 import *
print("Setup Complete")
"""
Explanation: The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
"""
# Path of the file to read
museum_filepath = "../input/museum_visitors.csv"
# Fill in the line below to read the file into a variable museum_data
museum_data = ____
# Run the line below with no changes to check that you've loaded the data correctly
step_1.check()
#%%RM_IF(PROD)%%
museum_data = pd.read_csv(museum_filepath, index_col="Date", parse_dates=True)
step_1.assert_check_passed()
# Uncomment the line below to receive a hint
#_COMMENT_IF(PROD)_
step_1.hint()
# Uncomment the line below to see the solution
#_COMMENT_IF(PROD)_
step_1.solution()
"""
Explanation: Step 1: Load the data
Your first assignment is to read the LA Museum Visitors data file into museum_data. Note that:
- The filepath to the dataset is stored as museum_filepath. Please do not change the provided value of the filepath.
- The name of the column to use as row labels is "Date". (This can be seen in cell A1 when the file is opened in Excel.)
To help with this, you may find it useful to revisit some relevant code from the tutorial, which we have pasted below:
```python
Path of the file to read
spotify_filepath = "../input/spotify.csv"
Read the file into a variable spotify_data
spotify_data = pd.read_csv(spotify_filepath, index_col="Date", parse_dates=True)
```
The code you need to write now looks very similar!
End of explanation
"""
# Print the last five rows of the data
____ # Your code here
"""
Explanation: Step 2: Review the data
Use a Python command to print the last 5 rows of the data.
End of explanation
"""
# Fill in the line below: How many visitors did the Chinese American Museum
# receive in July 2018?
ca_museum_jul18 = ____
# Fill in the line below: In October 2018, how many more visitors did Avila
# Adobe receive than the Firehouse Museum?
avila_oct18 = ____
# Check your answers
step_2.check()
#%%RM_IF(PROD)%%
ca_museum_jul18 = 2620
avila_oct18 = 19280-4622
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
"""
Explanation: The last row (for 2018-11-01) tracks the number of visitors to each museum in November 2018, the next-to-last row (for 2018-10-01) tracks the number of visitors to each museum in October 2018, and so on.
Use the last 5 rows of the data to answer the questions below.
End of explanation
"""
# Line chart showing the number of visitors to each museum over time
____ # Your code here
# Check your answer
step_3.check()
#%%RM_IF(PROD)%%
plt.figure(figsize=(12,6))
sns.lineplot(data=museum_data)
plt.title("Monthly Visitors to Los Angeles City Museums")
step_3.assert_check_passed()
#%%RM_IF(PROD)%%
sns.lineplot(data=museum_data['Avila Adobe'], label="Avila Adobe")
sns.lineplot(data=museum_data['Firehouse Museum'], label="Firehouse Museum")
sns.lineplot(data=museum_data['Chinese American Museum'], label="Chinese American Museum")
sns.lineplot(data=museum_data['America Tropical Interpretive Center'], label="America Tropical Interpretive Center")
step_3.assert_check_passed()
#%%RM_IF(PROD)%%
sns.lineplot(data=museum_data['Avila Adobe'])
sns.lineplot(data=museum_data['Firehouse Museum'])
sns.lineplot(data=museum_data['Chinese American Museum'])
sns.lineplot(data=museum_data['America Tropical Interpretive Center'])
step_3.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.hint()
#_COMMENT_IF(PROD)_
step_3.solution_plot()
"""
Explanation: Step 3: Convince the museum board
The Firehouse Museum claims they ran an event in 2014 that brought an incredible number of visitors, and that they should get extra budget to run a similar event again. The other museums think these types of events aren't that important, and budgets should be split purely based on recent visitors on an average day.
To show the museum board how the event compared to regular traffic at each museum, create a line chart that shows how the number of visitors to each museum evolved over time. Your figure should have four lines (one for each museum).
(Optional) Note: If you have some prior experience with plotting figures in Python, you might be familiar with the plt.show() command. If you decide to use this command, please place it after the line of code that checks your answer (in this case, place it after step_3.check() below) -- otherwise, the checking code will return an error!
End of explanation
"""
# Line plot showing the number of visitors to Avila Adobe over time
____ # Your code here
# Check your answer
step_4.a.check()
#%%RM_IF(PROD)%%
sns.lineplot(data=museum_data['Avila Adobe'], label='avila_adobe')
step_4.a.assert_check_passed()
#%%RM_IF(PROD)%%
plt.figure(figsize=(12,6))
plt.title("Monthly Visitors to Avila Adobe")
sns.lineplot(data=museum_data['Avila Adobe'])
plt.xlabel("Date")
step_4.a.assert_check_passed()
#%%RM_IF(PROD)%%
sns.lineplot(data=museum_data['Firehouse Museum'], label="Firehouse Museum")
# Unfortunately (?), this one passes -- fix later.
step_4.a.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.a.hint()
#_COMMENT_IF(PROD)_
step_4.a.solution_plot()
"""
Explanation: Step 4: Assess seasonality
When meeting with the employees at Avila Adobe, you hear that one major pain point is that the number of museum visitors varies greatly with the seasons, with low seasons (when the employees are perfectly staffed and happy) and also high seasons (when the employees are understaffed and stressed). You realize that if you can predict these high and low seasons, you can plan ahead to hire some additional seasonal employees to help out with the extra work.
Part A
Create a line chart that shows how the number of visitors to Avila Adobe has evolved over time. (If your code returns an error, the first thing that you should check is that you've spelled the name of the column correctly! You must write the name of the column exactly as it appears in the dataset.)
End of explanation
"""
#_COMMENT_IF(PROD)_
step_4.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_4.b.solution()
"""
Explanation: Part B
Does Avila Adobe get more visitors:
- in September-February (in LA, the fall and winter months), or
- in March-August (in LA, the spring and summer)?
Using this information, when should the museum staff additional seasonal employees?
End of explanation
"""
|
hidenori-t/snippet | reading_plan.ipynb | mit | # 読書計画用スニペット
from datetime import date
import math
def reading_plan(title, total_number_of_pages, period):
current_page = int(input("Current page?: "))
deadline = (date(*period) - date.today()).days
remaining_pages = total_number_of_pages - current_page
print(title, period, "まで", math.ceil(remaining_pages / deadline), "p/day 残り",
remaining_pages, "p/", deadline, "days" )
print(date.today(), ":", current_page + math.ceil(remaining_pages / deadline), "pまで")
"""
Explanation: <a href="https://colab.research.google.com/github/hidenori-t/snippet/blob/master/reading_plan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
reading_plan("『哲学は何を問うてきたか』", 38, (2020,6,15))
reading_plan("『死す哲』", 261, (2020,6,15))
"""
Explanation: https://wwp.shizuoka.ac.jp/philosophy/%E5%93%B2%E5%AD%A6%E5%AF%BE%E8%A9%B1%E5%A1%BE/
End of explanation
"""
reading_plan("『メノン』", 263, (2020,10,17))
date(2020, 6, 20) - date.today()
reading_plan("『饗宴 解説』", 383, (2020,6,20))
"""
Explanation: $$I_{xivia} \approx (twitter + instagram + \Delta facebook ) \bmod 2$$
$$I_{xivia} \approx (note + mathtodon) \bmod 2$$
End of explanation
"""
|
johntellsall/shotglass | jupyter/timeline.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
import matplotlib.dates as mdates
from datetime import datetime
try:
# Try to fetch a list of Matplotlib releases and their dates
# from https://api.github.com/repos/matplotlib/matplotlib/releases
import urllib.request
import json
url = 'https://api.github.com/repos/matplotlib/matplotlib/releases'
url += '?per_page=100'
data = json.loads(urllib.request.urlopen(url, timeout=.4).read().decode())
dates = []
names = []
for item in data:
if 'rc' not in item['tag_name'] and 'b' not in item['tag_name']:
dates.append(item['published_at'].split("T")[0])
names.append(item['tag_name'])
# Convert date strings (e.g. 2014-10-18) to datetime
dates = [datetime.strptime(d, "%Y-%m-%d") for d in dates]
except Exception:
# In case the above fails, e.g. because of missing internet connection
# use the following lists as fallback.
names = ['v2.2.4', 'v3.0.3', 'v3.0.2', 'v3.0.1', 'v3.0.0', 'v2.2.3',
'v2.2.2', 'v2.2.1', 'v2.2.0', 'v2.1.2', 'v2.1.1', 'v2.1.0',
'v2.0.2', 'v2.0.1', 'v2.0.0', 'v1.5.3', 'v1.5.2', 'v1.5.1',
'v1.5.0', 'v1.4.3', 'v1.4.2', 'v1.4.1', 'v1.4.0']
dates = ['2019-02-26', '2019-02-26', '2018-11-10', '2018-11-10',
'2018-09-18', '2018-08-10', '2018-03-17', '2018-03-16',
'2018-03-06', '2018-01-18', '2017-12-10', '2017-10-07',
'2017-05-10', '2017-05-02', '2017-01-17', '2016-09-09',
'2016-07-03', '2016-01-10', '2015-10-29', '2015-02-16',
'2014-10-26', '2014-10-18', '2014-08-26']
# Convert date strings (e.g. 2014-10-18) to datetime
dates = [datetime.strptime(d, "%Y-%m-%d") for d in dates]
"""
Explanation: Creating a timeline with lines, dates, and text
How to create a simple timeline using Matplotlib release dates.
Timelines can be created with a collection of dates and text. In this example,
we show how to create a simple timeline using the dates for recent releases
of Matplotlib. First, we'll pull the data from GitHub.
End of explanation
"""
# Choose some nice levels
levels = np.tile([-5, 5, -3, 3, -1, 1],
int(np.ceil(len(dates)/6)))[:len(dates)]
# Create figure and plot a stem plot with the date
fig, ax = plt.subplots(figsize=(8.8, 4), constrained_layout=True)
ax.set(title="Matplotlib release dates")
ax.vlines(dates, 0, levels, color="tab:red") # The vertical stems.
ax.plot(dates, np.zeros_like(dates), "-o",
color="k", markerfacecolor="w") # Baseline and markers on it.
# annotate lines
for d, l, r in zip(dates, levels, names):
ax.annotate(r, xy=(d, l),
xytext=(-3, np.sign(l)*3), textcoords="offset points",
horizontalalignment="right",
verticalalignment="bottom" if l > 0 else "top")
# format xaxis with 4 month intervals
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=4))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"))
plt.setp(ax.get_xticklabels(), rotation=30, ha="right")
# remove y axis and spines
ax.yaxis.set_visible(False)
ax.spines[["left", "top", "right"]].set_visible(False)
ax.margins(y=0.1)
plt.show()
"""
Explanation: Next, we'll create a stem plot with some variation in levels as to
distinguish even close-by events. We add markers on the baseline for visual
emphasis on the one-dimensional nature of the time line.
For each event, we add a text label via ~.Axes.annotate, which is offset
in units of points from the tip of the event line.
Note that Matplotlib will automatically plot datetime inputs.
End of explanation
"""
|
ManchesterBioinference/BranchedGP | notebooks/Hematopoiesis.ipynb | apache-2.0 | import time
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import BranchedGP
plt.style.use("ggplot")
%matplotlib inline
"""
Explanation: Branching GP Regression on hematopoietic data
Alexis Boukouvalas, 2017
Note: this notebook is automatically generated by Jupytext, see the README for instructions on working with it.
Branching GP regression with Gaussian noise on the hematopoiesis data described in the paper "BGP: Gaussian processes for identifying branching dynamics in single cell data".
This notebook shows how to build a BGP model and plot the posterior model fit and posterior branching times.
End of explanation
"""
Y = pd.read_csv("singlecelldata/hematoData.csv", index_col=[0])
monocle = pd.read_csv("singlecelldata/hematoMonocle.csv", index_col=[0])
Y.head()
monocle.head()
# Plot Monocle DDRTree space
genelist = ["FLT3", "KLF1", "MPO"]
f, ax = plt.subplots(1, len(genelist), figsize=(10, 5), sharex=True, sharey=True)
for ig, g in enumerate(genelist):
y = Y[g].values
yt = np.log(1 + y / y.max())
yt = yt / yt.max()
h = ax[ig].scatter(
monocle["DDRTreeDim1"],
monocle["DDRTreeDim2"],
c=yt,
s=50,
alpha=1.0,
vmin=0,
vmax=1,
)
ax[ig].set_title(g)
def PlotGene(label, X, Y, s=3, alpha=1.0, ax=None):
fig = None
if ax is None:
fig, ax = plt.subplots(1, 1, figsize=(5, 5))
for li in np.unique(label):
idxN = (label == li).flatten()
ax.scatter(X[idxN], Y[idxN], s=s, alpha=alpha, label=int(np.round(li)))
return fig, ax
"""
Explanation: Read the hematopoiesis data. This has been simplified to a small subset of 23 genes found to be branching.
We have also performed Monocle2 (version 2.1) - DDRTree on this data. The results loaded include the Monocle estimated pseudotime, branching assignment (state) and the DDRTree latent dimensions.
End of explanation
"""
def FitGene(g, ns=20): # for quick results subsample data
t = time.time()
Bsearch = list(np.linspace(0.05, 0.95, 5)) + [
1.1
] # set of candidate branching points
GPy = (Y[g].iloc[::ns].values - Y[g].iloc[::ns].values.mean())[
:, None
] # remove mean from gene expression data
GPt = monocle["StretchedPseudotime"].values[::ns]
globalBranching = monocle["State"].values[::ns].astype(int)
d = BranchedGP.FitBranchingModel.FitModel(Bsearch, GPt, GPy, globalBranching)
print(g, "BGP inference completed in %.1f seconds." % (time.time() - t))
# plot BGP
fig, ax = BranchedGP.VBHelperFunctions.PlotBGPFit(
GPy, GPt, Bsearch, d, figsize=(10, 10)
)
# overplot data
f, a = PlotGene(
monocle["State"].values,
monocle["StretchedPseudotime"].values,
Y[g].values - Y[g].iloc[::ns].values.mean(),
ax=ax[0],
s=10,
alpha=0.5,
)
# Calculate Bayes factor of branching vs non-branching
bf = BranchedGP.VBHelperFunctions.CalculateBranchingEvidence(d)["logBayesFactor"]
fig.suptitle("%s log Bayes factor of branching %.1f" % (g, bf))
return d, fig, ax
d, fig, ax = FitGene("MPO")
d_c, fig_c, ax_c = FitGene("CTSG")
"""
Explanation: Fit BGP model
Notice the cell assignment uncertainty is higher for cells close to the branching point.
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2017-03-17-numpy-finite-diff.ipynb | mit | # import time module to get execution time
import time
# plotting
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Going through Mark's Ocean World climate model code, one of the improvements that we discussed this Friday was moving from nested for-loops to array-wide operations. Sometimes this is called vectorisation, even though operations are performed not only on 1D-vectors, but on N-dimensional arrays.
The most interesting part was to apply this concept to the finite differences, specifically to the horizontal diffusion, which is the mechanism of transporting heat in the ocean and the atmosphere in Mark's model (there is no advection).
Changes that we made can be viewed on GitHub:
* First introduction of vectorised finite differences (using numpy.roll() function)
* Cleaner version with a diffusion subroutine
In this notebook you can go through simpler examples of the diffusion problem and see for yourself why NumPy is great.
Examples in this notebook are taken from the book "High Performance Python".
Pure Python
Python doesn’t natively support vectorization.
There are two reasons for this:
Python lists store pointers to the actual data,
Python bytecode is not optimized for vectorization, so for loops cannot predict when using vectorization would be beneficial.
End of explanation
"""
grid_shape = (1024, 1024)
def evolve(grid, dt, out, D=1.0):
xmax, ymax = grid_shape
for i in range(xmax):
for j in range(ymax):
grid_xx = grid[(i+1) % xmax][j] + grid[(i-1) % xmax][j] - 2.0 * grid[i][j]
grid_yy = grid[i][(j+1) % ymax] + grid[i][(j-1) % ymax] - 2.0 * grid[i][j]
out[i][j] = grid[i][j] + D * (grid_xx + grid_yy) * dt
"""
Explanation: Diffusion problem
Let's use a square grid.
End of explanation
"""
def run_experiment(num_iterations):
xmax, ymax = grid_shape
# Allocate lists of lists to use as 2d arrays
next_grid = [[0.0,] * ymax for x in range(xmax)]
grid = [[0.0,] * ymax for x in range(xmax)]
# Set up initial conditions
block_low = int(grid_shape[0] * .4)
block_high = int(grid_shape[0] * .5)
for i in range(block_low, block_high):
for j in range(block_low, block_high):
grid[i][j] = 0.005
# Start of integration
start = time.time()
for i in range(num_iterations):
evolve(grid, 0.1, next_grid)
grid, next_grid = next_grid, grid
return time.time() - start
exec_time = run_experiment(10)
print('Execution time of 10 iterations: {:.2f} s'.format(exec_time))
"""
Explanation: Note that evolve() function does not return anything, because lists are mutable objects and their elements can be modified in place.
End of explanation
"""
vector = list(range(1000000))
"""
Explanation: Pretty slow!
NumPy
Luckily, numpy has all of the features we need - it stores data in contiguous chunks of memory
It supports vectorized operations on its data
As a result, any arithmetic we do on numpy arrays happens in chunks without us having to explicitly loop over each element
Example to calculate vector norm
Fake data:
End of explanation
"""
def norm_square_list(vector):
norm = 0
for v in vector:
norm += v * v
return norm
%timeit norm_square_list(vector)
def norm_square_list_comprehension(vector):
return sum([v*v for v in vector])
%timeit norm_square_list_comprehension(vector)
def norm_squared_generator_comprehension(vector):
return sum(v*v for v in vector)
%timeit norm_squared_generator_comprehension(vector)
"""
Explanation: pure python
End of explanation
"""
import numpy as np
vector = np.arange(1000000)
def norm_square_numpy(vector):
return np.sum(vector * vector)
%timeit norm_square_numpy(vector)
def norm_square_numpy_dot(vector):
return np.dot(vector, vector)
%timeit norm_square_numpy_dot(vector)
"""
Explanation: the same, but with numpy
End of explanation
"""
a = np.arange(10, 20, 2)
a
np.diff(a)
a[1:] - a[:-1]
"""
Explanation: Applying numpy to finite differences
Generic differences
numpy.gradient()
numpy.diff()
End of explanation
"""
grid_shape = (1024, 1024)
def laplacian(grid):
return (np.roll(grid, +1, 0) + np.roll(grid, -1, 0) +
np.roll(grid, +1, 1) + np.roll(grid, -1, 1) - 4 * grid)
def evolve(grid, dt, D=1):
return grid + dt * D * laplacian(grid)
def run_experiment_numpy(num_iterations):
# Allocate 2d array
grid = np.zeros(grid_shape)
# Initial conditions
block_low = int(grid_shape[0] * .4)
block_high = int(grid_shape[0] * .5)
grid[block_low:block_high, block_low:block_high] = 0.005
# Save the initial conditions to use in the plotting section
grid0 = grid.copy()
# Integrate in time
start = time.time()
for i in range(num_iterations):
grid = evolve(grid, 0.1)
return grid0, grid, time.time() - start
n = 1000
g0, g, exec_time = run_experiment_numpy(n)
print('Execution time of 10 iterations: {:.2f} s'.format(exec_time / (n / 10)))
"""
Explanation: Diffusion Problem
To compare to the pure Python we use the same grid and same initial conditions.
End of explanation
"""
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(16, 6))
h = ax0.pcolormesh(g0)
fig.colorbar(h, ax=ax0)
ax0.set_title('Initial conditions')
h = ax1.pcolormesh(g)
fig.colorbar(h, ax=ax1)
ax1.set_title('After {n} iterations'.format(n=n))
"""
Explanation: We can see that even for this simple example using numpy improves performance by at least 1-2 orders of magnitude. The code also is cleaner and shorter than in native-Python implementation.
End of explanation
"""
HTML(html)
"""
Explanation: References
M. Gorelick, I. Ozsvald, 2014. High Performance Python. O'Reilly Media.
End of explanation
"""
|
weikang9009/pysal | notebooks/explore/spaghetti/Network_Usage.ipynb | bsd-3-clause | import os
last_modified = None
if os.name == "posix":
last_modified = !stat -f\
"# This notebook was last updated: %Sm"\
Network_Usage.ipynb
elif os.name == "nt":
last_modified = !for %a in (Network_Usage.ipynb)\
do echo # This notebook was last updated: %~ta
if last_modified:
get_ipython().set_next_input(last_modified[-1])
# This notebook was last updated: May 13 20:21:51 2019
"""
Explanation: Basic Tutorial for pysal.spaghetti
End of explanation
"""
# pysal submodule imports
from pysal.lib import examples
from pysal.explore import spaghetti as spgh
from pysal.explore import esda
import numpy as np
import matplotlib.pyplot as plt
import time
%matplotlib inline
__author__ = "James Gaboardi <jgaboardi@gmail.com>"
"""
Explanation:
End of explanation
"""
ntw = spgh.Network(in_data=examples.get_path('streets.shp'))
"""
Explanation: Instantiate a network
End of explanation
"""
# Crimes
ntw.snapobservations(examples.get_path('crimes.shp'),
'crimes',
attribute=True)
# Schools
ntw.snapobservations(examples.get_path('schools.shp'),
'schools',
attribute=False)
"""
Explanation: Snap point patterns to the network
End of explanation
"""
ntw.pointpatterns
"""
Explanation: A network is composed of a single topological representation of roads and $n$ point patterns which are snapped to the network.
End of explanation
"""
counts = ntw.count_per_link(ntw.pointpatterns['crimes'].obs_to_arc,
graph=False)
sum(list(counts.values())) / float(len(counts.keys()))
"""
Explanation: Attributes for every point pattern
dist_snapped dict keyed by pointid with the value as snapped distance from observation to network arc
dist_to_vertex dict keyed by pointid with the value being a dict in the form
{node: distance to vertex, node: distance to vertex}
npoints point observations in set
obs_to_arc dict keyed by arc with the value being a dict in the form
{pointID:(x-coord, y-coord), pointID:(x-coord, y-coord), ... }
obs_to_vertex list of incident network vertices to snapped observation points
points geojson like representation of the point pattern. Includes properties if read with attributes=True
snapped_coordinates dict keyed by pointid with the value being (x-coord, y-coord)
Counts per link (arc or edge) are important, but should not be precomputed since we have different representations of the network (spatial and graph currently). (Relatively) Uniform segmentation still needs to be done.
End of explanation
"""
n200 = ntw.split_arcs(200.0)
counts = n200.count_per_link(n200.pointpatterns['crimes'].obs_to_arc,
graph=False)
sum(counts.values()) / float(len(counts.keys()))
"""
Explanation: Network segmentation
End of explanation
"""
# 'full' unsegmented network
vertices_df, arcs_df = spgh.element_as_gdf(ntw,
vertices=ntw.vertex_coords,
arcs=ntw.arcs)
# network segmented at 200-meter increments
vertices200_df, arcs200_df = spgh.element_as_gdf(n200,
vertices=n200.vertex_coords,
arcs=n200.arcs)
"""
Explanation: Create geopandas.GeoDataFrame objects of the vertices and arcs
End of explanation
"""
base = arcs_df.plot(color='k', alpha=.25, figsize=(12,12))
vertices_df.plot(ax=base, color='b', markersize=300, alpha=.25)
arcs200_df.plot(ax=base, color='k', alpha=.25)
vertices200_df.plot(ax=base, color='r', markersize=25, alpha=1.)
"""
Explanation: Visualization of the shapefile derived, unsegmented network with vertices in a larger, blue, semi-opaque form and the distance segmented network with small, red, fully opaque vertices.
End of explanation
"""
# Binary Adjacency
w = ntw.contiguityweights(graph=False)
# Build the y vector
arcs = w.neighbors.keys()
y = np.zeros(len(arcs))
for i, a in enumerate(arcs):
if a in counts.keys():
y[i] = counts[a]
# Moran's I
res = esda.moran.Moran(y,
w,
permutations=99)
print(dir(res))
"""
Explanation: Moran's I using the digitized network
End of explanation
"""
counts = ntw.count_per_link(ntw.pointpatterns['crimes'].obs_to_arc,
graph=True)
# Binary Adjacency
w = ntw.contiguityweights(graph=True)
# Build the y vector
edges = w.neighbors.keys()
y = np.zeros(len(edges))
for i, e in enumerate(edges):
if e in counts.keys():
y[i] = counts[e]
# Moran's I
res = esda.moran.Moran(y,
w,
permutations=99)
print(dir(res))
"""
Explanation: Moran's I using the graph representation to generate the W
Note that we have to regenerate the counts per arc, since the graph will have less edges.
End of explanation
"""
# Binary Adjacency
w = n200.contiguityweights(graph=False)
# Compute the counts
counts = n200.count_per_link(n200.pointpatterns['crimes'].obs_to_arc,
graph=False)
# Build the y vector and convert from raw counts to intensities
arcs = w.neighbors.keys()
y = np.zeros(len(arcs))
for i, a in enumerate(arcs):
if a in counts.keys():
length = n200.arc_lengths[a]
y[i] = counts[a] / length
# Moran's I
res = esda.moran.Moran(y,
w,
permutations=99)
print(dir(res))
"""
Explanation: Moran's I using the segmented network and intensities instead of counts
End of explanation
"""
t1 = time.time()
n0 = ntw.allneighbordistances(ntw.pointpatterns['crimes'])
print(time.time()-t1)
t1 = time.time()
n1 = n200.allneighbordistances(n200.pointpatterns['crimes'])
print(time.time()-t1)
"""
Explanation: Timings for distance based methods, e.g. G-function
End of explanation
"""
t1 = time.time()
n0 = ntw.allneighbordistances(ntw.pointpatterns['crimes'])
print(time.time()-t1)
t1 = time.time()
n1 = n200.allneighbordistances(n200.pointpatterns['crimes'])
print(time.time()-t1)
"""
Explanation: Note that the first time these methods are called, the underlying vertex-to-vertex shortest path distance matrix has to be calculated. Subsequent calls will not require this, and will be much faster:
End of explanation
"""
npts = ntw.pointpatterns['crimes'].npoints
sim = ntw.simulate_observations(npts)
sim
"""
Explanation: Simulate a point pattern on the network
Need to supply a count of the number of points and a distirbution (default is uniform). Generally, this will not be called by the user, since the simulation will be used for Monte Carlo permutation.
End of explanation
"""
fres = ntw.NetworkF(ntw.pointpatterns['crimes'],
permutations=99)
plt.figure(figsize=(8,8))
plt.plot(fres.xaxis, fres.observed, 'b-', linewidth=1.5, label='Observed')
plt.plot(fres.xaxis, fres.upperenvelope, 'r--', label='Upper')
plt.plot(fres.xaxis, fres.lowerenvelope, 'k--', label='Lower')
plt.legend(loc='best', fontsize='x-large')
plt.title('Network F Function', fontsize='xx-large')
plt.show()
"""
Explanation: F-function
End of explanation
"""
gres = ntw.NetworkG(ntw.pointpatterns['crimes'],
permutations=99)
plt.figure(figsize=(8,8))
plt.plot(gres.xaxis, gres.observed, 'b-', linewidth=1.5, label='Observed')
plt.plot(gres.xaxis, gres.upperenvelope, 'r--', label='Upper')
plt.plot(gres.xaxis, gres.lowerenvelope, 'k--', label='Lower')
plt.legend(loc='best', fontsize='x-large')
plt.title('Network G Function', fontsize='xx-large')
plt.show()
"""
Explanation: Create a nearest neighbor matrix using the crimes point pattern
[note from jlaura] Right now, both the G and K functions generate a full distance matrix. This is because I know that the full generation is correct, and I believe that the truncated generation, e.g. nearest neighbor, has a bug.
G-function
End of explanation
"""
kres = ntw.NetworkK(ntw.pointpatterns['crimes'],
permutations=99)
plt.figure(figsize=(8,8))
plt.plot(kres.xaxis, kres.observed, 'b-', linewidth=1.5, label='Observed')
plt.plot(kres.xaxis, kres.upperenvelope, 'r--', label='Upper')
plt.plot(kres.xaxis, kres.lowerenvelope, 'k--', label='Lower')
plt.legend(loc='best', fontsize='x-large')
plt.title('Network K Function', fontsize='xx-large')
plt.show()
"""
Explanation: K-function
End of explanation
"""
|
tschinz/iPython_Workspace | 02_WP/VHDL/Steppermotordriver_L6208PD.ipynb | gpl-2.0 | # Function to calculate the number of bits needed for a given number
def unsigned_num_bits(num):
    _nbits = 1
    _n = num
    while(_n > 1):
        _nbits = _nbits + 1
        _n = _n // 2  # integer division; plain '/' yields floats in Python 3 and overcounts bits
    return _nbits
"""
Explanation: VHDL implementation of a stepper motor driver for the L6208PD
End of explanation
"""
rev_distance = 0.5 # mm
step_angle = 1.8 # °
# Calculation one Step
step_distance = rev_distance/360*step_angle
print("Step Distance = {} mm".format(step_distance))
print("Step Distance = {} um".format(step_distance*1000))
# Calculation max and min register position
RegBitNb = 32
regval_max = 2**(RegBitNb-1)-1
regval_min = -2**(RegBitNb-1)
step_distance_max = regval_max*step_distance
step_distance_min = regval_min*step_distance
print("Register Position Values = {} ... {}".format(regval_max, regval_min))
print("Position Register distances = {} m ... {} m".format(step_distance_max/1000, step_distance_min/1000))
"""
Explanation: Stepper motor ST4118S0206-A settings
Speed = $120\frac{1}{min}$
1 Revolution = $0.5mm$
1 Step = $1.8°$
Distance calculation
End of explanation
"""
speed_max = 60# rev/min
step_angle = 1.8 # °
steps_per_rev = 360/step_angle
speed_max_sec = speed_max/60 # rev/sec
f_max = speed_max_sec * steps_per_rev
print("Max Frequency of Steppermotor is {} Hz".format(f_max))
"""
Explanation: Max Frequency calculation
$f_{max} = speed \cdot steps\_per\_rev = \frac{1}{s} \cdot 1 = \frac{1}{s}$
End of explanation
"""
speed_resolution = 2**8 # different speed values
clk_freq = 100e6 # Hz
speed_max = 120*1/60 # rev/min * min/s = rev/s
steps_per_rev = 200 # steps per revolution
g_max_speed = ((speed_resolution-1)*clk_freq)/(speed_max*steps_per_rev)
print("g_MAX_SPEED = {} needs {} Bits".format(int(g_max_speed), unsigned_num_bits(int(g_max_speed))))
"""
Explanation: Max Speed calculations
$g\_MAX\_SPEED = \frac{(speed_{resolution}-1)\cdot clk_{freq}}{speed_{max}\cdot steps\_per\_rev} = \frac{([values]-1)\cdot[Hz]}{[\frac{rev}{s}]\cdot[\frac{steps}{rev}]}$
End of explanation
"""
speed_resolution = 2**8 # different speed values
clk_freq = 100e6 # Hz
speed_max = 60*1/60 # rev/min * min/s = rev/s
max_acceleration_time = 2.0 # seconds from 0 to max speed
max_acceleration_rev = speed_max/max_acceleration_time # rev/s^2
max_decceleration_time = 1.0 # seconds from max to 0 speed
max_decceleration_rev = speed_max/max_decceleration_time # rev/s^2
g_max_acceleration = (speed_max*clk_freq)/((speed_resolution-1)*max_acceleration_rev)
g_max_decceleration = (speed_max*clk_freq)/((speed_resolution-1)*max_decceleration_rev)
print("g_MAX_ACCELERATION = {} needs {} Bits".format(int(g_max_acceleration),unsigned_num_bits(int(g_max_acceleration))))
print("g_MAX_DECCELERATION = {} needs {} Bits".format(int(g_max_decceleration),unsigned_num_bits(int(g_max_decceleration))))
"""
Explanation: Max Acceleration calculations
$g\_MAX\_ACCELERATION = \frac{speed_{max}\cdot clk_{freq}}{(speed_{resolution}-1)\cdot acceleration\_speed} = \frac{[\frac{rev}{s}]\cdot[Hz]}{([values]-1)\cdot[\frac{rev}{s^{2}}]}$
$g\_MAX\_DECCELERATION = \frac{speed_{max}\cdot clk_{freq}}{(speed_{resolution}-1)\cdot decceleration\_speed} = \frac{[\frac{rev}{s}]\cdot[Hz]}{([values]-1)\cdot[\frac{rev}{s^{2}}]}$
End of explanation
"""
import math
speed_resolution = 2**8 # different speed values
speed_max = 120*1/60 # rev/min * min/s = rev/s
max_acceleration_time = 2.0 # seconds from 0 to max speed
max_acceleration_rev = speed_max/max_acceleration_time # rev/s^2
def calc_speed_intended(max_acceleration_rev, position_difference):
# return round(math.sqrt(2*64*max_acceleration_rev*position_difference))
return round(41*math.log(max_acceleration_rev*position_difference+1))
for position_difference in [0,1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536]:
speed_intended = calc_speed_intended(max_acceleration_rev, position_difference)
if speed_intended > speed_resolution-1:
speed_intended = speed_resolution-1
print("speed_intended: {:3} @ position_difference: {:5}".format(int(speed_intended),position_difference))
# Draw Plot
import numpy as np
import pylab as pl
pl.clf()
nbrOfPoints = 600
position_difference = np.linspace(0,nbrOfPoints,nbrOfPoints)
speed_intended = np.empty(shape=[len(position_difference)], dtype=np.float64)
for i in range(len(position_difference)):
speed_intended[i] = calc_speed_intended(max_acceleration_rev, position_difference[i])
if speed_intended[i] > speed_resolution-1:
speed_intended[i] = speed_resolution-1
# Plot graph
pl.plot(position_difference,speed_intended, label="Acceleration")
speed_intended = np.empty(shape=[len(position_difference)], dtype=np.float64)
for i in range(len(position_difference)):
speed_intended[i] = 255-calc_speed_intended(max_acceleration_rev, position_difference[i])
if speed_intended[i] <= 0:
speed_intended[i] = 0
# Plot graph
pl.plot(position_difference,speed_intended, label="Decceleration")
# Place legend, Axis and Title
pl.legend(loc='best')
pl.xlabel("PositionDifference [Steps]")
pl.ylabel("Speed [0-255]")
pl.title("Acceleration & Deccleration")
"""
Explanation: Speed intended calculations
$ speed_{intended} = \sqrt{2 \cdot 64 \cdot g\_MAX\_ACCELERATION \cdot |position_{difference}|} $
or
$ speed_{intended} = 41 \cdot \ln(g\_MAX\_ACCELERATION \cdot |position_{difference}| + 1) $
End of explanation
"""
f_clk = 100e6 # Hz
f_step_max = 100e3 # Hz
g_step_freq = f_clk/f_step_max
print("Number of steps for max step frequency: {} needs {} Bits".format(int(g_step_freq), unsigned_num_bits(g_step_freq)))
"""
Explanation: Max Step Frequency
$g\_STEP\_FREQ = \frac{f_{clk}}{f\_step\_driver_{max}}$
For $f\_step\_driver_{max}$ see the motor driver datasheet (L6208 = $100kHz$)
End of explanation
"""
|
lithiumdenis/MLSchool | 3. Котики и собачки.ipynb | mit | visual = pd.read_csv('data/CatsAndDogs/TRAIN2.csv')
# Create an Outcome column showing whether the animal was taken from the shelter
# First fill it as if every case ended well
visual['Outcome'] = 'true'
# Zero out the unhappy cases
visual.loc[visual.OutcomeType == 'Euthanasia', 'Outcome'] = 'false'
visual.loc[visual.OutcomeType == 'Died', 'Outcome'] = 'false'
# Replace rows where SexuponOutcome is NaN with something meaningful
visual.loc[visual.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'
# Create two separate columns for gender and fertility
visual['Gender'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
visual['Fertility'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[0])
"""
Explanation: Data
Take the data from https://www.kaggle.com/c/shelter-animal-outcomes .
Note that this time we have many classes; read in the Evaluation section how the final score is computed.
Visualization
<div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 1.</h3>
</div>
</div>
By building the appropriate plots, find out whether an animal's age, gender or fertility affects its chances of being taken from the shelter.
Prepare the data
End of explanation
"""
mergedByAges = visual.groupby('AgeuponOutcome')['Outcome'].value_counts().to_dict()
results = pd.DataFrame(data = mergedByAges, index=[0]).stack().fillna(0).transpose()
results.columns = pd.Index(['true', 'false'])
results['total'] = results.true + results.false
results.sort_values(by='true', ascending=False, inplace=True)
results[['true', 'false']].plot(kind='bar', stacked=False, rot=45);
"""
Explanation: Compare by age
End of explanation
"""
mergedByGender = visual.groupby('Gender')['Outcome'].value_counts().to_dict()
results = pd.DataFrame(data = mergedByGender, index=[0]).stack().fillna(0).transpose()
results.columns = pd.Index(['true', 'false'])
results['total'] = results.true + results.false
results.sort_values(by='true', ascending=False, inplace=True)
results[['true', 'false']].plot(kind='bar', stacked=True, rot=45);
"""
Explanation: Compare by gender
End of explanation
"""
mergedByFert = visual.groupby('Fertility')['Outcome'].value_counts().to_dict()
results = pd.DataFrame(data = mergedByFert, index=[0]).stack().fillna(0).transpose()
results.columns = pd.Index(['true', 'false'])
results['total'] = results.true + results.false
results.sort_values(by='true', ascending=False, inplace=True)
results[['true', 'false']].plot(kind='bar', stacked=True, rot=45);
"""
Explanation: Compare by fertility
End of explanation
"""
train, test = pd.read_csv(
    'data/CatsAndDogs/TRAIN2.csv' # our data
    #'data/CatsAndDogs/train.csv' # original data
), pd.read_csv(
    'data/CatsAndDogs/TEST2.csv' # our data
    #'data/CatsAndDogs/test.csv' # original data
)
train.head()
test.shape
"""
Explanation: <b>Conclusion on age:</b> animals that are neither the oldest nor the youngest are adopted most readily
<br>
<b>Conclusion on gender:</b> by and large it does not matter
<br>
<b>Conclusion on fertility:</b> animals with intact reproductive abilities are adopted more readily. However, the other two groups are essentially similar and, if added together, the difference is not that large.
Building the models
<div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 2.</h3>
</div>
</div>
Look through the notebook on generating new features. Create as many relevant features as you can from everything available.
Do not forget to process the hold-out set (test) in parallel, so that it has the same features as the training set.
<b>Let's take the original data</b>
End of explanation
"""
# First, by analogy with the visualization
# Replace rows where SexuponOutcome, Breed or Color is NaN
train.loc[train.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'
train.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0'
train.loc[train.Breed.isnull(), 'Breed'] = 'Unknown'
train.loc[train.Color.isnull(), 'Color'] = 'Unknown'
# Create two separate columns for gender and fertility
train['Gender'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
train['Fertility'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[0])
# Now something new
# A column marking whether the animal has a name or not
train['hasName'] = 1
train.loc[train.Name.isnull(), 'hasName'] = 0
# A column combining breed and color
train['breedColor'] = train.apply(lambda row: row['Breed'] + ' ' + str(row['Color']), axis=1)
# Decompose DateTime
# First, convert the column from string to DateTime type
train['DateTime'] = pd.to_datetime(train['DateTime'])
# And now decompose it
train['dayOfWeek'] = train.DateTime.apply(lambda dt: dt.dayofweek)
train['month'] = train.DateTime.apply(lambda dt: dt.month)
train['day'] = train.DateTime.apply(lambda dt: dt.day)
train['quarter'] = train.DateTime.apply(lambda dt: dt.quarter)
train['hour'] = train.DateTime.apply(lambda dt: dt.hour)
train['minute'] = train.DateTime.apply(lambda dt: dt.minute)  # was dt.hour, which just duplicated the hour column
train['year'] = train.DateTime.apply(lambda dt: dt.year)
# Split the age
# Create two separate columns: the count and the year/month unit
train['AgeuponFirstPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[0])
train['AgeuponSecondPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[-1])
# Roughly convert years, months and weeks into days, handling the plural 's' endings
train['AgeuponSecondPartInDays'] = 0
train.loc[train.AgeuponSecondPart == 'year', 'AgeuponSecondPartInDays'] = 365
train.loc[train.AgeuponSecondPart == 'years', 'AgeuponSecondPartInDays'] = 365
train.loc[train.AgeuponSecondPart == 'month', 'AgeuponSecondPartInDays'] = 30
train.loc[train.AgeuponSecondPart == 'months', 'AgeuponSecondPartInDays'] = 30
train.loc[train.AgeuponSecondPart == 'week', 'AgeuponSecondPartInDays'] = 7
train.loc[train.AgeuponSecondPart == 'weeks', 'AgeuponSecondPartInDays'] = 7
# First, convert the columns from string to numeric type
train['AgeuponFirstPart'] = pd.to_numeric(train['AgeuponFirstPart'])
train['AgeuponSecondPartInDays'] = pd.to_numeric(train['AgeuponSecondPartInDays'])
# And now get the lifetime in days
train['LifetimeInDays'] = train['AgeuponFirstPart'] * train['AgeuponSecondPartInDays']
# Drop the now-meaningless intermediate columns
train = train.drop(['AgeuponSecondPartInDays', 'AgeuponSecondPart', 'AgeuponFirstPart'], axis=1)
train.head()
"""
Explanation: <b>Add new features to train</b>
End of explanation
"""
# First, by analogy with the visualization
# Replace rows where SexuponOutcome, Breed or Color is NaN
test.loc[test.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'
test.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0'
test.loc[test.Breed.isnull(), 'Breed'] = 'Unknown'
test.loc[test.Color.isnull(), 'Color'] = 'Unknown'
# Create two separate columns for gender and fertility
test['Gender'] = test.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
test['Fertility'] = test.SexuponOutcome.apply(lambda s: s.split(' ')[0])
# Now something new
# A column marking whether the animal has a name or not
test['hasName'] = 1
test.loc[test.Name.isnull(), 'hasName'] = 0
# A column combining breed and color
test['breedColor'] = test.apply(lambda row: row['Breed'] + ' ' + str(row['Color']), axis=1)
# Decompose DateTime
# First, convert the column from string to DateTime type
test['DateTime'] = pd.to_datetime(test['DateTime'])
# And now decompose it
test['dayOfWeek'] = test.DateTime.apply(lambda dt: dt.dayofweek)
test['month'] = test.DateTime.apply(lambda dt: dt.month)
test['day'] = test.DateTime.apply(lambda dt: dt.day)
test['quarter'] = test.DateTime.apply(lambda dt: dt.quarter)
test['hour'] = test.DateTime.apply(lambda dt: dt.hour)
test['minute'] = test.DateTime.apply(lambda dt: dt.minute)  # was dt.hour, which just duplicated the hour column
test['year'] = test.DateTime.apply(lambda dt: dt.year)
# Split the age
# Create two separate columns: the count and the year/month unit
test['AgeuponFirstPart'] = test.AgeuponOutcome.apply(lambda s: s.split(' ')[0])
test['AgeuponSecondPart'] = test.AgeuponOutcome.apply(lambda s: s.split(' ')[-1])
# Roughly convert years, months and weeks into days, handling the plural 's' endings
test['AgeuponSecondPartInDays'] = 0
test.loc[test.AgeuponSecondPart == 'year', 'AgeuponSecondPartInDays'] = 365
test.loc[test.AgeuponSecondPart == 'years', 'AgeuponSecondPartInDays'] = 365
test.loc[test.AgeuponSecondPart == 'month', 'AgeuponSecondPartInDays'] = 30
test.loc[test.AgeuponSecondPart == 'months', 'AgeuponSecondPartInDays'] = 30
test.loc[test.AgeuponSecondPart == 'week', 'AgeuponSecondPartInDays'] = 7
test.loc[test.AgeuponSecondPart == 'weeks', 'AgeuponSecondPartInDays'] = 7
# First, convert the columns from string to numeric type
test['AgeuponFirstPart'] = pd.to_numeric(test['AgeuponFirstPart'])
test['AgeuponSecondPartInDays'] = pd.to_numeric(test['AgeuponSecondPartInDays'])
# And now get the lifetime in days
test['LifetimeInDays'] = test['AgeuponFirstPart'] * test['AgeuponSecondPartInDays']
# Drop the now-meaningless intermediate columns
test = test.drop(['AgeuponSecondPartInDays', 'AgeuponSecondPart', 'AgeuponFirstPart'], axis=1)
test.head()
"""
Explanation: <b>Add new features to test in the same way</b>
End of explanation
"""
np.random.seed(1234)  # was `np.random.seed = 1234`, which overwrites the function instead of seeding
from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing
##################### Replace NaN values with the word 'Unknown' ##################
# Remove NaN values from train
train.loc[train.AnimalID.isnull(), 'AnimalID'] = 'Unknown'
train.loc[train.Name.isnull(), 'Name'] = 'Unknown'
train.loc[train.OutcomeType.isnull(), 'OutcomeType'] = 'Unknown'
train.loc[train.AnimalType.isnull(), 'AnimalType'] = 'Unknown'
train.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'
train.loc[train.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'
# Remove NaN values from test
test.loc[test.AnimalID.isnull(), 'AnimalID'] = 'Unknown'
test.loc[test.Name.isnull(), 'Name'] = 'Unknown'
test.loc[test.AnimalType.isnull(), 'AnimalType'] = 'Unknown'
test.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'
test.loc[test.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'
##################### Encode words as numbers ################################
# Encode AnimalID as numbers instead of names in test & train (disabled)
#encAnimalID = preprocessing.LabelEncoder()
#encAnimalID.fit(pd.concat((test['AnimalID'], train['AnimalID'])))
#test['AnimalID'] = encAnimalID.transform(test['AnimalID'])
#train['AnimalID'] = encAnimalID.transform(train['AnimalID'])
# Encode Name as numbers instead of names in test & train
encName = preprocessing.LabelEncoder()
encName.fit(pd.concat((test['Name'], train['Name'])))
test['Name'] = encName.transform(test['Name'])
train['Name'] = encName.transform(train['Name'])
# Encode DateTime as numbers instead of names in test & train
encDateTime = preprocessing.LabelEncoder()
encDateTime.fit(pd.concat((test['DateTime'], train['DateTime'])))
test['DateTime'] = encDateTime.transform(test['DateTime'])
train['DateTime'] = encDateTime.transform(train['DateTime'])
# Encode OutcomeType as numbers in train only, since test does not have it
encOutcomeType = preprocessing.LabelEncoder()
encOutcomeType.fit(train['OutcomeType'])
train['OutcomeType'] = encOutcomeType.transform(train['OutcomeType'])
# Encode AnimalType as numbers instead of names in test & train
encAnimalType = preprocessing.LabelEncoder()
encAnimalType.fit(pd.concat((test['AnimalType'], train['AnimalType'])))
test['AnimalType'] = encAnimalType.transform(test['AnimalType'])
train['AnimalType'] = encAnimalType.transform(train['AnimalType'])
# Encode SexuponOutcome as numbers instead of names in test & train
encSexuponOutcome = preprocessing.LabelEncoder()
encSexuponOutcome.fit(pd.concat((test['SexuponOutcome'], train['SexuponOutcome'])))
test['SexuponOutcome'] = encSexuponOutcome.transform(test['SexuponOutcome'])
train['SexuponOutcome'] = encSexuponOutcome.transform(train['SexuponOutcome'])
# Encode AgeuponOutcome as numbers instead of names in test & train
encAgeuponOutcome = preprocessing.LabelEncoder()
encAgeuponOutcome.fit(pd.concat((test['AgeuponOutcome'], train['AgeuponOutcome'])))
test['AgeuponOutcome'] = encAgeuponOutcome.transform(test['AgeuponOutcome'])
train['AgeuponOutcome'] = encAgeuponOutcome.transform(train['AgeuponOutcome'])
# Encode Breed as numbers instead of names in test & train
encBreed = preprocessing.LabelEncoder()
encBreed.fit(pd.concat((test['Breed'], train['Breed'])))
test['Breed'] = encBreed.transform(test['Breed'])
train['Breed'] = encBreed.transform(train['Breed'])
# Encode Color as numbers instead of names in test & train
encColor = preprocessing.LabelEncoder()
encColor.fit(pd.concat((test['Color'], train['Color'])))
test['Color'] = encColor.transform(test['Color'])
train['Color'] = encColor.transform(train['Color'])
# Encode Gender as numbers instead of names in test & train
encGender = preprocessing.LabelEncoder()
encGender.fit(pd.concat((test['Gender'], train['Gender'])))
test['Gender'] = encGender.transform(test['Gender'])
train['Gender'] = encGender.transform(train['Gender'])
# Encode Fertility as numbers instead of names in test & train
encFertility = preprocessing.LabelEncoder()
encFertility.fit(pd.concat((test['Fertility'], train['Fertility'])))
test['Fertility'] = encFertility.transform(test['Fertility'])
train['Fertility'] = encFertility.transform(train['Fertility'])
# Encode breedColor as numbers instead of names in test & train
encbreedColor = preprocessing.LabelEncoder()
encbreedColor.fit(pd.concat((test['breedColor'], train['breedColor'])))
test['breedColor'] = encbreedColor.transform(test['breedColor'])
train['breedColor'] = encbreedColor.transform(train['breedColor'])
#################################### Preprocessing #################################
from sklearn.model_selection import cross_val_score
#poly_features = preprocessing.PolynomialFeatures(3)
# Prepare the data so that X_tr is the table without AnimalID and OutcomeType, and y_tr keeps OutcomeType
X_tr, y_tr = train.drop(['AnimalID', 'OutcomeType'], axis=1), train['OutcomeType']
# Convert the DataFrame to an array and apply some preprocessing to it (disabled)
#X_tr = poly_features.fit_transform(X_tr)
X_tr.head()
"""
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Задание 3.</h3>
</div>
</div>
Выполните отбор признаков, попробуйте различные методы. Проверьте качество на кросс-валидации.
Выведите топ самых важных и самых незначащих признаков.
Предобработка данных
End of explanation
"""
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
skb = SelectKBest(mutual_info_classif, k=15)
x_new = skb.fit_transform(X_tr, y_tr)
x_new
"""
Explanation: Statistical tests
End of explanation
"""
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
names = X_tr.columns.values
lr = LinearRegression()
rfe = RFE(lr, n_features_to_select=1)
rfe.fit(X_tr,y_tr);
print("Features sorted by their rank:")
print(sorted(zip(map(lambda x: round(x, 4), rfe.ranking_), names)))
"""
Explanation: Wrapper methods
End of explanation
"""
from sklearn.linear_model import Lasso
clf = Lasso()
clf.fit(X_tr, y_tr);
clf.coef_
features = X_tr.columns.values
print('Lasso dropped a total of %s variables' % (clf.coef_ == 0).sum())
print('These are the features:')
for s in features[np.where(clf.coef_ == 0)[0]]:
print(' * ', s)
"""
Explanation: Feature selection using the Lasso model
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
clf = RandomForestRegressor()
clf.fit(X_tr, y_tr);
clf.feature_importances_
# Indices of the features sorted by importance (ascending)
imp_feature_idx = clf.feature_importances_.argsort()
imp_feature_idx
features = X_tr.columns.values
# Print features from least to most important, with their importance scores
# (the original loop paired each feature with an unrelated argsort index)
for idx in imp_feature_idx:
    print(features[idx], clf.feature_importances_[idx])
"""
Explanation: Feature selection using the RandomForest model
End of explanation
"""
# First, drop the unneeded features identified in the previous step
X_tr = X_tr.drop(['Name', 'DateTime', 'month', 'day', 'Breed', 'breedColor'], axis=1)
test = test.drop(['Name', 'DateTime', 'month', 'day', 'Breed', 'breedColor'], axis=1)
X_tr.head()
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
clf1 = LogisticRegression(random_state=1234)
clf3 = GaussianNB()
clf5 = KNeighborsClassifier()
eclf = VotingClassifier(estimators=[
('lr', clf1), ('gnb', clf3), ('knn', clf5)],
voting='soft', weights=[1,1,10])
scores = cross_val_score(eclf, X_tr, y_tr)
eclf = eclf.fit(X_tr, y_tr)
print('Worst CV fold score:', scores.min())
#delete AnimalID from test
X_te = test.drop(['AnimalID'], axis=1)
X_te.head()
y_te = eclf.predict(X_te)
y_te
ans_nn = pd.DataFrame({'AnimalID': test['AnimalID'], 'type': encOutcomeType.inverse_transform(y_te)})
ans_nn.head()
# Define a helper function for the one-hot transformation
def onehot_encode(df_train, column):
from sklearn.preprocessing import LabelBinarizer
cs = df_train.select_dtypes(include=['O']).columns.values
if column not in cs:
return (df_train, None)
rest = [x for x in df_train.columns.values if x != column]
lb = LabelBinarizer()
train_data = lb.fit_transform(df_train[column])
new_col_names = ['%s' % x for x in lb.classes_]
if len(new_col_names) != train_data.shape[1]:
new_col_names = new_col_names[::-1][:train_data.shape[1]]
new_train = pd.concat((df_train.drop([column], axis=1), pd.DataFrame(data=train_data, columns=new_col_names)), axis=1)
return (new_train, lb)
ans_nn, lb = onehot_encode(ans_nn, 'type')
ans_nn
ans_nn.head()
"""
Explanation: <b>Conclusion on features:</b>
<br>
<b>Not needed:</b> Name, DateTime, month, day, Breed, breedColor. Everything else is less clear-cut and can be kept.
<div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 4.</h3>
</div>
</div>
Try blending different models with <b>sklearn.ensemble.VotingClassifier</b>. Did the accuracy increase? Did the variance change?
End of explanation
"""
test.shape[0] == ans_nn.shape[0]
#Сделаем нумерацию индексов не с 0, а с 1
ans_nn.index += 1
#Воткнем столбец с индексами как столбец в конкретное место
ans_nn.insert(0, 'ID', ans_nn.index)
# Drop AnimalID from the answer dataframe
ans_nn = ans_nn.drop(['AnimalID'], axis=1)
ans_nn.head()
# Save the result
ans_nn.to_csv('ans_catdog.csv', index=False)
"""
Explanation: Check that no rows were lost during the NaN manipulations
End of explanation
"""
|
bayesimpact/bob-emploi | data_analysis/notebooks/datasets/imt/market_score_api_dataset.ipynb | gpl-3.0 | import os
from os import path
import pandas as pd
import seaborn as _
DATA_FOLDER = os.getenv('DATA_FOLDER')
market_statistics = pd.read_csv(path.join(DATA_FOLDER, 'imt/market_score.csv'))
market_statistics.head()
"""
Explanation: Author: Marie Laure, marielaure@bayesimpact.org
IMT Market Score from API
The IMT dataset provides regional statistics about different jobs. Here we are interested in the market score (called the tension ratio by Pôle Emploi, a slightly misleading name since a high tension ratio actually means plenty of jobs...). It corresponds to the ratio of the average number of weekly open offers to the average number of weekly applications per 10 candidates. This value is provided among others (e.g. number of offers in the last week, number of applications in the last week...) in the "statistics on offers and demands" subset of the IMT dataset.
Previously, we retrieved IMT data by scraping the IMT website. As an exploratory step, we are interested in the sanity of the API-based data and in identifying putative additional information provided only by the API.
The dataset can be retrieved with the following command (it takes ~15 minutes):
docker-compose run --rm data-analysis-prepare make data/imt/market_score.csv
Data Sanity
Loading and General View
First let's load the csv file:
End of explanation
"""
to_remove = [name for name in market_statistics.columns if 'SEASONAL_' in name]
market_statistics.drop(to_remove, axis=1, inplace=True)
market_statistics.sort_values(['ROME_PROFESSION_CARD_CODE', 'AREA_CODE']).head()
"""
Explanation: Wow! Tons of columns! There is a lot of information on whether a job is seasonal, i.e. whether it shows a peak in offers in a particular month or not. Seasonal is described as having twice as many offers as the monthly average (calculated over a year), and seeing this pattern in two subsequent years. Because we are not interested in the seasonality here, we'll remove at least the per-month data (12 columns).
End of explanation
"""
market_statistics.TENSION_RATIO.notnull().describe()
"""
Explanation: OK. Some values are missing for Market score, documentation states that the ratio is undefined when offers and demands are below 30.
How many missing values do we have for tension ratio here?
End of explanation
"""
market_statistics[['ROME_PROFESSION_CARD_CODE', 'AREA_CODE', 'AREA_TYPE_CODE']].describe()
"""
Explanation: Data is missing for 87% of the lines!
Each line represents data for an area x ROME job group pair. So how many lines should we expect? First, how many areas, area types and job groups do we have?
End of explanation
"""
pd.concat([market_statistics['AREA_TYPE_CODE'], market_statistics['AREA_CODE']]).nunique()
"""
Explanation: Oh! Look at the job groups... even the most recent ROME job groups are here! Good job Pôle Emploi! There are 4 area types (consistent with the documentation) and 509 areas.
Because some areas may be labelled with multiple area types, let's see how many area x area type combinations we have here.
End of explanation
"""
market_with_score = market_statistics[market_statistics.TENSION_RATIO.notnull()]
market_with_score.TENSION_RATIO.describe()
"""
Explanation: A little bit more than the 509 unique area names, confirming that some redundant area names describe more than one area type.
With this in mind, we would expect 513 x 532 = 272916 lines if there were information on each job in each area. Hmmm... That's not the case: ~9.5% of the expected lines are missing.
For the remaining 32013 lines with market score data (~11.7% of the expected lines), what is the distribution of these scores?
End of explanation
"""
market_with_score[market_with_score.TENSION_RATIO > 50].TENSION_RATIO.hist();
market_with_score[market_with_score.TENSION_RATIO > 50]\
.sort_values('TENSION_RATIO', ascending=False)\
[['TENSION_RATIO', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'SEASONAL']].head()
"""
Explanation: On the subset with market score information, the market score is usually between 3 and 8, which is not super reassuring from a candidate's point of view... we should remember that this corresponds to the number of offers per 10 candidates. In the end, most of the time we have less than 1 offer per candidate.
However, in some markets (area/job) we can find extreme values (the max is at 1664 offers for 10 persons). How many of these extreme/unexpected values can we find, and to which jobs and areas do they correspond?
End of explanation
"""
market_statistics.AREA_TYPE_NAME.value_counts()
"""
Explanation: The value of 1664 offers for 10 persons observed above appears to be a real outlier. However, it corresponds to a seasonal job and may be linked to a specific place recruiting tons of people (mall, resort...). Note that it sounds like a great idea to apply to be a baker in Arles!
We noticed that the AREA_TYPE_NAME variable can cover multiple values. Can we say more about this?
End of explanation
"""
market_statistics[
(market_statistics.AREA_NAME == 'LYON CENTRE') &\
(market_statistics.ROME_PROFESSION_CARD_NAME == 'Boucherie')]\
[['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]
market_statistics[
(market_statistics.AREA_NAME == 'RHONE') &\
(market_statistics.ROME_PROFESSION_CARD_NAME == 'Boucherie')]\
[['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]
market_statistics[
(market_statistics.AREA_NAME == 'AUVERGNE-RHONE-ALPES') &\
(market_statistics.ROME_PROFESSION_CARD_NAME == 'Boucherie')]\
[['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]
"""
Explanation: This dataset has multiple granularity layers. We have information at the department ("Département") level, the region level, or the whole country!
For one job, can we have observations for multiple areas? Let's try for butchers in the "Lyon" area; the department is Rhône and the region Auvergne-Rhône-Alpes.
End of explanation
"""
market_statistics.ROME_PROFESSION_CARD_CODE.nunique()
"""
Explanation: Good! We have info for all of these.
Let's go a little bit more general. How many jobs do we have here?
End of explanation
"""
area_romes = market_statistics.groupby(['AREA_TYPE_CODE', 'AREA_CODE']).ROME_PROFESSION_CARD_NAME.size()
area_romes.hist();
"""
Explanation: How many of these are represented in each area? If we have data for every job in every area, we expect to have 532 jobs in each area.
End of explanation
"""
market_statistics[(market_statistics.NB_APPLICATION_END_MONTH == 0) &\
(market_statistics.NB_OFFER_END_MONTH == 0) &\
(market_statistics.NB_OFFER_LAST_WEEK == 0) &\
(market_statistics.NB_APPLICATION_LAST_WEEK == 0)].head()
"""
Explanation: For some areas, we have missing jobs. They could be missing because some jobs have 0 offers, 0 applications etc...
Can we find some of these zero values in the dataset?
End of explanation
"""
department_romes = market_statistics[market_statistics.AREA_TYPE_CODE == 'D'].\
groupby('AREA_NAME').ROME_PROFESSION_CARD_NAME.size()
department_romes.hist();
"""
Explanation: There are jobs with zeros and jobs with NA, so the missing values and the zeros are probably different things. We couldn't find any information about this in the documentation.
Is there an area level (except the whole country) for which we have info for all job groups?
End of explanation
"""
region_romes = market_statistics[market_statistics.AREA_TYPE_CODE == 'R'].\
groupby('AREA_NAME').ROME_PROFESSION_CARD_NAME.size()
region_romes.hist();
"""
Explanation: Arf... Almost... A couple of departments have some jobs not represented.
Let's see how it looks at the region level.
End of explanation
"""
area_romes = area_romes.to_frame()
area_romes = area_romes.reset_index(['AREA_TYPE_CODE', 'AREA_CODE'])
area_romes.columns = [['AREA_TYPE_CODE', 'AREA_CODE', 'jobgroups']]
department_romes = area_romes[area_romes.AREA_TYPE_CODE == 'D']
department_romes.sort_values('jobgroups').head(10)
"""
Explanation: Nothing is perfect! But most of the regions have information for all jobs.
Let's have a look at an area for which there are fewer jobs than expected (532).
First, what are the areas with fewer than 532 job groups?
End of explanation
"""
market_statistics[
(market_statistics.AREA_NAME == 'YONNE') &\
(market_statistics.ROME_PROFESSION_CARD_CODE == 'J1502')]\
[['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]
"""
Explanation: Overseas territories (97X area codes) and Corsica (2X area codes) are the areas with the highest number of missing job groups.
Conclusion
This dataset seems quite clean even if:
- There is little information on market score
- Some areas have missing jobs, and this does not seem to be related to the lines with zeros...
However, the multiple granularity layers seem consistent with each other.
Comparison with Scraped Data
Let's compare this data with what is online now (2017/09/14).
For a nurse in the Yonne department, there is no value.
End of explanation
"""
market_statistics[
(market_statistics.AREA_NAME == 'CHER') &\
(market_statistics.ROME_PROFESSION_CARD_CODE == 'F1603')]\
[['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]
"""
Explanation: Same here!
What about a plumber in the Cher department? The website announces 3 offers for 10 people.
End of explanation
"""
haute_saone_jobs = market_statistics[market_statistics.AREA_CODE == '70'].ROME_PROFESSION_CARD_NAME.unique()
market_statistics[-market_statistics.ROME_PROFESSION_CARD_NAME.isin(haute_saone_jobs)].\
ROME_PROFESSION_CARD_NAME.unique()
"""
Explanation: Same story here. Yippee!
Let's have a look at the areas with missing jobs. As an example we'll look at the Haute-Saône department (code 70).
End of explanation
"""
|
StingraySoftware/notebooks | Simulator/Concepts/PowerLaw Spectrum.ipynb | mit | import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: Simulating Light Curves from Power Law Power Spectra
In this notebook, we will show how to simulate a light curve from a power spectrum that
follows a power law shape.
End of explanation
"""
def simulate(B):
N = 1024
# Define angular frequencies from just above 0 to 2*pi (avoiding w = 0)
w = np.linspace(0.001,2*np.pi,N)
# Draw two sets of 'N' Gaussian-distributed numbers
a1 = np.random.normal(size=N)
a2 = np.random.normal(size=N)
# Multiply by (1/w)^B to get real and imaginary parts
real = a1 * np.power((1/w),B/2)
imaginary = a2 * np.power((1/w),B/2)
# Form complex numbers corresponding to each frequency
f = [complex(r, i) for r,i in zip(real,imaginary)]
# Take the complex conjugate of the spectrum
f_conj = np.conjugate(np.array(f))
# Invert the Fourier transform to obtain the time series
f_inv = np.fft.ifft(f_conj)
return f_inv
"""
Explanation: The power spectrum is of the form S(w) = (1/w)^B. Define a function to recover a time series from a power-law spectrum.
End of explanation
"""
f = simulate(1)
plt.plot(np.real(f))
plt.xlabel('Time')
plt.ylabel('Counts')
plt.title('Recovered LightCurve with B=1')
"""
Explanation: Start with B=1 to get a flicker noise distribution.
End of explanation
"""
f = simulate(2)
plt.plot(np.real(f))
plt.xlabel('Time')
plt.ylabel('Counts')
plt.title('Recovered LightCurve with B=2')
"""
Explanation: Try out with B=2 to get random walk distribution.
End of explanation
"""
|
quantumlib/OpenFermion-Cirq | examples/tutorial_4_variational.ipynb | apache-2.0 | import openfermion
import openfermioncirq
# Set parameters of jellium model.
wigner_seitz_radius = 5. # Radius per electron in Bohr radii.
n_dimensions = 2 # Number of spatial dimensions.
grid_length = 2 # Number of grid points in each dimension.
spinless = True # Whether to include spin degree of freedom or not.
n_electrons = 2 # Number of electrons.
# Figure out length scale based on Wigner-Seitz radius and construct a basis grid.
length_scale = openfermion.wigner_seitz_length_scale(
wigner_seitz_radius, n_electrons, n_dimensions)
grid = openfermion.Grid(n_dimensions, grid_length, length_scale)
# Initialize the model and compute its ground energy in the correct particle number manifold
fermion_hamiltonian = openfermion.jellium_model(grid, spinless=spinless, plane_wave=False)
hamiltonian_sparse = openfermion.get_sparse_operator(fermion_hamiltonian)
ground_energy, _ = openfermion.jw_get_ground_state_at_particle_number(
hamiltonian_sparse, n_electrons)
print('The ground energy of the jellium Hamiltonian at {} electrons is {}'.format(
n_electrons, ground_energy))
# Convert to DiagonalCoulombHamiltonian type.
hamiltonian = openfermion.get_diagonal_coulomb_hamiltonian(fermion_hamiltonian)
# Define the objective function
objective = openfermioncirq.HamiltonianObjective(hamiltonian)
# Create a swap network Trotter ansatz.
iterations = 1 # This is the number of Trotter steps to use in the ansatz.
ansatz = openfermioncirq.SwapNetworkTrotterAnsatz(
hamiltonian,
iterations=iterations)
print('Created a variational ansatz with the following circuit:')
print(ansatz.circuit.to_text_diagram(transpose=True))
"""
Explanation: Tutorial IV: Constructing variational algorithms
Variational quantum algorithms are a broad set of methods which involve optimizing a parameterized quantum circuit ansatz applied to some initial state (called the "reference") in order to minimize a cost function defined with respect to the output state. In the context of quantum simulation, very often the goal is to prepare ground states and the cost function is the expectation value of a Hamiltonian. Thus, if we define the reference (initial state) as $\lvert \psi\rangle$, the Hamiltonian as $H$ and the parameterized quantum circuit as $U(\vec{\theta})$ where $\vec{\theta}$ are the varaitional parameters, then the goal is to minimize the cost function
$$
E(\vec \theta) = \langle \psi \rvert
U^\dagger(\vec{\theta}) H U(\vec{\theta})
\lvert \psi\rangle.
$$
A classical optimization algorithm can be used to find the $\vec{\theta}$ that minimizes the value of the expression. The performance of a variational algorithm depends crucially on the choice of ansatz circuit $U(\vec{\theta})$, the choice of reference, and the strategy for choosing the initial parameters $\vec{\theta}$, since global optimization is typically challenging and one needs to begin reasonably close to the intended state. One possibility is to use an ansatz of the form
$$
U(\vec{\theta}) = \prod_j \exp(-i \theta_j H_j)
$$
where the $H = \sum_j H_j$. This ansatz is inspired by a low Trotter-number Trotter-Suzuki based approximation to adiabatic state preparation. OpenFermion-Cirq contains routines for constructing ansatzes of this form which use as templates the Trotter step algorithms implemented in the trotter module.
Jellium with a Linear Swap Network
We will first demonstrate the construction and optimization of a variational ansatz for a jellium Hamiltonian. We will use an ansatz based on the LINEAR_SWAP_NETWORK Trotter step, which takes as input a DiagonalCoulombHamiltonian. Later, we will show how one can create a custom circuit ansatz and apply it to the H$_2$ molecule in a minimal basis.
End of explanation
"""
# Use preparation circuit for mean-field state
import cirq
preparation_circuit = cirq.Circuit(
openfermioncirq.prepare_gaussian_state(
ansatz.qubits,
openfermion.QuadraticHamiltonian(hamiltonian.one_body),
occupied_orbitals=range(n_electrons)))
# Create a Hamiltonian variational study
study = openfermioncirq.VariationalStudy(
'jellium_study',
ansatz,
objective,
preparation_circuit=preparation_circuit)
print("Created a variational study with {} qubits and {} parameters".format(
len(study.ansatz.qubits), study.num_params))
print("The value of the objective with default initial parameters is {}".format(
study.value_of(ansatz.default_initial_params())))
print("The circuit of the study is")
print(study.circuit.to_text_diagram(transpose=True))
"""
Explanation: In the last lines above we instantiated a class called SwapNetworkTrotterAnsatz which inherits from the general VariationalAnsatz class in OpenFermion-Cirq. A VariationalAnsatz is essentially a parameterized circuit that one constructs so that parameters can be supplied symbolically. This way one does not (necessarily) need to recompile the circuit each time the variational parameters change. We also instantiated a HamiltonianObjective which represents the objective function being the expectation value of our Hamiltonian.
Optimizing an ansatz requires the creation of a VariationalStudy object. A VariationalStudy is responsible for performing optimizations and storing the results. By default, it evaluates parameters by simulating the quantum circuit and computing the objective function, in this case the expectation value of the Hamiltonian, on the final state. It includes an optional state preparation circuit to be applied prior to the ansatz circuit. For this example, we will prepare the initial state as an eigenstate of the one-body operator of the Hamiltonian. Since the one-body operator is a quadratic Hamiltonian, its eigenstates can be prepared using the prepare_gaussian_state method. The SwapNetworkTrotterAnsatz class also includes a default setting of parameters which is inspired by the idea of state preparation by adiabatic evolution from the mean-field state.
End of explanation
"""
# Perform an optimization run.
from openfermioncirq.optimization import ScipyOptimizationAlgorithm, OptimizationParams
algorithm = ScipyOptimizationAlgorithm(
kwargs={'method': 'COBYLA'},
options={'maxiter': 100},
uses_bounds=False)
optimization_params = OptimizationParams(
algorithm=algorithm)
result = study.optimize(optimization_params)
print(result.optimal_value)
"""
Explanation: As we can see, our initial guess isn't particularly close to the target energy. Optimizing the study requires the creation of an OptimizationParams object. The most important component of this object is the optimization algorithm to use. OpenFermion-Cirq includes a wrapper around the minimize method of Scipy's optimize module, and more optimizers will be included in the future. Let's perform an optimization using the COBYLA method. Since this is just an example, we will set the maximum number of function evaluations to 100 so that it doesn't run too long.
End of explanation
"""
optimization_params = OptimizationParams(
algorithm=algorithm,
cost_of_evaluate=1e6)
study.optimize(
optimization_params,
identifier='COBYLA with maxiter=100, noisy',
repetitions=3,
reevaluate_final_params=True,
use_multiprocessing=True)
print(study)
"""
Explanation: In practice, the expectation value of the Hamiltonian cannot be measured exactly due to errors from finite sampling. This manifests as an error, or noise, in the measured value of the energy which can be reduced at the cost of more measurements. The HamiltonianVariationalStudy class incorporates a realistic model of this noise (shot-noise). The OptimizationParams object can have a cost_of_evaluate parameter which in this case represents the number of measurements used to estimate the energy for a set of parameters. If we are interested in how well an optimizer performs in the presence of noise, then we may want to repeat the optimization several times and see how the results vary between repetitions.
Below, we will perform the same optimization, but this time using the noise model. We will allow one million measurements per energy evaluation and repeat the optimization three times. Since this time the function evaluations are noisy, we'll also indicate that the final parameters of the study should be reevaluated according to a noiseless simulation. Finally, we'll print out a summary of the study, which includes all results obtained so far (including from the previous cell).
End of explanation
"""
import openfermion
diatomic_bond_length = .7414
geometry = [('H', (0., 0., 0.)),
('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
charge = 0
description = format(diatomic_bond_length)
molecule = openfermion.MolecularData(
geometry,
basis,
multiplicity,
description=description)
molecule.load()
hamiltonian = molecule.get_molecular_hamiltonian()
print("Bond Length in Angstroms: {}".format(diatomic_bond_length))
print("Hartree Fock (mean-field) energy in Hartrees: {}".format(molecule.hf_energy))
print("FCI (Exact) energy in Hartrees: {}".format(molecule.fci_energy))
"""
Explanation: We see then that in the noisy study the optimizer fails to converge to the final result with high enough accuracy. Apparently then one needs more measurements, a more stable optimizer, or both!
H$_2$ with a custom ansatz
The above example shows one of the nice built-in ansatz offered in OpenFermion-Cirq that can be applied to many different types of physical systems without the need for much input by the user. In some research cases, however, one may wish to design their own paramterized ansatz. Here will give an example of how to do this for the simple case of the H$_2$ molecule in a minimal basis.
To provide some brief background, in a minimal basis H$_2$ is discretized into two Slater-type spatial orbitals, each of which is expressed as a sum of 3 Gaussians (STO-3G). After pre-processing with a mean-field (Hartree-Fock) procedure, the best mean-field approximation of the ground state is found to be the symmetric superposition of these two spatial orbitals. After including spin in the problem by assigning each spatial orbital an alpha and beta spin, or equivalently taking the tensor product of the spatial and spin-$1/2$ degrees of freedom, the mean-field state is expressed as
\begin{equation}
\vert \Psi_{\text{initial}} \rangle = a^\dagger_1 a^\dagger_0 \vert \rangle.
\end{equation}
Within the Jordan-Wigner encoding of fermionic systems, this is equivalent to a computational basis state with the first two qubits being in the 1 state and the second two qubits in the 0 state. This can be prepared via a simple circuit as
\begin{equation}
| \Psi_{\text{initial}} \rangle = X_1 X_0 \vert 0 0 0 0 \rangle = \vert 1 1 0 0 \rangle.
\end{equation}
As a result of the symmetries present in this system, only one transition is allowed, and it completely characterizes the freedom required to move from this initial guess to the exact ground state solution for all geometries of H$_2$ in the minimal basis. That is the concerted transition of electrons from spin-orbitals 0, 1 to 2, 3. This corresponds to the fermionic operator $a_3^\dagger a_2^\dagger a_1 a_0$, which is of course not unitary, but one may lift this operation to the anti-hermitian generator of a rotation, as in unitary coupled cluster, to yield the unitary
\begin{equation}
\exp \left[ \theta \left(a_3^\dagger a_2^\dagger a_1 a_0 - a_0^\dagger a_1^\dagger a_2 a_3\right) \right]
\end{equation}
which may be decomposed exactly using a combination of the Jordan-Wigner transformation and standard identities from Nielsen and Chuang. However, as has been noted before, the essential action of concerted electron movement can be captured by just a single one of the Jordan-Wigner terms, hence the simpler operation
\begin{equation}
\exp \left[ -i \theta Y_3 X_2 X_1 X_0 \right]
\end{equation}
suffices. This is what we use here in combination with standard gate identities to parameterize an ansatz for H$_2$.
In the following code we first load up one example geometry of the H$_2$ molecule, as this data is included with OpenFermion. To compute such Hamiltonians for arbitrary molecules in different basis sets geometries, etc., one can use plugins such as OpenFermion-Psi4 or OpenFermion-PySCF. Later we will use these same techniques to load and evaluate the full curve with our ansatz.
End of explanation
"""
import cirq
import openfermioncirq
import sympy
class MyAnsatz(openfermioncirq.VariationalAnsatz):
def params(self):
"""The parameters of the ansatz."""
return [sympy.Symbol('theta_0')]
def operations(self, qubits):
"""Produce the operations of the ansatz circuit."""
q0, q1, q2, q3 = qubits
yield cirq.H(q0), cirq.H(q1), cirq.H(q2)
yield cirq.XPowGate(exponent=-0.5).on(q3)
yield cirq.CNOT(q0, q1), cirq.CNOT(q1, q2), cirq.CNOT(q2, q3)
yield cirq.ZPowGate(exponent=sympy.Symbol('theta_0')).on(q3)
yield cirq.CNOT(q2, q3), cirq.CNOT(q1, q2), cirq.CNOT(q0, q1)
yield cirq.H(q0), cirq.H(q1), cirq.H(q2)
yield cirq.XPowGate(exponent=0.5).on(q3)
def _generate_qubits(self):
"""Produce qubits that can be used by the ansatz circuit."""
return cirq.LineQubit.range(4)
"""
Explanation: Now we design a custom ansatz with a single parameter based on the simplfied unitary above. The ansatz class makes convenient use of named parameters which are specified by the params routine. The parameterized circuit then makes use of these parameters within its operations method.
End of explanation
"""
ansatz = MyAnsatz()
objective = openfermioncirq.HamiltonianObjective(hamiltonian)
q0, q1, _, _ = ansatz.qubits
preparation_circuit = cirq.Circuit(
cirq.X(q0),
cirq.X(q1))
study = openfermioncirq.VariationalStudy(
name='my_hydrogen_study',
ansatz=ansatz,
objective=objective,
preparation_circuit=preparation_circuit)
print(study.circuit)
"""
Explanation: After this custom ansatz is designed, we can instantiate it and package it into a variational study class along with an initial state preparation circuit that makes it more convenient to study parts of an ansatz.
End of explanation
"""
# Perform optimization.
import numpy
from openfermioncirq.optimization import COBYLA, OptimizationParams
optimization_params = OptimizationParams(
algorithm=COBYLA,
initial_guess=[0.01])
result = study.optimize(optimization_params)
print("Initial state energy in Hartrees: {}".format(molecule.hf_energy))
print("Optimized energy result in Hartree: {}".format(result.optimal_value))
print("Exact energy result in Hartees for reference: {}".format(molecule.fci_energy))
"""
Explanation: With this parameterized circuit and state preparation packaged into a variational study, it is now straightforward to attach an optimizer and find the optimal value, as was done in the example above. Note that we can also set an initial guess for the angle as determined by any number of methods, and we demonstrate this here. Note that as the built-in simulator for Cirq uses single precision, the solution may appear sub-variational past this precision due to round-off errors that accumulate; however, this is far below the accuracy one is typically concerned with for this type of problem.
End of explanation
"""
bond_lengths = ['{0:.1f}'.format(0.3 + 0.1 * x) for x in range(23)]
hartree_fock_energies = []
optimized_energies = []
exact_energies = []
for diatomic_bond_length in bond_lengths:
geometry = [('H', (0., 0., 0.)),
('H', (0., 0., diatomic_bond_length))]
description = format(diatomic_bond_length)
molecule = openfermion.MolecularData(geometry, basis,
multiplicity, description=description)
molecule.load()
hamiltonian = molecule.get_molecular_hamiltonian()
study = openfermioncirq.VariationalStudy(
name='my_hydrogen_study',
ansatz=ansatz,
objective=openfermioncirq.HamiltonianObjective(hamiltonian),
preparation_circuit=preparation_circuit)
result = study.optimize(optimization_params)
hartree_fock_energies.append(molecule.hf_energy)
optimized_energies.append(result.optimal_value)
exact_energies.append(molecule.fci_energy)
print("R={}\t Optimized Energy: {}".format(diatomic_bond_length, result.optimal_value))
"""
Explanation: Using this same circuit and approach, we can now build the bond dissociation curve of the H$_2$ molecule and plot it in the following way. Note that running the code in the cell above is required for this example.
End of explanation
"""
import matplotlib
import matplotlib.pyplot as pyplot
%matplotlib inline
# Plot the energy mean and std Dev
fig = pyplot.figure(figsize=(10,7))
bkcolor = '#ffffff'
ax = fig.add_subplot(1, 1, 1)
pyplot.subplots_adjust(left=.2)
ax.set_xlabel('R (Angstroms)')
ax.set_ylabel(r'E Hartrees')
ax.set_title(r'H$_2$ bond dissociation curve')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
bond_lengths = [float(x) for x in bond_lengths]
ax.plot(bond_lengths, hartree_fock_energies, label='Hartree-Fock')
ax.plot(bond_lengths, optimized_energies, '*', label='Optimized')
ax.plot(bond_lengths, exact_energies, '--', label='Exact')
ax.legend(frameon=False)
pyplot.show()
"""
Explanation: Now that we've collected that data, we can easily visualize it with standard matplotlib routines.
End of explanation
"""
|
prk327/CoAca | 3_Plotting_Categorical_Data.ipynb | gpl-3.0 | # loading libraries and reading the data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# set seaborn theme if you prefer
sns.set(style="white")
# read data
market_df = pd.read_csv("./global_sales_data/market_fact.csv")
customer_df = pd.read_csv("./global_sales_data/cust_dimen.csv")
product_df = pd.read_csv("./global_sales_data/prod_dimen.csv")
shipping_df = pd.read_csv("./global_sales_data/shipping_dimen.csv")
orders_df = pd.read_csv("./global_sales_data/orders_dimen.csv")
"""
Explanation: Plotting Categorical Data
In this section, we will:
- Plot distributions of data across categorical variables
- Plot aggregate/summary statistics across categorical variables
Plotting Distributions Across Categories
We have seen how to plot distributions of data. Often, the distributions reveal new information when you plot them across categorical variables.
Let's see some examples.
End of explanation
"""
# boxplot of a variable
sns.boxplot(y=market_df['Sales'])
plt.yscale('log')
plt.show()
"""
Explanation: Boxplots
We had created simple boxplots such as the ones shown below. Now, let's plot multiple boxplots and see what they can tell us the distribution of variables across categories.
End of explanation
"""
# merge the dataframe to add a categorical variable
df = pd.merge(market_df, product_df, how='inner', on='Prod_id')
df.head()
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Sales', data=df)
plt.yscale('log')
plt.show()
"""
Explanation: Now, let's say you want to compare the (distribution of) sales of various product categories. Let's first merge the product data into the main dataframe.
End of explanation
"""
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.show()
"""
Explanation: So this tells you that the sales of office supplies are, on average, lower than those of the other two categories. The sales of the technology and furniture categories seem much better. Note that each order can have multiple units of products sold, so Sales being higher/lower may be due to the price per unit or the number of units.
Let's now plot the other important variable - Profit.
End of explanation
"""
df = df[(df.Profit<1000) & (df.Profit>-1000)]
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.show()
"""
Explanation: Profit clearly has some outliers due to which the boxplots are unreadable. Let's remove some extreme values from Profit (for the purpose of visualisation) and try plotting.
End of explanation
"""
# adjust figure size
plt.figure(figsize=(10, 8))
# subplot 1: Sales
plt.subplot(1, 2, 1)
sns.boxplot(x='Product_Category', y='Sales', data=df)
plt.title("Sales")
plt.yscale('log')
# subplot 2: Profit
plt.subplot(1, 2, 2)
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.title("Profit")
plt.show()
"""
Explanation: You can see that though the category 'Technology' has better sales numbers than the others, it is also the one where the most loss-making transactions happen. You can drill further down into this.
End of explanation
"""
# merging with customers df
df = pd.merge(df, customer_df, how='inner', on='Cust_id')
df.head()
# boxplot of a variable across various product categories
sns.boxplot(x='Customer_Segment', y='Profit', data=df)
plt.show()
"""
Explanation: Now that we've compared Sales and Profits across product categories, let's drill down further and do the same across another categorical variable - Customer_Segment.
We'll need to add the customer-related attributes (dimensions) to this dataframe.
End of explanation
"""
# set figure size for larger figure
plt.figure(num=None, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
# specify hue="categorical_variable"
sns.boxplot(x='Customer_Segment', y='Profit', hue="Product_Category", data=df)
plt.show()
"""
Explanation: You can visualise the distribution across two categorical variables using the hue= argument.
End of explanation
"""
# plot shipping cost as percentage of Sales amount
sns.boxplot(x=df['Product_Category'], y=100*df['Shipping_Cost']/df['Sales'])
plt.ylabel("100*(Shipping cost/Sales)")
plt.show()
"""
Explanation: Across all customer segments, the product category Technology seems to be doing fairly well, though Furniture is incurring losses across all segments.
Now say you are curious to know why certain orders are making huge losses. One of your hypotheses is that the shipping cost is too high in some orders. You can plot derived variables as well, such as shipping cost as a percentage of the sales amount.
End of explanation
"""
# bar plot with default statistic=mean
sns.barplot(x='Product_Category', y='Sales', data=df)
plt.show()
"""
Explanation: Plotting Aggregated Values across Categories
Bar Plots - Mean, Median and Count Plots
Bar plots are used to display aggregated values of a variable, rather than entire distributions. This is especially useful when you have a lot of data which is difficult to visualise in a single figure.
For example, say you want to visualise and compare the average Sales across Product Categories. The sns.barplot() function can be used to do that.
End of explanation
"""
# Create 2 subplots for mean and median respectively
# increase figure size
plt.figure(figsize=(12, 6))
# subplot 1: statistic=mean
plt.subplot(1, 2, 1)
sns.barplot(x='Product_Category', y='Sales', data=df)
plt.title("Average Sales")
# subplot 2: statistic=median
plt.subplot(1, 2, 2)
sns.barplot(x='Product_Category', y='Sales', data=df, estimator=np.median)
plt.title("Median Sales")
plt.show()
"""
Explanation: Note that, by default, seaborn plots the mean value across categories, though you can plot the count, median, sum etc. The barplot also computes and shows the confidence interval of the mean.
End of explanation
"""
# set figure size for larger figure
plt.figure(num=None, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
# specify hue="categorical_variable"
sns.barplot(x='Customer_Segment', y='Profit', hue="Product_Category", data=df, estimator=np.median)
plt.show()
"""
Explanation: Look at that! The mean and median sales across the product categories tell different stories. This is because of some outliers (extreme values) in the Furniture category, distorting the value of the mean.
You can add another categorical variable in the plot.
End of explanation
"""
# Plotting categorical variable across the y-axis
plt.figure(figsize=(10, 8))
sns.barplot(x='Profit', y="Product_Sub_Category", data=df, estimator=np.median)
plt.show()
"""
Explanation: The plot neatly shows the median profit across product categories and customer segments. It says that:
- On average, only Technology products in the Small Business and Corporate customer segments are profitable.
- Furniture is incurring losses across all Customer Segments
Compare this to the boxplot we had created above - though the bar plot contains less information than the boxplot, it is more revealing.
<hr>
When you have a large number of categories to visualise, it is helpful to plot them across the y-axis. Let's now drill down into product sub-categories.
End of explanation
"""
# Plotting count across a categorical variable
plt.figure(figsize=(10, 8))
sns.countplot(y="Product_Sub_Category", data=df)
plt.show()
"""
Explanation: The plot clearly shows which sub-categories are incurring the heaviest losses - Copiers and Fax, Tables, Chairs and Chairmats are the most loss-making categories.
You can also plot the count of the observations across categorical variables using sns.countplot().
End of explanation
"""
|
blakeflei/IntroScientificPythonWithJupyter | 06b - Fitting Plots.ipynb | bsd-3-clause | # Python imports
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import normal
from scipy.optimize import curve_fit
"""
Explanation: Fitting Plots
Essential for determining the fit of a model to raw data, curve fitting is ubiquitous. Using the scipy.optimize.curve_fit functionality, we can define a function to fit the data.
By the end of this file you should be able to:
1. Fit data to a predefined function
Further reading:
https://lmfit.github.io/lmfit-py/intro.html
https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.optimize.curve_fit.html
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html
Finding errors in fitting parameters:
https://stackoverflow.com/questions/14581358/getting-standard-errors-on-fitted-parameters-using-the-optimize-leastsq-method-i
End of explanation
"""
np.random.seed(1) # Use a seed for the random number generator
X = np.linspace(0, 5, 30)
Y = np.exp(-X) + normal(0, 0.2, 30)
plt.scatter(X,Y)
plt.title('Scatter Plot')
plt.show()
"""
Explanation: Fit a nonlinear (exponential) function:
End of explanation
"""
def func(x_vals, A, B, C):
return A * np.exp(-B * x_vals) + C
init_guess = (1, 1e-6, 1) # Use an initial guess (optional)
popt, pcov = curve_fit(func, X, Y, p0=init_guess)
print('Optimized parameters are: {}'.format(popt))
"""
Explanation: Now that we have the data, let's see about a fit. We use the curve_fit function from the scipy.optimize module, which requires a function and the data.
A guess parameter (p0, optional) is important for exponential fits:
End of explanation
"""
x_fitted = np.linspace(0, 5)
y_fitted = func(x_fitted, *popt) # Apply func using opt to x_fitted
"""
Explanation: The outputs are the optimized parameters (popt) and the covariance matrix (pcov) of the optimized parameters.
Next, we apply the optimized parameters to the function to generate a plot that shows the fit:
End of explanation
"""
# Plot
plt.scatter(X,Y)
plt.plot(x_fitted,y_fitted, color="red")
plt.title('Scatter Plot + Fit')
plt.show()
"""
Explanation: Above, func is called on x_fitted, using the starred (unpacked) list popt of optimal parameters.
Python starred expressions:
*list unpacks a list to create an iterable. For example, *[1,2,3,4] unpacks to 1,2,3,4.
End of explanation
"""
perr = np.sqrt(np.diag(pcov)) # One standard deviation errors in the
# parameters
print('Estimated errors are: {}'.format(perr))
# Remember that func is: A * np.exp(-B * x_vals) + C
std_p = [perr[0], -perr[1], perr[2]] # Combination of perr for highest values
std_m = [-perr[0], perr[1], -perr[2]] # Combination of perr for lowest values
# Determine the plots for the ± standard deviation
y_fitted_p = func(x_fitted, *(popt+std_p))
y_fitted_m = func(x_fitted, *(popt+std_m))
# Plot
plt.scatter(X,Y)
plt.plot(x_fitted,y_fitted, color="red")
plt.plot(x_fitted,y_fitted_p, color="blue", label='+ stdev')
plt.plot(x_fitted,y_fitted_m, color="blue", label='- stdev')
plt.title('Scatter Plot')
plt.legend(loc='best')
plt.show()
"""
Explanation: How accurate are the estimated parameters for the data?
The output of the curve_fit function includes pcov, the estimated covariance of popt.
Properly estimating the variance is an in-depth statistical problem. However, if we are to trust the assumptions curve_fit makes (type of data, relationships between estimated parameters, etc.), then the square roots of the diagonal of pcov from above approximate the errors. This should be done with much caution!
End of explanation
"""
rand_gen = normal(0,1,1000) # Generate numbers
bins = np.linspace(-5,5,num=100)
histogram = np.histogram(rand_gen,bins); # Use histogram to get the
# distribution
X = histogram[1][:-1]
Y = histogram[0]
plt.scatter(X,Y)
plt.title('Scatter Plot')
plt.show()
def gauss_func(x_vals, ampl, sig, mu):
    return ampl * np.exp(-(x_vals - mu)**2 / (2 * sig**2))
bounds_set = ([0,0,0],[max(Y)*10, 10, 1]) # We can set bounds on the fit too
popt, pcov = curve_fit(gauss_func, X, Y, bounds=bounds_set)
x_fitted = np.linspace(-5, 5)
y_fitted = gauss_func(x_fitted, *popt)
plt.scatter(X,Y)
plt.plot(x_fitted,y_fitted, color='red')
plt.title('Scatter Plot + Fit')
plt.show()
"""
Explanation: Gaussian fit:
Here, we generate a lot of normally distributed random numbers and use their histogram to fit a Gaussian curve.
Here, a bounds parameter (bounds) is optional in the same way that p0 was:
End of explanation
"""
|
SheffieldML/notebook | GPy-phil/GPy Intro.ipynb | bsd-3-clause | import GPy, numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: GPy
GPy is a framework for Gaussian process based applications. It is designed for speed and reliability. Its functionality rests on three main pillars:
Reproduceability
Scalability
In this tutorial we will have a look at the three main pillars, so you may be able to use Gaussian processes with ease of mind and without the complications of cutting edge research code.
End of explanation
"""
X = np.random.uniform(0, 10, (200, 1))
f = np.sin(.3*X) + .3*np.cos(1.3*X)
f -= f.mean()
Y = f+np.random.normal(0, .1, f.shape)
plt.scatter(X, Y)
m = GPy.models.GPRegression(X, Y)
m
"""
Explanation: Ease of use
GPy handles the parameters of the parameter based models on the basis of the parameterized framework built in itself. The framework allows to use parameters in an intelligent and intuative way.
End of explanation
"""
m.rbf.lengthscale = 1.5
m
"""
Explanation: Changing parameters is as easy as assigning new values to the respective parameter:
End of explanation
"""
# Type your code here
"""
Explanation: The whole model gets updated automatically when a parameter changes, without you having to interfere at all.
Change some parameters and plot the results, using the model's plot() function.
What do the different parameters change in the result?
End of explanation
"""
m.optimize(messages=1)
_ = m.plot()
# You can use different kernels to use on the data.
# Try out three different kernels and plot the result after optimizing the GP:
# See kernels using GPy.kern.<tab>
# Type your code here
"""
Explanation: The parameters can be optimized using gradient-based optimization. The optimization routines are taken from scipy. Running the optimization in a GPy model is a call to the model's own optimize method.
End of explanation
"""
# Type your code here
"""
Explanation: Reproducibility
GPy has built-in save and load functionality, allowing you to pickle a model with all its parameters and data in a single file. This is useful when transferring models to another location, rerunning models with different initializations, etc.
Try saving a model using the model's pickle(<name>) function and load it again using GPy.load(<name>). The loaded model is fully functional and can be used as usual.
End of explanation
"""
# Type your code here
"""
Explanation: We have put a lot of effort into stability of execution, so try to randomize a model using its randomize() function, which randomizes the model's parameters. After optimization the result should be very close to previous model optimizations.
End of explanation
"""
GPy.core.SparseGP?
GPy.core.SVGP?
"""
Explanation: Scalability
GPy's parameterized framework can handle as many parameters as you like and is memory and speed efficient in setting parameters by keeping only one copy of the parameters in memory.
There are many scalability-based Gaussian process methods implemented in GPy; have a look at
End of explanation
"""
#Type your code here
"""
Explanation: We can easily run a sparse GP on the above data by using the wrapper methods for running different GPy models:
GPy.models.<tab>
Use the GPy.models.SparseGPRegression to run the above data using the sparse GP:
End of explanation
"""
|
mayank-johri/LearnSeleniumUsingPython | Section 1 - Core Python/Chapter 05 - Data Types/Numbers.ipynb | gpl-3.0 | # Converting real to integer
print ('int(3.14) =', int(3.14))
print ('int(3.64) =', int(3.64))
print('int("22") =', int("22"))
# int("22.0") raises a ValueError: the string must look like an integer
# int(3+4j) raises a TypeError: complex numbers cannot be converted to int
# Converting integer to real
print ('float(5) =', float(5))
print('float("22.0") =', float("22.0"))
print('int(float("22.0")) =', int(float("22.0")))
# Calculation between integer and real results in real
print ('5 / 2 + 3 = ', 5 / 2 + 3)
x = 3.5
y = 2.5
z = x + y
print(x, y, z)
print(type(x), type(y), type(z))
z = int(z)
print(x, y, z)
print(type(x), type(y), type(z))
# Integers in other base
print ("int('20', 8) =", int('20', 8)) # base 8
print ("int('20', 16) =", int('20', 16)) # base 16
# Operations with complex numbers
c = 3 + 4j
print ('c =', c)
print ('Real Part:', c.real)
print ('Imaginary Part:', c.imag)
print ('Conjugate:', c.conjugate())
"""
Explanation: Numbers
Python provides the following builtin numeric data types:
Integer (int): i = 26011950
Floating Point real (float): f = 1.2345
Complex (complex): c = 2 + 10j
The builtin function int() can be used to convert other types to integer, including base changes.
Example:
End of explanation
"""
x = 22
y = 4
if(x < y):
print("X wins")
else:
print("Y wins")
x = 2
y = 4
if(x < y):
print("X wins")
else:
print("Y wins")
"""
Explanation: NOTE: The real numbers can also be represented in scientific notation, for example: 1.2e22.
Arithmetic Operations:
Python has a number of defined operators for handling numbers through arithmetic calculations, logic operations (that test whether a condition is true or false) or bitwise processing (where the numbers are processed in binary form).
Logical Operations:
Less than (<)
Greater than (>)
Less than or equal to (<=)
Greater than or equal to (>=)
Equal to (==)
Not equal to (!=)
Less than (<)
End of explanation
"""
x = 2
y = 4
if(x > y):
print("X wins")
else:
print("Y wins")
x = 14
y = 4
if(x > y):
print("X wins")
else:
print("Y wins")
"""
Explanation: Greater than (>)
End of explanation
"""
x = 2
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
x = 2
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
x = 21
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
x = 4
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
"""
Explanation: Less than or equal to (<=)
End of explanation
"""
x = 8
y = 4
if(x >= y):
print("X wins")
else:
print("Y wins")
x = 4
y = 14
if(x >= y):
print("X wins")
else:
print("Y wins")
x = 4
y = 4
if(x >= y):
print("X wins")
else:
print("Y wins")
"""
Explanation: Greater than or equal to (>=)
End of explanation
"""
x = 4
y = 4
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
x = 41
y = 4
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
x = 2+1j
y = 3+1j
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
x = 21+1j
y = 21+1j
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
"""
Explanation: Equal to (==)
End of explanation
"""
x = 4
y = 4
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
x = 41
y = 4
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
x = 2+1j
y = 3+1j
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
x = 21+1j
y = 21+1j
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
"""
Explanation: Not equal to (!=)
End of explanation
"""
x = 10 #-> 1010
y = 11 #-> 1011
"""
Explanation: Bitwise Operations:
Left Shift (<<)
Right Shift (>>)
And (&)
Or (|)
Exclusive Or (^)
Inversion (~)
During the operations, numbers are converted appropriately (e.g. (1.5+4j) + 3 gives 4.5+4j).
Besides operators, there are also some builtin functions to handle numeric types: abs(), which returns the absolute value of the number, oct(), which converts to octal, hex(), which converts to hexadecimal, pow(), which raises a number to the power of another, and round(), which returns a real number with the specified rounding.
End of explanation
"""
print("x<<2 = ", x<<2)
print("x =", x)
print("x>>2 = ", x>>2)
print("x&y = ", x&y)
print("x|y = ", x|y)
print("x^y = ", x^y)
print("x =", x)
print("~x = ", ~x)
print("~y = ", ~y)
"""
Explanation: x|y = 1011 in binary (decimal 11). For reference, the OR and AND truth tables:
"""
OR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
AND
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
"""
End of explanation
"""
|
yedivanseven/bestPy | examples/03_Logging.ipynb | gpl-3.0 | import sys
sys.path.append('../..')
"""
Explanation: CHAPTER 3
Logging
As you are exploring and, later, using bestPy you might want to keep track (in a discreet way) of what happens under the hood. For that purpose, a convenient logging facility is built into bestPy that keeps you up to date.
Preliminaries
We only need this because the examples folder is a subdirectory of the bestPy package.
End of explanation
"""
from bestPy.datastructures import Transactions
"""
Explanation: Import
We are not going to actually recommend anything in the present chapter. We just want to take a closer look at the warnings issued when reading transaction data from a CSV file in chapter 2. To recreate these warnings, all we need to import is Transactions from bestPy.datastructures.
End of explanation
"""
file = 'examples_data.csv'
data = Transactions.from_csv(file)
"""
Explanation: Read transaction data
End of explanation
"""
import sys
sys.path.append('../..')
from bestPy import write_log_to
from bestPy.datastructures import Transactions
logfile = 'logfile.txt'
write_log_to(logfile, log_level=20)
file = 'examples_data.csv'
data = Transactions.from_csv(file)
"""
Explanation: There they are again! While it is maybe helpful to have the warnings pop up like this in a Jupyter notebook, it is not clear how to benefit from this feature when writing a standalone Python program or service. Also, having a lot of them might mess up your tidy notebook layout.
In fact, these messages aren't intended to pop up in the Jupyter notebook at all! Rather, they are intended to be written to a logfile together with other information (as well as some warnings and errors while you are still experimenting with bestPy). We will make it best practice, then, to always enable bestPy's logging facilities before doing anything else. The logging function is conveniently accessible through the top-level package.
python
from bestPy import write_log_to
Tab completion reveals that the write_log_to() function has two arguments. The first is the path to and name of the logfile to be written and the second is the logging level, which can have the following (integer) values:
+ 10 ... debug
+ 20 ... info
+ 30 ... warning
+ 40 ... error
+ 50 ... critical
Any event with a logging level lower than the one specified will not appear in the logfile. You might want to start with 20 for info to learn which events are logged and then switch to 30 for warning later.
To see how logging works in practice, you will first need to restart the Kernel of this Jupyter notebook (Menu: Kernel --> Restart). Then, we
+ make again sure we have bestPy in our PYTHONPATH
+ do our imports again
+ initialize logging
+ read transaction data again
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp | workshops/guided-projects/guided_project_3.ipynb | apache-2.0 | import os
"""
Explanation: Guided Project 3
Learning Objective:
Learn how to customize the tfx template to your own dataset
Learn how to modify the Keras model scaffold provided by tfx template
In this guided project, we will use the tfx template tool to create a TFX pipeline for the covertype project, but this time, instead of re-using an already implemented model as we did in guided project 2, we will adapt the model scaffold generated by tfx template so that it can train on the covertype dataset
Note: The covertype dataset is loacated at
gs://workshop-datasets/covertype/small/dataset.csv
End of explanation
"""
ENDPOINT = ''  # Enter your Kubeflow ENDPOINT here.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
CUSTOM_TFX_IMAGE
"""
Explanation: Step 1. Environment setup
Environment Variables
Set up your Kubeflow Pipelines endpoint below the same way you did in guided projects 1 & 2.
End of explanation
"""
%%bash
TFX_PKG="tfx==0.22.0"
KFP_PKG="kfp==0.5.1"
pip freeze | grep $TFX_PKG || pip install -Uq $TFX_PKG
pip freeze | grep $KFP_PKG || pip install -Uq $KFP_PKG
"""
Explanation: tfx and kfp tools setup
End of explanation
"""
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
"""
Explanation: You may need to restart the kernel at this point.
skaffold tool setup
End of explanation
"""
!which skaffold
"""
Explanation: Modify the PATH environment variable so that skaffold is available:
At this point, you should see the skaffold tool with the which command:
End of explanation
"""
PIPELINE_NAME = 'my_pipeline'  # Your pipeline name
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
"""
Explanation: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
End of explanation
"""
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
"""
Explanation: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regresssion, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
End of explanation
"""
!python -m models.features_test
!python -m models.keras.model_test
"""
Explanation: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the same Chicago Taxi dataset and ML model as
the Airflow Tutorial.
Here is brief introduction to each of the Python files:
pipeline - This directory contains the definition of the pipeline
* configs.py — defines common constants for pipeline runners
* pipeline.py — defines TFX components and a pipeline
models - This directory contains ML model definitions.
* features.py, features_test.py — defines features for the model
* preprocessing.py, preprocessing_test.py — defines preprocessing jobs using tf::Transform
models/estimator - This directory contains an Estimator based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using TF estimator
models/keras - This directory contains a Keras based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using Keras
beam_dag_runner.py, kubeflow_dag_runner.py — define runners for each orchestration engine
Running the tests:
You might notice that there are some files with _test.py in their name.
These are unit tests of the pipeline and it is recommended to add more unit
tests as you implement your own pipelines.
You can run unit tests by supplying the module name of a test file with the -m flag.
You can usually get the module name by deleting the .py extension and replacing / with a dot (.).
For example:
End of explanation
"""
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
GCS_BUCKET_NAME
!gsutil ls gs://{GCS_BUCKET_NAME} | grep {GCS_BUCKET_NAME} || gsutil mb gs://{GCS_BUCKET_NAME}
"""
Explanation: Step 4. Create the artifact store bucket
Note: You probably already have completed this step in guided project 1, so you may
may skip it if this is the case.
Components in the TFX pipeline will generate outputs for each run as
ML Metadata Artifacts, and they need to be stored somewhere.
You can use any storage which the KFP cluster can access, and for this example we
will use Google Cloud Storage (GCS).
Let us create this bucket if you haven't created it in guided project 1.
Its name will be <YOUR_PROJECT>-kubeflowpipelines-default.
End of explanation
"""
|
geography-munich/sciprog | material/sub/jrjohansson/Lecture-1-Introduction-to-Python-Programming.ipynb | apache-2.0 | ls scripts/hello-world*.py
cat scripts/hello-world.py
!python scripts/hello-world.py
"""
Explanation: Introduction to Python programming
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.
The other notebooks in this lecture series are indexed at http://jrjohansson.github.io.
Python program files
Python code is usually stored in text files with the file ending ".py":
myprogram.py
Every line in a Python program file is assumed to be a Python statement, or part thereof.
The only exception is comment lines, which start with the character # (optionally preceded by an arbitrary number of white-space characters, i.e., tabs or spaces). Comment lines are usually ignored by the Python interpreter.
To run our Python program from the command line we use:
$ python myprogram.py
On UNIX systems it is common to define the path to the interpreter on the first line of the program (note that this is a comment line as far as the Python interpreter is concerned):
#!/usr/bin/env python
If we do, and if we additionally set the file script to be executable, we can run the program like this:
$ myprogram.py
Example:
End of explanation
"""
cat scripts/hello-world-in-swedish.py
!python scripts/hello-world-in-swedish.py
"""
Explanation: Character encoding
The standard character encoding is ASCII, but we can use any other encoding, for example UTF-8. To specify that UTF-8 is used we include the special line
# -*- coding: UTF-8 -*-
at the top of the file.
End of explanation
"""
import math
"""
Explanation: Other than these two optional lines in the beginning of a Python code file, no additional code is required for initializing a program.
IPython notebooks
This file - an IPython notebook - does not follow the standard pattern with Python code in a text file. Instead, an IPython notebook is stored as a file in the JSON format. The advantage is that we can mix formatted text, Python code and code output. It requires the IPython notebook server to run it though, and therefore isn't a stand-alone Python program as described above. Other than that, there is no difference between the Python code that goes into a program file or an IPython notebook.
Modules
Most of the functionality in Python is provided by modules. The Python Standard Library is a large collection of modules that provides cross-platform implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.
References
The Python Language Reference: http://docs.python.org/2/reference/index.html
The Python Standard Library: http://docs.python.org/2/library/
To use a module in a Python program it first has to be imported. A module can be imported using the import statement. For example, to import the module math, which contains many standard mathematical functions, we can do:
End of explanation
"""
import math
x = math.cos(2 * math.pi)
print(x)
"""
Explanation: This includes the whole module and makes it available for use later in the program. For example, we can do:
End of explanation
"""
from math import *
x = cos(2 * pi)
print(x)
"""
Explanation: Alternatively, we can choose to import all symbols (functions and variables) in a module into the current namespace, so that we don't need to use the prefix "math." every time we use something from the math module:
End of explanation
"""
from math import cos, pi
x = cos(2 * pi)
print(x)
"""
Explanation: This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the import math pattern. This would eliminate potentially confusing problems with namespace collisions.
As a third alternative, we can choose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character *:
End of explanation
"""
import math
print(dir(math))
"""
Explanation: Looking at what a module contains, and its documentation
Once a module is imported, we can list the symbols it provides using the dir function:
End of explanation
"""
help(math.log)
log(10)
log(10, 2)
"""
Explanation: And using the function help we can get a description of each function (almost: not all functions have docstrings, as they are technically called, but the vast majority of functions are documented this way).
End of explanation
"""
# variable assignments
x = 1.0
my_variable = 12.2
"""
Explanation: We can also use the help function directly on modules: Try
help(math)
Some very useful modules form the Python standard library are os, sys, math, shutil, re, subprocess, multiprocessing, threading.
Complete lists of standard modules for Python 2 and Python 3 are available at http://docs.python.org/2/library/ and http://docs.python.org/3/library/, respectively.
Variables and types
Symbol names
Variable names in Python can contain alphanumerical characters a-z, A-Z, 0-9 and some special characters such as _. Normal variable names must start with a letter.
By convention, variable names start with a lower-case letter, and Class names start with a capital letter.
In addition, there are a number of Python keywords that cannot be used as variable names. These keywords are:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Note: Be aware of the keyword lambda, which could easily be a natural variable name in a scientific program. But being a keyword, it cannot be used as a variable name.
Assignment
The assignment operator in Python is =. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.
Assigning a value to a new variable creates the variable:
End of explanation
"""
type(x)
"""
Explanation: Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
End of explanation
"""
x = 1
type(x)
"""
Explanation: If we assign a new value to a variable, its type can change.
End of explanation
"""
print(y)
"""
Explanation: If we try to use a variable that has not yet been defined we get a NameError:
End of explanation
"""
# integers
x = 1
type(x)
# float
x = 1.0
type(x)
# boolean
b1 = True
b2 = False
type(b1)
# complex numbers: note the use of `j` to specify the imaginary part
x = 1.0 - 1.0j
type(x)
print(x)
print(x.real, x.imag)
"""
Explanation: Fundamental types
End of explanation
"""
import types
# print all types defined in the `types` module
print(dir(types))
x = 1.0
# check if the variable x is a float
type(x) is float
# check if the variable x is an int
type(x) is int
"""
Explanation: Type utility functions
The module types contains a number of type name definitions that can be used to test if variables are of certain types:
End of explanation
"""
isinstance(x, float)
"""
Explanation: We can also use the isinstance method for testing types of variables:
End of explanation
"""
x = 1.5
print(x, type(x))
x = int(x)
print(x, type(x))
z = complex(x)
print(z, type(z))
x = float(z)
"""
Explanation: Type casting
End of explanation
"""
y = bool(z.real)
print(z.real, " -> ", y, type(y))
y = bool(z.imag)
print(z.imag, " -> ", y, type(y))
"""
Explanation: Complex variables cannot be cast to floats or integers. We need to use z.real or z.imag to extract the part of the complex number we want:
End of explanation
"""
1 + 2, 1 - 2, 1 * 2, 1 / 2
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0
# Integer division of float numbers
3.0 // 2.0
# Note! The power operator in Python isn't ^, but **
2 ** 2
"""
Explanation: Operators and comparisons
Most operators and comparisons in Python work as one would expect:
Arithmetic operators +, -, *, /, // (integer division), ** (power)
End of explanation
"""
True and False
not False
True or False
"""
Explanation: Note: The / operator always performs a floating point division in Python 3.x.
This is not true in Python 2.x, where the result of / is always an integer if the operands are integers.
to be more specific, 1/2 = 0.5 (float) in Python 3.x, and 1/2 = 0 (int) in Python 2.x (but 1.0/2 = 0.5 in Python 2.x).
The boolean operators are spelled out as the words and, not, or.
End of explanation
"""
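To make the division differences between the versions concrete, here is a short check (assuming Python 3):

```python
# In Python 3, / is true division and // is floor division
print(1 / 2)     # 0.5
print(1 // 2)    # 0
print(7 // 2)    # 3
print(7.0 // 2)  # 3.0 (floor division also works on floats)
```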
2 > 1, 2 < 1
2 > 2, 2 < 2
2 >= 2, 2 <= 2
# equality
[1,2] == [1,2]
# objects identical?
l1 = l2 = [1,2]
l1 is l2
"""
Explanation: Comparison operators >, <, >= (greater or equal), <= (less or equal), == equality, is identical.
End of explanation
"""
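The difference between == and is deserves a small demonstration: two lists with equal contents need not be the same object:

```python
l1 = [1, 2]
l2 = [1, 2]
print(l1 == l2)  # True: same contents
print(l1 is l2)  # False: two distinct objects in memory
l3 = l1
print(l3 is l1)  # True: both names refer to the same object
```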
s = "Hello world"
type(s)
# length of the string: the number of characters
len(s)
# replace a substring in a string with something else
s2 = s.replace("world", "test")
print(s2)
"""
Explanation: Compound types: Strings, List and dictionaries
Strings
Strings are the variable type that is used for storing text messages.
End of explanation
"""
s[0]
"""
Explanation: We can index a character in a string using []:
End of explanation
"""
s[0:5]
s[4:5]
"""
Explanation: Heads up MATLAB users: Indexing start at 0!
We can extract a part of a string using the syntax [start:stop], which extracts characters between index start and stop -1 (the character at index stop is not included):
End of explanation
"""
s[:5]
s[6:]
s[:]
"""
Explanation: If we omit either (or both) of start or stop from [start:stop], the default is the beginning and the end of the string, respectively:
End of explanation
"""
s[::1]
s[::2]
"""
Explanation: We can also define the step size using the syntax [start:end:step] (the default value for step is 1, as we saw above):
End of explanation
"""
print("str1", "str2", "str3") # The print function separates its arguments with a space
print("str1", 1.0, False, -1j) # The print function converts all arguments to strings
print("str1" + "str2" + "str3") # strings added with + are concatenated without space
print("value = %f" % 1.0) # we can use C-style string formatting
# this formatting creates a string
s2 = "value1 = %.2f. value2 = %d" % (3.1415, 1.5)
print(s2)
# alternative, more intuitive way of formatting a string
s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5)
print(s3)
"""
Explanation: This technique is called slicing. Read more about the syntax here: http://docs.python.org/release/2.7.3/library/functions.html?highlight=slice#slice
Python has a very rich set of functions for text processing. See for example http://docs.python.org/2/library/string.html for more information.
String formatting examples
End of explanation
"""
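As a side note, newer Python versions (3.6 and later) also offer a third formatting style, f-strings, which embed expressions directly in the string literal:

```python
value1, value2 = 3.1415, 1.5
# expressions inside {} are evaluated; format specs follow a colon
s4 = f"value1 = {value1:.2f}, value2 = {value2}"
print(s4)  # value1 = 3.14, value2 = 1.5
```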
l = [1,2,3,4]
print(type(l))
print(l)
"""
Explanation: List
Lists are very similar to strings, except that each element can be of any type.
The syntax for creating lists in Python is [...]:
End of explanation
"""
print(l)
print(l[1:3])
print(l[::2])
"""
Explanation: We can use the same slicing techniques to manipulate lists as we could use on strings:
End of explanation
"""
l[0]
"""
Explanation: Heads up MATLAB users: Indexing starts at 0!
End of explanation
"""
l = [1, 'a', 1.0, 1-1j]
print(l)
"""
Explanation: Elements in a list do not all have to be of the same type:
End of explanation
"""
nested_list = [1, [2, [3, [4, [5]]]]]
nested_list
"""
Explanation: Python lists can be inhomogeneous and arbitrarily nested:
End of explanation
"""
start = 10
stop = 30
step = 2
range(start, stop, step)
# in python 3 range generates an iterator, which can be converted to a list using 'list(...)'.
# It has no effect in python 2
list(range(start, stop, step))
list(range(-10, 10))
s
# convert a string to a list by type casting:
s2 = list(s)
s2
# sorting lists
s2.sort()
print(s2)
"""
Explanation: Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the range function:
End of explanation
"""
# create a new empty list
l = []
# add an elements using `append`
l.append("A")
l.append("d")
l.append("d")
print(l)
"""
Explanation: Adding, inserting, modifying, and removing elements from lists
End of explanation
"""
l[1] = "p"
l[2] = "p"
print(l)
l[1:3] = ["d", "d"]
print(l)
"""
Explanation: We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.
End of explanation
"""
l.insert(0, "i")
l.insert(1, "n")
l.insert(2, "s")
l.insert(3, "e")
l.insert(4, "r")
l.insert(5, "t")
print(l)
"""
Explanation: Insert an element at a specific index using insert
End of explanation
"""
l.remove("A")
print(l)
"""
Explanation: Remove first element with specific value using 'remove'
End of explanation
"""
del l[7]
del l[6]
print(l)
"""
Explanation: Remove an element at a specific location using del:
End of explanation
"""
point = (10, 20)
print(point, type(point))
point = 10, 20
print(point, type(point))
"""
Explanation: See help(list) for more details, or read the online documentation
Tuples
Tuples are like lists, except that they cannot be modified once created, that is they are immutable.
In Python, tuples are created using the syntax (..., ..., ...), or even ..., ...:
End of explanation
"""
x, y = point
print("x =", x)
print("y =", y)
"""
Explanation: We can unpack a tuple by assigning it to a comma-separated list of variables:
End of explanation
"""
point[0] = 20
"""
Explanation: If we try to assign a new value to an element in a tuple we get an error:
End of explanation
"""
params = {"parameter1" : 1.0,
          "parameter2" : 2.0,
          "parameter3" : 3.0,}
print(type(params))
print(params)
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
params["parameter1"] = "A"
params["parameter2"] = "B"
# add a new entry
params["parameter4"] = "D"
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
print("parameter4 = " + str(params["parameter4"]))
"""
Explanation: Dictionaries
Dictionaries are also like lists, except that each element is a key-value pair. The syntax for dictionaries is {key1 : value1, ...}:
End of explanation
"""
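A few more common dictionary operations are worth knowing (a small sketch; the dictionary here is just an example):

```python
d = {"a": 1, "b": 2}
print("a" in d)       # membership test on the keys -> True
print(d.get("c", 0))  # get with a default avoids a KeyError -> 0
del d["b"]            # remove a key-value pair
print(list(d.keys()), list(d.values()))
```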
statement1 = False
statement2 = False
if statement1:
    print("statement1 is True")
elif statement2:
    print("statement2 is True")
else:
    print("statement1 and statement2 are False")
"""
Explanation: Control Flow
Conditional statements: if, elif, else
The Python syntax for conditional execution of code uses the keywords if, elif (else if), else:
End of explanation
"""
statement1 = statement2 = True
if statement1:
    if statement2:
        print("both statement1 and statement2 are True")

# Bad indentation!
if statement1:
    if statement2:
    print("both statement1 and statement2 are True") # this line is not properly indented

statement1 = False

if statement1:
    print("printed if statement1 is True")

    print("still inside the if block")

if statement1:
    print("printed if statement1 is True")

print("now outside the if block")
"""
Explanation: For the first time, here we encounter a peculiar and unusual aspect of the Python programming language: program blocks are defined by their indentation level.
Compare to the equivalent C code:
if (statement1)
{
printf("statement1 is True\n");
}
else if (statement2)
{
printf("statement2 is True\n");
}
else
{
printf("statement1 and statement2 are False\n");
}
In C, blocks are defined by the enclosing curly brackets { and }, and the level of indentation (white space before the code statements) does not matter (it is completely optional).
But in Python, the extent of a code block is defined by the indentation level (usually a tab or say four white spaces). This means that we have to be careful to indent our code correctly, or else we will get syntax errors.
Examples:
End of explanation
"""
for x in [1,2,3]:
    print(x)
"""
Explanation: Loops
In Python, loops can be programmed in a number of different ways. The most common is the for loop, which is used together with iterable objects, such as lists. The basic syntax is:
for loops:
End of explanation
"""
for x in range(4): # by default range starts at 0
    print(x)
"""
Explanation: The for loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the for loop. For example:
End of explanation
"""
for x in range(-3,3):
    print(x)

for word in ["scientific", "computing", "with", "python"]:
    print(word)
"""
Explanation: Note: range(4) does not include 4 !
End of explanation
"""
for key, value in params.items():
    print(key + " = " + str(value))
"""
Explanation: To iterate over key-value pairs of a dictionary:
End of explanation
"""
for idx, x in enumerate(range(-3,3)):
    print(idx, x)
"""
Explanation: Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this:
End of explanation
"""
l1 = [x**2 for x in range(0,5)]
print(l1)
"""
Explanation: List comprehensions: Creating lists using for loops:
A convenient and compact way to initialize lists:
End of explanation
"""
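List comprehensions can also include a condition that filters which elements are kept:

```python
# squares of the even numbers in 0..9
evens_squared = [x**2 for x in range(10) if x % 2 == 0]
print(evens_squared)  # [0, 4, 16, 36, 64]
```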
i = 0
while i < 5:
    print(i)
    i = i + 1
print("done")
"""
Explanation: while loops:
End of explanation
"""
def func0():
    print("test")
func0()
"""
Explanation: Note that the print("done") statement is not part of the while loop body because of the difference in indentation.
Functions
A function in Python is defined using the keyword def, followed by a function name, a signature within parentheses (), and a colon :. The following code, with one additional level of indentation, is the function body.
End of explanation
"""
def func1(s):
    """
    Print a string 's' and tell how many characters it has
    """
    print(s + " has " + str(len(s)) + " characters")
help(func1)
func1("test")
"""
Explanation: Optionally, but highly recommended, we can define a so-called "docstring", which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
End of explanation
"""
def square(x):
    """
    Return the square of x.
    """
    return x ** 2
square(4)
"""
Explanation: Functions that return a value use the return keyword:
End of explanation
"""
def powers(x):
    """
    Return a few powers of x.
    """
    return x ** 2, x ** 3, x ** 4
powers(3)
x2, x3, x4 = powers(3)
print(x3)
"""
Explanation: We can return multiple values from a function using tuples (see above):
End of explanation
"""
def myfunc(x, p=2, debug=False):
    if debug:
        print("evaluating myfunc for x = " + str(x) + " using exponent p = " + str(p))
    return x**p
"""
Explanation: Default argument and keyword arguments
In a definition of a function, we can give default values to the arguments the function takes:
End of explanation
"""
myfunc(5)
myfunc(5, debug=True)
"""
Explanation: If we don't provide a value for the debug argument when calling the function myfunc it defaults to the value provided in the function definition:
End of explanation
"""
myfunc(p=3, debug=True, x=7)
"""
Explanation: If we explicitly list the names of the arguments in the function calls, they do not need to come in the same order as in the function definition. These are called keyword arguments, and they are often very useful in functions that take a lot of optional arguments.
End of explanation
"""
f1 = lambda x: x**2
# is equivalent to
def f2(x):
    return x**2
f1(2), f2(2)
"""
Explanation: Unnamed functions (lambda function)
In Python we can also create unnamed functions, using the lambda keyword:
End of explanation
"""
# map is a built-in python function
map(lambda x: x**2, range(-3,4))
# in python 3 we can use `list(...)` to convert the iterator to an explicit list
list(map(lambda x: x**2, range(-3,4)))
"""
Explanation: This technique is useful for example when we want to pass a simple function as an argument to another function, like this:
End of explanation
"""
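Another common use of lambda functions is as the key argument of sorted, or as the predicate for filter (a small sketch with made-up data):

```python
pairs = [(1, 'b'), (3, 'c'), (2, 'a')]
# sort the tuples by their second element
print(sorted(pairs, key=lambda p: p[1]))  # [(2, 'a'), (1, 'b'), (3, 'c')]
# keep only the positive numbers
print(list(filter(lambda x: x > 0, [-2, -1, 0, 1, 2])))  # [1, 2]
```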
class Point:
    """
    Simple class for representing a point in a Cartesian coordinate system.
    """

    def __init__(self, x, y):
        """
        Create a new Point at x, y.
        """
        self.x = x
        self.y = y

    def translate(self, dx, dy):
        """
        Translate the point by dx and dy in the x and y direction.
        """
        self.x += dx
        self.y += dy

    def __str__(self):
        return("Point at [%f, %f]" % (self.x, self.y))
"""
Explanation: Classes
Classes are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object.
In Python a class can contain attributes (variables) and methods (functions).
A class is defined almost like a function, but using the class keyword, and the class definition usually contains a number of class method definitions (a function in a class).
Each class method should have an argument self as its first argument. This object is a self-reference.
Some class method names have special meaning, for example:
__init__: The name of the method that is invoked when the object is first created.
__str__ : A method that is invoked when a simple string representation of the class is needed, as for example when printed.
There are many more, see http://docs.python.org/2/reference/datamodel.html#special-method-names
End of explanation
"""
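As a further illustration of special method names (a separate sketch, not part of the Point class above), defining __add__ lets instances support the + operator, and __repr__ controls how they are displayed:

```python
class Vec:
    """Minimal 2D vector illustrating __add__ and __repr__."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # invoked by the + operator
        return Vec(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return "Vec(%s, %s)" % (self.x, self.y)

v = Vec(1, 2) + Vec(3, 4)
print(v)  # Vec(4, 6)
```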
p1 = Point(0, 0) # this will invoke the __init__ method in the Point class
print(p1) # this will invoke the __str__ method
"""
Explanation: To create a new instance of a class:
End of explanation
"""
p2 = Point(1, 1)
p1.translate(0.25, 1.5)
print(p1)
print(p2)
"""
Explanation: To invoke a class method in the class instance p:
End of explanation
"""
%%file mymodule.py
"""
Example of a python module. Contains a variable called my_variable,
a function called my_function, and a class called MyClass.
"""
my_variable = 0
def my_function():
    """
    Example function
    """
    return my_variable

class MyClass:
    """
    Example class.
    """

    def __init__(self):
        self.variable = my_variable

    def set_variable(self, new_value):
        """
        Set self.variable to a new value
        """
        self.variable = new_value

    def get_variable(self):
        return self.variable
"""
Explanation: Note that calling class methods can modify the state of that particular class instance, but does not affect other class instances or any global variables.
That is one of the nice things about object-oriented design: code such as functions and related variables are grouped in separate and independent entities.
Modules
One of the most important concepts in good programming is to reuse code and avoid repetitions.
The idea is to write functions and classes with a well-defined purpose and scope, and reuse these instead of repeating similar code in different part of a program (modular programming). The result is usually that readability and maintainability of a program is greatly improved. What this means in practice is that our programs have fewer bugs, are easier to extend and debug/troubleshoot.
Python supports modular programming at different levels. Functions and classes are examples of tools for low-level modular programming. Python modules are a higher-level modular programming construct, where we can collect related variables, functions and classes in a module. A python module is defined in a python file (with file-ending .py), and it can be made accessible to other Python modules and programs using the import statement.
Consider the following example: the file mymodule.py contains simple example implementations of a variable, function and a class:
End of explanation
"""
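There are a few variants of the import statement; for example we can import selected names directly into the current namespace, or bind the module to a shorter alias (shown here with the standard math module so the sketch is self-contained):

```python
from math import cos, pi   # import selected names directly
import math as m           # import the module under an alias
print(cos(pi))   # -1.0
print(m.sqrt(4)) # 2.0
```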
import mymodule
"""
Explanation: We can import the module mymodule into our Python program using import:
End of explanation
"""
help(mymodule)
mymodule.my_variable
mymodule.my_function()
my_class = mymodule.MyClass()
my_class.set_variable(10)
my_class.get_variable()
"""
Explanation: Use help(module) to get a summary of what the module provides:
End of explanation
"""
reload(mymodule) # works only in python 2
"""
Explanation: If we make changes to the code in mymodule.py, we need to reload it using reload:
End of explanation
"""
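In Python 3 the built-in reload is gone; the equivalent function lives in the importlib module:

```python
import importlib
import math                 # any already-imported module can be reloaded
math = importlib.reload(math)
print(math.pi)
```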
raise Exception("description of the error")
"""
Explanation: Exceptions
In Python errors are managed with a special language construct called "Exceptions". When errors occur exceptions can be raised, which interrupts the normal program flow and fallback to somewhere else in the code where the closest try-except statement is defined.
To generate an exception we can use the raise statement, which takes an argument that must be an instance of the class BaseException or a class derived from it.
End of explanation
"""
try:
    print("test")
    # generate an error: the variable test is not defined
    print(test)
except:
    print("Caught an exception")
"""
Explanation: A typical use of exceptions is to abort functions when some error condition occurs, for example:
def my_function(arguments):
    if not verify(arguments):
        raise Exception("Invalid arguments")
    # rest of the code goes here
To gracefully catch errors that are generated by functions and class methods, or by the Python interpreter itself, use the try and except statements:
try:
    # normal code goes here
except:
    # code for error handling goes here
    # this code is not executed unless the code
    # above generated an error
For example:
End of explanation
"""
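The try statement also supports else (runs only if no exception was raised) and finally (always runs, exception or not) clauses:

```python
try:
    result = 10 / 2
except ZeroDivisionError:
    print("division by zero")
else:
    print("no exception, result =", result)
finally:
    print("this always runs")
```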
try:
    print("test")
    # generate an error: the variable test is not defined
    print(test)
except Exception as e:
    print("Caught an exception:" + str(e))
"""
Explanation: To get information about the error, we can access the Exception class instance that describes the exception by using for example:
except Exception as e:
End of explanation
"""
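It is usually better to catch specific exception classes rather than a bare except, so unrelated errors are not silently swallowed (a small sketch with a hypothetical helper function):

```python
def safe_div(a, b):
    try:
        return a / b
    except ZeroDivisionError as e:
        # only division-by-zero is handled here; other errors propagate
        print("Caught:", e)
        return float('inf')

print(safe_div(1, 0))
print(safe_div(4, 2))  # 2.0
```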
%load_ext version_information
%version_information
"""
Explanation: Further reading
http://www.python.org - The official web page of the Python programming language.
http://www.python.org/dev/peps/pep-0008 - Style guide for Python programming. Highly recommended.
http://www.greenteapress.com/thinkpython/ - A free book on Python programming.
Python Essential Reference - A good reference book on Python programming.
Versions
End of explanation
"""
|
LucaCanali/Miscellaneous | PLSQL_Neural_Network/MNIST_oracle_plsql.ipynb | apache-2.0 | %%bash
sqlplus -s mnist/mnist@dbserver:1521/orcl.cern.ch <<EOF
-- create the table for test data, where the images of digits are stored as arrays of type utl_nla_array
create table testdata_array as
select a.image_id, a.label,
cast(multiset(select val from testdata where image_id=a.image_id order by val_id) as utl_nla_array_flt) image_array
from (select distinct image_id, label from testdata) a order by image_id;
-- create the table with tensor definitions, the tensors are stored as arrays of type utl_nla_array
create table tensors_array as
select a.name, cast(multiset(select val from tensors where name=a.name order by val_id) as utl_nla_array_flt) tensor_vals
from (select distinct name from tensors) a;
EOF
"""
Explanation: How to recognize handwritten digits in Oracle PL/SQL
This notebook contains the steps to deploy an example system for recognizing handwritten digits of the MNIST dataset using Oracle and an artificial neural network serving engine implemented in PL/SQL
Author: Luca.Canali@cern.ch - July 2016
Steps:
Load test data and tensors into Oracle tables
Post-process those tables to make use of Oracle's linear algebra package UTL_NLA
Create a custom package MNIST to serve the artificial neural network
Test the package MNIST with test data consisitng of 10000 images of handwritten digits
Instructions to load the test data and tensors into Oracle
Note: you don't need this step if you previously followed the training steps in the notebook MNIST_tensorflow_exp_to_oracle.ipynb
Create the database user MNIST
For example run this using an account with DBA privileges:
<code>
SQL> create user mnist identified by mnist default tablespace users quota unlimited on users;
SQL> grant connect, create table, create procedure to mnist;
SQL> grant read, write on directory DATA_PUMP_DIR to mnist;
</code>
The dump file can be imported as follows:
Download the Oracle datapump file MNIST_tables.dmp.gz (see Github repository) and unzip it. Move the .dmp file to a valid directory, for example the directory DATA_PUMP_DIR which by default is $ORACLE_HOME/rdbms/log
use impdp to load the data (this has been tested on Oracle 11.2.0.4 and 12.1.0.2):
<code>
<b>impdp mnist/mnist tables=testdata,tensors directory=DATA_PUMP_DIR dumpfile=MNIST_tables.dmp</b>
</code>
Post process the tables, this is because the following makes use of Oracle's linear algebra package UTL_NLA
End of explanation
"""
%%bash
sqlplus -s mnist/mnist@dbserver:1521/orcl.cern.ch <<EOF
create or replace package mnist
as
-- MNIST scoring engine in PL/SQL
-- Author: Luca.Canali@cern.ch, July 2016
g_b0_array utl_nla_array_flt;
g_W0_matrix utl_nla_array_flt;
g_b1_array utl_nla_array_flt;
g_W1_matrix utl_nla_array_flt;
function score(p_testimage_array utl_nla_array_flt) return number;
procedure init;
end;
/
create or replace package body mnist
as
procedure init
/* initialize the tensors that make up the neural network */
as
begin
SELECT tensor_vals INTO g_W0_matrix FROM tensors_array WHERE name='W0';
SELECT tensor_vals INTO g_W1_matrix FROM tensors_array WHERE name='W1';
SELECT tensor_vals INTO g_b0_array FROM tensors_array WHERE name='b0';
SELECT tensor_vals INTO g_b1_array FROM tensors_array WHERE name='b1';
end;
procedure print_debug(p_array utl_nla_array_flt)
/* useful for debugging purposes, prints an array to screen. requires set serveroutput on */
as
begin
dbms_output.put_line('***************');
for i in 1..p_array.count loop
dbms_output.put_line('p_array(' || i ||') = ' || TO_CHAR(p_array(i),'9999.9999'));
end loop;
dbms_output.put_line('**************');
end;
function argmax(p_array utl_nla_array_flt) return integer
as
v_index number;
v_maxval float;
begin
v_index := 1;
v_maxval := p_array(v_index);
for i in 2..p_array.count loop
if ( p_array(i) > v_maxval) then
v_index := i;
v_maxval := p_array(v_index);
end if;
end loop;
return(v_index);
end;
function score(p_testimage_array utl_nla_array_flt) return number
as
v_Y0 utl_nla_array_flt;
v_output_array utl_nla_array_flt;
begin
v_Y0 := g_b0_array;
/* this is part of the computation of the hidden layer, Y0 = W0_matrix * p_test_image_array + B0 */
/* utl_nla.blas_gemv performs matrix multiplication and vector addition */
utl_nla.blas_gemv(
trans => 'N',
m => 100,
n => 784,
alpha => 1.0,
a => g_W0_matrix,
lda => 100,
x => p_testimage_array,
incx => 1,
beta => 1.0,
y => v_Y0,
incy => 1,
pack => 'C'
);
/* This is part of the computation of the hidden layer: Y0 -> sigmoid(Y0) */
for i in 1..v_Y0.count loop
v_Y0(i) := 1 / ( 1 + exp(-v_Y0(i)));
end loop;
v_output_array := g_b1_array;
/* this is part of the computation of the output layer, Y1 = W1_matrix * Y0 + B1 */
/* utl_nla.blas_gemv performs matrix multiplication and vector addition */
utl_nla.blas_gemv(
trans => 'N',
m => 10,
n => 100,
alpha => 1.0,
a => g_W1_matrix,
lda => 10,
x => v_Y0,
incx => 1,
beta => 1.0,
y => v_output_array,
incy => 1,
pack => 'C'
);
/* print_debug(v_output_array); */
/* v_output_array needs to be passed via softmax function to provide a distribution probability */
/* here we are only interested in the maximum value which gives the predicted number with an offset of 1 */
return (argmax(v_output_array) - 1);
end;
end;
/
EOF
"""
Explanation: Create the package that runs the neural network in PL/SQL
Notes:
- The main function is MNIST.SCORE: it takes as input an image to process (p_testimage_array utl_nla_array_flt) and returns the predicted number.
- The procedure MNIST.INIT loads the tensors from the table tensors_array into PL/SQL global variables.
End of explanation
"""
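For reference, the computation performed by MNIST.SCORE corresponds to the following two-layer forward pass, sketched here in NumPy (the shapes follow the PL/SQL code; the random tensors below are stand-ins for the trained W0, b0, W1, b1, not the actual values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(image, W0, b0, W1, b1):
    """Return the predicted digit for a flattened 28x28 image (784 values)."""
    y0 = sigmoid(W0 @ image + b0)   # hidden layer, shape (100,)
    out = W1 @ y0 + b1              # output layer, shape (10,)
    # softmax is monotonic, so argmax of the raw outputs is enough
    return int(np.argmax(out))

# random stand-ins for the trained tensors (hypothetical values)
rng = np.random.default_rng(0)
W0, b0 = rng.normal(size=(100, 784)), rng.normal(size=100)
W1, b1 = rng.normal(size=(10, 100)), rng.normal(size=10)
digit = score(rng.normal(size=784), W0, b0, W1, b1)
print(digit)
```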
%%bash
sqlplus -s mnist/mnist@dbserver:1521/orcl.cern.ch <<EOF
exec mnist.init
select mnist.score(image_array), label from testdata_array where rownum=1;
EOF
"""
Explanation: Test the scoring engine with one test image
Notes:
- the images of the handwritten digits are encoded in the field image_array of the table testdata_array
- The label field of testdata_array contains the value of the digit
- When MNIST.SCORE output is equal to the label value, the neural network has predicted correctly the digit
End of explanation
"""
%%bash
sqlplus -s mnist/mnist@dbserver:1521/orcl.cern.ch <<EOF
exec mnist.init
set timing on
select sum(decode(mnist.score(image_array), label, 1, 0)) "Images correctly identified",
count(*) "Total number of images"
from testdata_array;
EOF
"""
Explanation: Test the scoring engine with all the test images
Notes:
- the SQL below shows that the neural network and the serving engine MNIST.SCORE correctly predict 9787 out of 10000 images, that is, an accuracy of ~98% on the test set
- The execution time for processing 10000 test images is about 2 minutes, that is ~12 ms to process each image on average
End of explanation
"""
|
thsant/scipy-intro | 05._Matplotlib.ipynb | cc0-1.0 | %pylab inline
"""
Explanation: Matplotlib
End of explanation
"""
X = linspace(-pi, pi, 256)
C = cos(X)
S = sin(X)
"""
Explanation: Matplotlib is a module for creating 2D and 3D plots, created by John Hunter (2007). Its syntax is deliberately similar to MATLAB's plotting functions, easing the learning curve for users who want to replicate plots built in that environment. With a large user community, Matplotlib has many tutorials on the Web. Its official site presents a huge gallery of examples that lets researchers quickly identify the code needed for the type of plot they intend to use.
A plotting library
High-quality plots that can be used in scientific publications
Designed so that its function syntax is similar to the MATLAB equivalents
Example: displaying two functions, $\sin (\theta)$ and $\cos (\theta)$
Consider the domain $X$, made up of 256 points in the interval $[-\pi, \pi]$, and the functions $\cos(x)$ and $\sin(x)$:
End of explanation
"""
plot(X, C)
plot(X, S)
"""
Explanation: The plot of the two functions can easily be displayed with the plot function:
End of explanation
"""
fig = figure()
# Remove the right and top spines
ax = gca() # gca means 'get current axis'
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Move the axes and tick marks to the center of the plot
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
# Define the ranges shown on the ordinate and abscissa
xlim(-3.5, 3.5)
ylim(-1.25, 1.25)
# Set the text used for the axis tick marks
xticks([-pi, -pi/2, 0, pi/2, pi], [r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
yticks([-1, 0, +1], [r'$-1$', r'$0$', r'$+1$'])
# Annotate two points of interest: the sine and cosine of 2pi/3
theta = 2 * pi / 3
plot([theta, theta], [0, cos(theta)], color='red', linewidth=2.5, linestyle="--")
scatter([theta], [cos(theta)], 25, color='red')
annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
xy=(theta, sin(theta)), xycoords='data',
xytext=(+10, +30), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plot([theta, theta],[0, sin(theta)], color='green', linewidth=2.5, linestyle="--")
scatter([theta, ],[sin(theta), ], 25, color='green')
annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$',
xy=(theta, cos(theta)), xycoords='data',
xytext=(-90, -50), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
# Plot the functions
plot(X, C, color="red", linewidth=1.5, linestyle="-", label=r'$\cos(\theta)$')
plot(X, S, color="green", linewidth=1.5, linestyle="-", label=r'$\sin(\theta)$')
# Add the legend
legend(loc='upper left')
"""
Explanation: The example above shows a very simple plot. To illustrate the wide variety of customizations provided by Matplotlib, a more complex example is shown below, a modified version of the code presented by Rougier et al. Interested readers can find detailed explanations in Section 1.4, Matplotlib: plotting, of the SciPy Lecture Notes.
End of explanation
"""
fig.savefig('trig.tif', dpi=1200)
"""
Explanation: Saving figures to file
One of Matplotlib's roles is to help researchers prepare plots for publication in journals. When preparing a manuscript, it is common to come across guidelines like this one:
Your figures should be prepared at publication quality, using applications capable of generating high-resolution TIFF files (1200 dpi for line art and 300 dpi for color or half-tone art).
In Preparing Your Manuscript, Oxford Journals
Requirements like the above can easily be met with Matplotlib, whose savefig function can export a plot to a file on disk, in several formats, at a resolution chosen by the researcher:
End of explanation
"""
fig.savefig('trig.pdf')
"""
Explanation: In the absence of bitmap illustrations, an alternative is to store the image in a vector format, supported for example by PDF and EPS files:
End of explanation
"""
# 1 row, 2 columns, position 1
subplot(1, 2, 1)
plot(X, C, 'r-')
# 1 row, 2 columns, position 2
subplot(1, 2, 2)
plot(X, S, 'g-')
# 2 rows, 2 columns, position 1
subplot(2, 2, 1)
plot(X, C, 'r-')
# 2 rows, 2 columns, position 2
subplot(2, 2, 2)
plot(X, S, 'g-')
# 2 rows, 2 columns, position 3
subplot(2, 2, 3)
plot(X, [tan(x) for x in X], 'b-')
# 2 rows, 2 columns, position 4
subplot(2, 2, 4)
plot(X, [cosh(x) for x in X], 'c-')
"""
Explanation: Subplots
End of explanation
"""
n = 1024
X = random.normal(0,1,n)
Y = random.normal(0,1,n)
scatter(X,Y)
"""
Explanation: Other plot types
Scatter plots
End of explanation
"""
!head ./data/iris.data.txt
"""
Explanation: A classic: the Iris Dataset
This dataset is famous in the pattern recognition literature, having been first presented by R. A. Fisher in 1950. It contains 3 classes of the Iris plant: Iris Setosa, Iris Virginica and Iris Versicolor. Each class has 50 samples with 4 measurements: sepal length and width, petal length and width.
End of explanation
"""
iris_data = loadtxt('./data/iris.data.txt', usecols=(0,1,2,3))
iris_class = loadtxt('./data/iris.data.txt', dtype='string')[:,4]
setosa = iris_data[iris_class == 'Iris-setosa']
virginica = iris_data[iris_class == 'Iris-virginica']
versicolor = iris_data[iris_class == 'Iris-versicolor']
"""
Explanation: The code below simply loads the data for the 3 plant classes from the file:
End of explanation
"""
scatter(setosa[:,2], setosa[:,3], c='r', marker='o')
scatter(virginica[:,2], virginica[:,3], c='g', marker='s')
scatter(versicolor[:,2], versicolor[:,3], c='b', marker='^')
xlabel('petal length')
ylabel('petal width')
"""
Explanation: We can choose the marker and color used in a scatter plot. In the example below, red circles represent the Setosa data, blue triangles represent Versicolor, and green squares show the Virginica data:
End of explanation
"""
# note: scipy.misc.lena was removed in SciPy 1.0; on newer versions use
# scipy.misc.face() or load an image of your own with imread()
from scipy.misc import lena
L = lena()
imshow(L)
colorbar()
imshow(L, cmap=cm.gray)
colorbar()
"""
Explanation: Images
End of explanation
"""
x = arange(20)
y = random.rand(20) + 1.
print x
print y
bar(x, y)
"""
Explanation: Bar charts
End of explanation
"""
n = 12
X = arange(n)
Y = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)
axes([0.025, 0.025, 0.95, 0.95])
bar(X, Y, facecolor='#9999ff', edgecolor='gray')
for x, y in zip(X, Y):
text(x + 0.4, y + 0.05, '%.2f' % y, ha='center', va='bottom')
xlim(-.5, n)
xticks(())
ylim(0, 1.25)
yticks(())
"""
Explanation: A more elaborate example
End of explanation
"""
x = random.randn(10000)
n, bins, patches = hist(x, 100)
"""
Explanation: Histograms
End of explanation
"""
john = imread('./data/john-hunter.jpg')
imshow(john)
title('John D. Hunter')
axis('off')
"""
Explanation: The Matplotlib gallery
Many examples of how to use Matplotlib can be found in the gallery.
John Hunter (1968-2012)
On August 28, 2012, John D. Hunter, the creator of matplotlib, passed away due to complications during treatment for cancer. He had been diagnosed in July 2012, shortly after his talk at the SciPy conference, and died in August of the same year.
In recognition of his work, the Python community created the John Hunter Memorial Fund, aimed primarily at supporting the education of his three daughters.
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization | Classification/Week 3/Assignment 1/module-5-decision-tree-assignment-1-blank.ipynb | mit | import graphlab
graphlab.canvas.set_target('ipynb')
"""
Explanation: Identifying safe loans with decision trees
The LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors. In this notebook, you will build a classification model to predict whether or not a loan provided by LendingClub is likely to [default](https://en.wikipedia.org/wiki/Default_(finance)).
In this notebook you will use data from the LendingClub to predict whether a loan will be paid off in full or the loan will be charged off and possibly go into default. In this assignment you will:
Use SFrames to do some feature engineering.
Train a decision-tree on the LendingClub dataset.
Visualize the tree.
Predict whether a loan will default along with prediction probabilities (on a validation set).
Train a complex tree model and compare it to simple tree model.
Let's get started!
Fire up Graphlab Create
Make sure you have the latest version of GraphLab Create. If you don't find the decision tree module, then you would need to upgrade GraphLab Create using
pip install graphlab-create --upgrade
End of explanation
"""
loans = graphlab.SFrame('lending-club-data.gl/')
"""
Explanation: Load LendingClub dataset
We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is available here. Make sure you download the dataset before running the following command.
End of explanation
"""
loans.column_names()
"""
Explanation: Exploring some features
Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset.
End of explanation
"""
loans['grade'].show()
"""
Explanation: Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset.
End of explanation
"""
loans['home_ownership'].show()
"""
Explanation: We can see that over half of the loan grades are assigned values B or C. Each loan is assigned one of these grades, along with a more finely discretized feature called subgrade (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found here.
Now, let's look at a different feature.
End of explanation
"""
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
"""
Explanation: This feature describes whether the loanee is mortgaging, renting, or owns a home. We can see that a small percentage of the loanees own a home.
Exploring the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column 1 means a risky (bad) loan 0 means a safe loan.
In order to make this more intuitive and consistent with the lectures, we reassign the target to be:
* +1 as a safe loan,
* -1 as a risky (bad) loan.
We put this in a new column called safe_loans.
End of explanation
"""
loans['safe_loans'].show(view = 'Categorical')
"""
Explanation: Now, let us explore the distribution of the column safe_loans. This gives us a sense of how many safe and risky loans are present in the dataset.
End of explanation
"""
features = ['grade', # grade of the loan
'sub_grade', # sub-grade of the loan
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'term', # the term of the loan
'last_delinq_none', # has borrower had a delinquency
'last_major_derog_none', # has borrower had 90 day or worse rating
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to date
]
target = 'safe_loans' # prediction target (y) (+1 means safe, -1 is risky)
# Extract the feature columns and target column
loans = loans[features + [target]]
"""
Explanation: You should have:
* Around 81% safe loans
* Around 19% risky loans
It looks like most of these loans are safe loans (thankfully). But this does make our problem of identifying risky loans challenging.
Features for the classification algorithm
In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features.
End of explanation
"""
safe_loans_raw = loans[loans[target] == +1]
risky_loans_raw = loans[loans[target] == -1]
print "Number of safe loans : %s" % len(safe_loans_raw)
print "Number of risky loans : %s" % len(risky_loans_raw)
"""
Explanation: What remains now is a subset of features and the target that we will use for the rest of this notebook.
Sample data to balance classes
As we explored above, our data is disproportionally full of safe loans. Let's create two datasets: one with just the safe loans (safe_loans_raw) and one with just the risky loans (risky_loans_raw).
End of explanation
"""
print "Percentage of safe loans : ", len(safe_loans_raw) / float(len(safe_loans_raw) + len(risky_loans_raw))
print "Percentage of risky loans : ", len(risky_loans_raw) / float(len(safe_loans_raw) + len(risky_loans_raw))
"""
Explanation: Now, write some code to compute below the percentage of safe and risky loans in the dataset and validate these numbers against what was given using .show earlier in the assignment:
End of explanation
"""
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
# Append the risky_loans with the downsampled version of safe_loans
loans_data = risky_loans.append(safe_loans)
"""
Explanation: One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed=1 so everyone gets the same results.
End of explanation
"""
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
"""
Explanation: Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%.
End of explanation
"""
train_data, validation_data = loans_data.random_split(.8, seed=1)
"""
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Split data into training and validation sets
We split the data into training and validation sets using an 80/20 split and specifying seed=1 so everyone gets the same results.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters (this is known as model selection). Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on validation set, while evaluation of the final selected model should always be on test data. Typically, we would also save a portion of the data (a real test set) to test our final model on or use cross-validation on the training set to select our final model. But for the learning purposes of this assignment, we won't do that.
End of explanation
"""
decision_tree_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features)
"""
Explanation: Use decision tree to build a classifier
Now, let's use the built-in GraphLab Create decision tree learner to create a loan prediction model on the training data. (In the next assignment, you will implement your own decision tree learning algorithm.) Our feature columns and target column have already been decided above. Use validation_set=None to get the same results as everyone else.
End of explanation
"""
small_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 2)
"""
Explanation: Visualizing a learned model
As noted in the documentation, the max depth of the tree is typically capped at 6. However, such a tree can be hard to visualize graphically. Here, we instead learn a smaller model with a max depth of 2 to gain some intuition by visualizing the learned tree.
End of explanation
"""
small_model.show(view="Tree")
"""
Explanation: In the view that is provided by GraphLab Create, you can see each node, and each split at each node. This visualization is great for considering what happens when this model predicts the target of a new data point.
Note: To better understand this visual:
* The root node is represented using pink.
* Intermediate nodes are in green.
* Leaf nodes in blue and orange.
End of explanation
"""
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
"""
Explanation: Making predictions
Let's consider two positive and two negative examples from the validation set and see what the model predicts. We will do the following:
* Predict whether or not a loan is safe.
* Predict the probability that a loan is safe.
End of explanation
"""
decision_tree_model.predict(sample_validation_data)
"""
Explanation: Explore label predictions
Now, we will use our model to predict whether or not a loan is likely to default. For each row in the sample_validation_data, use the decision_tree_model to predict whether or not the loan is classified as a safe loan.
Hint: Be sure to use the .predict() method.
End of explanation
"""
decision_tree_model.predict(sample_validation_data, output_type='probability')
"""
Explanation: Quiz Question: What percentage of the predictions on sample_validation_data did decision_tree_model get correct?
Explore probability predictions
For each row in the sample_validation_data, what is the probability (according decision_tree_model) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using decision_tree_model on sample_validation_data:
End of explanation
"""
small_model.predict(sample_validation_data, output_type='probability')
"""
Explanation: Quiz Question: Which loan has the highest probability of being classified as a safe loan?
Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1?
Tricky predictions!
Now, we will explore something pretty interesting. For each row in the sample_validation_data, what is the probability (according to small_model) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using small_model on sample_validation_data:
End of explanation
"""
sample_validation_data[1]
"""
Explanation: Quiz Question: Notice that the probability predictions are exactly the same for the 2nd and 3rd loans. Why would this happen?
Answer: The same leaf node
Visualize the prediction on a tree
Note that you should be able to look at the small tree, traverse it yourself, and visualize the prediction being made. Consider the following point in the sample_validation_data
End of explanation
"""
small_model.show(view="Tree")
"""
Explanation: Let's visualize the small tree here to do the traversing for this data point.
End of explanation
"""
small_model.predict(sample_validation_data)
"""
Explanation: Note: In the tree visualization above, the values at the leaf nodes are not class predictions but scores (a slightly advanced concept that is out of the scope of this course). You can read more about this here. If the score is $\geq$ 0, the class +1 is predicted. Otherwise, if the score < 0, we predict class -1.
Quiz Question: Based on the visualized tree, what prediction would you make for this data point?
Answer: -1
Now, let's verify your prediction by examining the prediction made using GraphLab Create. Use the .predict function on small_model.
End of explanation
"""
print small_model.evaluate(train_data)['accuracy']
print decision_tree_model.evaluate(train_data)['accuracy']
"""
Explanation: Evaluating accuracy of the decision tree model
Recall that the accuracy is defined as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
Let us start by evaluating the accuracy of the small_model and decision_tree_model on the training data
End of explanation
"""
print small_model.evaluate(validation_data)['accuracy']
print decision_tree_model.evaluate(validation_data)['accuracy']
"""
Explanation: Checkpoint: You should see that the small_model performs worse than the decision_tree_model on the training data.
Now, let us evaluate the accuracy of the small_model and decision_tree_model on the entire validation_data, not just the subsample considered above.
End of explanation
"""
big_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 10)
"""
Explanation: Quiz Question: What is the accuracy of decision_tree_model on the validation set, rounded to the nearest .01?
Evaluating accuracy of a complex decision tree model
Here, we will train a large decision tree with max_depth=10. This will allow the learned tree to become very deep, and result in a very complex model. Recall that in lecture, we prefer simpler models with similar predictive power. This will be an example of a more complicated model which has similar predictive power, i.e. something we don't want.
End of explanation
"""
print big_model.evaluate(train_data)['accuracy']
print big_model.evaluate(validation_data)['accuracy']
"""
Explanation: Now, let us evaluate big_model on the training set and validation set.
End of explanation
"""
predictions = decision_tree_model.predict(validation_data)
"""
Explanation: Checkpoint: We should see that big_model has even better performance on the training set than decision_tree_model did on the training set.
Quiz Question: How does the performance of big_model on the validation set compare to decision_tree_model on the validation set? Is this a sign of overfitting?
Quantifying the cost of mistakes
Every mistake the model makes costs money. In this section, we will try and quantify the cost of each mistake made by the model.
Assume the following:
False negatives: Loans that were actually safe but were predicted to be risky. This results in an oppurtunity cost of losing a loan that would have otherwise been accepted.
False positives: Loans that were actually risky but were predicted to be safe. These are much more expensive because it results in a risky loan being given.
Correct predictions: All correct predictions don't typically incur any cost.
Let's write code that can compute the cost of mistakes made by the model. Complete the following 4 steps:
1. First, let us compute the predictions made by the model.
1. Second, compute the number of false positives.
2. Third, compute the number of false negatives.
3. Finally, compute the cost of mistakes made by the model by adding up the costs of false negatives and false positives.
First, let us make predictions on validation_data using the decision_tree_model:
End of explanation
"""
false_positives = 0
false_negatives = 0
for item in xrange(len(validation_data)):
if predictions[item] != validation_data['safe_loans'][item]:
if predictions[item] == 1:
false_positives += 1
else:
false_negatives += 1
print false_positives
print false_negatives
"""
Explanation: False positives are predictions where the model predicts +1 but the true label is -1. Complete the following code block for the number of false positives:
End of explanation
"""
10000 * false_negatives
20000 * false_positives
(10000 * false_negatives) + (20000 * false_positives)
"""
Explanation: False negatives are predictions where the model predicts -1 but the true label is +1. Complete the following code block for the number of false negatives:
Quiz Question: Let us assume that each mistake costs money:
* Assume a cost of \$10,000 per false negative.
* Assume a cost of \$20,000 per false positive.
What is the total cost of mistakes made by decision_tree_model on validation_data?
End of explanation
"""
|
vadim-ivlev/STUDY | handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/DecisionTree-checkpoint.ipynb | mit | import numpy as np
import pandas as pd
from sklearn import tree
input_file = "e:/sundog-consult/udemy/datascience/PastHires.csv"
df = pd.read_csv(input_file, header = 0)
df.head()
"""
Explanation: Decison Trees
First we'll load some fake data on past hires I made up. Note how we use pandas to convert a csv file into a DataFrame:
End of explanation
"""
d = {'Y': 1, 'N': 0}
df['Hired'] = df['Hired'].map(d)
df['Employed?'] = df['Employed?'].map(d)
df['Top-tier school'] = df['Top-tier school'].map(d)
df['Interned'] = df['Interned'].map(d)
d = {'BS': 0, 'MS': 1, 'PhD': 2}
df['Level of Education'] = df['Level of Education'].map(d)
df.head()
"""
Explanation: scikit-learn needs everything to be numerical for decision trees to work. So, we'll map Y,N to 1,0 and levels of education to some scale of 0-2. In the real world, you'd need to think about how to deal with unexpected or missing data! By using map(), we know we'll get NaN for unexpected values.
End of explanation
"""
features = list(df.columns[:6])
features
"""
Explanation: Next we need to separate the features from the target column that we're trying to build a decision tree for.
End of explanation
"""
y = df["Hired"]
X = df[features]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X,y)
"""
Explanation: Now actually construct the decision tree:
End of explanation
"""
from IPython.display import Image
from sklearn.externals.six import StringIO  # note: removed in scikit-learn 0.23; use "from io import StringIO" on newer versions
import pydotplus
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data,
feature_names=features)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
"""
Explanation: ... and display it. Note you need to have pydotplus installed for this to work. (!pip install pydotplus)
To read this decision tree, each condition branches left for "true" and right for "false". When you end up at a value, the value array represents how many samples exist in each target value. So value = [0. 5.] means there are 0 "no hires" and 5 "hires" by the time we get to that point. value = [3. 0.] means 3 no-hires and 0 hires.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=10)
clf = clf.fit(X, y)
#Predict employment of an employed 10-year veteran
print (clf.predict([[10, 1, 4, 0, 0, 0]]))
#...and an unemployed 10-year veteran
print (clf.predict([[10, 0, 4, 0, 0, 0]]))
"""
Explanation: Ensemble learning: using a random forest
We'll use a random forest of 10 decision trees to predict employment of specific candidate profiles:
End of explanation
"""
|
econandrew/povcalnetjson | notebooks/integral-constrained-cubic-spline.ipynb | mit | # The y values are simply the mean-scaled derivatives of the Lorenz curve
dL = np.diff(L)
dp = np.diff(p)
y = ymean * dL/dp
#y = np.hstack((0.0, y))
# And we arbitrarily assign these y values to the mid-points of the p values
pmid = np.add(p[1:],p[:-1])/2
#pmid = np.hstack((0.0, pmid))
plt.plot(y, pmid, 'b.')
# Find the least squares fit
X = np.vstack((np.power(y, 3), np.power(y, 2), np.power(y, 1), np.power(y, 0)))
# bX = y
import numpy.linalg
coef = numpy.linalg.lstsq(np.transpose(X), np.transpose(pmid))[0]
lscubic = lambda y: np.matmul(coef, np.vstack((np.power(y, 3), np.power(y, 2), np.power(y, 1), np.power(y, 0))))
plt.plot(ygrid, lscubic(ygrid), 'g-');
yknots = inverse(lscubic, (0, 20))(pmid).tolist()
plt.plot(yknots, pmid, "g.")
"""
Explanation: We define a Lorenz curve with 5 observed points $(L_i, p_i)$ for $i \in 0, 1, 2, 3, 4 = k$ so that a single cubic cannot fit the implied CDF (except by chance). Then four spline segments of the CDF are defined, each as:
$$
F(y) = a_i y^3 + b_i y^2 + c_i y + d_i,\qquad \mathrm{for}\quad y_{i-1} < y \leq y_i
$$
Then we have a total of $4k-3$ interior constraints:
- $k$ integral (Lorenz) constraints
- $k-1$ point constraints (continuity)
- $k-1$ first derivative constraints ($C^1$)
- $k-1$ second derivative constraints ($C^2$)
and we choose 4 further endpoint constraints from amongst the following:
- endpoint constraints $y_\min$, $y_\max$
- endpoint derivative constraints $f(y_\min)$, $f(y_\max)$
- endpoint second derivative constraints $f'(y_\min)$, $f'(y_\max)$
We can initialise all of these constraints with a least-squares fit as a good initial guess, using the midpoint fit as in the direct cubic spline interpolation.
End of explanation
"""
N_segments = len(pmid)-1
a = [coef[0]]*N_segments
b = [coef[1]]*N_segments
c = [coef[2]]*N_segments
d = [coef[3]]*N_segments
def make_cubic_spline(knots, a, b, c, d):
def cubic_spline(x):
if x < knots[0] or x > knots[-1]:
return float("NaN")
        for i in range(len(knots) - 1):
            if x <= knots[i + 1]:
return a[i] * x**3 + b[i] * x**2 + c[i] * x**1 + d[i]
return np.vectorize(cubic_spline)
# Confirm that the piecewise cubic spline is correctly defined
plt.plot(yknots, pmid, "g.")
plt.plot(ygrid, make_cubic_spline(yknots, a, b, c, d)(ygrid), 'r-')
def errfun(yknots, a, b, c, d):
ymin, ymax = 2, 20
f_ymin, f_ymax = 0, 0
err_lorenz = 0
err_point = 0
err_d1 = 0
err_d2 = 0
err_endpoints = 0
err_nonincr = 0
for i in range(N_segments):
lhs = L[i+1] - L[i]
rhs = (3/4)*a[i]*(yknots[i+1] - yknots[i])**4 + (2/3)*b[i]*(yknots[i+1]-yknots[i])**3 + (1/2)*c[i]*(yknots[i+1]-yknots[i])**2
err_lorenz += (lhs-rhs)**2
for i in range(N_segments-1):
lhs = a[i]*yknots[i+1]**3 + b[i]*yknots[i+1]**2 + c[i]*yknots[i+1]**1 + d[i]
rhs = a[i+1]*yknots[i+1]**3 + b[i+1]*yknots[i+1]**2 + c[i+1]*yknots[i+1]**1 + d[i+1]
err_point += (lhs-rhs)**2
for i in range(N_segments-1):
lhs = 3*a[i]*yknots[i+1]**2 + 2*b[i]*yknots[i+1]**1 + c[i]
rhs = 3*a[i+1]*yknots[i+1]**2 + 2*b[i+1]*yknots[i+1]**1 + c[i+1]
err_d1 += (lhs-rhs)**2
for i in range(N_segments-1):
lhs = 6*a[i]*yknots[i+1] + 2*b[i]
        rhs = 6*a[i+1]*yknots[i+1] + 2*b[i+1]
err_d2 += (lhs-rhs)**2
err_endpoints += (
(a[0]*ymin**3 + b[0]*ymin**2 + c[0]*ymin**1 + d[0]*ymin**0 - 0.0) ** 2 +
(a[-1]*ymax**3 + b[-1]*ymax**2 + c[-1]*ymax**1 + d[-1]*ymax**0 - 1.0) ** 2 +
(6*a[0]*ymin + 2*b[0] - f_ymin) ** 2 +
(6*a[-1]*ymax + 2*b[-1] - f_ymax) ** 2
)
d1_grid = [3*a[i]*y**2 + 2*b[i]*y**1 + c[i] for y in np.linspace(ymin, ymax, 100)]
d1_grid_negs = -sum([p for p in d1_grid if p < 0])
err_nonincr = 10 * d1_grid_negs
#print("errors:", err_lorenz, err_point, err_d1, err_d2, err_endpoints)
return err_lorenz + err_point + err_d1 + err_d2 + err_endpoints + err_nonincr
def collapse_args(yknots, a, b, c, d):
return yknots + a + b + c + d
def extract_args(args):
if isinstance(args, list):
args = np.array(args)
xyknots, args = args[:len(yknots)], args[len(yknots):]
xa, args = args[:len(a)], args[len(a):]
xb, args = args[:len(b)], args[len(b):]
xc, args = args[:len(c)], args[len(c):]
xd, args = args[:len(d)], args[len(d):]
return xyknots.tolist(), xa.tolist(), xb.tolist(), xc.tolist(), xd.tolist()
def errfun_wrapper(args):
return errfun(*extract_args(args))
errfun(yknots, a, b, c, d)
errfun_wrapper(collapse_args(yknots, a, b, c, d))
import scipy.optimize
result = scipy.optimize.minimize(errfun_wrapper,collapse_args(yknots, a, b, c, d))
yknots, a, b, c, d = extract_args(result.x)
errfun(yknots, a, b, c, d)
plt.plot(y, pmid, 'b.')
plt.plot(ygrid, make_cubic_spline(yknots, a, b, c, d)(ygrid), 'r-')
#plt.plot(ygrid, a[-1]*ygrid**3 + b[-1]*ygrid**2 + c[-1]*ygrid**1 + d[-1]*ygrid**0)
i = 2
lhs = L[i+1] - L[i]
rhs = (3/4)*a[i]*(yknots[i+1] - yknots[i])**4 + (2/3)*b[i]*(yknots[i+1]-yknots[i])**3 + (1/2)*c[i]*(yknots[i+1]-yknots[i])**2
print(lhs, rhs)
plt.plot(xgrid, lorenz(make_cubic_spline(yknots, a, b, c, d), 3, 18, ymean)(xgrid))
#plt.plot(xgrid, inverse(make_cubic_spline(yknots, a, b, c, d), (3, 18), (0,1))(xgrid))
"""
Explanation: Sure enough, the least square fit is imperfect. Now we use this curve to assign our starting values for all the segments.
End of explanation
"""
|
dianafprieto/SS_2017 | .ipynb_checkpoints/06_NB_VTKPython_Scalar-checkpoint.ipynb | mit | %gui qt
import vtk
from vtkviewer import SimpleVtkViewer
#help(vtk.vtkRectilinearGridReader())
"""
Explanation: <img src="imgs/header.png">
Visualization techniques for scalar fields in VTK + Python
Recap: The VTK pipeline
<img src="imgs/vtk_pipeline.png", align=left>
$~$
VTK look-up tables and transfer functions
End of explanation
"""
# do not forget to call "Update()" at the end of the reader
rectGridReader = vtk.vtkRectilinearGridReader()
rectGridReader.SetFileName("data/jet4_0.500.vtk")
rectGridReader.Update()
"""
Explanation: 1. Data input (source)
End of explanation
"""
%qtconsole
rectGridOutline = vtk.vtkRectilinearGridOutlineFilter()
rectGridOutline.SetInputData(rectGridReader.GetOutput())
"""
Explanation: 2. Filters
Filter 1: vtkRectilinearGridOutlineFilter() creates wireframe outline for a rectilinear grid.
End of explanation
"""
rectGridOutlineMapper = vtk.vtkPolyDataMapper()
rectGridOutlineMapper.SetInputConnection(rectGridOutline.GetOutputPort())
"""
Explanation: 3. Mappers
Mapper: vtkPolyDataMapper() maps vtkPolyData to graphics primitives.
End of explanation
"""
outlineActor = vtk.vtkActor()
outlineActor.SetMapper(rectGridOutlineMapper)
outlineActor.GetProperty().SetColor(0, 0, 0)
"""
Explanation: 4. Actors
End of explanation
"""
#Option 1: Default vtk render window
renderer = vtk.vtkRenderer()
renderer.SetBackground(0.5, 0.5, 0.5)
renderer.AddActor(outlineActor)
renderer.ResetCamera()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(500, 500)
renderWindow.Render()
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renderWindow)
iren.Start()
#Option 2: Using the vtk-viewer for Jupyter to interactively modify the pipeline
vtkSimpleWin = SimpleVtkViewer()
vtkSimpleWin.resize(1000,800)
vtkSimpleWin.hide_axes()
vtkSimpleWin.add_actor(outlineActor)
vtkSimpleWin.add_actor(gridGeomActor)
vtkSimpleWin.ren.SetBackground(0.5, 0.5, 0.5)
vtkSimpleWin.ren.ResetCamera()
"""
Explanation: 5. Renderers and Windows
End of explanation
"""
|
zomansud/coursera | ml-regression/week-1/week-1-simple-regression-assignment-blank.ipynb | mit | import graphlab
"""
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems, but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/')
sales.head()
"""
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
train_data,test_data = sales.random_split(.8,seed=0)
"""
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
"""
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, use the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
"""
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
"""
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
"""
Explanation: As we see we get the same answer both ways
End of explanation
"""
def simple_linear_regression(input_feature, output):
# compute total inputs
total_N = input_feature.size()
# compute the sum of input_feature and output
sum_yi = output.sum()
sum_xi = input_feature.sum()
# compute the product of the output and the input_feature and its sum
product_yi_xi = output * input_feature
sum_product_yi_xi = product_yi_xi.sum()
# compute the squared value of the input_feature and its sum
squared_xi = input_feature * input_feature
sum_squared_xi = squared_xi.sum()
# use the formula for the slope
slope = float(sum_product_yi_xi - (float(sum_yi * sum_xi) / total_N)) / (sum_squared_xi - (float(sum_xi * sum_xi) / total_N))
# use the formula for the intercept
intercept = float(sum_yi - (slope * sum_xi)) / total_N
return (intercept, slope)
"""
Explanation: Aside: The Python notation x.xxe+yy means x.xx * 10^(yy), e.g. 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
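If you prefer to sanity-check the closed-form solution outside of GraphLab, here is an equivalent sketch in plain NumPy (illustrative only; the function name is made up, and the quiz expects the SArray version):

```python
import numpy as np

def simple_linear_regression_np(x, y):
    """Closed-form slope and intercept for y ~ intercept + slope * x."""
    n = x.size
    sum_x, sum_y = x.sum(), y.sum()
    # slope = (sum(x*y) - sum(x)*sum(y)/n) / (sum(x^2) - sum(x)^2/n)
    slope = (np.dot(x, y) - sum_x * sum_y / n) / (np.dot(x, x) - sum_x ** 2 / n)
    intercept = (sum_y - slope * sum_x) / n
    return intercept, slope

x = np.arange(5, dtype=float)
y = 1 + 1 * x  # points exactly on a line -> intercept 1, slope 1
print(simple_linear_regression_np(x, y))  # → (1.0, 1.0)
```

Running it on data that lie exactly on a line recovers the true intercept and slope, the same sanity check used for the SArray version below.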
End of explanation
"""
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
"""
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
"""
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
"""
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
"""
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = intercept + slope * input_feature
return predicted_values
"""
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
"""
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
"""
Explanation: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
"""
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
prediction = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residual = output - prediction
# square the residuals and add them up
residual_squared = residual * residual
RSS = residual_squared.sum()
return(RSS)
"""
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
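Again, an equivalent plain-NumPy sketch you could use to check your SArray implementation (illustrative only; the function name is made up):

```python
import numpy as np

def rss(x, y, intercept, slope):
    """Residual sum of squares for a simple linear model."""
    residuals = y - (intercept + slope * x)
    return float(np.dot(residuals, residuals))

x = np.arange(5, dtype=float)
y = 1 + 1 * x
print(rss(x, y, 1.0, 1.0))  # → 0.0, since the data lie exactly on the line
```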
End of explanation
"""
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
"""
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
"""
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
"""
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
"""
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = float(output - intercept) / slope
return estimated_feature
"""
Explanation: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
"""
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
"""
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
"""
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print "Intercept: " + str(bedrooms_intercept)
print "Slope: " + str(bedrooms_slope)
"""
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
"""
# Compute RSS when using bedrooms on TEST data:
rss_prices_on_bedrooms_test = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
print 'The RSS of predicting Prices based on Bedrooms on TEST Data is : ' + str(rss_prices_on_bedrooms_test)
# Compute RSS when using squarefeet on TEST data:
rss_prices_on_sqft_test = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet on TEST Data is : ' + str(rss_prices_on_sqft_test)
print min(rss_prices_on_bedrooms_test, rss_prices_on_sqft_test)
"""
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation
"""
|
rdhyee/diversity-census-calc | zzz-Census_Geo.ipynb | apache-2.0 | !ls /Users/raymondyee/Downloads/tl_2010_06001_bg00/tl_2010_06001_bg00.shp
!rm /Users/raymondyee/Downloads/tl_2010_06001_bg00/tl_2010_06001_bg00.geojson
!/Library/Frameworks/Python.framework/Versions/Current/bin/ogr2ogr -f GeoJSON /Users/raymondyee/Downloads/tl_2010_06001_bg00/tl_2010_06001_bg00.geojson /Users/raymondyee/Downloads/tl_2010_06001_bg00/tl_2010_06001_bg00.shp
#!ogr2ogr -f GeoJSON -t_srs crs:84 /Users/raymondyee/Downloads/tl_2010_06001_bg00/tl_2010_06001_bg00.geojson /Users/raymondyee/Downloads/tl_2010_06001_bg00/tl_2010_06001_bg00.shp
"""
Explanation: Goal: be able to compute geo files (.shp, kml, geojson, topojson) for arbitrary census geographic entities
what I had gotten from MN population center
https://www.nhgis.org/
http://127.0.0.1:8888/notebooks/wwod13/nhgis_2013_10_16.ipynb
What are useful libraries?
From Learning Geospatial Analysis with Python > Preface > What you need for this book : Safari Books Online:
Python Version 2.x (minimum Version 2.5)
GDAL/OGR Version 1.7.1 or later
GEOS Version 3.2.2 or later
PyShp 1.1.6 or later
Shapely Version 1.2 or later
Proj Version 4.7 or later
PyProj Version 1.8.6 or later
NumPy
PNGCanvas
Python Imaging Library (PIL)
using ogr2ogr command line to convert shapefile to geojson
On my mac, I installed binaries from Build Notes [KyngChaos Wiki] -> but I see
conda install gdal
will give you command line tools like ogr2ogr
Documentation: GDAL: ogr2ogr -- see whether How to convert Shapefiles to GeoJSON maps for use on GitHub (and why you should) » Ben Balter gives the right incantation:
ogr2ogr -f GeoJSON -t_srs crs:84 [name].geojson [name].shp
End of explanation
"""
import gdal
import ogr
import osr
import gdalnumeric
import gdalconst
"""
Explanation: ogr2ogr -f GeoJSON tl_2010_06001_bg00.geojson tl_2010_06001_bg00.shp
ogr2ogr -f GeoJSON tl_2010_06001_tract10.geojson tl_2010_06001_tract10.shp
GDAL
conda install gdal
GDAL 1.10.0 : Python Package Index
Welcome to the Python GDAL/OGR Cookbook! — Python GDAL/OGR Cookbook 1.0 documentation
End of explanation
"""
|
LiaoPan/blaze | docs/source/_static/notebooks/xray-dask.ipynb | bsd-3-clause | import xray
import dask.array as da
import numpy as np
import dask
"""
Explanation: xray + dask
This was modified from a notebook originally written by Stephan Hoyer
Weather data -- especially the results of numerical weather simulations -- is big. Some of the biggest super computers make weather forecasts, and they save their output on increasingly high resolution grids. Even for data analysis purposes, it's easy to need to process 10s or 100s of GB of data.
There are many excellent tools for working with weather data, which is usually stored in the netCDF file format. Many of these have support for out-of-core data, notably including the command line tools NCO and CDO. There are even Python tools, including a netCDF4 library and Iris. However, none of these tools matched the ease of use of pandas. We knew there was a better way, so we decided to write xray, a library for working with multi-dimensional labeled data.
The latest release of xray includes support for processing datasets that don't fit into memory using dask, a new Python library that extends NumPy to out-of-core datasets by blocking arrays into small chunks and using a simple task scheduling abstraction. Dask allows xray to easily process out of core data and simultaneously make use of all our CPUs resources.
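The chunked-evaluation idea behind dask can be illustrated without dask at all: process one block at a time so only a small piece of the array is ever resident in memory. A toy sketch in plain NumPy (an illustration of the strategy, not dask's actual implementation):

```python
import numpy as np

def chunked_mean(arr, chunk_size):
    """Mean of a 1-D array computed block by block, so only
    `chunk_size` elements need to be resident at a time."""
    total, count = 0.0, 0
    for start in range(0, arr.size, chunk_size):
        block = arr[start:start + chunk_size]  # in dask, each block is one task
        total += block.sum()
        count += block.size
    return total / count

x = np.arange(10.0)
print(chunked_mean(x, chunk_size=3))  # 4.5, identical to x.mean()
```

Dask generalizes this to N-dimensional blocks and schedules the per-block tasks across threads.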
Loading data
First, we'll import dask and setup a ThreadPool for processing tasks. Dask currently doesn't do this automatically.
End of explanation
"""
!ls /home/mrocklin/data/ecmwf/*.nc3
ds = xray.open_mfdataset('/home/mrocklin/data/ecmwf/*.nc3', engine='scipy')
ds
"""
Explanation: We'll use the new xray.open_mfdataset function to open archived weather data from ECMWF. It opens a glob of netCDF files on my local disk and automatically infers how to combine them into a few logical arrays by reading their metadata:
End of explanation
"""
np.prod(ds.dims.values()) * 8 * 2 ** -30
"""
Explanation: 11 GB of Data
End of explanation
"""
!cat /proc/meminfo | grep MemTotal
"""
Explanation: 4GB of Memory
End of explanation
"""
# x.mean(2)
ds.mean('longitude')
ds.sel(time="2014-04", latitude=((ds.latitude > 10) & (ds.latitude < 40)))
"""
Explanation: Index with meaningful values, not numbers
End of explanation
"""
%time ds.groupby('time.month').mean('time').load_data()
"""
Explanation: Groupby operations and datetime handling
End of explanation
"""
11e9 / 113 / 1e6 # MB/s
"""
Explanation: Bandwidth
End of explanation
"""
|
d00d/quantNotebooks | .ipynb_checkpoints/06142017-PRA-Python-Ciriculum-Notebook-1-checkpoint.ipynb | unlicense | import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnchoredText
#import matplotlib.animation as animation
%matplotlib inline
weight =[258.1,257.1,256.6,257.7,257.6,254.3,252.5,252.6,251.7]
#plot(weight, 'm', label='line1', linewidth=4)
plt.title('Q2 2017 - Progress on Weight Loss Program')
plt.grid(True)
plt.xlabel('Weigh in #')
plt.ylabel('Weight in Lbs.')
ax = plt.gca()
at = AnchoredText(
"Rob's Daily Weight Loss progress",
loc=3, prop=dict(size=10), frameon=True,
)
at.patch.set_boxstyle("round,pad=0.,rounding_size=0.2")
ax.add_artist(at)
plt.plot(weight,'m', linewidth=4, linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10)
"""
Explanation: Matplotlib
Working with matplotlib for 2D graphing.
End of explanation
"""
import pandas as pd
import numpy as np
df = pd.DataFrame(data=np.array([[1,2,3,4], [5,6,7,8]], dtype=int), columns=['Pacific','Mountain','Central','Eastern'])
plt.plot(df)
df
"""
Explanation: Dataframes
Working with Pandas and DataFrames. Importing necessary packages.
End of explanation
"""
from IPython.display import display, Math, Latex
display(Math(r'\sqrt{a^2 + b^3}'))
#%lsmagic
#%quickref
#%debug
"""
Explanation: LaTeX usage for math representations.
Any LaTeX math should be inside $$:
$$c = \sqrt{a^2 + b^2}$$
End of explanation
"""
|
dusenberrymw/incubator-systemml | samples/jupyter-notebooks/ALS_python_demo.ipynb | apache-2.0 | from pyspark.sql import SparkSession
from pyspark.sql.types import *
from systemml import MLContext, dml
spark = SparkSession\
.builder\
.appName("als-example")\
.getOrCreate()
schema = StructType([StructField("movieId", IntegerType(), True),
StructField("userId", IntegerType(), True),
StructField("rating", IntegerType(), True),
StructField("date", StringType(), True)])
ratings = spark.read.csv("./netflix/training_set_normalized/mv_0*.txt", schema = schema)
ratings = ratings.select('userId', 'movieId', 'rating')
ratings.show(10)
ratings.describe().show()
"""
Explanation: Scaling Alternating Least Squares Using Apache SystemML
Recommendation systems based on the Alternating Least Squares (ALS) algorithm have gained popularity in recent years because, in general, they perform better than content-based approaches.
ALS is a matrix factorization algorithm, where a user-item matrix is factorized into two low-rank non-orthogonal matrices:
$$R = U M$$
The elements, $r_{ij}$, of matrix $R$ can represent, for example, ratings assigned to the $j$th movie by the $i$th user.
This matrix factorization assumes that each user can be described by $k$ latent features. Similarly, each item/movie can also be represented by $k$ latent features. The user rating of a particular movie can thus be approximated by the product of two $k$-dimensional vectors:
$$r_{ij} = {\bf u}_i^T {\bf m}_j$$
The vectors ${\bf u}_i$ are rows of $U$ and ${\bf m}_j$'s are columns of $M$. These can be learned by minimizing the cost function:
$$f(U, M) = \sum_{i,j} \left( r_{ij} - {\bf u}_i^T {\bf m}_j \right)^2 = \| R - UM \|^2$$
Regularized ALS
In this notebook, we'll implement the ALS algorithm with weighted-$\lambda$-regularization as formulated by Zhou et al. The cost function with such regularization is:
$$f(U, M) = \sum_{i,j} I_{ij}\left( r_{ij} - {\bf u}_i^T {\bf m}_j \right)^2 + \lambda \left( \sum_i n_{u_i} \| {\bf u}_i\|^2 + \sum_j n_{m_j} \|{\bf m}_j\|^2 \right)$$
Here, $\lambda$ is the usual regularization parameter. $n_{u_i}$ and $n_{m_j}$ represent the number of ratings of user $i$ and movie $j$ respectively. $I_{ij}$ is an indicator variable such that $I_{ij} = 1$ if $r_{ij}$ exists and $I_{ij} = 0$ otherwise.
If we fix ${\bf m}_j$, we can determine ${\bf u}_i$ by solving a regularized least squares problem:
$$ \frac{1}{2} \frac{\partial f}{\partial {\bf u}_i} = 0$$
This gives the following matrix equation:
$$\left(M \, \text{diag}({\bf I}_i^T) \, M^{T} + \lambda n_{u_i} E\right) {\bf u}_i = M {\bf r}_i^T$$
Here ${\bf r}_i^T$ is the $i$th row of $R$. Similarly, ${\bf I}_i$ is the $i$th row of the matrix $I = [I_{ij}]$. Please see Zhou et al. for details.
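Before turning to the DML implementation, the per-user update can be sketched with NumPy on a tiny dense example (a didactic sketch with made-up toy data, not the script used below):

```python
import numpy as np

def update_users(R, M, lam):
    """One half-sweep of regularized ALS: solve for each user vector u_i,
    holding the item-factor matrix M (k x n) fixed."""
    m, _ = R.shape
    k = M.shape[0]
    U = np.zeros((m, k))
    for i in range(m):
        observed = R[i] != 0                    # I_i: which ratings exist
        M_obs = M[:, observed]                  # item factors for rated items
        n_ui = observed.sum()                   # n_{u_i}: ratings by user i
        A = M_obs @ M_obs.T + lam * n_ui * np.eye(k)
        b = M_obs @ R[i, observed]              # M r_i^T restricted to observed entries
        U[i] = np.linalg.solve(A, b)
    return U

# Made-up toy ratings (0 = unrated) and an arbitrary item-factor matrix.
R = np.array([[5., 3., 0., 1., 0.],
              [4., 0., 0., 1., 0.],
              [1., 1., 0., 5., 4.],
              [0., 0., 5., 4., 0.]])
M = np.array([[0.5, 0.1, 0.9, 0.3, 0.7],
              [0.2, 0.8, 0.4, 0.6, 0.1]])
U = update_users(R, M, lam=0.01)
print(U.shape)  # (4, 2)
```

The item update is symmetric: fix $U$ and solve the analogous system for each ${\bf m}_j$; alternating the two updates until the loss stops decreasing is the whole algorithm.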
Reading Netflix Movie Ratings Data
In this example, we'll use Netflix movie ratings. This data set can be downloaded from here. We'll use spark to read movie ratings data into a dataframe. The csv files have four columns: MovieID, UserID, Rating, Date.
End of explanation
"""
#-----------------------------------------------------------------
# Create kernel in SystemML's DSL using the R-like syntax for ALS
# Algorithms available at : https://systemml.apache.org/algorithms
# Below algorithm based on ALS-CG.dml
#-----------------------------------------------------------------
als_dml = \
"""
# Default values of some parameters
r = rank
max_iter = 50
check = TRUE
thr = 0.01
R = table(X[,1], X[,2], X[,3])
# check the input matrix R; if some rows or columns contain only zeros, remove them from R
R_nonzero_ind = R != 0;
row_nonzeros = rowSums(R_nonzero_ind);
col_nonzeros = t(colSums (R_nonzero_ind));
orig_nonzero_rows_ind = row_nonzeros != 0;
orig_nonzero_cols_ind = col_nonzeros != 0;
num_zero_rows = nrow(R) - sum(orig_nonzero_rows_ind);
num_zero_cols = ncol(R) - sum(orig_nonzero_cols_ind);
if (num_zero_rows > 0) {
print("Matrix R contains empty rows! These rows will be removed.");
R = removeEmpty(target = R, margin = "rows");
}
if (num_zero_cols > 0) {
print ("Matrix R contains empty columns! These columns will be removed.");
R = removeEmpty(target = R, margin = "cols");
}
if (num_zero_rows > 0 | num_zero_cols > 0) {
print("Recomputing nonzero rows and columns!");
R_nonzero_ind = R != 0;
row_nonzeros = rowSums(R_nonzero_ind);
col_nonzeros = t(colSums (R_nonzero_ind));
}
###### MAIN PART ######
m = nrow(R);
n = ncol(R);
# initializing factor matrices
U = rand(rows = m, cols = r, min = -0.5, max = 0.5);
M = rand(rows = n, cols = r, min = -0.5, max = 0.5);
# initializing transformed matrices
Rt = t(R);
loss = matrix(0, rows=max_iter+1, cols=1)
if (check) {
loss[1,] = sum(R_nonzero_ind * (R - (U %*% t(M)))^2) + lambda * (sum((U^2) * row_nonzeros) +
sum((M^2) * col_nonzeros));
print("----- Initial train loss: " + toString(loss[1,1]) + " -----");
}
lambda_I = diag (matrix (lambda, rows = r, cols = 1));
it = 0;
converged = FALSE;
while ((it < max_iter) & (!converged)) {
it = it + 1;
# keep M fixed and update U
parfor (i in 1:m) {
M_nonzero_ind = t(R[i,] != 0);
M_nonzero = removeEmpty(target=M * M_nonzero_ind, margin="rows");
A1 = (t(M_nonzero) %*% M_nonzero) + (as.scalar(row_nonzeros[i,1]) * lambda_I); # coefficient matrix
U[i,] = t(solve(A1, t(R[i,] %*% M)));
}
# keep U fixed and update M
parfor (j in 1:n) {
U_nonzero_ind = t(Rt[j,] != 0)
U_nonzero = removeEmpty(target=U * U_nonzero_ind, margin="rows");
A2 = (t(U_nonzero) %*% U_nonzero) + (as.scalar(col_nonzeros[j,1]) * lambda_I); # coefficient matrix
M[j,] = t(solve(A2, t(Rt[j,] %*% U)));
}
# check for convergence
if (check) {
loss_init = as.scalar(loss[it,1])
loss_cur = sum(R_nonzero_ind * (R - (U %*% t(M)))^2) + lambda * (sum((U^2) * row_nonzeros) +
sum((M^2) * col_nonzeros));
loss_dec = (loss_init - loss_cur) / loss_init;
print("Train loss at iteration (M) " + it + ": " + loss_cur + " loss-dec " + loss_dec);
if (loss_dec >= 0 & loss_dec < thr | loss_init == 0) {
print("----- ALS converged after " + it + " iterations!");
converged = TRUE;
}
loss[it+1,1] = loss_cur
}
} # end of while loop
loss = loss[1:it+1,1]
if (check) {
print("----- Final train loss: " + toString(loss[it+1,1]) + " -----");
}
if (!converged) {
print("Max iteration achieved but not converged!");
}
# inject 0s in U if original R had empty rows
if (num_zero_rows > 0) {
U = removeEmpty(target = diag(orig_nonzero_rows_ind), margin = "cols") %*% U;
}
# inject 0s in R if original V had empty rows
if (num_zero_cols > 0) {
M = removeEmpty(target = diag(orig_nonzero_cols_ind), margin = "cols") %*% M;
}
M = t(M);
"""
"""
Explanation: ALS implementation using DML
The following script implements the regularized ALS algorithm as described above. One thing to note here is that we remove empty rows/columns from the rating matrix before running the algorithm. We'll add back the zero rows and columns to matrices $U$ and $M$ after the algorithm converges.
End of explanation
"""
ml = MLContext(sc)
# Define input/output variables for DML script
alsScript = dml(als_dml).input("X", ratings) \
.input("lambda", 0.01) \
.input("rank", 100) \
.output("U", "M", "loss")
# Execute script
res = ml.execute(alsScript)
U, M, loss = res.get('U','M', "loss")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(loss.toNumPy(), 'o');
"""
Explanation: Running the Algorithm
We'll first create an MLContext object which is the entry point for SystemML. Inputs and outputs are defined through a dml function.
End of explanation
"""
predict_dml = \
"""
R = table(R[,1], R[,2], R[,3])
K = 5
Rrows = nrow(R);
Rcols = ncol(R);
zero_cols_ind = (colSums(M != 0)) == 0;
K = min(Rcols - sum(zero_cols_ind), K);
n = nrow(X);
Urows = nrow(U);
Mcols = ncol(M);
X_user_max = max(X[,1]);
if (X_user_max > Rrows) {
stop("Predictions cannot be provided. Maximum user-id exceeds the number of rows of R.");
}
if (Urows != Rrows | Mcols != Rcols) {
stop("Number of rows of U (columns of M) does not match the number of rows (column) of R.");
}
# creates a projection matrix to select users
s = seq(1, n);
ones = matrix(1, rows = n, cols = 1);
P = table(s, X[,1], ones, n, Urows);
# selects users from factor U
U_prime = P %*% U;
# calculate rating matrix for selected users
R_prime = U_prime %*% M;
# selects users from original R
R_users = P %*% R;
# create indicator matrix to remove existing ratings for given users
I = R_users == 0;
# removes already-rated items, creating the user2item matrix
R_prime = R_prime * I;
# stores sorted movies for selected users
R_top_indices = matrix(0, rows = nrow (R_prime), cols = K);
R_top_values = matrix(0, rows = nrow (R_prime), cols = K);
# a large number to mask the max ratings
range = max(R_prime) - min(R_prime) + 1;
# uses rowIndexMax/rowMaxs to update kth ratings
for (i in 1:K){
rowIndexMax = rowIndexMax(R_prime);
rowMaxs = rowMaxs(R_prime);
R_top_indices[,i] = rowIndexMax;
R_top_values[,i] = rowMaxs;
R_prime = R_prime - range * table(seq (1, nrow(R_prime), 1), rowIndexMax, nrow(R_prime), ncol(R_prime));
}
R_top_indices = R_top_indices * (R_top_values > 0);
# cbind users as a first column
R_top_indices = cbind(X[,1], R_top_indices);
R_top_values = cbind(X[,1], R_top_values);
"""
# user for which we want to recommend movies
ids = [116,126,130,131,133,142,149,158,164,168,169,177,178,183,188,189,192,195,199,201,215,231,242,247,248,
250,261,265,266,267,268,283,291,296,298,299,301,302,304,305,307,308,310,312,314,330,331,333,352,358,363,
368,369,379,383,384,385,392,413,416,424,437,439,440,442,453,462,466,470,471,477,478,479,481,485,490,491]
users = spark.createDataFrame([[i] for i in ids])
predScript = dml(predict_dml).input("R", ratings) \
.input("X", users) \
.input("U", U) \
.input("M", M) \
.output("R_top_indices")
pred = ml.execute(predScript).get("R_top_indices")
pred = pred.toNumPy()
"""
Explanation: Predictions
Once $U$ and $M$ are learned from the data, we can recommend movies for any users. If $U'$ represent the users for which we seek recommendations, we first obtain the predicted ratings for all the movies by users in $U'$:
$$R' = U' M$$
Finally, we sort the ratings for each user and present the top 5 movies with highest predicted ratings. The following dml script implements this. Since we're using very low rank in this example, these recommendations are not meaningful.
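The same top-K selection can be sketched in NumPy (illustrative data and function name; the DML script below uses a masking trick with `rowIndexMax` instead of a full sort):

```python
import numpy as np

def top_k_recommendations(U_sel, M, R_sel, k=5):
    """Predicted ratings for selected users, excluding already-rated items."""
    scores = U_sel @ M                  # R' = U' M
    scores[R_sel != 0] = -np.inf        # mask items the user already rated
    # indices of the k highest-scoring unrated items per user
    top = np.argsort(-scores, axis=1)[:, :k]
    return top

U_sel = np.array([[1.0, 0.0]])
M = np.array([[0.1, 0.9, 0.5],
              [0.8, 0.2, 0.4]])
R_sel = np.array([[0.0, 4.0, 0.0]])     # item 1 already rated -> excluded
print(top_k_recommendations(U_sel, M, R_sel, k=2))  # → [[2 0]]
```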
End of explanation
"""
import pandas as pd
titles = pd.read_csv("./netflix/movie_titles.csv", header=None, sep=';', names=['movieID', 'year', 'title'])
import re
import wikipedia as wiki
from bs4 import BeautifulSoup as bs
import requests as rq
from IPython.core.display import Image, display
def get_poster(title):
if title.endswith('Bonus Material'):
title = title.strip('Bonus Material')
title = re.sub(r'[^\w\s]','',title)
matches = wiki.search(title)
if matches is None:
return
film = [s for s in matches if 'film)' in s]
film = film[0] if len(film) > 0 else matches[0]
try:
url = wiki.page(film).url
except:
return
html = rq.get(url)
if html.status_code == 200:
soup = bs(html.content, 'html.parser')
infobox = soup.find('table', class_="infobox")
if (infobox):
img = infobox.find('img')
if img:
display(Image('http:' + img['src']))
def show_recommendations(userId, preds):
for row in preds:
if int(row[0]) == userId:
print("\nrecommendations for userId", int(row[0]) )
            for title in titles.title[row[1:].astype(int)].values:
print(title)
get_poster(title)
break
show_recommendations(192, preds=pred)
"""
Explanation: Just for Fun!
Once we have the movie recommendations, we can show the movie posters for those recommendations. We'll fetch these movie posters from Wikipedia. If a movie's page doesn't exist on Wikipedia, we'll just list the movie title.
End of explanation
"""
|
tensorflow/privacy | tensorflow_privacy/privacy/privacy_tests/secret_sharer/secret_sharer_image_example.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
"""
# @title Install dependencies
# You may need to restart the runtime to use tensorflow-privacy.
from IPython.display import clear_output
!pip install git+https://github.com/tensorflow/privacy.git
clear_output()
# @title Imports
import functools
import os
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from PIL import Image, ImageDraw, ImageFont
from matplotlib import pyplot as plt
import math
from tensorflow_privacy.privacy.privacy_tests.secret_sharer.generate_secrets import SecretConfig, construct_secret, generate_random_sequences, construct_secret_dataset
from tensorflow_privacy.privacy.privacy_tests.secret_sharer.exposures import compute_exposure_interpolation, compute_exposure_extrapolation
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.utils import log_loss
"""
Explanation: Assess privacy risks of an image classification model with the Secret Sharer attack
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/privacy_tests/secret_sharer/secret_sharer_image_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/privacy_tests/secret_sharer/secret_sharer_image_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this colab, we adapt the secret sharer attack to an image classification model. We will train a model with "secrets", i.e. random images, inserted into the training data, and then evaluate whether the model has "memorized" those secrets.
Setup
You may set the runtime to use a GPU by Runtime > Change runtime type > Hardware accelerator.
End of explanation
"""
# @title Functions for defining model and loading data.
def small_cnn():
"""Setup a small CNN for image classification."""
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Input(shape=(32, 32, 3)))
for _ in range(3):
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10))
return model
def load_cifar10():
def convert_to_numpy(ds):
images, labels = [], []
for sample in tfds.as_numpy(ds):
images.append(sample['image'])
labels.append(sample['label'])
return np.array(images).astype(np.float32) / 255, np.array(labels).astype(np.int32)
ds_train = tfds.load('cifar10', split='train')
ds_test = tfds.load('cifar10', split='test')
x_train, y_train = convert_to_numpy(ds_train)
x_test, y_test = convert_to_numpy(ds_test)
# x has shape (n, 32, 32, 3), y has shape (n,)
return x_train, y_train, x_test, y_test
# @title Function for training the model.
def train_model(x_train, y_train, x_test, y_test,
learning_rate=0.02, batch_size=250, epochs=50):
model = small_cnn()
  optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
# Train model
model.fit(
x_train,
y_train,
epochs=epochs,
validation_data=(x_test, y_test),
batch_size=batch_size,
verbose=2)
return model
"""
Explanation: Functions for the model, and the CIFAR-10 data
End of explanation
"""
# @title Functions for generating secrets
def generate_random_label(n, nclass, seed):
"""Generates random labels."""
return np.random.RandomState(seed).choice(nclass, n)
def generate_uniform_random(shape, n, seed):
"""Generates uniformly random images."""
rng = np.random.RandomState(seed)
data = rng.uniform(size=(n,) + shape)
return data
def images_from_texts(sequences, shape, font_fn, num_lines=3, bg_color=(255, 255, 255), fg_color=(0, 0, 0)):
"""Generates an image with a given text sequence."""
characters_per_line = len(sequences[0]) // num_lines
if characters_per_line * num_lines < len(sequences[0]):
characters_per_line += 1
line_height = shape[1] // num_lines
font_size = line_height
font_width = ImageFont.truetype(font_fn, font_size).getsize('a')[0]
if font_width > shape[0] / characters_per_line:
font_size = int(math.floor(font_size / font_width * (shape[0] / characters_per_line)))
assert font_size > 0
font = ImageFont.truetype(font_fn, font_size)
imgs = []
for sequence in sequences:
img = Image.new('RGB', shape, color=bg_color)
d = ImageDraw.Draw(img)
for i in range(num_lines):
d.text((0, i * line_height),
sequence[i * characters_per_line:(i + 1) * characters_per_line],
font=font, fill=fg_color)
imgs.append(img)
return imgs
def generate_random_text_image(shape, n, seed, font_fn, vocab, pattern, num_lines, bg_color, fg_color):
"""Generates images with random texts."""
text_sequences = generate_random_sequences(vocab, pattern, n, seed)
imgs = images_from_texts(text_sequences, shape, font_fn, num_lines, bg_color, fg_color)
return np.array([np.array(i) for i in imgs])
# The function for drawing text on an image needs a font, so we download one here.
# You can try other fonts. Note that images_from_texts assumes the font is monospace.
!wget https://github.com/google/fonts/raw/main/apache/robotomono/RobotoMono%5Bwght%5D.ttf
font_fn = 'RobotoMono[wght].ttf'
"""
Explanation: Secret sharer attack on the model
The general idea of the secret sharer attack is to check whether the model behaves differently on data it has seen during training versus data it has not. This kind of memorization is not limited to generative sequence models, so it is natural to ask whether the idea can be adapted to image classification tasks as well.
Here, we present one potential way to run secret sharer on an image classification task. Specifically, we will consider
two types of secrets, where the secret is
- (an image with each pixel sampled uniformly at random, a random label)
- (an image with text on it, a random label)
But of course, you can try other secrets, for example, you can use images from another dataset (like MNIST), and a fixed label.
Generate Secrets
First, we define the functions needed to generate random image, image with random text, and random labels.
End of explanation
"""
#@title Generate secrets
num_repetitions = [1, 10, 50]
num_secrets_for_repetitions = [20] * len(num_repetitions)
num_references = 65536
secret_config_text = SecretConfig(name='random text image', num_repetitions=num_repetitions, num_secrets_for_repetitions=num_secrets_for_repetitions, num_references=num_references)
secret_config_rand = SecretConfig(name='uniform random image', num_repetitions=num_repetitions, num_secrets_for_repetitions=num_secrets_for_repetitions, num_references=num_references)
seed = 123
shape = (32, 32)
nclass = 10
n = num_references + sum(num_secrets_for_repetitions)
# setting for text image
num_lines = 3
bg_color=(255, 255, 0)
fg_color=(0, 0, 0)
image_text = generate_random_text_image(shape, n, seed,
font_fn,
list('0123456789'), 'My SSN is {}{}{}-{}{}-{}{}{}{}',
num_lines, bg_color, fg_color)
image_text = image_text.astype(np.float32) / 255
image_rand = generate_uniform_random(shape + (3,), n, seed)
label = generate_random_label(n, nclass, seed)
data_text = list(zip(image_text, label)) # pair up the image and label
data_rand = list(zip(image_rand, label))
"""
`construct_secret` partitions data into subsets of secrets that are going to be
repeated different numbers of times, and a references set. It returns a SecretsSet with 3 fields:
config is the configuration of the secrets set
references is a list of `num_references` samples to be used as references
secrets is a dictionary, where the key is the number of repetition, the value is a list of samples
"""
secrets_text = construct_secret(secret_config_text, data_text)
secrets_rand = construct_secret(secret_config_rand, data_rand)
#@title Let's look at the secrets we generated
def visualize_images(imgs):
f, axes = plt.subplots(1, len(imgs))
for i, img in enumerate(imgs):
axes[i].imshow(img)
visualize_images(image_text[:5])
visualize_images(image_rand[:5])
"""
Explanation: Now we will use the functions above to generate the secrets. Here, we plan to try secrets that are repeated once, 10 times and 50 times. For each repetition value, we will pick 20 secrets, to get a more accurate exposure estimation. We will leave out 65536 samples as references.
End of explanation
"""
# @title Train a model with original data
x_train, y_train, x_test, y_test = load_cifar10()
model_original = train_model(x_train, y_train, x_test, y_test)
# @title Train model with original data combined with secrets
# `construct_secret_dataset` returns a list of secrets, repeated for the
# required number of times.
secret_dataset = construct_secret_dataset([secrets_text, secrets_rand])
x_secret, y_secret = zip(*secret_dataset)
x_combined = np.concatenate([x_train, x_secret])
y_combined = np.concatenate([y_train, y_secret])
print(f'We will inject {len(x_secret)} samples so the total number of training data is {x_combined.shape[0]}')
model_secret = train_model(x_combined, y_combined, x_test, y_test)
"""
Explanation: Train the Model
We will train two models, one with the original CIFAR-10 data, the other with CIFAR-10 combined with the secrets.
End of explanation
"""
# @title Functions for computing losses and exposures
def calculate_losses(model, samples, is_logit=False, batch_size=1000):
"""Calculate losses of model prediction on data, provided true labels.
"""
data, labels = zip(*samples)
data, labels = np.array(data), np.array(labels)
pred = model.predict(data, batch_size=batch_size, verbose=0)
if is_logit:
pred = tf.nn.softmax(pred).numpy()
loss = log_loss(labels, pred)
return loss
def compute_loss_for_secret(secrets, model):
losses_ref = calculate_losses(model, secrets.references)
losses = {rep: calculate_losses(model, samples) for rep, samples in secrets.secrets.items()}
return losses, losses_ref
def compute_exposure_for_secret(secrets, model):
losses, losses_ref = compute_loss_for_secret(secrets, model)
exposure_interpolation = compute_exposure_interpolation(losses, losses_ref)
exposure_extrapolation = compute_exposure_extrapolation(losses, losses_ref)
return exposure_interpolation, exposure_extrapolation, losses, losses_ref
# @title Check the exposures
exp_i_orig_text, exp_e_orig_text, _, _ = compute_exposure_for_secret(secrets_text, model_original)
exp_i_orig_rand, exp_e_orig_rand, _, _ = compute_exposure_for_secret(secrets_rand, model_original)
exp_i_scrt_text, exp_e_scrt_text, _, _ = compute_exposure_for_secret(secrets_text, model_secret)
exp_i_scrt_rand, exp_e_scrt_rand, _, _ = compute_exposure_for_secret(secrets_rand, model_secret)
# First, let's confirm that the model trained with original data won't show any exposure
print('On model trained with original data:')
print('Text secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_orig_text.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_orig_text.items()]))
print('Random secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_orig_rand.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_orig_rand.items()]))
# Then, let's look at the model trained with combined data
print('On model trained with original data + secrets:')
print('Text secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_scrt_text.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_scrt_text.items()]))
print('Random secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_scrt_rand.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_scrt_rand.items()]))
"""
Explanation: Secret Sharer Evaluation
Similar to perplexity in a language model, here we use the cross-entropy loss of our image classification model to measure how confident the model is on an example.
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/9f8cb3957705df93f5da4fe6dc1bc69b/fnirs_artifact_removal.ipynb | bsd-3-clause | # Authors: Robert Luke <mail@robertluke.net>
#
# License: BSD-3-Clause
import os
import mne
from mne.preprocessing.nirs import (optical_density,
temporal_derivative_distribution_repair)
"""
Explanation: Visualise NIRS artifact correction methods
Here we artificially introduce several fNIRS artifacts and observe
how artifact correction techniques attempt to correct the data.
End of explanation
"""
fnirs_data_folder = mne.datasets.fnirs_motor.data_path()
fnirs_cw_amplitude_dir = os.path.join(fnirs_data_folder, 'Participant-1')
raw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True)
raw_intensity.load_data().resample(3, npad="auto")
raw_od = optical_density(raw_intensity)
new_annotations = mne.Annotations([31, 187, 317], [8, 8, 8],
["Movement", "Movement", "Movement"])
raw_od.set_annotations(new_annotations)
raw_od.plot(n_channels=15, duration=400, show_scrollbars=False)
"""
Explanation: Import data
Here we will work with the fNIRS motor data <fnirs-motor-dataset>.
We resample the data to make indexing exact times more convenient.
We then convert the data to optical density to perform corrections on
and plot these signals.
End of explanation
"""
corrupted_data = raw_od.get_data()
corrupted_data[:, 298:302] = corrupted_data[:, 298:302] - 0.06
corrupted_data[:, 450:750] = corrupted_data[:, 450:750] + 0.03
corrupted_od = mne.io.RawArray(corrupted_data, raw_od.info,
first_samp=raw_od.first_samp)
new_annotations.append([95, 145, 245], [10, 10, 10],
["Spike", "Baseline", "Baseline"])
corrupted_od.set_annotations(new_annotations)
corrupted_od.plot(n_channels=15, duration=400, show_scrollbars=False)
"""
Explanation: We can see some small artifacts in the above data from movement around 40,
190 and 240 seconds. However, this data is relatively clean so we will
add some additional artifacts below.
Add artificial artifacts to data
Two common types of artifacts in NIRS data are spikes and baseline shifts.
Spikes often occur when a person moves and the optode moves relative to the
scalp and then returns to its original position.
Baseline shifts occur if the optode moves relative to the scalp and does not
return to its original position.
We add a spike type artifact at 100 seconds and a baseline shift at 200
seconds to the data.
End of explanation
"""
corrected_tddr = temporal_derivative_distribution_repair(corrupted_od)
corrected_tddr.plot(n_channels=15, duration=400, show_scrollbars=False)
"""
Explanation: Apply temporal derivative distribution repair
This approach corrects baseline shift and spike artifacts without the need
for any user-supplied parameters :footcite:FishburnEtAl2019.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.2/examples/legacy_contact_binary.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Comparing Contacts Binaries in PHOEBE 2 vs PHOEBE Legacy
NOTE: PHOEBE 1.0 legacy is an alternate backend and is not installed with PHOEBE 2. In order to run this backend, you'll need to have PHOEBE 1.0 installed and manually install the python wrappers in the phoebe-py directory.
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary(contact_binary=True)
b['q'] = 0.7
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
"""
Explanation: Adding Datasets and Compute Options
End of explanation
"""
b.add_compute('legacy')
"""
Explanation: Now we add compute options for the 'legacy' backend.
End of explanation
"""
b.set_value_all('atm', 'extern_planckint')
"""
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
"""
b.set_value_all('gridsize', 30)
"""
Explanation: Set value of gridsize for the trapezoidal (WD) mesh.
End of explanation
"""
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
"""
Explanation: Let's also disable other special effect such as heating, gravity, and light-time effects.
End of explanation
"""
b.run_compute(kind='phoebe', model='phoebe2model', irrad_method='none')
b.run_compute(kind='legacy', model='phoebe1model', irrad_method='none')
"""
Explanation: Finally, let's compute our models
End of explanation
"""
afig, mplfig = b.filter(dataset='lc01').plot(c={'phoebe2model': 'g', 'phoebe1model': 'r'}, linestyle='solid',
legend=True, show=True)
"""
Explanation: Plotting
Light Curve
End of explanation
"""
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2model') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
"""
Explanation: Now let's plot the residuals between these two models
End of explanation
"""
afig, mplfig = b['rv01'].plot(c={'phoebe2model': 'g', 'phoebe1model': 'r'}, linestyle='solid',
legend=True, show=True)
artist, = plt.plot(b.get_value('rvs@primary@phoebe2model', ) - b.get_value('rvs@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@secondary@phoebe2model') - b.get_value('rvs@secondary@phoebe1model'), color='g', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-0.3, 0.3)
"""
Explanation: RVs
End of explanation
"""
|
SciTools/courses | course_content/iris_course/4.Joining_Cubes_Together.ipynb | gpl-3.0 | import iris
import numpy as np
"""
Explanation: Iris introduction course
4. Joining Cubes Together
Learning outcome: by the end of this section, you will be able to apply Iris functionality to combine multiple Iris cubes into a new larger cube.
Duration: 30 minutes
Overview:<br>
4.1 Merge<br>
4.2 Concatenate<br>
4.3 Exercise<br>
4.4 Summary of the Section
Setup
End of explanation
"""
fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp')
cubes = iris.load(fname)
print(cubes)
"""
Explanation: 4.1 Merge<a id='merge'></a>
When Iris loads data it tries to reduce the number of cubes returned by collecting together multiple fields with
shared metadata into a single multidimensional cube. In Iris, this is known as merging.
In order to merge two cubes, they must be identical in everything but a scalar coordinate, which goes on to become a new data dimension.
The diagram below shows how three 2D cubes, which have the same x and y coordinates but different z coordinates, are merged together to create a single 3D cube.
The iris.load_raw function can be used as a diagnostic tool to load the individual "fields" that Iris identifies in a given set of filenames before any merge takes place.
Let's compare the behaviour of iris.load_raw and the behaviour of the general purpose loading function, iris.load
First, we load in a file using iris.load:
End of explanation
"""
fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp')
raw_cubes = iris.load_raw(fname)
print(raw_cubes)
"""
Explanation: As you can see iris.load returns a CubeList containing a single 3D cube.
Now let's try loading in the file using iris.load_raw:
End of explanation
"""
print(raw_cubes[0])
print('--' * 40)
print(raw_cubes[1])
"""
Explanation: This time, iris has returned six 2D cubes.
PP files usually contain multiple 2D fields. iris.load_raw has returned a 2D cube for each of these fields, whereas iris.load has merged the cubes together then returned the resulting 3D cube.
When we look in detail at the raw 2D cubes, we find that they are identical in every coordinate except for the scalar forecast_period and time coordinates:
End of explanation
"""
merged_cubelist = raw_cubes.merge()
print(merged_cubelist)
"""
Explanation: To merge a CubeList, we can use the merge or merge_cube methods.
The merge method will try to merge together the cubes in the CubeList in order to return a CubeList of as few cubes as possible.
The merge_cube method will do the same as merge but will return a single Cube. If the initial CubeList cannot be merged into a single Cube, merge_cube will raise an error, giving a helpful message explaining why the cubes cannot be merged.
Let's merge the raw 2D cubes we previously loaded in:
End of explanation
"""
merged_cube = merged_cubelist[0]
print(merged_cube)
"""
Explanation: merge has returned a cubelist of a single 3D cube.
End of explanation
"""
#
# edit space for user code ...
#
"""
Explanation: <div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise: </font></b>
<p>Try merging <b><font face="courier" color="black">raw_cubes</font></b> using the <b><font face="courier" color="black">merge_cube</font></b> method.</p>
</div>
End of explanation
"""
print(merged_cube.coord('time'))
print(merged_cube.coord('forecast_period'))
"""
Explanation: When we look in more detail at our merged cube, we can see that the time coordinate has become a new dimension, as well as gaining another forecast_period auxiliary coordinate:
End of explanation
"""
fname = iris.sample_data_path('GloSea4', 'ensemble_00[34].pp')
cubes = iris.load_raw(fname, 'surface_temperature')
print(len(cubes))
"""
Explanation: Identifying Merge Problems
In order to avoid the Iris merge functionality making inappropriate assumptions about the data, merge is strict with regards to the uniformity of the incoming cubes.
For example, if we load the fields from two ensemble members from the GloSea4 model sample data, we see we have 12 fields before any merge takes place:
End of explanation
"""
incomplete_cubes = cubes.merge()
print(incomplete_cubes)
"""
Explanation: If we try to merge these 12 cubes we get 2 cubes rather than one:
End of explanation
"""
print(incomplete_cubes[0])
print('--' * 40)
print(incomplete_cubes[1])
"""
Explanation: When we look in more detail at these two cubes, what is different between the two? (Hint: One value changes, another is completely missing)
End of explanation
"""
#
# edit space for user code ...
#
"""
Explanation: As mentioned earlier, if merge_cube cannot merge the given CubeList to return a single Cube, it will raise a helpful error message identifying the cause of the failure.
<div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b><p>Try merging the loaded <b><font face="courier" color="black">cubes</font></b> using <b><font face="courier" color="black">merge_cube</font></b> rather than <b><font face="courier" color="black">merge</font></b>.</p>
</div>
End of explanation
"""
for cube in cubes:
if not cube.coords('realization'):
cube.add_aux_coord(iris.coords.DimCoord(np.int32(3),
'realization'))
merged_cube = cubes.merge_cube()
print(merged_cube)
"""
Explanation: By inspecting the cubes themselves or using the error message raised when using merge_cube we can see that some cubes are missing the realization coordinate.
By adding the missing coordinate, we can trigger a merge of the 12 cubes into a single cube, as expected:
End of explanation
"""
fname = iris.sample_data_path('A1B_north_america.nc')
cube = iris.load_cube(fname)
cube_1 = cube[:10]
cube_2 = cube[10:20]
cubes = iris.cube.CubeList([cube_1, cube_2])
print(cubes)
"""
Explanation: 4.2 Concatenate<a id='concatenate'></a>
We have seen that merge combines a list of cubes with a common scalar coordinate to produce a single cube with a new dimension created from these scalar values.
But what happens if you try to combine cubes along a common dimension?
Let's create a CubeList with two cubes that have been indexed along the time dimension of the original cube.
End of explanation
"""
print(cubes.merge())
"""
Explanation: These cubes should be able to be joined together; after all, they have both come from the same original cube!
However, merge returns two cubes, suggesting that these two cubes cannot be merged:
End of explanation
"""
print(cubes.concatenate())
"""
Explanation: Merge cannot be used to combine common non-scalar coordinates. Instead we must use concatenate.
Concatenate joins together ("concatenates") common non-scalar coordinates to produce a single cube with the common dimension extended.
In the below diagram, we see how three 3D cubes are concatenated together to produce a 3D cube with an extended t dimension.
To concatenate a CubeList, we can use the concatenate or concatenate_cube methods.
Similar to merging, concatenate will return a CubeList of as few cubes as possible, whereas concatenate_cube will attempt to return a cube, raising an error with a helpful message where this is not possible.
If we apply concatenate to our cubelist, we will see that it returns a CubeList with a single Cube:
End of explanation
"""
#
# edit space for user code ...
#
"""
Explanation: <div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise: </font></b>
<p>Try concatenating <b><font face="courier" color="black">cubes</font></b> using the <b><font face="courier" color="black">concatenate_cube</font></b> method.
</div>
End of explanation
"""
# EDIT for user code ...
# SAMPLE SOLUTION : Un-comment and execute the following to see a possible solution ...
# %load solutions/iris_exercise_4.3.1a
"""
Explanation: 4.3 Section Review Exercise<a id='exercise'></a>
The following exercise is designed to give you experience of solving issues that prevent a merge or concatenate from taking place.
Part 1
Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.1.*.nc files from being joined together into a single cube.
a) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.1.*.nc'. Store the cubes in a variable called raw_cubes.
Hint: Constraints can be given to the load_raw function as you would with the other load functions.
End of explanation
"""
# user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_4.3.1b
"""
Explanation: b) Try merging the loaded cubes into a single cube. Why does this raise an error?
End of explanation
"""
# user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_4.3.1c
"""
Explanation: c) Fix the cubes such that they can be merged into a single cube.
Hint: You can use del to remove an item from a dictionary.
End of explanation
"""
# user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_4.3.2a
"""
Explanation: Part 2
Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.5.*.nc files from being joined together into a single cube.
a) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.5.*.nc'. Store the cubes in a variable called raw_cubes.
End of explanation
"""
# user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_4.3.2b
"""
Explanation: b) Join the cubes together into a single cube. Should these cubes be merged or concatenated?
End of explanation
"""
|
mlperf/training_results_v0.5 | v0.5.0/google/cloud_v2.512/resnet-tpuv2-512/code/resnet/model/tpu/tools/colab/Classification_Iris_data_with_TPUEstimator.ipynb | apache-2.0 | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An Example of a custom TPUEstimator for the Iris dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import os
import pandas as pd
import pprint
import tensorflow as tf
import time
"""
Explanation: Simple Classification Model using TPUEstimator on Colab TPU
This notebook demonstrates using Cloud TPUs to build a simple classification model on the iris dataset to predict the species of a flower. The model uses four input features (SepalLength, SepalWidth, PetalLength, PetalWidth) to predict one of three flower species (Setosa, Versicolor, Virginica).
Note: You will need a GCP account and a GCS bucket for this notebook to run!
Imports
End of explanation
"""
use_tpu = True #@param {type:"boolean"}
bucket = '' #@param {type:"string"}
assert bucket, 'Must specify an existing GCS bucket name'
print('Using bucket: {}'.format(bucket))
if use_tpu:
assert 'COLAB_TPU_ADDR' in os.environ, 'Missing TPU; did you request a TPU in Notebook Settings?'
MODEL_DIR = 'gs://{}/{}'.format(bucket, time.strftime('tpuestimator-dnn/%Y-%m-%d-%H-%M-%S'))
print('Using model dir: {}'.format(MODEL_DIR))
from google.colab import auth
auth.authenticate_user()
if 'COLAB_TPU_ADDR' in os.environ:
TF_MASTER = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])
# Upload credentials to TPU.
with tf.Session(TF_MASTER) as sess:
with open('/content/adc.json', 'r') as f:
auth_info = json.load(f)
tf.contrib.cloud.configure_gcs(sess, credentials=auth_info)
# Now credentials are set for all future sessions on this TPU.
else:
TF_MASTER=''
with tf.Session(TF_MASTER) as session:
print ('List of devices:')
pprint.pprint(session.list_devices())
"""
Explanation: Resolve TPU Address and authenticate GCS Bucket
End of explanation
"""
# Model specific parameters
# TPU address
tpu_address = TF_MASTER
# Estimators model_dir
model_dir = MODEL_DIR
# This is the global batch size, not the per-shard batch.
batch_size = 128
# Total number of training steps.
train_steps = 1000
# Total number of evaluation steps. If '0', evaluation after training is skipped
eval_steps = 4
# Number of iterations per TPU training loop
iterations = 500
"""
Explanation: FLAGS used as model params
End of explanation
"""
TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth',
'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
PREDICTION_INPUT_DATA = {
'SepalLength': [6.9, 5.1, 5.9],
'SepalWidth': [3.1, 3.3, 3.0],
'PetalLength': [5.4, 1.7, 4.2],
'PetalWidth': [2.1, 0.5, 1.5],
}
PREDICTION_OUTPUT_DATA = ['Virginica', 'Setosa', 'Versicolor']
def maybe_download():
train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
return train_path, test_path
def load_data(y_name='Species'):
"""Returns the iris dataset as (train_x, train_y), (test_x, test_y)."""
train_path, test_path = maybe_download()
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0, dtype={'SepalLength': pd.np.float32,
'SepalWidth': pd.np.float32, 'PetalLength': pd.np.float32, 'PetalWidth': pd.np.float32, 'Species': pd.np.int32})
train_x, train_y = train, train.pop(y_name)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0, dtype={'SepalLength': pd.np.float32,
'SepalWidth': pd.np.float32, 'PetalLength': pd.np.float32, 'PetalWidth': pd.np.float32, 'Species': pd.np.int32})
test_x, test_y = test, test.pop(y_name)
return (train_x, train_y), (test_x, test_y)
def train_input_fn(features, labels, batch_size):
"""An input function for training"""
# Convert the inputs to a Dataset.
dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
# Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.apply(
tf.contrib.data.batch_and_drop_remainder(batch_size))
# Return the dataset.
return dataset
def eval_input_fn(features, labels, batch_size):
"""An input function for evaluation"""
features=dict(features)
inputs = (features, labels)
# Convert the inputs to a Dataset.
dataset = tf.data.Dataset.from_tensor_slices(inputs)
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.apply(
tf.contrib.data.batch_and_drop_remainder(batch_size))
# Return the dataset.
return dataset
def predict_input_fn(features, batch_size):
"""An input function for prediction"""
dataset = tf.data.Dataset.from_tensor_slices(features)
dataset = dataset.batch(batch_size)
return dataset
"""
Explanation: Get input data and Define input functions
End of explanation
"""
def metric_fn(labels, logits):
"""Function to return metrics for evaluation"""
predicted_classes = tf.argmax(logits, 1)
accuracy = tf.metrics.accuracy(labels=labels,
predictions=predicted_classes,
name='acc_op')
return {'accuracy': accuracy}
def my_model(features, labels, mode, params):
"""DNN with three hidden layers, and dropout of 0.1 probability."""
# Create three fully connected layers each layer having a dropout
# probability of 0.1.
net = tf.feature_column.input_layer(features, params['feature_columns'])
for units in params['hidden_units']:
net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
# Compute logits (1 per class).
logits = tf.layers.dense(net, params['n_classes'], activation=None)
# Compute predictions.
predicted_classes = tf.argmax(logits, 1)
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
'class_ids': predicted_classes[:, tf.newaxis],
'probabilities': tf.nn.softmax(logits),
'logits': logits,
}
return tf.contrib.tpu.TPUEstimatorSpec(mode, predictions=predictions)
# Compute loss.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels,
logits=logits)
if mode == tf.estimator.ModeKeys.EVAL:
return tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, loss=loss, eval_metrics=(metric_fn, [labels, logits]))
# Create training op.
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
if use_tpu:
optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.contrib.tpu.TPUEstimatorSpec(mode, loss=loss, train_op=train_op)
"""
Explanation: Model and metric function
End of explanation
"""
def main():
# Fetch the data
(train_x, train_y), (test_x, test_y) = load_data()
# Feature columns describe how to use the input.
my_feature_columns = []
for key in train_x.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
# Resolve TPU cluster and runconfig for this.
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
tpu_address)
run_config = tf.contrib.tpu.RunConfig(
model_dir=model_dir,
cluster=tpu_cluster_resolver,
session_config=tf.ConfigProto(
allow_soft_placement=True, log_device_placement=True),
tpu_config=tf.contrib.tpu.TPUConfig(iterations),
)
# Build 2 hidden layer DNN with 10, 10 units respectively.
classifier = tf.contrib.tpu.TPUEstimator(
model_fn=my_model,
use_tpu=use_tpu,
train_batch_size=batch_size,
eval_batch_size=batch_size,
predict_batch_size=batch_size,
config=run_config,
params={
'feature_columns': my_feature_columns,
# Two hidden layers of 10 nodes each.
'hidden_units': [10, 10],
# The model must choose between 3 classes.
'n_classes': 3,
'use_tpu': use_tpu,
})
# Train the Model.
classifier.train(
input_fn = lambda params: train_input_fn(
train_x, train_y, params["batch_size"]),
max_steps=train_steps)
# Evaluate the model.
eval_result = classifier.evaluate(
input_fn = lambda params: eval_input_fn(
test_x, test_y, params["batch_size"]),
steps=eval_steps)
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
# Generate predictions from the model
predictions = classifier.predict(
input_fn = lambda params: predict_input_fn(
PREDICTION_INPUT_DATA, params["batch_size"]))
for pred_dict, expec in zip(predictions, PREDICTION_OUTPUT_DATA):
template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"')
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id]
print(template.format(SPECIES[class_id],
100 * probability, expec))
"""
Explanation: Main Function
End of explanation
"""
main()
"""
Explanation: Run It!!
End of explanation
"""
|
arizona-phonological-imaging-lab/autotres | examples/network-training-tutorial.ipynb | apache-2.0 | import logging
from imp import reload
reload(logging)
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.INFO, datefmt='%I:%M:%S')
#logging.debug('This is a debug message')
#logging.basicConfig(level=logging.INFO)
"""
Explanation: Training a network
Logging
For most purposes, we are going to want to set our logging level to INFO, since some commands are going to run for a long time, and we would like periodic updates.
End of explanation
"""
keys = ['study', 'frame']
"""
Explanation: Setting up an HDF5 database
The first step is to create a dataset. This process is mostly abstracted away for you using a3.Dataset objects. Your responsibility is to specify how your data is stored and structured. We'll start by specifying some structure.
Keys
For the example dataset, there is only a single subject. For this dataset, a combination of a study ID and frame ID is sufficient to pick out a unique datapoint. We will encode this database key as a list of the form ['study','frame']. While you might construct a key with additional fields (ex. subject), the last element will usually be frame, since that is usually the minimal unit of analysis.
Note that these names are arbitrary, as long as you are consistent. Of course using informative names is best practice, since autotres has some sensible defaults that rely on certain keys; for example, the default code for extracting images from video relies on a 'frame' key.
End of explanation
"""
import os
import re
types = {
'trace': {
'regex': r"""(?x)
(?P<study>\d+\w+) # in the example dataset, a 'study' is encoded in the image name as the substring preceding an '_'
_(?P<frame>\d+)\.(?:jpg|png) # the frame number
\.(?P<tracer>\w+) # the tracer id
\.traced\.txt$""",
'conflict': 'list'
},
'image': {
'regex': r"""(?x)
(?P<study>\d+\w+)
_(?P<frame>\d+)
\.(?P<ext>jpg|png)$""",
'conflict': 'hash'
},
'name': {
'regex': r"""(?x)
(?P<fname>(?P<study>\d+\w+)
_(?P<frame>\d+)
\.(?P<ext>jpg|png)
)$""",
}
}
"""
Explanation: Types
Next, we need to be able to tell autotres about the types of files we have, the types of data they represent, and what levels of the key hierarchy each piece of data should be associated with. This will mostly be accomplished with regular expressions. We create a dict of data types. The keys of the dict represent data type names. Note that we will want one type for each type of data we want, not for each type of file we have. autotres is perfectly happy pulling more than one type of information from a single file.
These are again arbitrary labels, but autotres can provide some sensible default behaviors for certain labels. Each dict contains information about that type:
'conflict': How to deal with multiple files of the same type appearing for the same combination of identifiers.
For example, if I have multiple tracers on my team, then I would expect to have multiple 'trace' files for each combination of 'subject' and 'frame'. In this case, I would like to keep all of the traces for the same image, so that I can do something sensible with them, like interpolate, so I will set the value for 'conflict' to 'list'.
Similarly, if I have conducted multiple studies with my dataset, one looking at coronals and one looking at fricatives, I would expect to have multiple copies of the images for coronal fricatives. However, unlike the instance with multiple traces, fricative_frame-00042.png and coronal_frame-00042.png should be identical files. I can use the 'hash' option to specify that duplicates should be ignored as long as they are identical, but that an exception should be raised if they differ.
Finally, there are some situations that I simply expect not to happen. For example, if a single subject is associated with more than one 'audio' file, then it is most likely that somebody mislabeled something. In this case, I would not set 'conflict' to anything, and if there is a conflict, autotres will raise an exception automatically.
'regex': How to associate each file with a combination of hierarchical levels.
This is a regular expression that will match a filename in the dataset. We use the (?P<label>...) syntax to capture parts of the filename that are informative. Specifically, we need to be able to infer all the relevant hierarchical information from the file name. This should be a left-substring of the keys list.
Note that the regex is matched to the entire pathname, relative to whatever path we give it (see below). Here, we have also used the (?x) flag to allow us to break the regex over multiple lines, and to include comments.
End of explanation
"""
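As a quick sanity check of the 'trace' pattern described above, we can match it against a hypothetical filename (the study, frame, and tracer values here are made up purely for illustration):

```python
import re

# Same structure as the 'trace' regex above, matched against a made-up filename.
trace_regex = r"""(?x)
(?P<study>\d+\w+)                # study ID
_(?P<frame>\d+)\.(?:jpg|png)     # frame number and image extension
\.(?P<tracer>\w+)                # tracer ID
\.traced\.txt$"""

match = re.search(trace_regex, "01subj_0042.jpg.alice.traced.txt")
print(match.group("study"), match.group("frame"), match.group("tracer"))
# → 01subj 0042 alice
```

Each named group then becomes available for building the database key, which is how autotres associates a file with its place in the key hierarchy.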
import a3
ds = a3.Dataset('example.hdf5',roi=(140.,320.,250.,580.),n_points=32,scale=1/3)
"""
Explanation: Creating the dataset
We will now set up our dataset. The roi, n_points, and scale kwargs will be passed down to the default data extraction callbacks (see a3/dataset.py documentation). Custom callbacks can be provided by putting a callable in the ds.callbacks dict. These should return a numpy array of type float32.
If you don't have CUDA properly installed, importing a3 will throw some errors about nvcc (the NVIDIA CUDA compiler) not being found. This is acceptable so long as you are happy to train using only the CPU (instead of the GPU).
End of explanation
"""
d = 'example_data'
ds.scan_directory(d,types,keys)
"""
Explanation: The directory containing our data is example_data. You can scan multiple directories if you need to, possibly with different type definitions, but watch out for file conflicts! Your key hierarchy should be the same across calls to scan_directory. For large datasets this may take a while to complete, since it is doing a full walk of the file hierarchy.
End of explanation
"""
ds.sources.keys()
#ds.sources.items()[0]
"""
Explanation: At this point, you can inspect what data sources you have by looking at the ds.sources dict. This dict can get very large, so be cautious about printing the whole of it to stdout.
End of explanation
"""
ds.read_sources(['trace','image','name'])
"""
Explanation: Once you have your data sources figured out, you can extract that data with ds.read_sources(). The arg here is a set-like object with all of the data types you need. This will take a while, since it is opening and processing a lot of files.
End of explanation
"""
a = a3.Autotracer('examples/example.a3.json', roi=(140.,320.,250.,580.), train='example.hdf5')
# a.loadHDF5('example2.hdf5')
"""
Explanation: Training a network
The rest is easy. Construct an Autotracer from your new dataset. The required argument points to a json file that specifies a network architecture (see example.a3.json for an example). In order to train, you have to specify a training dataset location. You can also do this later (or even change datasets mid-training) by using the a.loadHDF5() method. Whichever way you load your training data, you can also pass the valid keyword argument to specify a validation set, but leaving it as the default (None) sets aside part of your training data as validation data (no guarantees about randomness). Make sure you use the same ROI as above, or at least the same size.
End of explanation
"""
a.train(10)
"""
Explanation: To train on your dataset, simply call the train() method. In reality, training will require thousands of epochs (runs through the entire dataset), but to save time we will train for just a few epochs. Minibatch size can be controlled with the minibatch kwarg, which defaults to 512. If your logging level is set to INFO you will see the training loss and validation loss at the end of each epoch.
End of explanation
"""
a.save('example.a3.json.bz2')
"""
Explanation: Make sure you save your network! Saving with compression is highly recommended. If you save as a plain .json file, your weights will not be saved by default.
End of explanation
"""
import h5py
with h5py.File('example.hdf5', 'r') as h:
    # trace all images used in training
    a.trace(h, 'example_test.json', h['name'], 'autotrace_test', '001')
"""
Explanation: Testing a network
Get the traces for your dataset! This will create a file named example_test.json that can be used with the APIL web tracer.
End of explanation
"""
import json
len(json.load(open('example_test.json', 'r'))['trace-data'])
"""
Explanation: This output can be easily inspected using the json module:
End of explanation
"""
b = a3.Autotracer('example.a3.json.bz2', roi=(140.,320.,250.,580.), train='example.hdf5', valid='example.hdf5')
b.train(1)
"""
Explanation: If you want to know your loss on some dataset, the best way for now is to train once using that set as the validation set. Note that if you are using dropout layers, you have to look at valid_loss, since train_loss will be non-deterministic. Also note that this will train one epoch first, so the result will be slightly different every time.
End of explanation
"""
jrieke/machine-intelligence-2 | sheet08/sheet08.ipynb | mit

from __future__ import division, print_function
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats
import numpy as np
"""
Explanation: Machine Intelligence II - Team MensaNord
Sheet 08
Nikolai Zaki
Alexander Moore
Johannes Rieke
Georg Hoelger
Oliver Atanaszov
End of explanation
"""
def E(W, s):
    """Energy of spin configuration s under coupling matrix W."""
    N = len(s)
    return -0.5 * sum(W[i, j] * s[i] * s[j] for i, j in np.ndindex(N, N))
N = 6
beta_0 = 0.007
tau = 1.06
epsilon = 1e-20
t_max = 150
W = np.random.random(size=(N, N))
W = (W + W.T) / 2 # make symmetric
for i in range(N):
    W[i, i] = 0  # no self-coupling
plt.imshow(W)
"""
Explanation: Exercise 1
End of explanation
"""
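The acceptance rule used in the simulation cells below is P = 1 / (1 + exp(beta * dE)). As a small illustrative sketch (the helper name and toy coupling matrix are our own, not part of the exercise), the energy change for a single spin flip can be computed directly instead of evaluating E twice:

```python
import numpy as np

def flip_probability(W, s, i, beta):
    """Acceptance probability for flipping spin i: P = 1 / (1 + exp(beta * dE)).

    Flipping s[i] changes E(W, s) = -0.5 * sum_ij W[i,j] s[i] s[j] by
    dE = 2 * s[i] * sum_j W[i, j] * s[j]  (W symmetric, zero diagonal).
    """
    dE = 2 * s[i] * np.dot(W[i], s)
    return 1.0 / (1.0 + np.exp(beta * dE))

# Toy example: two coupled spins in an energetically unfavourable configuration.
W_demo = np.array([[0.0, 1.0], [1.0, 0.0]])
s_demo = np.array([1.0, -1.0])
print(flip_probability(W_demo, s_demo, 0, beta=0.0))        # → 0.5 (infinite temperature)
print(flip_probability(W_demo, s_demo, 0, beta=1.0) > 0.5)  # → True (flip lowers the energy)
```

At beta = 0 every flip is accepted with probability 1/2; as beta grows, energy-lowering flips become increasingly favoured, which is what the annealing schedule below exploits.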
M = 1
beta = beta_0
s = np.random.choice([-1, 1], N)
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
validation_min = E(W, s)
for t in range(t_max):
    for m in range(M):
        i = np.random.randint(0, N)
        s_local = np.copy(s)
        s_local[i] *= -1
        E_1 = E(W, s)
        E_2 = E(W, s_local)
        E_d = E_2 - E_1
        P = 1 / (1 + np.exp(beta * E_d))
        # print("\nt:", t, " i:", i, "\n s1:", s, "\tE1:", E_1, "\n s2:", s_local, "\tE2:", E_2)
        if np.random.random() < P:
            s = np.copy(s_local)
            # print("new s")
        if E(W, s) < validation_min:
            validation_min = E(W, s)
    temperatures[t] = 1 / beta
    energies[t] = E(W, s)
    beta *= tau
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
"""
Explanation: Simulation with M=1
End of explanation
"""
M = 500
beta = beta_0
s = np.random.choice([-1, 1], N)
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
validation_min = E(W, s)
for t in range(t_max):
    for m in range(M):
        i = np.random.randint(0, N)
        s_local = np.copy(s)
        s_local[i] *= -1
        E_1 = E(W, s)
        E_2 = E(W, s_local)
        E_d = E_2 - E_1
        P = 1 / (1 + np.exp(beta * E_d))
        # print("\nt:", t, " i:", i, "\n s1:", s, "\tE1:", E_1, "\n s2:", s_local, "\tE2:", E_2)
        if np.random.random() < P:
            s = np.copy(s_local)
            # print("new s")
        if E(W, s) < validation_min:
            validation_min = E(W, s)
    temperatures[t] = 1 / beta
    energies[t] = E(W, s)
    beta *= tau
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
"""
Explanation: Simulation with M=500
End of explanation
"""
# generate all possible states & energies
all_states = [[0, 0, 0, 0, 0, 0] for i in range(2**6)]
all_energies = [0.0 for i in range(2**6)]
for si in range(2**6):
    # map each bit of the binary representation to a spin in {-1, +1},
    # matching the configurations used in the simulations above
    all_states[si] = [2 * int(x) - 1 for x in '{0:06b}'.format(si)]
    all_energies[si] = E(W, all_states[si])
plt.figure(figsize=(10, 5))
plt.scatter(range(2**6), all_energies)
plt.title('energies of all possible states')
plt.grid()
plt.show()
probab_beta = [0.005, 1, 3]
for beta in probab_beta:
    Z = 0
    for en in all_energies:
        Z += np.exp(-beta * en)
    all_probabilities = [0.0 for i in range(2**6)]
    for si in range(2**6):
        # normalize by the partition function Z to get a proper distribution
        all_probabilities[si] = np.exp(-beta * all_energies[si]) / Z
    plt.figure(figsize=(10, 5))
    plt.scatter(range(2**6), all_probabilities)
    plt.title('Boltzmann probabilities of all states for beta = {}'.format(beta))
    plt.grid()
    plt.show()
"""
Explanation: All possible states
End of explanation
"""
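For reference, the Boltzmann distribution underlying these plots is p_i = exp(-beta * E_i) / Z, with Z the partition function. A minimal self-contained sketch (toy energies, not the W from above) shows the normalization:

```python
import numpy as np

def boltzmann_probabilities(energies, beta):
    """Normalized Boltzmann distribution: p_i = exp(-beta * E_i) / Z."""
    weights = np.exp(-beta * np.asarray(energies, dtype=float))
    return weights / weights.sum()  # Z is the sum of the unnormalized weights

p = boltzmann_probabilities([-1.0, 0.0, 1.0], beta=1.0)
print(p.sum())     # → 1.0 (up to floating point)
print(p.argmax())  # → 0: the lowest-energy state is the most probable
```

Increasing beta concentrates the distribution on the low-energy states, which is why annealing (slowly raising beta) drives the network toward minima of E.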
# Other parameters and W from exercise 1.
epsilon = 1e-50
s = np.random.choice([-1., 1.], N)
e = np.zeros_like(s)
beta = beta_0
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
for t in range(t_max):
    # print('t =', t, '- beta =', beta)
    distance = np.inf
    while distance >= epsilon:
        e_old = e.copy()
        for i in range(N):
            # all units except i contribute to the mean field
            neighbors = [j for j in range(N) if j != i]
            e[i] = -sum(W[i, j] * s[j] for j in neighbors)
            s[i] = np.tanh(-beta * e[i])
        # print(distance)
        distance = np.linalg.norm(e - e_old)
    temperatures[t] = 1 / beta
    energies[t] = E(W, s)
    beta *= tau
    # print('-' * 10)
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
"""
Explanation: Exercise 2
End of explanation
"""
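The mean-field update above (e_i = -Σ_j W_ij s_j, then s_i = tanh(-beta e_i)) can also be sketched in vectorized form. The synchronous update and the toy coupling below are our own illustration, not the exercise code, which updates the units sequentially:

```python
import numpy as np

def mean_field_step(W, s, beta):
    """One synchronous mean-field update: e = -W s, then s = tanh(-beta * e)."""
    e = -W.dot(s)  # mean-field energies (W has zero diagonal, so no self term)
    return np.tanh(-beta * e), e

# Toy ferromagnetic pair: both spins settle on the same positive fixed point.
W_toy = np.array([[0.0, 0.5], [0.5, 0.0]])
s_toy = np.array([0.1, 0.2])
for _ in range(200):
    s_toy, e_toy = mean_field_step(W_toy, s_toy, beta=4.0)
print(s_toy)  # both components converge to m with m = tanh(2*m), m ≈ 0.96
```

At low beta the only fixed point is s = 0; above the critical temperature of this pair (beta * W_01 > 1) two symmetric non-zero fixed points appear, and the iteration converges to one of them.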
datapythonista/pandas | doc/source/user_guide/style.ipynb | bsd-3-clause

import matplotlib.pyplot
# We have this here to trigger matplotlib's font cache stuff.
# This cell is hidden from the output
import pandas as pd
import numpy as np
import matplotlib as mpl
df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan],[19, 439, 6, 452, 226,232]],
index=pd.Index(['Tumour (Positive)', 'Non-Tumour (Negative)'], name='Actual Label:'),
columns=pd.MultiIndex.from_product([['Decision Tree', 'Regression', 'Random'],['Tumour', 'Non-Tumour']], names=['Model:', 'Predicted:']))
df.style
"""
Explanation: Table Visualization
This section demonstrates visualization of tabular data using the Styler
class. For information on visualization with charting please see Chart Visualization. This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
Styler Object and HTML
Styling should be performed after the data in a DataFrame has been processed. The Styler creates an HTML <table> and leverages the CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See here for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate DataFrames into their existing user interface designs.
The DataFrame.style attribute is a property that returns a Styler object. It has a _repr_html_ method defined on it so it is rendered automatically in Jupyter Notebook.
End of explanation
"""
# Hidden cell to just create the below example: code is covered throughout the guide.
s = df.style\
.hide_columns([('Random', 'Tumour'), ('Random', 'Non-Tumour')])\
.format('{:.0f}')\
.set_table_styles([{
'selector': '',
'props': 'border-collapse: separate;'
},{
'selector': 'caption',
'props': 'caption-side: bottom; font-size:1.3em;'
},{
'selector': '.index_name',
'props': 'font-style: italic; color: darkgrey; font-weight:normal;'
},{
'selector': 'th:not(.index_name)',
'props': 'background-color: #000066; color: white;'
},{
'selector': 'th.col_heading',
'props': 'text-align: center;'
},{
'selector': 'th.col_heading.level0',
'props': 'font-size: 1.5em;'
},{
'selector': 'th.col2',
'props': 'border-left: 1px solid white;'
},{
'selector': '.col2',
'props': 'border-left: 1px solid #000066;'
},{
'selector': 'td',
'props': 'text-align: center; font-weight:bold;'
},{
'selector': '.true',
'props': 'background-color: #e6ffe6;'
},{
'selector': '.false',
'props': 'background-color: #ffe6e6;'
},{
'selector': '.border-red',
'props': 'border: 2px dashed red;'
},{
'selector': '.border-green',
'props': 'border: 2px dashed green;'
},{
'selector': 'td:hover',
'props': 'background-color: #ffffb3;'
}])\
.set_td_classes(pd.DataFrame([['true border-green', 'false', 'true', 'false border-red', '', ''],
['false', 'true', 'false', 'true', '', '']],
index=df.index, columns=df.columns))\
.set_caption("Confusion matrix for multiple cancer prediction models.")\
.set_tooltips(pd.DataFrame([['This model has a very strong true positive rate', '', '', "This model's total number of false negatives is too high", '', ''],
['', '', '', '', '', '']],
index=df.index, columns=df.columns),
css_class='pd-tt', props=
'visibility: hidden; position: absolute; z-index: 1; border: 1px solid #000066;'
'background-color: white; color: #000066; font-size: 0.8em;'
'transform: translate(0px, -24px); padding: 0.6em; border-radius: 0.5em;')
s
"""
Explanation: The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the .to_html() method, which returns the raw HTML as string, which is useful for further processing or adding to a file - read on in More about CSS and HTML. Below we will show how we can use these to format the DataFrame to be more communicative. For example how we can build s:
End of explanation
"""
df.style.format(precision=0, na_rep='MISSING', thousands=" ",
formatter={('Decision Tree', 'Tumour'): "{:.2f}",
('Regression', 'Non-Tumour'): lambda x: "$ {:,.1f}".format(x*-1e6)
})
"""
Explanation: Formatting the Display
Formatting Values
Before adding styles it is useful to show that the Styler can distinguish the display value from the actual value, in both data values and index or column headers. To control the display value, the text is printed in each cell as a string, and we can use the .format() and .format_index() methods to manipulate this according to a format spec string or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns or MultiIndex levels.
Additionally, the format function has a precision argument to specifically help with formatting floats, as well as decimal and thousands separators to support other locales, an na_rep argument to display missing data, and an escape argument to help display safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas' styler.format.precision option, controllable using pd.option_context('styler.format.precision', 2):
End of explanation
"""
weather_df = pd.DataFrame(np.random.rand(10,2)*5,
index=pd.date_range(start="2021-01-01", periods=10),
columns=["Tokyo", "Beijing"])
def rain_condition(v):
    if v < 1.75:
        return "Dry"
    elif v < 2.75:
        return "Rain"
    return "Heavy Rain"

def make_pretty(styler):
    styler.set_caption("Weather Conditions")
    styler.format(rain_condition)
    styler.format_index(lambda v: v.strftime("%A"))
    styler.background_gradient(axis=None, vmin=1, vmax=5, cmap="YlGnBu")
    return styler
weather_df
weather_df.loc["2021-01-04":"2021-01-08"].style.pipe(make_pretty)
"""
Explanation: Using Styler to manipulate the display is a useful feature because maintaining the indexing and data values for other purposes gives greater control.
End of explanation
"""
s = df.style.format('{:.0f}').hide([('Random', 'Tumour'), ('Random', 'Non-Tumour')], axis="columns")
s
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_hide')
"""
Explanation: Hiding Data
The index and column headers can be completely hidden, as well as subselected rows or columns that one wishes to exclude. Both these options are performed using the same methods.
The index can be hidden from rendering by calling .hide() without any arguments, which might be useful if your index is integer based. Similarly column headers can be hidden by calling .hide(axis="columns") without any further arguments.
Specific rows or columns can be hidden from rendering by calling the same .hide() method and passing in a row/column label, a list-like, or a slice of row/column labels for the subset argument.
Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will still start at col2, since col0 and col1 are simply ignored.
We can update our Styler object from before to hide some data and format the values.
End of explanation
"""
cell_hover = { # for row hover use <tr> instead of <td>
'selector': 'td:hover',
'props': [('background-color', '#ffffb3')]
}
index_names = {
'selector': '.index_name',
'props': 'font-style: italic; color: darkgrey; font-weight:normal;'
}
headers = {
'selector': 'th:not(.index_name)',
'props': 'background-color: #000066; color: white;'
}
s.set_table_styles([cell_hover, index_names, headers])
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_tab_styles1')
"""
Explanation: Methods to Add Styles
There are 3 primary methods of adding custom CSS styles to Styler:
Using .set_table_styles() to control broader areas of the table with specified internal CSS. Although table styles allow the flexibility to add CSS selectors and properties controlling all individual parts of the table, they are unwieldy for individual cell specifications. Also, note that table styles cannot be exported to Excel.
Using .set_td_classes() to directly link either external CSS classes to your data cells or link the internal CSS classes created by .set_table_styles(). See here. These cannot be used on column header rows or indexes, and also won't export to Excel.
Using the .apply() and .applymap() functions to add direct internal CSS to specific data cells. See here. As of v1.4.0 there are also methods that work directly on column header rows or indexes; .apply_index() and .applymap_index(). Note that only these methods add styles that will export to Excel. These methods work in a similar way to DataFrame.apply() and DataFrame.applymap().
Table Styles
Table styles are flexible enough to control all individual parts of the table, including column headers and indexes.
However, they can be unwieldy to type for individual data cells or for any kind of conditional formatting, so we recommend that table styles are used for broad styling, such as entire rows or columns at a time.
Table styles are also used to control features which can apply to the whole table at once such as creating a generic hover functionality. The :hover pseudo-selector, as well as other pseudo-selectors, can only be used this way.
To replicate the normal format of CSS selectors and properties (attribute value pairs), e.g.
tr:hover {
background-color: #ffff99;
}
the necessary format to pass styles to .set_table_styles() is as a list of dicts, each with a CSS-selector tag and CSS-properties. Properties can either be a list of 2-tuples, or a regular CSS-string, for example:
End of explanation
"""
s.set_table_styles([
{'selector': 'th.col_heading', 'props': 'text-align: center;'},
{'selector': 'th.col_heading.level0', 'props': 'font-size: 1.5em;'},
{'selector': 'td', 'props': 'text-align: center; font-weight: bold;'},
], overwrite=False)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_tab_styles2')
"""
Explanation: Next we just add a couple more styling artifacts targeting specific parts of the table. Be careful here, since we are chaining methods we need to explicitly instruct the method not to overwrite the existing styles.
End of explanation
"""
s.set_table_styles({
('Regression', 'Tumour'): [{'selector': 'th', 'props': 'border-left: 1px solid white'},
{'selector': 'td', 'props': 'border-left: 1px solid #000066'}]
}, overwrite=False, axis=0)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('xyz01')
"""
Explanation: As a convenience method (since version 1.2.0) we can also pass a dict to .set_table_styles() which contains row or column keys. Behind the scenes Styler just indexes the keys and adds relevant .col<m> or .row<n> classes as necessary to the given CSS selectors.
End of explanation
"""
out = s.set_table_attributes('class="my-table-cls"').to_html()
print(out[out.find('<table'):][:109])
"""
Explanation: Setting Classes and Linking to External CSS
If you have designed a website then it is likely you will already have an external CSS file that controls the styling of table and cell objects within it. You may want to use these native files rather than duplicate all the CSS in python (and duplicate any maintenance work).
Table Attributes
It is very easy to add a class to the main <table> using .set_table_attributes(). This method can also attach inline styles - read more in CSS Hierarchies.
End of explanation
"""
s.set_table_styles([ # create internal CSS classes
{'selector': '.true', 'props': 'background-color: #e6ffe6;'},
{'selector': '.false', 'props': 'background-color: #ffe6e6;'},
], overwrite=False)
cell_color = pd.DataFrame([['true ', 'false ', 'true ', 'false '],
['false ', 'true ', 'false ', 'true ']],
index=df.index,
columns=df.columns[:4])
s.set_td_classes(cell_color)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_classes')
"""
Explanation: Data Cell CSS Classes
New in version 1.2.0
The .set_td_classes() method accepts a DataFrame with matching indices and columns to the underlying Styler's DataFrame. That DataFrame will contain strings as css-classes to add to individual data cells: the <td> elements of the <table>. Rather than use external CSS we will create our classes internally and add them to table style. We will save adding the borders until the section on tooltips.
End of explanation
"""
np.random.seed(0)
df2 = pd.DataFrame(np.random.randn(10,4), columns=['A','B','C','D'])
df2.style
"""
Explanation: Styler Functions
Acting on Data
We use the following methods to pass your style functions. Both of those methods take a function (and some other keyword arguments) and apply it to the DataFrame in a certain way, rendering CSS styles.
.applymap() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair.
.apply() (column-/row-/table-wise): accepts a function that takes a Series or DataFrame and returns a Series, DataFrame, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each column or row of your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument. For columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None.
This method is powerful for applying multiple, complex logic to data cells. We create a new DataFrame to demonstrate this.
End of explanation
"""
def style_negative(v, props=''):
    return props if v < 0 else None
s2 = df2.style.applymap(style_negative, props='color:red;')\
.applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)
s2
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s2.set_uuid('after_applymap')
"""
Explanation: For example we can build a function that colors text if it is negative, and chain this with a function that partially fades cells of negligible value. Since this looks at each element in turn we use applymap.
End of explanation
"""
def highlight_max(s, props=''):
    return np.where(s == np.nanmax(s.values), props, '')
s2.apply(highlight_max, props='color:white;background-color:darkblue', axis=0)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s2.set_uuid('after_apply')
"""
Explanation: We can also build a function that highlights the maximum value across rows, cols, and the DataFrame all at once. In this case we use apply. Below we highlight the maximum in a column.
End of explanation
"""
s2.apply(highlight_max, props='color:white;background-color:pink;', axis=1)\
.apply(highlight_max, props='color:white;background-color:purple', axis=None)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s2.set_uuid('after_apply_again')
"""
Explanation: We can use the same function across the different axes, highlighting here the DataFrame maximum in purple, and row maximums in pink.
End of explanation
"""
s2.applymap_index(lambda v: "color:pink;" if v>4 else "color:darkblue;", axis=0)
s2.apply_index(lambda s: np.where(s.isin(["A", "B"]), "color:pink;", "color:darkblue;"), axis=1)
"""
Explanation: This last example shows how some styles have been overwritten by others. In general the most recent style applied is active but you can read more in the section on CSS hierarchies. You can also apply these styles to more granular parts of the DataFrame - read more in section on subset slicing.
It is possible to replicate some of this functionality using just classes but it can be more cumbersome. See item 3) of Optimization
<div class="alert alert-info">
*Debugging Tip*: If you're having trouble writing your style function, try just passing it into ``DataFrame.apply``. Internally, ``Styler.apply`` uses ``DataFrame.apply`` so the result should be the same, and with ``DataFrame.apply`` you will be able to inspect the CSS string output of your intended function in each cell.
</div>
Acting on the Index and Column Headers
A similar application is achieved for headers by using:
.applymap_index() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair.
.apply_index() (level-wise): accepts a function that takes a Series and returns a Series, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each level of your Index one-at-a-time. To style the index use axis=0 and to style the column headers use axis=1.
You can select a level of a MultiIndex but currently no similar subset application is available for these methods.
End of explanation
"""
s.set_caption("Confusion matrix for multiple cancer prediction models.")\
.set_table_styles([{
'selector': 'caption',
'props': 'caption-side: bottom; font-size:1.25em;'
}], overwrite=False)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_caption')
"""
Explanation: Tooltips and Captions
Table captions can be added with the .set_caption() method. You can use table styles to control the CSS relevant to the caption.
End of explanation
"""
tt = pd.DataFrame([['This model has a very strong true positive rate',
"This model's total number of false negatives is too high"]],
index=['Tumour (Positive)'], columns=df.columns[[0,3]])
s.set_tooltips(tt, props='visibility: hidden; position: absolute; z-index: 1; border: 1px solid #000066;'
'background-color: white; color: #000066; font-size: 0.8em;'
'transform: translate(0px, -24px); padding: 0.6em; border-radius: 0.5em;')
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_tooltips')
"""
Explanation: Adding tooltips (since version 1.3.0) can be done using the .set_tooltips() method in the same way you can add CSS classes to data cells by providing a string based DataFrame with intersecting indices and columns. You don't have to specify a css_class name or any css props for the tooltips, since there are standard defaults, but the option is there if you want more visual control.
End of explanation
"""
s.set_table_styles([ # create internal CSS classes
{'selector': '.border-red', 'props': 'border: 2px dashed red;'},
{'selector': '.border-green', 'props': 'border: 2px dashed green;'},
], overwrite=False)
cell_border = pd.DataFrame([['border-green ', ' ', ' ', 'border-red '],
[' ', ' ', ' ', ' ']],
index=df.index,
columns=df.columns[:4])
s.set_td_classes(cell_color + cell_border)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_borders')
"""
Explanation: The only thing left to do for our table is to add the highlighting borders to draw the audience's attention to the tooltips. We will create internal CSS classes as before using table styles. Setting classes always overwrites so we need to make sure we add the previous classes.
End of explanation
"""
df3 = pd.DataFrame(np.random.randn(4,4),
pd.MultiIndex.from_product([['A', 'B'], ['r1', 'r2']]),
columns=['c1','c2','c3','c4'])
df3
"""
Explanation: Finer Control with Slicing
The examples we have shown so far for the Styler.apply and Styler.applymap functions have not demonstrated the use of the subset argument. This is a useful argument which permits a lot of flexibility: it allows you to apply styles to specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similarly to slicing a DataFrame:
A scalar is treated as a column label
A list (or Series or NumPy array) is treated as multiple column labels
A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one. We will create a MultiIndexed DataFrame to demonstrate the functionality.
End of explanation
"""
slice_ = ['c3', 'c4']
df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
"""
Explanation: We will use subset to highlight the maximum in the third and fourth columns with red text. We will highlight the subset sliced region in yellow.
End of explanation
"""
idx = pd.IndexSlice
slice_ = idx[idx[:,'r1'], idx['c2':'c4']]
df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
"""
Explanation: If combined with the IndexSlice as suggested then it can index across both dimensions with greater flexibility.
End of explanation
"""
slice_ = idx[idx[:,'r2'], :]
df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
"""
Explanation: This also provides the flexibility to sub select rows when used with the axis=1.
End of explanation
"""
slice_ = idx[idx[(df3['c1'] + df3['c3']) < -2.0], ['c2', 'c4']]
df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
"""
Explanation: There is also scope to provide conditional filtering.
Suppose we want to highlight the maximum across columns 2 and 4 only in the case that the sum of columns 1 and 3 is less than -2.0 (essentially excluding rows (:,'r2')).
End of explanation
"""
df4 = pd.DataFrame([[1,2],[3,4]])
s4 = df4.style
"""
Explanation: Only label-based slicing is supported right now, not positional, and not callables.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
my_func2 = functools.partial(my_func, subset=42)
Optimization
Generally, for smaller tables and most cases, the rendered HTML does not need to be optimized, and we don't really recommend it. There are two cases where it is worth considering:
If you are rendering and styling a very large HTML table, certain browsers have performance issues.
If you are using Styler to dynamically create part of online user interfaces and want to improve network performance.
Here we recommend the following steps to implement:
1. Remove UUID and cell_ids
Ignore the uuid and set cell_ids to False. This will prevent unnecessary HTML.
<div class="alert alert-warning">
<font color=red>This is sub-optimal:</font>
</div>
End of explanation
"""
from pandas.io.formats.style import Styler
s4 = Styler(df4, uuid_len=0, cell_ids=False)
"""
Explanation: <div class="alert alert-info">
<font color=green>This is better:</font>
</div>
End of explanation
"""
props = 'font-family: "Times New Roman", Times, serif; color: #e83e8c; font-size:1.3em;'
df4.style.applymap(lambda x: props, subset=[1])
"""
Explanation: 2. Use table styles
Use table styles where possible (e.g. for all cells or rows or columns at a time) since the CSS is nearly always more efficient than other formats.
<div class="alert alert-warning">
<font color=red>This is sub-optimal:</font>
</div>
End of explanation
"""
df4.style.set_table_styles([{'selector': 'td.col1', 'props': props}])
"""
Explanation: <div class="alert alert-info">
<font color=green>This is better:</font>
</div>
End of explanation
"""
df2.style.apply(highlight_max, props='color:white;background-color:darkblue;', axis=0)\
.apply(highlight_max, props='color:white;background-color:pink;', axis=1)\
.apply(highlight_max, props='color:white;background-color:purple', axis=None)
"""
Explanation: 3. Set classes instead of using Styler functions
For large DataFrames where the same style is applied to many cells it can be more efficient to declare the styles as classes and then apply those classes to data cells, rather than directly applying styles to cells. It is, however, probably still easier to use the Styler function API when you are not concerned about optimization.
<div class="alert alert-warning">
<font color=red>This is sub-optimal:</font>
</div>
End of explanation
"""
build = lambda x: pd.DataFrame(x, index=df2.index, columns=df2.columns)
cls1 = build(df2.apply(highlight_max, props='cls-1 ', axis=0))
cls2 = build(df2.apply(highlight_max, props='cls-2 ', axis=1, result_type='expand').values)
cls3 = build(highlight_max(df2, props='cls-3 '))
df2.style.set_table_styles([
{'selector': '.cls-1', 'props': 'color:white;background-color:darkblue;'},
{'selector': '.cls-2', 'props': 'color:white;background-color:pink;'},
{'selector': '.cls-3', 'props': 'color:white;background-color:purple;'}
]).set_td_classes(cls1 + cls2 + cls3)
"""
Explanation: <div class="alert alert-info">
<font color=green>This is better:</font>
</div>
End of explanation
"""
my_css = {
"row_heading": "",
"col_heading": "",
"index_name": "",
"col": "c",
"row": "r",
"col_trim": "",
"row_trim": "",
"level": "l",
"data": "",
"blank": "",
}
html = Styler(df4, uuid_len=0, cell_ids=False)
html.set_table_styles([{'selector': 'td', 'props': props},
{'selector': '.c1', 'props': 'color:green;'},
{'selector': '.l0', 'props': 'color:blue;'}],
css_class_names=my_css)
print(html.to_html())
html
"""
Explanation: 4. Don't use tooltips
Tooltips require cell_ids to work and they generate extra HTML elements for every data cell.
5. If every byte counts use string replacement
You can remove unnecessary HTML, or shorten the default class names by replacing the default css dict. You can read a little more about CSS below.
End of explanation
"""
df2.iloc[0,2] = np.nan
df2.iloc[4,3] = np.nan
df2.loc[:4].style.highlight_null(color='yellow')
"""
Explanation: Builtin Styles
Some styling functions are common enough that we've "built them in" to the Styler, so you don't have to write them and apply them yourself. The current list of such functions is:
.highlight_null: for use with identifying missing data.
.highlight_min and .highlight_max: for use with identifying extremities in data.
.highlight_between and .highlight_quantile: for use with identifying classes within data.
.background_gradient: a flexible method for highlighting cells based on their, or other, values on a numeric scale.
.text_gradient: similar method for highlighting text based on their, or other, values on a numeric scale.
.bar: to display mini-charts within cell backgrounds.
The individual documentation on each function often gives more examples of their arguments.
Highlight Null
End of explanation
"""
df2.loc[:4].style.highlight_max(axis=1, props='color:white; font-weight:bold; background-color:darkblue;')
"""
Explanation: Highlight Min or Max
End of explanation
"""
left = pd.Series([1.0, 0.0, 1.0], index=["A", "B", "D"])
df2.loc[:4].style.highlight_between(left=left, right=1.5, axis=1, props='color:white; background-color:purple;')
"""
Explanation: Highlight Between
This method accepts ranges as floats, or as NumPy arrays or Series, provided the indexes match.
End of explanation
"""
df2.loc[:4].style.highlight_quantile(q_left=0.85, axis=None, color='yellow')
"""
Explanation: Highlight Quantile
Useful for detecting the highest or lowest percentile values.
End of explanation
"""
import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
df2.style.background_gradient(cmap=cm)
df2.style.text_gradient(cmap=cm)
"""
Explanation: Background Gradient and Text Gradient
You can create "heatmaps" with the background_gradient and text_gradient methods. These require matplotlib, and we'll use Seaborn to get a nice colormap.
End of explanation
"""
df2.loc[:4].style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
"""
Explanation: .background_gradient and .text_gradient have a number of keyword arguments to customise the gradients and colors. See the documentation.
Set properties
Use Styler.set_properties when the style doesn't actually depend on the values. This is just a simple wrapper for .applymap where the function returns the same properties for all cells.
End of explanation
"""
df2.style.bar(subset=['A', 'B'], color='#d65f5f')
"""
Explanation: Bar charts
You can include "bar charts" in your DataFrame.
End of explanation
"""
df2.style.format('{:.3f}', na_rep="")\
.bar(align=0, vmin=-2.5, vmax=2.5, cmap="bwr", height=50,
width=60, props="width: 120px; border-right: 1px solid black;")\
.text_gradient(cmap="bwr", vmin=-2.5, vmax=2.5)
"""
Explanation: Additional keyword arguments give more control on centering and positioning, and you can pass a list of [color_negative, color_positive] to highlight lower and higher values or a matplotlib colormap.
To showcase an example, here's how you can change the above with the new align option, combined with setting vmin and vmax limits, the width of the figure, and the underlying CSS props of cells, leaving space to display both the text and the bars. We also use text_gradient to color the text the same as the bars using a matplotlib colormap (although in this case the visualization is probably better without this additional effect).
End of explanation
"""
# Hide the construction of the display chart from the user
import pandas as pd
from IPython.display import HTML
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')
test3 = pd.Series([10,20,50,100], name='All Positive')
test4 = pd.Series([100, 103, 101, 102], name='Large Positive')
head = """
<table>
<thead>
<th>Align</th>
<th>All Negative</th>
<th>Both Neg and Pos</th>
<th>All Positive</th>
<th>Large Positive</th>
</thead>
<tbody>
"""
aligns = ['left', 'right', 'zero', 'mid', 'mean', 99]
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for series in [test1,test2,test3, test4]:
s = series.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.hide_index().bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).to_html()) #testn['width']
row += '</tr>'
head += row
head+= """
</tbody>
</table>"""
HTML(head)
"""
Explanation: The following example aims to highlight the behavior of the new align options:
End of explanation
"""
style1 = df2.style\
.applymap(style_negative, props='color:red;')\
.applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)\
.set_table_styles([{"selector": "th", "props": "color: blue;"}])\
.hide(axis="index")
style1
style2 = df3.style
style2.use(style1.export())
style2
"""
Explanation: Sharing styles
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df2.style.use.
End of explanation
"""
from ipywidgets import widgets
@widgets.interact
def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
return df2.style.background_gradient(
cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
as_cmap=True)
)
"""
Explanation: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they're applied to.
Limitations
DataFrame only (use Series.to_frame().style)
The index and columns do not need to be unique, but certain styling functions can only work with unique indexes.
No large repr, and construction performance isn't great; although we have some HTML optimizations
You can only apply styles, you can't insert new HTML entities, except via subclassing.
Other Fun and Useful Stuff
Here are a few interesting examples.
Widgets
Styler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.
End of explanation
"""
def magnify():
return [dict(selector="th",
props=[("font-size", "4pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
np.random.seed(25)
cmap = sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.format(precision=2)\
.set_table_styles(magnify())
"""
Explanation: Magnify
End of explanation
"""
bigdf = pd.DataFrame(np.random.randn(16, 100))
bigdf.style.set_sticky(axis="index")
"""
Explanation: Sticky Headers
If you display a large matrix or DataFrame in a notebook, but you want to always see the column and row headers you can use the .set_sticky method which manipulates the table styles CSS.
End of explanation
"""
bigdf.index = pd.MultiIndex.from_product([["A","B"],[0,1],[0,1,2,3]])
bigdf.style.set_sticky(axis="index", pixel_size=18, levels=[1,2])
"""
Explanation: It is also possible to make MultiIndexes sticky, and even to stick only specific levels.
End of explanation
"""
df4 = pd.DataFrame([['<div></div>', '"&other"', '<span></span>']])
df4.style
df4.style.format(escape="html")
df4.style.format('<a href="https://pandas.pydata.org" target="_blank">{}</a>', escape="html")
"""
Explanation: HTML Escaping
Suppose you have to display HTML within HTML, that can be a bit of pain when the renderer can't distinguish. You can use the escape formatting option to handle this, and even use it within a formatter that contains HTML itself.
End of explanation
"""
df2.style.\
applymap(style_negative, props='color:red;').\
highlight_max(axis=0).\
to_excel('styled.xlsx', engine='openpyxl')
"""
Explanation: Export to Excel
Some support (since version 0.20.0) is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include:
background-color
border-style properties
border-width properties
border-color properties
color
font-family
font-style
font-weight
text-align
text-decoration
vertical-align
white-space: nowrap
Shorthand and side-specific border properties are supported (e.g. border-style and border-left-style) as well as the border shorthands for all sides (border: 1px solid green) or specified sides (border-left: 1px solid green). Using a border shorthand will override any border properties set before it (See CSS Working Group for more details)
Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
The following pseudo CSS properties are also available to set excel specific style properties:
number-format
Table level styles, and data cell CSS-classes are not included in the export to Excel: individual cells must have their properties mapped by the Styler.apply and/or Styler.applymap methods.
End of explanation
"""
print(pd.DataFrame([[1,2],[3,4]], index=['i1', 'i2'], columns=['c1', 'c2']).style.to_html())
"""
Explanation: A screenshot of the output:
Export to LaTeX
There is support (since version 1.3.0) to export Styler to LaTeX. The documentation for the .to_latex method gives further detail and numerous examples.
More About CSS and HTML
Cascading Style Sheet (CSS) language, which is designed to influence how a browser renders HTML elements, has its own peculiarities. It never reports errors: it just silently ignores them and doesn't render your objects as you intend, which can sometimes be frustrating. Here is a very brief primer on how Styler creates HTML and interacts with CSS, with advice on common pitfalls to avoid.
CSS Classes and Ids
The precise structure of the CSS class attached to each cell is as follows.
Cells with Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
level<k> where k is the level in a MultiIndex
row<m> where m is the numeric position of the row
Column label cells include
col_heading
level<k> where k is the level in a MultiIndex
col<n> where n is the numeric position of the column
Data cells include
data
row<m>, where m is the numeric position of the cell.
col<n>, where n is the numeric position of the cell.
Blank cells include blank
Trimmed cells include col_trim or row_trim
The structure of the id is T_uuid_level<k>_row<m>_col<n> where level<k> is used only on headings, and headings will only have either row<m> or col<n> whichever is needed. By default we've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page. You can read more about the use of UUIDs in Optimization.
We can see an example of the HTML by calling the .to_html() method.
End of explanation
"""
df4 = pd.DataFrame([['text']])
df4.style.applymap(lambda x: 'color:green;')\
.applymap(lambda x: 'color:red;')
df4.style.applymap(lambda x: 'color:red;')\
.applymap(lambda x: 'color:green;')
"""
Explanation: CSS Hierarchies
The examples have shown that when CSS styles overlap, the one that comes last in the HTML render, takes precedence. So the following yield different results:
End of explanation
"""
df4.style.set_uuid('a_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'}])\
.applymap(lambda x: 'color:green;')
"""
Explanation: This is only true for CSS rules that are equivalent in hierarchy, or importance. You can read more about CSS specificity here but for our purposes it suffices to summarize the key points:
A CSS importance score for each HTML element is derived by starting at zero and adding:
1000 for an inline style attribute
100 for each ID
10 for each attribute, class or pseudo-class
1 for each element name or pseudo-element
Let's use this to describe the action of the following configurations
End of explanation
"""
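The scoring summary above can be sketched as a toy calculator (this is an illustration of the scoring rule, not part of any CSS engine):

```python
def css_importance(inline=False, n_ids=0, n_classes=0, n_elements=0):
    """Toy CSS importance score: 1000 per inline style, 100 per ID,
    10 per class/attribute/pseudo-class, 1 per element name."""
    return 1000 * int(inline) + 100 * n_ids + 10 * n_classes + 1 * n_elements

# '#T_a_ td'        -> one ID plus one element name
id_plus_element = css_importance(n_ids=1, n_elements=1)
# '#T_a_row0_col0'  -> one ID only
id_only = css_importance(n_ids=1)
# '#T_b_ .cls-1'    -> one ID plus one class
id_plus_class = css_importance(n_ids=1, n_classes=1)
```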
df4.style.set_uuid('b_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'}])\
.applymap(lambda x: 'color:green;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
"""
Explanation: This text is red because the generated selector #T_a_ td is worth 101 (ID plus element), whereas #T_a_row0_col0 is only worth 100 (ID), so it is considered inferior even though it comes after the previous one in the HTML.
End of explanation
"""
df4.style.set_uuid('c_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'},
{'selector': 'td.data', 'props': 'color:yellow;'}])\
.applymap(lambda x: 'color:green;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
"""
Explanation: In the above case the text is blue because the selector #T_b_ .cls-1 is worth 110 (ID plus class), which takes precedence.
End of explanation
"""
df4.style.set_uuid('d_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'},
{'selector': 'td.data', 'props': 'color:yellow;'}])\
.applymap(lambda x: 'color:green !important;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
"""
Explanation: Now we have created another table style. This time the selector #T_c_ td.data (ID plus element plus class) gets bumped up to 111.
If your style fails to be applied and it's really frustrating, try the !important trump card.
End of explanation
"""
from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
"""
Explanation: Finally got that green text after all!
Extensibility
The core of pandas is, and will remain, its "high-performance, easy-to-use data structures".
With that in mind, we hope that DataFrame.style accomplishes two goals
Provide an API that is pleasing to use interactively and is "good enough" for many tasks
Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we'll link to it.
Subclassing
If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template.
We'll show an example of extending the default template to insert a custom header before each table.
End of explanation
"""
with open("templates/myhtml.tpl") as f:
print(f.read())
"""
Explanation: We'll use the following template:
End of explanation
"""
class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template_html_table = env.get_template("myhtml.tpl")
"""
Explanation: Now that we've created a template, we need to set up a subclass of Styler that
knows about it.
End of explanation
"""
MyStyler(df3)
"""
Explanation: Notice that we include the original loader in our environment's loader.
That's because we extend the original template, so the Jinja environment needs
to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
End of explanation
"""
HTML(MyStyler(df3).to_html(table_title="Extending Example"))
"""
Explanation: Our custom template accepts a table_title keyword. We can provide the value in the .to_html method.
End of explanation
"""
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
HTML(EasyStyler(df3).to_html(table_title="Another Title"))
"""
Explanation: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
End of explanation
"""
with open("templates/html_style_structure.html") as f:
style_structure = f.read()
HTML(style_structure)
"""
Explanation: Template Structure
Here's the template structure for the both the style generation template and the table generation template:
Style template:
End of explanation
"""
with open("templates/html_table_structure.html") as f:
table_structure = f.read()
HTML(table_structure)
"""
Explanation: Table template:
End of explanation
"""
# # Hack to get the same style in the notebook as the
# # main site. This is hidden in the docs.
# from IPython.display import HTML
# with open("themes/nature_with_gtoc/static/nature.css_t") as f:
# css = f.read()
# HTML('<style>{}</style>'.format(css))
"""
Explanation: See the template in the GitHub repo for more details.
End of explanation
"""
astarostin/MachineLearningSpecializationCoursera | course6/week5/ParseTraining.ipynb | apache-2.0

import requests
from bs4 import BeautifulSoup
"""
Explanation: Parsing web pages
End of explanation
"""
req = requests.get('https://en.wikipedia.org/wiki/Bias-variance_tradeoff')
print(req)
"""
Explanation: 3a. Parsing the top-level headings from the page https://en.wikipedia.org/wiki/Bias-variance_tradeoff
Request the page and check that it is available
End of explanation
"""
parser = BeautifulSoup(req.text, 'lxml')
print(parser.prettify())
"""
Explanation: Create a parser for the page and print the page contents
End of explanation
"""
def print_tags(parser, key='h1'):
    for tag in parser.find_all(key):
        print(tag.text)
"""
Explanation: Define a function that prints the values of the requested tag:
End of explanation
"""
print_tags(parser, 'h1')
"""
Explanation: Since the assignment does not make it entirely clear which headings count as top-level, we print the first-, second-, and third-level headings separately.
First-level headings:
End of explanation
"""
for t in parser.find_all('h2'):
    if t.span is not None and 'class' in t.span.attrs and 'mw-headline' in t.span.attrs['class']:
        print(t.span.text)
"""
Explanation: Second-level headings:
End of explanation
"""
for t in parser.find_all('h3'):
    if t.span is not None and 'class' in t.span.attrs and 'mw-headline' in t.span.attrs['class']:
        print(t.span.text)
"""
Explanation: Third-level headings
End of explanation
"""
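The manual attribute checks above can be shortened with BeautifulSoup's CSS selectors; a small self-contained sketch (the HTML snippet below is made up for illustration):

```python
from bs4 import BeautifulSoup

snippet = ('<h2><span class="mw-headline">Motivation</span></h2>'
           '<h2><span>skip me</span></h2>')
soup = BeautifulSoup(snippet, 'html.parser')
# select() takes a CSS selector, so the class filter becomes part of the query.
headline_texts = [span.text for span in soup.select('h2 .mw-headline')]
```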
req = requests.get('https://en.wikipedia.org/wiki/Category:Machine_learning_algorithms')
print(req)
"""
Explanation: 3b. Parsing the titles of all articles in the Machine Learning Algorithms category from the page https://en.wikipedia.org/wiki/Category:Machine_learning_algorithms
Request the page and check that it is available
End of explanation
"""
parser = BeautifulSoup(req.text, 'lxml')
print(parser.prettify())
"""
Explanation: Create a parser for the page and print the page contents
End of explanation
"""
pages = parser.find(string='Pages in category "Machine learning algorithms"')
data_parent = pages.parent.parent.div.div
"""
Explanation: After examining the page's markup, locate the tag that contains the titles we are interested in
End of explanation
"""
print_tags(data_parent, 'a')
"""
Explanation: Article titles in the "Machine Learning" category
End of explanation
"""
google-research/google-research | socraticmodels/SocraticModels_MSR_VTT.ipynb | apache-2.0

openai_api_key = "your-api-key"
"""
Explanation: Copyright 2021 Google LLC.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Socratic Models: MSR-VTT Video-to-Text Retrieval
Socratic Models (SMs) is a framework that composes multiple pre-existing foundation models (e.g., large language models, visual language models, audio-language models) to provide results for new multimodal tasks, without any model finetuning.
This colab runs SMs for zero-shot video-to-text retrieval on the MSR-VTT Full and 1k-A test sets. Specifically, this augments Portillo-Quintero et al. 2021 with audio information by using an ALM for speech-to-text, summarizing the transcriptions with a causal LM (e.g., GPT-3), and re-ranking CLIP (VLM) matching scores against captions with a masked LM (e.g., RoBERTa) on the summaries.
This is a reference implementation of one task demonstrated in the work: Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
Disclaimer: this colab uses CLIP and GPT-3 as foundation models, and may be subject to unwanted biases. This code should be used with caution (and checked for correctness) in downstream applications.
Quick Start:
Step 1. Register for an OpenAI API key to use GPT-3 (there's a free trial) and enter it below
Step 2. Menu > Change runtime type > Hardware accelerator > "GPU"
Step 3. Menu > Runtime > Run all
End of explanation
"""
!pip install -U --no-cache-dir gdown --pre
!pip install -U sentence-transformers
!pip install openai ftfy
!nvidia-smi # Show GPU info.
import json
import os
import numpy as np
import openai
import pandas as pd
import pickle
from sentence_transformers import SentenceTransformer
from sentence_transformers import util as st_utils
import torch
openai.api_key = openai_api_key
# From: https://github.com/Deferf/CLIP_Video_Representation
if not os.path.exists('MSRVTT_test_dict_CLIP_text.pt'):
!gdown 1-3tpfZzo1_D18WdrioQzc-iogEl-KSnA -O "MSRVTT_test_dict_CLIP_text.pt"
if not os.path.exists('MSRVTT_test_dict_CLIP_visual.pt'):
!gdown 1Gp3_I_OvcKwjOQmn334-T4wfwQk29TCp -O "MSRVTT_test_dict_CLIP_visual.pt"
if not os.path.exists('test_videodatainfo.json'):
!gdown 1BzTt1Bf-XJSUXxBfJVxLL3mYWLZ6odsw -O "test_videodatainfo.json"
if not os.path.exists('JS_test_dict_CLIP_text.pt'):
!gdown --id 15mvFQxrWLNvBvFg4_9rr_Kqyzsy9dudj -O "JS_test_dict_CLIP_text.pt"
# Load generated video transcriptions from Google cloud speed-to-text API.
if not os.path.exists('video_id_to_gcloud_transcription_full.json'):
!gdown 1LTmvtf9zzw61O7D8YUqdS2mbql76nO6E -O "video_id_to_gcloud_transcription_full.json"
# Load generated summaries from LM (comment this out to generate your own with GPT-3).
if not os.path.exists('msr_full_summaries.pkl'):
!gdown 1ESXkRv3-3Kz1jZTNtkIhBXME6k1Jr9SW -O "msr_full_summaries.pkl"
# Import helper functions from Portillo-Quintero et al. 2021
!git clone https://github.com/Deferf/Experiments
%cd Experiments
from metrics import rank_at_k_precomputed,stack_encoded_dict,generate_sim_tensor,tensor_video_to_text_sim,tensor_text_to_video_metrics,normalize_matrix,pad_dict,list_recall
%cd "/content"
"""
Explanation: Setup
This installs a few dependencies: gdown (to download precomputed CLIP features), Sentence-Transformers (RoBERTa), and the OpenAI API client (GPT-3).
End of explanation
"""
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
roberta_model = SentenceTransformer('stsb-roberta-large').to(device)
"""
Explanation: Load RoBERTa (masked LM)
End of explanation
"""
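Later cells re-rank captions by cosine similarity between a RoBERTa summary embedding and caption embeddings; a toy NumPy sketch of that idea (the vectors below are made up, not real embeddings):

```python
import numpy as np

def cosine_sim(a, b):
    # Normalize rows, then a matrix product gives pairwise cosine similarities.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

summary_feat = np.array([[1.0, 0.0]])
caption_feats = np.array([[0.9, 0.1], [0.0, 1.0], [0.7, 0.7]])
scores = cosine_sim(caption_feats, summary_feat).squeeze()
ranking = np.argsort(scores)[::-1]  # best-matching caption first
```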
gpt_version = "text-davinci-002"
def prompt_llm(prompt, max_tokens=64, temperature=0, stop=None):
response = openai.Completion.create(engine=gpt_version, prompt=prompt, max_tokens=max_tokens, temperature=temperature, stop=stop)
return response["choices"][0]["text"].strip()
"""
Explanation: Wrap GPT-3 (causal LM)
End of explanation
"""
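The summarization prompt built later in this notebook is plain string templating; a hedged sketch of its shape (the transcript below is made up):

```python
def build_summary_prompt(transcript):
    # Mirrors the prompt assembled in the transcription-summarization loop.
    prompt = 'I am an intelligent video captioning bot.'
    prompt += f'\nI hear a person saying: "{transcript.strip()}".'
    prompt += "\nQ: What's a short video caption for this video? A: In this video,"
    return prompt

example_prompt = build_summary_prompt('welcome to my cooking channel ')
```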
# Load raw text captions from MSR-Full.
with open('test_videodatainfo.json', 'r') as j:
msr_full_info = json.loads(j.read())
msr_full_vid_id_to_captions = {}
for info in msr_full_info['sentences']:
if info['video_id'] not in msr_full_vid_id_to_captions:
msr_full_vid_id_to_captions[info['video_id']] = []
msr_full_vid_id_to_captions[info['video_id']].append(info['caption'])
# Reproduce original results with original eval code.
msr_full_vid_id_to_clip_vid_feats = torch.load("/content/MSRVTT_test_dict_CLIP_visual.pt", map_location="cpu")
msr_full_vid_ids_to_clip_text_feats = torch.load("/content/MSRVTT_test_dict_CLIP_text.pt", map_location="cpu")
msr_full_vid_ids = list(msr_full_vid_ids_to_clip_text_feats.keys())
msr_full_sim_tensor = generate_sim_tensor(msr_full_vid_ids_to_clip_text_feats, msr_full_vid_id_to_clip_vid_feats, msr_full_vid_ids)
msr_full_vid_text_sim = tensor_video_to_text_sim(msr_full_sim_tensor)
msr_full_metrics_vtt = rank_at_k_precomputed(msr_full_vid_text_sim)
print(msr_full_metrics_vtt)
# Transcription results from gCloud API.
with open('video_id_to_gcloud_transcription_full.json', 'r') as j:
msr_full_vid_id_to_transcript = json.loads(j.read())
# Sort video IDs by transcription length.
num_transcripts = 0
transcript_lengths = []
for i in msr_full_vid_ids:
if msr_full_vid_id_to_transcript[i] is None:
transcript_lengths.append(0)
else:
num_transcripts += 1
transcript_lengths.append(len(msr_full_vid_id_to_transcript[i]))
msr_full_sorted_vid_ids = [msr_full_vid_ids[i] for i in np.argsort(transcript_lengths)[::-1]]
# Summarize transcriptions with LLM.
if os.path.exists('msr_full_summaries.pkl'):
msr_full_vid_id_to_summary = pickle.load(open('msr_full_summaries.pkl', 'rb'))
else:
# Zero-shot LLM: summarize transcriptions.
msr_full_vid_id_to_summary = {}
for vid_id in msr_full_sorted_vid_ids:
transcript = msr_full_vid_id_to_transcript[vid_id]
print('Video ID:', vid_id)
print('Transcript:', transcript)
if transcript is not None:
transcript = transcript.strip()
prompt = 'I am an intelligent video captioning bot.'
prompt += f'\nI hear a person saying: "{transcript}".'
prompt += f"\nQ: What's a short video caption for this video? A: In this video,"
print('Prompt:', prompt)
summary = prompt_llm(prompt, temperature=0, stop='.')
print('Summary:', summary)
msr_full_vid_id_to_summary[vid_id] = summary
pickle.dump(msr_full_vid_id_to_summary, open(f'msr_full_summaries.pkl', 'wb'))
# Compute RoBERTa features for all captions.
msr_full_vid_id_to_roberta_feats = {}
for vid_id in msr_full_sorted_vid_ids:
msr_full_vid_id_to_roberta_feats[vid_id] = roberta_model.encode(msr_full_vid_id_to_captions[vid_id], convert_to_tensor=True, device=device)
topk = 100 # Pre-rank with top-100 from Portillo.
combine_clip_roberta = True # Combine CLIP (text-video) x RoBERTa (text-text) scores?
portillo_vid_id_to_topk_vid_ids = {}
socratic_vid_id_to_topk_vid_ids = {}
msr_full_all_clip_text_feats = torch.cat([msr_full_vid_ids_to_clip_text_feats[i] for i in msr_full_sorted_vid_ids], dim=0).cpu().numpy()
for vid_id in msr_full_sorted_vid_ids:
# Get Portillo top-K captions.
vid_feats = msr_full_vid_id_to_clip_vid_feats[vid_id] # CLIP features for all frames of the video
vid_feat = normalize_matrix(torch.mean(vid_feats, dim = 0, keepdim = True)).cpu().numpy()
clip_scores = msr_full_all_clip_text_feats @ vid_feat.T
clip_scores = clip_scores.squeeze()
clip_scores = clip_scores.reshape(-1, 20)
clip_scores = np.max(clip_scores, axis=1)
sorted_idx = np.argsort(clip_scores).squeeze()[::-1]
portillo_topk_vid_ids = [msr_full_sorted_vid_ids[i] for i in sorted_idx[:topk]]
portillo_vid_id_to_topk_vid_ids[vid_id] = portillo_topk_vid_ids
# If no LLM summary, default to Portillo ranking.
socratic_vid_id_to_topk_vid_ids[vid_id] = portillo_topk_vid_ids
if vid_id not in msr_full_vid_id_to_summary:
continue
# Get RoBERTa scores between LLM summary and captions.
summary = msr_full_vid_id_to_summary[vid_id]
summary_feat = roberta_model.encode([summary], convert_to_tensor=True, device=device)
caption_feats = torch.cat([msr_full_vid_id_to_roberta_feats[i] for i in portillo_topk_vid_ids], dim=0)
roberta_scores = st_utils.pytorch_cos_sim(caption_feats, summary_feat).detach().cpu().numpy().squeeze()
roberta_scores = roberta_scores.reshape(-1, 20)
roberta_scores = np.max(roberta_scores, axis=1)
# Re-rank top-K with RoBERTa scores.
sort_idx = np.argsort(roberta_scores, kind='stable').squeeze()[::-1]
socratic_vid_id_to_topk_vid_ids[vid_id] = [portillo_topk_vid_ids[i] for i in sort_idx]
# Combine CLIP (text-video) x RoBERTa (text-text) scores.
if combine_clip_roberta:
clip_scores = np.sort(clip_scores, kind='stable').squeeze()[::-1][:topk]
scores = clip_scores * roberta_scores
sort_idx = np.argsort(scores, kind='stable').squeeze()[::-1]
socratic_vid_id_to_topk_vid_ids[vid_id] = [portillo_topk_vid_ids[i] for i in sort_idx] # Override ranking from only LLM
# Return R@1, R@5, R@10.
def get_recall(vid_ids, socratic_subset, k=[1, 5, 10]):
recall = []
rank = []
for vid_id in vid_ids:
sorted_vid_ids = portillo_vid_id_to_topk_vid_ids[vid_id]
if vid_id in socratic_subset:
sorted_vid_ids = socratic_vid_id_to_topk_vid_ids[vid_id]
recall.append([(vid_id in sorted_vid_ids[:i]) for i in k])
rank.append(sorted_vid_ids.index(vid_id) + 1 if vid_id in sorted_vid_ids else len(sorted_vid_ids))
mdr = np.median(rank)
return np.mean(np.float32(recall) * 100, axis=0), mdr
subset_size = 1007 # Subset of long transcripts.
# Portillo only.
recall, mdr = get_recall(msr_full_sorted_vid_ids, msr_full_sorted_vid_ids[:0])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}')
# Socratic + Portillo.
recall, mdr = get_recall(msr_full_sorted_vid_ids, msr_full_sorted_vid_ids[:subset_size])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}')
# Portillo only on long transcripts.
recall, mdr = get_recall(msr_full_sorted_vid_ids[:subset_size], msr_full_sorted_vid_ids[:0])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}')
# Socratic + Portillo on long transcripts.
recall, mdr = get_recall(msr_full_sorted_vid_ids[:subset_size], msr_full_sorted_vid_ids[:subset_size])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}')
"""
Explanation: Evaluate on MSR-Full
End of explanation
"""
|
olgabot/kvector | overview.ipynb | bsd-3-clause | import kvector
"""
Explanation: Overview of kvector features
End of explanation
"""
motifs = kvector.read_motifs('kvector/tests/data/example_rbps.motif', residues='ACGT')
motifs.head()
"""
Explanation: Read HOMER Motifs
Read HOMER motif file and create a pandas dataframe for each position weight matrix (PWM), with all motifs saved as a series with the motif name as the key.
End of explanation
"""
# the 4th (counting from 0) motif
motifs[3]
# Specific motif name
motifs['M004_0.6_BRUNOL4_ENSG00000101489_Homo_sapiens\tM004_0.6_BRUNOL4_ENSG00000101489_Homo_sapiens\t5.0']
"""
Explanation: You can access individual motifs with the usual pandas indexing:
End of explanation
"""
motif_kmer_vectors = kvector.motifs_to_kmer_vectors(motifs, residues='ACGT',
kmer_lengths=(3, 4))
motif_kmer_vectors
"""
Explanation: Convert motifs to kmer vectors
Instead of representing a motif as a position-specific weight matrix which would require aligning motifs to compare them, you can convert them to a vector of kmers, where the value for each kmer is the score of the kmer in that motif.
Citation: Xu and Su, PLoS Computational Biology (2010)
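To make the columns of such a vector concrete, here is a small illustrative sketch (not kvector's actual implementation) of enumerating every possible k-mer over the ACGT alphabet — one column per k-mer:

```python
from itertools import product

def enumerate_kmers(residues='ACGT', kmer_lengths=(3, 4)):
    """Return every k-mer over the given alphabet, one list entry per column."""
    kmers = []
    for k in kmer_lengths:
        kmers.extend(''.join(p) for p in product(residues, repeat=k))
    return kmers

kmers = enumerate_kmers()
# 4^3 + 4^4 = 64 + 256 = 320 possible 3-mers and 4-mers
print(len(kmers))  # → 320
```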
End of explanation
"""
kmer_vector = kvector.count_kmers('kvector/tests/data/example.fasta', kmer_lengths=(3, 4))
kmer_vector.head()
"""
Explanation: Count kmers in fasta files
You may also want to just count the integer number of occurences of a DNA word (kmer) in a file. count_kmers does just that, returning a pandas dataframe.
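As a rough sketch of what such counting involves (this is not count_kmers' actual implementation), a sliding window plus a Counter is enough for a single sequence:

```python
from collections import Counter

def count_kmers_in_sequence(seq, k):
    """Count every overlapping k-mer in a single DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = count_kmers_in_sequence('GATTACA', 3)
print(counts['ATT'])          # → 1
print(sum(counts.values()))   # → 5 overlapping 3-mers in a 7-base sequence
```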
End of explanation
"""
kmer_vector.mean()
kmer_vector.std()
"""
Explanation: Since this is a pandas dataframe, you can do convenient things like get the mean and standard deviation.
End of explanation
"""
|
rasbt/algorithms_in_ipython_notebooks | ipython_nbs/data-structures/bloom-filter.ipynb | gpl-3.0 | import hashlib
h1 = hashlib.md5()
h1.update('hello-world'.encode('utf-8'))
int(h1.hexdigest(), 16)
"""
Explanation: Bloom Filters
Bloom filters in a nutshell
A bloom filter is a probabilistic data structure for memory-efficient look-ups that test whether an element is a member of a set. In a nutshell, you can think of a bloom filter as a large bit array (an array that contains 1s and 0s), and by only checking a few elements (bits) of this array, we can tell whether an element is likely a member of a set or is definitely not a member of a set. Checking set membership via a bloom filter can return the following outputs:
Element is probably a member of the set
Element is definitely not a member of a set
Or in other words, bloom filters can produce false positives (a match means that an element is a member of the set only with a given uncertainty) but do not produce false negatives (a non-match always means that the element is not a member of the set).
So, if bloom filters are probabilistic and can produce false positives, why are they useful in practice? Bloom filters are extremely useful when we are working with large databases and want to run a quick check of whether it is worth running a computationally more expensive database query to retrieve an element. If a bloom filter returns a non-match, we know that the element is not contained in the set, and we can thus save the computational cost of checking the actual database for that element.
The basic mechanics of bloom filters
Bloom filters allow us to implement computationally cheap and memory-efficient set membership checks using bit arrays. Given an element, a bloom filter uses multiple hash functions (or the same hash function with different random seeds) to encode the element as a position in the bit array. Let's walk through the inner workings of a bloom filter step by step.
1)
let's assume we have initialized the following empty bitarray b, underlying a bloom filter of size 10:
b = [0 0 0 0 0 0 0 0 0 0]
2)
Next, we use two different hash functions, h1 and h2 to encode an element e. These hash functions convert the output of the hash into an integer and normalize the integer so that it fits into the bounds of the array b:
h1(e) -> 5
h2(e) -> 3
In the example above, the first hash function hashes element e to the array index position 5, and the second hash function hashes the element to the array index position 3.
3)
If we consider the hash operations of step 2) as part of an add to the set operation, we would use the returned array index positions to update the bitarray b as follows:
[0 0 0 0 0 0 0 0 0 0] -> [0 0 0 1 0 1 0 0 0 0]
If step 2) was part of a query or look-up operation, we would simply check the respective array index positions:
If both position 3 and 5 have the bit value 1, the query returns "probably in set"
If position 3 or 5 (or both) have the bit value 0, the query returns "definitely not in set"
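Before implementing anything, it is worth noting that the false-positive rate can be estimated analytically. Under the idealized assumption of independent, uniformly distributed hashes, a common approximation for a filter with m bits, k hash functions, and n inserted elements is p ≈ (1 − e^(−kn/m))^k. A small sketch:

```python
import math

def false_positive_rate(m, k, n):
    """Approximate false-positive probability of a bloom filter
    with m bits, k hash functions, and n inserted elements."""
    return (1.0 - math.exp(-k * n / m)) ** k

# For a toy filter like the one below: 10 bits, 2 hash functions, 2 inserted values
rate = false_positive_rate(m=10, k=2, n=2)
print(round(rate, 3))
```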
Implementing a bloom filter in Python
In this section, we are going to implement a bloom filter in Python. However, note that the following implementation of a bloom filter in Python mainly serves illustrative purposes and has not been designed for efficiency. For example, using Python list objects for representing bit arrays is very inefficient compared to using the bitarray package.
To generate the hashes, we will use the hashlib module from Python's standard library. Let's start with a simple example generating an integer hash using the MD5 hash function:
End of explanation
"""
h1.update('hello-world'.encode('utf-8'))
int(h1.hexdigest(), 16)
int(hashlib.new('md5', ('%s' % 'hello-world').encode('utf-8')).hexdigest(), 16)
int(hashlib.new('md5', ('%s' % 'hello-world').encode('utf-8')).hexdigest(), 16)
"""
Explanation: Unfortunately, the update method renders the hash function non-deterministic in the context of bloom filters; i.e., calls to update accumulate, so hashing the same value again returns a different integer hash. Creating a fresh hash object for every value via hashlib.new avoids this:
End of explanation
"""
class BloomFilter():
def __init__(self, array_size, hash_names):
self.array_size = array_size
self.bitarray = [0] * array_size
self.hash_names = hash_names
def _get_hash_positions(self, value):
pos = []
for h in self.hash_names:
hash_hex = hashlib.new(h, ('%s' % value).encode(
'utf-8')).hexdigest()
# convert hashed value into an integer
asint = int(hash_hex, 16)
# modulo array_size to fit hash value into the bitarray
pos.append(asint % self.array_size)
return pos
def add(self, value):
hash_pos = self._get_hash_positions(value)
for pos in hash_pos:
self.bitarray[pos] = 1
def query(self, value):
hash_pos = self._get_hash_positions(value)
for pos in hash_pos:
if not self.bitarray[pos]:
return False
return True
"""
Explanation: Next, let's implement a BloomFilter class based on the concepts we discussed earlier:
End of explanation
"""
bloom = BloomFilter(array_size=10, hash_names=('md5', 'sha1'))
bloom.bitarray
"""
Explanation: To test our implementation, let's initialize a bloom filter with array size 10 and two different hash function, a simple SHA hash and MD5 hash:
End of explanation
"""
bloom.add('hello world!')
bloom.bitarray
"""
Explanation: Next, we will add a new value to the bloom filter, 'hello world!'
End of explanation
"""
bloom.query('hello world!')
"""
Explanation: As we can see from running the previous code example, the array index positions the value was hashed into are 1 and 7. Let's check the element for membership now:
End of explanation
"""
bloom.add('foo-bar')
bloom.bitarray
"""
Explanation: So far so good. Let's add another value, 'foo-bar':
End of explanation
"""
bloom.query('foo-bar')
"""
Explanation: The 'foo-bar' value was hashed into positions 3 and 5. Similarly, we can check the membership as follows:
End of explanation
"""
bloom.query('test')
"""
Explanation: Just to confirm that our bloom filter is implemented correctly and does not return false negatives, let's run a query on a new value:
End of explanation
"""
|
dsacademybr/PythonFundamentos | Cap10/Notebooks/DSA-Python-Cap10-Intro-TensorFlow.ipynb | gpl-3.0 | # Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 10</font>
Download: http://github.com/dsacademybr
End of explanation
"""
from IPython.display import Image
Image('imagens/tensor1.png')
"""
Explanation: Introduction to TensorFlow
TensorFlow is one of the most widely used libraries for implementing machine learning and other algorithms that involve large mathematical operations. TensorFlow was developed by Google and is one of the most popular machine learning libraries on GitHub. Google uses TensorFlow for machine learning in almost all of its applications. If you have ever used Google Photos or Google Voice Search, then you have already used an application built with the help of TensorFlow. Let's understand the details behind TensorFlow.
Mathematically, a tensor is an N-dimensional vector, meaning that a tensor can be used to represent N-dimensional datasets. Here is an example:
End of explanation
"""
from IPython.display import Image
Image('imagens/tensor2.png')
"""
Explanation: The figure above shows some simplified tensors with minimal dimensions. As the number of dimensions keeps growing, the data becomes more and more complex. For example, if we take a tensor of shape (3x3), I can call it a matrix of 3 rows and 3 columns. If I select another tensor of shape (1000x3x3), I can call it a tensor, or a set of 1000 3x3 matrices. Here we call (1000x3x3) the shape or dimension of the resulting tensor. Tensors can be constants or variables.
End of explanation
"""
from IPython.display import Image
Image('imagens/tf_numpy.png')
"""
Explanation: TensorFlow x NumPy
TensorFlow and NumPy are quite similar (both are N-d array libraries). NumPy is the fundamental package for scientific computing with Python. It contains a powerful N-dimensional array object, sophisticated (broadcasting) functions, and so on. I believe Python users cannot live without NumPy. NumPy supports N-d arrays, but it does not offer methods to create tensor functions and automatically compute derivatives, nor does it offer GPU support, and this is one of the main reasons TensorFlow exists. Below is a comparison between NumPy and TensorFlow, and you will notice that many keywords are similar.
End of explanation
"""
from IPython.display import Image
Image('imagens/grafo1.png')
"""
Explanation: Computational Graph
Check out the Formação Inteligência Artificial, a complete program, 100% online and 100% in Portuguese, with more than 402 hours across 9 intermediate/advanced-level courses that will help you become one of the most sought-after professionals in the technology market. Click the link below, sign up, start today, and increase your employability:
https://www.datascienceacademy.com.br/bundle/formacao-inteligencia-artificial
Now we know what a tensor really means, and it is time to understand the flow. This flow refers to a computational graph, or simply a graph.
Computational graphs are a good way to think about mathematical expressions. The concept of a graph was introduced by Leonhard Euler in 1736 in an attempt to solve the Königsberg Bridges problem. Graphs are mathematical models for solving practical, everyday problems, with many real-world applications such as: electrical circuits, distribution networks, kinship relations between people, social network analysis, logistics, road networks, computer networks, and much more. Graphs are widely used to model problems in computing.
A graph is a mathematical model that represents relations between objects. A graph G = (V, E) consists of a set of vertices V (also called nodes) connected by a set of edges E.
Consider the diagram below:
End of explanation
"""
from IPython.display import Image
Image('imagens/grafo2.png')
"""
Explanation: There are three operations: two additions and one multiplication. That is:
c = a+b
d = b+1
e = c*d
To create a computational graph, we make each of these operations a node, together with the input variables. When the value of one node is the input to another node, an arrow goes from one to the other, and in that case we have a directed graph.
These kinds of graphs come up all the time in Computer Science, especially when talking about functional programs. They are closely related to the notions of dependency graphs and call graphs. They are also the main abstraction behind the popular Deep Learning framework, TensorFlow.
For more details on graphs, read one of the chapters of the Deep Learning Book:
https://deeplearningbook.com.br/algoritmo-backpropagation-parte1-grafos-computacionais-e-chain-rule/
A graph for executing a Machine Learning model can be quite large, and we can run subgraphs (portions of the graph) on different devices, such as a GPU. Example:
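As a minimal illustration of evaluating such a graph (plain Python, no TensorFlow; the sample inputs are made up for the example), each node is computed from its inputs:

```python
# Evaluate the graph c = a + b, d = b + 1, e = c * d for sample inputs
a, b = 2, 1
c = a + b      # addition node
d = b + 1      # addition node
e = c * d      # multiplication node, depends on both additions
print(e)  # → 6
```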
End of explanation
"""
from IPython.display import Image
Image('imagens/grafo3.png')
"""
Explanation: The figure above illustrates the parallel execution of subgraphs. Here there are 2 matrix multiplication operations; since both are at the same level, the nodes are executed on gpu_0 and gpu_1 in parallel.
The TensorFlow Programming Model
The main goal of a TensorFlow program is to express a numerical computation as a directed graph. The figure below is an example of a computation graph representing the calculation of h = ReLU(Wx + b). This is a very classic component in many neural networks: it performs a linear transformation of the input data and then feeds a nonlinearity (the rectified linear activation function, in this case).
End of explanation
"""
from IPython.display import Image
Image('imagens/grafo4.png')
"""
Explanation: The graph above represents a dataflow computation; each node is an operation with zero or more inputs and zero or more outputs. The edges of the graph are tensors that flow between the nodes. Data scientists usually build a computational graph using one of the supported frontend languages, such as Python and C++, and then launch the graph.
Let's look at the computational graph above in detail. We truncate the graph and keep the part above the ReLU node, which is exactly the computation h = ReLU(Wx + b).
End of explanation
"""
# TensorFlow version to be used
!pip install -q tensorflow==2.5
import tensorflow as tf
tf.__version__
# Create a tensor
# This tensor is added as a node to the graph.
hello = tf.constant('Hello, TensorFlow!')
print(hello)
"""
Explanation: We can see the graph as a system that has inputs (the data x), an output (h in this case), stateful variables (W and b), and a bunch of operations (matmul, add, and ReLU). Let me introduce them one by one.
Variables: when we train a model, we use variables to hold and update parameters. Unlike the many tensors that flow along the edges of the graph, a variable is a special kind of operation. In most machine learning models there are many parameters that we have to learn, and they are updated during training. Variables are stateful nodes that store parameters and output their current values from time to time. Their state is maintained across multiple executions of a graph. For example, the values of these nodes will not be updated until a complete training step using a mini-batch of data has finished.
Mathematical operations: in this graph there are three types of mathematical operations. The MatMul operation multiplies two matrix values; the Add operation adds elements; and the ReLU operation applies the element-wise rectified linear function.
Hello World
End of explanation
"""
# Constants
const_a = tf.constant(5)
const_b = tf.constant(9)
print(const_a)
# Sum
total = const_a + const_b
print(total)
# Creating the nodes in the computational graph
node1 = tf.constant(5, dtype = tf.int32)
node2 = tf.constant(9, dtype = tf.int32)
node3 = tf.add(node1, node2)
# Execute the graph
print("\nThe sum of node1 and node2 is:", node3)
"""
Explanation: Mathematical Operations with Tensors
Addition
End of explanation
"""
# Random tensors
rand_a = tf.random.normal([3], 2.0)
rand_b = tf.random.uniform([3], 1.0, 4.0)
print(rand_a)
print(rand_b)
# Subtraction
diff = tf.subtract(rand_a, rand_b)
type(diff)
print('\nThe difference between the 2 tensors is: ', diff)
"""
Explanation: Subtraction
End of explanation
"""
# Tensors
node1 = tf.constant(21, dtype = tf.int32)
node2 = tf.constant(7, dtype = tf.int32)
# Division
div = tf.math.truediv(node1, node2)
print('\nDivision of the tensors: \n', div)
"""
Explanation: Division
End of explanation
"""
# Creating tensors
tensor_a = tf.constant([[4., 2.]])
tensor_b = tf.constant([[3.],[7.]])
print(tensor_a)
print(tensor_b)
# Multiplication
# tf.math.multiply(X, Y) performs element-wise multiplication
# https://www.tensorflow.org/api_docs/python/tf/math/multiply
prod = tf.math.multiply(tensor_a, tensor_b)
print('\nElement-wise product of the tensors: \n', prod)
# Another example: matrix multiplication
mat_a = tf.constant([[2, 3], [9, 2], [4, 5]])
mat_b = tf.constant([[6, 4, 5], [3, 7, 2]])
print(mat_a)
print(mat_b)
# Matrix multiplication
# tf.linalg.matmul(X, Y) performs matrix multiplication
# https://www.tensorflow.org/api_docs/python/tf/linalg/matmul
mat_prod = tf.linalg.matmul(mat_a, mat_b)
print('\nMatrix product of the tensors: \n', mat_prod)
from IPython.display import Image
Image('imagens/matrizes.png')
"""
Explanation: Multiplication
To learn the theory and practice of matrix operations, go <a href="https://www.datascienceacademy.com.br/bundle/formacao-inteligencia-artificial">here</a>.
End of explanation
"""
# Creating the same tensor with tf.Variable() and tf.constant()
changeable_tensor = tf.Variable([10, 7])
unchangeable_tensor = tf.constant([10, 7])
changeable_tensor, unchangeable_tensor
# This will raise an error - it requires the assign() method
changeable_tensor[0] = 7
changeable_tensor
# This will work
changeable_tensor[0].assign(7)
changeable_tensor
# This will raise an error (we cannot modify tensors created with tf.constant())
unchangeable_tensor[0].assign(7)
unchangeable_tensor
"""
Explanation: Using Variables
TensorFlow also has variable nodes that can hold mutable data. They are mainly used to hold and update the parameters of a training model.
Variables are in-memory buffers containing tensors. They must be initialized, and they can be saved during and after training. You can restore the saved values later to exercise or analyze the model.
An important difference to note between a constant and a variable is:
The value of a constant is stored in the graph, and its value is replicated wherever the graph is loaded. A variable is stored separately and may live on a parameter server.
End of explanation
"""
# Tensor filled with ones
tf.ones(shape=(3, 2))
# Tensor filled with zeros
tf.zeros(shape=(3, 2))
# Create a rank-4 tensor (4 dimensions)
rank_4_tensor = tf.zeros([2, 3, 4, 5])
rank_4_tensor
# Print tensor attributes
print("Data type of each element:", rank_4_tensor.dtype)
print("Number of dimensions (rank):", rank_4_tensor.ndim)
print("Tensor shape:", rank_4_tensor.shape)
print("Elements along axis 0 of the tensor:", rank_4_tensor.shape[0])
print("Elements along the last axis of the tensor:", rank_4_tensor.shape[-1])
print("Total number of elements (2*3*4*5):", tf.size(rank_4_tensor).numpy())
# Get the first 2 items along each dimension
rank_4_tensor[:2, :2, :2, :2]
# Get the first item along each axis except the last
rank_4_tensor[:1, :1, :1, :]
# Create a rank-2 tensor (2 dimensions)
rank_2_tensor = tf.constant([[10, 7],
                             [3, 4]])
# Get the last item of each row
rank_2_tensor[:, -1]
"""
Explanation: Which should you use? tf.constant() or tf.Variable()?
It depends on what your problem requires. However, most of the time, TensorFlow chooses automatically for you (when loading or modeling data).
Other Ways to Create Tensors
End of explanation
"""
|
weikang9009/pysal | notebooks/viz/splot/esda_morans_viz.ipynb | bsd-3-clause | %matplotlib inline
import matplotlib.pyplot as plt
from pysal.lib.weights.contiguity import Queen
from pysal.lib import examples
import numpy as np
import pandas as pd
import geopandas as gpd
import os
from pysal.viz import splot
"""
Explanation: Exploratory Analysis of Spatial Data: Visualizing Spatial Autocorrelation with splot and esda
Content
Imports
Load Example data
Assessing Global Spatial Autocorrelation
Visualizing Local Autocorrelation Statistics with splot
Combined visualizations: Moran Local Scatterplot, LISA clustermap and Choropleth map
Bivariate Moran Statistics
Imports
End of explanation
"""
link_to_data = examples.get_path('Guerry.shp')
gdf = gpd.read_file(link_to_data)
"""
Explanation: Example Data
First, we will load the Guerry.shp data from examples in pysal.lib.
End of explanation
"""
y = gdf['Donatns'].values
w = Queen.from_dataframe(gdf)
w.transform = 'r'
"""
Explanation: For this example we will focus on the Donatns (charitable donations per capita) variable. We will calculate contiguity weights w with pysal.lib's Queen.from_dataframe(gdf). Then we transform our weights to be row-standardized.
End of explanation
"""
from pysal.explore.esda.moran import Moran
w = Queen.from_dataframe(gdf)
moran = Moran(y, w)
moran.I
"""
Explanation: Assessing Global Spatial Autocorrelation
We calculate Moran's I, a test for global autocorrelation for a continuous attribute.
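As a rough sketch of what esda computes under the hood (an illustrative NumPy version, not esda's implementation), Moran's I for values y and a spatial weights matrix W is I = (n / S0) · (zᵀ W z) / (zᵀ z), where z holds the deviations from the mean and S0 is the sum of all weights:

```python
import numpy as np

def morans_i(y, W):
    """Moran's I for values y and a (dense) spatial weights matrix W."""
    z = y - y.mean()
    s0 = W.sum()
    n = len(y)
    return (n / s0) * (z @ W @ z) / (z @ z)

# Tiny example: 4 areas on a line, each connected to its immediate neighbors
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 2.0, 3.0, 4.0])
print(round(morans_i(y, W), 3))  # → 0.333
```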
End of explanation
"""
from pysal.viz.splot.esda import moran_scatterplot
fig, ax = moran_scatterplot(moran, aspect_equal=True)
plt.show()
from pysal.viz.splot.esda import plot_moran
plot_moran(moran, zstandard=True, figsize=(10,4))
plt.show()
"""
Explanation: Our value for the statistic is interpreted against a reference distribution under the null hypothesis of complete spatial randomness. PySAL uses the approach of random spatial permutations.
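The pseudo p-value reported below (p_sim) is, in essence, the share of permuted datasets whose statistic is at least as extreme as the observed one. A simplified sketch of that permutation logic (illustrative only, with a toy statistic standing in for Moran's I):

```python
import numpy as np

def permutation_p_value(observed, statistic, y, n_perm=999, seed=12345):
    """One-sided pseudo p-value: fraction of permutations whose statistic
    is >= the observed value (with +1 in numerator and denominator)."""
    rng = np.random.default_rng(seed)
    larger = 0
    for _ in range(n_perm):
        if statistic(rng.permutation(y)) >= observed:
            larger += 1
    return (larger + 1) / (n_perm + 1)

# Toy statistic: correlation of consecutive values in the sequence
y = np.arange(20.0)
stat = lambda v: float(np.corrcoef(v[:-1], v[1:])[0, 1])
p = permutation_p_value(stat(y), stat, y)
print(p <= 0.05)  # a perfectly ordered sequence should look non-random
```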
End of explanation
"""
moran.p_sim
"""
Explanation: Our observed value is statistically significant:
End of explanation
"""
from pysal.viz.splot.esda import moran_scatterplot
from pysal.explore.esda.moran import Moran_Local
# calculate Moran_Local and plot
moran_loc = Moran_Local(y, w)
fig, ax = moran_scatterplot(moran_loc)
ax.set_xlabel('Donatns')
ax.set_ylabel('Spatial Lag of Donatns')
plt.show()
fig, ax = moran_scatterplot(moran_loc, p=0.05)
ax.set_xlabel('Donatns')
ax.set_ylabel('Spatial Lag of Donatns')
plt.show()
"""
Explanation: Visualizing Local Autocorrelation with splot - Hot Spots, Cold Spots and Spatial Outliers
In addition to visualizing Global autocorrelation statistics, splot has options to visualize local autocorrelation statistics. We compute the local Moran m. Then, we plot the spatial lag and the Donatns variable in a Moran Scatterplot.
End of explanation
"""
from pysal.viz.splot.esda import lisa_cluster
lisa_cluster(moran_loc, gdf, p=0.05, figsize = (9,9))
plt.show()
"""
Explanation: We can distinguish the specific type of local spatial autocorrelation into High-High, Low-Low, High-Low, and Low-High.
The upper right quadrant displays HH, the lower left LL, the upper left LH, and the lower right HL.
These types of local spatial autocorrelation describe similarities or dissimilarities between a specific polygon and its neighboring polygons. The upper left quadrant, for example, indicates that polygons with low values are surrounded by polygons with high values (LH). The lower right quadrant shows polygons with high values surrounded by neighbors with low values (HL). This indicates an association of dissimilar values.
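The quadrant a polygon falls into can be read off the signs of its standardized value and its spatial lag. A small illustrative helper (not splot's implementation; the boundary handling at exactly zero is a simplifying choice):

```python
def moran_quadrant(z, lag):
    """Classify a (standardized value, spatial lag) pair into a Moran quadrant."""
    if z >= 0 and lag >= 0:
        return 'HH'  # high value, high-valued neighbors
    if z < 0 and lag < 0:
        return 'LL'  # low value, low-valued neighbors
    if z < 0:
        return 'LH'  # low value, high-valued neighbors (spatial outlier)
    return 'HL'      # high value, low-valued neighbors (spatial outlier)

print(moran_quadrant(1.2, 0.8))   # → HH
print(moran_quadrant(-0.5, 0.9))  # → LH
```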
Let's now visualize the areas we found to be significant on a map:
End of explanation
"""
from pysal.viz.splot.esda import plot_local_autocorrelation
plot_local_autocorrelation(moran_loc, gdf, 'Donatns')
plt.show()
plot_local_autocorrelation(moran_loc, gdf, 'Donatns', quadrant=1)
plt.show()
"""
Explanation: Combined visualizations
Often, it is easier to asses once statistical results or interpret these results comparing different visualizations.
Here we for example look at a static visualization of a Moran Scatterplot, LISA cluster map and choropleth map.
End of explanation
"""
from pysal.explore.esda.moran import Moran_BV, Moran_Local_BV
from pysal.viz.splot.esda import plot_moran_bv_simulation, plot_moran_bv
"""
Explanation: Bivariate Moran Statistics
In addition to assessing the correlation of one variable over space, it is possible to inspect the relationship between two variables and their positions in space with so-called Bivariate Moran Statistics. These can be found in esda.moran.Moran_BV.
End of explanation
"""
x = gdf['Suicids'].values
"""
Explanation: Alongside y, we will also look at the suicide rate x.
End of explanation
"""
moran = Moran(y,w)
moran_bv = Moran_BV(y, x, w)
moran_loc = Moran_Local(y, w)
moran_loc_bv = Moran_Local_BV(y, x, w)
fig, axs = plt.subplots(2, 2, figsize=(15,10),
subplot_kw={'aspect': 'equal'})
moran_scatterplot(moran, ax=axs[0,0])
moran_scatterplot(moran_loc, p=0.05, ax=axs[1,0])
moran_scatterplot(moran_bv, ax=axs[0,1])
moran_scatterplot(moran_loc_bv, p=0.05, ax=axs[1,1])
plt.show()
"""
Explanation: Before we dive into Bivariate Moran statistics, let's get a quick overview of which esda.moran objects are supported by moran_scatterplot:
End of explanation
"""
plot_moran_bv(moran_bv)
plt.show()
"""
Explanation: As you can see, a simple moran_scatterplot call provides you with loads of options. Now, what are Bivariate Moran Statistics?
Bivariate Moran Statistics describe the correlation between one variable and the spatial lag of another variable. Therefore, we have to be careful when interpreting our results. Bivariate Moran Statistics do not take the inherent correlation between the two variables at the same location into account. Rather, they offer a tool to measure the degree to which one polygon with a specific attribute is correlated with its neighboring polygons' values of a different attribute.
splot can help with interpreting the results by providing visualizations of reference distributions and a Moran scatterplot:
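At the core of these statistics is the spatial lag: the row-standardized weighted average of a variable over each polygon's neighbors. A minimal NumPy sketch (illustrative only):

```python
import numpy as np

# Row-standardized weights: each row of W sums to 1
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)

x = np.array([10.0, 20.0, 30.0])
spatial_lag = W @ x  # average of each area's neighbors' x values
print(spatial_lag)   # → [20. 20. 20.]
```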
End of explanation
"""
from pysal.explore.esda.moran import Moran_Local_BV
moran_loc_bv = Moran_Local_BV(x, y, w)
fig, ax = moran_scatterplot(moran_loc_bv, p=0.05)
ax.set_xlabel('Donatns')
ax.set_ylabel('Spatial lag of Suicids')
plt.show()
plot_local_autocorrelation(moran_loc_bv, gdf, 'Suicids')
plt.show()
"""
Explanation: Local Bivariate Moran Statistics
Similar to univariate local Moran statistics, pysal and splot offer tools to assess local autocorrelation for bivariate analysis:
End of explanation
"""
|
dmc-2016/dmc | notebooks/week-4/01-tensorflow ANN for regression.ipynb | apache-2.0 | %matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
"""
Explanation: Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neuron Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, it's main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are:
While it has a Python interface, much of the low-level computation is implemented in C/C++, making it run much faster than a native Python solution.
Many common aspects of neural networks such as computation of various losses and a variety of modern optimization techniques are implemented as built in methods, reducing their implementation to a single line of code. This also helps in development and testing of various solutions, as you can easily swap in and try various solutions without having to write all the code by hand.
You can get more details about various popular machine learning libraries in this comparison.
To test our basic network, we will use the Boston Housing Dataset, which represents data on 506 houses in Boston across 14 different features. One of the features is the median value of the house in $1000’s. This is a common data set for testing regression performance of machine learning algorithms. All 14 features are continuous values, making them easy to plug directly into a neural network (after normalizing, of course!). The common goal is to predict the median house value using the other columns as features.
This lab will conclude with two assignments:
Assignment 1 (at bottom of this notebook) asks you to experiment with various regularization parameters to reduce overfitting and improve the results of the model.
Assignment 2 (in the next notebook) asks you to take our regression problem and convert it to a classification problem.
Let's start by importing some of the libraries we will use for this tutorial:
End of explanation
"""
#load data from scikit-learn library
dataset = load_boston()
#load data as DataFrame
houses = pd.DataFrame(dataset.data, columns=dataset.feature_names)
#add target data to DataFrame
houses['target'] = dataset.target
#print first 5 entries of data
print houses.head()
"""
Explanation: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
End of explanation
"""
print dataset['DESCR']
"""
Explanation: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
End of explanation
"""
# Create a dataset of correlations between house features
corrmat = houses.corr()
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(9, 6))
# Draw the heatmap using seaborn
sns.set_context("notebook", font_scale=0.7, rc={"lines.linewidth": 1.5})
sns.heatmap(corrmat, annot=True, square=True)
f.tight_layout()
"""
Explanation: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
End of explanation
"""
sns.jointplot(houses['target'], houses['RM'], kind='hex')
sns.jointplot(houses['target'], houses['LSTAT'], kind='hex')
"""
Explanation: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail:
End of explanation
"""
# convert housing data to numpy format
houses_array = houses.as_matrix().astype(float)
# split data into feature and target sets
X = houses_array[:, :-1]
y = houses_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# split data into training and test sets
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
"""
Explanation: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab:
We will first re-split the data into a feature set (X) and a target set (y)
Then we will normalize the feature set so that the values range from 0 to 1
Finally, we will split both data sets into a training and test set.
End of explanation
"""
# helper variables
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = 1
# Hyper-parameters
batch_size = 50
num_hidden_1 = 16
num_hidden_2 = 16
learning_rate = 0.0001
training_epochs = 200
dropout_keep_prob = 1.0 # set to no dropout by default
# variable to control the resolution at which the training results are stored
display_step = 1
"""
Explanation: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include:
batch size, which sets how many training samples are used at a time
learning rate which controls how quickly the gradient descent algorithm works
training epochs which sets how many rounds of training occurs
dropout keep probability, a regularization technique which controls how many neurons are 'dropped' randomly during each training step (note in Tensorflow this is specified as the 'keep probability' from 0 to 1, with 0 representing all neurons dropped, and 1 representing all neurons kept). You can read more about dropout here.
End of explanation
"""
def accuracy(predictions, targets):
error = np.absolute(predictions.reshape(-1) - targets)
return np.mean(error)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
"""
Explanation: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
End of explanation
"""
'''First we create a variable to store our graph'''
graph = tf.Graph()
'''Next we build our neural network within this graph variable'''
with graph.as_default():
'''Our training data will come in as x feature data and
y target data. We need to create tensorflow placeholders
to capture this data as it comes in'''
x = tf.placeholder(tf.float32, shape=(None, num_features))
_y = tf.placeholder(tf.float32, shape=(None))
'''Another placeholder stores the hyperparameter
that controls dropout'''
keep_prob = tf.placeholder(tf.float32)
'''Finally, we convert the test and train feature data sets
to tensorflow constants so we can use them to generate
predictions on both data sets'''
tf_X_test = tf.constant(X_test, dtype=tf.float32)
tf_X_train = tf.constant(X_train, dtype=tf.float32)
'''Next we create the parameter variables for the model.
    Each layer of the neural network needs its own weight
and bias variables which will be tuned during training.
The sizes of the parameter variables are determined by
the number of neurons in each layer.'''
W_fc1 = weight_variable([num_features, num_hidden_1])
b_fc1 = bias_variable([num_hidden_1])
W_fc2 = weight_variable([num_hidden_1, num_hidden_2])
b_fc2 = bias_variable([num_hidden_2])
W_fc3 = weight_variable([num_hidden_2, num_outputs])
b_fc3 = bias_variable([num_outputs])
'''Next, we define the forward computation of the model.
We do this by defining a function model() which takes in
a set of input data, and performs computations through
the network until it generates the output.'''
def model(data, keep):
        # computing first hidden layer from input, using sigmoid activation function
fc1 = tf.nn.sigmoid(tf.matmul(data, W_fc1) + b_fc1)
# adding dropout to first hidden layer
fc1_drop = tf.nn.dropout(fc1, keep)
        # computing second hidden layer from first hidden layer, using sigmoid activation function
fc2 = tf.nn.sigmoid(tf.matmul(fc1_drop, W_fc2) + b_fc2)
# adding dropout to second hidden layer
fc2_drop = tf.nn.dropout(fc2, keep)
# computing output layer from second hidden layer
# the output is a single neuron which is directly interpreted as the prediction of the target value
fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3
# the output is returned from the function
return fc3
'''Next we define a few calls to the model() function which
will return predictions for the current batch input data (x),
as well as the entire test and train feature set'''
prediction = model(x, keep_prob)
test_prediction = model(tf_X_test, 1.0)
train_prediction = model(tf_X_train, 1.0)
'''Finally, we define the loss and optimization functions
which control how the model is trained.
For the loss we will use the basic mean square error (MSE) function,
which tries to minimize the MSE between the predicted values and the
real values (_y) of the input dataset.
    For the optimization function we will use stochastic gradient descent (SGD)
which will minimize the loss using the specified learning rate.'''
loss = tf.reduce_mean(tf.square(tf.sub(prediction, _y)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
'''We also create a saver variable which will allow us to
save our trained model for later use'''
saver = tf.train.Saver()
"""
Explanation: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things:
describes the architecture of the network, including how many layers it has and how many neurons are in each layer
initializes all the parameters of the network
describes the 'forward' calculation of the network, or how input data is passed through the network layer by layer until it reaches the result
defines the loss function which describes how well the model is performing
specifies the optimization function which dictates how the parameters are tuned in order to minimize the loss
Once this graph is defined, we can work with it by 'executing' it on sets of training data and 'calling' different parts of the graph to get back results. Every time the graph is executed, Tensorflow will only do the minimum calculations necessary to generate the requested results. This makes Tensorflow very efficient, and allows us to structure very complex models while only testing and using certain portions at a time. In programming language theory, this type of programming is called 'lazy evaluation'.
End of explanation
"""
# create an array to store the results of the optimization at each epoch
results = []
'''First we open a session of Tensorflow using our graph as the base.
While this session is active all the parameter values will be stored,
and each step of training will be using the same model.'''
with tf.Session(graph=graph) as session:
'''After we start a new session we first need to
initialize the values of all the variables.'''
tf.initialize_all_variables().run()
print('Initialized')
'''Now we iterate through each training epoch based on the hyper-parameter set above.
Each epoch represents a single pass through all the training data.
The total number of training steps is determined by the number of epochs and
the size of mini-batches relative to the size of the entire training set.'''
for epoch in range(training_epochs):
'''At the beginning of each epoch, we create a set of shuffled indexes
so that we are using the training data in a different order each time'''
indexes = range(num_samples)
random.shuffle(indexes)
'''Next we step through each mini-batch in the training set'''
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
'''We subset the feature and target training sets to create each mini-batch'''
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
'''Then, we create a 'feed dictionary' that will feed this data,
along with any other hyper-parameters such as the dropout probability,
to the model'''
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
'''Finally, we call the session's run() function, which will feed in
the current training data, and execute portions of the graph as necessary
to return the data we ask for.
The first argument of the run() function is a list specifying the
model variables we want it to compute and return from the function.
The most important is 'optimizer' which triggers all calculations necessary
to perform one training step. We also include 'loss' and 'prediction'
            because we want these as outputs from the function so we can keep
track of the training process.
The second argument specifies the feed dictionary that contains
all the data we want to pass into the model at each training step.'''
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
        '''At the end of each epoch, we will calculate the error of predictions
on the full training and test data set. We will then store the epoch number,
along with the mini-batch, training, and test accuracies to the 'results' array
so we can visualize the training process later. How often we save the data to
this array is specified by the display_step variable created above'''
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
'''Once training is complete, we will save the trained model so that we can use it later'''
save_path = saver.save(session, "model_houses.ckpt")
print("Model saved in file: %s" % save_path)
"""
Explanation: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we:
Feed in a new set of training data. Remember that with SGD we only have to feed in a small set of data at a time. The size of each batch of training data is determined by the 'batch_size' hyper-parameter specified above.
Call the optimizer function by asking tensorflow to return the model's 'optimizer' variable. This starts a chain reaction in Tensorflow that executes all the computation necessary to train the model. The optimizer function itself will compute the gradients in the model and modify the weight and bias parameters in a way that minimizes the overall loss. Because it needs this loss to compute the gradients, it will also trigger the loss function, which will in turn trigger the model to compute predictions based on the input data. This sort of chain reaction is at the root of the 'lazy evaluation' model used by Tensorflow.
End of explanation
"""
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Minimum test loss:", np.min(df["test_acc"])
"""
Explanation: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps.
End of explanation
"""
|
keras-team/keras-io | examples/nlp/ipynb/bidirectional_lstm_imdb.ipynb | apache-2.0 | import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
max_features = 20000 # Only consider the top 20k words
maxlen = 200 # Only consider the first 200 words of each movie review
"""
Explanation: Bidirectional LSTM on IMDB
Author: fchollet<br>
Date created: 2020/05/03<br>
Last modified: 2020/05/03<br>
Description: Train a 2-layer bidirectional LSTM on the IMDB movie review sentiment classification dataset.
Setup
End of explanation
"""
# Input for variable-length sequences of integers
inputs = keras.Input(shape=(None,), dtype="int32")
# Embed each integer in a 128-dimensional vector
x = layers.Embedding(max_features, 128)(inputs)
# Add 2 bidirectional LSTMs
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(64))(x)
# Add a classifier
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.summary()
"""
Explanation: Build the model
End of explanation
"""
(x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(
num_words=max_features
)
print(len(x_train), "Training sequences")
print(len(x_val), "Validation sequences")
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen)
"""
Explanation: Load the IMDB movie review sentiment data
End of explanation
"""
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val))
"""
Explanation: Train and evaluate the model
You can use the trained model hosted on Hugging Face Hub and try the demo on Hugging Face Spaces.
End of explanation
"""
|
bliebeskind/Gene-Ages | Notebooks/nodeStats_plotting.ipynb | mit | import pandas as pd
from matplotlib import pyplot as plt
import matplotlib.lines as mlines
from LECA.plotting import histLinePlot
%matplotlib inline
nodestats = pd.read_csv("nodeStats_HUMAN.csv",index_col=0,na_values=[None])
nodestats.head()
"""
Explanation: Distributions of node-based statistics
This notebook was used to generate Figure 3. It shows that the systematic differences between algorithms, which we capture in the bimodality statistic, have a large effect on gene age estimation.
End of explanation
"""
nodestats.corr("spearman")
"""
Explanation: Correlation between node error and the bimodality statistic
End of explanation
"""
nodestats["NodeError"].hist(bins=50,color='grey')
plt.ylabel("Number of Genes")
plt.xlabel("Avg. Node Error Between Algorithms")
"""
Explanation: Distribution of the node error statistic
End of explanation
"""
nodestats["Bimodality"].hist(bins=50,color='grey')
plt.ylabel("Number of Genes")
plt.xlabel("Bimodality")
#plt.savefig("Polarization_distribution.svg")
"""
Explanation: Distribution of the bimodality statistic
Note the positive skew. The bimodality statistic is defined as the average node error between the "old" and "young" groups of the algorithms minus the average node error within those groups. So the positive skew means that this grouping is capturing a true clustering with respect to node error. Genes falling below zero here are bimodal with respect to some other grouping, but are clearly in the minority.
End of explanation
"""
# Those split between other groups will have bimodality score <0 (greater within group difference
# than between)
len(nodestats[nodestats["Bimodality"] < 0])/float(len(nodestats))
# Bimodal or neutral genes (score greater than or equal to 0)
len(nodestats[nodestats["Bimodality"] >= 0])/float(len(nodestats))
"""
Explanation: Percent of proteins that are split bimodally between the "young" and "old" algorithm groups. Compare with those split between other groupings
End of explanation
"""
%%capture
stats = histLinePlot.getLineScoreStats(nodestats,"Bimodality","NodeError")
stats.head()
fig,ax1 = plt.subplots()
nodestats["NodeError"].hist(bins=50,color='grey')
ax2 = ax1.twinx()
ax2.plot(stats.index,stats['mean'],'black',label="Avg Bimodality")
ax1.set_ylabel("Number of Genes")
ax1.set_xlabel("Avg. Node Error Between Algorithms")
ax2.set_ylabel("Average Bimodality")
plt.legend()
plt.savefig("nodeError-polarization_correlation.svg")
"""
Explanation: Get polarization statistics for each bin in node error histogram
I made a module to do the binning called histLinePlot. It makes a dataframe with the mean, standard deviation, and variance of the bimodality statistic in each bin of the node error histogram. The plots below visualize these statistics. The clear takeaway is that genes with more node error are more bimodal with respect to the "old" and "young" algorithms. There are therefore systematic differences between these algorithms that make determination of a true age very difficult for a substantial subset of genes.
End of explanation
"""
fig,ax1 = plt.subplots()
nodestats["NodeError"].hist(bins=50,color='grey')
ax2 = ax1.twinx()
ax2.plot(stats.index,stats['var'],'black',label="Var Bimodality")
ax1.set_ylabel("Number of Genes")
ax1.set_xlabel("Avg. Node Error Between Algorithms")
ax2.set_ylabel("Variance Bimodality")
plt.legend()
#plt.savefig("nodeError-polarization_correlation.svg")
fig,ax1 = plt.subplots()
nodestats["NodeError"].hist(bins=50,color='grey')
ax2 = ax1.twinx()
ax2.plot(stats.index,stats['stanDev.'],'black',label="stdev Bimodality")
ax1.set_ylabel("Number of Genes")
ax1.set_xlabel("Std. Dev. Node Error Between Algorithms")
ax2.set_ylabel("Std. Dev. Bimodality")
plt.legend()
#plt.savefig("nodeError-polarization_correlation.svg")
"""
Explanation: The scatter in the average bimodality of larger bins is due to small sample size and high variance
End of explanation
"""
|
newsapps/public-notebooks | Shootings and homicides within the Austin community area.ipynb | mit | import requests
from shapely.geometry import shape, Point
r = requests.get('https://data.cityofchicago.org/api/geospatial/cauq-8yn6?method=export&format=GeoJSON')
for feature in r.json()['features']:
if feature['properties']['community'] == 'AUSTIN':
austin = feature
poly = shape(austin['geometry'])
"""
Explanation: We're going to start by grabbing the geometry for the Austin community area.
End of explanation
"""
import os
def get_data(table):
r = requests.get('%stable/json/%s' % (os.environ['NEWSROOMDB_URL'], table))
return r.json()
shootings = get_data('shootings')
homicides = get_data('homicides')
"""
Explanation: Now let's get the shootings data.
End of explanation
"""
shootings_ca = []
for row in shootings:
if not row['Geocode Override']:
continue
points = row['Geocode Override'][1:-1].split(',')
if len(points) != 2:
continue
point = Point(float(points[1]), float(points[0]))
row['point'] = point
if poly.contains(point):
shootings_ca.append(row)
print 'Found %d shootings in this community area' % len(shootings_ca)
for f in shootings_ca:
print f['Date'], f['Time'], f['Age'], f['Sex'], f['Shooting Location']
"""
Explanation: Now let's iterate through the shootings, generate shapely points and check to see if they're in the geometry we care about.
End of explanation
"""
homicides_ca = []
years = {}
for row in homicides:
if not row['Geocode Override']:
continue
points = row['Geocode Override'][1:-1].split(',')
if len(points) != 2:
continue
point = Point(float(points[1]), float(points[0]))
row['point'] = point
if poly.contains(point):
homicides_ca.append(row)
print 'Found %d homicides in this community area' % len(homicides_ca)
for f in homicides_ca:
print f['Occ Date'], f['Occ Time'], f['Age'], f['Sex'], f['Address of Occurrence']
if not f['Occ Date']:
continue
dt = datetime.strptime(f['Occ Date'], '%Y-%m-%d')
if dt.year not in years:
years[dt.year] = 0
years[dt.year] += 1
print years
"""
Explanation: Let's do something similar with homicides. It's exactly the same, in fact, but a few field names are different.
End of explanation
"""
import pyproj
from datetime import datetime, timedelta
geod = pyproj.Geod(ellps='WGS84')
associated = []
for homicide in homicides_ca:
if not homicide['Occ Time']:
homicide['Occ Time'] = '00:01'
if not homicide['Occ Date']:
homicide['Occ Date'] = '2000-01-01'
homicide_dt = datetime.strptime('%s %s' % (homicide['Occ Date'], homicide['Occ Time']), '%Y-%m-%d %H:%M')
for shooting in shootings_ca:
if not shooting['Time']:
shooting['Time'] = '00:01'
if not shooting['Time']:
shooting['Time'] = '2000-01-01'
shooting_dt = datetime.strptime('%s %s' % (shooting['Date'], shooting['Time']), '%Y-%m-%d %H:%M')
diff = homicide_dt - shooting_dt
seconds = divmod(diff.days * 86400 + diff.seconds, 60)[0]
if abs(seconds) <= 600:
angle1, angle2, distance = geod.inv(
homicide['point'].x, homicide['point'].y, shooting['point'].x, shooting['point'].y)
if distance < 5:
associated.append((homicide, shooting))
break
print len(associated)
years = {}
for homicide in homicides:
if not homicide['Occ Date']:
continue
dt = datetime.strptime(homicide['Occ Date'], '%Y-%m-%d')
if dt.year not in years:
years[dt.year] = 0
years[dt.year] += 1
print years
from csv import DictWriter
from ftfy import fix_text, guess_bytes
for idx, row in enumerate(shootings_ca):
if 'point' in row.keys():
del row['point']
for key in row:
#print idx, key, row[key]
if type(row[key]) is str:
#print row[key]
row[key] = fix_text(row[key].replace('\xa0', '').decode('utf8'))
for idx, row in enumerate(homicides_ca):
if 'point' in row.keys():
del row['point']
for key in row:
#print idx, key, row[key]
if type(row[key]) is str:
#print row[key]
row[key] = row[key].decode('utf8')
with open('/Users/abrahamepton/Documents/austin_shootings.csv', 'w+') as fh:
writer = DictWriter(fh, sorted(shootings_ca[0].keys()))
writer.writeheader()
for row in shootings_ca:
try:
writer.writerow(row)
except:
print row
with open('/Users/abrahamepton/Documents/austin_homicides.csv', 'w+') as fh:
writer = DictWriter(fh, sorted(homicides_ca[0].keys()))
writer.writeheader()
for row in homicides_ca:
try:
writer.writerow(row)
except:
print row
"""
Explanation: Now let's see how many homicides we can associate with shootings. We'll say that if the locations are within five meters and the shooting occurred within 10 minutes of the homicide, they're the same incident.
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/prod/n04_day28_model_choosing_close_feat_all_syms_equal.ipynb | mit | # Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
import utils.misc as misc
AHEAD_DAYS = 28
"""
Explanation: In this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 28 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined using the MRE metric (and/or the R^2 metric). The step for the rolling validation is chosen as a compromise between having enough points (I consider about 1000 different target days may be good enough) and the time needed to compute the validation.
End of explanation
"""
datasets_params_list_df = pd.read_pickle('../../data/datasets_params_list_df.pkl')
print(datasets_params_list_df.shape)
datasets_params_list_df.head()
train_days_arr = 252 * np.array([1, 2, 3])
params_list_df = pd.DataFrame()
for train_days in train_days_arr:
temp_df = datasets_params_list_df[datasets_params_list_df['ahead_days'] == AHEAD_DAYS].copy()
temp_df['train_days'] = train_days
params_list_df = params_list_df.append(temp_df, ignore_index=True)
print(params_list_df.shape)
params_list_df.head()
"""
Explanation: Let's get the data.
End of explanation
"""
tic = time()
from predictor.dummy_mean_predictor import DummyPredictor
PREDICTOR_NAME = 'dummy'
# Global variables
eval_predictor = DummyPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
"""
Explanation: Let's find the best params set for some different models
- Dummy Predictor (mean)
End of explanation
"""
tic = time()
from predictor.linear_predictor import LinearPredictor
PREDICTOR_NAME = 'linear'
# Global variables
eval_predictor = LinearPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
"""
Explanation: - Linear Predictor
End of explanation
"""
tic = time()
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor = RandomForestPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
toc = time()
print('Elapsed time: {} seconds.'.format((toc-tic)))
"""
Explanation: - Random Forest model
End of explanation
"""
|
zzsza/Datascience_School | 17. 로지스틱 회귀 분석/01. 로지스틱 회귀 분석.ipynb | mit | xx = np.linspace(-10, 10, 1000)
plt.plot(xx, (1/(1+np.exp(-xx)))*2-1, label="logistic (scaled)")
plt.plot(xx, sp.special.erf(0.5*np.sqrt(np.pi)*xx), label="erf (scaled)")
plt.plot(xx, np.tanh(xx), label="tanh")
plt.ylim([-1.1, 1.1])
plt.legend(loc=2)
plt.show()
"""
Explanation: Logistic Regression Analysis
Although logistic regression carries "regression" in its name, it is actually a kind of classification method.
The logistic regression model assumes that the parameter $\theta$ of a Bernoulli random variable depends on the independent variable $x$.
$$ p(y \mid x, \theta) = \text{Ber} (y \mid \theta(x) )$$
Here the parameter $\theta$ is a real number between 0 and 1, and is a function of $x$ as follows:
$$
\theta = f(w^Tx)
$$
Sigmoid Functions
Unlike the dependent variable of an ordinary regression, the parameter $\theta$ can only take real values between 0 and 1, so a special class of functions $f$, called sigmoid functions, must be used.
A sigmoid function maps every real input to a bounded value inside a finite interval $(a,b)$ and has positive slope everywhere; the following functions are commonly used.
Logistic function
$$ \text{logistic}(z) = \dfrac{1}{1+\exp{(-z)}} $$
Error function
$$ \text{erf}(z) = \frac{2}{\sqrt\pi}\int_0^z e^{-t^2}\,dt $$
Hyperbolic tangent
$$ \tanh(z) = \frac{\sinh z}{\cosh z} = \frac {e^z - e^{-z}} {e^z + e^{-z}} $$
Arc-tangent
$$ \arctan(z) = \tan^{-1}(z) $$
End of explanation
"""
from sklearn.datasets import make_classification
X0, y = make_classification(n_features=1, n_redundant=0, n_informative=1, n_clusters_per_class=1, random_state=4)
X = sm.add_constant(X0)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression().fit(X0, y)
xx = np.linspace(-3, 3, 100)
sigm = 1.0/(1 + np.exp(-model.coef_[0][0]*xx - model.intercept_[0]))
plt.plot(xx, sigm)
plt.scatter(X0, y, marker='o', c=y, s=100)
plt.scatter(X0, model.predict(X0), marker='x', c=y, s=200, lw=2, alpha=0.5, cmap=mpl.cm.jet)
plt.xlim(-3, 3)
plt.show()
"""
Explanation: The Logistic Function
Among the various sigmoids, the logistic function is the one most commonly used because it admits the following interpretation.
In a Bernoulli trial, the ratio of the probability $\theta$ of observing 1 to the probability $1-\theta$ of observing 0 is called the odds ratio:
$$ \text{odds ratio} = \dfrac{\theta}{1-\theta} $$
The log transform of this odds ratio is the logit function.
$$ z = \text{logit}(\text{odds ratio}) = \log \left(\dfrac{\theta}{1-\theta}\right) $$
The logistic function is the inverse of this logit function.
$$ \text{logistic}(z) = \theta(z) = \dfrac{1}{1+\exp{(-z)}} $$
Parameter Estimation for the Logistic Model
Although the logistic model is a kind of nonlinear regression model, its parameter $w$ can be estimated by maximum likelihood estimation (MLE) as follows.
Here we assume the dependent variable $y$ is a Bernoulli random variable.
$$ p(y \mid x, \theta) = \text{Ber} (y \mid \theta(x) )$$
Given data samples $\{ x_i, y_i \}$, the log-likelihood $\text{LL}$ is:
$$
\begin{eqnarray}
\text{LL}
&=& \log \prod_{i=1}^N \theta_i(x_i)^{y_i} (1-\theta_i(x_i))^{1-y_i} \\
&=& \sum_{i=1}^N \left( y_i \log\theta_i(x_i) + (1-y_i)\log(1-\theta_i(x_i)) \right) \\
\end{eqnarray}
$$
If $\theta$ is expressed in logistic-function form,
$$
\log \left(\dfrac{\theta(x)}{1-\theta(x)}\right) = w^T x
$$
$$
\theta(x) = \dfrac{1}{1 + \exp{(-w^Tx)}}
$$
then substituting this into the log-likelihood gives:
$$
\begin{eqnarray}
\text{LL}
&=& \sum_{i=1}^N \left( y_i \log\theta_i(x_i) + (1-y_i)\log(1-\theta_i(x_i)) \right) \\
&=& \sum_{i=1}^N \left( y_i \log\left(\dfrac{1}{1 + \exp{(-w^Tx_i)}}\right) + (1-y_i)\log\left(\dfrac{\exp{(-w^Tx_i)}}{1 + \exp{(-w^Tx_i)}}\right) \right) \\
\end{eqnarray}
$$
To maximize this quantity we differentiate it with respect to $w$ using the chain rule.
First, differentiating $\theta$ with respect to $w$:
$$ \dfrac{\partial \theta}{\partial w}
= \dfrac{\partial}{\partial w} \dfrac{1}{1 + \exp{(-w^Tx)}} \\
= \dfrac{\exp{(-w^Tx)}}{(1 + \exp{(-w^Tx)})^2} x \\
= \theta(1-\theta) x $$
Applying the chain rule:
$$
\begin{eqnarray}
\dfrac{\partial \text{LL}}{\partial w}
&=& \sum_{i=1}^N \left( y_i \dfrac{1}{\theta_i(x_i;w)} - (1-y_i)\dfrac{1}{1-\theta_i(x_i;w)} \right) \dfrac{\partial \theta}{\partial w} \\
&=& \sum_{i=1}^N \big( y_i (1-\theta_i(x_i;w)) - (1-y_i)\theta_i(x_i;w) \big) x_i \\
&=& \sum_{i=1}^N \big( y_i - \theta_i(x_i;w) \big) x_i \\
\end{eqnarray}
$$
Because this is a nonlinear function of $w$, there is no closed-form expression for the parameter $w$ at which the gradient vanishes, unlike in the linear model; the optimal $w$ must instead be found by numerical optimization.
Numerical Optimization
With the simple steepest-gradient method, the optimization algorithm is as follows.
The gradient vector is
$$
g_k = \dfrac{d}{dw}(-LL)
$$
and moving opposite to this gradient with step size $\eta_k$ yields the optimal parameter iteratively:
$$
\begin{eqnarray}
w_{k+1}
&=& w_{k} - \eta_k g_k \\
&=& w_{k} + \eta_k \sum_{i=1}^N \big( y_i - \theta_i(x_i) \big) x_i \\
\end{eqnarray}
$$
Logistic Regression in the Scikit-Learn Package
The Scikit-Learn package provides the logistic regression model LogisticRegression.
End of explanation
"""
logit_mod = sm.Logit(y, X)
logit_res = logit_mod.fit(disp=0)
print(logit_res.summary())
xx = np.linspace(-3, 3, 100)
sigmoid = logit_res.predict(sm.add_constant(xx))
plt.plot(xx, sigmoid, lw=5, alpha=0.5)
plt.scatter(X0, y, marker='o', c=y, s=100)
plt.scatter(X0, logit_res.predict(X), marker='x', c=y, s=200, lw=2, alpha=0.5, cmap=mpl.cm.jet)
plt.xlim(-3, 3)
plt.show()
"""
Explanation: Logistic Regression in the statsmodels Package
The statsmodels package provides the logistic regression model Logit. It is used in the same way as OLS. Unlike the Scikit-Learn version, the Logit class outputs the raw values before they are thresholded into classes.
End of explanation
"""
df = pd.read_table("~/data/sheather/MichelinFood.txt")
df
df.plot(kind="scatter", x="Food", y="proportion", s=100)
plt.show()
X = sm.add_constant(df.Food)
y = df.proportion
model = sm.Logit(y, X)
result = model.fit()
print(result.summary())
df.plot(kind="scatter", x="Food", y="proportion", s=50, alpha=0.5)
xx = np.linspace(10, 35, 100)
plt.plot(xx, result.predict(sm.add_constant(xx)), "r", lw=4)
plt.xlim(10, 35)
plt.show()
"""
Explanation: Example 1: Comparing the Michelin and Zagat Guides
The following data are excerpted from two guidebooks covering New York City restaurants.
Food: customer rating score from the Zagat Survey 2006
InMichelin: among restaurants with a given rating score, the number listed in the 2006 Michelin Guide New York City
NotInMichelin: among restaurants with a given rating score, the number not listed in the 2006 Michelin Guide New York City
mi: the number of restaurants with a given rating score
proportion: among restaurants with a given rating score, the proportion listed in the 2006 Michelin Guide New York City
End of explanation
"""
df = pd.read_csv("~/data/sheather/MichelinNY.csv")
df.tail()
sns.stripplot(x="Food", y="InMichelin", data=df, jitter=True, orient='h', order=[1, 0])
plt.grid(True)
plt.show()
X = sm.add_constant(df.Food)
y = df.InMichelin
model = sm.Logit(y, X)
result = model.fit()
print(result.summary())
xx = np.linspace(10, 35, 100)
pred = result.predict(sm.add_constant(xx))
decision_value = xx[np.argmax(pred > 0.5)]
print(decision_value)
plt.plot(xx, pred, "r", lw=4)
plt.axvline(decision_value)
plt.xlim(10, 35)
plt.show()
"""
Explanation: Example 2: Predicting Michelin Guide Inclusion
The following data show, for individual New York City restaurants, customer rating scores together with whether each restaurant appears in the Michelin Guide.
InMichelin: whether the restaurant is listed in the Michelin Guide
Restaurant Name: name of the restaurant
Food: customer rating of the food (1-30)
Decor: customer rating of the decor (1-30)
Service: customer rating of the service (1-30)
Price: dinner price (US$)
End of explanation
"""
print(sm.datasets.fair.SOURCE)
print(sm.datasets.fair.NOTE)
df = sm.datasets.fair.load_pandas().data
df.head()
sns.factorplot(x="affairs", y="children", row="yrs_married", data=df,
orient="h", size=2, aspect=5, kind="box")
plt.show()
df['affair'] = (df['affairs'] > 0).astype(float)
model = smf.logit("affair ~ rate_marriage + religious + yrs_married + age + educ + children", df).fit()
print(model.summary())
"""
Explanation: Example 3: Fair's Affair Dataset
End of explanation
"""
|
neuromusic/neuronexus-probe-data | denormalizing.ipynb | bsd-3-clause | for col in probe_spec.columns:
if col.endswith('ID'):
        print(col)
"""
Explanation: Let's see which column names end in 'ID'. Those are probably primary keys and foreign keys.
End of explanation
"""
probe_spec.set_index('DesignID',inplace=True)
probe_spec.head()
design_type = pd.read_csv('NiPOD-DesignType.csv')
for col in design_type.columns:
if col.endswith('ID'):
        print(col)
design_type.head()
probe_spec = probe_spec.merge(design_type, on='DesignTypeID')
manufacture = pd.read_csv('NiPOD-Manufacture.csv')
for col in manufacture.columns:
if col.endswith('ID'):
        print(col)
manufacture.head()
probe_spec = probe_spec.merge(manufacture, on='ManufactureID')
package = pd.read_csv('NiPOD-ProbePackage.csv')
for col in package.columns:
if col.endswith('ID'):
        print(col)
package.head()
probe_spec = probe_spec.merge(package, on='PackageID')
probe_type = pd.read_csv('NiPOD-ProbeType.csv')
for col in probe_type.columns:
if col.endswith('ID'):
        print(col)
probe_type.head()
probe_spec = probe_spec.merge(probe_type, on='ProbeTypeID')
probe_spec.head()
keep = ['DesignName',
'FirstChannelYSpacing',
'NumChannel',
'NumShank',
'NumSitePerShank',
'OtherParameters',
'PackageID',
'ShankHeight',
'ShankSpace',
'ShankStartingXLocation',
'ShankStartingYLocation',
'ShankWidth',
'SiteArea',
'TetrodeOffsetLeft',
'TetrodeOffsetRight',
'TetrodeOffsetUp',
'TrueShankLength',
'TrueSiteSpacing',
'DesignType',
'PackageName',
'ProbeType']
probe_spec = probe_spec[keep]
probe_spec.head()
probe_spec.to_csv('NiPOD-ProbeSpec-denormalized.csv',
encoding='utf-8',
index=False)
"""
Explanation: First, let's set the index to what I think is the primary key
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/structured/labs/4b_keras_dnn_babyweight.ipynb | apache-2.0 | import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
"""
Explanation: LAB 4b: Create Keras DNN model.
Learning Objectives
Set CSV Columns, label column, and column defaults
Make dataset of features and label from CSV files
Create input layers for raw features
Create feature columns for inputs
Create DNN dense hidden layers and output layer
Create custom evaluation metric
Build DNN model tying all of the pieces together
Train and evaluate
Introduction
In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Load necessary libraries
End of explanation
"""
%%bash
ls *.csv
%%bash
head -5 *.csv
"""
Explanation: Verify CSV files exist
In the seventh lab of this series 4a_sample_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
End of explanation
"""
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = [""]
# TODO: Add string name for label column
LABEL_COLUMN = ""
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = []
"""
Explanation: Create Keras model
Lab Task #1: Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* CSV_COLUMNS will be the header names of our columns. Make sure they are in the same order as in the CSV files
* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
End of explanation
"""
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset()
# TODO: Map dataset to features and label
dataset = dataset.map() # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # Prefetch so input preparation overlaps training; tf.data.experimental.AUTOTUNE can pick the buffer size automatically
dataset = dataset.prefetch(buffer_size=1)
return dataset
"""
Explanation: Lab Task #2: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourself from trying to recreate the wheel and can use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
End of explanation
"""
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
# TODO: Create dictionary of tf.keras.layers.Input for each raw feature
inputs = {}
return inputs
"""
Explanation: Lab Task #3: Create input layers for raw features.
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
End of explanation
"""
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
# TODO: Create feature columns for numeric features
feature_columns = {}
# TODO: Add feature columns for categorical features
return feature_columns
"""
Explanation: Lab Task #4: Create feature columns for inputs.
Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
End of explanation
"""
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
# TODO: Create two hidden layers of [64, 32] just in like the BQML DNN
# TODO: Create final output layer
return output
"""
Explanation: Lab Task #5: Create DNN dense hidden layers and output layer.
So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right.
End of explanation
"""
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
# TODO: Calculate RMSE from true and predicted labels
pass
"""
Explanation: Lab Task #6: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
End of explanation
"""
# Build a simple Keras DNN using its Functional API
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
"""
Explanation: Lab Task #7: Build DNN model tying all of the pieces together.
Excellent! We've assembled all of the pieces; now we just need to tie them together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc., so we could have used Keras' Sequential Model API, but just for fun we're going to use its Functional Model API. Here we will build the model using tf.keras.models.Model, giving our inputs and outputs, and then compile our model with an optimizer, a loss function, and evaluation metrics.
End of explanation
"""
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
"""
Explanation: We can visualize the DNN using the Keras plot_model utility.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset()
# TODO: Load evaluation dataset
evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit()
"""
Explanation: Run and evaluate model
Lab Task #8: Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
End of explanation
"""
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
"""
Explanation: Visualize loss curve
End of explanation
"""
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
"""
Explanation: Save the model
End of explanation
"""
|
vascotenner/holoviews | doc/Tutorials/Composing_Data.ipynb | bsd-3-clause | import numpy as np
import holoviews as hv
hv.notebook_extension()
np.random.seed(10)
def sine_curve(phase, freq, amp, power, samples=102):
xvals = [0.1* i for i in range(samples)]
return [(x, amp*np.sin(phase+freq*x)**power) for x in xvals]
phases = [0, np.pi/2, np.pi, 3*np.pi/2]
powers = [1,2,3]
amplitudes = [0.5,0.75, 1.0]
frequencies = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
gridspace = hv.GridSpace(kdims=['Amplitude', 'Power'], group='Parameters', label='Sines')
for power in powers:
for amplitude in amplitudes:
holomap = hv.HoloMap(kdims=['Frequency'])
for frequency in frequencies:
sines = {phase : hv.Curve(sine_curve(phase, frequency, amplitude, power))
for phase in phases}
ndoverlay = hv.NdOverlay(sines , kdims=['Phase']).relabel(group='Phases',
label='Sines', depth=1)
overlay = ndoverlay * hv.Points([(i,0) for i in range(0,10)], group='Markers', label='Dots')
holomap[frequency] = overlay
gridspace[amplitude, power] = holomap
penguins = hv.RGB.load_image('../assets/penguins.png').relabel(group="Family", label="Penguin")
layout = gridspace + penguins
"""
Explanation: The Containers tutorial shows examples of each of the container types in HoloViews, and it is useful to look at the description of each type there, as you work through this tutorial.
This tutorial shows you how to combine the various container types, in order to build data structures that can contain all of the data that you want to visualize or analyze, in an extremely flexible way. For instance, you may have a large set of measurements of different types of data (numerical, image, textual notations, etc.) from different experiments done on different days, with various different parameter values associated with each one. HoloViews can store all of this data together, which will allow you to select just the right bit of data "on the fly" for any particular analysis or visualization, by indexing, slicing, selecting, and sampling in this data structure.
To illustrate the full functionality provided, we will create an example of the maximally nested object structure currently possible with HoloViews:
End of explanation
"""
layout
"""
Explanation: This code produces what looks like a relatively simple animation of two side-by-side figures, but is actually a deeply nested data structure:
End of explanation
"""
print(repr(layout))
"""
Explanation: The structure of this object can be seen in the repr():
End of explanation
"""
print(repr(layout))
"""
Explanation: Nesting hierarchy <a id='NestingHierarchy'></a>
To help us understand this structure, here is a schematic for us to refer to as we unpack this object, level by level:
<center><img src="http://assets.holoviews.org/nesting-diagram.png"></center>
Everything that is displayable in HoloViews has this same basic structure, although any of the levels can be omitted in simpler cases, and many different Element types (not containers) can be substituted for any other.
Since HoloViews 1.3.0, you are allowed to build data-structures that violate this hierarchy (e.g., you can put Layout objects into HoloMaps) but the resulting object cannot be displayed. Instead, you will be prompted with a message to call the collate method. Using the collate method will allow you to generate the appropriate object that correctly obeys the hierarchy shown above, so that it can be displayed.
As shown in the diagram, there are three different types of container involved:
Basic Element: elementary HoloViews object containing raw data in an external format like Numpy or pandas.
Homogeneous container (UniformNdMapping): collections of Elements or other HoloViews components that are all the same type. These are indexed using array-style key access with values sorted along some dimension(s), e.g. [0.50] or ["a",7.6].
Heterogeneous container (AttrTree): collections of data of different types, e.g. different types of Element. These are accessed by categories using attributes, e.g. .Parameters.Sines, which does not assume any ordering of a dimension.
We will now go through each of the containers of these different types, at each level.
Layout Level
Above, we have already viewed the highest level of our data structure as a Layout. Here is the repr of entire Layout object, which reflects all the levels shown in the diagram:
End of explanation
"""
layout.Parameters.Sines
"""
Explanation: In the examples below, we will unpack this data structure using attribute access (explained in the Introductory tutorial) as well as indexing and slicing (explained in the Sampling Data tutorial).
GridSpace Level
Elements within a Layout, such as the GridSpace in this example, are reached via attribute access:
End of explanation
"""
layout.Parameters.Sines[0.5, 1]
"""
Explanation: HoloMap Level
This GridSpace consists of nine HoloMaps arranged in a two-dimensional space. Let's now select one of these HoloMap objects, by indexing to retrieve the one at [Amplitude,Power] [0.5,1.0], i.e. the lowest amplitude and power:
End of explanation
"""
layout.Parameters.Sines[0.5, 1][1.0]
"""
Explanation: As shown in the schematic above, a HoloMap contains many elements with associated keys. In this example, these keys are indexed with a dimension Frequency, which is why the Frequency varies when you play the animation here.
Overlay Level
The repr() showed us that the HoloMap is composed of Overlay objects, six in this case (giving six frames to the animation above). Let us access one of these elements, i.e. one frame of the animation above, by indexing to retrieve an Overlay associated with the key with a Frequency of 1.0:
End of explanation
"""
(layout.Parameters.Sines[0.5, 1][1].Phases.Sines +
layout.Parameters.Sines[0.5, 1][1].Markers.Dots)
"""
Explanation: NdOverlay Level
As the repr() shows, the Overlay contains a Points object and an NdOverlay object. We can access either one of these using the attribute access supported by Overlay:
End of explanation
"""
l=layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0]
l
repr(l)
"""
Explanation: Curve Level
The NdOverlay is so named because it is an overlay of items indexed by dimensions, unlike the regular attribute-access overlay types. In this case it is indexed by Phase, with four values. If we index to select one of these values, we will get an individual Curve, e.g. the one with zero phase:
End of explanation
"""
type(layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0].data)
"""
Explanation: Data Level
At this point, we have reached the end of the HoloViews objects; below this object is only the raw data as a Numpy array:
End of explanation
"""
layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0][5.2]
"""
Explanation: Actually, HoloViews will let you go even further down, accessing data inside the Numpy array using the continuous (floating-point) coordinate systems declared in HoloViews. E.g. here we can ask for a single datapoint, such as the value at x=5.2:
End of explanation
"""
layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0][5.23], layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0][5.27]
"""
Explanation: Indexing into 1D Elements like Curve and higher-dimensional but regularly gridded Elements like Image, Surface, and HeatMap will return the nearest defined value (i.e., the results "snap" to the nearest data item):
End of explanation
"""
o1 = layout.Parameters.Sines.select(Amplitude=0.5, Power=1.0).select(Frequency=1.0)
o2 = layout.Parameters.Sines.select(Amplitude=0.5, Power=1.0, Frequency=1.0)
o1 + o2
"""
Explanation: For other Element types, such as Points, snapping is not supported and thus indexing down into the .data array will be less useful, because it will only succeed for a perfect floating-point match on the key dimensions. In those cases, you can still use all of the access methods provided by the numpy array itself, via .data, e.g. .data[52], but note that such native operations force you to use the native indexing scheme of the array, i.e. integer access starting at zero, not the more convenient and semantically meaningful continuous coordinate systems we provide through HoloViews.
Indexing using .select
The curve displayed immediately above shows the final, deepest Element access possible in HoloViews for this object:
python
layout.Parameters.Sines[0.5, 1][1].Phases.Sines[0.0]
This is the curve with an amplitude of 0.5, raised to a power of 1.0 with frequency of 1.0 and 0 phase. These are all the numbers, in order, used in the access shown above.
The .select method is a more explicit way to use key access, with both of these equivalent to each other:
End of explanation
"""
layout.Parameters.Sines.select(Amplitude=0.5,Power=1.0,
Frequency=1.0).Phases.Sines.select(Phase=0.0)
"""
Explanation: The second form demonstrates HoloViews' deep indexing feature, which allows indexes to cross nested container boundaries. The above is as far as we can index before reaching a heterogeneous type (the Overlay), where we need to use attribute access. Here is the more explicit method of indexing down to a curve, using .select to specify dimensions by name instead of bracket-based indexing by position:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/sandbox-2/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-2', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
AllenDowney/ModSimPy | soln/salmon_soln.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Case Study: Predicting salmon returns
This case study is based on a ModSim student project by Josh Deng and Erika Lu.
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
pops = [2749, 2845, 4247, 1843, 2562, 1774, 1201, 1284, 1287, 2339, 1177, 962, 1176, 2149, 1404, 969, 1237, 1615, 1201];
"""
Explanation: Can we predict salmon populations?
Each year the U.S. Atlantic Salmon Assessment Committee reports estimates of salmon populations in oceans and rivers in the northeastern United States. The reports are useful for monitoring changes in these populations, but they generally do not include predictions.
The goal of this case study is to model year-to-year changes in population, evaluate how predictable these changes are, and estimate the probability that a particular population will increase or decrease in the next 10 years.
As an example, I'll use data from page 18 of the 2017 report, which provides population estimates for the Narraguagus and Sheepscot Rivers in Maine.
At the end of this notebook, I make some suggestions for extracting data from a PDF document automatically, but for this example I will keep it simple and type it in.
Here are the population estimates for the Narraguagus River:
End of explanation
"""
years = range(1997, 2016)
"""
Explanation: To get this data into a Pandas Series, I'll also make a range of years to use as an index.
End of explanation
"""
pop_series = TimeSeries(pops, index=years, dtype=np.float64)
"""
Explanation: And here's the series.
End of explanation
"""
def plot_population(series):
plot(series, label='Estimated population')
decorate(xlabel='Year',
ylabel='Population estimate',
title='Narraguagus River',
ylim=[0, 5000])
plot_population(pop_series)
"""
Explanation: Here's what it looks like:
End of explanation
"""
abs_diffs = np.ediff1d(pop_series, to_end=0)
"""
Explanation: Modeling changes
To see how the population changes from year to year, I'll use ediff1d to compute the absolute difference between each year and the next.
End of explanation
"""
rel_diffs = abs_diffs / pop_series
"""
Explanation: We can compute relative differences by dividing by the original series elementwise.
End of explanation
"""
rel_diffs = compute_rel_diff(pop_series)
"""
Explanation: Or we can use the modsim function compute_rel_diff:
End of explanation
"""
rates = rel_diffs.drop(2015)
"""
Explanation: These relative differences are the observed annual net growth rates. The final element is the padding 0 that ediff1d appended (there is no observed rate for 2015), so let's drop it and save the rest.
End of explanation
"""
np.random.choice(rates)
"""
Explanation: A simple way to model this system is to draw a random value from this series of observed rates each year. We can use the NumPy function choice to make a random choice from a series.
End of explanation
"""
t_0 = 2015
p_0 = pop_series[t_0]
"""
Explanation: Simulation
Now we can simulate the system by drawing random growth rates from the series of observed rates.
I'll start the simulation in 2015.
End of explanation
"""
system = System(t_0=t_0,
p_0=p_0,
duration=10,
rates=rates)
"""
Explanation: Create a System object with variables t_0, p_0, rates, and duration=10 years.
The series of observed rates is one big parameter of the model.
End of explanation
"""
# Solution
def update_func1(pop, t, system):
"""Simulate one time step.
pop: population
t: time step
system: System object
"""
rate = np.random.choice(system.rates)
pop += rate * pop
return pop
"""
Explanation: Write an update function that takes as parameters pop, t, and system.
It should choose a random growth rate, compute the change in population, and return the new population.
End of explanation
"""
update_func1(p_0, t_0, system)
"""
Explanation: Test your update function by running it a few times.
End of explanation
"""
def run_simulation(system, update_func):
"""Simulate a queueing system.
system: System object
update_func: function object
"""
t_0 = system.t_0
t_end = t_0 + system.duration
results = TimeSeries()
results[t_0] = system.p_0
for t in linrange(t_0, t_end):
results[t+1] = update_func(results[t], t, system)
return results
"""
Explanation: Here's a version of run_simulation that stores the results in a TimeSeries and returns it.
End of explanation
"""
# Solution
results = run_simulation(system, update_func1)
plot(results, label='Simulation')
plot_population(pop_series)
"""
Explanation: Use run_simulation to generate a prediction for the next 10 years.
Then plot your prediction along with the original data. Your prediction should pick up where the data leave off.
End of explanation
"""
def plot_many_simulations(system, update_func, iters):
"""Runs simulations and plots the results.
system: System object
update_func: function object
iters: number of simulations to run
"""
for i in range(iters):
results = run_simulation(system, update_func)
plot(results, color='gray', linewidth=5, alpha=0.1)
"""
Explanation: To get a sense of how much the results vary, we can run the model several times and plot all of the results.
End of explanation
"""
# Solution
plot_many_simulations(system, update_func1, 30)
plot_population(pop_series)
"""
Explanation: The plot option alpha=0.1 makes the lines semi-transparent, so they are darker where they overlap.
Run plot_many_simulations with your update function and iters=30. Also plot the original data.
End of explanation
"""
def run_many_simulations(system, update_func, iters):
"""Runs simulations and report final populations.
system: System object
update_func: function object
iters: number of simulations to run
returns: series of final populations
"""
# FILL THIS IN
# Solution
def run_many_simulations(system, update_func, iters):
"""Runs simulations and report final populations.
system: System object
update_func: function object
iters: number of simulations to run
returns: series of final populations
"""
last_pops = ModSimSeries()
for i in range(iters):
results = run_simulation(system, update_func)
last_pops[i] = get_last_value(results)
return last_pops
"""
Explanation: The results are highly variable: according to this model, the population might continue to decline over the next 10 years, or it might recover and grow rapidly!
It's hard to say how seriously we should take this model. There are many factors that influence salmon populations that are not included in the model. For example, if the population starts to grow quickly, it might be limited by resource constraints, predators, or fishing. If the population starts to fall, humans might restrict fishing and stock the river with farmed fish.
So these results should probably not be considered useful predictions. However, there might be something useful we can do, which is to estimate the probability that the population will increase or decrease in the next 10 years.
Distribution of net changes
To describe the distribution of net changes, write a function called run_many_simulations that runs many simulations, saves the final populations in a ModSimSeries, and returns the ModSimSeries.
End of explanation
"""
run_many_simulations(system, update_func1, 5)
"""
Explanation: Test your function by running it with iters=5.
End of explanation
"""
last_pops = run_many_simulations(system, update_func1, 1000)
last_pops.describe()
"""
Explanation: Now we can run 1000 simulations and describe the distribution of the results.
End of explanation
"""
net_changes = last_pops - p_0
net_changes.describe()
"""
Explanation: If we subtract off the initial population, we get the distribution of changes.
End of explanation
"""
np.sum(net_changes > 0)
"""
Explanation: The median is negative, which indicates that the population decreases more often than it increases.
We can be more specific by counting the number of runs where net_changes is positive.
End of explanation
"""
np.mean(net_changes > 0)
"""
Explanation: Or we can use mean to compute the fraction of runs where net_changes is positive.
End of explanation
"""
np.mean(net_changes < 0)
"""
Explanation: And here's the fraction where it's negative.
End of explanation
"""
weights = linspace(0, 1, len(rates))
weights /= sum(weights)
plot(weights)
decorate(xlabel='Index into the rates array',
ylabel='Weight')
"""
Explanation: So, based on observed past changes, this model predicts that the population is more likely to decrease than increase over the next 10 years, by about 2:1.
A refined model
There are a few ways we could improve the model.
It looks like there might be cyclic behavior in the past data, with a period of 4-5 years. We could extend the model to include this effect.
Older data might not be as relevant for prediction as newer data, so we could give more weight to newer data.
The second option is easier to implement, so let's try it.
I'll use linspace to create an array of "weights" for the observed rates. The probability that I choose each rate will be proportional to these weights.
The weights have to add up to 1, so I divide through by the total.
End of explanation
"""
system.weights = weights
"""
Explanation: I'll add the weights to the System object, since they are parameters of the model.
End of explanation
"""
np.random.choice(system.rates, p=system.weights)
"""
Explanation: We can pass these weights as a parameter to np.random.choice (see the documentation).
End of explanation
"""
# Solution
def update_func2(pop, t, system):
"""Simulate one time step.
pop: population
t: time step
system: System object
"""
rate = np.random.choice(system.rates, p=system.weights)
pop += rate * pop
return pop
"""
Explanation: Write an update function that takes the weights into account.
End of explanation
"""
# Solution
plot_many_simulations(system, update_func2, 30)
plot_population(pop_series)
"""
Explanation: Use plot_many_simulations to plot the results.
End of explanation
"""
# Solution
last_pops = run_many_simulations(system, update_func2, 1000)
net_changes = last_pops - p_0
net_changes.describe()
"""
Explanation: Use run_many_simulations to collect the results and describe to summarize the distribution of net changes.
End of explanation
"""
# Solution
np.mean(net_changes < 0)
"""
Explanation: Does the refined model have much effect on the probability of population decline?
End of explanation
"""
from tabula import read_pdf
df = read_pdf('data/USASAC2018-Report-30-2017-Activities-Page11.pdf')
"""
Explanation: Extracting data from a PDF document
The following section uses tabula-py to get data from a PDF document.
If you don't already have it installed, and you are using Anaconda, you can install it by running the following command in a Terminal or Git Bash:
conda install -c conda-forge tabula-py
End of explanation
"""
|
MRod5/pyturb | notebooks/Gas Mixtures.ipynb | mit | from pyturb.gas_models import GasMixture
gas_mix = GasMixture(gas_model='Perfect')
gas_mix.add_gas('O2', mass=0.5)
gas_mix.add_gas('H2', mass=0.5)
"""
Explanation: Gas Mixtures: Perfect and Semiperfect Models
This Notebook is an example of how to declare and use Gas Mixtures with pyTurb. Gas Mixtures in pyTurb are treated as a combination of different pyTurb gas objects:
- PerfectIdealGas: Ideal Equation of State ($pv=R_gT$) and constant $c_p$, $c_v$, $\gamma_g$
- SemiperfectIdealGas: Ideal Equation of State and $c_p\left(T\right)$, $c_v\left(T\right)$, $\gamma_g\left(T\right)$ as a function of temperature
The Gas Mixture class and the rest of the gas models can be found at the following folder:
pyturb
gas_models
thermo_prop
PerfectIdealGas
SemiperfectIdealGas
GasMixture
python
from pyturb.gas_models import GasMixture
from pyturb.gas_models import PerfectIdealGas
from pyturb.gas_models import SemiperfectIdealGas
from pyturb.gas_models import GasMixture
When a GasMixture object is created, the gas model must be selected: the mixture can be treated as a Perfect Gas or a Semiperfect Gas. Note that both options are ideal gases (the ideal equation of state $pv=R_gT$ applies). Thus:
If the gas is Perfect: $c_v, c_p, \gamma_g \equiv constant$
If the gas is Semiperfect: $c_v(T), c_p(T), \gamma_g(T) \equiv f(T)$
To choose one of the gas models simply specify it when creating the Gas Mixture object:
python
gas_mix_perfect = GasMixture(gas_model='Perfect')
gas_mix_semiperfect = GasMixture(gas_model='Semiperfect')
Note that 'gas_model' options are not case sensitive e.g. Semi-perfect, semiperfect or Semiperfect yield the same result.
A gas mixture can be defined adding the gas species that conform the mixture. For that purpose, the method add_gas can be used:
python
gas_mix = GasMixture()
gas_mix.add_gas(species, moles=quantity)
gas_mix.add_gas(species, mass=quantity)
Note that the gas species (pure substance) specified in species must be available as a PerfectIdealGas or SemiperfectIdealGas. The gas availability can be checked using the is_available function in ThermoProperties.
When using add_gas, the quantity of the gas to be added must be specified. This can be done by introducing the moles or the mass of the gas. For example, if a mixture of $1.5mol$ of $Ar$ and $3mol$ of $He$ is intended:
python
gas_mix = GasMixture(gas_model='Perfect')
gas_mix.add_gas('Ar', moles=1.5)
gas_mix.add_gas('He', moles=3)
Whilst a mix of $500g$ of $O_2$ and $500g$ of $H_2$ would be:
python
gas_mix = GasMixture(gas_model='Perfect')
gas_mix.add_gas('O2', mass=0.5)
gas_mix.add_gas('H2', mass=0.5)
Finally, the gas mixture provides the same outputs as a PerfectIdealGas or SemiperfectIdealGas, plus the molar and mass fractions:
- Gas properties: Ru, Rg, Mg, cp, cp_molar, cv, cv_molar, gamma
- Gas enthalpies, moles and mass: h0, h0_molar, mg, Ng
- Mixture condition: Molar fraction, mass fraction
Gas Mixture example:
Let's create a mixture of Perfect Gases, with $500g$ of $O_2$ and $500g$ of $H_2$
End of explanation
"""
gas_mix.mixture_gases
"""
Explanation: To inspect the gas mixture conditions, we can use the Pandas DataFrame contained in mixture_gases:
End of explanation
"""
gas_mix2 = GasMixture(gas_model='Perfect')
gas_mix2.add_gas('O2', moles=0.5)
gas_mix2.add_gas('H2', moles=0.5)
gas_mix2.mixture_gases
"""
Explanation: Note that the mixture_gases dataframe contains the information of the mixture: amount of moles, amount of mass, molar and mass fractions, and the objects containing the pure-substance information.
It is also possible to create a gas mixture by specifying moles:
End of explanation
"""
gas_mix3 = GasMixture(gas_model='Perfect')
gas_mix3.add_gas('O2', mass=0.5)
gas_mix3.add_gas('H2', moles=0.121227)
gas_mix3.mixture_gases
"""
Explanation: One can also define the mixture by specifying some pure substances in moles and others in mass:
End of explanation
"""
from pyturb.gas_models import PerfectIdealGas
air_perfgas = PerfectIdealGas('Air')
print(air_perfgas.thermo_prop)
"""
Explanation: Note that gas_mix and gas_mix3 are equivalent.
Perfect Air as a mixture
In this example we will create a gas mixture following the air composition (as a perfect mix of oxygen, nitrogen, argon and carbon dioxide) and compare it to the 'Air' substance from PerfectIdealGas.
Note that Air is an available gas in the NASA Glenn coefficients and is therefore available as a PerfectIdealGas and as a SemiperfectIdealGas.
Thus there is no need to declare Air as a gas mixture from pyTurb. However, for the sake of clarity, we will compare both mixtures.
From the PerfectIdealGas class:
End of explanation
"""
pyturb_mix = GasMixture('Perfect')
pyturb_mix.add_gas('O2', 0.209476)
pyturb_mix.add_gas('N2', 0.78084)
pyturb_mix.add_gas('Ar', 0.009365)
pyturb_mix.add_gas('CO2', 0.000319)
"""
Explanation: And now, we define the air mixture with molar quantities (per unit mole):
- Diatomic Oxygen: $O_2$ 20.9476\%
- Diatomic nitrogen: $N_2$ 78.0840\%
- Argon: $Ar$ 0.9365\%
- Carbon dioxide: $CO_2$ 0.0319\%
End of explanation
"""
pyturb_mix.mixture_gases
"""
Explanation: Therefore, the mixture is composed of:
End of explanation
"""
print('pyTurb air mixture: Rair={0:6.1f}J/kg/K; cp={1:6.1f} J/kg/K; cv={2:6.1f} J/kg/K; gamma={3:4.1f}'.format(pyturb_mix.Rg, pyturb_mix.cp(), pyturb_mix.cv(), pyturb_mix.gamma()))
print('Perfect air: Rair={0:6.1f}J/kg/K; cp={1:6.1f} J/kg/K; cv={2:6.1f} J/kg/K; gamma={3:4.1f}'.format(air_perfgas.Rg, air_perfgas.cp(), air_perfgas.cv(), air_perfgas.gamma()))
"""
Explanation: Where the gas constant, heat capacity at constant pressure, heat capacity at constant volume and the heat capacity ratio are:
End of explanation
"""
# Objective temperature:
T = 1500 #K
# Gas mixture:
pyturb_mix_sp = GasMixture('Semiperfect')
pyturb_mix_sp.add_gas('O2', 0.209476)
pyturb_mix_sp.add_gas('N2', 0.78084)
pyturb_mix_sp.add_gas('Ar', 0.009365)
pyturb_mix_sp.add_gas('CO2', 0.000319)
print('pyTurb air mixture: Rair={0:6.1f}J/kg/K; cp={1:6.1f} J/kg/K; cv={2:6.1f} J/kg/K; gamma={3:4.1f}'.format(pyturb_mix_sp.Rg, pyturb_mix_sp.cp(T), pyturb_mix_sp.cv(T), pyturb_mix_sp.gamma(T)))
"""
Explanation: Semiperfect Gas Mixture
Following the last example, a Semiperfect model can be used by just changing the gas_model option:
End of explanation
"""
|
mjlong/openmc | docs/source/pythonapi/examples/mgxs-part-i.ipynb | mit | from IPython.display import Image
Image(filename='images/mgxs.png', width=350)
"""
Explanation: This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the following features:
General equations for scalar-flux averaged multi-group cross sections
Creation of multi-group cross sections for an infinite homogeneous medium
Use of tally arithmetic to manipulate multi-group cross sections
Note: This Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data. We recommend using Pandas v0.15.0 or later, since OpenMC's Python API leverages the multi-indexing feature included in the most recent releases of Pandas.
Introduction to Multi-Group Cross Sections (MGXS)
Many Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use multi-group cross sections defined over discretized energy bins or energy groups. An example of U-235's continuous-energy fission cross section along with a 16-group cross section computed for a light water reactor spectrum is displayed below.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.mgxs as mgxs
%matplotlib inline
"""
Explanation: A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications.
Before proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.
Introductory Notation
The continuous real-valued microscopic cross section may be denoted $\sigma_{n,x}(\mathbf{r}, E)$ for position vector $\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\Phi(\mathbf{r},E)$ for position $\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.
Spatial and Energy Discretization
The energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies, from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \in \{1, 2, ..., G\}$. The energy group indices are defined such that the smaller the group index, the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation.
Multi-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \in \{1, 2, ..., K\}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization.
General Scalar-Flux Weighted MGXS
The multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section $\sigma_{n,x,k,g}$ as follows:
$$\sigma_{n,x,k,g} = \frac{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,x}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$
This scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for most multi-group cross sections, including total, absorption, and fission reaction types. These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell or universe) define the bounds of integration for both numerator and denominator.
Multi-Group Scattering Matrices
The general multi-group cross section $\sigma_{n,x,k,g}$ is a vector of $G$ values for each energy group $g$. The equation presented above only discretizes the energy of the incoming neutron and neglects the outgoing energy of the neutron (if any). Hence, this formulation must be extended to account for the outgoing energy of neutrons in the discretized scattering matrix cross section used by deterministic neutron transport codes.
We denote the incoming and outgoing neutron energy groups as $g$ and $g'$ for the microscopic scattering matrix cross section $\sigma_{n,s}(\mathbf{r},E)$. As before, spatial homogenization and energy condensation are used to find the multi-group scattering matrix cross section $\sigma_{n,s,k,g \to g'}$ as follows:
$$\sigma_{n,s,k,g\rightarrow g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,s}(\mathbf{r},E'\rightarrow E'')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$
This scalar flux-weighted multi-group microscopic scattering matrix is computed using OpenMC tallies with both energy in and energy out filters.
Multi-Group Fission Spectrum
The energy spectrum of neutrons emitted from fission is denoted by $\chi_{n}(\mathbf{r},E' \rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\chi_{n}(\mathbf{r},E)$ with outgoing energy $E$.
Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\sigma_{n,f}(\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\nu_{n}(\mathbf{r},E)$. The multi-group fission spectrum $\chi_{n,k,g}$ is then the probability of fission neutrons emitted into energy group $g$.
Similar to before, spatial homogenization and energy condensation are used to find the multi-group fission spectrum $\chi_{n,k,g}$ as follows:
$$\chi_{n,k,g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\chi_{n}(\mathbf{r},E'\rightarrow E'')\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}$$
The fission production-weighted multi-group fission spectrum is computed using OpenMC tallies with both energy in and energy out filters.
This concludes our brief overview on the methodology to compute multi-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.
Generate Input Files
End of explanation
"""
# Instantiate some Nuclides
h1 = openmc.Nuclide('H-1')
o16 = openmc.Nuclide('O-16')
u235 = openmc.Nuclide('U-235')
u238 = openmc.Nuclide('U-238')
zr90 = openmc.Nuclide('Zr-90')
"""
Explanation: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
"""
# Instantiate a Material and register the Nuclides
inf_medium = openmc.Material(name='moderator')
inf_medium.set_density('g/cc', 5.)
inf_medium.add_nuclide(h1, 0.028999667)
inf_medium.add_nuclide(o16, 0.01450188)
inf_medium.add_nuclide(u235, 0.000114142)
inf_medium.add_nuclide(u238, 0.006886019)
inf_medium.add_nuclide(zr90, 0.002116053)
"""
Explanation: With the nuclides we defined, we will now create a material for the homogeneous medium.
End of explanation
"""
# Instantiate a MaterialsFile, register all Materials, and export to XML
materials_file = openmc.MaterialsFile()
materials_file.default_xs = '71c'
materials_file.add_material(inf_medium)
materials_file.export_to_xml()
"""
Explanation: With our material, we can now create a MaterialsFile object that can be exported to an actual XML file.
End of explanation
"""
# Instantiate boundary Planes
min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)
max_x = openmc.XPlane(boundary_type='reflective', x0=0.63)
min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)
max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
"""
Explanation: Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
End of explanation
"""
# Instantiate a Cell
cell = openmc.Cell(cell_id=1, name='cell')
# Register bounding Surfaces with the Cell
cell.region = +min_x & -max_x & +min_y & -max_y
# Fill the Cell with the Material
cell.fill = inf_medium
"""
Explanation: With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Instantiate Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
End of explanation
"""
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe
# Instantiate a GeometryFile
geometry_file = openmc.GeometryFile()
geometry_file.geometry = openmc_geometry
# Export to "geometry.xml"
geometry_file.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a GeometryFile object, and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a SettingsFile
settings_file = openmc.SettingsFile()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True, 'summary': True}
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
settings_file.set_source_space('fission', bounds)
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
End of explanation
"""
# Instantiate a 2-group EnergyGroups object
groups = mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625e-6, 20.])
"""
Explanation: Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
"""
# Instantiate a few different sections
total = mgxs.TotalXS(domain=cell, domain_type='cell', groups=groups)
absorption = mgxs.AbsorptionXS(domain=cell, domain_type='cell', groups=groups)
scattering = mgxs.ScatterXS(domain=cell, domain_type='cell', groups=groups)
"""
Explanation: We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:
TotalXS
TransportXS
AbsorptionXS
CaptureXS
FissionXS
NuFissionXS
ScatterXS
NuScatterXS
ScatterMatrixXS
NuScatterMatrixXS
Chi
These classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. In this case, let's create the multi-group total, absorption and scattering cross sections with our 2-group structure.
End of explanation
"""
absorption.tallies
"""
Explanation: Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Absorption object as follows.
End of explanation
"""
# Instantiate an empty TalliesFile
tallies_file = openmc.TalliesFile()
# Add total tallies to the tallies file
for tally in total.tallies.values():
tallies_file.add_tally(tally)
# Add absorption tallies to the tallies file
for tally in absorption.tallies.values():
tallies_file.add_tally(tally)
# Add scattering tallies to the tallies file
for tally in scattering.tallies.values():
tallies_file.add_tally(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: The Absorption object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a TalliesFile object to generate the "tallies.xml" input file for OpenMC.
End of explanation
"""
# Run OpenMC
executor = openmc.Executor()
executor.run_simulation()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Load the summary file and link it with the statepoint
su = openmc.Summary('summary.h5')
sp.link_with_summary(su)
"""
Explanation: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the openmc.mgxs module to properly process the tally data. We first create a Summary object and link it with the statepoint.
End of explanation
"""
# Load the tallies from the statepoint into each MGXS object
total.load_from_statepoint(sp)
absorption.load_from_statepoint(sp)
scattering.load_from_statepoint(sp)
"""
Explanation: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
total.print_xs()
"""
Explanation: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
Let's first inspect our total cross section by printing it to the screen.
End of explanation
"""
df = scattering.get_pandas_dataframe()
df.head(10)
"""
Explanation: Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.
End of explanation
"""
absorption.export_xs_data(filename='absorption-xs', format='excel')
"""
Explanation: Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
End of explanation
"""
total.build_hdf5_store(filename='mgxs', append=True)
absorption.build_hdf5_store(filename='mgxs', append=True)
scattering.build_hdf5_store(filename='mgxs', append=True)
"""
Explanation: The following code snippet shows how to export all three MGXS to the same HDF5 binary data store.
End of explanation
"""
# Use tally arithmetic to compute the difference between the total, absorption and scattering
difference = total.xs_tally - absorption.xs_tally - scattering.xs_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
difference.get_pandas_dataframe()
"""
Explanation: Comparing MGXS with Tally Arithmetic
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to confirm that the TotalXS is equal to the sum of the AbsorptionXS and ScatterXS objects.
End of explanation
"""
# Use tally arithmetic to compute the absorption-to-total MGXS ratio
absorption_to_total = absorption.xs_tally / total.xs_tally
# The absorption-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
absorption_to_total.get_pandas_dataframe()
# Use tally arithmetic to compute the scattering-to-total MGXS ratio
scattering_to_total = scattering.xs_tally / total.xs_tally
# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
scattering_to_total.get_pandas_dataframe()
"""
Explanation: Similarly, we can use tally arithmetic to compute the ratio of AbsorptionXS and ScatterXS to the TotalXS.
End of explanation
"""
# Use tally arithmetic to ensure that the absorption- and scattering-to-total MGXS ratios sum to unity
sum_ratio = absorption_to_total + scattering_to_total
# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
sum_ratio.get_pandas_dataframe()
"""
Explanation: Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity.
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/quantum/tutorials/gradients.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow==2.1.0
"""
Explanation: Calculating gradients
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/gradients"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
In this tutorial you will explore gradient-calculation algorithms for the expectation values of quantum circuits.
Calculating the gradient of the expectation value of a given observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not always have analytic gradient formulas that are easy to write down. As a result, there are different quantum gradient-calculation methods that come in handy in different scenarios. This tutorial compares and contrasts two different differentiation schemes.
Setup
End of explanation
"""
!pip install tensorflow-quantum
"""
Explanation: Install TensorFlow Quantum:
End of explanation
"""
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
"""
Explanation: Now import TensorFlow and the module dependencies:
End of explanation
"""
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
"""
Explanation: 1. Preliminary
Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
End of explanation
"""
pauli_x = cirq.X(qubit)
pauli_x
"""
Explanation: Along with an observable:
End of explanation
"""
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state = sim.simulate(my_circuit, params).final_state
return op.expectation_from_wavefunction(final_state, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
"""
Explanation: Looking at this operator, you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$.
End of explanation
"""
def my_grad(obs, alpha, eps=0.01):
"""Forward finite-difference estimate of the gradient of my_expectation."""
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return (f_x_prime - f_x) / eps
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
"""
Explanation: If you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$, then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
End of explanation
"""
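As a side note, separate from the cirq code above: the forward-difference estimate used in my_grad has error on the order of eps, while a central difference has error on the order of eps squared. A plain-NumPy sketch on the same closed form $f(\alpha) = \sin(\pi \alpha)$ makes the difference visible:

```python
import numpy as np

# closed form for <Y(alpha)| X |Y(alpha)> and its exact derivative
f = lambda x: np.sin(np.pi * x)
exact = lambda x: np.pi * np.cos(np.pi * x)

def forward_diff(fn, x, eps=0.01):
    # first-order accurate: error ~ O(eps)
    return (fn(x + eps) - fn(x)) / eps

def central_diff(fn, x, eps=0.01):
    # second-order accurate: error ~ O(eps**2)
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

x0 = 0.3
print('forward error:', abs(forward_diff(f, x0) - exact(x0)))
print('central error:', abs(central_diff(f, x0) - exact(x0)))
```

The central-difference error is orders of magnitude smaller here, which is why finite-difference differentiators usually expose the stencil as a choice.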
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
"""
Explanation: 2. The need for a differentiator
With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. When a simple formula isn't enough to calculate the gradient, the tfq.differentiators.Differentiator class allows you to define algorithms for computing the gradients of your circuits. For instance, you can recreate the above example in TensorFlow Quantum (TFQ) with:
End of explanation
"""
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
"""
Explanation: However, if you switch to estimating the expectation based on sampling (what would happen on a real device), the values can change a little bit, which means you now have an imperfect estimate:
End of explanation
"""
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
"""
Explanation: This can quickly compound into a serious accuracy problem when it comes to gradients:
End of explanation
"""
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
"""
Explanation: Here you can see that although the finite-difference formula is fast for computing gradients in the analytical case, it is far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that is not as well suited to analytical expectation-gradient calculations, but performs much better in the real-world, sample-based case:
End of explanation
"""
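The idea behind tfq.differentiators.ParameterShift can be illustrated on this tutorial's toy closed form without any quantum machinery. For $f(\alpha) = \sin(\pi \alpha)$ the trigonometric identity $f(\alpha + s) - f(\alpha - s) = 2\cos(\pi \alpha)\sin(\pi s)$ makes the shifted-evaluation gradient exact for any finite shift, so the gradient comes from two evaluations at macroscopically shifted parameters rather than a noise-amplifying small-eps difference. This is a sketch of the principle for this particular function, not TFQ's general implementation:

```python
import numpy as np

f = lambda a: np.sin(np.pi * a)  # closed form for <Y(alpha)| X |Y(alpha)>

def param_shift_grad(fn, alpha, shift=0.5):
    # exact for f(a) = sin(pi * a):
    #   pi * (f(a + s) - f(a - s)) / (2 * sin(pi * s)) == pi * cos(pi * a)
    return np.pi * (fn(alpha + shift) - fn(alpha - shift)) / (2 * np.sin(np.pi * shift))

alpha = 0.3
print(param_shift_grad(f, alpha), np.pi * np.cos(np.pi * alpha))
```

Because the two evaluation points are far apart, sampling noise on each is not divided by a tiny eps, which is exactly why this family of methods behaves so much better on sampled expectations.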
pauli_z = cirq.Z(qubit)
pauli_z
"""
Explanation: Above you can see that certain differentiators are best suited to particular research scenarios. In general, the slower sample-based methods that are robust to device noise are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods such as finite difference are great when you want analytical calculations and higher throughput, but are not yet concerned with the device viability of your algorithm.
3. Multiple observables
Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
End of explanation
"""
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
"""
Explanation: If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
End of explanation
"""
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
"""
Explanation: It's a match (close enough).
Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$, then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding more terms to $g$.
This means that the gradient of a particular symbol in a circuit is equal to the sum of its gradients with respect to each observable applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where the sum of the gradients over all observables is given as the gradient for a particular symbol).
End of explanation
"""
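The linearity claim above can be checked on the closed forms alone. This is a small sketch, independent of TFQ, using a central finite difference on $g = f_1 + f_2$:

```python
import numpy as np

f1 = lambda a: np.sin(np.pi * a)   # <Y(a)| X |Y(a)>
f2 = lambda a: np.cos(np.pi * a)   # <Y(a)| Z |Y(a)>
g = lambda a: f1(a) + f2(a)        # adding an observable adds a term to g

df1 = lambda a: np.pi * np.cos(np.pi * a)
df2 = lambda a: -np.pi * np.sin(np.pi * a)

def numeric_grad(fn, a, eps=1e-6):
    # central difference for a quick numeric check
    return (fn(a + eps) - fn(a - eps)) / (2 * eps)

a0 = 0.0
print(numeric_grad(g, a0), df1(a0) + df2(a0))
```

The numeric gradient of the sum matches the sum of the analytic gradients, which is the behavior TFQ's backpropagation relies on.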
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
"""
Explanation: Here you see that the first entry is the expectation w.r.t. Pauli X, and the second is the expectation w.r.t. Pauli Z. Now when you take the gradient:
End of explanation
"""
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
@tf.function
def _compute_gradient(self, symbol_values):
"""Compute the gradient based on symbol_values."""
# f(x) = sin(pi * x)
# f'(x) = pi * cos(pi * x)
return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32)
@tf.function
def differentiate_analytic(self, programs, symbol_names, symbol_values,
pauli_sums, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with analytical expectation.
This is called at graph runtime by TensorFlow. `differentiate_analytic`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
# Computing gradients just based off of symbol_values.
return self._compute_gradient(symbol_values) * grad
@tf.function
def differentiate_sampled(self, programs, symbol_names, symbol_values,
pauli_sums, num_samples, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with sampled expectation.
This is called at graph runtime by TensorFlow. `differentiate_sampled`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
num_samples: `tf.Tensor` of positive integers representing the
number of samples per term in each term of pauli_sums used
during the forward pass.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
return self._compute_gradient(symbol_values) * grad
"""
Explanation: Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in compatibility with the rest of TensorFlow.
4. Advanced usage
Here you will learn how to define your own custom differentiation routines for quantum circuits. All differentiators that exist inside TensorFlow Quantum subclass tfq.differentiators.Differentiator. A differentiator must implement differentiate_analytic and differentiate_sampled.
The following uses TensorFlow Quantum constructs to implement the closed-form solution from the first part of this tutorial.
End of explanation
"""
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
"""
Explanation: This new differentiator can now be used with existing tfq.layer objects:
End of explanation
"""
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[1000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Forward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
"""
Explanation: Now this new differentiator can be used to generate differentiable ops.
Key point: A differentiator can only be attached to one op at a time, so a differentiator that was previously attached to an op must be refreshed before it is attached to a new one.
End of explanation
"""
|
Walter1218/self_driving_car_ND | CarND-LaneLines-P1/P1.ipynb | mit | #importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
"""
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
"""
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
"""
Explanation: Read in an Image
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
#return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines_old(img, lines, color=[255, 0, 0], thickness=6):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img,(x1,y1),(x2,y2), color, thickness)
def draw_lines(img, lines, color=[255, 0, 0], thickness=5):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
#define parameters here
left_m = 0
left_bias = 0
right_m = 0
right_bias = 0
left_size = 0
right_size = 0
height = img.shape[0]
width = img.shape[1]
#in this function, we use y = m * x + b; m is slope, and b is bias.
for line in lines:
for x1,y1,x2,y2 in line:
#skip vertical segments to avoid a division by zero
if x2 == x1:
continue
#calculate the slope
slope = ((y2 - y1)/(x2 - x1))
#print(slope)
#because in lane line detection, if the slope is close to 0, the segment is likely noise, so skip it
if(slope < 0.5 and slope > -0.5):
continue
point = np.mean(line, axis=0)
#left line
if slope < -0.5:
#sum of m & bias
left_m += slope
left_bias += point[1] - point[0] * slope
left_size += 1
#print(slope)
if slope > 0.5:
#sum of m & bias
right_m += slope
right_bias += point[1] - point[0] * slope
right_size += 1
#print(slope)
if(right_size>0):
#draw single right line
cv2.line(img,(int(width*9/16), int(right_m/right_size * int(width*9/16) + right_bias/right_size)),(width, int(right_m/right_size * width +right_bias/right_size)), color, thickness )
#cv2.line(img, (int((539 - right_bias/right_size)/(right_m/right_size)), 539), (549, int(right_m/right_size * 549 + right_bias/right_size)), color, thickness)
if(left_size > 0):
#draw single left line
cv2.line(img, (0, int(left_bias/left_size)), (int(width*7/16), int(left_m/left_size * int(width*7/16) + left_bias/left_size)), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap, new_draw_fun = True):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
if(new_draw_fun):
draw_lines(line_img, lines)
else:
draw_lines_old(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
"""
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
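Before building the OpenCV pipeline itself, the slope-based left/right separation suggested in draw_lines can be sketched in isolation. This is a toy illustration with made-up segments (remember that in image coordinates y grows downward, so the left lane line has negative slope), not the project pipeline:

```python
def split_by_slope(segments, min_abs_slope=0.5):
    """Split Hough segments (x1, y1, x2, y2) into left/right lane candidates by slope sign."""
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:                      # skip vertical segments (undefined slope)
            continue
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < min_abs_slope:    # near-horizontal segments are treated as noise
            continue
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right

segs = [(100, 500, 300, 300),   # left lane candidate, slope = -1
        (600, 300, 800, 500),   # right lane candidate, slope = +1
        (0, 400, 900, 410)]     # almost horizontal -> rejected as noise
left, right = split_by_slope(segs)
print(len(left), len(right))
```

Once the segments are partitioned like this, each side can be averaged and extrapolated to the bottom of the image and the top of the region of interest, which is exactly what the improved draw_lines below does.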
import os
os.listdir("test_images/")
"""
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
def fine_lane_lines_function(image, new_draw_func):
# define parameters here
kernel_size = 3
low_threshold = 50
high_threshold = 150
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(imshape[1]*7/16,imshape[0]*5/8), (imshape[1]*9/16, imshape[0]*5/8), (imshape[1],imshape[0])]], dtype=np.int32)
#convert the RGB image to grayscale
gray = grayscale(image)
#blur the gray image
blur_gray = gaussian_blur(gray, kernel_size)
#using canny edge function for the blur gray image
edges = canny(blur_gray, low_threshold, high_threshold)
#masked edges image by using region_of_interest function
masked_edges = region_of_interest(edges, vertices)
#here we use hough transform to process the masked edges
#define parameters here
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 100 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 100 #minimum number of pixels making up a line
max_line_gap = 160 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
#calling hough transform function and output color_edges
color_edges = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap, new_draw_fun = new_draw_func)
lines_edges = weighted_img(color_edges, image)
return lines_edges
for img_ in os.listdir("test_images/"):
image_address = "test_images/"+img_
#read image from input address
image = cv2.imread(image_address)
lines_edges = fine_lane_lines_function(image, False)
#debug-code for visiulization
fig = plt.figure()
plt.imshow(lines_edges)
#saving image to the test_images directory
cv2.imwrite("test_images/test_"+img_,lines_edges)
"""
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = fine_lane_lines_function(image, False)
return result
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
white_output = 'white_1.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
yellow_output = 'yellow_1.mp4'
clip1 = VideoFileClip("solidYellowLeft.mp4")
yellow_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = fine_lane_lines_function(image, True)
return result
white_output = 'white_2.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
End of explanation
"""
yellow_output = 'yellow_2.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
def pipeline(image, new_draw_func):
# define parameters here
#print(image.shape)
kernel_size = 6
low_threshold = 50
high_threshold = 150
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(imshape[1]*7/16,imshape[0]*5/8), (imshape[1]*9/16, imshape[0]*5/8), (imshape[1],imshape[0])]], dtype=np.int32)
#using hsv image
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
yellow = cv2.inRange(hsv, (20, 80, 80), (25, 255, 255))
white = cv2.inRange(hsv, (0, 0, 180), (255, 25, 255))
#convert to grayscale
gray = cv2.bitwise_or(yellow, white)
#using canny edge function for the blur gray image
edges = canny(gray, low_threshold, high_threshold)
#masked edges image by using region_of_interest function
masked_edges = region_of_interest(edges, vertices)
#here we use hough transform to process the masked edges
#define parameters here
rho = 4 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 100 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 10 #minimum number of pixels making up a line
max_line_gap = 10 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
#calling hough transform function and output color_edges
color_edges = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap, new_draw_fun = new_draw_func)
lines_edges = weighted_img(color_edges, image)
return lines_edges
def process_image_1(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = pipeline(image, True)
return result
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image_1)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
import io
import itertools
import os
import re
import string
import numpy as np
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import (
Activation,
Dense,
Dot,
Embedding,
Flatten,
GlobalAveragePooling1D,
Reshape,
)
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
"""
Explanation: Word2Vec
Learning Objectives
Learn how to build a Word2Vec model
Prepare training data for Word2Vec
Train a Word2Vec model. In this lab we will build a Skip Gram Model
Learn how to visualize embeddings and analyze them using the Embedding Projector
Introduction
Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on Efficient Estimation of Word Representations in Vector Space and
Distributed
Representations of Words and Phrases and their Compositionality. It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
Continuous Bag-of-Words Model which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
Continuous Skip-gram Model which predicts words within a certain range before and after the current word in the same sentence.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the TensorFlow Embedding Projector.
Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of (target_word, context_word) where context_word appears in the neighboring context of target_word.
Consider the following sentence of 8 words.
The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a target_word that can be considered context word. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this lab, a window size of n implies n words on each side with a total window span of 2*n+1 words across a word.
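As a minimal, plain-Python illustration of this windowing (the helper `skipgram_pairs` is hypothetical, independent of the Keras utilities used later), the (target, context) pairs for the example sentence with a window size of 2 can be enumerated as:

```python
def skipgram_pairs(tokens, window_size=2):
    """Enumerate (target, context) pairs within +/- window_size tokens."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window_size)
        hi = min(len(tokens), i + window_size + 1)
        for j in range(lo, hi):
            if j != i:  # a word is not its own context
                pairs.append((target, tokens[j]))
    return pairs

sentence = "the wide road shimmered in the hot sun".split()
pairs = skipgram_pairs(sentence)
print(pairs[:4])
# [('the', 'wide'), ('the', 'road'), ('wide', 'the'), ('wide', 'road')]
```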
The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>, the objective can be written as the average log probability
where c is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.
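The two equations referenced here (rendered as images in the original notebook and missing from this copy) can be written, following the original papers, as:

```latex
% Average log probability over a sequence of T words (training objective)
\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-c \le j \le c \\ j \ne 0}}
  \log p(w_{t+j} \mid w_t)

% Skip-gram softmax over a vocabulary of W words
p(w_O \mid w_I) =
  \frac{\exp\!\left({v'_{w_O}}^{\top} v_{w_I}\right)}
       {\sum_{w=1}^{W} \exp\!\left({v'_{w}}^{\top} v_{w_I}\right)}
```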
where v and v<sup>'</sup> are target and context vector representations of words and W is vocabulary size. Here v<sub>0</sub> and v<sub>1</sub> are model parameters which are updated by gradient descent.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary words which is often large (10<sup>5</sup>-10<sup>7</sup>) terms.
The Noise Contrastive Estimation loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be simplified to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from num_ns negative samples drawn from noise distribution P<sub>n</sub>(w) of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and num_ns negative samples.
A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the window_size neighborhood of the target_word. For the example sentence, these are a few potential negative samples (when window_size is 2).
(hot, shimmered)
(wide, hot)
(wide, sun)
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
Setup
End of explanation
"""
# Show the currently installed version of TensorFlow you should be using TF 2.6
print("TensorFlow version: ", tf.version.VERSION)
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
OUTDIR = f"gs://{BUCKET}/text_models"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%env OUTDIR=$OUTDIR
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
"""
Explanation: Please check your tensorflow version using the cell below.
End of explanation
"""
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
"""
Explanation: Vectorize an example sentence
Consider the following sentence:
The wide road shimmered in the hot sun.
Tokenize the sentence:
End of explanation
"""
vocab, index = {}, 1 # start indexing from 1
vocab["<pad>"] = 0 # add a padding token
for token in tokens:
if token not in vocab:
vocab[token] = index
index += 1
vocab_size = len(vocab)
print(vocab)
"""
Explanation: Create a vocabulary to save mappings from tokens to integer indices.
End of explanation
"""
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
"""
Explanation: Create an inverse vocabulary to save mappings from integer indices to tokens.
End of explanation
"""
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
"""
Explanation: Vectorize your sentence.
End of explanation
"""
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0,
)
print(len(positive_skip_grams))
"""
Explanation: Generate skip-grams from one sentence
The tf.keras.preprocessing.sequence module provides useful functions that simplify data preparation for Word2Vec. You can use the tf.keras.preprocessing.sequence.skipgrams function to generate skip-gram pairs from the example_sequence with a given window_size from tokens in the range [0, vocab_size).
Note: negative_samples is set to 0 here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
End of explanation
"""
for target, context in positive_skip_grams[:5]:
print(
f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})"
)
"""
Explanation: Take a look at a few positive skip-grams.
End of explanation
"""
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
range_max=vocab_size,  # pick index of the samples from [0, vocab_size)
seed=SEED, # seed for reproducibility
name="negative_sampling", # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
"""
Explanation: Negative sampling for one skip-gram
The skipgrams function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the tf.random.log_uniform_candidate_sampler function to sample num_ns negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point: num_ns (number of negative samples per positive context word) between [5, 20] is shown to work best for smaller datasets, while num_ns between [2, 5] suffices for larger datasets.
End of explanation
"""
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0] * num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
"""
Explanation: Construct one training example
For a given positive (target_word, context_word) skip-gram, you now also have num_ns negative sampled context words that do not appear in the window size neighborhood of target_word. Batch the 1 positive context_word and num_ns negative context words into one tensor. This produces a set of positive skip-grams (labelled as 1) and negative samples (labelled as 0) for each target word.
End of explanation
"""
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
"""
Explanation: Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
End of explanation
"""
print(f"target :", target)
print(f"context :", context)
print(f"label :", label)
"""
Explanation: A tuple of (target, context, label) tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape (1,) while the context and label are of shape (1+num_ns,).
End of explanation
"""
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
"""
Explanation: Summary
This picture summarizes the procedure of generating training example from a sentence.
Lab Task 1
Skip-gram Sampling table
A large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as the, is, on) don't add much useful information for the model to learn from. Mikolov et al. suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The tf.keras.preprocessing.sequence.skipgrams function accepts a sampling table argument to encode probabilities of sampling any token. You can use the tf.keras.preprocessing.sequence.make_sampling_table to generate a word-frequency rank based probabilistic sampling table and pass it to skipgrams function. Take a look at the sampling probabilities for a vocab_size of 10.
End of explanation
"""
"""
Generates skip-gram pairs with negative sampling for a list of sequences
(int-encoded sentences) based on window size, number of negative samples
and vocabulary size.
"""
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
# Elements of each training example are appended to these lists.
targets, contexts, labels = [], [], []
# Build the sampling table for vocab_size tokens.
# TODO 1a
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(
vocab_size
)
# Iterate over all sequences (sentences) in dataset.
for sequence in tqdm.tqdm(sequences):
# Generate positive skip-gram pairs for a sequence (sentence).
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
sequence,
vocabulary_size=vocab_size,
sampling_table=sampling_table,
window_size=window_size,
negative_samples=0,
)
# Iterate over each positive skip-gram pair to produce training examples
# with positive context word and negative samples.
# TODO 1b
for target_word, context_word in positive_skip_grams:
context_class = tf.expand_dims(
tf.constant([context_word], dtype="int64"), 1
)
(
negative_sampling_candidates,
_,
_,
) = tf.random.log_uniform_candidate_sampler(
true_classes=context_class,
num_true=1,
num_sampled=num_ns,
unique=True,
range_max=vocab_size,
seed=SEED,
name="negative_sampling",
)
# Build context and label vectors (for one target word)
negative_sampling_candidates = tf.expand_dims(
negative_sampling_candidates, 1
)
context = tf.concat(
[context_class, negative_sampling_candidates], 0
)
label = tf.constant([1] + [0] * num_ns, dtype="int64")
# Append each element from the training example to global lists.
targets.append(target_word)
contexts.append(context)
labels.append(label)
return targets, contexts, labels
"""
Explanation: sampling_table[i] denotes the probability of sampling the i-th most common word in a dataset. The function assumes a Zipf's distribution of the word frequencies for sampling.
Key point: The tf.random.log_uniform_candidate_sampler already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using this distribution-weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
End of explanation
"""
path_to_file = tf.keras.utils.get_file(
"shakespeare.txt",
"https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt",
)
"""
Explanation: Lab Task 2: Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
End of explanation
"""
with open(path_to_file) as f:
lines = f.read().splitlines()
for line in lines[:20]:
print(line)
"""
Explanation: Read text from the file and take a look at the first few lines.
End of explanation
"""
# TODO 2a
text_ds = tf.data.TextLineDataset(path_to_file).filter(
lambda x: tf.cast(tf.strings.length(x), bool)
)
"""
Explanation: Use the non empty lines to construct a tf.data.TextLineDataset object for next steps.
End of explanation
"""
"""
We create a custom standardization function to lowercase the text and
remove punctuation.
"""
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
return tf.strings.regex_replace(
lowercase, "[%s]" % re.escape(string.punctuation), ""
)
"""
Define the vocabulary size and number of words in a sequence.
"""
vocab_size = 4096
sequence_length = 10
"""
Use the text vectorization layer to normalize, split, and map strings to
integers. Set output_sequence_length length to pad all samples to same length.
"""
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode="int",
output_sequence_length=sequence_length,
)
"""
Explanation: Vectorize sentences from the corpus
You can use the TextVectorization layer to vectorize sentences from the corpus. Learn more about using this layer in this Text Classification tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a custom_standardization function that can be used in the TextVectorization layer.
End of explanation
"""
vectorize_layer.adapt(text_ds.batch(1024))
"""
Explanation: Call adapt on the text dataset to create vocabulary.
End of explanation
"""
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
"""
Explanation: Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with get_vocabulary(). This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
End of explanation
"""
def vectorize_text(text):
text = tf.expand_dims(text, -1)
return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = (
text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
)
"""
Explanation: The vectorize_layer can now be used to generate vectors for each element in the text_ds.
End of explanation
"""
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
"""
Explanation: Obtain sequences from the dataset
You now have a tf.data.Dataset of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the generate_training_data() defined earlier uses non-TF python/numpy functions, you could also use a tf.py_function or tf.numpy_function with tf.data.Dataset.map().
End of explanation
"""
for seq in sequences[:5]:
print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
"""
Explanation: Take a look at few examples from sequences.
End of explanation
"""
targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED,
)
print(len(targets), len(contexts), len(labels))
"""
Explanation: Generate training examples from sequences
sequences is now a list of int encoded sentences. Just call the generate_training_data() function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of targets, contexts and labels should be the same, representing the total number of training examples.
End of explanation
"""
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
"""
Explanation: Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the tf.data.Dataset API. After this step, you would have a tf.data.Dataset object of (target_word, context_word), (label) elements to train your Word2Vec model!
End of explanation
"""
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
"""
Explanation: Add cache() and prefetch() to improve performance.
End of explanation
"""
class Word2Vec(Model):
def __init__(self, vocab_size, embedding_dim):
super().__init__()
self.target_embedding = Embedding(
vocab_size,
embedding_dim,
input_length=1,
name="w2v_embedding",
)
self.context_embedding = Embedding(
vocab_size, embedding_dim, input_length=num_ns + 1
)
self.dots = Dot(axes=(3, 2))
self.flatten = Flatten()
def call(self, pair):
target, context = pair
we = self.target_embedding(target)
ce = self.context_embedding(context)
dots = self.dots([ce, we])
return self.flatten(dots)
"""
Explanation: Lab Task 3: Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.
Subclassed Word2Vec Model
Use the Keras Subclassing API to define your Word2Vec model with the following layers:
target_embedding: A tf.keras.layers.Embedding layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are (vocab_size * embedding_dim).
context_embedding: Another tf.keras.layers.Embedding layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in target_embedding, i.e. (vocab_size * embedding_dim).
dots: A tf.keras.layers.Dot layer that computes the dot product of target and context embeddings from a training pair.
flatten: A tf.keras.layers.Flatten layer to flatten the results of dots layer into logits.
With the subclassed model, you can define the call() function that accepts (target, context) pairs which can then be passed into their corresponding embedding layers. Reshape the context_embedding to perform a dot product with target_embedding and return the flattened result.
Key point: The target_embedding and context_embedding layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
End of explanation
"""
# TODO 3a
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(
optimizer="adam",
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
"""
Explanation: Define loss function and compile model
For simplicity, you can use tf.keras.losses.CategoricalCrossentropy as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
python
def custom_loss(x_logit, y_true):
return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the tf.keras.optimizers.Adam optimizer.
End of explanation
"""
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
"""
Explanation: Also define a callback to log training statistics for tensorboard.
End of explanation
"""
dataset
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
"""
Explanation: Train the model with dataset prepared above for some number of epochs.
End of explanation
"""
def copy_tensorboard_logs(local_path: str, gcs_path: str):
"""Copies Tensorboard logs from a local dir to a GCS location.
After training, batch copy Tensorboard logs locally to a GCS location.
Args:
local_path: local filesystem directory uri.
gcs_path: cloud filesystem directory uri.
Returns:
None.
"""
pattern = f"{local_path}/*/events.out.tfevents.*"
local_files = tf.io.gfile.glob(pattern)
gcs_log_files = [
local_file.replace(local_path, gcs_path) for local_file in local_files
]
for local_file, gcs_file in zip(local_files, gcs_log_files):
tf.io.gfile.copy(local_file, gcs_file)
copy_tensorboard_logs("./logs", OUTDIR + "/word2vec_logs")
"""
Explanation: Visualize training on Tensorboard
In order to visualize how the model has trained we can use tensorboard to show the Word2Vec model's accuracy and loss. To do that, we first have to copy the logs from local to a GCS (Cloud Storage) folder.
End of explanation
"""
# TODO 4a
weights = word2vec.get_layer("w2v_embedding").get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
"""
Explanation: To visualize the embeddings, open Cloud Shell and use the following command:
tensorboard --port=8081 --logdir OUTDIR/word2vec_logs
In Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open the TensorBoard.
Lab Task 4: Embedding lookup and analysis
Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line.
End of explanation
"""
out_v = open("text_models/vectors.tsv", "w", encoding="utf-8")
out_m = open("text_models/metadata.tsv", "w", encoding="utf-8")
for index, word in enumerate(vocab):
if index == 0:
continue # skip 0, it's padding.
vec = weights[index]
out_v.write("\t".join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
"""
Explanation: Create and save the vectors and metadata file.
End of explanation
"""
from cntk import load_model
import findspark
findspark.init('/root/spark-2.1.0-bin-hadoop2.6')
import os
import numpy as np
import pandas as pd
import pickle
import sys
from pyspark import SparkFiles
from pyspark import SparkContext
from pyspark.sql.session import SparkSession
sc =SparkContext()
spark = SparkSession(sc)
import tarfile
from urllib.request import urlretrieve
import xml.etree.ElementTree
cifar_uri = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz' # Location of test image dataset
mean_image_uri = 'https://raw.githubusercontent.com/Azure-Samples/hdinsight-pyspark-cntk-integration/master/CIFAR-10_mean.xml' # Mean image for subtraction
model_uri = 'https://github.com/Azure-Samples/hdinsight-pyspark-cntk-integration/raw/master/resnet20_meanimage_159.dnn' # Location of trained model
local_tmp_dir = '/tmp/cifar'
local_cifar_path = os.path.join(local_tmp_dir, os.path.basename(cifar_uri))
local_model_path = os.path.join(local_tmp_dir, 'model.dnn')
local_mean_image_path = os.path.join(local_tmp_dir, 'mean_image.xml')
os.makedirs(local_tmp_dir, exist_ok=True)
"""
Explanation: Walkthrough: Scoring a trained CNTK model with PySpark on a Microsoft Azure HDInsight cluster
This notebook demonstrates how a trained Microsoft Cognitive Toolkit deep learning model can be applied to files in a distributed and scalable fashion using the Spark Python API (PySpark). An image classification model pretrained on the CIFAR-10 dataset is applied to 10,000 withheld images. A sample of the images is shown below along with their classes:
<img src="https://cntk.ai/jup/201/cifar-10.png" width=500 height=500>
To begin, follow the instructions below to set up a cluster and storage account. You will be prompted to upload a copy of this notebook to the cluster, where you can continue following the walkthrough by executing the PySpark code cells.
Outline
Load sample images into a Spark Resilient Distributed Dataset or RDD
Load modules and define presets
Download the dataset locally on the Spark cluster
Convert the dataset into an RDD
Score the images using a trained CNTK model
Download the trained CNTK model to the Spark cluster
Define functions to be used by worker nodes
Score the images on worker nodes
Evaluate model accuracy
<a name="images"></a>
Load sample images into a Spark Resilient Distributed Dataset or RDD
We will now use Python to obtain the CIFAR-10 image set compiled and distributed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. For more details on the dataset, see Alex Krizhevsky's Learning Multiple Layers of Features from Tiny Images (2009).
<a name="imports"></a>
Load modules and define presets
Execute the cell below by selecting it with the mouse or arrow keys, then pressing Shift+Enter.
End of explanation
"""
if not os.path.exists(local_cifar_path):
urlretrieve(cifar_uri, filename=local_cifar_path)
with tarfile.open(local_cifar_path, 'r:gz') as f:
test_dict = pickle.load(f.extractfile('cifar-10-batches-py/test_batch'), encoding='latin1')
"""
Explanation: <a name="tarball"></a>
Download the dataset locally on the Spark cluster
The image data are ndarrays stored in a Python dict which has been pickled and tarballed. The cell below downloads the tarball and extracts the dict containing the test image data.
End of explanation
"""
def reshape_image(record):
image, label, filename = record
return image.reshape(3,32,32).transpose(1,2,0), label, filename
image_rdd = sc.parallelize(zip(test_dict['data'], test_dict['labels'], test_dict['filenames']))
image_rdd = image_rdd.map(reshape_image)
"""
Explanation: <a name="rdd"></a>
Convert the dataset into an RDD
The following code cell illustrates how the collection of images can be distributed to create a Spark RDD. Keeping the number of partitions small (ideally about one per worker) limits the number of times that the trained model must be reloaded during scoring.
End of explanation
"""
sample_images = image_rdd.take(5)
image_data = np.array([i[0].reshape((32*32*3)) for i in sample_images]).T
image_labels = [i[2] for i in sample_images]
image_df = pd.DataFrame(image_data, columns=image_labels)
spark.createDataFrame(image_df).coalesce(1).write.mode("overwrite").csv("/tmp/cifar_image", header=True)
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from glob import glob
image_df = pd.read_csv(glob('/tmp/cifar_image/*.csv')[0])
plt.figure(figsize=(15,1))
for i, col in enumerate(image_df.columns):
plt.subplot(1, 5, i+1)
image = image_df[col].values.reshape((32, 32, 3))
plt.imshow(image)
plt.title(col)
cur_axes = plt.gca()
cur_axes.axes.get_xaxis().set_visible(False)
cur_axes.axes.get_yaxis().set_visible(False)
"""
Explanation: To convince ourselves that the data has been properly loaded, let's visualize a few of these images. For plotting, we will need to transfer them to the local context by way of a Spark dataframe:
End of explanation
"""
urlretrieve(model_uri, local_model_path)
sc.addFile(local_model_path)
urlretrieve(mean_image_uri, local_mean_image_path)
mean_image = xml.etree.ElementTree.parse(local_mean_image_path).getroot()
mean_image = [float(i) for i in mean_image.find('MeanImg').find('data').text.strip().split(' ')]
mean_image = np.array(mean_image).reshape((32, 32, 3)).transpose((2, 0, 1))
mean_image_bc = sc.broadcast(mean_image)
"""
Explanation: <a name="score"></a>
Score the images using a trained CNTK model
Now that the cluster and sample dataset have been created, we can use PySpark to apply a trained model to the images.
<a name="model"></a>
Download the trained CNTK model and mean image to the Spark cluster
We previously trained a twenty-layer ResNet model to classify CIFAR-10 images by following this tutorial from the CNTK git repo. The model expects input images to be preprocessed by subtracting the mean image defined in an OpenCV XML file. The following cell downloads both the trained model and the mean image, and ensures that data from both files can be accessed by worker nodes.
End of explanation
"""
def get_preprocessed_image(my_image, mean_image):
''' Reshape and flip RGB order '''
my_image = my_image.astype(np.float32)
bgr_image = my_image[:, :, ::-1] # RGB -> BGR
image_data = np.ascontiguousarray(np.transpose(bgr_image, (2, 0, 1)))
image_data -= mean_image
return(image_data)
def run_worker(records):
''' Scoring script run by each worker '''
loaded_model = load_model(SparkFiles.get('./model.dnn'))
mean_image = mean_image_bc.value
# Iterate through the records in the RDD.
# record[0] is the image data
# record[1] is the true label
# record[2] is the file name
for record in records:
preprocessed_image = get_preprocessed_image(record[0], mean_image)
dnn_output = loaded_model.eval({loaded_model.arguments[0]: [preprocessed_image]})
yield record[1], np.argmax(np.squeeze(dnn_output))
"""
Explanation: <a name="functions"></a>
Define functions to be used by worker nodes
The following functions will be used during scoring to load, preprocess, and score images. A predicted class label (an integer in the range 0-9) will be returned for each image, along with its true label.
End of explanation
"""
labelled_images = image_rdd.mapPartitions(run_worker)
# Time how long it takes to score 10k test images
import datetime
start = datetime.datetime.now()
results = labelled_images.collect()
print('Scored {} images'.format(len(results)))
stop = datetime.datetime.now()
print(stop - start)
"""
Explanation: <a name="map"></a>
Score the images on worker nodes
The code cell below maps each partition of image_rdd to a worker node and collects the results. Runtimes of 1-3 minutes are typical.
End of explanation
"""
df = pd.DataFrame(results, columns=['true_label', 'predicted_label'])
num_correct = sum(df['true_label'] == df['predicted_label'])
num_total = len(results)
print('Correctly predicted {} of {} images ({:0.2f}%)'.format(num_correct, num_total, 100 * num_correct / num_total))
"""
Explanation: <a name="evaluate"></a>
Evaluate model accuracy
The trained model assigns a class label (represented by an integer value 0-9) to each image. We now compare the true and predicted class labels to evaluate our model's accuracy.
End of explanation
"""
spark.createDataFrame(df).coalesce(1).write.mode("overwrite").csv("/tmp/cifar_scores", header=True)
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import os
from glob import glob
df = pd.read_csv(glob('/tmp/cifar_scores/*.csv')[0])
print('Constructing a confusion matrix with the first {} samples'.format(len(df.index)))
label_to_name_dict = {0: 'airplane',
1: 'automobile',
2: 'bird',
3: 'cat',
4: 'deer',
5: 'dog',
6: 'frog',
7: 'horse',
8: 'ship',
9: 'truck'}
labels = np.sort(df['true_label'].unique())
named_labels = [label_to_name_dict[i] for i in labels]
cm = confusion_matrix(df['true_label'], df['predicted_label'], labels=labels)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(len(labels))
plt.xticks(tick_marks, named_labels, rotation=90)
plt.yticks(tick_marks, named_labels)
plt.xlabel('Predicted label')
plt.ylabel('True Label')
plt.show()
"""
Explanation: We can construct a confusion matrix to visualize which classification errors are most common:
End of explanation
"""
|
khrapovs/metrix | notebooks/asymptotic_and_bootstrap_ci.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
import datetime as dt
from numpy.linalg import inv, lstsq
from scipy.stats import norm
# Local file ols.py
from ols import ols
# For inline pictures
%matplotlib inline
sns.set_context('paper')
# For nicer output of Pandas dataframes
pd.set_option('float_format', '{:8.2f}'.format)
np.set_printoptions(precision=3, suppress=True)
"""
Explanation: Asymptotic and Bootstrap Confidence Intervals
Nerlove.dat contains data used by Marc Nerlove to analyze a cost function for 145 American electric companies. The variables (in order) are:
# : The number of the observation
$C$ : Total production cost, in \$millions
$Q$ : Kilowatt-hours of output in billions
$P_{w}$ : Wage rate per hour
$P_{f}$ : Price of fuels in cents per million BTUs
$P_{k}$ : The rental price of capital
Nerlove was interested in estimating a cost function: $C=f\left(Q,P_{w},P_{f},P_{k}\right)$.
End of explanation
"""
names = ['Obs', 'C', 'Q', 'Pw', 'Pf', 'Pk']
df = pd.read_csv('../data/Nerlove/Nerlove.dat', names=names, sep=' ', skipinitialspace=True)
# Drop empty lines
df = df.dropna().drop('Obs', axis=1)
# Take logs
df = df.apply(np.log)
print(df.head())
"""
Explanation: Import the data
End of explanation
"""
df.plot(subplots=True, figsize=(10, 6))
plt.show()
"""
Explanation: Draw some plots
End of explanation
"""
Y = np.array(df['C'])
X = np.array(df[['Q', 'Pw', 'Pf', 'Pk']].T)
K, N = X.shape
res = ols(Y, X, 'White')
theta, se, V = res['beta'], res['s'], res['V']
"""
Explanation: OLS estimation
The model under consideration is
$$
\mathbb{E}\left[\log C\left|Q,P_{w},P_{f},P_{k}\right.\right]=\alpha_{1}+\alpha_{2}\log Q+\alpha_{3}\log P_{w}+\alpha_{4}\log P_{f}+\alpha_{5}\log P_{k}.
$$
End of explanation
"""
%%time
CI_asy_hi = theta + norm.ppf(.975) * se
CI_asy_lo = theta - norm.ppf(.975) * se
print(theta)
print(CI_asy_hi)
print(CI_asy_lo)
"""
Explanation: Confidence Intervals for individual parameters
Asymptotic CI
End of explanation
"""
def resample(Y, X):
"""Resample data randomly with replacement."""
N = len(Y)
ind = np.random.choice(N, size=N)
return Y[ind], X[:, ind]
B = 100
# Initialize array for bootstrapped estimates and standard errors
theta_b = np.empty((B, K+1))
se_b = theta_b.copy()
for b in range(B):
Yb, Xb = resample(Y, X)
res = ols(Yb, Xb, 'White')
theta_b[b], se_b[b], V_b = res['beta'], res['s'], res['V']
tstat_boot = np.abs(theta_b - theta) / se_b
q = np.percentile(tstat_boot, 95, axis=0)
CI_boo_hi = theta + q * se
CI_boo_lo = theta - q * se
print(theta)
print(CI_boo_hi)
print(CI_boo_lo)
"""
Explanation: Bootstrap CI
End of explanation
"""
# Parameter picker for testing linear restriction
r = np.array([0, 0, 1, 1, 1])
# T-statistics for the test
tstat = (theta[2:].sum() - 1) / (np.dot(r, V).dot(r))**.5
# Corresponding p-value
pval = norm.cdf(tstat)
print('T-statistics = %.4f' % tstat)
print('Asy. P-value = %.4f' % pval)
"""
Explanation: Test for the linear combination of parameters
Asymptotic
Test the hypothesis $H_{0}:\;\alpha_{3}+\alpha_{4}+\alpha_{5}=1$ against $H_{a}:\;\alpha_{3}+\alpha_{4}+\alpha_{5}<1$ at the 5% significance level.
End of explanation
"""
B = 100
# Initialize container for bootstrapped t-statistics
tstat_lin_b = np.empty(B)
for b in range(B):
    Yb, Xb = resample(Y, X)
    res = ols(Yb, Xb, 'White')
    beta_b, V_b = res['beta'], res['V']
    # Bootstrap t-statistic. Note recentering!
    tstat_lin_b[b] = (beta_b[2:].sum() - theta[2:].sum()) / (np.dot(r, V_b).dot(r))**.5
pval_b = np.sum(tstat_lin_b < tstat) / B
print('Boot. P-value = %.4f' % pval_b)
"""
Explanation: Bootstrap
End of explanation
"""
Y = np.array(df['C'])
# Create new regressors
df['Pw-Pk'] = df['Pw'] - df['Pk']
df['Pf-Pk'] = df['Pf'] - df['Pk']
X = np.array(df[['Q', 'Pw-Pk', 'Pf-Pk']].T)
# Update the number of regressors
K, N = X.shape
def ols_restr(Y, X):
# Estimate parameters via OLS
res = ols(Y, X, 'White')
theta_ols, V, se_ols = res['beta'], res['V'], res['s']
# Append alpha_5 to the parameter vector
theta_ols = np.append(theta_ols, 1 - theta_ols[2:].sum())
# Parameter picker
m = np.array([0, 0, -1, -1])
# Append standard error of alpha_5
se_ols = np.append(se_ols, (np.dot(m, V).dot(m))**.5)
return theta_ols, se_ols
theta_ols, se_ols = ols_restr(Y, X)
print(theta_ols)
print(se_ols)
"""
Explanation: Estimation under restriction
The restricted model is
$$
\mathbb{E}\left[\log C\left|Q,P_{w},P_{f},P_{k}\right.\right]=\alpha_{1}+\alpha_{2}\log Q+\alpha_{3}\left(\log P_{w}-\log P_{k}\right)+\alpha_{4}\left(\log P_{f}-\log P_{k}\right).
$$
End of explanation
"""
%%time
CI_asy_hi = theta_ols + norm.ppf(.975) * se_ols
CI_asy_lo = theta_ols - norm.ppf(.975) * se_ols
print(theta_ols)
print(CI_asy_hi)
print(CI_asy_lo)
"""
Explanation: Asymptotic CI
End of explanation
"""
B = 100
theta_b = np.empty((B, K+2))
se_b = theta_b.copy()
for b in range(B):
Yb, Xb = resample(Y, X)
# Estimate restricted model using resampled data
theta_b[b], se_b[b] = ols_restr(Yb, Xb)
# Bootstrapped t-statistics
tstat_boot = np.abs(theta_b - theta_ols) / se_b
# 95% Quantile
q = np.percentile(tstat_boot, 95, axis=0)
CI_boo_hi = theta_ols + q * se_ols
CI_boo_lo = theta_ols - q * se_ols
print(theta_ols)
print(CI_boo_hi)
print(CI_boo_lo)
"""
Explanation: Bootstrap CI
End of explanation
"""
def nls_a7(df):
"""Estimation of non-linear model via concentration method.
The function returns only the best alpha_7. Other parameters are computed conditional on that.
"""
# Number of grid points
steps = 10
sum_e2 = []
b7 = np.linspace(np.percentile(df['Q'], 10), np.percentile(df['Q'], 90), steps)
theta_all, V_all = [], []
Y = np.array(df['C'])
df['Pw-Pk'] = df['Pw'] - df['Pk']
df['Pf-Pk'] = df['Pf'] - df['Pk']
for s in range(steps):
df['Z'] = df['Q'] / (1 + np.exp(b7[s] - df['Q']))
X = np.array(df[['Q', 'Pw-Pk', 'Pf-Pk', 'Z']].T)
res = ols(Y, X, 'White')
sum_e2.append(np.sum(res['e']**2))
theta7 = b7[np.argmin(sum_e2)]
return theta7, b7, sum_e2
"""
Explanation: Non-linear least squares
Additional term in the regression is $\alpha_6Z$, where
$$
Z=\frac{\log Q}{1+\exp\left\{ \alpha_{7}-\log Q\right\} }.
$$
End of explanation
"""
theta7, b7, sum_e2 = nls_a7(df)
plt.plot(b7, sum_e2)
plt.xlabel('b7')
plt.ylabel('sum(e^2)')
plt.axvline(b7[np.argmin(sum_e2)], color='red')
plt.show()
def nls(df, theta7):
"""Estimation of linear parameters given estimated non-linear parameter."""
# Additional non-linear regressor given theta7
df['Z'] = df['Q'] / (1 + np.exp(theta7 - df['Q']))
# All regressors
X = np.array(df[['Q', 'Pw-Pk', 'Pf-Pk', 'Z']].T)
# OLS estimation
res = ols(Y, X, 'White')
# The derivative of non-linear regressor with parameter theta7
df['Zprime'] = df['Z'] / (1 + np.exp(df['Q'] - theta7))
# First-order approximation to non-linear regression
M = np.array(df[['Q', 'Pw-Pk', 'Pf-Pk', 'Z', 'Zprime']].T)
# Add constant
M = np.concatenate((np.ones((1, N)), M), axis = 0)
# Find standard errors corresponding to NLS estimates
Qmm = np.dot(M, M.T)
Me = M * res['e']
Qmme = np.dot(Me, Me.T)
V = np.dot(inv(Qmm), Qmme).dot(inv(Qmm))
se = np.diag(V)**.5
# Augment parameter vector with theta7
theta = np.append(res['beta'], theta7)
# Insert alpha_5
theta = np.insert(theta, 4, 1 - theta[2:4].sum())
# Parameter picker
m = np.array([0, 0, -1, -1, 0, 0])
# Insert standard error of alpha_5
se = np.insert(se, 4, (np.dot(m, V).dot(m))**.5)
return theta, se
"""
Explanation: Use NLS to estimate parameters
End of explanation
"""
theta, se = nls(df, theta7)
CI_asy_hi = theta + norm.ppf(.975) * se
CI_asy_lo = theta - norm.ppf(.975) * se
print(theta)
print(CI_asy_hi)
print(CI_asy_lo)
"""
Explanation: Asymptotic CI
End of explanation
"""
def resample(df):
    """Resample the DataFrame rows randomly with replacement."""
    N = len(df)
    ind = np.random.choice(N, size=N)
    return df.iloc[ind]
%%time
B = 100
theta_b = np.empty((B, K+4))
se_b = theta_b.copy()
for b in range(B):
df_b = resample(df)
theta7, b7, sum_e2 = nls_a7(df_b)
theta_b[b], se_b[b] = nls(df_b, theta7)
tstat_boot = np.abs(theta_b - theta) / se_b
q = np.percentile(tstat_boot, 95, axis=0)
CI_boo_hi = theta + q * se
CI_boo_lo = theta - q * se
print(theta)
print(CI_boo_hi)
print(CI_boo_lo)
"""
Explanation: Bootstrap CI
End of explanation
"""
|
Pybonacci/notebooks | Joyas en la biblioteca estandar de Python (I).ipynb | bsd-2-clause | from collections import ChainMap
dict_a = {'a': 1, 'b': 10}
dict_b = {'b': 100, 'c': 1000}
cm = ChainMap(dict_a, dict_b)
for key, value in cm.items():
print(key, value)
"""
Explanation: The Python standard library contains true gems, many of them often ignored or forgotten. That is why I am starting a brief but intense tour through some pieces of art that come built in.
The collections module
With the help of this module you can augment the typical data structures available in Python (lists, tuples, dictionaries, ...). Let's look at some of the utilities it provides:
ChainMap
Python 3 only. Upgrade!!
Put bluntly, it is a conglomerate of dictionaries (also known as mappings or hash tables).
What it can be useful for:
Examples in the Python documentation.
Updating parts of a configuration.
Updating a dictionary in a way that remains reversible.
Usage examples on GitHub.
...
Example: imagine we have a configuration dictionary dict_a, with keys a and b, and we want to update its values with other key:value pairs found in a dictionary dict_b, which has keys b and c. We can do:
End of explanation
"""
cm = ChainMap(dict_b, dict_a)
for key, value in cm.items():
print(key, value)
"""
Explanation: We have added the value of key c from dict_b without having to modify our original configuration dictionary dict_a; in other words, we made a reversible 'change'. We can also 'overwrite' the keys of our original configuration dictionary with those of dict_b by swapping the constructor arguments:
End of explanation
"""
cm.maps
"""
Explanation: We see that, besides adding key c, we have also overwritten key b.
The original dictionaries remain available through the maps attribute:
End of explanation
"""
from io import StringIO
from collections import Counter
virtual_file = StringIO("""2010/01/01 2.7
2010/01/02 2.2
2010/01/03 2.1
2010/01/04 2.3
2010/01/05 2.4
2010/01/06 2.2
2010/01/02 2.2
2010/01/03 2.1
2010/01/04 2.3
""")
if Counter(virtual_file.readlines()).most_common(1)[0][1] > 1:
    print('file contains a duplicated date')
"""
Explanation: Exercise: do a dir of cm and a dir of dict_a and you will see that the available attributes and methods are similar.
More information in this stackoverflow thread, which I based the previous example on (aren't 'to base on' and 'to copy' synonyms?).
Counter
It lets you count occurrences in a simple way. In truth, its functionality could be achieved without trouble with a few extra lines of code, but since we already have it, tested and implemented by expert people, let's take advantage of it.
The official Python documentation has some interesting examples, and on GitHub you can find a few more. Let's see a simple but powerful example. I work a lot with meteorological data, and one of the recurring problems is duplicated dates that should not exist (but it happens far too often). A quick way to look for these problems in files and raise an alarm when what we are searching for occurs would be:
End of explanation
"""
import numpy as np
import datetime as dt
from pprint import pprint
datos = {
'valores': np.random.randn(100),
'frecuencia': dt.timedelta(minutes = 10),
'fecha_inicial': dt.datetime(2016, 1, 1, 0, 0),
'parametro': 'wind_speed',
'unidades': 'm/s'
}
pprint(datos)
"""
Explanation: namedtuple
Sometimes I need to create some kind of structure that stores data plus some metadata. A simple way to do it without writing an ad-hoc class would be to use a dictionary. A simple example:
End of explanation
"""
from collections import namedtuple
Datos = namedtuple('Datos', 'valores frecuencia fecha_inicial parametro unidades')
datos = Datos(np.random.randn(100),
dt.timedelta(minutes = 10),
dt.datetime(2016, 1, 1, 0, 0),
'wind_speed',
'm/s')
print(datos)
"""
Explanation: The above is simple and fast, but with a namedtuple I get something similar plus a few extras. Let's see a similar example using namedtuple:
End of explanation
"""
print(datos.valores)
"""
Explanation: Advantages I see compared to the previous approach:
I can access the 'fields' or dictionary keys using dot notation
End of explanation
"""
Datos = namedtuple('Datos', 'valores frecuencia fecha_inicial parametro unidades', verbose = True)
# Same as before
print(datos._source)
"""
Explanation: I can see the code used to create the data structure by passing verbose = True. It uses exec behind the scenes (o_O). I can see that all the keys become property's. I can see that documentation gets generated... pure MAGIC!!!
(If you don't want to use the verbose = True keyword, you can still access the source on an object via obj._source)
End of explanation
"""
datos._asdict()['valores']
"""
Explanation: I can still get a dictionary (an OrderedDict, also included in the collections module) if I wish:
End of explanation
"""
class DatosExtendidos(Datos):
def media(self):
"Calcula la media de los valores."
return self.valores.mean()
datos_ext = DatosExtendidos(**datos._asdict())
print(datos_ext.media())
"""
Explanation: I can easily create subclasses to add functionality. For example, we create a new class with a new method that computes the mean of the values:
End of explanation
"""
from collections import deque
dq = deque(range(10), maxlen = 10)
lst = list(range(10))
print(dq)
print(lst)
# the last three elements are moved back to the beginning of the sequence.
dq.rotate(3)
print(dq)
lst = lst[-3:] + lst[:-3]
print(lst)
"""
Explanation: WOW!!!!!
The examples in the official documentation are very powerful and spark new ideas for potential uses.
deque
Another little gem I should probably use more often is deque. It is a mutable sequence (similar to a list), but with a number of advantages. It is a queue/list whose beginning and end are 'indistinguishable', it is thread-safe, and it is designed for fast inserts and removals at both ends of the queue (we will see in a moment what all this means). An obvious use is, for example, treating a sequence as a data stream with a fixed and/or quickly updatable number of elements:
We can cap its size, and when we add elements on one side, those at the other end are dropped.
We can rotate the data efficiently.
...
Let's see an example:
End of explanation
"""
tmp = deque(range(100000), maxlen = 100000)
%timeit dq.rotate(30000)
tmp = list(range(100000))
%timeit tmp[-30000:] + tmp[:-30000]
"""
Explanation: Let's look at the efficiency of this operation:
End of explanation
"""
dq.append(100)
print(dq)
dq.appendleft(10000)
print(dq)
dq.extend(range(10))
print(dq)
dq.extendleft([10, 100])
print(dq)
"""
Explanation: With a deque we can efficiently append on both sides:
End of explanation
"""
|
smharper/openmc | examples/jupyter/nuclear-data.ipynb | mit | %matplotlib inline
import os
from pprint import pprint
import shutil
import subprocess
import urllib.request
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm
from matplotlib.patches import Rectangle
import openmc.data
"""
Explanation: In this notebook, we will go through the salient features of the openmc.data package in the Python API. This package enables inspection, analysis, and conversion of nuclear data from ACE files. Most importantly, the package provides a means to generate HDF5 nuclear data libraries that are used by the transport solver.
End of explanation
"""
openmc.data.atomic_mass('Fe54')
openmc.data.NATURAL_ABUNDANCE['H2']
openmc.data.atomic_weight('C')
"""
Explanation: Physical Data
Some very helpful physical data is available as part of openmc.data: atomic masses, natural abundances, and atomic weights.
End of explanation
"""
url = 'https://anl.box.com/shared/static/kxm7s57z3xgfbeq29h54n7q6js8rd11c.ace'
filename, headers = urllib.request.urlretrieve(url, 'gd157.ace')
# Load ACE data into object
gd157 = openmc.data.IncidentNeutron.from_ace('gd157.ace')
gd157
"""
Explanation: The IncidentNeutron class
The most useful class within the openmc.data API is IncidentNeutron, which stores to continuous-energy incident neutron data. This class has factory methods from_ace, from_endf, and from_hdf5 which take a data file on disk and parse it into a hierarchy of classes in memory. To demonstrate this feature, we will download an ACE file (which can be produced with NJOY 2016) and then load it in using the IncidentNeutron.from_ace method.
End of explanation
"""
total = gd157[1]
total
"""
Explanation: Cross sections
From Python, it's easy to explore (and modify) the nuclear data. Let's start off by reading the total cross section. Reactions are indexed using their "MT" number -- a unique identifier for each reaction defined by the ENDF-6 format. The MT number for the total cross section is 1.
End of explanation
"""
total.xs
"""
Explanation: Cross sections for each reaction can be stored at multiple temperatures. To see what temperatures are available, we can look at the reaction's xs attribute.
End of explanation
"""
total.xs['294K'](1.0)
"""
Explanation: To find the cross section at a particular energy, 1 eV for example, simply get the cross section at the appropriate temperature and then call it as a function. Note that our nuclear data uses eV as the unit of energy.
End of explanation
"""
total.xs['294K']([1.0, 2.0, 3.0])
"""
Explanation: The xs attribute can also be called on an array of energies.
End of explanation
"""
gd157.energy
energies = gd157.energy['294K']
total_xs = total.xs['294K'](energies)
plt.loglog(energies, total_xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
"""
Explanation: A quick way to plot cross sections is to use the energy attribute of IncidentNeutron. This gives an array of all the energy values used in cross section interpolation for each temperature present.
End of explanation
"""
pprint(list(gd157.reactions.values())[:10])
"""
Explanation: Reaction Data
Most of the interesting data for an IncidentNeutron instance is contained within the reactions attribute, which is a dictionary mapping MT values to Reaction objects.
End of explanation
"""
n2n = gd157[16]
print('Threshold = {} eV'.format(n2n.xs['294K'].x[0]))
"""
Explanation: Let's suppose we want to look more closely at the (n,2n) reaction. This reaction has an energy threshold
End of explanation
"""
n2n.xs
xs = n2n.xs['294K']
plt.plot(xs.x, xs.y)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
plt.xlim((xs.x[0], xs.x[-1]))
"""
Explanation: The (n,2n) cross section, like all basic cross sections, is represented by the Tabulated1D class. The energy and cross section values in the table can be directly accessed with the x and y attributes. Using x and y has the nice benefit of automatically accounting for reaction thresholds.
End of explanation
"""
n2n.products
neutron = n2n.products[0]
neutron.distribution
"""
Explanation: To get information on the energy and angle distribution of the neutrons emitted in the reaction, we need to look at the products attribute.
End of explanation
"""
dist = neutron.distribution[0]
dist.energy_out
"""
Explanation: We see that the neutrons emitted have a correlated angle-energy distribution. Let's look at the energy_out attribute to see what the outgoing energy distributions are.
End of explanation
"""
for e_in, e_out_dist in zip(dist.energy[::5], dist.energy_out[::5]):
plt.semilogy(e_out_dist.x, e_out_dist.p, label='E={:.2f} MeV'.format(e_in/1e6))
plt.ylim(top=1e-6)
plt.legend()
plt.xlabel('Outgoing energy (eV)')
plt.ylabel('Probability/eV')
plt.show()
"""
Explanation: Here we see we have a tabulated outgoing energy distribution for each incoming energy. Note that the same probability distribution classes that we could use to create a source definition are also used within the openmc.data package. Let's plot every fifth distribution to get an idea of what they look like.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
cm = matplotlib.cm.Spectral_r
# Determine size of probability tables
urr = gd157.urr['294K']
n_energy = urr.table.shape[0]
n_band = urr.table.shape[2]
for i in range(n_energy):
# Get bounds on energy
if i > 0:
e_left = urr.energy[i] - 0.5*(urr.energy[i] - urr.energy[i-1])
else:
e_left = urr.energy[i] - 0.5*(urr.energy[i+1] - urr.energy[i])
if i < n_energy - 1:
e_right = urr.energy[i] + 0.5*(urr.energy[i+1] - urr.energy[i])
else:
e_right = urr.energy[i] + 0.5*(urr.energy[i] - urr.energy[i-1])
for j in range(n_band):
# Determine maximum probability for a single band
max_prob = np.diff(urr.table[i,0,:]).max()
# Determine bottom of band
if j > 0:
xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1])
value = (urr.table[i,0,j] - urr.table[i,0,j-1])/max_prob
else:
xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j])
value = urr.table[i,0,j]/max_prob
# Determine top of band
if j < n_band - 1:
xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j])
else:
xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1])
# Draw rectangle with appropriate color
ax.add_patch(Rectangle((e_left, xs_bottom), e_right - e_left, xs_top - xs_bottom,
color=cm(value)))
# Overlay total cross section
ax.plot(gd157.energy['294K'], total.xs['294K'](gd157.energy['294K']), 'k')
# Make plot pretty and labeled
ax.set_xlim(1.0, 1.0e5)
ax.set_ylim(1e-1, 1e4)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Energy (eV)')
ax.set_ylabel('Cross section(b)')
"""
Explanation: Unresolved resonance probability tables
We can also look at unresolved resonance probability tables which are stored in a ProbabilityTables object. In the following example, we'll create a plot showing what the total cross section probability tables look like as a function of incoming energy.
End of explanation
"""
gd157.export_to_hdf5('gd157.h5', 'w')
"""
Explanation: Exporting HDF5 data
If you have an instance IncidentNeutron that was created from ACE or HDF5 data, you can easily write it to disk using the export_to_hdf5() method. This can be used to convert ACE to HDF5 or to take an existing data set and actually modify cross sections.
End of explanation
"""
gd157_reconstructed = openmc.data.IncidentNeutron.from_hdf5('gd157.h5')
np.all(gd157[16].xs['294K'].y == gd157_reconstructed[16].xs['294K'].y)
"""
Explanation: With few exceptions, the HDF5 file encodes the same data as the ACE file.
End of explanation
"""
h5file = h5py.File('gd157.h5', 'r')
main_group = h5file['Gd157/reactions']
for name, obj in sorted(list(main_group.items()))[:10]:
if 'reaction_' in name:
print('{}, {}'.format(name, obj.attrs['label'].decode()))
n2n_group = main_group['reaction_016']
pprint(list(n2n_group.values()))
"""
Explanation: And one of the best parts of using HDF5 is that it is a widely used format with lots of third-party support. You can use h5py, for example, to inspect the data.
End of explanation
"""
n2n_group['294K/xs'][()]
"""
Explanation: So we see that the hierarchy of data within the HDF5 mirrors the hierarchy of Python objects that we manipulated before.
End of explanation
"""
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157'
filename, headers = urllib.request.urlretrieve(url, 'gd157.endf')
# Load into memory
gd157_endf = openmc.data.IncidentNeutron.from_endf(filename)
gd157_endf
"""
Explanation: Working with ENDF files
In addition to being able to load ACE and HDF5 data, we can also load ENDF data directly into an IncidentNeutron instance using the from_endf() factory method. Let's download the ENDF/B-VII.1 evaluation for $^{157}$Gd and load it in:
End of explanation
"""
elastic = gd157_endf[2]
"""
Explanation: Just as before, we can get a reaction by indexing the object directly:
End of explanation
"""
elastic.xs
"""
Explanation: However, if we look at the cross section now, we see that it isn't represented as tabulated data anymore.
End of explanation
"""
elastic.xs['0K'](0.0253)
"""
Explanation: If you had Cython installed when you built/installed OpenMC, you should be able to evaluate resonant cross sections from ENDF data directly, i.e., OpenMC will reconstruct resonances behind the scenes for you.
End of explanation
"""
gd157_endf.resonances.ranges
"""
Explanation: When data is loaded from an ENDF file, there is also a special resonances attribute that contains resolved and unresolved resonance region data (from MF=2 in an ENDF file).
End of explanation
"""
[(r.energy_min, r.energy_max) for r in gd157_endf.resonances.ranges]
"""
Explanation: We see that $^{157}$Gd has a resolved resonance region represented in the Reich-Moore format as well as an unresolved resonance region. We can look at the min/max energy of each region by doing the following:
End of explanation
"""
# Create log-spaced array of energies
resolved = gd157_endf.resonances.resolved
energies = np.logspace(np.log10(resolved.energy_min),
np.log10(resolved.energy_max), 1000)
# Evaluate elastic scattering xs at energies
xs = elastic.xs['0K'](energies)
# Plot cross section vs energies
plt.loglog(energies, xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
"""
Explanation: With knowledge of the energy bounds, let's create an array of energies over the entire resolved resonance range and plot the elastic scattering cross section.
End of explanation
"""
resolved.parameters.head(10)
"""
Explanation: Resonance ranges also have a useful parameters attribute that shows the energies and widths for resonances.
End of explanation
"""
gd157.add_elastic_0K_from_endf('gd157.endf')
"""
Explanation: Heavy-nuclide resonance scattering
OpenMC has two methods for accounting for resonance upscattering in heavy nuclides, DBRC and RVS. These methods rely on 0 K elastic scattering data being present. If you have an existing ACE/HDF5 dataset and you need to add 0 K elastic scattering data to it, this can be done using the IncidentNeutron.add_elastic_0K_from_endf() method. Let's do this with our original gd157 object that we instantiated from an ACE file.
End of explanation
"""
gd157[2].xs
"""
Explanation: Let's check to make sure that we have both the room temperature elastic scattering cross section as well as a 0K cross section.
End of explanation
"""
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/H/2'
filename, headers = urllib.request.urlretrieve(url, 'h2.endf')
# Run NJOY to create deuterium data
h2 = openmc.data.IncidentNeutron.from_njoy('h2.endf', temperatures=[300., 400., 500.], stdout=True)
"""
Explanation: Generating data from NJOY
To run OpenMC in continuous-energy mode, you generally need to have ACE files already available that can be converted to OpenMC's native HDF5 format. If you don't already have suitable ACE files or need to generate new data, both the IncidentNeutron and ThermalScattering classes include from_njoy() methods that will run NJOY to generate ACE files and then read those files to create OpenMC class instances. The from_njoy() methods take as input the name of an ENDF file on disk. By default, it is assumed that you have an executable named njoy available on your path. This can be configured with the optional njoy_exec argument. Additionally, if you want to show the progress of NJOY as it is running, you can pass stdout=True.
Let's use IncidentNeutron.from_njoy() to run NJOY to create data for $^2$H using an ENDF file. We'll specify that we want data specifically at 300, 400, and 500 K.
End of explanation
"""
h2[2].xs
"""
Explanation: Now we can use our h2 object just as we did before.
End of explanation
"""
url = 'https://github.com/mit-crpg/WMP_Library/releases/download/v1.1/092238.h5'
filename, headers = urllib.request.urlretrieve(url, '092238.h5')
u238_multipole = openmc.data.WindowedMultipole.from_hdf5('092238.h5')
"""
Explanation: Note that 0 K elastic scattering data is automatically added when using from_njoy() so that resonance elastic scattering treatments can be used.
Windowed multipole
OpenMC can also be used with an experimental format called windowed multipole. Windowed multipole allows for analytic on-the-fly Doppler broadening of the resolved resonance range. Windowed multipole data can be downloaded with the openmc-get-multipole-data script. This data can be used in the transport solver, but it can also be used directly in the Python API.
End of explanation
"""
u238_multipole(1.0, 294)
"""
Explanation: The WindowedMultipole object can be called with energy and temperature values. Calling the object gives a tuple of 3 cross sections: elastic scattering, radiative capture, and fission.
End of explanation
"""
E = np.linspace(5, 25, 1000)
plt.semilogy(E, u238_multipole(E, 293.606)[1])
"""
Explanation: An array can be passed for the energy argument.
End of explanation
"""
E = np.linspace(6.1, 7.1, 1000)
plt.semilogy(E, u238_multipole(E, 0)[1])
plt.semilogy(E, u238_multipole(E, 900)[1])
"""
Explanation: The real advantage to multipole is that it can be used to generate cross sections at any temperature. For example, this plot shows the Doppler broadening of the 6.67 eV resonance between 0 K and 900 K.
End of explanation
"""
|
brettavedisian/phys202-2015-work | assignments/assignment12/FittingModelsEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Fitting Models Exercise 1
Imports
End of explanation
"""
a_true = 0.5
b_true = 2.0
c_true = -4.0
"""
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
"""
N = 30
xdata = np.linspace(-5, 5, N)
dy = 2
ydata = a_true*xdata**2 + b_true*xdata + c_true + np.random.normal(0.0, dy, size = N)
plt.figure(figsize=(8,6))
plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray')
plt.tick_params(axis='x', direction='out', top='off')
plt.tick_params(axis='y', direction='out', right='off')
plt.xlabel('x'), plt.ylabel('y'), plt.title('Random Quadratic Raw Data');
assert True # leave this cell for grading the raw data generation and plot
"""
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
"""
def model(x, a, b, c):
return a*x**2+b*x+c
theta_best, theta_cov = opt.curve_fit(model, xdata, ydata, sigma=dy)
print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))
xfit = np.linspace(-5.0,5.0)
yfit = theta_best[0]*xfit**2 + theta_best[1]*xfit + theta_best[2]
plt.figure(figsize=(8,6))
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy, fmt='.k', ecolor='lightgray')
plt.xlabel('x'), plt.ylabel('y'), plt.title('Random Quadratic Curve Fitted Data')
plt.tick_params(axis='x', direction='out', top='off')
plt.tick_params(axis='y', direction='out', right='off')
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
"""
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation
"""
|
esa-as/2016-ml-contest | geoLEARN/Submission_3_RF_FE.ipynb | apache-2.0 | ###### Importing all used packages
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
from pandas import set_option
# set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
###### Import packages needed for the make_vars functions
import Feature_Engineering as FE
##### import stuff from scikit learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score,LeavePGroupsOut, LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import confusion_matrix, make_scorer, f1_score, accuracy_score, recall_score, precision_score
filename = '../facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head()
training_data.describe()
"""
Explanation: Facies classification using Random Forest
Contest entry by <a href="https://geolern.github.io/index.html#">geoLEARN</a>:
<a href="https://github.com/mablou">Martin Blouin</a>, <a href="https://github.com/lperozzi">Lorenzo Perozzi</a> and <a href="https://github.com/Antoine-Cate">Antoine Caté</a> <br>
in collaboration with <a href="http://ete.inrs.ca/erwan-gloaguen">Erwan Gloaguen</a>
Original contest notebook by Brendon Hall, Enthought
In this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a Random Forest model to classify facies types.
Exploring the dataset
First, we import and examine the dataset used to train the classifier.
End of explanation
"""
##### cD From wavelet db1
dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cA From wavelet db1
dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cD From wavelet db3
dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### cA From wavelet db3
dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### From entropy
entropy_df = FE.make_entropy_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
l_foots=[2, 3, 4, 5, 7, 10])
###### From gradient
gradient_df = FE.make_gradient_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
dx_list=[2, 3, 4, 5, 6, 10, 20])
##### From rolling average
moving_av_df = FE.make_moving_av_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[1, 2, 5, 10, 20])
##### From rolling standard deviation
moving_std_df = FE.make_moving_std_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
##### From rolling max
moving_max_df = FE.make_moving_max_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3, 4, 5, 7, 10, 15, 20])
##### From rolling min
moving_min_df = FE.make_moving_min_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
###### From rolling NM/M ratio
rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=training_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200])
###### From distance to NM and M, up and down
dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=training_data)
dist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=training_data)
dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=training_data)
dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=training_data)
list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df,
entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df,
rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df]
combined_df = training_data
for var_df in list_df_var:
temp_df = var_df
combined_df = pd.concat([combined_df,temp_df],axis=1)
combined_df.replace(to_replace=np.nan, value=-1, inplace=True)  # numeric flag keeps the feature columns numeric
print (combined_df.shape)
combined_df.head(5)
"""
Explanation: A complete description of the dataset is given in the Original contest notebook by Brendon Hall, Enthought. A total of four measured rock properties and two interpreted geological properties are given as raw predictor variables for the prediction of the "Facies" class.
Feature engineering
As stated in our previous submission, we believe that feature engineering has a high potential for increasing classification success. A strategy for building new variables is explained below.
The dataset is distributed along a series of drillholes intersecting a stratigraphic sequence. Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, current strength, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to another environment.
Thus, there is a relationship between neighbouring samples, and the distribution of the data along drillholes can be as important as data values for predicting facies.
A series of new variables (features) are calculated and tested below to help represent the relationship of neighbouring samples and the overall texture of the data along drillholes. These variables are:
detail and approximation coeficients at various levels of two wavelet transforms (using two types of Daubechies wavelets);
measures of the local entropy with variable observation windows;
measures of the local gradient with variable observation windows;
rolling statistical calculations (i.e., mean, standard deviation, min and max) with variable observation windows;
ratios between marine and non-marine lithofacies with different observation windows;
distances from the nearest marine or non-marine occurrence uphole and downhole.
Functions used to build these variables are located in the Feature Engineering python script.
All the data exploration work related to the conception and study of these variables is not presented here.
End of explanation
"""
X = combined_df.iloc[:, 4:]
y = combined_df['Facies']
groups = combined_df['Well Name']
"""
Explanation: Building a prediction model from these variables
A Random Forest model is built here to test the effect of these new variables on the prediction power. Algorithm parameters have been tuned so as to take into account the non-stationarity of the training and testing sets using the LeaveOneGroupOut cross-validation strategy. The size of individual tree leaves and nodes has been increased to the maximum possible without significantly increasing the variance, so as to reduce the bias of the prediction.
Box plots for a series of scores obtained through cross-validation are presented below.
Create predictor and target arrays
End of explanation
"""
scoring_param = ['accuracy', 'recall_weighted', 'precision_weighted','f1_weighted']
scores = []
Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25,
min_samples_split=50, class_weight='balanced', random_state=42, n_jobs=-1)
lpgo = LeavePGroupsOut(n_groups=2)
for scoring in scoring_param:
cv=lpgo.split(X, y, groups)
validated = cross_val_score(Cl, X, y, scoring=scoring, cv=cv, n_jobs=-1)
scores.append(validated)
scores = np.array(scores)
scores = np.swapaxes(scores, 0, 1)
scores = pd.DataFrame(data=scores, columns=scoring_param)
sns.set_style('white')
fig,ax = plt.subplots(figsize=(8,6))
sns.boxplot(data=scores)
plt.xlabel('scoring parameters')
plt.ylabel('score')
plt.title('Classification scores for tuned parameters');
"""
Explanation: Estimation of validation scores from this tuning
End of explanation
"""
####### Evaluation of feature importances
Cl = RandomForestClassifier(n_estimators=75, max_features=0.1, min_samples_leaf=25,
min_samples_split=50, class_weight='balanced', random_state=42,oob_score=True, n_jobs=-1)
Cl.fit(X, y)
print ('OOB estimate of accuracy for facies classification using all features: %s' % str(Cl.oob_score_))
importances = Cl.feature_importances_
std = np.std([tree.feature_importances_ for tree in Cl.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
Vars = list(X.columns.values)
for f in range(X.shape[1]):
print("%d. feature %d %s (%f)" % (f + 1, indices[f], Vars[indices[f]], importances[indices[f]]))
"""
Explanation: Evaluating feature importances
The individual contribution to the classification for each feature (i.e., feature importances) can be obtained from a Random Forest classifier. This gives a good idea of the classification power of individual features and helps understanding which type of feature engineering is the most promising.
Caution should be taken when interpreting feature importances, as highly correlated variables will tend to dilute their classification power between themselves and will rank lower than uncorrelated variables.
End of explanation
"""
sns.set_style('white')
fig,ax = plt.subplots(figsize=(15,5))
ax.bar(range(X.shape[1]), importances[indices],color="r", align="center")
plt.ylabel("Feature importance")
plt.xlabel('Ranked features')
plt.xticks([], indices)
plt.xlim([-1, X.shape[1]]);
"""
Explanation: Plot the feature importances of the forest
End of explanation
"""
######## Confusion matrix from this tuning
cv=LeaveOneGroupOut().split(X, y, groups)
y_pred = cross_val_predict(Cl, X, y, cv=cv, n_jobs=-1)
conf_mat = confusion_matrix(y, y_pred)
list_facies = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
conf_mat = pd.DataFrame(conf_mat, columns=list_facies, index=list_facies)
conf_mat.head(10)
"""
Explanation: Features derived from raw geological variables tend to have the highest classification power. Rolling min, max and mean tend to have better classification power than raw data. Wavelet approximation coefficients tend to have a similar or lower classification power than raw data. Features expressing the local texture of the data (entropy, gradient, standard deviation and wavelet detail coefficients) have a low classification power but still participate in the prediction.
Confusion matrix
The confusion matrix from the validation test is presented below.
End of explanation
"""
filename = '../validation_data_nofacies.csv'
test_data = pd.read_csv(filename)
test_data.head(5)
##### cD From wavelet db1
dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cA From wavelet db1
dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cD From wavelet db3
dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### cA From wavelet db3
dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### From entropy
entropy_df = FE.make_entropy_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
l_foots=[2, 3, 4, 5, 7, 10])
###### From gradient
gradient_df = FE.make_gradient_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
dx_list=[2, 3, 4, 5, 6, 10, 20])
##### From rolling average
moving_av_df = FE.make_moving_av_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[1, 2, 5, 10, 20])
##### From rolling standard deviation
moving_std_df = FE.make_moving_std_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
##### From rolling max
moving_max_df = FE.make_moving_max_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3, 4, 5, 7, 10, 15, 20])
##### From rolling min
moving_min_df = FE.make_moving_min_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
###### From rolling NM/M ratio
rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=test_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200])
###### From distance to NM and M, up and down
dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=test_data)
dist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=test_data)
dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=test_data)
dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=test_data)
combined_test_df = test_data
list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df,
entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df,
rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df]
for var_df in list_df_var:
temp_df = var_df
combined_test_df = pd.concat([combined_test_df,temp_df],axis=1)
combined_test_df.replace(to_replace=np.nan, value=-99999, inplace=True)  # numeric flag keeps the feature columns numeric
X_test = combined_test_df.iloc[:, 3:]
print (combined_test_df.shape)
combined_test_df.head(5)
Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25,
min_samples_split=50, class_weight='balanced', random_state=42, n_jobs=-1)
Cl.fit(X, y)
y_test = Cl.predict(X_test)
y_test = pd.DataFrame(y_test, columns=['Predicted Facies'])
test_pred_df = pd.concat([combined_test_df[['Well Name', 'Depth']], y_test], axis=1)
test_pred_df.head()
"""
Explanation: Applying the classification model to test data
End of explanation
"""
test_pred_df.to_pickle('Prediction_blind_wells_RF_c.pkl')
"""
Explanation: Exporting results
End of explanation
"""
|
dipanjank/ml | simple_implementations/Flavours_of_Gradient_Descent.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
def L(x):
return x**2 - 2*x + 1
def L_prime(x):
return 2*x - 2
def converged(x_prev, x, epsilon):
"Return True if the abs value of all elements in x-x_prev are <= epsilon."
absdiff = np.abs(x-x_prev)
return np.all(absdiff <= epsilon)
def gradient_descent(f_prime, x_0, learning_rate=0.2, n_iters=100, epsilon=1E-8):
x = x_0
for _ in range(n_iters):
x_prev = x
x -= learning_rate*f_prime(x)
if converged(x_prev, x, epsilon):
break
return x
x_min = gradient_descent(L_prime, 2)
print('Minimum value of L(x) = x**2 - 2*x + 1.0 is [%.2f] at x = [%.2f]' % (L(x_min), x_min))
"""
Explanation: Flavours of Gradient Descent
A quick recap of the Gradient Descent method: this is an iterative algorithm to minimize a loss function $L(x)$, where we start with a guess of what the answer should be and then take steps proportional to the negative of the gradient at the current point.
$x = x_0$ (initial guess)
Until Convergence is achieved:
$x_{i+1} = x_{i} - \eta\nabla L(x_i)$
For example, let's say $L(x) = x^2 - 2x + 1$ and we start at $x_0 = 2$. Coding the Gradient Descent method in Python:
End of explanation
"""
import seaborn as sns
import pandas as pd
iris_df = sns.load_dataset('iris')
print('Columns: %s' % (iris_df.columns.values, ))
print('Labels: %s' % (pd.unique(iris_df['species']), ))
iris_df.head(5)
"""
Explanation: Batch Gradient Descent
In most supervised ML applications, we will try to learn a pattern from a number of labeled examples. In Batch Gradient Descent, each iteration loops over the entire set of examples.
So, let's build a 1-layer network of Linear Perceptrons to classify Fisher's IRIS dataset (again!). Remember that a Linear Perceptron can only distinguish between two classes.
<table>
<tr>
<td><img src="http://blog.zabarauskas.com/img/perceptron.gif"></td>
<td><img src="http://cmp.felk.cvut.cz/cmp/courses/recognition/Labs/perceptron/images/linear.png" />
</tr>
</table>
Since there are 3 classes, our mini-network will have 3 Perceptrons. We'll channel the output
of each Perceptron, $w_i^Tx + b$, into a softmax function to pick the final label. We'll train this network using Batch Gradient Descent.
Getting Data
End of explanation
"""
def softmax(x):
# Uncomment to find out why we shouldn't do it this way...
# return np.exp(x) / np.sum(np.exp(x))
scaled_x = x - np.max(x)
result = np.exp(scaled_x) / np.sum(np.exp(scaled_x))
return result
a = np.array([-500.9, 2000, 7, 11, 12, -15, 100])
sm_a = softmax(a)
print('Softmax(%s) = %s' % (a, sm_a))
"""
Explanation: The Softmax Function
The softmax function turns an arbitrary set of values $(v_1, v_2, ..., v_n)$, which need not satisfy the properties of a probability distribution, into one that does:
$v_i \geq 0$
$\sum_{i=1}^n v_i = 1$
The resulting distribution is the Gibbs distribution: $v'_i = \frac{\exp(v_i)}{\sum_{j=1}^n \exp(v_j)}$ for $i = 1, 2, \ldots, n$.
End of explanation
"""
def encode_1_of_n(ordered_labels, y):
label2idx = dict((label, idx)
for idx, label in enumerate(ordered_labels))
def encode_one(y_i):
enc = np.zeros(len(ordered_labels))
enc[label2idx[y_i]] = 1.0
return enc
return np.array([x for x in map(encode_one, y)])
encode_1_of_n(['apple', 'banana', 'orange'],
['apple', 'banana', 'orange', 'apple', 'apple'])
"""
Explanation: Non-linear Perceptron With SoftMax
With softmax, we typically use the cross-entropy error as the function to minimize.
The Cross Entropy Error for a given input $X = (x_1, x_2, ..., x_n)$, where each $x_i$ is a vector, is given by:
$L(x) = - \frac{1}{n} \sum_{i=1}^n y_i^T \log(\hat{y_i})$
Where
The sum runs over $X = (x_1, x_2, ..., x_n)$.
Each $y_i$ is the 1-of-n encoded label of the $i$-th example, so it's also a vector. For example, if the labels in order are ('apple', 'banana', 'orange') and the label of $x_i$ is 'banana', then $y_i = [0, 1, 0]$.
$\hat{y_i}$ is the softmax output for $x_i$ from the network.
The term $y_i^T \log(\hat{y_i})$ is the vector dot product between $y_i$ and $\log(\hat{y_i})$.
One of n Encoding
End of explanation
"""
def cross_entropy_loss(Y, Y_hat):
entropy_sum = 0.0
log_Y_hat = np.log(Y_hat)
for y, y_hat in zip(Y, log_Y_hat):
entropy_sum += np.dot(y, y_hat)
return -entropy_sum/Y.shape[0]
Y_tst = np.array([[1, 0, 0],
[0, 1, 0]])
# log(Y_hat_tst1) is the same as Y_tst, so we expect the x-entropy error to be the min (-1) in this case.
print(Y_tst)
Y_hat_tst1 = np.array([[np.e, 1, 1,],
[1, np.e, 1]])
print(Y_hat_tst1)
print(cross_entropy_loss(Y_tst, Y_hat_tst1))
print()
# expect it to be > -1
Y_hat_tst2 = np.array([[1, 1, 1,],
[1, np.e, 1]])
print(Y_hat_tst2)
print(cross_entropy_loss(Y_tst, Y_hat_tst2))
print()
"""
Explanation: Cross Entropy Error
End of explanation
"""
import pandas as pd
class OneLayerNetworkWithSoftMax:
def __init__(self):
self.w, self.bias = None, 0.0
self.optimiser = None
self.output = None
def init_weights(self, X, Y):
"""
Initialize a 2D weight matrix as a Dataframe with
dim(n_labels*n_features).
"""
self.labels = np.unique(Y)
w_init = np.random.randn(len(self.labels), X.shape[1])
self.w = pd.DataFrame(data=w_init)
self.w.index.name = 'node_id'
def predict(self, x):
"""
Return the predicted label of x using current weights.
"""
output = self.forward(x, update=False)
max_label_idx = np.argmax(output)
return self.labels[max_label_idx]
def forward(self, x, update=True):
"""
Calculate softmax(w^Tx+b) for x using current $w_i$ s.
"""
#output = self.w.apply(lambda row: np.dot(row, x), axis=1)
output = np.dot(self.w, x)
output += self.bias
output = softmax(output)
if update:
self.output = output
return output
def backward(self, x, y, learning_rate):
"""
Compute the weight update for one sample from the softmax
cross-entropy gradient:
dw = learning_rate * outer(self.output - y, x)
The caller subtracts the returned dw (averaged over a batch) from w.
:param x: one sample vector.
:param y: One-hot encoded label for x.
"""
# [y_hat1 - y1, y_hat2-y2, ... ]
y_hat_min_y = self.output - y
# Transpose the above to a column vector
# and then multiply x with each element
# to produce a 2D array (n_labels*n_features), same as w
error_grad = np.apply_along_axis(lambda z: z*x ,
1, np.atleast_2d(y_hat_min_y).T)
dw = learning_rate * error_grad
return dw
def print_weight_diff(self, i, w_old, diff_only=True):
if not diff_only:
print('Before Iteration [%s]: weights are: \n%s' %
(i+1, w_old))
print('After Iteration [%s]: weights are: \n%s' %
(i+1, self.w))
w_diff = np.abs(w_old - self.w)
print('After Iteration [%s]: weights diff: \n%s' %
(i+1, w_diff))
def _gen_minibatch(self, X, Y, mb_size):
"""Generates `mb_size` sized chunks from X and Y."""
n_samples = X.shape[0]
indices = np.arange(n_samples)
np.random.shuffle(indices)
for start in range(0, n_samples, mb_size):
yield X[start:start+mb_size, :], Y[start:start+mb_size, :]
def _update_batch(self, i, X_batch, Y_batch, learning_rate, print_every=100):
w_old = self.w.copy()
dw = []
for x, y in zip(X_batch, Y_batch):
self.forward(x)
dw_item = self.backward(x, y, learning_rate)
dw.append(dw_item)
dw_batch = np.mean(dw, axis=0)
self.w -= dw_batch
if (i == 0) or ((i+1) % print_every == 0):
self.print_weight_diff(i, w_old)
def train(self, X, Y,
n_iters=1000,
learning_rate=0.2,
minibatch_size=30,
epsilon=1E-8):
"""
Entry point for the Minibatch SGD training method.
Calls forward+backward for each (x_i, y_i) pair and adjusts the
weight w accordingly.
"""
self.init_weights(X, Y)
Y = encode_1_of_n(self.labels, Y)
n_samples = X.shape[0]
# MiniBatch SGD
for i in range(n_iters):
for X_batch, Y_batch in self._gen_minibatch(X, Y, minibatch_size):
self._update_batch(i, X_batch, Y_batch, learning_rate)
# Set aside test data
label_grouper = iris_df.groupby('species')
test = label_grouper.head(10).set_index('species')
train = label_grouper.tail(40).set_index('species')  # 50 rows per class, so tail(40) excludes the 10 test rows
# Train the Network
X_train, Y_train = train.to_numpy(), train.index.values
nn = OneLayerNetworkWithSoftMax()
nn.train(X_train, Y_train)
# Test
results = test.apply(lambda row : nn.predict(row.to_numpy()), axis=1)
results.name = 'predicted_label'
results.index.name = 'expected_label'
results.reset_index()
"""
Explanation: Gradient of the Cross Entropy Error
The Gradient update step in Gradient Descent when the Loss Function uses Cross Entropy Error is:
$w_i^{j+1} = w_i^{j} - \eta [\frac {\partial L} {\partial w_i}]^{j}$
End of explanation
"""
import networkx as nx
from matplotlib import pylab
G = nx.DiGraph()
G.add_edges_from(
[('i', 'n1'),
('i', 'n2'),
('n1', 's1'),
('n2', 's1'),
('n1', 's2'),
('n2', 's2'),
('s1', 'y1'),
('s2', 'y2'),
])
pos = {'i': (1, 1),
'n1': (2, 0), 'n2': (2, 2),
's1': (3, 0), 's2': (3, 2),
'y1': (4, 0), 'y2': (4, 2),
}
labels = {'i': r'$x_i$',
'n1': r'$w_1$', 'n2': r'$w_2$',
's1': r'$s_1$', # r'$\frac {\exp(z_{i1})} {S_i}$',
's2': r'$s_2$', # r'$\frac {\exp(z_{i2})} {S_i}$'
}
edge_labels = {('i', 'n1'): r'$x_i$',
('i', 'n2'): r'$x_i$',
('n1', 's1'): r'$w_1^Tx_i$',
('n1', 's2'): r'$w_1^Tx_i$',
('n2', 's1'): r'$w_2^Tx_i$',
('n2', 's2'): r'$w_2^Tx_i$',
('n2', 's1'): r'$w_2^Tx_i$',
('s1', 'y1'): r'$\frac {\exp(z_{i1})} {S_i}$',
('s2', 'y2'): r'$\frac {\exp(z_{i2})} {S_i}$',
}
nx.draw(G, pos=pos, node_size=1000)
nx.draw_networkx_labels(G,pos,labels, font_size=15, color='white')
nx.draw_networkx_edge_labels(G, pos=pos,
edge_labels=edge_labels, font_size=15)
"""
Explanation: Gradient of the Cross Entropy Error
Recap: we know the cross-entropy error is the average of the dot products between the one-hot encoding of the label and the log of the softmax output:
$L = - \frac{1}{n} \sum_{i=1}^n Y_i^T \ln(\hat Y_i)$
Where the sum runs over all $n$ input samples.
This is a complex derivation, and we need to approach it step by step. First, let's work out what the $i$-th sample contributes to the gradient of $L$, i.e. the derivative of $-Y_i^T \ln(\hat Y_i)$.
Let's draw the structure of the network using networkx for a 2-class problem, so there are two weight (perceptron) nodes feeding two softmax outputs.
End of explanation
"""
|
fonnesbeck/scientific-python-workshop | notebooks/Model Selection and Validation.ipynb | cc0-1.0 | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
import warnings
warnings.simplefilter("ignore")
salmon = pd.read_table("../data/salmon.dat", sep=r'\s+', index_col=0)
salmon.plot(x='spawners', y='recruits', kind='scatter')
"""
Explanation: Model Selection and Validation
As with Bayesian inference, model selection and validation are fundamental steps in statistical learning applications. In particular, we wish to select the model that performs optimally, both with respect to the training data and to external data.
Depending on the type of learning method we use, we may be interested in one or more of the following:
how many variables should be included in the model?
what hyperparameter values should be used in fitting the model?
how many groups should we use to cluster our data?
Givens and Hoeting (2012) includes a dataset for salmon spawning success. If we plot the number of recruits against the number of spawners, we see a distinct positive relationship, as we would expect. The question is, what order of polynomial best describes this relationship?
End of explanation
"""
fig, axes = plt.subplots(1, 2, figsize=(14,6))
xvals = np.arange(salmon.spawners.min(), salmon.spawners.max())
fit1 = np.polyfit(salmon.spawners, salmon.recruits, 1)
p1 = np.poly1d(fit1)
axes[0].plot(xvals, p1(xvals))
axes[0].scatter(x=salmon.spawners, y=salmon.recruits)
fit15 = np.polyfit(salmon.spawners, salmon.recruits, 15)
p15 = np.poly1d(fit15)
axes[1].plot(xvals, p15(xvals))
axes[1].scatter(x=salmon.spawners, y=salmon.recruits)
"""
Explanation: At one extreme, a linear relationship underfits the data; at the other, we see that including a very large number of polynomial terms clearly overfits.
End of explanation
"""
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(salmon.spawners,
salmon.recruits, test_size=0.3)
"""
Explanation: We can select an appropriate polynomial order for the model using cross-validation, in which we hold out a testing subset from our dataset, fit the model to the remaining data, and evaluate its performance on the held-out subset.
End of explanation
"""
def rmse(x, y, coefs):
yfit = np.polyval(coefs, x)
return np.sqrt(np.mean((y - yfit) ** 2))
"""
Explanation: A natural criterion to evaluate model performance is root mean square error.
End of explanation
"""
degrees = np.arange(14)
train_err = np.zeros(len(degrees))
validation_err = np.zeros(len(degrees))
for i, d in enumerate(degrees):
p = np.polyfit(xtrain, ytrain, d)
train_err[i] = rmse(xtrain, ytrain, p)
validation_err[i] = rmse(xtest, ytest, p)
fig, ax = plt.subplots()
ax.plot(degrees, validation_err, lw=2, label = 'cross-validation error')
ax.plot(degrees, train_err, lw=2, label = 'training error')
ax.legend(loc=0)
ax.set_xlabel('degree of fit')
ax.set_ylabel('rms error')
"""
Explanation: We can now evaluate the model at varying polynomial degrees, and compare their fit.
End of explanation
"""
aic = lambda rss, n, k: n*np.log(float(rss)/n) + 2*k
"""
Explanation: In the cross-validation above, notice that the testing error is high for both very low and very high polynomial degrees, while training error declines monotonically with degree. The cross-validation error is composed of two components: bias and variance. When a model is underfit, bias is high but variance is low; when a model is overfit, the reverse is true.
One can show that the MSE decomposes into a sum of the bias (squared) and variance of the estimator:
$$\begin{aligned}
\text{Var}(\hat{\theta}) &= E[\hat{\theta} - \theta]^2 - (E[\hat{\theta} - \theta])^2 \\
\Rightarrow E[\hat{\theta} - \theta]^2 &= \text{Var}(\hat{\theta}) + \text{Bias}(\hat{\theta})^2
\end{aligned}$$
The training error, on the other hand, does not have this tradeoff; it will always decrease (or at least, never increase) as variables (polynomial terms) are added to the model.
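We can make this tradeoff concrete with a quick simulation (synthetic data, not the salmon dataset): repeatedly refit low- and high-degree polynomials to noisy draws from a known function, and inspect the squared bias and variance of the prediction at a single test point.

```python
import numpy as np

# Hypothetical illustration: the true function is sin(x); refit polynomials
# to many noisy samples and summarize predictions at a single point x0.
rng = np.random.default_rng(0)
true_f = np.sin
x = np.linspace(0, 3, 20)
x0, n_sims = 1.5, 200

results = {}
for degree in (1, 10):
    preds = np.array([
        np.polyval(np.polyfit(x, true_f(x) + rng.normal(0, 0.3, x.size), degree), x0)
        for _ in range(n_sims)
    ])
    results[degree] = {'bias_sq': (preds.mean() - true_f(x0)) ** 2,  # squared bias
                       'variance': preds.var()}                       # prediction variance
print(results)
```

The high-degree fit chases the noise in each sample, so its predictions vary far more from sample to sample than the linear fit's.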
Information-theoretic Model Selection
One approach to model selection is to use an information-theoretic criterion to identify the most appropriate model. Akaike (1973) found a formal relationship between Kullback-Leibler information (a dominant paradigm in information and coding theory) and likelihood theory. Akaike's Information Criterion (AIC) is an estimator of expected relative K-L information based on the maximized log-likelihood function, corrected for asymptotic bias.
$$\text{AIC} = -2 \log(L(\theta|\text{data})) + 2k$$
AIC balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC from the residual sums of squares as:
$$\text{AIC} = n \log(\text{RSS}/n) + 2k$$
where $k$ is the number of parameters in the model. Notice that as the number of parameters increase, the residual sum of squares goes down, but the second term (a penalty) increases.
To apply AIC to a model selection problem, we choose the model that has the lowest AIC value.
AIC can be shown to be asymptotically equivalent to leave-one-out cross-validation.
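As a sketch, AIC for a polynomial fit can be computed directly from the residual sum of squares (the data here are synthetic, purely for illustration):

```python
import numpy as np

def aic(rss, n, k):
    """AIC from the residual sum of squares, assuming Gaussian errors."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = 2 + 0.5 * x + rng.normal(0, 1, x.size)  # truly linear data

for degree in (1, 2, 6):
    coefs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coefs, x)) ** 2)
    # higher degrees lower RSS slightly, but pay a 2k penalty
    print(degree, aic(rss, x.size, degree + 1))
```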
End of explanation
"""
bodyfat = pd.read_table("../data/bodyfat.dat", sep='\s+')
bodyfat.head()
"""
Explanation: As an example, consider the body fat dataset that is used in Chapter 12 of Givens and Hoeting (2012). It measures the percentage of body fat for 251 men, estimated using an underwater weighing technique. In addition, the variables age, weight, height, and ten body circumference measurements were recorded for each subject.
End of explanation
"""
subsets = [['weight', 'height', 'neck', 'chest', 'abd', 'hip', 'thigh',
'knee', 'ankle', 'biceps'],
['weight', 'height', 'neck', 'chest', 'abd', 'hip', 'thigh',
'knee'],
['weight', 'height', 'neck', 'chest', 'abd', 'hip'],
['weight', 'height', 'neck'],
['weight']]
fit = pd.ols(y=bodyfat['fat'], x=bodyfat[subsets[3]])
fit
k0 = len(subsets[0])
aic_values = np.zeros(len(subsets))
params = np.zeros((len(subsets), k0))
for i, s in enumerate(subsets):
x = bodyfat[s]
y = bodyfat['fat']
fit = pd.ols(y=y, x=x)
aic_values[i] = fit.sm_ols.aic
params[i, :len(s)] = fit.beta[:-1]
plt.plot(aic_values, 'ro')
plt.xlabel('model')
plt.gca().set_xticks(np.arange(5))
plt.ylabel('AIC')
p_best = params[np.where(aic_values==aic_values.min())]
p_best.round(2)
aic_values
"""
Explanation: To illustrate model selection, we will consider 5 competing models consisting of different subsets of available covariates.
End of explanation
"""
aic_trans = np.exp(-0.5*(aic_values - aic_values.min()))
aic_probs = aic_trans/aic_trans.sum()
aic_probs.round(2)
"""
Explanation: For ease of interpretation, AIC values can be transformed into model weights via:
$$w_i = \frac{\exp(-\frac{1}{2} \Delta \text{AIC}_i)}{\sum_{m=1}^M \exp(-\frac{1}{2} \Delta \text{AIC}_m)}$$
End of explanation
"""
p_weighted = ((params.T * aic_probs).T).sum(0)
p_weighted.round(2)
"""
Explanation: For some problems, we can use AIC weights to perform multimodel inference, whereby we use model weights to calculate model-averaged parameter estimates, thereby accounting for model selection uncertainty.
End of explanation
"""
from sklearn.cross_validation import cross_val_score, KFold
nfolds = 5
fig, axes = plt.subplots(1, nfolds, figsize=(14,4))
for i, fold in enumerate(KFold(len(salmon), n_folds=nfolds,
shuffle=True)):
training, validation = fold
y, x = salmon.values[training].T
axes[i].plot(x, y, 'ro')
y, x = salmon.values[validation].T
axes[i].plot(x, y, 'bo')
plt.tight_layout()
k = 5
degrees = np.arange(8)
k_fold_err = np.empty(len(degrees))
for i, d in enumerate(degrees):
error = np.empty(k)
#for j, fold in enumerate(gen_k_folds(salmon, k)):
for j, fold in enumerate(KFold(len(salmon), n_folds=k)):
training, validation = fold
y_train, x_train = salmon.values[training].T
y_test, x_test = salmon.values[validation].T
p = np.polyfit(x_train, y_train, d)
error[j] = rmse(x_test, y_test, p)
k_fold_err[i] = error.mean()
fig, ax = plt.subplots()
ax.plot(degrees, k_fold_err, lw=2)
ax.set_xlabel('degree of fit')
ax.set_ylabel('average rms error')
"""
Explanation: K-fold Cross-validation
In k-fold cross-validation, the training set is split into k smaller sets. Then, for each of the k "folds":
a model is trained on k-1 of the folds, used as training data
the model is validated on the remaining fold, using an appropriate metric
The performance measure reported by k-fold CV is then the average of the k computed values. This approach can be computationally expensive, but does not waste too much data, which is an advantage over having a fixed test subset.
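A NumPy-only sketch of the procedure (the fold-splitting helper below is hypothetical, written for illustration; scikit-learn's KFold, used in the next cell, does this for us):

```python
import numpy as np

def kfold_rmse(x, y, degree, k=5, seed=0):
    """Average test RMSE of a degree-d polynomial over k shuffled folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coefs = np.polyfit(x[train], y[train], degree)
        resid = y[test] - np.polyval(coefs, x[test])
        errors.append(np.sqrt(np.mean(resid ** 2)))
    return np.mean(errors)

x = np.linspace(0, 5, 100)
y = 1.0 + 2.0 * x  # noiseless line: degree 1 should fit almost perfectly
print(kfold_rmse(x, y, degree=1))
```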
End of explanation
"""
from sklearn.ensemble import BaggingRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
X,y = salmon.values.T
br = BaggingRegressor(LinearRegression(), oob_score=True)
X2 = PolynomialFeatures(degree=2).fit_transform(X[:, None])
br.fit(X2, y)
"""
Explanation: If the model shows high bias, the following actions might help:
Add more features. In our example of predicting home prices,
it may be helpful to make use of information such as the neighborhood
the house is in, the year the house was built, the size of the lot, etc.
Adding these features to the training and test sets can improve
a high-bias estimator
Use a more sophisticated model. Adding complexity to the model can
help improve on bias. For a polynomial fit, this can be accomplished
by increasing the degree d. Each learning technique has its own
methods of adding complexity.
Decrease regularization. Regularization is a technique used to impose
simplicity in some machine learning models, by adding a penalty term that
depends on the characteristics of the parameters. If a model has high bias,
decreasing the effect of regularization can lead to better results.
If the model shows high variance, the following actions might help:
Use fewer features. Using a feature selection technique may be
useful, and decrease the over-fitting of the estimator.
Use a simpler model. Model complexity and over-fitting go hand-in-hand.
Use more training samples. Adding training samples can reduce
the effect of over-fitting, and lead to improvements in a high
variance estimator.
Increase regularization. Regularization is designed to prevent
over-fitting. In a high-variance model, increasing regularization
can lead to better results.
Bootstrap aggregating regression
Splitting datasets into training, cross-validation and testing subsets is inefficient, particularly when the original dataset is not large. As an alternative, we can use bootstrapping to both develop and validate our model without dividing our dataset. One algorithm to facilitate this is the bootstrap aggregation (or bagging) algorithm.
A bagging regressor is an ensemble meta-estimator that fits base regressors on random subsets of the original dataset and then aggregates their individual predictions (either by voting or by averaging) to form a final prediction.
End of explanation
"""
br.oob_score_
scores = []
for d in degrees:
Xd = PolynomialFeatures(degree=d).fit_transform(X[:, None])
br = BaggingRegressor(LinearRegression(), oob_score=True)
br.fit(Xd, y)
scores.append(br.oob_score_)
plt.plot(scores)
"""
Explanation: In order to evaluate a particular model, the samples that were not selected for a particular resampled dataset (the out-of-bag sample) can be used to estimate the generalization error.
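Where the out-of-bag sample comes from can be sketched directly: each bootstrap resample draws n indices with replacement, and the indices never drawn form the OOB set (roughly 1/e, about 37% of the data, for large n). The helper below is an illustration, not scikit-learn's internal machinery.

```python
import numpy as np

def bootstrap_oob_indices(n, seed=0):
    rng = np.random.default_rng(seed)
    boot = rng.integers(0, n, size=n)       # sample with replacement
    oob = np.setdiff1d(np.arange(n), boot)  # indices never drawn
    return boot, oob

boot, oob = bootstrap_oob_indices(1000)
print(len(oob) / 1000)  # typically close to 0.37
```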
End of explanation
"""
from sklearn import datasets
# Predictors: "age" "sex" "bmi" "map" "tc" "ldl" "hdl" "tch" "ltg" "glu"
diabetes = datasets.load_diabetes()
"""
Explanation: Regularization
The scikit-learn package includes a built-in dataset of diabetes progression, taken from Efron et al. (2003), which includes a set of 10 normalized predictors.
End of explanation
"""
diabetes['data'].shape
from sklearn import cross_validation, linear_model
def plot_learning_curve(estimator, label=None):
scores = list()
train_sizes = np.linspace(10, 200, 10).astype(np.int)
for train_size in train_sizes:
test_error = cross_validation.cross_val_score(estimator, diabetes['data'], diabetes['target'],
cv=cross_validation.ShuffleSplit(train_size=train_size,
test_size=200,
n=len(diabetes['target']),
random_state=0)
)
scores.append(test_error)
plt.plot(train_sizes, np.mean(scores, axis=1), label=label or estimator.__class__.__name__)
plt.ylim(0, 1)
plt.ylabel('Explained variance on test set')
plt.xlabel('Training set size')
plt.legend(loc='best')
plot_learning_curve(linear_model.LinearRegression())
"""
Explanation: Let's examine how a linear regression model performs across a range of sample sizes.
End of explanation
"""
from sklearn import preprocessing
k = diabetes['data'].shape[1]
alphas = np.linspace(0, 4)
params = np.zeros((len(alphas), k))
for i,a in enumerate(alphas):
X = preprocessing.scale(diabetes['data'])
y = diabetes['target']
fit = linear_model.Ridge(alpha=a, normalize=True).fit(X, y)
params[i] = fit.coef_
plt.figure(figsize=(14,6))
for param in params.T:
plt.plot(alphas, param)
plot_learning_curve(linear_model.LinearRegression())
plot_learning_curve(linear_model.Ridge())
"""
Explanation: Notice that linear regression is not defined when the number of features/parameters exceeds the number of observations, and it performs poorly unless the number of samples is several times the number of features.
One approach for dealing with overfitting is to regularize the regression model.
The ridge estimator is a simple, computationally efficient regularization for linear regression.
$$\hat{\beta}^{ridge} = \text{argmin}_{\beta}\left\{\sum_{i=1}^N (y_i - \beta_0 - \sum_{j=1}^k x_{ij} \beta_j)^2 + \lambda \sum_{j=1}^k \beta_j^2 \right\}$$
Typically, we are not interested in shrinking the mean, and coefficients are standardized to have zero mean and unit L2 norm. Hence,
$$\hat{\beta}^{ridge} = \text{argmin}_{\beta} \sum_{i=1}^N (y_i - \sum_{j=1}^k x_{ij} \beta_j)^2$$
$$\text{subject to } \sum_{j=1}^k \beta_j^2 < \lambda$$
Note that this is equivalent to a Bayesian model $y \sim N(X\beta, I)$ with a Gaussian prior on the $\beta_j$:
$$\beta_j \sim \text{N}(0, \lambda)$$
The estimator for the ridge regression model is:
$$\hat{\beta}^{ridge} = (X'X + \lambda I)^{-1}X'y$$
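The closed-form estimator is easy to implement directly; a hedged sketch on synthetic data (not the diabetes data), showing how the coefficient norm shrinks as $\lambda$ grows:

```python
import numpy as np

def ridge_closed_form(X, y, lam):
    """Ridge solution: (X'X + lambda I)^{-1} X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 50)

for lam in (0.0, 1.0, 100.0):
    beta = ridge_closed_form(X, y, lam)
    print(lam, np.linalg.norm(beta))  # norm shrinks as lam increases
```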
End of explanation
"""
for a in [0.001, 0.01, 0.1, 1, 10]:
plot_learning_curve(linear_model.Ridge(a), a)
plot_learning_curve(linear_model.LinearRegression())
plot_learning_curve(linear_model.Ridge())
plot_learning_curve(linear_model.RidgeCV())
"""
Explanation: Notice that at very small sample sizes, the ridge estimator outperforms the unregularized model.
The regularization of the ridge is a shrinkage: the coefficients learned are shrunk towards zero.
The amount of regularization is set via the alpha parameter of the ridge, which is tunable. The RidgeCV method in scikit-learn automatically tunes this parameter via cross-validation.
End of explanation
"""
k = diabetes['data'].shape[1]
alphas = np.linspace(0.1, 3)
params = np.zeros((len(alphas), k))
for i,a in enumerate(alphas):
X = preprocessing.scale(diabetes['data'])
y = diabetes['target']
fit = linear_model.Lasso(alpha=a, normalize=True).fit(X, y)
params[i] = fit.coef_
plt.figure(figsize=(14,6))
for param in params.T:
plt.plot(alphas, param)
plot_learning_curve(linear_model.RidgeCV())
plot_learning_curve(linear_model.Lasso(0.05))
"""
Explanation: The Lasso estimator is useful for imposing sparsity on the coefficients. In other words, it is to be preferred if we believe that many of the features are not relevant.
$$\hat{\beta}^{lasso} = \text{argmin}_{\beta}\left\{\frac{1}{2}\sum_{i=1}^N (y_i - \beta_0 - \sum_{j=1}^k x_{ij} \beta_j)^2 + \lambda \sum_{j=1}^k |\beta_j| \right\}$$
or, similarly:
$$\hat{\beta}^{lasso} = \text{argmin}_{\beta} \frac{1}{2}\sum_{i=1}^N (y_i - \sum_{j=1}^k x_{ij} \beta_j)^2$$
$$\text{subject to } \sum_{j=1}^k |\beta_j| < \lambda$$
Note that this is equivalent to a Bayesian model $y \sim N(X\beta, I)$ with a Laplace prior on the $\beta_j$:
$$\beta_j \sim \text{Laplace}(\lambda) = \frac{\lambda}{2}\exp(-\lambda|\beta_j|)$$
Note how the Lasso imposes sparseness on the parameter coefficients:
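A useful way to see where the sparsity comes from: for an orthonormal design, the lasso solution is soft-thresholding of the OLS coefficients, so small coefficients are set exactly to zero. A minimal sketch of that standard property (not code from this notebook):

```python
import numpy as np

def soft_threshold(beta_ols, lam):
    """Lasso solution under an orthonormal design: shrink toward zero and
    zero out coefficients with |beta| <= lam (this produces the sparsity)."""
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

print(soft_threshold(np.array([3.0, -0.5, 1.0]), 1.0))
```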
End of explanation
"""
plot_learning_curve(linear_model.RidgeCV())
plot_learning_curve(linear_model.LassoCV(n_alphas=10, max_iter=5000))
"""
Explanation: In this example, the ridge estimator performs better than the lasso, but when there are fewer observations, the lasso matches its performance. Otherwise, the variance-reducing effect of the lasso regularization is unhelpful relative to the increase in bias.
With the lasso too, we must tune the regularization parameter for good performance. There is a corresponding LassoCV function in scikit-learn, but it is computationally expensive. To speed it up, we can reduce the number of values explored for the alpha parameter.
End of explanation
"""
plot_learning_curve(linear_model.RidgeCV())
plot_learning_curve(linear_model.ElasticNetCV(l1_ratio=.7, n_alphas=10))
"""
Explanation: Can't decide? ElasticNet is a compromise between lasso and ridge regression.
$$\hat{\beta}^{elastic} = \text{argmin}_{\beta}\left\{\frac{1}{2}\sum_{i=1}^N (y_i - \beta_0 - \sum_{j=1}^k x_{ij} \beta_j)^2 + (1 - \alpha) \sum_{j=1}^k \beta^2_j + \alpha \sum_{j=1}^k |\beta_j| \right\}$$
where $\alpha = \lambda_1/(\lambda_1 + \lambda_2)$. Its tuning parameter $\alpha$ (l1_ratio in scikit-learn) controls this mixture: when set to 0, ElasticNet is a ridge regression, when set to 1, it is a lasso. The sparser the coefficients, the higher we should set $\alpha$.
Note that $\alpha$ can also be set by cross-validation, though it is computationally costly.
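To make the mixture explicit, here is the elastic net penalty term in isolation (l1_ratio plays the role of $\alpha$ above; a sketch for illustration only):

```python
import numpy as np

def elastic_net_penalty(beta, lam, l1_ratio):
    """Mixture of L1 and L2 penalties: l1_ratio=1 -> lasso, l1_ratio=0 -> ridge."""
    beta = np.asarray(beta, dtype=float)
    l1 = np.sum(np.abs(beta))   # lasso part
    l2 = np.sum(beta ** 2)      # ridge part
    return lam * (l1_ratio * l1 + (1.0 - l1_ratio) * l2)

for r in (0.0, 0.5, 1.0):
    print(r, elastic_net_penalty([1.0, -2.0], lam=1.0, l1_ratio=r))
```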
End of explanation
"""
lasso = linear_model.Lasso()
alphas = np.logspace(-4, -1, 20)
scores = np.empty(len(alphas))
scores_std = np.empty(len(alphas))
for i,alpha in enumerate(alphas):
lasso.alpha = alpha
s = cross_validation.cross_val_score(lasso, diabetes.data, diabetes.target, n_jobs=-1)
scores[i] = s.mean()
scores_std[i] = s.std()
plt.semilogx(alphas, scores)
plt.semilogx(alphas, np.array(scores) + np.array(scores_std)/20, 'b--')
plt.semilogx(alphas, np.array(scores) - np.array(scores_std)/20, 'b--')
plt.yticks(())
plt.ylabel('CV score')
plt.xlabel('alpha')
plt.axhline(np.max(scores), linestyle='--', color='.5')
plt.text(5e-2, np.max(scores)+1e-4, str(np.max(scores).round(3)))
"""
Explanation: Using Cross-validation for Parameter Tuning
End of explanation
"""
from sklearn.model_selection import learning_curve
train_sizes, train_scores, test_scores = learning_curve(lasso,
diabetes.data, diabetes.target,
train_sizes=[50, 70, 90, 110, 130], cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
"""
Explanation: Model Checking using Learning Curves
A useful way of checking model performance (in terms of bias and/or variance) is to plot learning curves, which illustrates the learning process as your model is exposed to more data. When the dataset is small, it is easier for a model of a particular complexity to be made to fit the training data well. As the dataset grows, we expect the training error to increase (model accuracy decreases). Conversely, a relatively small dataset will mean that the model will not generalize well, and hence the cross-validation score will be lower, on average.
End of explanation
"""
X,y = salmon.values.T
X2 = PolynomialFeatures(degree=2).fit_transform(X[:, None])
train_sizes, train_scores, test_scores = learning_curve(linear_model.LinearRegression(),
X2, y,
train_sizes=[10, 15, 20, 30], cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
"""
Explanation: For models with high bias, training and cross-validation scores will tend to converge at a low value (high error), indicating that adding more data will not improve performance.
For models with high variance, there may be a gap between the training and cross-validation scores, suggesting that model performance could be improved with additional information.
End of explanation
"""
vlbw = pd.read_csv("../data/vlbw.csv", index_col=0)
vlbw = vlbw.replace({'inout':{'born at Duke':0, 'transported':1},
'delivery':{'abdominal':0, 'vaginal':1},
'ivh':{'absent':0, 'present':1, 'possible':1, 'definite':1},
'sex':{'female':0, 'male':1}})
vlbw = vlbw[[u'birth', u'exit', u'hospstay', u'lowph', u'pltct',
u'bwt', u'gest', u'meth',
u'toc', u'delivery', u'apg1', u'vent', u'pneumo', u'pda', u'cld',
u'ivh']].dropna()
# Write your answer here
"""
Explanation: Exercise: Very low birthweight infants
Compare logistic regression models (using the linear_model.LogisticRegression interface) with varying degrees of regularization for the VLBW infant database. Use a relevant metric, such as the Brier score:
$$B = \frac{1}{n} \sum_{i=1}^n (\hat{p}_i - y_i)^2$$
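A minimal implementation of the Brier score to get started (the probabilities below are hypothetical placeholders; the real p_hat would come from LogisticRegression.predict_proba):

```python
import numpy as np

def brier_score(p_hat, y):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    p_hat = np.asarray(p_hat, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.mean((p_hat - y) ** 2)

print(brier_score([0.9, 0.2, 0.8], [1, 0, 1]))
```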
End of explanation
"""
import sys
sys.path.append('utils/')
import numpy as np
import loadGlasser as lg
import scripts3_functions as func
import scipy.stats as stats
from IPython.display import display, HTML
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import sys
import multiprocessing as mp
import pandas
%matplotlib inline
import permutationTesting as pt
import os
os.environ['OMP_NUM_THREADS'] = str(1)
"""
Explanation: ManuscriptS1a - Network-level information estimates (using RSA)
Analysis for Supplementary Figure 1B.
Master code for Ito et al., 2017
Takuya Ito (takuya.ito@rutgers.edu)
End of explanation
"""
# Set basic parameters
basedir = '/projects2/ModalityControl2/'
datadir = basedir + 'data/'
resultsdir = datadir + 'resultsMaster/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
glasserparcels = lg.loadGlasserParcels()
networkdef = lg.loadGlasserNetworks()
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud1':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1
aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
networkdef[aud2_ind] = networkmappings['aud1']
# Define new network mappings with no aud1/aud2 distinction
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11}
"""
Explanation: Basic parameters
End of explanation
"""
def loadBetas(subj, net='all'):
"""Loads in task betas"""
datafile = resultsdir + 'glmMiniblockBetaSeries/' + subj + '_miniblock_taskBetas_Glasser.csv'
betas = np.loadtxt(datafile, delimiter=',')
betas = betas[:,17:]
if net == 'all':
return betas
else:
net_ind = np.where(networkdef==net)[0]
return betas[net_ind,:].T
def setupMatrix(subj,ruledim,net):
"""
Sets up basic SVM Matrix for a classification of a particular rule dimension and network
"""
betas = loadBetas(subj,net=net)
rules, rulesmb = func.importRuleTimingsV3(subj,ruledim)
svm_mat = np.zeros((betas.shape))
samplecount = 0
labels = []
for rule in rulesmb:
rule_ind = rulesmb[rule].keys()
sampleend = samplecount + len(rule_ind)
svm_mat[samplecount:sampleend,:] = betas[rule_ind,:]
labels.extend(np.ones(len(rule_ind),)*rule)
samplecount += len(rule_ind)
labels = np.asarray(labels)
return svm_mat, labels
"""
Explanation: Set up basic functions
End of explanation
"""
def rsaCV(svm_mat,labels,ruledim):
"""Runs a leave-4-out CV for a 4 way classification"""
cvfolds = []
# 32 folds, if we do a leave 4 out for 128 total miniblocks
# Want to leave a single block from each rule from each CV
for rule in np.unique(labels):
cvfolds.append(np.where(labels==rule)[0])
cvfolds = np.asarray(cvfolds)
# Number of CVs is columns
ncvs = cvfolds.shape[1]
nrules = cvfolds.shape[0]
# Randomly sample cross-validation folds
for i in range(nrules): np.random.shuffle(cvfolds[i,:])
corr_rho_cvs = []
err_rho_cvs = []
for cv in range(ncvs):
# Select a test set from the CV Fold matrix
test_ind = cvfolds[:,cv].copy()
# Delete the CV included from the train set
train_ind = np.delete(cvfolds,cv,axis=1)
# Identify the train and test sets
svm_train = svm_mat[np.reshape(train_ind,-1),:]
svm_test = svm_mat[test_ind,:]
# ## Feature-wise normalization (computed by hand)
# # Compute mean of train set
# train_mean = np.mean(svm_train,axis=0)
# train_mean.shape = (1,len(train_mean))
# # Compute std of train set
# train_std = np.std(svm_train,axis=0)
# train_std.shape = (1,len(train_std))
# # Normalize train set
# svm_train = np.divide((svm_train - train_mean),train_std)
# # Normalize test set with trainset mean and std to avoid circularity
# svm_test = (svm_test - train_mean)/train_std
prototype = {}
# Construct RSA prototypes
for rule in range(nrules):
prototype_ind = np.reshape(train_ind[rule,:],-1)
prototype[rule] = np.mean(svm_mat[prototype_ind],axis=0)
corr_rho = []
err_rho = []
for rule1 in range(nrules):
for rule2 in range(nrules):
r = stats.spearmanr(prototype[rule1],svm_test[rule2,:])[0]
r = np.arctanh(r)
if rule1==rule2:
corr_rho.append(r)
else:
err_rho.append(r)
corr_rho_cvs.append(np.mean(corr_rho))
err_rho_cvs.append(np.mean(err_rho))
return np.mean(corr_rho_cvs), np.mean(err_rho_cvs)
def subjRSACV((subj,ruledim,net)):
svm_mat, labels = setupMatrix(subj,ruledim,net)
# Demean each sample
svmmean = np.mean(svm_mat,axis=1)
svmmean.shape = (len(svmmean),1)
svm_mat = svm_mat - svmmean
# svm_mat = preprocessing.scale(svm_mat,axis=0)
corr_rho, err_rho = rsaCV(svm_mat, labels, ruledim)
diff_rho = corr_rho - err_rho
# diff_rho = np.arctanh(corr_rho) - np.arctanh(err_rho)
# diff_rho = np.arctanh(corr_rho) - np.arctanh(err_rho)
return corr_rho, err_rho, diff_rho
netkeys = {0:'fpn', 1:'dan', 2:'con', 3:'dmn', 4:'vis', 5:'aud', 6:'smn'}
ruledims = ['logic','sensory','motor']
corr_rho = {}
err_rho = {}
diff_rho = {}
avg_acc = {}
for ruledim in ruledims:
avg_acc[ruledim] = {}
corr_rho[ruledim] = np.zeros((len(netkeys),len(subjNums)))
err_rho[ruledim] = np.zeros((len(netkeys),len(subjNums)))
diff_rho[ruledim] = np.zeros((len(netkeys),len(subjNums)))
print 'Running', ruledim
for net in netkeys.keys():
# print 'Running network', net
inputs = []
for subj in subjNums: inputs.append((subj,ruledim,networkmappings[netkeys[net]]))
pool = mp.Pool(processes=11)
results = pool.map_async(subjRSACV,inputs).get()
pool.close()
pool.join()
scount = 0
for result in results:
tmp_corr, tmp_err, tmp_diff = result
corr_rho[ruledim][net,scount] = tmp_corr
err_rho[ruledim][net,scount] = tmp_err
diff_rho[ruledim][net,scount] = tmp_diff
scount += 1
avg_acc[ruledim][net] = np.mean(diff_rho[ruledim][net])
"""
Explanation: Set up functions for information estimation (RSA instead of SVM decoding)
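The core match-versus-mismatch logic used by rsaCV below can be sketched in isolation (toy patterns; written in Python 3 for illustration, unlike the notebook's Python 2 code):

```python
import numpy as np
from scipy import stats

def match_vs_mismatch(prototypes, test_patterns):
    """prototypes, test_patterns: (n_rules, n_features) arrays.
    Correlate each held-out pattern with each rule prototype (Spearman rho,
    Fisher z-transformed) and return mean match minus mean mismatch."""
    n_rules = prototypes.shape[0]
    match, mismatch = [], []
    for i in range(n_rules):
        for j in range(n_rules):
            rho = stats.spearmanr(prototypes[i], test_patterns[j])[0]
            z = np.arctanh(rho)  # Fisher z-transform
            (match if i == j else mismatch).append(z)
    return np.mean(match) - np.mean(mismatch)

rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 100))
tests_ = protos + rng.normal(0, 0.3, size=(4, 100))  # noisy same-rule patterns
print(match_vs_mismatch(protos, tests_))  # positive when rule info is present
```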
End of explanation
"""
# Compute group stats
chance = 0.0
results_dict_fdr = {}
for ruledim in ruledims:
results_dict_fdr[ruledim] = {}
for net in netkeys.keys(): results_dict_fdr[ruledim][netkeys[net]] = {}
pvals = []
for net in netkeys.keys():
results_dict_fdr[ruledim][netkeys[net]]['Accuracy'] = str(round(np.mean(avg_acc[ruledim][net]),3))
t, p = stats.ttest_1samp(diff_rho[ruledim][net],chance)
results_dict_fdr[ruledim][netkeys[net]]['T-stats'] = t
results_dict_fdr[ruledim][netkeys[net]]['P-values'] = p
pvals.append(p)
qvals = mc.fdrcorrection0(pvals)[1]
qcount = 0
for net in netkeys.keys():
results_dict_fdr[ruledim][netkeys[net]]['Q-values'] = qvals[qcount]
qcount += 1
"""
Explanation: Run statistics with FDR-correction
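The statsmodels fdrcorrection0 call used below applies the Benjamini-Hochberg procedure (assuming its default 'indep' method); a minimal NumPy sketch of the adjusted p-values (q-values) it returns:

```python
import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values (a minimal sketch)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / (np.arange(n) + 1)
    # enforce monotonicity, working back from the largest p-value
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0, 1)
    return out

print(fdr_bh([0.005, 0.02, 0.2, 0.5]))
```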
End of explanation
"""
results_dframe_fdr = {}
for ruledim in ruledims:
print 'Dataframe for', ruledim, 'classification'
results_dframe_fdr[ruledim] = pandas.DataFrame(data=results_dict_fdr[ruledim])
display(results_dframe_fdr[ruledim])
#### Compute statistics for bar plot
bar_avgs = {}
bar_sems = {}
bar_avg_all = {}
for net in netkeys.keys(): bar_avg_all[net] = np.zeros((len(ruledims),len(subjNums)))
rulecount = 0
for ruledim in ruledims:
bar_avgs[ruledim] = {}
bar_sems[ruledim] = {}
for net in netkeys.keys():
bar_avgs[ruledim][net] = np.mean(diff_rho[ruledim][net])
bar_sems[ruledim][net] = np.std(diff_rho[ruledim][net])/np.sqrt(len(subjNums))
bar_avg_all[net][rulecount,:] = diff_rho[ruledim][net]
rulecount += 1
bar_sem_all = {}
for net in netkeys.keys():
meanacc = np.mean(bar_avg_all[net],axis=0)
bar_avg_all[net] = np.mean(meanacc)
bar_sem_all[net] = np.std(meanacc)/np.sqrt(len(subjNums))
##### Generate figures
width=0.25
width=.265
networks = netkeys.keys()
nbars = len(networks)
fig = plt.figure()
ax = fig.add_subplot(111)
rects = {}
widthcount = 0
colors = ['b','g','r']
colorcount = 0
for ruledim in ruledims:
rects[ruledim] = ax.bar(np.arange(nbars)+widthcount, bar_avgs[ruledim].values(), width,align='center',
yerr=bar_sems[ruledim].values(), color=colors[colorcount], error_kw=dict(ecolor='black'))
widthcount += width
colorcount += 1
ax.set_title('Network Information Estimation of CPRO rules (FDR)',
y=1.04, fontsize=16)
ax.set_ylabel('Match V. Mismatch Difference (Rho)',fontsize=12)
ax.set_xlabel('Rule types by Networks', fontsize=12)
ax.set_xticks(np.arange(nbars)+width)
ax.set_xticklabels(netkeys.values(),rotation=-45)
vmax=0.07
ax.set_ylim([0,vmax])
plt.legend((rects['logic'], rects['sensory'], rects['motor']),
('Logic', 'Sensory', 'Motor'), loc=((1.08,.65)))
## Add asterisks
def autolabel(rects,df,ruledim):
# attach some text labels
netcount = 0
for rect in rects:
height = rect.get_height()
# Decide where to put the asterisk
if height > vmax:
yax = vmax - .005
else:
yax = height + .01
# Slightly move this asterisk since it's in the way
if ruledim=='sensory' and netkeys[netcount]=='dan': yax -= .0025
# Retrieve q-value and assign asterisk accordingly
q = df[netkeys[netcount]]['Q-values']
if q > .05: asterisk=''
if q < .05: asterisk='*'
if q < .01: asterisk='**'
if q < .001: asterisk='***'
# Label bar
ax.text(rect.get_x() + rect.get_width()/2., yax,
asterisk, ha='center', va='bottom', fontsize=8)
# Go to next network
netcount += 1
for ruledim in ruledims:
autolabel(rects[ruledim],results_dframe_fdr[ruledim],ruledim)
# autolabel(rects2)
plt.tight_layout()
# plt.savefig('FigS1a_NetworkRSA_InformationEstimate.pdf')
"""
Explanation: Show results as dataframe
End of explanation
"""
# Compute group stats
chance = 0.0
results_dict_fwe = {}
for ruledim in ruledims:
results_dict_fwe[ruledim] = {}
for net in netkeys.keys(): results_dict_fwe[ruledim][netkeys[net]] = {}
pvals = []
for net in netkeys.keys():
results_dict_fwe[ruledim][netkeys[net]]['Accuracy'] = str(round(np.mean(avg_acc[ruledim][net]),3))
t, p = pt.permutationFWE(diff_rho[ruledim],nullmean=0,permutations=10000,nproc=15)
# t, p = stats.ttest_1samp(diff_rho[ruledim][net],chance)
results_dict_fwe[ruledim][netkeys[net]]['T-stats'] = t[net]
results_dict_fwe[ruledim][netkeys[net]]['P-FWE'] = 1.0 - p[net]
"""
Explanation: Run statistics with FWER-correction (Permutation testing)
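permutationTesting is a lab-specific module, so as a hedged illustration of the general idea, here is a max-statistic sign-flipping permutation test, a common way to control FWE for one-sample tests (this is not the lab's implementation):

```python
import numpy as np

def max_t_fwe(data, nullmean=0.0, n_perm=1000, seed=0):
    """data: (n_tests, n_subjects). Returns t-stats and FWE-corrected p-values."""
    rng = np.random.default_rng(seed)
    d = data - nullmean
    n = d.shape[1]

    def tstat(x):
        return x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))

    t_obs = tstat(d)
    # null distribution of the maximum t-statistic under random sign flips
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=n)
        max_null[i] = tstat(d * flips).max()
    p_fwe = (max_null[None, :] >= t_obs[:, None]).mean(axis=1)
    return t_obs, p_fwe

rng = np.random.default_rng(1)
data = np.vstack([1.0 + 0.1 * rng.normal(size=30),  # strong effect
                  rng.normal(size=30)])             # pure noise
t, p = max_t_fwe(data)
print(t, p)
```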
End of explanation
"""
results_dframe_fwe = {}
for ruledim in ruledims:
print 'Dataframe for', ruledim, 'classification'
results_dframe_fwe[ruledim] = pandas.DataFrame(data=results_dict_fwe[ruledim])
display(results_dframe_fwe[ruledim])
"""
Explanation: Show results as dataframe
End of explanation
"""
results_dict_fwe[ruledim]['fpn'].keys()
ie = {}
t_avg = {}
p_avg = {}
for ruledim in ruledims:
ie[ruledim] = {'sig':[], 'nonsig':[]}
t_avg[ruledim] = {'sig':[], 'nonsig':[]}
p_avg[ruledim] = {'sig':[], 'nonsig':[]}
for net in results_dict_fwe[ruledim].keys():
if results_dict_fwe[ruledim][net]['P-FWE']<0.05:
ie[ruledim]['sig'].append(float(results_dict_fwe[ruledim][net]['Accuracy']))
t_avg[ruledim]['sig'].append(float(results_dict_fwe[ruledim][net]['T-stats']))
p_avg[ruledim]['sig'].append(float(results_dict_fwe[ruledim][net]['P-FWE']))
else:
ie[ruledim]['nonsig'].append(float(results_dict_fwe[ruledim][net]['Accuracy']))
t_avg[ruledim]['nonsig'].append(float(results_dict_fwe[ruledim][net]['T-stats']))
p_avg[ruledim]['nonsig'].append(float(results_dict_fwe[ruledim][net]['P-FWE']))
# Read out statistics for manuscript
print 'Average significant IE for', ruledim, ':', np.mean(ie[ruledim]['sig'])
print 'Average significant T-stats for', ruledim, ':', np.mean(t_avg[ruledim]['sig'])
print 'Max significant p-value for', ruledim, ':', np.max(p_avg[ruledim]['sig'])
print '#################'
print 'Average nonsignificant IE for', ruledim, ':', np.mean(ie[ruledim]['nonsig'])
print 'Average nonsignificant T-stats for', ruledim, ':', np.mean(t_avg[ruledim]['nonsig'])
print 'Min nonsignificant p-value for', ruledim, ':', np.min(p_avg[ruledim]['nonsig'])
print '\n'
#### Compute statistics for bar plot
bar_avgs = {}
bar_sems = {}
bar_avg_all = {}
t_avgs = {}
p_avgs = {}
for net in netkeys.keys(): bar_avg_all[net] = np.zeros((len(ruledims),len(subjNums)))
rulecount = 0
for ruledim in ruledims:
bar_avgs[ruledim] = {}
bar_sems[ruledim] = {}
for net in netkeys.keys():
bar_avgs[ruledim][net] = np.mean(diff_rho[ruledim][net])
bar_sems[ruledim][net] = np.std(diff_rho[ruledim][net])/np.sqrt(len(subjNums))
bar_avg_all[net][rulecount,:] = diff_rho[ruledim][net]
rulecount += 1
bar_sem_all = {}
for net in netkeys.keys():
meanacc = np.mean(bar_avg_all[net],axis=0)
bar_avg_all[net] = np.mean(meanacc)
bar_sem_all[net] = np.std(meanacc)/np.sqrt(len(subjNums))
##### Generate figures
width=0.25
width=.265
networks = netkeys.keys()
nbars = len(networks)
fig = plt.figure()
ax = fig.add_subplot(111)
rects = {}
widthcount = 0
colors = ['b','g','r']
colorcount = 0
for ruledim in ruledims:
rects[ruledim] = ax.bar(np.arange(nbars)+widthcount, bar_avgs[ruledim].values(), width,align='center',
yerr=bar_sems[ruledim].values(), color=colors[colorcount], error_kw=dict(ecolor='black'))
widthcount += width
colorcount += 1
ax.set_title('Network Information Estimation of CPRO rules (FWE)',
y=1.04, fontsize=16)
ax.set_ylabel('Match V. Mismatch Difference (Rho)',fontsize=12)
ax.set_xlabel('Rule types by Networks', fontsize=12)
ax.set_xticks(np.arange(nbars)+width)
ax.set_xticklabels(netkeys.values(),rotation=-45)
vmax=0.07
ax.set_ylim([0,vmax])
plt.legend((rects['logic'], rects['sensory'], rects['motor']),
('Logic', 'Sensory', 'Motor'), loc=((1.08,.65)))
## Add asterisks
def autolabel(rects,df,ruledim):
# attach some text labels
netcount = 0
for rect in rects:
height = rect.get_height()
# Decide where to put the asterisk
if height > vmax:
yax = vmax - .005
else:
yax = height + .01
# Slightly move this asterisk since it's in the way
if ruledim=='sensory' and netkeys[netcount]=='dan': yax -= .0025
        # Retrieve q-value and assign asterisk accordingly
q = results_dict_fwe[ruledim][netkeys[netcount]]['P-FWE']
if q > .05: asterisk=''
if q < .05: asterisk='*'
if q < .01: asterisk='**'
if q < .001: asterisk='***'
# Label bar
ax.text(rect.get_x() + rect.get_width()/2., yax,
asterisk, ha='center', va='bottom', fontsize=8)
# Go to next network
netcount += 1
for ruledim in ruledims:
autolabel(rects[ruledim],results_dframe_fwe[ruledim],ruledim)
# autolabel(rects2)
plt.tight_layout()
# plt.savefig('FigS1a_NetworkRSA_InformationEstimate.pdf')
"""
Explanation: Manually compute average significant and non-significant t-stats for manuscript
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2019-02-28-functions-will.ipynb | mit | def print_a_phrase(): # we start the definition of a function with "def"
print("Academics of the world unite! You have nothing to lose but your over-priced proprietary software licenses.")
#return 0;
print_a_phrase()
"""
Explanation: Basic principles and features
Functions are exactly that: they usually take an input and return an output.
When you do your typical Python coding you will be using functions all the time, for example np.mean() or np.arange() from the numpy library.
The great thing is you can write them yourself!
Below is a very simple example:
End of explanation
"""
def convert_pa_to_mb(pascal):
millibar = pascal * 0.01
return millibar
"""
Explanation: We define the function using def, followed by whatever name we choose for the function, which is immediately followed by a round bracket (). The bracket is where you would write your arguments in, for example your input variable.
The function above doesn't take an input, but it prints a pre-defined phrase. If we were to make it more dynamic and make it print any input we want to give it, it would just become print(), so not much use for this here: Do not re-invent the wheel!
It also doesn't actually return anything (strictly speaking, a function with no return statement returns None), it just does something. In C++ this function would be declared as void.
What if we do give it an input? Below are two examples of simple functions for unit conversion:
End of explanation
"""
convert_pa_to_mb(80000)
"""
Explanation: See how the above takes the input, operates on it, and then returns your output. Note how we didn't specify what form Pascal has to take: we could put in an integer or a float. This is due to Python's polymorphism. We could also put a string in, and it wouldn't complain until it has to do maths on it, which is when it causes an error. More on this later.
End of explanation
"""
def convert_mb_to_pa(millibar):
return millibar * 100
convert_mb_to_pa(1050)
"""
Explanation: But if your function is that simple, you can save a line by calculating and returning the result in one go:
End of explanation
"""
def check_integer_and_change(value):
    if value.is_integer() == False:
        remainder = value
        while remainder > 1:
            remainder = remainder - 1 # reduce number to its fractional part
        if remainder >= 0.5: # if half or above, round up
            value = value + 1 - remainder
        else:
            value = value - remainder # round down
    return value
"""
Explanation: Now all of this was very basic, and you might think, why not just write the calculation directly into my main code?
For a start, as your work becomes more sophisticated, your calculations and operations become longer and more complex. Then consider that you'll probably end up wanting to reuse the same task in your code or in another project. If you wrote out the same stuff again and again your code would get very messy very quickly. Instead it really does help to keep recurring functions neatly tidied away at the top or bottom of your code, or in a separate file.
End of explanation
"""
import numpy as np # need this to make array
my_array = np.arange(5) # make an array of integers
new_array = np.array([1.2, 8, 4.5]) # make another array to insert into our main array
my_array = np.concatenate((my_array, new_array)) # let's add some non-integers to our little array
print(my_array)
for i in range(len(my_array)):
my_array[i] = check_integer_and_change(my_array[i]) # we perform our function from above on each array element
print(my_array)
"""
Explanation: Above is just another example of a function; it figures out if a value is an integer, and if it is not, it rounds it up or down, as appropriate.
Notice is_integer() is also a function, from the main python library. Here it doesn't take an input in its brackets (), but instead it's "attached" to the variable value. This is because it is a class function, and in Python variables are class objects, but that's a story for another presentation. In fact, you can find an excellent tutorial on classes here: https://docs.python.org/3/tutorial/classes.html .
End of explanation
"""
def double_something(value):
value = value * 2 # we actively modify the original input variable; in many cases the original value would be irretrievable
return value
input_value = 5.0
new_value = double_something(input_value)
print("new_value =", new_value)
# now check if our original input is the same
print("input =", input_value)
# now let's do the same the function does, but in a loop for 1 iteration:
value = 5.0
new_value = 0
for i in range(1):
# we copy the function above exactly
value = value * 2
new_value = value # our "return"
print("new_value =", new_value)
# now check if our original input is the same
print("input =", value)
"""
Explanation: Another good reason for using functions is how they deal with memory: If you use arrays a lot, and you do all your calculations in line with your main code, then you might start piling up a lot of stuff, i.e. you use up your memory, which can cause your program to slow down or even leak memory. Instead, functions take your input, temporarily take some extra space in your memory, and once they spit out their output, their local variables are discarded, without affecting your input, unless you want it to. Note that for and while loops, and if blocks, do not get their own scope in Python: variables defined inside them persist afterwards, and they can change your input from before the loop if you're not careful!
To illustrate this danger:
End of explanation
"""
def convert_JeV_and_mean(eV_values): # take input in eV
    joule_values = 1.60218e-19 * eV_values # 1 eV = 1.60218e-19 J
# if we want the mean from numpy we need to make sure we only do that for an array input
try:
array_length = len(joule_values) # if we have a single value, this line will cause an error
mean = np.mean(joule_values)
except:
return joule_values # if it's just a single value, return it by itself
return joule_values, mean # if it's an array, we can return both it and its mean
"""
Explanation: So we can see that due to sloppy coding we now changed our original input in the for loop, but preserved it when we used a function instead.
There are instances where a function could permanently affect your input data, due to the fact that in python when you use the = operator, the variables you get are really just pointers to the same data. It is recommended to make sure you've properly copied vital data before passing it to a function.
Now let's look at putting different variable types in, see what we can get out, and think of how what we'll find could be useful to us.
Say you want a function that can operate on a single quantity, as well as multiple quantities, e.g. in an array. For this example we will do a unit conversion and then calculate the average of all the input values.
But we will use try and except to make a distinction between single values and arrays.
End of explanation
"""
convert_JeV_and_mean(1.0)
"""
Explanation: Let's put in a float...
End of explanation
"""
some_data = np.array([5.0956e-11, 5.1130e-11, 4.8856e-11 ])
convert_JeV_and_mean(some_data)
"""
Explanation: Only a single output was returned: the converted float.
Now let's input a numpy array. (Btw, the values seen below are of the order GeV, which is a common sight in high energy particle physics, and in astrophysics)
End of explanation
"""
results, mean = convert_JeV_and_mean(some_data)
print("results =", results); print("mean =", mean, "J")
"""
Explanation: The output above looks complicated, but it is essentially first the array of converted values, and then the mean. But this bracketed output looks messy and in your own code you would do something like this:
End of explanation
"""
from functools import wraps
import time
def decorator(f):
@wraps(f)
def wrapper(*args, **kwargs):
start_time = time.time()
rv = f(*args, **kwargs) # here we run the function f
print("Time taken =", time.time() - start_time) # difference in points in time gives duration of f run
return rv
return wrapper # we return the function called wrapper
@decorator
def func():
pass # does nothing, just for demonstration
"""
Explanation: As the function potentially outputs two different objects, we can assign them to separate variables as shown above.
Notice how the order of the output (i.e. values then mean) corresponds to the order in which we wrote them after return at the bottom of our function.
Examples of advanced features
Decorators and Wrappers
Quite neatly, we can define functions within functions, and also return functions from functions. An example where this is applied is the use of decorators and wrappers, which are types of functions. Below is a demonstration of a very simple example: we have a function called func(), and we wrap it up in another function, which will just measure time, inside the decorator.
End of explanation
"""
func()
"""
Explanation: The @ followed by the name of a function, here wraps and decorator, acts like a kind of override. You place the @ right above a function definition, which, for example, tells the program that whenever func is run, it is decorated by the function called decorator. decorator returns wrapper, which will now run every time func is run. *args and **kwargs are ways of allowing you to pass a flexible number of arguments to your function, and are explained here: http://book.pythontips.com/en/latest/args_and_kwargs.html .
Let's see if it works:
End of explanation
"""
def check_integer_and_round(value):
    """
    Function takes a float or integer value and tests the quantity for its type.
    If the input is an integer the function does nothing more and returns the original input.
    If the input is a non-integer it rounds it to the nearest whole integer, and returns the result.
    """
    if value.is_integer() == False:
        remainder = value
        while remainder > 1:
            remainder = remainder - 1 # reduce number to its fractional part
        if remainder >= 0.5: # if half or above, round up
            value = value + 1 - remainder
        else:
            value = value - remainder # round down
    return value
"""
Explanation: Running func() printed the time taken, which will naturally be negligible, due to the simplicity of the function.
Docstrings
Finally, not necessarily an "advanced" feature but still useful, we have docstrings, which serve as documentation inside your code. You can place a string literal anywhere in your code, but outside the top of a module, class, or function definition it isn't attached to anything, and in most places you should use comments instead. However, docstrings are very useful at the top of a function definition to explain what the function does. For example:
End of explanation
"""
help(check_integer_and_round)
"""
Explanation: We reused the previous function for rounding numbers to integers, with a small change to its name to avoid any issues with doubly defining it. We also added some text within two triple quotation marks """; this is a docstring. Like a comment, which starts with a hash #, it does nothing functionally when the code is run, it simply serves to help the user understand what the function does. In your terminal you can call it with the help() function:
End of explanation
"""
HTML(html)
"""
Explanation: A few more examples can be found in an earlier post: Some peculiarities of using functions in Python.
End of explanation
"""
|
swara-salih/Portfolio | 2001 SAT Scores Analysis/Analysis of 2001 Iowa SAT Scores.ipynb | mit | import scipy as sci
import pandas as pd
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Remember that for specific functions, the array function in numpy
# can be useful in listing out the elements in a list (example would
# be for finding the mode.)
with open('./data/sat_scores.csv', 'r') as f:
data = [i.split(",") for i in f.read().split()]
print data
"""
Explanation: Initial Data Analysis
Step 1: Open the sat_scores.csv file. Investigate the data, and answer the questions below.
AND
Step 2: Load the data.
4. Load the data into a list of lists and 5. Print the data
End of explanation
"""
header = data[0]
data = data[1:]
print(header)
"""
Explanation: 1. What does the data describe?
The data describes SAT scores for verbal and math sections in 2001 across the US. It does appear to be complete, except for the issue I'm having with the median score for math. When I ran the median function for sat_scores.math, it returned a value of 521. However, I could not find that value in the dataset. Below are some other observations I made.
2. Does the data look complete? Are there any obvious issues with the observations?¶
Overall, the data does look complete, but doing my EDA I noticed that the median value computed for Math, 521, does not actually appear in the list of Math scores. There must be an issue with the data.
6. Extract a list of the labels from the data, and remove them from the data.
End of explanation
"""
sat_data = {}
for index, column_name in enumerate(header):
sat_data[column_name] = []
for row in data:
sat_data[column_name].append(row[index])
"""
Explanation: 3. Create a data dictionary for the dataset.
End of explanation
"""
state_names = sat_data['State']
print state_names
"""
Explanation: 7. Create a list of State names extracted from the data. (Hint: use the list of labels to index on the State column)
End of explanation
"""
print 'The type of the State column is' + ' ' + str(type(sat_data['State'][2]))
print 'The type of the Math column is' + ' ' + str(type(sat_data['Math'][2]))
print 'The type of the Verbal column is' + ' ' + str(type(sat_data['Verbal'][2]))
print 'The type of the Rate column is' + ' ' + str(type(sat_data['Rate'][2]))
"""
Explanation: 8. Print the types of each column
End of explanation
"""
#Math, Verbal, and Rate need to be reassigned to integers.
#Note: rebinding the loop variable (item = int(item)) would not change the list,
#so rebuild each list with a comprehension instead.
sat_data['Math'] = [int(item) for item in sat_data['Math']]
sat_data['Verbal'] = [int(item) for item in sat_data['Verbal']]
sat_data['Rate'] = [int(item) for item in sat_data['Rate']]
"""
Explanation: 9. Do any types need to be reassigned? If so, go ahead and do it.
End of explanation
"""
#Zip pairs each state with its own value; a comprehension over state_names alone
#would map every state to the entire column.
verbal_values = dict(zip(state_names, sat_data['Verbal']))
math_values = dict(zip(state_names, sat_data['Math']))
rate_values = dict(zip(state_names, sat_data['Rate']))
"""
Explanation: 10. Create a dictionary for each column mapping the State to its respective value for that column.
End of explanation
"""
#SAT_values = {x:sat_data['Verbal'] for x in sat_data['Verbal']}
"""
Explanation: 11. Create a dictionary with the values for each of the numeric columns
End of explanation
"""
#Convert to a pandas dataframe to perform functions.
SAT_scores = pd.DataFrame(sat_data)
SAT_scores['Math'] = SAT_scores.Math.astype(int)
SAT_scores['Verbal'] = SAT_scores.Verbal.astype(int)
SAT_scores['Rate'] = SAT_scores.Rate.astype(int)
print 'The minimum Verbal score is' + ' ' + str(min(SAT_scores.Verbal))
print 'The maximum Verbal score is' + ' ' + str(max(SAT_scores.Verbal))
print 'The minimum Math score is' + ' ' + str(min(SAT_scores.Math))
print 'The maximum Math score is' + ' ' + str(max(SAT_scores.Math))
print 'The minimum Rate is' + ' ' + str(min(SAT_scores.Rate))
print 'The maximum Rate is' + ' ' + str(max(SAT_scores.Rate))
"""
Explanation: # Step 3: Describe the data
12. Print the min and max of each column
End of explanation
"""
#Standard Deviation function.
from math import sqrt
def standard_deviation(column):
    mean = sum(column)/float(len(column)) # float() guards against integer division in Python 2
    differences = [x - mean for x in column]
    sq_diff = [t ** 2 for t in differences]
    num = sum(sq_diff)
    den = len(column)-1 # n - 1 gives the sample standard deviation
    var = float(num)/den
    print sqrt(var)
standard_deviation(SAT_scores['Math'])
standard_deviation(SAT_scores['Verbal'])
standard_deviation(SAT_scores['Rate'])
#Check to see the standard deviations are right.
print SAT_scores.describe()
#Approximately on point.
"""
Explanation: The minimum rate is 4, found in North Dakota, South Dakota, and Mississippi, and the maximum rate is 82 found in Connecticut.
The minimum verbal score is 482 in D.C., and the maximum is 593 in Iowa.
The median verbal score is 526 in Oregon.
The minimum math score is 439 in Ohio, and the maximum is 603, which is interestingly also in Iowa.
The median math score is 521.
Iowa has the highest SAT Scores in the country overall.
13. Write a function using only list comprehensions, no loops, to compute Standard Deviation. Print the Standard Deviation of each numeric column.
End of explanation
"""
# Find the mean, median, and mode for the set of verbal scores and the set of math scores.
import numpy as np
print np.median(SAT_scores.Verbal)
print np.median(SAT_scores.Math)
#Numpy doesn't have a built in function for mode. However, stats does;
#its function returns the mode, and how many times the mode appears.
verb_mode = stats.mode(SAT_scores.Verbal)
math_mode = stats.mode(SAT_scores.Math)
print verb_mode
print math_mode
"""
Explanation: Mean, Median and Mode in NumPy and SciPy
End of explanation
"""
#Will be using Pandas dataframe for plotting.
"""
Explanation: The median Verbal SAT score is 526, its mean is approximately 532, and its mode is above its mean at 562 (appears, 3 times).
The median Math SAT score is 521, its mean is 531.5, and its mode is below its mean at 499 (appears 6 times).
Step 4: Visualize the data¶
End of explanation
"""
import seaborn as sns
import matplotlib.pyplot as plt
"""
Explanation: 19. Plot some scatterplots. BONUS: Use a PyPlot figure to present multiple plots at once.
End of explanation
"""
sns.pairplot(SAT_scores)
plt.show()
"""
Explanation: Scatter Plotting
End of explanation
"""
# Not really. I had already assigned the Verbal, Math, and Rate columns to integers,
# so no conversion is needed there.
"""
Explanation: 20. Are there any interesting relationships to note?
Both Verbal and Math scores are highly correlated with each other, whichever way you plot them, with Math appearing to affect Verbal at a faster rate than the other way around.
End of explanation
"""
SAT_scores['Verbal'] = SAT_scores['Verbal'].apply(pd.to_numeric)
SAT_scores['Math'] = SAT_scores['Math'].apply(pd.to_numeric)
SAT_scores['Rate'] = SAT_scores['Rate'].apply(pd.to_numeric)
SAT_scores.dtypes
"""
Explanation: 9. Do any types need to be reassigned? If so, go ahead and do it.
End of explanation
"""
# Display box plots to visualize the distribution of the datasets.
# Recall the median verbal score is 526, the mean is 532, the max is 593, the min is 482,
# and the std. deviation is 33.236.
ax = sns.boxplot(y=SAT_scores.Verbal, saturation=0.75, width=0.1, fliersize=5)
ax.set(xlabel = 'SAT Verbal Scores', ylabel = 'Range of Scores')
ax.set_title('2001 Iowa Verbal Scores Distribution', fontsize = 15)
plt.show()
sns.boxplot(data = SAT_scores, y=SAT_scores.Math, saturation=0.75, width=0.1, fliersize=5)
plt.xlabel('SAT Math Scores')
plt.ylabel('Range of Scores')
plt.show()
sns.boxplot(data = SAT_scores, y=SAT_scores.Rate, saturation=0.75, width=0.1, fliersize=5)
plt.xlabel('SAT Rates')
plt.ylabel('Range of Rates')
plt.show()
"""
Explanation: 21. Create box plots for each variable.
End of explanation
"""
SAT_scores.Math.plot(kind='hist', bins=15)
plt.xlabel('SAT Math Scores')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: 14. Using MatPlotLib and PyPlot, plot the distribution of the Rate using histograms
Histograms
15. Plot the Math distribution
End of explanation
"""
SAT_scores.Verbal.plot(kind='hist', bins=15)
plt.xlabel('SAT Verbal Scores')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: 16. Plot the Verbal distribution
End of explanation
"""
SAT_scores.Rate.plot(kind='hist', bins=15)
plt.xlabel('SAT Rates')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: 16. Plot the Rate distribution
End of explanation
"""
# Used seaborn website as guidance: http://seaborn.pydata.org/tutorial/distributions.html
# I used a feature called "Kernel Density Estimation" (KDE) to
# overlay an estimated distribution on the data.
# KDE is an estimator that uses each data point to make an estimate of the distribution and attempts to
# smooth it out over the histogram.
# The resulting curve has an area below it equal to one, hence the decimal units for frequency.
sns.distplot(SAT_scores.Verbal, bins=15)
plt.xlabel('SAT Verbal Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.distplot(SAT_scores.Math, bins=15)
plt.xlabel('SAT Math Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.distplot(SAT_scores.Rate, bins=15)
plt.xlabel('SAT Rates')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.kdeplot(SAT_scores.Verbal)
plt.xlabel('SAT Verbal Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.kdeplot(SAT_scores.Math)
plt.xlabel('SAT Math Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.kdeplot(SAT_scores.Rate)
plt.xlabel('SAT Rates')
plt.ylabel('Frequency (KDE)')
plt.show()
"""
Explanation: 17. What is the typical assumption for data distribution? and 18. Does that distribution hold true for our data?
The typical assumption of data distribution is that it should follow a normal distribution, with standard deviations being relatively equal on both sides of the mean. Neither of the histograms appears to follow a normal distribution, with the Verbal scores in particular showing a right/positive skew. But I need to properly check for normality, and find a way to overlay the fitted distribution onto the histograms. Perhaps Seaborn has a function that can help me with that.
Seaborn Plotting for Histograms and Fitting a Distribution
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | 0.8/notebooks/auto_examples/store-and-load-results.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
import os
import sys
"""
Explanation: Store and load skopt optimization results
Mikhail Pak, October 2016.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
Problem statement
We often want to store optimization results in a file. This can be useful,
for example,
if you want to share your results with colleagues;
if you want to archive and/or document your work;
or if you want to postprocess your results in a different Python instance or on another computer.
The process of converting an object into a byte stream that can be stored in
a file is called serialization.
Conversely, deserialization means loading an object from a byte stream.
Warning: Deserialization is not secure against malicious or erroneous
code. Never load serialized data from untrusted or unauthenticated sources!
End of explanation
"""
from skopt import gp_minimize
noise_level = 0.1
def obj_fun(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() \
* noise_level
res = gp_minimize(obj_fun, # the function to minimize
[(-2.0, 2.0)], # the bounds on each dimension of x
x0=[0.], # the starting point
acq_func="LCB", # the acquisition function (optional)
n_calls=15, # the number of evaluations of f including at x0
n_random_starts=3, # the number of random initial points
random_state=777)
"""
Explanation: Simple example
We will use the same optimization problem as in the
sphx_glr_auto_examples_bayesian-optimization.py notebook:
End of explanation
"""
from skopt import dump, load
dump(res, 'result.pkl')
"""
Explanation: As long as your Python session is active, you can access all the
optimization results via the res object.
So how can you store this data in a file? skopt conveniently provides
functions :class:skopt.dump and :class:skopt.load that handle this for you.
These functions are essentially thin wrappers around the
joblib <https://joblib.readthedocs.io/en/latest/>_ module's :obj:joblib.dump and :obj:joblib.load.
We will now show how to use :class:skopt.dump and :class:skopt.load for storing
and loading results.
Using skopt.dump() and skopt.load()
For storing optimization results into a file, call the :class:skopt.dump
function:
End of explanation
"""
res_loaded = load('result.pkl')
res_loaded.fun
"""
Explanation: And load from file using :class:skopt.load:
End of explanation
"""
dump(res, 'result.gz', compress=9)
from os.path import getsize
print('Without compression: {} bytes'.format(getsize('result.pkl')))
print('Compressed with gz: {} bytes'.format(getsize('result.gz')))
"""
Explanation: You can fine-tune the serialization and deserialization process by calling
:class:skopt.dump and :class:skopt.load with additional keyword arguments. See the
joblib <https://joblib.readthedocs.io/en/latest/>_ documentation
:obj:joblib.dump and
:obj:joblib.load for the additional parameters.
For instance, you can specify the compression algorithm and compression
level (highest in this case):
End of explanation
"""
dump(res, 'result_without_objective.pkl', store_objective=False)
"""
Explanation: Unserializable objective functions
Notice that if your objective function is non-trivial (e.g. it calls MATLAB
engine from Python), it might be not serializable and :class:skopt.dump will
raise an exception when you try to store the optimization results.
In this case you should disable storing the objective function by calling
:class:skopt.dump with the keyword argument store_objective=False:
End of explanation
"""
res_loaded_without_objective = load('result_without_objective.pkl')
print('Loaded object: ', res_loaded_without_objective.specs['args'].keys())
print('Local variable:', res.specs['args'].keys())
"""
Explanation: Notice that the entry 'func' is absent in the loaded object but is still
present in the local variable:
End of explanation
"""
del res.specs['args']['func']
dump(res, 'result_without_objective_2.pkl')
"""
Explanation: Possible problems
Python versions incompatibility: In general, objects serialized in
Python 2 cannot be deserialized in Python 3 and vice versa.
Security issues: Once again, do not load any files from untrusted
sources.
Extremely large results objects: If your optimization results object
is extremely large, calling :class:skopt.dump with store_objective=False might
cause performance issues. This is due to creation of a deep copy without the
objective function. If the objective function is not critical to you, you
can simply delete it before calling :class:skopt.dump. In this case, no deep
copy is created:
End of explanation
"""
|
mcs07/MolVS | examples/standardization.ipynb | mit | from rdkit.Chem.Draw import IPythonConsole
import logging
logger = logging.getLogger('molvs')
logger.setLevel(logging.INFO)
"""
Explanation: Standardization
Here are some examples of how to standardize molecules.
First set our iPython notebook to display molecule images and log messages:
End of explanation
"""
from molvs import standardize_smiles
standardize_smiles('C[n+]1c([N-](C))cccc1')
"""
Explanation: Standardizing a SMILES string
The standardize_smiles function provides a quick and easy way to get the standardized version of a given SMILES string:
End of explanation
"""
from rdkit import Chem
import molvs
from molvs import Standardizer
mol = Chem.MolFromSmiles('[Na]OC(=O)c1ccc(C[S+2]([O-])([O-]))cc1')
mol
s = Standardizer()
smol = s.standardize(mol)
smol
Chem.MolToSmiles(smol)
"""
Explanation: While this is convenient for one-off cases, it's inefficient when dealing with multiple molecules and doesn't allow any customization of the standardization process.
The Standardizer class
The Standardizer class provides flexibility to specify custom standardization stages and efficiently standardize multiple molecules.
End of explanation
"""
from molvs.normalize import Normalization
norms = (
Normalization('Nitro to N+(O-)=O', '[*:1][N,P,As,Sb:2](=[O,S,Se,Te:3])=[O,S,Se,Te:4]>>[*:1][*+1:2]([*-1:3])=[*:4]'),
Normalization('Pyridine oxide to n+O-', '[n:1]=[O:2]>>[n+:1][O-:2]'),
)
my_s = Standardizer(normalizations=norms)
smol = my_s.standardize(mol)
smol
"""
Explanation: The Standardizer class takes a number of initialization parameters to customize its behaviour:
End of explanation
"""
my_s.standardize(Chem.MolFromSmiles('C1=C(C=C(C(=C1)O)C(=O)[O-])[S](O)(=O)=O.[Na+]'))
my_s.standardize(Chem.MolFromSmiles('[Ag]OC(=O)O[Ag]'))
"""
Explanation: Notice that the sulfone group wasn't normalized in this case, because when initializing the Standardizer we only specified two Normalizations.
The default list of normalizations is molvs.normalize.NORMALIZATIONS.
It is possible to reuse a Standardizer instance on many molecules once it has been initialized with some parameters:
End of explanation
"""
|
eford/rebound | ipython_examples/Checkpoints.ipynb | gpl-3.0 | import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1e-6, a=1.)
sim.add(a=2.)
sim.integrator = "whfast"
sim.save("checkpoint.bin")
sim.status()
"""
Explanation: Checkpoints
You can easily save and load a REBOUND simulation to a binary file. The binary file includes all information about the particles (mass, position, velocity, etc), as well as the current simulation settings such as time, integrator choise, etc.
Let's add three particles to REBOUND and save them to a file.
End of explanation
"""
del sim
sim = rebound.Simulation.from_file("checkpoint.bin")
sim.status()
"""
Explanation: The binary files are small in size and store every floating point number exactly, so you don't have to worry about efficiency or losing precision. You can make lots of checkpoints if you want!
Let's delete the old REBOUND simulation (that frees up the memory from that simulation) and then read the binary file we just saved.
End of explanation
"""
|
aborgher/Main-useful-functions-for-ML | NLP/NLP.ipynb | gpl-3.0 | import enchant
# The underlying programming model provided by the Enchant library is based on the notion of Providers.
# A provider is a piece of code that provides spell-checking services which Enchant can use to perform its work.
# Different providers exist for performing spellchecking using different frameworks -
# for example there is an aspell provider and a MySpell provider.
## no need to check brokers while running enchant, this is just a simple check if all is installed
b = enchant.Broker()
print(b.describe())
b.list_dicts()
enchant.list_languages()
d = enchant.Dict("it_IT")
d.check('Giulia'), d.check('pappapero')
print( d.suggest("potreima") )
print( d.suggest("marema") )
print( d.suggest("se metto troppe parole lo impallo") )
print( d.suggest("van no") )
print( d.suggest("due parole") )
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Correction-with-enchant" data-toc-modified-id="Correction-with-enchant-1"><span class="toc-item-num">1 </span>Correction with enchant</a></div><div class="lev2 toc-item"><a href="#Add-your-own-dictionary" data-toc-modified-id="Add-your-own-dictionary-11"><span class="toc-item-num">1.1 </span>Add your own dictionary</a></div><div class="lev2 toc-item"><a href="#check-entire-phrase" data-toc-modified-id="check-entire-phrase-12"><span class="toc-item-num">1.2 </span>check entire phrase</a></div><div class="lev2 toc-item"><a href="#tokenization" data-toc-modified-id="tokenization-13"><span class="toc-item-num">1.3 </span>tokenization</a></div><div class="lev1 toc-item"><a href="#Word2vec" data-toc-modified-id="Word2vec-2"><span class="toc-item-num">2 </span>Word2vec</a></div><div class="lev1 toc-item"><a href="#Translate-using-google-translate" data-toc-modified-id="Translate-using-google-translate-3"><span class="toc-item-num">3 </span>Translate using google translate</a></div><div class="lev1 toc-item"><a href="#TreeTagger-usage-to-tag-an-italian-(or-other-languages)-sentence" data-toc-modified-id="TreeTagger-usage-to-tag-an-italian-(or-other-languages)-sentence-4"><span class="toc-item-num">4 </span>TreeTagger usage to tag an italian (or other languages) sentence</a></div>
# Correction with enchant
- install via pip install pyenchant
- add Italian (and Spanish) dictionaries: sudo apt-get install myspell-it myspell-es
- Tutorial at: http://pythonhosted.org/pyenchant/tutorial.html
End of explanation
"""
# Dict objects can also be used to check words against a custom list of correctly-spelled words
# known as a Personal Word List. This is simply a file listing the words to be considered, one word per line.
# The following example creates a Dict object for the personal word list stored in “mywords.txt”:
pwl = enchant.request_pwl_dict("../../Data_nlp/mywords.txt")
pwl.check('pappapero'), pwl.suggest('cittin'), pwl.check('altro')
# PyEnchant also provides the class DictWithPWL which can be used to combine a language dictionary
# and a personal word list file:
d2 = enchant.DictWithPWL("it_IT", "../../Data_nlp/mywords.txt")
d2.check('altro') & d2.check('pappapero'), d2.suggest('cittin')
%%timeit
d2.suggest('poliza')
"""
Explanation: Add your own dictionary
End of explanation
"""
from enchant.checker import SpellChecker
chkr = SpellChecker("it_IT")
chkr.set_text("questo è un picclo esmpio per dire cm funziona")
for err in chkr:
    print(err.word)
    print(chkr.suggest(err.word))
    print(chkr.word, chkr.wordpos)
    chkr.replace('pippo')
chkr.get_text()
"""
Explanation: check entire phrase
End of explanation
"""
from enchant.tokenize import get_tokenizer
tknzr = get_tokenizer("en_US")  # no tokenizer is available for it_IT yet
[w for w in tknzr("this is some simple text")]
from enchant.tokenize import get_tokenizer, HTMLChunker
tknzr = get_tokenizer("en_US")
[w for w in tknzr("this is <span class='important'>really important</span> text")]
tknzr = get_tokenizer("en_US",chunkers=(HTMLChunker,))
[w for w in tknzr("this is <span class='important'>really important</span> text")]
from enchant.tokenize import get_tokenizer, EmailFilter
tknzr = get_tokenizer("en_US")
[w for w in tknzr("send an email to fake@example.com please")]
tknzr = get_tokenizer("en_US", filters = [EmailFilter])
[w for w in tknzr("send an email to fake@example.com please")]
"""
Explanation: tokenization
As explained above, the module enchant.tokenize provides the ability to split text into its component words. The current implementation is based only on the rules for the English language, and so might not be completely suitable for your language of choice. Fortunately, it is straightforward to extend the functionality of this module.
To implement a new tokenization routine for the language TAG, simply create a class/function “tokenize” within the module “enchant.tokenize.TAG”. This function will automatically be detected by the module’s get_tokenizer function and used when appropriate. The easiest way to accomplish this is to copy the module “enchant.tokenize.en” and modify it to suit your needs.
End of explanation
"""
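The tokenizer protocol described above is easy to sketch. A minimal (and deliberately naive) `tokenize` function in the pyenchant style yields `(word, position)` tuples; this illustrative version just splits on runs of letters, which is far cruder than the shipped English rules:

```python
import re

def tokenize(text):
    """Yield (word, offset) pairs, mimicking pyenchant's tokenizer protocol."""
    for m in re.finditer(r"[^\W\d_]+", text, re.UNICODE):
        yield (m.group(), m.start())

print(list(tokenize("questo è un esempio")))
```

A real `enchant.tokenize.it` module would also need to handle apostrophes, abbreviations, and similar language-specific details.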
import gensim, logging
from gensim.models import Word2Vec
model = gensim.models.KeyedVectors.load_word2vec_format(
'../../Data_nlp/GoogleNews-vectors-negative300.bin.gz', binary=True)
model.doesnt_match("breakfast brian dinner lunch".split())
# evaluate_word_pairs expects the path to a tab-separated file of word pairs with
# human similarity judgements (w1 \t w2 \t score) and reports the correlation
# between the model's similarities and the human scores, e.g.:
# model.evaluate_word_pairs('wordsim353.tsv')  # a pairs file is required
len(model.index2word)
# check accuracy against a premade grouped words
questions_words = model.accuracy('../../Data_nlp/word2vec/trunk/questions-words.txt')
phrases_words = model.accuracy('../../Data_nlp/word2vec/trunk/questions-phrases.txt')
questions_words[4]['incorrect']
print( model.n_similarity(['pasta'], ['spaghetti']) )
print( model.n_similarity(['pasta'], ['tomato']) )
print( model.n_similarity(['pasta'], ['car']) )
print( model.n_similarity(['cat'], ['dog']) )
model.similar_by_vector( model.word_vec('welcome') )
model.similar_by_word('welcome')
model.syn0[4,]
model.index2word[4]
model.word_vec('is')
model.syn0norm[4,]
model.vector_size
import numpy as np
model.similar_by_vector( (model.word_vec('Goofy') + model.word_vec('Minni'))/2 )
import pyemd
# This method only works if `pyemd` is installed (can be installed via pip, but requires a C compiler).
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
# Remove their stopwords.
import nltk
stopwords = nltk.corpus.stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stopwords]
sentence_president = [w for w in sentence_president if w not in stopwords]
# Compute WMD.
distance = model.wmdistance(sentence_obama, sentence_president)
print(distance)
import nltk
stopwords = nltk.corpus.stopwords.words('english')
def sentence_distance(s1, s2):
    sentence_obama = [w for w in s1.split() if w not in stopwords]
    sentence_president = [w for w in s2.split() if w not in stopwords]
    print(sentence_obama, sentence_president, sep='\t')
    print(model.wmdistance(sentence_obama, sentence_president), end='\n\n')
sentence_distance('I run every day in the morning', 'I like football')
sentence_distance('I run every day in the morning', 'I run since I was born')
sentence_distance('I run every day in the morning', 'you are idiot')
sentence_distance('I run every day in the morning', 'Are you idiot?')
sentence_distance('I run every day in the morning', 'Is it possible to die?')
sentence_distance('I run every day in the morning', 'Is it possible to die')
sentence_distance('I run every day in the morning', 'I run every day')
sentence_distance('I run every day in the morning', 'I eat every day')
sentence_distance('I run every day in the morning', 'I have breakfast in the morning')
sentence_distance('I run every day in the morning', 'I have breakfast every day in the morning')
sentence_distance('I run every day in the morning', 'Each day I run')
sentence_distance('I run every day in the morning', 'I run every day in the morning')
sentence_distance('I run every day in the morning', 'Each day I run')
sentence_distance('I run every day in the morning', 'Each I run')
sentence_distance('I run every day in the morning', 'Each day run')
sentence_distance('I run every day in the morning', 'Each day I')
sentence_distance('I every day in the morning', 'Each day I run')
sentence_distance('I run day in the morning', 'Each day I run')
sentence_distance('I run every in morning', 'Each day I run')
sentence_distance('I run every in', 'Each day I run')
def get_vect(w):
    try:
        return model.word_vec(w)
    except KeyError:
        return np.zeros(model.vector_size)

def calc_avg(s):
    ws = [get_vect(w) for w in s.split() if w not in stopwords]
    avg_vect = sum(ws)/len(ws)
    return avg_vect

from scipy.spatial import distance

def get_euclidean(s1, s2):
    return distance.euclidean(calc_avg(s1), calc_avg(s2))
# same questions
s1 = 'Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?'
s2 = "I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?"
sentence_distance(s1, s2)
print(get_euclidean(s1, s2))
# same questions as above without punctuations
s1 = 'Astrology I am a Capricorn Sun Cap moon and cap rising what does that say about me'
s2 = "I am a triple Capricorn Sun Moon and ascendant in Capricorn What does this say about me"
sentence_distance(s1, s2)
print(get_euclidean(s1, s2))
# same questions
s1 = 'What is best way to make money online'
s2 = 'What is best way to ask for money online?'
sentence_distance(s1,s2)
print(get_euclidean(s1, s2))
# different questions
s1 = 'How did Darth Vader fought Darth Maul in Star Wars Legends?'
s2 = 'Does Quora have a character limit for profile descriptions?'
sentence_distance(s1,s2)
print(get_euclidean(s1, s2))
# the order of the words doesn't change the distance between the two phrases
s1ws = [w for w in s1.split() if w not in stopwords]
s2ws = [w for w in s2.split() if w not in stopwords]
print(model.wmdistance(s1ws, s2ws) )
print(model.wmdistance(s1ws[::-1], s2ws) )
print(model.wmdistance(s1ws, s2ws[::-1]) )
print(model.wmdistance(s1ws[3:]+s1ws[0:3], s2ws[::-1]) )
"""
Explanation: Other modules:
- CmdLineChecker
The module enchant.checker.CmdLineChecker provides the class CmdLineChecker which can be used to interactively check the spelling of some text. It uses standard input and standard output to interact with the user through a command-line interface. The code below shows how to create and use this class from within a python application, along with a short sample checking session:
- wxSpellCheckerDialog
The module enchant.checker.wxSpellCheckerDialog provides the class wxSpellCheckerDialog which can be used to interactively check the spelling of some text. The code below shows how to create and use such a dialog from within a wxPython application.
Word2vec
pip install gensim
pip install pyemd
https://radimrehurek.com/gensim/models/word2vec.html
End of explanation
"""
from googletrans import Translator
with open("../../AliceNelPaeseDelleMeraviglie.txt") as f:
    text = f.read()  # read the whole book (and avoid shadowing the builtin `all`)
translator = Translator()
for i in range(42, 43):
    chunk = text[i * 1000:i * 1000 + 1000]
    print(chunk, end='\n\n')
    print(translator.translate(chunk, dest='en').text)
## if the source language is not passed it is guessed, so the library can also detect a language
frase = "Ciao Giulia, ti va un gelato?"
det = translator.detect(frase)
print("Languge:", det.lang, " with confidence:", det.confidence)
# command-line usage, though it does not seem to work for me
!translate "veritas lux mea" -s la -d en
translations = translator.translate(
['The quick brown fox', 'jumps over', 'the lazy dog'], dest='ko')
for translation in translations:
    print(translation.origin, ' -> ', translation.text)
phrase = translator.translate(frase, 'en')
phrase.origin, phrase.text, phrase.src, phrase.pronunciation, phrase.dest
"""
Explanation: conclusion:
- the distances work well
- the order of the words is not taken into account
Translate using google translate
https://github.com/ssut/py-googletrans
should be free and unlimited; an internet connection is required
pip install googletrans
End of explanation
"""
from treetagger import TreeTagger
tt = TreeTagger(language='english')
tt.tag('What is the airspeed of an unladen swallow?')
tt = TreeTagger(language='italian')
tt.tag('Proviamo a vedere un pò se funziona bene questo tagger')
"""
Explanation: TreeTagger usage to tag an italian (or other languages) sentence
How To install:
- nltk need to be already installed and working
- follow the instruction from http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/
- run TreeTagger on terminal (echo 'Ciao Giulia come stai?' | tree-tagger-italian) to see if everything is working
- clone the GitHub repo to get the Python support from: https://github.com/miotto/treetagger-python
- run /home/ale/anaconda3/bin/python setup.py install and everything should work (note that you need to specify which python you want, the default is python2)
Infos:
- The maximum character limit on a single text is 15k.
- this is an unofficial API, so there is no guarantee it will keep working at all times
- for a more stable API, use the non-free https://cloud.google.com/translate/docs/
- If you get HTTP 5xx error or errors like #6, it's probably because Google has banned your client IP address
End of explanation
"""
|
DistrictDataLabs/ceb-training | 03 - Regression Analysis.ipynb | mit | %matplotlib notebook
import os
import sklearn
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Fixtures
GENDATA = os.path.join("data", "generated")
DATASET = "dataset{}.txt"
TARGET = "target{}.txt"
COEFS = "coefs{}.txt"
def load_gendata(suffix=""):
    X = np.loadtxt(os.path.join(GENDATA, DATASET.format(suffix)))
    y = np.loadtxt(os.path.join(GENDATA, TARGET.format(suffix)))
    w = np.loadtxt(os.path.join(GENDATA, COEFS.format(suffix)))
    return X, y, w
X,y,w = load_gendata() # Sample data set
Xc,yc,wc = load_gendata("-collin") # Collinear data set
Xd,yd,wd = load_gendata("-demo") # Demo data set
# Fix for 1D demo (for viz)
Xd = Xd.reshape(Xd.shape[0], 1)
"""
Explanation: Regression Analysis with Scikit-Learn
Linear Regression fits a linear model to the data by adjusting a set of coefficients w to minimize the residual sum of squares between observed responses & prediction.
Linear model: $y=X\beta+\epsilon$
Objective function: $min_w \sum (Xw -y)^2$
Predictive model: $\hat{y}(w,x)=w_0 + w_1x_1+...+w_px_p$
Notation:
$y$ is the observed value
$X$ is the input variables
$\beta$ is the set of coefficients
$\epsilon$ is noise or randomness in observation
$w$ is the array of weights
$w_0$ is the ability to adjust the plane in space
$\hat{y}$ is the predicted value
End of explanation
"""
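Before handing the job to scikit-learn, it is worth seeing that the objective above has a direct least-squares solution. A small sketch on synthetic data (the coefficient values are made up for illustration):

```python
import numpy as np

rng = np.random.RandomState(42)
X = np.column_stack([np.ones(100), rng.randn(100, 2)])  # first column plays the role of w_0
w_true = np.array([1.0, 2.0, -3.0])
y = X @ w_true + 0.01 * rng.randn(100)

# Solve min_w ||Xw - y||^2 (equivalent to the normal equations)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w_hat, 2))
```

The recovered weights sit very close to `w_true`, which is exactly what `LinearRegression` computes below.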
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(Xd, yd)
print(model)
print(model.coef_)
print(wd)
print(model.intercept_)
def draw_model(X, y, model, w):
    k = X.shape[1]
    if k > 2 or k < 1:
        raise ValueError("Cannot plot in more than 3D!")
    # Determine if 2D or 3D
    fig = plt.figure()
    if k == 2:
        ax = fig.add_subplot(111, projection='3d')
        # Scatter plot of points
        ax.scatter(X[:, 0], X[:, 1], y)
        # Line plot of original model
        xm, ym = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max()),
                             np.linspace(X[:, 1].min(), X[:, 1].max()))
        zm = w[0]*xm + w[1]*ym + model.intercept_
        ax.plot_wireframe(xm, ym, zm, alpha=0.5, c='b')
        # Line plot of predicted model (predict on the stacked grid, then reshape)
        Xp = np.column_stack([xm.ravel(), ym.ravel()])
        zp = model.predict(Xp).reshape(xm.shape)
        ax.plot_wireframe(xm, ym, zp, alpha=0.5, c='g')
    else:
        ax = fig.add_subplot(111)
        # Scatter plot of points
        ax.scatter(X, y)
        # Line plot of original model
        Xm = np.linspace(X.min(), X.max())
        Xm = Xm.reshape(Xm.shape[0], 1)
        ym = np.dot(Xm, w)
        ax.plot(Xm, ym, c='b')
        # Line plot of predicted model
        yp = model.predict(Xm)
        ax.plot(Xm, yp, c='g')
    return ax
draw_model(Xd, yd, model, wd)
"""
Explanation: Ordinary Least Squares
Keep adjusting the parameters until the sum of squared residuals is minimal (i.e. minimize a cost function).
Relies on the independence of the model terms.
multicollinearity: two or more predictor variables in a multiple regression model are highly correlated, so that one can be linearly predicted from the others.
If this happens, the estimate becomes sensitive to error.
End of explanation
"""
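A quick sketch of why multicollinearity matters: with two nearly identical columns, the individual least-squares coefficients are poorly determined (they can swing to large, mutually canceling values), while their sum — the combined effect — remains stable:

```python
import numpy as np

rng = np.random.RandomState(0)
x1 = rng.randn(200)
x2 = x1 + 1e-6 * rng.randn(200)   # almost perfectly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.randn(200)     # true combined effect is 1

w, *_ = np.linalg.lstsq(X, y, rcond=None)
# Individual weights may swing wildly, but their sum stays near 1
print(w, w.sum())
```

This instability under noise is what the regularized models later in the notebook are designed to tame.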
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split as tts  # sklearn.cross_validation was removed in newer releases
X_train, X_test, y_train, y_test = tts(X, y)
model = LinearRegression()
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
r2 = model.score(X_test, y_test)
print("MSE: {:0.3f} | R2: {:0.3f}".format(mse, r2))
"""
Explanation: Evaluating Models
End of explanation
"""
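The two metrics used below are easy to compute by hand; a small worked sketch with made-up numbers:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)            # mean squared error = 0.375
ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1 - ss_res / ss_tot                         # R^2: fraction of variance explained
print(mse, r2)
```

`mean_squared_error` and `model.score` compute exactly these quantities.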
X_train, X_test, y_train, y_test = tts(Xc, yc)
model = LinearRegression()
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
r2 = model.score(X_test, y_test)
print("{}\nMSE: {:0.3f} | R2: {:0.3f}".format(model,mse, r2))
"""
Explanation: Regularization
As we increase the complexity of the model we reduce the bias but increase the variance of the model.
Variance: the tendency for the model to fit to noise (randomness) -- overfit.
Introduce a parameter to penalize complexity in the function being minimized.
Vector Norm
Describes the length of the vector.
L1: sum of the absolute values of components
L2: euclidian distance from the origin
L∞: maximal absolute value component
End of explanation
"""
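The three norms for a concrete vector, as a quick sketch:

```python
import numpy as np

w = np.array([3.0, -4.0])
l1 = np.abs(w).sum()            # 7.0  (sum of absolute values)
l2 = np.sqrt((w ** 2).sum())    # 5.0  (Euclidean distance from the origin)
linf = np.abs(w).max()          # 4.0  (maximal absolute component)
print(l1, l2, linf)
# np.linalg.norm(w, ord) gives the same results for ord = 1, 2, np.inf
```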
from sklearn.linear_model import Ridge
model = Ridge(alpha=0.1)
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
r2 = model.score(X_test, y_test)
print("{}\nMSE: {:0.3f} | R2: {:0.3f}".format(model, mse, r2))
"""
Explanation: Ridge Regularization
Prevent overfit/collinearity by penalizing the size of coefficients - minimize the penalized residual sum of squares:
Said another way, it shrinks the coefficients toward zero.
$\min_w \|Xw-y\|_2^2 + \alpha\|w\|_2^2$
Where 𝛼 > 0 is complexity parameter that controls shrinkage. The larger 𝛼, the more robust the model to collinearity.
Alpha influences the bias/variance tradeoff: the larger the ridge alpha, the higher the bias and the lower the variance.
End of explanation
"""
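The shrinkage effect of 𝛼 can be seen directly from the closed-form ridge solution $w = (X^TX + \alpha I)^{-1}X^Ty$: as 𝛼 grows, the coefficient norm shrinks toward zero. A small sketch on synthetic data:

```python
import numpy as np

rng = np.random.RandomState(1)
X = rng.randn(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.randn(100)

norms = []
for alpha in (0.0, 10.0, 1000.0):
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)
    norms.append(np.linalg.norm(w))
print(norms)  # coefficient norm decreases as alpha grows
```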
from sklearn.linear_model import Lasso
model = Lasso(alpha=0.5)
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
r2 = model.score(X_test, y_test)
print("{}\nMSE: {:0.3f} | R2: {:0.3f}".format(model, mse, r2))
"""
Explanation: LASSO Regularization
Reducing bias is one thing, but what if the true coefficients are sparse? The more dimensions we add, the more coefficients the model must estimate.
Lasso prefers fewer parameters, attempting to reduce the number of variables the solution depends on.
$\min_w \frac{1}{2n}\|Xw-y\|_2^2 + \alpha\|w\|_1$
The term $\alpha‖w‖_1$ is the L1 norm, whereas in ridge we used the L2 norm, $\alpha‖w‖_2$.
See also Least Angle Regression (LARS) as similar.
End of explanation
"""
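Lasso's sparsity comes from the soft-thresholding behavior of the L1 penalty: in the coordinate-descent update, any coefficient whose unpenalized value falls below the threshold is set exactly to zero. A sketch of that operator:

```python
import numpy as np

def soft_threshold(z, t):
    """Solution of min_w 0.5*(w - z)**2 + t*|w| — the Lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([-2.0, -0.3, 0.1, 1.5])
print(soft_threshold(z, 0.5))  # small entries are set exactly to zero
```

This is why Lasso produces exact zeros, whereas Ridge only shrinks coefficients without ever zeroing them.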
from sklearn.linear_model import ElasticNet
model = ElasticNet(alpha=0.5)
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
r2 = model.score(X_test, y_test)
print("{}\nMSE: {:0.3f} | R2: {:0.3f}".format(model, mse, r2))
"""
Explanation: ElasticNet Regularization
Model trained with both L1 and L2 prior as regularizer.
This combination allows for learning a sparse model where few of the weights are non-zero like Lasso, while still maintaining the regularization properties of Ridge. Can control the convex combination of L1 and L2 using a ratio parameter.
Elastic-net is useful when there are multiple features which are correlated with one another. Lasso is likely to pick one of these at random, while elastic-net is likely to pick both.
A practical advantage of trading-off between Lasso and Ridge is it allows Elastic-Net to inherit some of Ridge’s stability under rotation.
$\min_w \frac{1}{2n}\|Xw-y\|_2^2 + \alpha\rho\|w\|_1 + \frac{\alpha(1-\rho)}{2}\|w\|_2^2$
End of explanation
"""
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
alphas = np.logspace(-10, -2, 200)
ridge = RidgeCV(alphas=alphas)
lasso = LassoCV(alphas=alphas)
elnet = ElasticNetCV(alphas=alphas)
for model in (ridge, lasso, elnet):
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    r2 = model.score(X_test, y_test)
    print("{}\nAlpha: {:0.3f} | MSE: {:0.3f} | R2: {:0.3f}".format(model, model.alpha_, mse, r2))
clf = Ridge(fit_intercept=False)
errors = []
for alpha in alphas:
    splits = tts(X, y, test_size=0.2)
    X_train, X_test, y_train, y_test = splits
    clf.set_params(alpha=alpha)
    clf.fit(X_train, y_train)
    error = mean_squared_error(y_test, clf.predict(X_test))
    errors.append(error)
axe = plt.gca()
axe.plot(alphas, errors)
"""
Explanation: Choosing Alpha
We can search for the best parameter using the ModelCV which is a form of Grid Search, but uses a more efficient form of leave-one-out cross-validation.
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
model = Pipeline([
('poly', PolynomialFeatures(2)),
('ridge', RidgeCV(alphas=alphas)),
])
model.fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
r2 = model.score(X_test, y_test)
print("{}\nMSE: {:0.3f} | R2: {:0.3f}".format(model, mse, r2))
"""
Explanation: Polynomial Regression
In order to do higher order polynomial regression, we can use linear models trained on nonlinear functions of data!
Speed of linear model computation
Fit a wider range of data or functions
But remember: polynomials aren’t the only functions to fit
The way this works is via Pipelining.
Consider the standard linear regression case:
$\hat{y}(w,x) = w_0 + \sum_i^n{w_ix_i}$
The quadratic case (polynomial degree = 2) is:
$\hat{y}(w,v,x) = w_0 + \sum_i^n{w_ix_i} + \sum_i^n{v_ix_i^2}$
But this can just be seen as a new feature space:
$z = [x_1,...,x_n,x_1^2,...,x_n^2]$
And this feature space can be computed in a linear fashion. We just need some way to add our 2nd degree dimensions.
End of explanation
"""
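The feature-space trick is simple to write out by hand; `PolynomialFeatures` in the pipeline below does the same thing (plus interaction terms and a bias column). A sketch for one feature at degree 2:

```python
import numpy as np

x = np.array([[1.0], [2.0], [3.0]])
z = np.hstack([x, x ** 2])   # z = [x, x^2]: nonlinear features for a *linear* model
print(z)
```

A linear regression fit on `z` is a quadratic fit on `x`.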
ENERGY = "http://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx"
def download_data(url, path='data'):
    if not os.path.exists(path):
        os.mkdir(path)
    response = requests.get(url)
    name = os.path.basename(url)
    with open(os.path.join(path, name), 'wb') as f:
        f.write(response.content)
download_data(ENERGY)
energy = pd.read_excel('data/ENB2012_data.xlsx')  # read_excel takes no sep argument
energy.columns = ['compactness','surface_area','wall_area','roof_area','height',\
'orientation','glazing_area','distribution','heating_load','cooling_load']
energy.head()
energy.describe()
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in newer releases
ax = scatter_matrix(energy, alpha=0.2, figsize=(9,9), diagonal='kde')
energy_features = energy.iloc[:, 0:8]  # .ix is deprecated; use positional indexing
energy_labels = energy.iloc[:, 8:]
"""
Explanation: Energy Data Set
End of explanation
"""
from sklearn.linear_model import RandomizedLasso
model = RandomizedLasso(alpha=0.1)
model.fit(energy_features, energy_labels["heating_load"])
names = list(energy_features)
print("Features sorted by their score:")
print(sorted(zip(map(lambda x: round(x, 4), model.scores_),
names), reverse=True))
model = RandomizedLasso(alpha=0.1)
model.fit(energy_features, energy_labels["cooling_load"])
names = list(energy_features)
print("Features sorted by their score:")
print(sorted(zip(map(lambda x: round(x, 4), model.scores_),
names), reverse=True))
"""
Explanation: Are features predictive?
End of explanation
"""
heat_labels = energy.iloc[:, 8]
def fit_and_evaluate(model, X, y):
    X_train, X_test, y_train, y_test = tts(X, y)
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    r2 = model.score(X_test, y_test)
    print("{}\nMSE: {:0.3f} | R2: {:0.3f}".format(model, mse, r2))
from sklearn.ensemble import RandomForestRegressor
models = [
LinearRegression(),
RidgeCV(alphas=alphas),
LassoCV(alphas=alphas),
ElasticNetCV(alphas=alphas),
RandomForestRegressor(),
]
for model in models:
    fit_and_evaluate(model, energy_features, heat_labels)
"""
Explanation: Predicting Heating Load
End of explanation
"""
|
wmvanvliet/neuroscience_tutorials | posthoc/linear_regression.ipynb | bsd-2-clause | import mne
epochs = mne.read_epochs('subject04-epo.fif')
epochs.metadata
"""
Explanation: <a href="https://mybinder.org/v2/gh/wmvanvliet/neuroscience_tutorials/master?filepath=posthoc%2Flinear_regression.ipynb" target="_new" style="float: right"><img src="qr.png" alt="https://mybinder.org/v2/gh/wmvanvliet/neuroscience_tutorials/master?filepath=posthoc%2Flinear_regression.ipynb"></a>
Marijn van Vliet
A deep dive into linear models
tiny.cc/deepdive
Loading the data
End of explanation
"""
epochs.plot(n_channels=32, n_epochs=10);
"""
Explanation: Epochs: snippets of EEG data
End of explanation
"""
unrelated = epochs['FAS < 0.1'].average()
related = epochs['FAS > 0.1'].average()
mne.viz.plot_evoked_topo([related, unrelated]);
"""
Explanation: Evoked: averaging across epochs
End of explanation
"""
ROI = epochs.copy()
ROI.pick_channels(['P3', 'Pz', 'P4'])
ROI.crop(0.3, 0.47)
FAS_pred = ROI.get_data().mean(axis=(1, 2))
from scipy.stats import pearsonr
print('Performance: %.2f' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
"""
Explanation: Challenge:
Deduce the memory priming effect for a word-pair, given the EEG epoch
Naive approach: average signal in ROI
End of explanation
"""
print(epochs.get_data().shape)
X = epochs.get_data().reshape(200, 32 * 60)
y = epochs.metadata['FAS'].values
from sklearn.preprocessing import normalize
X = normalize(X)
print('X:', X.shape)
print('y:', y.shape)
"""
Explanation: Machine learning approach: linear regression
End of explanation
"""
from sklearn.linear_model import LinearRegression
model = LinearRegression().fit(X, y)
FAS_pred = model.predict(X)
print('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
from sklearn.model_selection import cross_val_predict
FAS_pred = cross_val_predict(model, X, y, cv=10)
print('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
"""
Explanation: Performing linear regression
End of explanation
"""
model.fit(X, y)
weights = model.coef_.reshape(32, 60)
ev = mne.EvokedArray(weights, epochs.info, tmin=epochs.times[0], comment='weights')
ev.plot_topo();
"""
Explanation: Inspecting the weights
End of explanation
"""
from posthoc import Workbench
model = Workbench(LinearRegression())
model.fit(X, y)
cov_X = X.T @ X / len(X)
pattern = model.pattern_
normalizer = model.normalizer_
"""
Explanation: What's going on here?
https://users.aalto.fi/~vanvlm1/posthoc/regression.html
The post-hoc framework
Data covariance matrix
Haufe pattern matrix
Normalizer
End of explanation
"""
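The Haufe pattern mentioned above can be sketched directly: for a linear model with weights w, the activation pattern is proportional to cov(X) · w (Haufe et al., 2014). Scaling by the prediction variance, as done here, is one common convention — treat this as an illustrative sketch, not the exact posthoc implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(500, 3)
w = np.array([1.0, -2.0, 0.5])
y_hat = X @ w

# Haufe pattern: forward-model view of the backward-model weights
pattern = np.cov(X.T) @ w / np.var(y_hat)
print(pattern)
```

For (approximately) white data the pattern is simply a rescaled copy of the weights; with correlated features the two can differ dramatically, which is why the pattern, not the weights, should be interpreted.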
from matplotlib import pyplot as plt
plt.matshow(cov_X, cmap='magma')
# Show channel names
plt.xticks(range(0, 32 * 60, 60), epochs.ch_names, rotation=90)
plt.yticks(range(0, 32 * 60, 60), epochs.ch_names);
"""
Explanation: The data covariance
End of explanation
"""
import numpy as np
# Amount of shrinkage
alpha = 0.75
# Shrinkage formula
shrinkage_target = np.identity(32 * 60) * np.trace(cov_X) / len(cov_X)
cov_X_mod = alpha * shrinkage_target + (1 - alpha) * cov_X
# Plot shrunk covariance
plt.matshow(cov_X_mod, cmap='magma')
plt.xticks(range(0, 32 * 60, 60), epochs.ch_names, rotation=90)
plt.yticks(range(0, 32 * 60, 60), epochs.ch_names);
"""
Explanation: Shrinking the covariance
End of explanation
"""
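Shrinkage pulls the covariance eigenvalues toward their mean, which is what makes the matrix well-conditioned enough to invert. A tiny numeric sketch of the same formula used above:

```python
import numpy as np

cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])
alpha = 0.5
target = np.identity(2) * np.trace(cov) / 2   # 2.5 * I, preserving the mean variance
cov_shrunk = alpha * target + (1 - alpha) * cov
print(np.diag(cov_shrunk))  # eigenvalues move from (4, 1) toward (3.25, 1.75)
```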
from posthoc.cov_estimators import ShrinkageKernel
model = Workbench(LinearRegression(), cov=ShrinkageKernel(alpha=0.97))
FAS_pred = cross_val_predict(model, X, y, cv=10)
print('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
"""
Explanation: Post-hoc modification of the model
End of explanation
"""
pattern_ev = mne.EvokedArray(pattern.reshape(32, 60), epochs.info, epochs.times[0], comment='pattern')
pattern_ev.plot_topo();
"""
Explanation: The pattern matrix
End of explanation
"""
import numpy as np
def pattern_modifier(pattern, X_train=None, y_train=None, mu=0.36, sigma=0.06):
    pattern = pattern.reshape(32, 60)
    # Define mu and sigma in samples
    mu = np.searchsorted(epochs.times, mu)
    sigma = sigma * epochs.info['sfreq']
    # Formula for Gaussian curve
    kernel = np.exp(-0.5 * ((np.arange(60) - mu) / sigma) ** 2)
    return (pattern * kernel).ravel()
pattern_mod = pattern_modifier(pattern)
pattern_mod = mne.EvokedArray(pattern_mod.reshape(32, 60), epochs.info, epochs.times[0], comment='pattern')
pattern_mod.plot_topo();
"""
Explanation: Modifying the pattern matrix
<img src="kernel.png" width="400">
End of explanation
"""
model = Workbench(LinearRegression(), cov=ShrinkageKernel(0.97), pattern_modifier=pattern_modifier)
FAS_pred = cross_val_predict(model, X, y, cv=10)
print('Performance: %.2f (to beat: 0.30, 0.35)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
"""
Explanation: Post-hoc modifying the pattern in the model
End of explanation
"""
print(normalizer)
"""
Explanation: To find out more, read the paper!
https://www.biorxiv.org/content/10.1101/518662v2
Marijn van Vliet & Riitta Salmelin
Post-hoc modification of linear models: combining machine learning with domain information to make solid inferences from noisy data
NeuroImage (2020)
For more interactive neuroscience tutorials:
https://github.com/wmvanvliet/neuroscience_tutorials
The normalizer
End of explanation
"""
def scorer(model, X, y):
    return pearsonr(model.predict(X), y)[0]
from posthoc import WorkbenchOptimizer
model = WorkbenchOptimizer(LinearRegression(), cov=ShrinkageKernel(0.95),
pattern_modifier=pattern_modifier, pattern_param_x0=[0.4, 0.05], pattern_param_bounds=[(0, 0.8), (0.01, 0.5)],
scoring=scorer)
model.fit(X, y)
print('Optimal parameters: alpha=%.3f, mu=%.3f, sigma=%.3f'
% tuple(model.cov_params_ + model.pattern_modifier_params_))
"""
Explanation: Automatic optimization
End of explanation
"""
import numpy as np
def modify_X(X, X_train=None, y_train=None, mu=0.36, sigma=0.06):
    X = X.reshape(200, 32, 60)
    # Define mu and sigma in samples
    mu = np.searchsorted(epochs.times, mu)
    sigma = sigma * epochs.info['sfreq']
    # Formula for Gaussian curve
    kernel = np.exp(-0.5 * ((np.arange(60) - mu) / sigma) ** 2)
    return (X * kernel).reshape(200, -1)
X_mod = modify_X(X)
model = LinearRegression()
FAS_pred = cross_val_predict(model, X_mod, y, cv=10)
print('LR performance: %.2f (to beat: 0.30, 0.38)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
model = Workbench(LinearRegression(), cov=ShrinkageKernel(alpha=0.97))
FAS_pred = cross_val_predict(model, X_mod, y, cv=10)
print('Shrinkage LR performance: %.2f (to beat: 0.30, 0.38)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])
"""
Explanation: Feature selection vs. Pattern modification
End of explanation
"""
|