Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:
- The total number of students, n_students.
- The total number of features for each student, n_features.
- The number of those students who passed, n_passed.
- The number of those students who failed, n_failed.
- The graduation rate of the class, grad_rate, in percent (%).
|
# TODO: Calculate number of students
n_students = len(student_data)
# TODO: Calculate number of features
n_features = len(student_data.columns) - 1 # The last field is the target and is not a feature
# TODO: Calculate passing students
n_passed = len([x for x in student_data["passed"] if x == "yes"])
# TODO: Calculate failing students
n_failed = n_students - n_passed
# TODO: Calculate graduation rate
grad_rate = 100.0 * n_passed / n_students
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
|
student_intervention/student_intervention.ipynb
|
taylort7147/udacity-projects
|
mit
|
Implementation: Training and Testing Data Split
So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:
- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a random_state for the function(s) you use, if provided.
- Store the results in X_train, X_test, y_train, and y_test.
|
# TODO: Import any additional functionality you may need here
from sklearn.cross_validation import train_test_split
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
random_state = 0
# TODO: Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = train_test_split(
X_all,
y_all,
test_size=num_test,
train_size=num_train,
random_state=random_state
)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
|
student_intervention/student_intervention.ipynb
|
taylort7147/udacity-projects
|
mit
|
Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
The following supervised learning models are currently available in scikit-learn that you may choose from:
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
Question 2 - Model Application
List three supervised learning models that are appropriate for this problem. For each model chosen
- Describe one real-world application in industry where the model can be applied. (You may need to do a small bit of research for this — give references!)
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
Answer:
Gaussian Naive Bayes
<br>
This method has been used to implement a classification algorithm inside a database management system (DBMS) [1]. It was chosen because many NB operations translate well into database query languages, which enhances the scalability of the algorithm. For example, query languages are very good at filtering and counting items, which comes in handy for calculating the prior distributions of classes. Other attributes that support this solution include Gaussian NB's resilience to noise in data sets and its ability to scale well with a large number of dimensions/features, which is likely the case in a DBMS.
The solution is demonstrated by streaming JSON data from Twitter and classifying the emotion of the posts.
Gaussian NB is very robust to noise in the data. It also scales linearly with higher dimensionality and can operate on both discrete and continuous data. However, it assumes that features are independent, so it does not model interrelationships, and it assumes that each continuous feature is normally distributed within each class.
The data we will be training on and classifying will likely contain noise, and there are several dimensions, some of which are numeric and continuous, which makes NB a good candidate. However, while some of the numerical data is normally distributed (e.g., studytime, freetime, goout), some of it is not, and many of the features are categorical, which does not translate well to continuous distributions.
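To make the normality assumption above concrete, here is a minimal sketch on made-up toy data (not the student dataset): GaussianNB literally stores one fitted mean per class per feature.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy data, invented for illustration: two continuous features, binary labels.
X = np.array([[1.0, 5.0], [1.2, 4.8], [0.9, 5.1],
              [3.0, 1.0], [3.2, 0.9], [2.8, 1.1]])
y = np.array(["no", "no", "no", "yes", "yes", "yes"])

clf = GaussianNB()
clf.fit(X, y)

# One fitted mean per class per feature -- the per-feature Gaussian assumption.
print(clf.theta_.shape)          # (2 classes, 2 features)
print(clf.predict([[3.1, 1.0]]))
```

A point near the second cluster is assigned its class; each class-conditional likelihood is just a product of per-feature Gaussians.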
Logistic Regression
<br>
Logistic regression was used as a prediction model for customer satisfaction using factors such as on-time reliability, safety, etc. [2].
Logistic regression is well suited to linearly separable data and produces class probabilities directly. It works well when there are many examples in each category/feature, which is the case for this data.
Logistic regression would be a good candidate for this problem because the data is categorical and we are expecting a classification result. There are many examples in each category.
Ensemble Methods
<br>
Ensemble methods have been used in weather forecast systems [3]. They have led to more accurate predictions than the purely statistical post-processing systems that preceded them (and are still in use). Rather than replacing the previous models, the ensemble builds on them, as ensemble methods are meant to do. This approach also opens an opportunity to merge predictions from other local forecasting systems to improve overall predictions based on a larger scope of input data.
One of the strengths of ensemble methods, specifically boosting, is that accuracy tends to keep improving with larger training sizes, whereas other models tend to start overfitting after a certain point. However, ensemble methods require more computational power, since they rely on multiple sub-models under the hood.
This model could be a good candidate for the student intervention system because there are many features to learn from, which may or may not be independent of each other, and the multiple learners in the ensemble can each capture certain behaviors, much like a spam detection filter.
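To make the "multiple sub-models under the hood" point concrete, here is a minimal sketch on synthetic data (not the student dataset) showing that scikit-learn's AdaBoost fits one weak learner per boosting round:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in for the student data, generated only for illustration.
X, y = make_classification(n_samples=300, n_features=10, random_state=14)

clf = AdaBoostClassifier(n_estimators=50, random_state=14)
clf.fit(X, y)

# AdaBoost keeps every fitted weak learner (decision stumps by default),
# which is why its training and prediction costs are higher than a single model's.
print(len(clf.estimators_))  # at most n_estimators
```

Prediction is a weighted vote over all stored stumps, so both fit and predict times grow with the number of boosting rounds.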
References
<br>
- [1] Sellam, T. Embedding Naive Bayes classification in a Functional and Object Oriented DBMS. 2010. <http://homepages.cwi.nl/~sellam/ThibaultSellamMSc.pdf>
- [2] <http://www.macrothink.org/journal/index.php/ijhrs/article/viewFile/2868/2669>
- [3] <https://www.atmos.washington.edu/academics/classes/2010Q3/101/materials/wk6_ensemble.pdf>
Setup
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- train_classifier - takes as input a classifier and training data and fits the classifier to the data.
- predict_labels - takes as input a fit classifier, features, and a target labeling, makes predictions, and scores them using the F<sub>1</sub> score.
- train_predict - takes as input a classifier, and the training and testing data, and performs train_classifier and predict_labels.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
|
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifier based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
|
student_intervention/student_intervention.ipynb
|
taylort7147/udacity-projects
|
mit
|
Implementation: Model Performance Metrics
With the predefined functions above, you will now import the three supervised learning models of your choice and run the train_predict function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in clf_A, clf_B, and clf_C.
- Use a random_state for each model you use, if provided.
- Note: Use the default settings for each model — you will tune one specific model in a later section.
- Create the different training set sizes to be used to train each model.
- Do not reshuffle and resplit the data! The new training points should be drawn from X_train and y_train.
- Fit each model with each training set size and make predictions on the test set (9 in total).
Note: Three tables are provided after the following code cell which can be used to store your results.
|
# TODO: Import the three supervised learning models from sklearn
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
# TODO: Initialize the three models
clf_A = GaussianNB()
clf_B = LogisticRegression(random_state=14)
clf_C = AdaBoostClassifier(random_state=14)
# TODO: Set up the training set sizes
X_train_100 = X_train[:100]
y_train_100 = y_train[:100]
X_train_200 = X_train[:200]
y_train_200 = y_train[:200]
X_train_300 = X_train[:300]
y_train_300 = y_train[:300]
# TODO: Execute the 'train_predict' function for each classifier and each training set size
# train_predict(clf, X_train, y_train, X_test, y_test)
for clf in [clf_A, clf_B, clf_C]:
for X_train_N, y_train_N in [(X_train_100, y_train_100), (X_train_200, y_train_200), (X_train_300, y_train_300)]:
train_predict(clf, X_train_N, y_train_N, X_test, y_test)
print("")
|
student_intervention/student_intervention.ipynb
|
taylort7147/udacity-projects
|
mit
|
Tabular Results
Edit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided.
Classifier 1 - Gaussian Naive Bayes
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0020 | 0.0000 | 0.8550 | 0.7481 |
| 200 | 0.0010 | 0.0000 | 0.8321 | 0.7132 |
| 300 | 0.0020 | 0.0000 | 0.8088 | 0.7500 |
Classifier 2 - Logistic Regression
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0020 | 0.0000 | 0.8571 | 0.7612 |
| 200 | 0.0020 | 0.0000 | 0.8380 | 0.7794 |
| 300 | 0.0040 | 0.0000 | 0.8381 | 0.7910 |
Classifier 3 - AdaBoost
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0920 | 0.0160 | 0.9538 | 0.7200 |
| 200 | 0.1220 | 0.0060 | 0.8826 | 0.8058 |
| 300 | 0.1120 | 0.0160 | 0.8688 | 0.7794 |
Choosing the Best Model
In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
Question 3 - Choosing the Best Model
Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
Answer:
Based on my experiments, I chose logistic regression as the model for the student intervention system. Of the three models, it best fits both the computational and accuracy requirements. Logistic regression performed consistently well on the given training/test data, and was noticeably quicker for both training and prediction compared to the similarly accurate AdaBoost classifier. The Gaussian Naive Bayes classifier was similar in speed to logistic regression, but was not as accurate. Logistic regression is the best model given the available data and requirements.
Question 4 - Model in Layman's Terms
In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.
Answer:
Logistic regression uses a function to map input features (e.g. absences, free time, family size, etc.) to one of a set of labels (e.g. "requires intervention" or "doesn't require intervention"). The function estimates the probability of a label given the observed features. The particular function used is bounded by 0 and 1, where values less than 0.5 are associated with one label, and values above 0.5 are associated with the other label.
During the training process, the function's parameters are estimated from the occurrence rates in the training data. To account for the unequal contributions of the features, a weight is assigned to each feature. The weights are found using an algorithm that increases the weights of features that "help" find correct results (i.e. the prediction matches the outcome) and decreases the weights of those that do not.
Once all of the weights have been calculated, new sets of features may be given to the resulting function to output whether or not the student requires intervention. Because the mechanism for classifying the student is based on a simple mathematical function, as opposed to other models that use neighboring points or node traversal, runtime is very computationally and memory efficient.
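As a minimal numeric sketch of the idea (the weights and the student's feature values here are entirely made up), the "function bounded by 0 and 1" is the logistic function:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned weights for three features
# (absences, free time, family size) and one hypothetical student.
weights = np.array([-0.8, 0.3, 0.1])
bias = 0.5
features = np.array([2.0, 3.0, 4.0])

probability = sigmoid(np.dot(weights, features) + bias)
label = "passes" if probability >= 0.5 else "needs intervention"
print(round(probability, 2), label)  # probability lands just above 0.5 here
```

Prediction is just one dot product and one function evaluation, which is why the runtime is so cheap compared with neighbor- or tree-based models.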
Implementation: Model Tuning
Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: parameters = {'parameter' : [list of values]}.
- Initialize the classifier you've chosen and store it in clf.
- Create the F<sub>1</sub> scoring function using make_scorer and store it in f1_scorer.
- Set the pos_label parameter to the correct value!
- Perform grid search on the classifier clf using f1_scorer as the scoring method, and store it in grid_obj.
- Fit the grid search object to the training data (X_train, y_train), and store it in grid_obj.
|
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
# TODO: Create the parameters list you wish to tune
num_features = len(feature_cols)
parameters = {
"C": [0.5, 1.0, 1.5, 2.0]
}
# TODO: Initialize the classifier
clf = LogisticRegression(random_state=14)
# TODO: Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label="yes")
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf, param_grid=parameters, scoring = f1_scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
|
student_intervention/student_intervention.ipynb
|
taylort7147/udacity-projects
|
mit
|
Project 3: Building a Neural Network
- Start with your neural network from the last chapter
- 3 layer neural network
- no non-linearity in hidden layer
- use our functions to create the training data
- create a "pre_process_data" function to create vocabulary for our training data generating functions
- modify "train" to train over the entire corpus

Where to Get Help if You Need it
- Re-watch previous week's Udacity Lectures
- Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
|
import time, sys
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes=10, learning_rate=0.1):
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab), hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()  # use a set so each word appears once in the vocabulary
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()  # unique labels only
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(review_vocab)
self.label_vocab_size = len(label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
self.weights_input_to_hidden = np.zeros((self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
### Forward pass ###
# Input layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_input_to_hidden)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_hidden_to_output))
### Backward pass ###
# Output error
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Hidden error
layer_1_error = layer_2_delta.dot(self.weights_hidden_to_output.T)
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity, so the delta equals the error
### Update weights ###
self.weights_hidden_to_output += self.learning_rate * layer_1.T.dot(layer_2_delta)
self.weights_input_to_hidden += self.learning_rate * self.layer_0.T.dot(layer_1_delta)
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if pred == testing_labels[i]:
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_input_to_hidden)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_hidden_to_output))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
|
sentiment_network/Sentiment Classification - Mini Project 3.ipynb
|
danresende/deep-learning
|
mit
|
1. Load the dataset from the file svm-data.csv. It contains a two-dimensional sample (the target variable is in the first column, the features in the second and third).
|
import pandas as pd
df_train = pd.read_csv('../data/svm-data.csv', header=None)
X_train = df_train[df_train.columns[1:3]]
y_train = df_train[df_train.columns[0]]
|
03-svm_and_logistic_regression/statement-svm/statement-svm.ipynb
|
aKumpan/hse-shad-ml
|
apache-2.0
|
2. Train a classifier with a linear kernel, parameter C = 100000, and random_state=241. This value of the parameter is needed to make sure the SVM treats the sample as linearly separable. At lower values of the parameter, the algorithm would be tuned with the term in the objective that penalizes small margins, so the result might not coincide with the solution of the classical SVM problem for a linearly separable sample. The main parameters of this class are the coefficient C and the kernel type kernel. In this task we will use a linear kernel, which requires setting kernel='linear'.
|
from sklearn.svm import SVC
clf = SVC(kernel='linear', C=100000, random_state=241)
clf.fit(X_train, y_train)
|
03-svm_and_logistic_regression/statement-svm/statement-svm.ipynb
|
aKumpan/hse-shad-ml
|
apache-2.0
|
3. Find the indices of the objects that are support vectors (numbering starts from one). They will be the answer to the task. Note that the answer must list the object numbers in ascending order, separated by commas or spaces. Numbering starts from 1.
|
n_sv = clf.support_
n_sv
' '.join([str(n + 1) for n in n_sv])
|
03-svm_and_logistic_regression/statement-svm/statement-svm.ipynb
|
aKumpan/hse-shad-ml
|
apache-2.0
|
Problem
The task is to draw the following pyramid using matplotlib.
|
from IPython.display import Image
Image("http://www.xavierdupre.fr/app/code_beatrix/helpsphinx/_images/biodiversite_tri2.png")
|
_doc/notebooks/td1a/td1a_pyramide_bigarree_correction.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
But first, we need a way to identify each ball. We number them with two indices.
|
Image("http://www.xavierdupre.fr/app/code_beatrix/helpsphinx/_images/pyramide_num2.png")
|
_doc/notebooks/td1a/td1a_pyramide_bigarree_correction.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Expected output:
<table>
<tr>
<td>
**list of sampled indices:**
</td>
<td>
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br>
7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]
</td>
</tr><tr>
<td>
**list of sampled characters:**
</td>
<td>
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', <br>
'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', <br>
'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n']
</td>
</tr>
</table>
3 - Building the language model
It is time to build the character-level language model for text generation.
3.1 - Gradient descent
In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:
- Forward propagate through the RNN to compute the loss
- Backward propagate through time to compute the gradients of the loss with respect to the parameters
- Clip the gradients if necessary
- Update your parameters using gradient descent
Exercise: Implement this optimization process (one step of stochastic gradient descent).
We provide you with the following functions:
```python
def rnn_forward(X, Y, a_prev, parameters):
""" Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in the backpropagation."""
....
return loss, cache
def rnn_backward(X, Y, parameters, cache):
""" Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states."""
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
""" Updates parameters using the Gradient Descent Update Rule."""
...
return parameters
```
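The clipping step is not among the helpers listed above; in the original assignment a `clip` helper is defined earlier in the notebook. A minimal sketch of such elementwise clipping (the name and signature are assumed to match the comments in the code cell) could be:

```python
import numpy as np

def clip(gradients, maxValue):
    # Clip every gradient array in the dictionary to [-maxValue, maxValue], in place.
    for grad in gradients.values():
        np.clip(grad, -maxValue, maxValue, out=grad)
    return gradients

# Hypothetical gradient dictionary, for illustration only.
grads = {"dWax": np.array([[10.0, -10.0], [0.5, -0.5]])}
clipped = clip(grads, 5)
print(clipped["dWax"])  # values now lie within [-5, 5]
```

Clipping bounds the magnitude of each gradient entry so a single exploding step cannot blow up the parameters.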
|
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
|
deeplearning.ai/C5.SequenceModel/Week1_RNN/assignment/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
|
jinzishuai/learn2deeplearn
|
gpl-3.0
|
Expected output:
<table>
<tr>
<td>
**Loss **
</td>
<td>
126.503975722
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]**
</td>
<td>
0.194709315347
</td>
<tr>
<td>
**np.argmax(gradients["dWax"])**
</td>
<td> 93
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td> -0.007773876032
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td> [-0.06809825]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>[ 0.01538192]
</td>
</tr>
<tr>
<td>
**a_last[4]**
</td>
<td> [-1.]
</td>
</tr>
</table>
3.2 - Training the model
Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.
Exercise: Follow the instructions and implement model(). When examples[index] contains one dinosaur name (string), to create an example (X, Y), you can use this:
```python
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
```
Note that we use index = j % len(examples), where j = 1, ..., num_iterations, to make sure that examples[index] is always a valid index (index is smaller than len(examples)).
The first entry of X being None will be interpreted by rnn_forward() as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that Y is equal to X but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
|
# GRADED FUNCTION: model

def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
    """
    Trains the model and generates dinosaur names.

    Arguments:
    data -- text corpus
    ix_to_char -- dictionary that maps the index to a character
    char_to_ix -- dictionary that maps a character to an index
    num_iterations -- number of iterations to train the model for
    n_a -- number of units of the RNN cell
    dino_names -- number of dinosaur names you want to sample at each iteration.
    vocab_size -- number of unique characters found in the text, size of the vocabulary

    Returns:
    parameters -- learned parameters
    """

    # Retrieve n_x and n_y from vocab_size
    n_x, n_y = vocab_size, vocab_size

    # Initialize parameters
    parameters = initialize_parameters(n_a, n_x, n_y)

    # Initialize loss (this is required because we want to smooth our loss, don't worry about it)
    loss = get_initial_loss(vocab_size, dino_names)

    # Build list of all dinosaur names (training examples).
    with open("dinos.txt") as f:
        examples = f.readlines()
    examples = [x.lower().strip() for x in examples]

    # Shuffle list of all dinosaur names
    np.random.seed(0)
    np.random.shuffle(examples)

    # Initialize the hidden state of your RNN
    a_prev = np.zeros((n_a, 1))

    # Optimization loop
    for j in range(num_iterations):

        ### START CODE HERE ###

        # Use the hint above to define one training example (X,Y) (≈ 2 lines)
        index = j % len(examples)
        X = [None] + [char_to_ix[ch] for ch in examples[index]]
        Y = X[1:] + [char_to_ix["\n"]]

        # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
        # Choose a learning rate of 0.01
        curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)

        ### END CODE HERE ###

        # Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
        loss = smooth(loss, curr_loss)

        # Every 2000 iterations, generate "n" characters thanks to sample() to check if the model is learning properly
        if j % 2000 == 0:

            print('Iteration: %d, Loss: %f' % (j, loss) + '\n')

            # The number of dinosaur names to print
            seed = 0
            for name in range(dino_names):

                # Sample indices and print them
                sampled_indices = sample(parameters, char_to_ix, seed)
                print_sample(sampled_indices, ix_to_char)

                seed += 1  # To get the same result for grading purposes, increment the seed by one.

            print('\n')

    return parameters
|
deeplearning.ai/C5.SequenceModel/Week1_RNN/assignment/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
|
jinzishuai/learn2deeplearn
|
gpl-3.0
|
Loop Progress
ProgIter is a (mostly) drop-in alternative to
tqdm (https://pypi.python.org/pypi/tqdm).
The advantage of ProgIter is that it does not use any Python threading,
and therefore can be safer with code that makes heavy use of multiprocessing.
(Note: ProgIter is now a standalone module: pip install progiter.)
|
import ubelt as ub
import math
for n in ub.ProgIter(range(7500)):
    math.factorial(n)

import ubelt as ub
import math
for n in ub.ProgIter(range(7500), freq=1000, adjust=False):
    math.factorial(n)
# Note that forcing a fixed freq all the time comes at a performance cost
# The default adjustment algorithm causes almost no overhead
>>> import ubelt as ub
>>> def is_prime(n):
... return n >= 2 and not any(n % i == 0 for i in range(2, n))
>>> for n in ub.ProgIter(range(1000), verbose=2):
>>> # do some work
>>> is_prime(n)
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Caching
Cache intermediate results from blocks of code inside a script with minimal
boilerplate or modification to the original code.
For direct caching of data, use the Cacher class. By default results will
be written to the ubelt's appdir cache, but the exact location can be specified
via dpath or the appname arguments. Additionally, process dependencies
can be specified via the depends argument, which allows for implicit cache
invalidation. As far as I can tell, this is the most concise way (4 lines of
boilerplate) to cache a block of code with existing Python syntax (as of
2022-06-03).
|
import ubelt as ub

depends = ['config', {'of': 'params'}, 'that-uniquely-determine-the-process']
cacher = ub.Cacher('test_process', depends=depends, appname='myapp', verbose=3)
if 1:
    cacher.fpath.delete()
for _ in range(2):
    data = cacher.tryload()
    if data is None:
        myvar1 = 'result of expensive process'
        myvar2 = 'another result'
        data = myvar1, myvar2
        cacher.save(data)
myvar1, myvar2 = data
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
For indirect caching, use the CacheStamp class. This simply writes a
"stamp" file that marks that a process has completed. Additionally you can
specify criteria for when the stamp should expire. If you let CacheStamp
know about the expected "product", it will expire the stamp if that file has
changed, which can be useful in situations where caches might become corrupt
or need invalidation.
|
import ubelt as ub

dpath = ub.Path.appdir('ubelt/demo/cache').delete().ensuredir()
params = {'params1': 1, 'param2': 2}
expected_fpath = dpath / 'file.txt'
stamp = ub.CacheStamp('name', dpath=dpath, depends=params,
                      hasher='sha256', product=expected_fpath,
                      expires='2101-01-01T000000Z', verbose=3)
if 1:
    stamp.fpath.delete()
for _ in range(2):
    if stamp.expired():
        expected_fpath.write_text('expensive process')
        stamp.renew()
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Hashing
The ub.hash_data constructs a hash for common Python nested data
structures. Extensions to allow it to hash custom types can be registered. By
default it handles lists, dicts, sets, slices, uuids, and numpy arrays.
|
import ubelt as ub
data = [('arg1', 5), ('lr', .01), ('augmenters', ['flip', 'translate'])]
ub.hash_data(data, hasher='sha256')
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Support for torch tensors and pandas data frames is also included, but needs to
be explicitly enabled. There also exists a non-public plugin architecture to
extend this function to arbitrary types. While not officially supported, it is
usable and will become better integrated in the future. See
ubelt/util_hash.py for details.
Command Line Interaction
The builtin Python subprocess.Popen module is great, but it can be a
bit clunky at times. The os.system command is easy to use, but it
doesn't have much flexibility. The ub.cmd function aims to fix this.
It is as simple to run as os.system, but it returns a dictionary
containing the return code, standard out, standard error, and the
Popen object used under the hood.
|
import ubelt as ub
info = ub.cmd('cmake --version')
# Quickly inspect and parse output of a command
print(info['out'])
# The info dict contains other useful data
print(ub.repr2({k: v for k, v in info.items() if 'out' != k}))
# Also possible to simultaneously capture and display output in realtime
info = ub.cmd('cmake --version', tee=1)
# tee=True is equivalent to using verbose=1, but there is also verbose=2
info = ub.cmd('cmake --version', verbose=2)
# and verbose=3
info = ub.cmd('cmake --version', verbose=3)
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Cross-Platform Config and Cache Directories
If you have an application which writes configuration or cache files,
the standard place to dump those files differs depending if you are on
Windows, Linux, or Mac. Ubelt offers unified functions for determining
what these paths are.
The ub.ensure_app_cache_dir and ub.ensure_app_config_dir
functions find the correct platform-specific location for these files
and ensures that the directories exist. (Note: replacing "ensure" with
"get" will simply return the path, but not ensure that it exists)
The config root directory is ~/AppData/Roaming on Windows,
~/.config on Linux and ~/Library/Application Support on Mac. The
cache root directory is ~/AppData/Local on Windows, ~/.cache on
Linux and ~/Library/Caches on Mac.
Example usage on Linux might look like this:
|
import ubelt as ub
print(ub.shrinkuser(ub.ensure_app_cache_dir('my_app')))
print(ub.shrinkuser(ub.ensure_app_config_dir('my_app')))
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
New in version 1.0.0: the ub.Path.appdir classmethod provides a way to
achieve the above with a chainable object oriented interface.
|
import ubelt as ub
print(ub.Path.appdir('my_app').ensuredir().shrinkuser())
print(ub.Path.appdir('my_app', type='config').ensuredir().shrinkuser())
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Downloading Files
The function ub.download provides a simple interface to download a
URL and save its data to a file.
The function ub.grabdata works similarly to ub.download, but
whereas ub.download will always re-download the file,
ub.grabdata will check if the file exists and only re-download it if
it needs to.
New in version 0.4.0: both functions now accept the hash_prefix keyword
argument, which if specified will check that the hash of the file matches the
provided value. The hasher keyword argument can be used to change which
hashing algorithm is used (it defaults to "sha512").
|
>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.download(url, verbose=0)
>>> print(ub.shrinkuser(fpath))
>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.grabdata(url, verbose=0, hash_prefix='944389a39')
>>> print(ub.shrinkuser(fpath))
url = 'http://i.imgur.com/rqwaDag.png'
ub.grabdata(url, verbose=3, hash_prefix='944389a39dfb8f')
try:
    ub.grabdata(url, verbose=3, hash_prefix='wrong-944389a39dfb8f')
except RuntimeError as ex:
    print('type(ex) = {!r}'.format(type(ex)))
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Dictionary Tools
|
import ubelt as ub
items = ['ham', 'jam', 'spam', 'eggs', 'cheese', 'bannana']
groupids = ['protein', 'fruit', 'protein', 'protein', 'dairy', 'fruit']
groups = ub.group_items(items, groupids)
print(ub.repr2(groups, nl=1))
import ubelt as ub
items = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]
ub.dict_hist(items)
import ubelt as ub
items = [0, 0, 1, 2, 3, 3, 0, 12, 2, 9]
ub.find_duplicates(items, k=2)
import ubelt as ub
dict_ = {'K': 3, 'dcvs_clip_max': 0.2, 'p': 0.1}
subdict_ = ub.dict_subset(dict_, ['K', 'dcvs_clip_max'])
print(subdict_)
import ubelt as ub
dict_ = {1: 'a', 2: 'b', 3: 'c'}
print(list(ub.take(dict_, [1, 3, 4, 5], default=None)))
import ubelt as ub
dict_ = {'a': [1, 2, 3], 'b': []}
newdict = ub.map_vals(len, dict_)
print(newdict)
import ubelt as ub
mapping = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}
ub.invert_dict(mapping)
import ubelt as ub
mapping = {'a': 0, 'A': 0, 'b': 1, 'c': 2, 'C': 2, 'd': 3}
ub.invert_dict(mapping, unique_vals=False)
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
AutoDict - Autovivification
While the collections.defaultdict is nice, it is sometimes more
convenient to have an infinitely nested dictionary of dictionaries.
(But be careful, you may start to write in Perl)
|
>>> import ubelt as ub
>>> auto = ub.AutoDict()
>>> print('auto = {!r}'.format(auto))
>>> auto[0][10][100] = None
>>> print('auto = {!r}'.format(auto))
>>> auto[0][1] = 'hello'
>>> print('auto = {!r}'.format(auto))
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
String-based imports
Ubelt contains functions to import modules dynamically without using the
python import statement. While importlib exists, the ubelt
implementation is simpler to use and does not have the disadvantage of
breaking pytest.
Note: ubelt simply provides an interface to this functionality; the
core implementation is in xdoctest (as of version 0.7.0,
the code is statically copied into an autogenerated file such that ubelt
does not actually depend on xdoctest during runtime).
|
import ubelt as ub
try:
    # This is where I keep ubelt on my machine, so it is not expected to work elsewhere.
    module = ub.import_module_from_path(ub.expandpath('~/code/ubelt/ubelt'))
    print('module = {!r}'.format(module))
except OSError:
    pass

module = ub.import_module_from_name('ubelt')
print('module = {!r}'.format(module))
try:
    module = ub.import_module_from_name('does-not-exist')
    raise AssertionError
except ModuleNotFoundError:
    pass

modpath = ub.Path(ub.util_import.__file__)
print(ub.modpath_to_modname(modpath))
modname = ub.util_import.__name__
assert ub.Path(ub.modname_to_modpath(modname)).resolve() == modpath.resolve()
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Related to this functionality are the functions
ub.modpath_to_modname and ub.modname_to_modpath, which
statically transform (i.e. no code in the target modules is imported
or executed) between module names (e.g. ubelt.util_import) and
module paths (e.g.
~/.local/conda/envs/cenv3/lib/python3.5/site-packages/ubelt/util_import.py).
Horizontal String Concatenation
Sometimes it's just prettier to horizontally concatenate two blocks of
text.
|
>>> import ubelt as ub
>>> B = ub.repr2([[1, 2], [3, 4]], nl=1, cbr=True, trailsep=False)
>>> C = ub.repr2([[5, 6], [7, 8]], nl=1, cbr=True, trailsep=False)
>>> print(ub.hzcat(['A = ', B, ' * ', C]))
|
docs/notebooks/Ubelt Demo.ipynb
|
Erotemic/ubelt
|
apache-2.0
|
Azure Blob Storage with TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/azure"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/azure.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/azure.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/io/tutorials/azure.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Note: In addition to Python packages, this notebook uses npm install --user to install packages. Be careful when running it locally.
Overview
This tutorial shows how to use TensorFlow to read and write files on Azure Blob Storage, using TensorFlow IO's Azure file system integration.
An Azure storage account is needed to read and write files on Azure Blob Storage. The Azure Storage key should be provided through an environment variable:
os.environ['TF_AZURE_STORAGE_KEY'] = '<key>'
The storage account name and container name are part of the filename URL:
azfs://<storage-account-name>/<container-name>/<path>
Since this tutorial is for demonstration purposes, you can optionally set up Azurite, an Azure Storage emulator. With the Azurite emulator, it is possible to read and write files through the Azure Blob Storage interface with TensorFlow.
Setup and usage
Install the required packages, and restart the runtime
|
try:
    %tensorflow_version 2.x
except Exception:
    pass

!pip install tensorflow-io
|
site/ja/io/tutorials/azure.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Install and set up Azurite (optional)
In case you do not have an Azure Storage account, the following is needed to install and set up Azurite, which emulates the Azure Storage interface:
|
!npm install azurite@2.7.0
# The path for npm might not be exposed in PATH env,
# you can find it out through 'npm bin' command
npm_bin_path = get_ipython().getoutput('npm bin')[0]
print('npm bin path: ', npm_bin_path)
# Run `azurite-blob -s` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw(npm_bin_path + '/' + 'azurite-blob -s &')
|
site/ja/io/tutorials/azure.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Read and write files in Azure Storage with TensorFlow
The following is an example of reading and writing files in Azure Storage with TensorFlow's API.
Once the tensorflow-io package is imported, azfs is automatically registered for use, so it behaves the same way as other file systems (e.g., POSIX or GCS) in TensorFlow.
The Azure Storage key should be provided through the TF_AZURE_STORAGE_KEY environment variable. Otherwise, TF_AZURE_USE_DEV_STORAGE can be set to True to use the Azurite emulator instead.
|
import os
import tensorflow as tf
import tensorflow_io as tfio

# Switch to False to use Azure Storage instead:
use_emulator = True

if use_emulator:
    os.environ['TF_AZURE_USE_DEV_STORAGE'] = '1'
    account_name = 'devstoreaccount1'
else:
    # Replace <key> with Azure Storage Key, and <account> with Azure Storage Account
    os.environ['TF_AZURE_STORAGE_KEY'] = '<key>'
    account_name = '<account>'

    # Alternatively, you can use a shared access signature (SAS) to authenticate with the Azure Storage Account
    os.environ['TF_AZURE_STORAGE_SAS'] = '<your sas>'
    account_name = '<account>'

pathname = 'az://{}/aztest'.format(account_name)
tf.io.gfile.mkdir(pathname)

filename = pathname + '/hello.txt'
with tf.io.gfile.GFile(filename, mode='w') as w:
    w.write("Hello, world!")

with tf.io.gfile.GFile(filename, mode='r') as r:
    print(r.read())
|
site/ja/io/tutorials/azure.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Part 1: Encrypting and Decrypting a Message
Pick Your Super Secret Message
The super secret message you want to send must be the same length as or shorter than the super secret key.
If the key is shorter than the message, you will be forced to use parts of the key more than once. This may allow your lurking enemies to pick up a pattern in your encrypted message and possibly decrypt it. (As you'll see later on, we need to start out with a key at least double the number of characters used in your message. For now, don't worry about those details, pick your message! For this tutorial, we picked the initial key to be 3x greater--just to be safe.) Enter your message on the line below which reads "mes = ".
|
#Super secret message
mes = 'hello world'
print('Your super secret message: ',mes)

#initial size of key
n = len(mes)*3

#break up message into smaller parts if length > 10
nlist = []
for i in range(int(n/10)):
    nlist.append(10)
if n%10 != 0:
    nlist.append(n%10)
print('Initial key length: ',n)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
The Big Picture
Now that you (Alice) have the key, here's the big question: how are we going to get your key to Bob without eavesdroppers intercepting it? Quantum key distribution! Here are the steps and big picture (the effects of eavesdropping will be discussed later on):
1. You (Alice) generate a random string--the key you wish to give to Bob.
2. You (Alice) convert your string bits into corresponding qubits.
3. You (Alice) send those qubits to Bob, BUT! you randomly rotate some into a superposition. This effectively turns your key into random noise. (This is good because your lurking enemies might measure your qubits.)
4. Bob receives your qubits AND randomly rotates some qubits in the opposite direction before measuring.
5. Alice and Bob publicly share which qubits they rotated. When they both did the same thing (either both did nothing or both rotated), they know the original key bit value made it to Bob! (Overall, you can see that only some of the bits from Alice's original key should make it.)
6. Alice and Bob create their keys. Alice modifies her original key by keeping only the bits that she knows made it to Bob. Bob does the same.
Alice and Bob now have matching keys! They can now use this key to encrypt and decrypt their messages.
<img src='QKDnoEve.png'>
Here we see Alice sending the initial key to Bob. She sends her qubits and rotates them based on her rotation string. Bob rotates the incoming qubits based on his rotation string and measures the qubits.
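The sifting logic in steps 5-6 can be sketched classically, leaving out the quantum transmission itself (this is an illustrative sketch only; the variable names here are not the notebook's):

```python
import random

random.seed(1)
n = 16
alice_key    = [random.randint(0, 1) for _ in range(n)]   # step 1: random key
alice_rotate = [random.randint(0, 1) for _ in range(n)]   # step 3: Alice's rotations
bob_rotate   = [random.randint(0, 1) for _ in range(n)]   # step 4: Bob's rotations

# Step 5: where the rotation bits match, Bob's measurement equals Alice's
# key bit; elsewhere his outcome is random, so those positions are discarded.
sifted = [k for k, a, b in zip(alice_key, alice_rotate, bob_rotate) if a == b]
print('kept %d of %d bits' % (len(sifted), n))
```

On average about half the positions survive sifting, which is why the tutorial starts with a key several times longer than the message.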
Step 1: Alice Generates a Random Key
You and your friend need a super secret key so you can encrypt your message and your friend can decrypt it. Let's make a key--a pure random key.
To make a purely random string, we'll use quantum superposition. A qubit in the xy-plane of the Bloch sphere is in a 50-50 superposition; 50% of the time it'll be measured as 0, and 50% of the time it'll be measured as 1. We have Alice prepare several qubits like this and measure them to generate a purely random string of 1s and 0s.
|
# Imports from the notebook's earlier setup cell (reproduced here, as an assumption, so the cell is self-contained)
import math
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, Aer, execute

# Make random strings of length string_length
def randomStringGen(string_length):
    #output variables used to access quantum computer results at the end of the function
    output_list = []
    output = ''

    #start up your quantum circuit information
    backend = Aer.get_backend('qasm_simulator')
    circuits = ['rs']

    #run circuit in batches of 10 qubits for fastest results. The results
    #from each run will be appended and then clipped down to the right n size.
    n = string_length
    temp_n = 10
    temp_output = ''
    for i in range(math.ceil(n/temp_n)):
        #initialize quantum registers for circuit
        q = QuantumRegister(temp_n, name='q')
        c = ClassicalRegister(temp_n, name='c')
        rs = QuantumCircuit(q, c, name='rs')

        #create temp_n number of qubits all in superpositions
        for i in range(temp_n):
            rs.h(q[i]) #the .h gate is the Hadamard gate that makes superpositions
            rs.measure(q[i],c[i])

        #execute circuit and extract 0s and 1s from key
        result = execute(rs, backend, shots=1).result()
        counts = result.get_counts(rs)
        result_key = list(result.get_counts(rs).keys())
        temp_output = result_key[0]
        output += temp_output

    #return output clipped to size of desired string length
    return output[:n]

key = randomStringGen(n)
print('Initial key: ',key)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Steps 2-4: Send Alice's Qubits to Bob
Alice turns her key bits into corresponding qubit states. If a bit is a 0 she will prepare a qubit on the positive z-axis (the |0⟩ state). If the bit is a 1 she will prepare a qubit on the negative z-axis (the |1⟩ state). Next, if Alice has a 1 in her rotate string, she rotates her key qubit with a Hadamard gate. She then sends the qubit to Bob. If Bob has a 1 in his rotate string, he rotates the incoming qubit in the opposite direction with a Hadamard gate. Bob then measures the state of the qubit and records the result. The quantum circuit below executes each of these steps.
|
#generate random rotation strings for Alice and Bob
Alice_rotate = randomStringGen(n)
Bob_rotate = randomStringGen(n)
print("Alice's rotation string:",Alice_rotate)
print("Bob's rotation string: ",Bob_rotate)

#start up your quantum program
backend = Aer.get_backend('qasm_simulator')
shots = 1
circuits = ['send_over']

Bob_result = ''
for ind,l in enumerate(nlist):
    #define temp variables used in breaking up quantum program if message length > 10
    if l < 10:
        key_temp = key[10*ind:10*ind+l]
        Ar_temp = Alice_rotate[10*ind:10*ind+l]
        Br_temp = Bob_rotate[10*ind:10*ind+l]
    else:
        key_temp = key[l*ind:l*(ind+1)]
        Ar_temp = Alice_rotate[l*ind:l*(ind+1)]
        Br_temp = Bob_rotate[l*ind:l*(ind+1)]

    #start up the rest of your quantum circuit information
    q = QuantumRegister(l, name='q')
    c = ClassicalRegister(l, name='c')
    send_over = QuantumCircuit(q, c, name='send_over')

    #prepare qubits based on key; add Hadamard gates based on Alice's and Bob's
    #rotation strings (loop variable renamed to m to avoid shadowing the key length n)
    for i,j,k,m in zip(key_temp,Ar_temp,Br_temp,range(0,len(key_temp))):
        i = int(i)
        j = int(j)
        k = int(k)
        if i > 0:
            send_over.x(q[m])
        #Look at Alice's rotation string
        if j > 0:
            send_over.h(q[m])
        #Look at Bob's rotation string
        if k > 0:
            send_over.h(q[m])
        send_over.measure(q[m],c[m])

    #execute quantum circuit
    result_so = execute([send_over], backend, shots=shots).result()
    counts_so = result_so.get_counts(send_over)
    result_key_so = list(result_so.get_counts(send_over).keys())
    Bob_result += result_key_so[0][::-1]
print("Bob's results: ", Bob_result)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Steps 5-6: Compare Rotation Strings and Make Keys
Alice and Bob can now generate a secret quantum encryption key. First, they publicly share their rotation strings. If a bit in Alice's rotation string is the same as the corresponding bit in Bob's they know that Bob's result is the same as what Alice sent. They keep these bits to form the new key. (Alice based on her original key and Bob based on his measured results).
|
def makeKey(rotation1,rotation2,results):
    key = ''
    count = 0
    for i,j in zip(rotation1,rotation2):
        if i == j:
            key += results[count]
        count += 1
    return key

Akey = makeKey(Bob_rotate,Alice_rotate,key)
Bkey = makeKey(Bob_rotate,Alice_rotate,Bob_result)
print("Alice's key:",Akey)
print("Bob's key: ",Bkey)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Pause
We see that using only the public knowledge of Bob's and Alice's rotation strings, Alice and Bob can create the same identical key based on Alice's initial random key and Bob's results. Wow!! :D
<strong>If Alice's and Bob's key length is less than the message</strong>, the encryption is compromised. If this is the case for you, rerun all the cells above and see if you get a longer key. (We set the initial key length to 3x the message length to avoid this, but it's still possible.)
Encrypt (and decrypt) using quantum key
We can now use our super secret key to encrypt and decrypt messages!! (of length less than the key). Note: the below "encryption" method is not powerful and should not be used for anything you want secure; it's just for fun. In real life, the super secret key you made and shared with Bob would be used in a much more sophisticated encryption algorithm.
|
#make key same length as message
shortened_Akey = Akey[:len(mes)]
encoded_m=''

#encrypt message mes using encryption key final_key
for m,k in zip(mes,shortened_Akey):
    encoded_c = chr(ord(m) + 2*ord(k) % 256)
    encoded_m += encoded_c
print('encoded message: ',encoded_m)

#make key same length as message
shortened_Bkey = Bkey[:len(mes)]

#decrypt message mes using encryption key final_key
result = ''
for m,k in zip(encoded_m,shortened_Bkey):
    encoded_c = chr(ord(m) - 2*ord(k) % 256)
    result += encoded_c
print('recovered message:',result)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Part 2: Eve the Eavesdropper
What if someone is eavesdropping on Alice and Bob's line of communication? This process of random string making and rotations using quantum mechanics is only useful if it's robust against eavesdroppers.
Eve is your lurking enemy. She eavesdrops by intercepting your transmission to Bob. To be sneaky, Eve must send on the intercepted transmission--otherwise Bob will never receive anything and know that something is wrong!
Let's explain further why Eve can be detected. If Eve intercepts a qubit from Alice, she will not know if Alice rotated its state or not. Eve can only measure a 0 or 1. And she can't measure the qubit and then send the same qubit on, because her measurement will destroy the quantum state. Consequently, Eve doesn't know when or when not to rotate to recreate Alice's original qubit. She may as well send on qubits that have not been rotated, hoping to get the rotation right 50% of the time. After she sends these qubits to Bob, Alice and Bob can compare select parts of their keys to see if they have discrepancies in places they should not.
The scheme goes as follows:
1. Alice sends her qubit transmission to Bob--but Eve measures the results
2. To avoid suspicion, Eve prepares qubits corresponding to the bits she measured and sends them to Bob.
3. Bob and Alice make their keys like normal
4. Alice and Bob randomly select the same parts of their keys to share publicly
5. If the selected part of the keys don't match, they know Eve was eavesdropping
6. If the selected part of the keys DO match, they can be confident Eve wasn't eavesdropping
7. They throw away the part of the key they made public and encrypt and decrypt super secret messages with the portion of the key they have left.
<img src="QKD.png">
Here we see Alice sending her qubits, rotating them based on her rotation string, and Eve intercepting the transmission. Eve then sends her results on to Bob who--like normal--rotates and measures the qubits.
Step 1: Eve intercepts Alice's transmission
The code below has Alice sending her qubits and Eve intercepting them. It then displays the results of Eve's measurements.
|
#start up your quantum program
backend = Aer.get_backend('qasm_simulator')
shots = 1
circuits = ['Eve']

Eve_result = ''
for ind,l in enumerate(nlist):
    #define temp variables used in breaking up quantum program if message length > 10
    if l < 10:
        key_temp = key[10*ind:10*ind+l]
        Ar_temp = Alice_rotate[10*ind:10*ind+l]
    else:
        key_temp = key[l*ind:l*(ind+1)]
        Ar_temp = Alice_rotate[l*ind:l*(ind+1)]

    #start up the rest of your quantum circuit information
    q = QuantumRegister(l, name='q')
    c = ClassicalRegister(l, name='c')
    Eve = QuantumCircuit(q, c, name='Eve')

    #prepare qubits based on key; add Hadamard gates based on Alice's
    #rotation string
    for i,j,m in zip(key_temp,Ar_temp,range(0,len(key_temp))):
        i = int(i)
        j = int(j)
        if i > 0:
            Eve.x(q[m])
        if j > 0:
            Eve.h(q[m])
        Eve.measure(q[m],c[m])

    #execute
    result_eve = execute(Eve, backend, shots=shots).result()
    counts_eve = result_eve.get_counts()
    result_key_eve = list(result_eve.get_counts().keys())
    Eve_result += result_key_eve[0][::-1]
print("Eve's results: ", Eve_result)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Step 2: Eve deceives Bob
Eve sends her measured qubits on to Bob to deceive him! Since she doesn't know which of the qubits she measured were in a superposition or not, she doesn't even know whether to send the exact values she measured or opposite values. In the end, sending on the exact values is just as good a deception as mixing them up again.
|
#start up your quantum program
backend = Aer.get_backend('qasm_simulator')
shots = 1
circuits = ['Eve2']

Bob_badresult = ''
for ind,l in enumerate(nlist):
    #define temp variables used in breaking up quantum program if message length > 10
    if l < 10:
        key_temp = key[10*ind:10*ind+l]
        Eve_temp = Eve_result[10*ind:10*ind+l]
        Br_temp = Bob_rotate[10*ind:10*ind+l]
    else:
        key_temp = key[l*ind:l*(ind+1)]
        Eve_temp = Eve_result[l*ind:l*(ind+1)]
        Br_temp = Bob_rotate[l*ind:l*(ind+1)]

    #start up the rest of your quantum circuit information
    q = QuantumRegister(l, name='q')
    c = ClassicalRegister(l, name='c')
    Eve2 = QuantumCircuit(q , c, name='Eve2')

    #prepare qubits
    for i,j,m in zip(Eve_temp,Br_temp,range(0,len(key_temp))):
        i = int(i)
        j = int(j)
        if i > 0:
            Eve2.x(q[m])
        if j > 0:
            Eve2.h(q[m])
        Eve2.measure(q[m],c[m])

    #execute
    result_eve = execute(Eve2, backend, shots=shots).result()
    counts_eve = result_eve.get_counts()
    result_key_eve = list(result_eve.get_counts().keys())
    Bob_badresult += result_key_eve[0][::-1]
print("Bob's previous results (w/o Eve):",Bob_result)
print("Bob's results from Eve:\t\t ",Bob_badresult)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Step 4: Spot Check
Alice and Bob know Eve is lurking out there. They decide to pick a few random values from their individual keys and compare with each other. This requires making these subsections of their keys public (so the other can see them). If any of the values in their keys are different, they know Eve's eavesdropping messed up the superposition Alice originally created! If they find all the values are identical, they can be reasonably confident that Eve wasn't eavesdropping. Of course, making some random key values known to the public will require them to remove those values from their keys because those parts are no longer super secret. Also, Alice and Bob need to make sure they are sharing corresponding values from their respective keys.
Let's make a check key. If the randomly generated check key is a one, Alice and Bob will compare that part of their keys with each other (aka make publicly known).
|
#make keys for Alice and Bob
Akey = makeKey(Bob_rotate,Alice_rotate,key)
Bkey = makeKey(Bob_rotate,Alice_rotate,Bob_badresult)
print("Alice's key: ",Akey)
print("Bob's key: ",Bkey)
check_key = randomStringGen(len(Akey))
print('spots to check:',check_key)
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Steps 5-7: Compare strings and detect Eve
Alice and Bob compare the subsections of their keys. If they notice any discrepancy, they know that Eve was trying to intercept their message. They create new keys by throwing away the parts they shared publicly. It's possible that by throwing these parts away, they will not have a key long enough to encrypt the message and they will have to try again.
|
#find which values in rotation string were used to make the key
Alice_keyrotate = makeKey(Bob_rotate,Alice_rotate,Alice_rotate)
Bob_keyrotate = makeKey(Bob_rotate,Alice_rotate,Bob_rotate)
# Detect Eve's interference
#extract a subset of Alice's key
sub_Akey = ''
sub_Arotate = ''
count = 0
for i,j in zip(Alice_rotate,Akey):
if int(check_key[count]) == 1:
sub_Akey += Akey[count]
sub_Arotate += Alice_keyrotate[count]
count += 1
#extract a subset of Bob's key
sub_Bkey = ''
sub_Brotate = ''
count = 0
for i,j in zip(Bob_rotate,Bkey):
if int(check_key[count]) == 1:
sub_Bkey += Bkey[count]
sub_Brotate += Bob_keyrotate[count]
count += 1
print("subset of Alice's key:",sub_Akey)
print("subset of Bob's key: ",sub_Bkey)
#compare Alice and Bob's key subsets
secure = True
for i,j in zip(sub_Akey,sub_Bkey):
if i == j:
secure = True
else:
secure = False
        break
if not secure:
print('Eve detected!')
else:
print('Eve escaped detection!')
#sub_Akey and sub_Bkey are public knowledge now, so we remove them from Akey and Bkey
if secure:
new_Akey = ''
new_Bkey = ''
for index,i in enumerate(check_key):
if int(i) == 0:
new_Akey += Akey[index]
new_Bkey += Bkey[index]
print('new A and B keys: ',new_Akey,new_Bkey)
if len(mes) > len(new_Akey):
print('Your new key is not long enough.')
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Probability of Detecting Eve
The longer the key, the more likely Alice and Bob are to detect Eve. Specifically, the probability of detecting her is $1 - (3/4)^n$, where $n$ is the number of bits they compare in their spot check: a longer key leaves more bits available to compare, and hence a higher detection probability.
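To get a feel for the formula, a quick back-of-the-envelope calculation (a sketch, not part of the original notebook) shows how many compared bits are needed to catch Eve with 99% confidence:

```python
import math

# Solve 1 - (3/4)**n >= 0.99 for n, i.e. n >= log(0.01) / log(0.75).
confidence = 0.99
n = math.ceil(math.log(1 - confidence) / math.log(3 / 4))
print(n)  # 17 compared bits suffice for 99% confidence
```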
|
#!!! you may need to execute this cell twice in order to see the output due to a problem with matplotlib
x = np.arange(0., 30.0)
y = 1 - (3/4)**x
plt.plot(x, y)
plt.title('Probability of detecting Eve')
plt.xlabel('# of key bits compared')
plt.ylabel('Probability of detecting Eve')
plt.show()
|
community/teach_me_qiskit_2018/quantum_cryptography_qkd/Quantum_Cryptography2.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Tutorial: Checking and Comparing Models
Goodness of fit, information criteria, and Bayesian evidence
Introduction
In this tutorial we'll look at some simple, realistic, simulated data, and do some model evaluation, including
fitting a simple model, and then do a posterior predictive model check of the adequacy of the fit
quantifying the generalized predictive accuracy of the model with the Deviance Information Criterion (DIC)
calculating the Bayesian evidence for the model
Then you'll get to do it all again with a more complex model and determine which is preferred!
The Dataset
Our data is just a list of numbers. Each one represents a measured distance, $y$, between two different estimates of the center of a galaxy cluster: the location of the presumed central galaxy and a centroid of the diffuse, emissive gas. The context here is that automated algorithms sometimes fail to choose the central galaxy correctly (because of image artifacts or other problems), whereas the gas centroid is more reliable but also more expensive to measure. Therefore, we'd like to use this data set to characterize the distribution of mis-centerings so that the galaxy-based centers can be used for large samples, with the resulting errors propagated forward through future processing, e.g., weak lensing estimates.
Let's load up the data and have a look.
|
import numpy as np
import scipy.stats as st
from scipy.special import logsumexp
import emcee
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'font.size': 16});
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Load data into global variable y. Each entry is an offset in units of kpc.
|
y = np.loadtxt('data/model_comparison.dat')
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Check out a quick histogram of the data.
|
plt.rcParams['figure.figsize'] = (8.0, 6.0)
bins = np.linspace(0,1000,20)
plt.hist(y, bins=bins, color="skyblue");
plt.xlabel("Measured distance $y$");
plt.ylabel("Frequency");
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
1. Pre-registering a Test Statistic
The hypothesis we will test in this tutorial is the model outlined in the next section - but how well that model fits the data is a question we will answer in part using a test statistic.
Having understood what the data represent (and had a quick look at them), what feature in the data do you want your model to explain well?
With this in mind, what is a good test statistic to summarize the data? Spend a few minutes thinking about this and discussing it, then implement it before moving on. You'll then use this "pre-registered" test statistic in a Bayesian model check below.
Your test statistic should be a function of the data only, although in general it's also possible to use statistics that are functions of both the data and model.
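If you want a concrete starting point, here is one plausible choice (an illustration, not the official solution): the ratio of the sample variance to the squared sample mean. For an exponential distribution this ratio is 1, so departures from 1 flag data that an exponential model struggles to reproduce.

```python
import numpy as np

def example_T(yy):
    """One plausible test statistic: sample variance over squared sample mean.
    For exponential data this is ~1; heavier or lighter tails push it away from 1."""
    yy = np.asarray(yy)
    return np.var(yy) / np.mean(yy)**2

# Sanity check on synthetic exponential data: the statistic should come out near 1.
print(example_T(np.random.exponential(scale=100.0, size=100000)))
```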
|
try:
exec(open('solutions/teststatistic.py').read())
except IOError:
REMOVE_THIS_LINE()
def T(yy):
"""
Argument: a data vector (either the real data or a simulated data set)
Returns: a scalar test statistic computed from the argument
"""
REPLACE_WITH_YOUR_SOLUTION()
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Setting up a Computational Framework
Once we define a model to work with (below), we'll want to fit that model to the data, and then evaluate it using the methods we saw in the model evaluation lesson. These include:
a visual check using replica datasets drawn from the posterior predictive distribution
a quantitative posterior predictive model check using a suitable test statistic $T(y)$
After choosing and fitting a second, alternative model, we can also compare the two in terms of
the Deviance Information Criterion (DIC), to assess the models' (relative) generalized predictive accuracy
the Bayesian Evidence, to provide insight on the (relative) probability of each model given the data
Notice that each of these bulleted operations can be coded as a function of the model (e.g. a visual check of the model, the evidence of the model, and so on). That suggests that we should write a class that completely describes the model, and then a set of functions that act on model objects passed to them. Since we anticipate looking at multiple models, we'll use inheritance. While this level of object oriented programming may not be familiar, most of the details are filled in for you below.
We start by defining a base class, which contains the functionality common to any model we care to define later. To make it clear what functionality we expect derived classes to provide, we'll include definitions of non-functional methods that the derived classes will need to override.
|
# This is something we can throw to discourage direct instantiation of the base class
class VirtualClassError(Exception):
def __init__(self):
Exception.__init__(self,"Do not directly instantiate the base Model class!")
class Model:
"""
Base class for inference and model evaluation in a simple cluster mis-centering analysis.
In all these functions, `args' is the ordered list of model parameters.
"""
def __init__(self):
"""
Note: derived classes should have their own __init__ function which ends by calling this one
"""
# Sometimes it will be convenient to compute many log_likelihood values at once:
self.vectorized_log_likelihood = np.vectorize(self.log_likelihood)
self.samples = None
self.Nsamples = 0
def log_prior(self, *args):
"""
Evaluate the log prior PDF P(args|H)
"""
raise VirtualClassError # to be overriden by child classes
def draw_samples_from_prior(self, N):
"""
Return N samples from the prior PDF P(args|H)
"""
raise VirtualClassError # to be overriden by child classes
def log_likelihood(self, *args):
"""
Evaluate the log of the likelihood function L(args) = P(y|args,H)
"""
raise VirtualClassError # to be overriden by child classes
def sampling_distribution(self, yy, *args):
"""
Evaluate the sampling distribution P(yy|args,H) at a point in data space yy given parameter(s) args
We expect a vector input yy, and return the corresponding probabilities.
Note: This is useful for making plots of "the model" overlaid on the histogram of the data
"""
raise VirtualClassError # to be overriden by child classes
def generate_replica_dataset(self, *args):
"""
Draw a replica dataset y_rep from the sampling distribution P(y_rep|args,H).
y_rep should have the same length as the true data set.
"""
raise VirtualClassError # to be overriden by child classes
def log_posterior(self, *args):
"""
Evaluate the log of the (unnormalized) posterior PDF P(args|y,H)
Note: We'll use this with an MCMC sampler, so it should call the non-vectorized likelihood.
"""
lnp = self.log_prior(*args)
if lnp != -np.inf:
lnp += self.log_likelihood(*args)
return lnp
def draw_samples_from_posterior(self, guess=None, nwalkers=None, nsteps=None, burn=None, thinby=None):
"""
Use emcee to draw samples from P(args|y,H)
"""
# Deal with unset inputs:
if guess is None: print("You need to specify a starting point in parameter space with the `guess=` kwarg...")
if nwalkers is None: print("You need to specify the `nwalkers=` kwarg...")
if nsteps is None: print("You need to specify the chain length `nsteps=` kwarg...")
if burn is None: print("You need to specify the length of burnin `burn=` kwarg...")
if thinby is None: print("You need to specify the thinning factor `thinby=` kwarg...")
# The density to sample is this model's own posterior PDF
lnprob = self.log_posterior
npars = len(guess)
self.sampler = emcee.EnsembleSampler(nwalkers, npars, lnprob)
# You could add e.g. threads=4 to speed things up with multiprocessing
# Generate an ensemble of walkers within +/-1% of the guess:
theta_0 = np.array([guess*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
# Note that the initial parameter array theta_0 should have dimensions nwalkers × npars
# Evolve the ensemble:
self.sampler.run_mcmc(theta_0, nsteps)
# Plot the raw samples:
plt.rcParams['figure.figsize'] = (12.0, 6.0)
plt.subplot(211)
for j in range(nwalkers):
plt.plot(self.sampler.chain[j,:,0], 'o', alpha=0.2)
plt.title("Raw Markov chains")
# Extract the chain, remove burnin, merge, and thin:
samples = self.sampler.chain[:, burn:, :].reshape((-1, npars))
samples = samples[range(0,samples.shape[0],thinby),:]
# Keep the samples with the model for future use!
self.samples = samples
self.Nsamples = len(samples)
# Plot the thinned chain
plt.subplot(212)
plt.plot(samples[:,0], 'o')
plt.title("Thinned, post-burnin chains");
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
2. Evaluating a Simple Model
First, let's assume a simple model $H_1$, that the sampling distribution is an exponential:
Model 1: $P(y|a_1, H_1) = \frac{1}{a_1}e^{-y/a_1}$; $y\geq0$
Our single parameter is $a_1$, the mean of the exponential distribution.
2a. Implementation in code
Complete the implementation of this model as a derived class of Model, below. Note that an ExponentialModel object still has all the methods defined for Model, in particular the ones that we don't need to redefine here.
Note that this includes choosing a reasonable prior for $a_1$. It should be a proper (normalizable) distribution. We don't want to deal with improper distributions when calculating the evidence later on.
Make sure you understand the workings of even the functions that are completely given.
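As a concrete illustration (the bounds below are an assumption for illustration, not the official solution), a proper uniform prior on $a_1$ could look like this:

```python
import numpy as np

# Hypothetical prior bounds on a1, in kpc; any proper (normalizable) choice works.
min_a1, max_a1 = 1.0, 2000.0

def example_log_prior(a1):
    # Log of a uniform PDF on [min_a1, max_a1]; -inf outside the support.
    if min_a1 <= a1 <= max_a1:
        return -np.log(max_a1 - min_a1)
    return -np.inf

def example_draw_from_prior(N):
    # N independent draws from the uniform prior.
    return np.random.uniform(min_a1, max_a1, size=N)

print(example_log_prior(500.0), example_log_prior(-3.0))  # finite value, -inf
```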
|
try:
exec(open('solutions/exponentialmodel.py').read())
except IOError:
REMOVE_THIS_LINE()
class ExponentialModel(Model):
"""
Simple exponential model for mis-centering.
"""
def __init__(self):
# Define any hyperparameters for the a1 prior here.
# E.g., for uniform, something like "self.min_a1 = value" and "self.max_a1 = value"
# More sophisticatedly, you could make these values arguments of __init__.
REPLACE_WITH_YOUR_SOLUTION()
# The next line finishes initialization by calling the parent class' __init__
Model.__init__(self)
def log_prior(self, a1):
"""
Evaluate the log prior PDF P(a1|H)
"""
REPLACE_WITH_YOUR_SOLUTION()
def draw_samples_from_prior(self, N):
"""
Return N samples of a1 from the prior PDF P(a1|H)
"""
REPLACE_WITH_YOUR_SOLUTION()
def log_likelihood(self, a1):
"""
Evaluate the log of the likelihood function L(a1) = P(y|a1,H)
Argument a1 is scalar.
"""
return np.sum(st.expon.logpdf(y, scale=a1))
def sampling_distribution(self, yy, a1):
"""
Evaluate the sampling distribution P(yy|a,H) at a point in data space yy given parameter value a1
We expect a vector input yy, and return the corresponding probabilities.
Note: This is useful for making plots of "the model" overlaid on the histogram of the data
"""
return st.expon.pdf(yy, scale=a1)
def generate_replica_dataset(self, a1):
"""
Draw a replica data set y_rep from the sampling distribution P(y_rep|a1,H).
y_rep should have the same length as the true data set.
Argument a1 is a scalar.
"""
REPLACE_WITH_YOUR_SOLUTION()
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Test out the log-posterior function to make sure it's not obviously buggy.
|
for a1 in [1.0, 10.0, 100.0, -3.14]:
print('Log-posterior for a1=', a1, ' = ', Model1.log_posterior(a1))
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Similarly the mock-data producing function (with an arbitrary $a_1$ value).
|
plt.rcParams['figure.figsize'] = (8.0, 6.0)
plt.hist(Model1.generate_replica_dataset(500.), bins=bins, color="lightgray");
plt.xlabel("Measured distance $y$");
plt.ylabel("Frequency (replica)");
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Finally, test the sampling distribution function.
|
plt.plot(bins, Model1.sampling_distribution(bins, 500.));
plt.xlabel("Measured distance $y$");
plt.ylabel("$p(y|a_1)$");
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
2b. Fit the model to the data
The draw_samples_from_posterior method carries out a parameter inference with emcee, displaying its Markov chains, removing burn-in, thinning, and concatenating the chains. Since this step isn't really the point of this problem, the code is given to you, but you'll still need to experiment with the keyword argument ("kwarg") inputs (and read the code to see what they do) in order to get good results. (The suggestions in the cell below are pretty terrible.)
As a rule, you should start with burn=0 and thinby=1, and set these appropriately for a final run once you know roughly what the convergence time and autocorrelation length are.
The MCMC samples are stored in the Model.samples array.
|
try:
exec(open('solutions/fit.py').read())
except IOError:
# This will execute out of the box, but will not work well. The arguments should be fiddled with.
Model1.draw_samples_from_posterior(guess=[1000.0], nwalkers=8, nsteps=10, burn=0, thinby=1)
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
It will be useful for later to know the mean of the posterior:
|
Model1.post_mean = np.mean(Model1.samples, axis=0)
print("Posterior mean value of a1 = ", Model1.post_mean)
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
2c. Visually compare the posterior predictions with the data.
First, let's just plot the posterior-mean model over the data.
|
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Now overlay a curve following the sampling distribution conditioned on the posterior mean value of a1:
pp = Model1.sampling_distribution(bins, Model1.post_mean)
plt.plot(bins, pp, linestyle="dashed", color="red", label="Posterior mean model")
plt.xlabel("Measured distance $y$")
plt.ylabel("Normalized Frequency")
plt.legend();
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
This kind of plot should be familiar: it's often a good idea to evaluate model adequacy in data space. You should already be able to see telling differences between a well-fitting model's sampling distribution and the data histogram.
Now, let's compare a random predicted ("replica") data set, drawn from the posterior predictive distribution, with the data. To do this we first draw a parameter value from the posterior PDF, and then generate a dataset from the sampling distribution conditioned on that value. The result is a sample from $P(y_{rep}|y)$.
|
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Choose a posterior sample at random and generate a replica dataset, and show its histogram
j = np.random.randint(0, len(Model1.samples))
mock = Model1.generate_replica_dataset(Model1.samples[j])
plt.hist(mock, bins=bins, alpha=1.0, histtype="step", color="red", density=True, label="Sample posterior prediction")
plt.xlabel("Measured distance $y$")
plt.ylabel("Normalized Frequency")
plt.legend();
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
This plot is nice because it is comparing apples with apples: do mock datasets drawn from our model sampling distribution with any plausible parameter value "look like" the real data?
To best evaluate this, we want to visualize the posterior predictive distribution of replica datasets. We can do this by plotting many replica datasets drawn from the posterior predictive PDF, e.g. one for each of our posterior samples. Let's put this plot in a function, so we can re-use it later.
|
def visual_check(Model, Nreps=None):
plt.rcParams['figure.figsize'] = (8.0, 6.0)
# First the histogram of observed data, as backdrop:
plt.hist(y, bins=bins, color="skyblue", density=True, label="Observed")
# Compute the posterior mean parameter (vector)
pm = np.mean(Model.samples, axis=0)
# Make a large number of replica datasets, and overlay histograms of them all
if Nreps is None: Nreps = len(Model.samples)
alpha = 5.0 / Nreps
for jj in np.round(np.linspace(0, len(Model.samples), num=Nreps, endpoint=False)):
j = int(jj)
if j==0:
# Plot a dataset drawn using a = the posterior mean a, to give a good legend
mock = Model.generate_replica_dataset(pm)
plt.hist(mock, bins=bins, histtype="step", alpha=1.0, color="red", density=True, label="Sample posterior predictions")
else:
# Take the next posterior sample a and generate a replica dataset
            mock = Model.generate_replica_dataset(Model.samples[j])
plt.hist(mock, bins=bins, histtype="step", alpha=alpha, color="red", density=True)
# Include the posterior mean model for comparison
    pp = Model.sampling_distribution(bins, pm)
plt.plot(bins, pp, linestyle="dashed", color="red", label="Posterior mean model")
plt.xlabel("Measured distance $y$")
plt.ylabel("Normalized Frequency")
plt.legend();
visual_check(Model1, Nreps=100)
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Based on these visual checks, would you say the model does a good job of predicting the observed data?
2d. Quantitative posterior predictive model check
Now let's quantify the (in)adequacy of the fit with a quantitative posterior predictive model check, based on the test_statistic function you've already defined.
To sample the posterior predictive distribution of test statistics $P(T(y_{rep})|y)$, we need to generate replica datasets from the model:
|
def distribution_of_T(Model):
"""
Compute T(yrep) for each yrep drawn from the posterior predictive distribution,
using parameter samples stored in Model.
"""
return np.array([T(Model.generate_replica_dataset(a)) for a in Model.samples])
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
We can now do the following:
* plot a histogram of $T(\mathrm{mock~data})$
* compare that distribution with $T(\mathrm{real~data})$
* compute and report the p-value for $T(\mathrm{real~data})$
And we want all of that in packaged in functions of the model, so that we can re-use it later (on different models!).
First let's write a function to compute the p-value, $P(T > T(y)|y,H)$:
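As a minimal numeric sketch (illustrative helper names, not the notebook's API): the posterior predictive p-value is simply the fraction of replica test statistics that exceed the observed one.

```python
import numpy as np

def example_pvalue(T_reps, T_obs):
    # Fraction of replica test statistics exceeding the observed value,
    # i.e. a Monte Carlo estimate of P(T > T(y) | y, H).
    return np.mean(np.asarray(T_reps) > T_obs)

print(example_pvalue([1.0, 2.0, 3.0, 4.0], 2.5))  # 0.5
```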
|
try:
exec(open('solutions/pvalue.py').read())
except IOError:
REMOVE_THIS_LINE()
def pvalue(Model):
"""
Compute the posterior predictive p-value, P(T > T(y)|y,H):
"""
REPLACE_WITH_YOUR_SOLUTION()
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Here's a function that plots the distribution of T, and reports the p-value:
|
def posterior_predictive_check(Model, nbins=25):
"""
Compute the posterior predictive distribution of the test statistic T(y_rep), and compare with T(y_obs)
"""
# Compute distribution of T(yrep):
TT = distribution_of_T(Model)
# Plot:
plt.rcParams['figure.figsize'] = (8.0, 6.0)
plt.hist(TT, bins=nbins, histtype="step", color="red", label="$P(T(y_{\\rm rep})|y)$")
# Overlay T(y_obs):
plt.axvline(x=T(y), color="gray", linestyle="dashed", label="$T(y_{\\rm observed})$")
plt.xlabel("Test statistic T(y)")
plt.ylabel("Posterior predictive probability density")
plt.legend();
# Compute p-value:
p = pvalue(Model)
print("p-value =", p)
return p
p1 = posterior_predictive_check(Model1)
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Does this result agree with your visual evaluation of the model fitness from the last section? If not, perhaps the test statistic you chose doesn't reflect the agreement you're looking for when inspecting the posterior predictions. If you'd like to re-define your test statistic, do so now and repeat this check.
2e. Calculate the DIC for Model 1
We saw in class that the Deviance Information Criterion is given by:
$\mathrm{DIC} = \langle D(\theta) \rangle + p_D; \quad p_D = \langle D(\theta) \rangle - D(\langle\theta\rangle)$
where the deviance $D(\theta)=-2\log P(\mathrm{data}|\theta)$, and averages $\langle\rangle$ are over the posterior.
Write this function, and execute it for the simple model.
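The arithmetic can be sketched with toy numbers (illustrative names; the real function should use the model's deviance samples):

```python
import numpy as np

def example_DIC(D_samples, D_at_posterior_mean):
    # DIC = <D> + pD, with pD = <D> - D(<theta>), where D = -2 log L.
    pD = np.mean(D_samples) - D_at_posterior_mean
    return np.mean(D_samples) + pD, pD

dic, pD = example_DIC(np.array([10.0, 12.0, 14.0]), 11.0)
print(dic, pD)  # 13.0 1.0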
|
try:
exec(open('solutions/dic.py').read())
except IOError:
REMOVE_THIS_LINE()
def DIC(Model):
"""
Compute the Deviance Information Criterion for the given model
"""
# Compute the deviance D for each sample, using the vectorized code.
D = -2.0*Model.vectorized_log_likelihood(Model.samples)
pD = REPLACE_WITH_YOUR_SOLUTION()
DIC = REPLACE_WITH_YOUR_SOLUTION()
return DIC, pD
DIC1, pD1 = DIC(Model1)
print("Effective number of fitted parameters =", pD1)
print("DIC =", DIC1)
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Does your value of $p_D$ make intuitive sense?
2f. Compute the evidence
To do this, note that
$P(D|H)=\int P(D|\theta,H) \, P(\theta|H) d\theta$
can be approximated by an average over samples from the prior
$P(D|H) \approx \frac{1}{m}\sum_{k=1}^m P(D|\theta_k,H)$; $\theta_k\sim P(\theta|H)$.
This estimate is better than trying to use samples from the posterior to calculate the evidence, if only because it's unbiased. But in general, and especially for large-dimensional parameter spaces, it is very inefficient (because the likelihood typically is large in only a small fraction of the prior volume). Still, it will do for this exercise.
In a function, draw a large number of samples from the prior and use them to calculate the evidence. To avoid numerical over/underflows, use the special scipy function logsumexp (which we imported directly, way at the top of the notebook) to do the sum. As the name implies, this function is equivalent to log(sum(exp(...))), but is more numerically stable.
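A minimal sketch of the Monte Carlo estimate (illustrative names; logsumexp is inlined here so the snippet is self-contained): the log evidence is approximately the log of the mean likelihood over prior draws.

```python
import numpy as np

def example_log_evidence(logL_at_prior_samples):
    # log P(D|H) ~= logsumexp(log L at prior samples) - log N,
    # with logsumexp written out for numerical stability.
    logL = np.asarray(logL_at_prior_samples)
    m = logL.max()
    return m + np.log(np.exp(logL - m).sum()) - np.log(len(logL))

# Two prior samples with likelihood 0.5 each -> evidence 0.5, log evidence ~ -0.693
print(example_log_evidence(np.log([0.5, 0.5])))
```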
|
try:
exec(open('solutions/evidence.py').read())
except IOError:
REMOVE_THIS_LINE()
def log_evidence(Model, N=1000):
"""
Compute the log evidence for the model using N samples from the prior
"""
REPLACE_WITH_YOUR_SOLUTION()
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Roughly how precisely do we need to know the log evidence to be able to compare models? Run log_evidence with different values of N (the number of prior samples in the average) until you're satisfied that you're getting a usefully accurate result.
|
for Nevidence in [1, 10, 100]: # You *will* want to change these values
%time logE1 = log_evidence(Model1, N=Nevidence)
print("From", Nevidence, "samples, the log-evidence is", logE1, "\n")
|
Sessions/Session10/Day5/model_selection/model_comparison_tutorial.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Distributed input
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/input"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/input.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/input.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/distribute/input.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
The tf.distribute APIs provide an easy way for users to scale their training from a single machine to multiple machines. When scaling a model, users also have to distribute their input across multiple devices, and tf.distribute provides APIs that do this distribution automatically.
This guide shows the different ways in which you can create distributed datasets and iterators using the tf.distribute APIs. It also covers the following topics:
Usage, sharding, and batching options for tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.experimental_distribute_datasets_from_function
Different ways in which you can iterate over a distributed dataset
Differences between the tf.distribute.Strategy.experimental_distribute_dataset / tf.distribute.Strategy.experimental_distribute_datasets_from_function APIs and the tf.data API, as well as any limitations that users may come across
This guide does not cover the use of distributed input with the Keras APIs.
Distributed datasets
To scale with the tf.distribute APIs, it is recommended that users represent their input with tf.data.Dataset. tf.distribute has been made to work efficiently with tf.data.Dataset (for example, data is automatically prefetched onto each accelerator device), and performance optimizations are regularly incorporated into the implementation. If you have a use case for something other than tf.data.Dataset, please refer to a later section of this guide. In a non-distributed training loop, users first create a tf.data.Dataset instance and then iterate over its elements. For example:
|
# Import TensorFlow
!pip install tf-nightly
import tensorflow as tf
# Helper libraries
import numpy as np
import os
print(tf.__version__)
global_batch_size = 16
# Create a tf.data.Dataset object.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
@tf.function
def train_step(inputs):
features, labels = inputs
return labels - 0.3 * features
# Iterate over the dataset using the for..in construct.
for inputs in dataset:
print(train_step(inputs))
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Two APIs were introduced that distribute a tf.data.Dataset instance and return a distributed dataset object, so that users can apply tf.distribute strategies with minimal changes to their existing code. A user can then iterate over this distributed dataset instance and train their model as before. Let us now look at the two APIs, tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.experimental_distribute_datasets_from_function, in detail.
tf.distribute.Strategy.experimental_distribute_dataset
Usage
This API takes a tf.data.Dataset instance as input and returns a tf.distribute.DistributedDataset instance. You should batch the input dataset with a value equal to the global batch size, which is the number of samples you want to process across all devices in a single step. You can iterate over this distributed dataset in a Pythonic fashion, or create an iterator using iter. The returned object is not a tf.data.Dataset instance and does not support any other APIs that transform or inspect the dataset. This is the recommended API if you don't have a specific way in which you want to shard your input over the different replicas.
|
global_batch_size = 16
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
# Distribute input using the `experimental_distribute_dataset`.
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
# 1 global batch of data fed to the model in 1 step.
print(next(iter(dist_dataset)))
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Properties
Batching
tf.distribute rebatches the input tf.data.Dataset instance with a new batch size equal to the global batch size divided by the number of replicas in sync. The number of replicas in sync is the number of devices taking part in the gradient allreduce during training. When a user calls next on the distributed iterator, a per-replica batch of data is returned on each replica. The rebatched dataset cardinality is always a multiple of the number of replicas. Here are a couple of examples:
tf.data.Dataset.range(6).batch(4, drop_remainder=False)
Without distribution:
Batch 1: [0, 1, 2, 3]
Batch 2: [4, 5]
With distribution over 2 replicas, the last batch ([4, 5]) is split between the two replicas:
Batch 1:
Replica 1: [0, 1]
Replica 2: [2, 3]
Batch 2:
Replica 1: [4]
Replica 2: [5]
tf.data.Dataset.range(4).batch(4)
Without distribution:
Batch 1: [0, 1, 2, 3]
With distribution over 5 replicas:
Batch 1:
Replica 1: [0]
Replica 2: [1]
Replica 3: [2]
Replica 4: [3]
Replica 5: []
tf.data.Dataset.range(8).batch(4)
Without distribution:
Batch 1: [0, 1, 2, 3]
Batch 2: [4, 5, 6, 7]
With distribution over 3 replicas:
Batch 1:
Replica 1: [0, 1]
Replica 2: [2, 3]
Replica 3: []
Batch 2:
Replica 1: [4, 5]
Replica 2: [6, 7]
Replica 3: []
Note: The examples above only illustrate how a global batch is split across different replicas. It is not advisable to depend on the actual values that may end up on each replica, since they can change with the implementation.
Rebatching the dataset has a space complexity that increases linearly with the number of replicas, so for the multi-worker training use case the input pipeline can run into OOM errors.
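The splitting behavior in the examples above can be sketched in plain Python (an illustration of the documented examples only, not TensorFlow's actual implementation; per the note above, the real per-replica values are implementation-dependent):

```python
def split_global_batch(batch, num_replicas):
    # Each replica receives at most ceil(len(batch) / num_replicas) elements;
    # trailing replicas may get fewer, or none at all.
    per = -(-len(batch) // num_replicas)  # ceiling division
    return [batch[i * per:(i + 1) * per] for i in range(num_replicas)]

print(split_global_batch([4, 5], 2))        # [[4], [5]]
print(split_global_batch([0, 1, 2, 3], 5))  # [[0], [1], [2], [3], []]
print(split_global_batch([0, 1, 2, 3], 3))  # [[0, 1], [2, 3], []]
```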
Sharding
tf.distribute also autoshards the input dataset in multi-worker training. Each dataset is created on the CPU device of the worker. Autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (provided the right tf.data.experimental.AutoShardPolicy is set). This ensures that at each step, a global batch size of non-overlapping dataset elements is processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions.
|
dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(64).batch(16)
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset = dataset.with_options(options)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
There are three different options that you can set for tf.data.experimental.AutoShardPolicy:
AUTO: This is the default option, meaning an attempt will be made to shard by FILE. The attempt fails if a file-based dataset is not detected, in which case tf.distribute falls back to sharding by DATA. If the input dataset is file-based but the number of files is less than the number of workers, an <code>InvalidArgumentError</code> will be raised. In that case, explicitly set the policy to <code>AutoShardPolicy.DATA</code>, or split your input source into smaller files so that the number of files is greater than the number of workers.
FILE: This is the option to use if you want to shard the input files over all the workers. You should use this option if the number of input files is much larger than the number of workers and the data in the files is evenly distributed. The downside of this option is having idle workers if the data in the files is not evenly distributed. If the number of files is less than the number of workers, an InvalidArgumentError will be raised; in that case, explicitly set the policy to AutoShardPolicy.DATA. For example, let us distribute 2 files over 2 workers with 1 replica each. File 1 contains [0, 1, 2, 3, 4, 5] and File 2 contains [6, 7, 8, 9, 10, 11]. Let the total number of replicas in sync be 2 and the global batch size be 4.
Worker 0:
Batch 1 = Replica 1: [0, 1]
Batch 2 = Replica 1: [2, 3]
Batch 3 = Replica 1: [4]
Batch 4 = Replica 1: [5]
Worker 1:
Batch 1 = Replica 2: [6, 7]
Batch 2 = Replica 2: [8, 9]
Batch 3 = Replica 2: [10]
Batch 4 = Replica 2: [11]
DATA: This will autoshard the elements across all the workers. Each worker reads the entire dataset and processes only the shard assigned to it; all other shards are discarded. This option is generally used when the number of input files is less than the number of workers and you want better sharding of the data across all workers. The downside is that the entire dataset is read on each worker. For example, let us distribute 1 file over 2 workers. File 1 contains [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Let the total number of replicas in sync be 2.
Worker 0:
Batch 1 = Replica 1: [0, 1]
Batch 2 = Replica 1: [4, 5]
Batch 3 = Replica 1: [8, 9]
Worker 1:
Batch 1 = Replica 2: [2, 3]
Batch 2 = Replica 2: [6, 7]
Batch 3 = Replica 2: [10, 11]
OFF: If you turn off autosharding, each worker processes all the data. For example, let us distribute 1 file over 2 workers. File 1 contains [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Let the total number of replicas in sync be 2. Then each worker sees the following distribution:
Worker 0:
Batch 1 = Replica 1: [0, 1]
Batch 2 = Replica 1: [2, 3]
Batch 3 = Replica 1: [4, 5]
Batch 4 = Replica 1: [6, 7]
Batch 5 = Replica 1: [8, 9]
Batch 6 = Replica 1: [10, 11]
Worker 1:
Batch 1 = Replica 2: [0, 1]
Batch 2 = Replica 2: [2, 3]
Batch 3 = Replica 2: [4, 5]
Batch 4 = Replica 2: [6, 7]
Batch 5 = Replica 2: [8, 9]
Batch 6 = Replica 2: [10, 11]
Prefetching
By default, tf.distribute adds a prefetch transformation at the end of the user-provided tf.data.Dataset instance. The argument to the prefetch transformation, buffer_size, is equal to the number of replicas in sync.
tf.distribute.Strategy.experimental_distribute_datasets_from_function
Usage
This API takes an input function and returns a tf.distribute.DistributedDataset instance. The input function should take a tf.distribute.InputContext argument and return a tf.data.Dataset instance. With this API, tf.distribute does not make any further changes to the tf.data.Dataset instance returned from the input function, so it is up to the user to batch and shard the dataset. tf.distribute calls the input function on the CPU device of each worker. Apart from allowing users to specify their own batching and sharding logic, this API also demonstrates better scalability and performance compared to tf.distribute.Strategy.experimental_distribute_dataset when used for multi-worker training.
|
mirrored_strategy = tf.distribute.MirroredStrategy()
def dataset_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(64)  # batching happens below with the per-replica size
dataset = dataset.shard(
input_context.num_input_pipelines, input_context.input_pipeline_id)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(2) # This prefetches 2 batches per device.
return dataset
dist_dataset = mirrored_strategy.experimental_distribute_datasets_from_function(dataset_fn)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Properties
Batching
The tf.data.Dataset instance returned by the input function should be batched using the per-replica batch size, which is the global batch size divided by the number of replicas taking part in sync training. This is because tf.distribute calls the input function on the CPU device of each worker, and the dataset created on a given worker should be ready to use by all the replicas on that worker.
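The batching rule stated above fits in two lines; `per_replica_batch_size` below is a hypothetical helper mirroring what tf.distribute.InputContext.get_per_replica_batch_size returns:

```python
def per_replica_batch_size(global_batch_size, num_replicas_in_sync):
    # The dataset returned by the input function must be batched with
    # this value, not with the global batch size.
    if global_batch_size % num_replicas_in_sync != 0:
        raise ValueError("global batch size must divide evenly across replicas")
    return global_batch_size // num_replicas_in_sync

print(per_replica_batch_size(64, 8))  # → 8
```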
Sharding
The tf.distribute.InputContext object that is implicitly passed as an argument to the input function is created by tf.distribute under the hood. It has information about the number of workers, the current worker ID, and so on. The input function can handle sharding as per the policies set by the user, using these properties of the tf.distribute.InputContext object.
Prefetching
tf.distribute does not add a prefetch transformation at the end of the tf.data.Dataset returned by the user-provided input function.
Note: Both tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.experimental_distribute_datasets_from_function return tf.distribute.DistributedDataset instances, which are not of type tf.data.Dataset. You can iterate over these instances (as shown in the Distributed Iterators section) and use the element_spec property.
Distributed iterators
As with non-distributed tf.data.Dataset instances, you need to create an iterator on a tf.distribute.DistributedDataset instance in order to iterate over it and access its elements. The following are the ways in which you can create a tf.distribute.DistributedIterator and use it to train your model.
Usage
Use a Pythonic for-loop construct
You can use a user-friendly Pythonic loop to iterate over a tf.distribute.DistributedDataset. The elements returned from the tf.distribute.DistributedIterator can be a single tf.Tensor or a tf.distribute.DistributedValues containing a value per replica. Placing the loop inside a tf.function gives a performance boost; however, break and return are currently not supported for a loop over a tf.distribute.DistributedDataset placed inside a tf.function. Placing the loop inside a tf.function is also not supported when using multi-worker strategies such as tf.distribute.experimental.MultiWorkerMirroredStrategy and tf.distribute.TPUStrategy. Placing the loop inside a tf.function works for single-worker tf.distribute.TPUStrategy, but not when using TPU pods.
|
global_batch_size = 16
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(100).batch(global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
@tf.function
def train_step(inputs):
features, labels = inputs
return labels - 0.3 * features
for x in dist_dataset:
# train_step trains the model using the dataset elements
loss = mirrored_strategy.run(train_step, args=(x,))
print("Loss is ", loss)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Use iter to create an explicit iterator
To iterate over the elements of a tf.distribute.DistributedDataset instance, you can create a tf.distribute.DistributedIterator using the iter API on it. With an explicit iterator, you can iterate for a fixed number of steps. To get the next element from a tf.distribute.DistributedIterator instance dist_iterator, you can call next(dist_iterator), dist_iterator.get_next(), or dist_iterator.get_next_as_optional(). The former two are essentially the same.
|
num_epochs = 10
steps_per_epoch = 5
for epoch in range(num_epochs):
dist_iterator = iter(dist_dataset)
for step in range(steps_per_epoch):
# train_step trains the model using the dataset elements
loss = mirrored_strategy.run(train_step, args=(next(dist_iterator),))
# which is the same as
# loss = mirrored_strategy.run(train_step, args=(dist_iterator.get_next(),))
print("Loss is ", loss)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Using next() or tf.distribute.DistributedIterator.get_next() raises an OutOfRange error once the end of the tf.distribute.DistributedIterator is reached. The client can catch the error on the Python side and continue doing other work such as checkpointing and evaluation. However, this will not work if you are using a host training loop (i.e., running multiple steps per tf.function), such as:
@tf.function
def train_fn(iterator):
  for _ in tf.range(steps_per_loop):
    strategy.run(step_fn, args=(next(iterator),))
train_fn wraps multiple steps by placing the step body inside a tf.range. In this case, iterations in the loop with no dependency between them could start in parallel, so an OutOfRange error can be triggered in later iterations before the computation of earlier iterations has finished. Once an OutOfRange error is thrown, all the ops in the function are terminated right away. If this is a case you would like to avoid, use tf.distribute.DistributedIterator.get_next_as_optional(), an alternative that does not throw an OutOfRange error. get_next_as_optional returns a tf.experimental.Optional that contains the next element, or no value if the tf.distribute.DistributedIterator has reached its end.
|
# You can break the loop with get_next_as_optional by checking if the Optional contains value
global_batch_size = 4
steps_per_loop = 5
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "CPU:0"])
dataset = tf.data.Dataset.range(9).batch(global_batch_size)
distributed_iterator = iter(strategy.experimental_distribute_dataset(dataset))
@tf.function
def train_fn(distributed_iterator):
for _ in tf.range(steps_per_loop):
optional_data = distributed_iterator.get_next_as_optional()
if not optional_data.has_value():
break
per_replica_results = strategy.run(lambda x:x, args=(optional_data.get_value(),))
tf.print(strategy.experimental_local_results(per_replica_results))
train_fn(distributed_iterator)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Using the element_spec property
If you pass the elements of a distributed dataset to a tf.function and want a tf.TypeSpec guarantee, specify the input_signature argument of the tf.function. The output of a distributed dataset is a tf.distribute.DistributedValues, which can represent the input to a single device or to multiple devices. To get the tf.TypeSpec corresponding to this distributed value, use the element_spec property of the distributed dataset or distributed iterator object.
|
global_batch_size = 16
epochs = 5
steps_per_epoch = 5
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(100).batch(global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(per_replica_inputs):
def step_fn(inputs):
return 2 * inputs
return mirrored_strategy.run(step_fn, args=(per_replica_inputs,))
for _ in range(epochs):
iterator = iter(dist_dataset)
for _ in range(steps_per_epoch):
output = train_step(next(iterator))
tf.print(output)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Partial batches
Partial batches are encountered when the tf.data.Dataset instances that users create contain batch sizes that are not evenly divisible by the number of replicas, or when the cardinality of the dataset instance is not divisible by the batch size. This means that when the dataset is distributed over multiple replicas, the next call on some iterators will result in an OutOfRangeError. To handle this use case, tf.distribute returns dummy batches of batch size 0 on replicas that do not have any more data to process.
For the single-worker case, if data is not returned by the next call on the iterator, dummy batches of batch size 0 are created and used along with the real data in the dataset. In the case of partial batches, the last global batch of data will contain real data alongside dummy batches. The stopping condition for processing data now checks whether any of the replicas have data; if there is no data on any replica, an OutOfRange error is thrown.
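A plain-Python sketch of how the final global batch is split (a simplified model with a hypothetical helper, not the actual tf.distribute logic; the example values match the dataset.range(9).batch(4) snippet shown earlier):

```python
def last_step_batch_sizes(cardinality, global_batch, num_replicas):
    """Return the batch size each replica sees on the final step.
    Replicas beyond the remaining data get a size-0 dummy batch."""
    per_replica = global_batch // num_replicas
    remainder = cardinality % global_batch
    if remainder == 0:  # no partial batch: every replica gets a full slice
        remainder = global_batch
    return [min(per_replica, max(0, remainder - r * per_replica))
            for r in range(num_replicas)]

# 9 elements, global batch 4, 2 replicas: the last step holds only [8].
print(last_step_batch_sizes(9, 4, 2))  # → [1, 0]
```

The second replica receives a size-0 dummy batch on the last step, which is exactly the case the stopping condition has to handle.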
For the multi-worker case, the boolean value representing the presence of data on each of the workers is aggregated using cross-replica communication, which is used to identify whether all the workers have finished processing the distributed dataset. Since this involves cross-worker communication, there is some performance penalty involved.
Caveats
When using the tf.distribute.Strategy.experimental_distribute_dataset API with a multi-worker setup, users pass a tf.data.Dataset that reads from files. If tf.data.experimental.AutoShardPolicy is set to AUTO or FILE, the actual per-step batch size may be smaller than the user-defined global batch size. This can happen when the remaining elements in a file are fewer than the global batch size. Users can either exhaust the dataset without depending on the number of steps run, or set tf.data.experimental.AutoShardPolicy to DATA to work around it.
Stateful dataset transformations are currently not supported with tf.distribute, and any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e., the random seed) on the local machine where the Python process is being executed.
Experimental tf.data.experimental.OptimizationOptions that are disabled by default can, in certain contexts such as when used together with tf.distribute, cause a performance degradation. Only enable them after you validate that they benefit the performance of your workload in a distributed setting.
The order in which data is processed by the workers when using tf.distribute.experimental_distribute_dataset or tf.distribute.experimental_distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can, however, insert an index for each element in the batch and order the outputs accordingly. The following snippet is an example of how to order outputs.
Note: tf.distribute.MirroredStrategy() is used here for convenience. You only need to reorder inputs when you are using multiple workers, but tf.distribute.MirroredStrategy is used here to distribute training on a single worker.
|
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset_size = 24
batch_size = 6
dataset = tf.data.Dataset.range(dataset_size).enumerate().batch(batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
def predict(index, inputs):
outputs = 2 * inputs
return index, outputs
result = {}
for index, inputs in dist_dataset:
output_index, outputs = mirrored_strategy.run(predict, args=(index, inputs))
indices = list(mirrored_strategy.experimental_local_results(output_index))
rindices = []
for a in indices:
rindices.extend(a.numpy())
outputs = list(mirrored_strategy.experimental_local_results(outputs))
routputs = []
for a in outputs:
routputs.extend(a.numpy())
for i, value in zip(rindices, routputs):
result[i] = value
print(result)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
<a name="tensorinputs"> # How do I distribute my data if I am not using a canonical tf.data.Dataset instance? </a>
Sometimes users cannot use a tf.data.Dataset to represent their input, and therefore cannot use the aforementioned APIs to distribute the dataset to multiple devices. In such cases, you can use raw tensors or inputs from a generator.
Use experimental_distribute_values_from_function for arbitrary tensor inputs
strategy.run accepts tf.distribute.DistributedValues, which is the output of next(iterator). To pass tensor values, use experimental_distribute_values_from_function to construct tf.distribute.DistributedValues from raw tensors.
|
mirrored_strategy = tf.distribute.MirroredStrategy()
worker_devices = mirrored_strategy.extended.worker_devices
def value_fn(ctx):
return tf.constant(1.0)
distributed_values = mirrored_strategy.experimental_distribute_values_from_function(value_fn)
for _ in range(4):
result = mirrored_strategy.run(lambda x:x, args=(distributed_values,))
print(result)
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Use tf.data.Dataset.from_generator if your input is from a generator
If you have a generator function that you want to use, you can create a tf.data.Dataset instance using the from_generator API.
Note: This is currently not supported for tf.distribute.TPUStrategy.
|
mirrored_strategy = tf.distribute.MirroredStrategy()
def input_gen():
while True:
yield np.random.rand(4)
# use Dataset.from_generator
dataset = tf.data.Dataset.from_generator(
input_gen, output_types=(tf.float32), output_shapes=tf.TensorShape([4]))
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
iterator = iter(dist_dataset)
for _ in range(4):
mirrored_strategy.run(lambda x:x, args=(next(iterator),))
|
site/ko/tutorials/distribute/input.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
''The experiments I am about to relate ... may be repeated with great ease,
whenever the sun shines, and without any other apparatus than is at hand to everyone [1]''
Thus began Thomas Young's famous experiment on 24 November 1803 at the Royal Society of London. Before an audience that largely defended the corpuscular theory of light (backed by Isaac Newton), Thomas Young carried out the first light-interference experiment, demonstrating the wave nature of light. He let a ray of sunlight pass through a small hole in the window of the room and made the beam strike the edge of a card, splitting it in two. When these two beams overlapped on a screen, they produced dark and bright fringes of light.
[1] Thomas Young, "Experimental Demonstration of the General Law of the Interference of Light", Philosophical Transactions of the Royal Society of London vol. 94 (1804).
Theory
Young's experiment is usually depicted with a double slit, as shown in the figure. A spherical wave (or a plane wave; the treatment is equivalent) strikes a screen in which two apertures $S_1$ and $S_2$ have been made, very close to each other (we will call the distance between them $a$). These apertures act as two secondary sources of radiation, in turn generating two spherical waves that overlap in the space behind them. If we observe the irradiance distribution on a screen placed at a certain distance $D$, what will we find?
The two waves generated at $S_1$ and $S_2$ can be written as:
$$\vec{E_1} = \vec{e_1} \; E_{01} \; \; \cos\left( k r_1 - \omega t + \phi_1\right)$$
$$\vec{E_2} = \vec{e_2} \; E_{02} \; \; \cos\left( k r_2 - \omega t + \phi_2\right)$$
$E_{0j}$ is the amplitude of the wave, $\vec{e_j}$ is the direction of vibration, and $\phi_j$ is the initial phase. $r_1$ ($r_2$) is the path traveled by the wave from $S_1$ ($S_2$) to the observation point P. Both waves have the same wavelength.
The superposition of these two waves gives the familiar expression for the irradiance,
$$I_T = I_1 + I_2 + 2 \sqrt{I_1 I_2} \; (\vec{e_ 1}\cdot\vec{e_ 2}) \; \cos(\delta)$$
where $\delta = k (r_2 - r_1) + \phi_2 - \phi_1$ is the phase shift (or phase difference) between the two waves.
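The cross term in the irradiance expression comes from time-averaging the square of the total field; sketching that intermediate step:

$$I_T \propto \left\langle \left(\vec{E_1}+\vec{E_2}\right)^2 \right\rangle = \left\langle E_1^2 \right\rangle + \left\langle E_2^2 \right\rangle + 2\left\langle \vec{E_1}\cdot\vec{E_2} \right\rangle$$

and since $\left\langle \cos(k r_1 - \omega t + \phi_1)\cos(k r_2 - \omega t + \phi_2) \right\rangle = \frac{1}{2}\cos(\delta)$, the last term becomes $2 \sqrt{I_1 I_2} \; (\vec{e_ 1}\cdot\vec{e_ 2}) \cos(\delta)$.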
In this expression we can make a few simplifications:
$\vec{e_ 1} \cdot \vec{e_ 2}=1$ because we consider the waves linearly polarized along the same direction.
$I_1 = I_2$ provided there is no filter at $S_1$ or $S_2$, so the two waves have the same amplitude.
$\phi_2 - \phi_1 = 0$ since the wavefront reaches $S_1$ and $S_2$ simultaneously. Note that if we placed, for example, a piece of transparent material in front of one of the two apertures, we would have an additional phase shift in one of the two waves and this difference would no longer be zero. This happens because one wave would travel through the material while the other travels through air.
The total irradiance then becomes
$$I_T = 2 I_1 \left( 1 + \cos(\delta) \; \right)$$
with $\delta = k (r_2 - r_1)$
As we can see, it is the path difference $\Delta = r_2 - r_1$ that determines the final irradiance at point P. Let us compute it.
|
from IPython.display import Image
Image(filename="ExperimentoYoung.jpg")
|
Experimento de Young/ExperimentoYoung.ipynb
|
ecabreragranado/OpticaFisicaII
|
gpl-3.0
|
According to the figure, we can write $\Delta = r_2 - r_1$ as $\Delta = a \sin(\theta)$, where $a$ is the separation between the slits. If this angle is small (which means that the distance between the sources and the observation screen is large compared with the separation between the sources), we can simplify this expression,
$$ \Delta = a \sin(\theta) \simeq a \tan(\theta) = a \frac{x}{D}$$
And therefore,
$$\delta = k \frac{a x }{D} = \frac{2 \pi a x}{\lambda D}$$
In these expressions, $x$ is the distance from the observation point P to the axis, while $D$ is the distance between the plane
containing the sources and the observation screen, where P lies. We can rewrite the total irradiance on the screen using the computed expression for the phase difference
$$I_T = 2 I_1 \left( 1 + \cos\left( \frac{2 \pi a x}{\lambda D} \right) \; \right)$$
Light distribution. The interference pattern
Now we are in a position to answer the question posed earlier: what does the irradiance distribution on the observation screen look like? We see that the phase difference depends on the height $x$ on the screen, so the irradiance changes as we move along that direction. In particular, the term driving this variation is cosinusoidal, $\cos( \frac{2 \pi a x}{\lambda D})$, so on the screen we will see a cosinusoidal distribution, with irradiance maxima when $\delta = 2 m \pi$, with $m = 0, \pm 1, \pm 2 \ldots$, and irradiance minima when $\delta = (2 m + 1) \pi$, with $m = 0, \pm 1, \pm 2 \ldots$. The positions $x$ corresponding to these conditions are:
Irradiance maxima. $\delta = 2 m \pi \implies \Delta = m \lambda \implies$
$$x^{max}_m = \frac{m \lambda D}{a}$$
Irradiance minima. $\delta = (2 m + 1)\pi \implies \Delta = \frac{(2m +1) \lambda}{2} \implies$
$$x^{min}_m = \frac{(2m + 1) \lambda D}{2a}$$
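As a quick numerical check of these expressions (the parameter values here are illustrative, not the ones used later in the notebook):

```python
wavelength = 500e-9   # m
D = 2.0               # m, distance to the observation screen
a = 1e-3              # m, slit separation

fringe_spacing = wavelength * D / a                      # distance between consecutive maxima
x_max = [m * wavelength * D / a for m in range(3)]       # maxima positions x_m
x_min = [(2 * m + 1) * wavelength * D / (2 * a) for m in range(3)]  # minima positions

print(fringe_spacing)  # 1e-3 m: fringes 1 mm apart
print(x_min[0])        # first minimum sits half a fringe above the central maximum
```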
Let us plot the irradiance distribution on the screen and a cut along the X axis (run the following code cell).
|
from matplotlib.pyplot import *
from numpy import *
%matplotlib inline
style.use('fivethirtyeight')
###################################################################################
# PARAMETERS. FEEL FREE TO CHANGE THEIR VALUES
###################################################################################
Lambda =400e-9 # wavelength of the radiation, in meters
D = 4.5 # distance between the plane containing the sources and the observation screen, in meters
a = 0.003 # separation between the sources, in meters
###################################################################################
interfranja=Lambda*D/a # compute the fringe spacing
k = 2.0*pi/Lambda
x = linspace(-5*interfranja,5*interfranja,500)
I1 = 1 # Irradiances normalized to some reference value.
I2 = 0.01
X,Y = meshgrid(x,x)
delta = k*a*X/D
Itotal = I1 + I2 + 2.0*sqrt(I1*I2)*cos(delta)
figure(figsize=(14,5))
subplot(121)
pcolormesh(x*1e3,x*1e3,Itotal,cmap = 'gray',vmin=0,vmax=4)
xlabel("x (mm)"); ylabel("y (mm)")
subplot(122)
plot(x*1e3,Itotal[x.shape[0]/2,:])
xlabel("x (mm)"); ylabel("Irradiancia total normalizada")
|
Experimento de Young/ExperimentoYoung.ipynb
|
ecabreragranado/OpticaFisicaII
|
gpl-3.0
|
As we can see, the maxima are evenly spaced (the same happens with the minima), the distance between two consecutive maxima being
$$ \text{Fringe spacing} = \frac{\lambda D}{a} $$
This quantity is known as the fringe spacing and tells us the characteristic size of the fringe pattern.
Besides their size, to observe the fringes clearly they must be well contrasted. To that end we define the contrast, or visibility, of the fringes
$$ C = \frac{I_T^{max}-I_T^{min}}{I_T^{max}+I_T^{min}}$$
which tells us how well separated the light maxima are from the minima.
The values of these two quantities for the case shown in the previous figure are printed by the following cell (run it after having run the previous code cell).
|
interfranja=Lambda*D/a # compute the fringe spacing
C = (Itotal.max() - Itotal.min())/(Itotal.max() + Itotal.min()) # compute the contrast
print "a=",a*1e3,"mm ","D=",D,"m ","Wavelength=",Lambda*1e9,"nm" # parameter values
print "Fringe spacing=",interfranja*1e3,"mm" # fringe spacing in mm
print 'Contrast=',C # contrast value
|
Experimento de Young/ExperimentoYoung.ipynb
|
ecabreragranado/OpticaFisicaII
|
gpl-3.0
|
Some Pretty Printing and Imports
(not the "real" work yet)
|
import base64
import numpy as np
import pprint
import os
import tensorflow
from graphviz import Source
import tensorflow as tf
from IPython.display import Image
from IPython.lib import pretty
import struct2tensor as s2t
from struct2tensor.test import test_pb2
from google.protobuf import text_format
def _display(graph):
"""Renders a graphviz digraph."""
s = Source(graph)
s.format='svg'
return s
def _create_query_from_text_sessions(text_sessions):
"""Creates a struct2tensor query from a list of pbtxt of struct2tensor.test.Session."""
sessions = tf.constant([
text_format.Merge(
text_session,
test_pb2.Session()
).SerializeToString()
for text_session in text_sessions
])
return s2t.create_expression_from_proto(
sessions, test_pb2.Session.DESCRIPTOR)
def _prensor_pretty_printer(prensor, p, cycle):
"""Pretty printing function for struct2tensor.prensor.Prensor"""
pretty.pprint(prensor.get_sparse_tensors())
def _sp_pretty_printer(sp, p, cycle):
"""Pretty printing function for SparseTensor."""
del cycle
p.begin_group(4, "SparseTensor(")
p.text("values={}, ".format(sp.values.numpy().tolist()))
p.text("dense_shape={}, ".format(sp.dense_shape.numpy().tolist()))
p.break_()
p.text("indices={}".format(sp.indices.numpy().tolist()))
p.end_group(4, ")")
pretty.for_type(tf.SparseTensor, _sp_pretty_printer)
pretty.for_type(s2t.Prensor, _prensor_pretty_printer)
_pretty_print = pretty.pprint
print("type-specific pretty printing ready to go")
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
The real work:
A function that parses our structured data (protobuffers) into tensors:
|
@tf.function(input_signature=[tf.TensorSpec(shape=(None), dtype=tf.string)], autograph=False)
def parse_session(serialized_sessions):
"""A TF function parsing a batch of serialized Session protos into tensors.
It is a TF graph that takes one 1-D tensor as input, and outputs a
Dict[str, tf.SparseTensor]
"""
query = s2t.create_expression_from_proto(
serialized_sessions, test_pb2.Session.DESCRIPTOR)
# Move all the fields of our interest to under "event".
query = query.promote_and_broadcast({
"session_feature": "session_info.session_feature",
"action_number_of_views": "event.action.number_of_views" },
"event")
# Specify "event" to be examples.
query = query.reroot("event")
# Extract all the fields of our interest.
projection = query.project(["session_feature", "query", "action_number_of_views"])
prensors = s2t.calculate_prensors([projection])
output_sparse_tensors = {}
for prensor in prensors:
path_to_tensor = prensor.get_sparse_tensors()
output_sparse_tensors.update({str(k): v for k, v in path_to_tensor.items()})
return output_sparse_tensors
print("Defined the workhorse func: (structured data at rest) -> (tensors)")
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
Let's see it in action:
|
serialized_sessions = tf.constant([
text_format.Merge(
"""
session_info {
session_duration_sec: 1.0
session_feature: "foo"
}
event {
query: "Hello"
action {
number_of_views: 1
}
action {
}
}
event {
query: "world"
action {
number_of_views: 2
}
action {
number_of_views: 3
}
}
""",
test_pb2.Session()
).SerializeToString()
])
_pretty_print(parse_session(serialized_sessions))
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
See how we went from our pre-pipeline data (the Protobuffer) all the way to the structured data, packed into SparseTensors?
Digging Far Deeper
Interested and want to learn more? Read on...
Let's define several terms we mentioned before:
Prensor
A Prensor (protobuffer + tensor) is a data structure storing the data we work on. We use protobuffers a lot at Google. struct2tensor can support other structured formats, too.
For example, throughout this colab we will be using proto
struct2tensor.test.Session. A schematic visualization
of a selected part of the prensor from that proto looks like:
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display("""
digraph {
root -> session [label="*"];
session -> event [label="*"];
session -> session_id [label="?"];
event -> action [label="*"];
event -> query_token [label="*"]
action -> number_of_views [label="?"];
}
""")
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
We will be using visualizations like this to demonstrate struct2tensor queries later.
Note:
The "*" on the edge means the pointed node has repeated values; while the "?" means it has an optional value.
There is always a "root" node whose only child is the root of the structure. Note that it's "repeated" because one struct2tensorTree can represent multiple instances of a structure.
struct2tensor Query
A struct2tensor query transforms a Prensor into another Prensor.
For example, broadcast is a query that replicates a node as a child of one of its siblings.
Applying
broadcast(
source_path="session.session_id",
sibling="event",
new_field_name="session_session_id")
on the previous tree gives:
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display("""
digraph {
session_session_id [color="red"];
root -> session [label="*"];
session -> event [label="*"];
session -> session_id [label="?"];
event -> action [label="*"];
event -> session_session_id [label="?"];
event -> query_token [label="*"];
action -> number_of_views [label="?"];
}
""")
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
We will talk about common struct2tensor queries in later sections.
Projection
A projection of paths in a Prensor produces another Prensor with just the selected paths.
Logical representation of a projection
The structure of the projected path can be represented losslessly as nested lists. For example, the projection of event.action.number_of_views from the struct2tensorTree formed by the following two instances of struct2tensor.test.Session:
{
event { action { number_of_views: 1} action { number_of_views: 2} action {} }
event {}
}, {
event { action { number_of_views: 3} }
}
is:
[ # the outer list has two elements b/c there are two Session protos.
[ # the first proto has two events
[[1],[2],[]], # 3 actions, the last one does not have a number_of_views.
[], # the second event does not have action
],
[ # the second proto has one event
[[3]],
],
]
Representing nested lists with tf.SparseTensor
struct2tensor uses tf.SparseTensor to represent the above nested list in the projection results. Note that tf.SparseTensor essentially enforces that lists nested at the same level have the same length (because there is a single size for each dimension), therefore this representation is lossy. The above nested lists, written as a SparseTensor, look like:
tf.SparseTensor(
dense_shape=[2, 2, 3, 1], # each is the maximum length of lists at the same nesting level.
values = [1, 2, 3],
indices = [[0, 0, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]]
)
Note that the last dimension is useless: the index of that dimension will always be 0 for any present value because number_of_views is an optional field. So the struct2tensor library will actually "squeeze" all the optional dimensions.
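The unsqueezed encoding described above can be reproduced with a small plain-Python helper (`to_sparse` is a hypothetical sketch, not part of struct2tensor):

```python
def to_sparse(nested):
    """Flatten a nested-list structure into (values, indices, dense_shape),
    mirroring how tf.SparseTensor would encode it before squeezing."""
    values, indices = [], []

    def walk(node, prefix):
        if isinstance(node, list):
            for i, child in enumerate(node):
                walk(child, prefix + [i])
        else:
            values.append(node)
            indices.append(prefix)

    walk(nested, [])
    rank = max(len(ix) for ix in indices)
    dense_shape = [0] * rank

    def shapes(node, depth):
        # Each dimension's size is the longest list at that nesting level.
        if isinstance(node, list):
            dense_shape[depth] = max(dense_shape[depth], len(node))
            for child in node:
                shapes(child, depth + 1)

    shapes(nested, 0)
    return values, indices, dense_shape

# The event.action.number_of_views nested lists from the example above.
nested = [[[[1], [2], []], []], [[[3]]]]
print(to_sparse(nested))
# → ([1, 2, 3], [[0, 0, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]], [2, 2, 3, 1])
```

This reproduces the values, indices, and dense_shape of the SparseTensor shown above.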
The actual result would be:
|
query = _create_query_from_text_sessions(['''
event { action { number_of_views: 1} action { number_of_views: 2} action {} }
event {}
''', '''
event { action { number_of_views: 3} }
''']
).project(["event.action.number_of_views"])
prensor = s2t.calculate_prensors([query])
pretty.pprint(prensor)
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
struct2tensor's internal data model is closer to the above "nested lists" abstraction and sometimes it's easier to reason with "nested lists" than with SparseTensors.
Recently, tf.RaggedTensor was introduced to represent nested lists exactly. We are working on adding support for projecting into ragged tensors.
Common struct2tensor Queries
promote
Promotes a node to become a sibling of its parent. If the node is repeated, then all its values are concatenated (the order is preserved).
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display('''
digraph {
root -> session [label="*"];
session -> event [label="*"];
event -> query_token [label="*"];
}
''')
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
promote(source_path="event.query_token", new_field_name="event_query_token")
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display('''
digraph {
event_query_token [color="red"];
root -> session [label="*"];
session -> event [label="*"];
session -> event_query_token [label="*"];
event -> query_token [label="*"];
}
''')
query = (_create_query_from_text_sessions([
"""
event {
query_token: "abc"
query_token: "def"
}
event {
query_token: "ghi"
}
"""])
.promote(source_path="event.query_token", new_field_name="event_query_token")
.project(["event_query_token"]))
prensor = s2t.calculate_prensors([query])
_pretty_print(prensor)
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
The projected structure is like:
{
# this is under Session.
event_query_token: "abc"
event_query_token: "def"
event_query_token: "ghi"
}
broadcast
Broadcasts the value of a node to one of its siblings. The value is replicated if the sibling is repeated. This is similar to TensorFlow's and NumPy's broadcasting semantics.
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display('''
digraph {
root -> session [label="*"];
session -> session_id [label="?"];
session -> event [label="*"];
}
''')
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
broadcast(source_path="session_id", sibling_field="event", new_field_name="session_session_id")
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display('''
digraph {
session_session_id [color="red"];
root -> session [label="*"];
session -> session_id [label="?"];
session -> event [label="*"];
event -> session_session_id [label="?"];
}
''')
query = (_create_query_from_text_sessions([
"""
session_id: 8
event { }
event { }
"""])
.broadcast(source_path="session_id",
sibling_field="event",
new_field_name="session_session_id")
.project(["event.session_session_id"]))
prensor = s2t.calculate_prensors([query])
_pretty_print(prensor)
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
The projected structure is like:
{
event {
session_session_id: 8
}
event {
session_session_id: 8
}
}
promote_and_broadcast
The query accepts multiple source fields and a destination field. For each source field, it first promotes it to the least common ancestor with the destination field (if necessary), then broadcasts it to the destination field (if necessary).
Usually for the purpose of machine learning, this gives a reasonable flattened representation of nested structures.
promote_and_broadcast(
path_dictionary={
'session_info_duration_sec': 'session_info.session_duration_sec'},
dest_path_parent='event.action')
is equivalent to:
```
promote(source_path='session_info.session_duration_sec',
new_field_name='anonymous_field1')
broadcast(source_path='anonymous_field1',
sibling_field='event.action',
new_field_name='session_info_duration_sec')
```
map_field_values
Creates a new node that is a sibling of a leaf node. The values of the new node are results of applying the given function to the values of the source node.
Note that the function provided takes a 1-D tensor containing all the values of the source node as input, should output a 1-D tensor of the same size, and should build TF ops.
|
query = (_create_query_from_text_sessions([
"""
session_id: 8
""",
"""
session_id: 9
"""])
.map_field_values("session_id", lambda x: tf.add(x, 1), dtype=tf.int64,
new_field_name="session_id_plus_one")
.project(["session_id_plus_one"]))
prensor = s2t.calculate_prensors([query])
_pretty_print(prensor)
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
reroot
Makes the given node the new root of the struct2tensorTree. This has two effects:
restricts the scope of the struct2tensorTree
The field paths in all the following queries are relative to the new root
There's no way to refer to nodes that are outside the subtree rooted at the new root.
changes the batch dimension.
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display('''
digraph {
root -> session [label="*"];
session -> session_id [label="?"];
session -> event [label="*"];
event -> event_id [label="?"];
}
''')
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
reroot("event")
|
#@title { display-mode: "form" }
#@test {"skip": true}
_display('''
digraph {
root -> event [label="*"];
event -> event_id [label="?"];
}
''')
#@title { display-mode: "form" }
text_protos = ["""
session_id: 1
event {
event_id: "a"
}
event {
event_id: "b"
}
""",
"""
session_id: 2
""",
"""
session_id: 3
event {
event_id: "c"
}
"""
]
print("""Assume the following Sessions: """)
print([text_format.Merge(p, s2t.test.test_pb2.Session()) for p in text_protos])
print("\n")
reroot_example_query = _create_query_from_text_sessions(text_protos)
print("""project(["event.event_id"]) before reroot() (the batch dimension is the index to sessions):""")
_pretty_print(s2t.calculate_prensors([reroot_example_query.project(["event.event_id"])]))
print("\n")
print("""project(["event_id"]) after reroot() (the batch dimension becomes the index to events):""")
_pretty_print(s2t.calculate_prensors([reroot_example_query.reroot("event").project(["event_id"])]))
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
Proto Map
You can specify a key for the proto map field in a path via brackets.
Given the following tf.Example:
features {
feature {
key: "my_feature"
value {
float_list {
value: 1.0
}
}
}
feature {
key: "other_feature"
value {
bytes_list {
value: "my_val"
}
}
}
}
To get the values of my_feature and other_feature, we can promote_and_broadcast and project the following paths: features.feature[my_feature].float_list.value and features.feature[other_feature].bytes_list.value
This results in the following dict of ragged tensors:
{
features.my_new_feature: <tf.RaggedTensor [[[1.0]]]>,
features.other_new_feature: <tf.RaggedTensor [[[b'my_val']]]>
}
Note: we renamed my_feature to my_new_feature in the promote_and_broadcast (and similarly for other_feature).
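The bracketed-key path syntax can be read as a sequence of lookup steps. A hypothetical pure-Python walker over an equivalent nested dict (illustration only, not how struct2tensor resolves paths):

```python
import re

def walk_path(record, path):
    # Split "features.feature[my_feature].float_list.value" into steps,
    # treating "name[key]" as a dict-style map lookup.
    for step in path.split("."):
        m = re.fullmatch(r"(\w+)\[(\w+)\]", step)
        if m:
            record = record[m.group(1)][m.group(2)]
        else:
            record = record[step]
    return record

example = {
    "features": {
        "feature": {
            "my_feature": {"float_list": {"value": [1.0]}},
            "other_feature": {"bytes_list": {"value": [b"my_val"]}},
        }
    }
}

vals = walk_path(example, "features.feature[my_feature].float_list.value")
```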
|
tf_example = text_format.Parse("""
features {
feature {
key: "my_feature"
value {
float_list {
value: 1.0
}
}
}
feature {
key: "other_feature"
value {
bytes_list {
value: "my_val"
}
}
}
}
""", tf.train.Example())
query = s2t.create_expression_from_proto(
tf_example.SerializeToString(), tf.train.Example.DESCRIPTOR)
query = query.promote_and_broadcast(
    {'my_new_feature': "features.feature[my_feature].float_list.value",
     "other_new_feature": "features.feature[other_feature].bytes_list.value"},
    "features")
query = query.project(["features.my_new_feature", "features.other_new_feature"])
[prensor] = s2t.calculate_prensors([query])
ragged_tensors = prensor.get_ragged_tensors()
print(ragged_tensors)
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
Apache Parquet Support
struct2tensor offers an Apache Parquet tf.data.Dataset that reads from a Parquet file and applies queries to manipulate the structure of the data.
The dataset reads only the Parquet columns that are required, which reduces I/O cost when only a few columns are needed.
Preparation
Please run the code cell at Some Pretty Printing and Imports to ensure that all required modules are imported, and that pretty print works properly.
Prepare the input data
|
# Download our sample data file from the struct2tensor repository. The description of the data is below.
#@test {"skip": true}
!curl -o dremel_example.parquet 'https://raw.githubusercontent.com/google/struct2tensor/master/struct2tensor/testdata/parquet_testdata/dremel_example.parquet'
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
Example
We will use a sample Parquet data file (dremel_example.parquet), which contains data based on the example used in this paper: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36632.pdf
The file dremel_example.parquet has the following schema:
message Document {
required int64 DocId;
optional group Links {
repeated int64 Backward;
repeated int64 Forward; }
repeated group Name {
repeated group Language {
required string Code;
optional string Country; }
optional string Url; }}
and contains the following data:
Document
DocId: 10
Links
Forward: 20
Forward: 40
Forward: 60
Name
Language
Code: 'en-us'
Country: 'us'
Language
Code: 'en'
Url: 'http://A'
Name
Url: 'http://B'
Name
Language
Code: 'en-gb'
Country: 'gb'
Document
DocId: 20
Links
Backward: 10
Backward: 30
Forward: 80
Name
Url: 'http://C'
In this example, we will promote and broadcast the field Links.Forward and project it.
batch_size is the number of records (Document) per prensor. This works with optional and repeated fields, and each record is batched whole.
Feel free to try batch_size = 2 in the code below. (Note: this Parquet file contains only 2 records (Document) in total.)
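The batching behaviour (whole records per emitted prensor, with a possibly smaller final batch) can be sketched in plain Python; this illustrates the semantics only, not the Parquet dataset implementation.

```python
def batch_records(records, batch_size):
    # Group whole records per batch; a record is never split across batches.
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

documents = [{"DocId": 10}, {"DocId": 20}]
batches_of_one = list(batch_records(documents, 1))
batches_of_two = list(batch_records(documents, 2))
```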
|
#@test {"skip": true}
from struct2tensor import expression_impl
filenames = ["dremel_example.parquet"]
batch_size = 1
exp = s2t.expression_impl.parquet.create_expression_from_parquet_file(filenames)
new_exp = exp.promote_and_broadcast({"new_field": "Links.Forward"}, "Name")
proj_exp = new_exp.project(["Name.new_field"])
proj_exp_needed = exp.project(["Name.Url"])
# Please note that currently, proj_exp_needed needs to be passed into calculate.
# This is due to the way data is stored in parquet (values and repetition &
# definition levels). To construct the node for "Name", we need to read the
# values of a column containing "Name".
pqds = s2t.expression_impl.parquet.calculate_parquet_values([proj_exp, proj_exp_needed], exp,
filenames, batch_size)
for prensors in pqds:
    new_field_prensor = prensors[0]
    print("============================")
    print("Schema of new_field prensor: ")
    print(new_field_prensor)
    print("\nSparse tensor representation: ")
    pretty.pprint(new_field_prensor)
    print("============================")
|
examples/prensor_playground.ipynb
|
google/struct2tensor
|
apache-2.0
|
Ok you got me, the plot function still generates a line by default... but we can turn it off
|
### initialize the figure
fig, ax = pyplot.subplots()
points_plot = ax.plot(xdata, ydata, ls='', marker='o')
|
classes/12_matplotlib/2_points_and_errorbars.ipynb
|
theJollySin/python_for_scientists
|
gpl-3.0
|
Markersize
|
### initialize the figure
fig, ax = pyplot.subplots()
points_plot = ax.plot(xdata, ydata, ls='', marker='o', ms=15)
|
classes/12_matplotlib/2_points_and_errorbars.ipynb
|
theJollySin/python_for_scientists
|
gpl-3.0
|
Symbol
|
### initialize the figure
fig, ax = pyplot.subplots()
points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='o')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='s')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='D')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='^')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='>')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='<')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='v')
|
classes/12_matplotlib/2_points_and_errorbars.ipynb
|
theJollySin/python_for_scientists
|
gpl-3.0
|
Errorbars
|
### generate some random data
xdata2 = numpy.arange(15)
ydata2 = numpy.random.randn(15)
yerrors = numpy.random.randn(15)
### initialize the figure
fig, ax = pyplot.subplots()
ax.errorbar(xdata2, ydata2, yerr=yerrors)
### initialize the figure
fig, ax = pyplot.subplots()
eb = ax.errorbar(xdata2, ydata2, yerr=yerrors, ls='', # no lines connecting points
marker='*', # star-shaped plot symbols
ms=20, # marker size
mfc='r', # marker face color
mew=2, # marker edge width
mec='k', # marker edge color
elinewidth=2, # error line width
ecolor='gray', # error color
capsize=6) # error hat size
### also try mfc="none"
pyplot.errorbar?
|
classes/12_matplotlib/2_points_and_errorbars.ipynb
|
theJollySin/python_for_scientists
|
gpl-3.0
|
We will create a model with a listric fault from scratch. In addition to the previous parameters for creating a fault (see notebook 4-Create-model), we now change the fault "geometry" to "Curved" and add parameters defining the amplitude and radius of influence:
|
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3', 'layer 4', 'layer 5', 'layer 6', 'layer 7', 'layer 8'],
'layer_thickness' : [1000, 500, 500, 500, 500, 500, 1000, 2000]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (3000, 0, 4000),
'dip_dir' : 90,
'dip' : 30,
'slip' : 1000,
'amplitude' : 1000.,
'radius' : 2000,
'geometry' : 'Curved',
'xaxis': 5000.,
'yaxis': 5000.0,
'zaxis' : 39999.0}
nm.add_event('fault', fault_options)
nm.change_cube_size(50)
|
docs/notebooks/10-Fault-Shapes.ipynb
|
Leguark/pynoddy
|
gpl-2.0
|
With these settings, we obtain an example of a listric fault in Noddy:
|
history = "listric_example.his"
output_name = "listric_out"
nm.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title = "",
savefig = False, fig_filename = "ex01_fault_listric.eps")
|
docs/notebooks/10-Fault-Shapes.ipynb
|
Leguark/pynoddy
|
gpl-2.0
|
As you can see, the resulting topography is very different from the case with continuous uplift.
For our final example, we'll use NormalFault with a more complicated model in which we have both a soil layer and bedrock. In order to move, material must convert from bedrock to soil by weathering.
First we import remaining modules and set some parameter values
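The weathering rule behind ExponentialWeatherer can be sketched numerically; assuming the soil production rate decays exponentially with soil depth, rate = P0 * exp(-H / H_star), with the parameter values used below:

```python
import math

P0 = 0.001    # maximum soil production rate for bare bedrock, m/yr
H_star = 0.7  # characteristic weathering depth, m

def soil_production_rate(soil_depth_m):
    # Production is fastest on bare bedrock and decays with soil thickness.
    return P0 * math.exp(-soil_depth_m / H_star)

rate_bare = soil_production_rate(0.0)
rate_buried = soil_production_rate(2.0)
```

Under this rule a thick soil mantle shields the bedrock, so little new soil is produced until erosion thins the cover.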
|
from landlab.components import DepthDependentDiffuser, ExponentialWeatherer
# here are the parameters to change
K = 0.0005 # stream power coefficient, bigger = streams erode more quickly
U = 0.0001 # uplift rate in meters per year
max_soil_production_rate = 0.001  # Maximum weathering rate for bare bedrock in meters per year
soil_production_decay_depth = 0.7  # Characteristic weathering depth in meters
linear_diffusivity = 0.01  # Hillslope diffusivity in m2 per year
soil_transport_decay_depth = 0.5 # Characteristic soil transport depth in meters
dt = 100 # time step in years
dx = 10 # space step in meters
nr = 60 # number of model rows
nc = 100 # number of model columns
?ExponentialWeatherer
|
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
|
cmshobe/landlab
|
mit
|
Manual iteration through test images to generate convolutional test features. Saves each batch to disk instead of loading it all in memory.
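The pattern (compute one batch, append it to an on-disk store, never hold the full array) can be sketched generically; predict_batch and the list-backed store below are stand-ins for the model and the bcolz carray.

```python
def predict_batch(batch):
    # Stand-in for conv_model.predict_on_batch.
    return [x * 2 for x in batch]

def batches(data, size):
    # Yield fixed-size chunks; the last chunk may be smaller.
    for i in range(0, len(data), size):
        yield data[i:i + size]

store = []  # stand-in for a disk-backed bcolz carray
for batch in batches(list(range(10)), size=4):
    store.extend(predict_batch(batch))  # carray.append(...) in the original
```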
|
# conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
|
FAI_old/conv_test_Asus.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Note: the features appended below must be conv_test_feat (the batch just computed), not conv_feat.
|
fname = path + 'results/conv_test_feat.dat'
%rm -r $fname
for i in xrange(test_batches.n // batch_size + 1):
    conv_test_feat = conv_model.predict_on_batch(test_batches.next()[0])
    if not i:
        c = bcolz.carray(conv_test_feat, rootdir=path + '/results/conv_test_feat.dat', mode='a')
    else:
        c.append(conv_test_feat)
c.flush()
|
FAI_old/conv_test_Asus.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Question: Why does it look like I can have the entire conv_test_feat array open at once, when opened w/ bcolz; but when it's explicitly loaded as a Numpy array via bcolz.open(fname)[:], all of a sudden the RAM takes a severe memory hit?
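One likely explanation: the object returned by bcolz.open is a lazy, chunk-compressed view that decompresses only the slices you ask for, while [:] materialises everything as one in-memory NumPy array. A stand-in sketch of the two access patterns (LazyStore is hypothetical):

```python
class LazyStore:
    # Stand-in for an on-disk carray: slicing touches only the requested
    # range; reading [:] would touch every element at once.
    def __init__(self, n):
        self.n = n
        self.elements_read = 0

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        lo, hi, _ = idx.indices(self.n)
        self.elements_read += hi - lo
        return list(range(lo, hi))

store = LazyStore(1000000)
chunk = store[0:4096]   # cheap: reads one slice's worth of elements
# full = store[:]       # would read all 1,000,000 elements into memory
```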
|
# apparently you can just open a (massive) bcolz carray this way
# without crashing memory... okay I'm learning things
# carr = bcolz.open(fname)
# forgot to add the '+1' so missed the last 14 images. Doing that here:
# NOTE: below code only adds on the missed batch
# iterate generator until final missed batch, then work:
fname = path + 'results/conv_test_feat.dat'
test_batches.reset()
iters = test_batches.n // batch_size
for i in xrange(iters): test_batches.next()
conv_test_feat = conv_model.predict_on_batch(test_batches.next()[0])
# c = bcolz.carray(conv_test_feat, rootdir=fname, mode='a')
c = bcolz.open(fname)
c.append(conv_test_feat)
c.flush()
|
FAI_old/conv_test_Asus.ipynb
|
WNoxchi/Kaukasos
|
mit
|
As expected (& which motivated this) the full set of convolutional test features does not fit at once in memory.
|
fname = path + 'results/conv_test_feat.dat'
x = bcolz.open(fname)
len(x)
|
FAI_old/conv_test_Asus.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Loading train/valid features; defining & fitting NN model
|
# conv_train_feat_batches = get_batches(path + '/results/conv_feat.dat')
# conv_valid_feat_batches = get_batches(path + '/results/conv_val_feat.dat')
conv_trn_feat = load_array(path + '/results/conv_feat.dat')
conv_val_feat = load_array(path + '/results/conv_val_feat.dat')
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
p = 0.8
bn_model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
])
bn_model.compile(Adam(lr=1e-3), loss='categorical_crossentropy', metrics=['accuracy'])
# Sequential.fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose=1, callbacks=None, validation_data=None, nb_val_samples=None, class_weight=None, max_q_size=10, nb_worker=1, pickle_safe=False, initial_epoch=0, **kwargs)
# bn_model.fit_generator((conv_train_feat_batches, trn_labels), conv_train_feat_batches.nb_sample, nb_epoch=1,
# validation_data=(conv_valid_feat_batches, val_labels), nb_val_samples=conv_valid_feat_batches.nb_sample)
bn_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data = (conv_val_feat, val_labels))
bn_model.optimizer.lr=1e-2
bn_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data = (conv_val_feat, val_labels))
# bn_model.save_weights(path + 'models/da_conv8.h5')
bn_model.load_weights(path + 'models/da_conv8.h5')
# conv_test_feat_batches = bcolz.iterblocks(path + fname)
fname = path + 'results/conv_test_feat.dat'
idx, inc = 0, 4096
preds = []
while idx < test_batches.n - inc:
    conv_test_feat = bcolz.open(fname)[idx:idx+inc]
    idx += inc
    if len(preds):
        next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
        preds = np.concatenate([preds, next_preds])
    else:
        preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
conv_test_feat = bcolz.open(fname)[idx:]
next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
preds = np.concatenate([preds, next_preds])
print(len(preds))
if len(preds) != len(bcolz.open(fname)):
    print("Ya done fucked up, son.")
|
FAI_old/conv_test_Asus.ipynb
|
WNoxchi/Kaukasos
|
mit
|