Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you may face while tidying your data.

The general takeaways here should be:

* Make sure your data is encoded properly
* Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that expected range
* Deal with missing data in one way or another: replace it if you can or drop it
* Never tidy your data manually because that is not easily reproducible
* Use code as a record of how you tidied your data
* Plot everything you can about the data at this stage of the analysis so you can *visually* confirm everything looks correct

## Bonus: Testing our data

[[ go back to the top ]](Table-of-contents)

At SciPy 2015, I was exposed to a great idea: We should test our data. Just as we use unit tests to verify our expectations from code, we can similarly set up unit tests to verify our expectations about a data set.

We can quickly test our data using `assert` statements: We assert that something must be true, and if it is, then nothing happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and brings the problem to our attention. For example,

```python
assert 1 == 2
```

will raise an `AssertionError` and stop execution of the notebook because the assertion failed.

Let's test a few things that we know about our data set now.
```python
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3

# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5

# We know that our cleaned data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
                               (iris_data_clean['sepal_width_cm'].isnull()) |
                               (iris_data_clean['petal_length_cm'].isnull()) |
                               (iris_data_clean['petal_width_cm'].isnull())]) == 0

# The same missing-measurement check against the original `iris_data`
assert len(iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
                         (iris_data['sepal_width_cm'].isnull()) |
                         (iris_data['petal_length_cm'].isnull()) |
                         (iris_data['petal_width_cm'].isnull())]) == 0
```
MIT
Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
ValRCS/RCS_Data_Analysis_Python_2019_July
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying stage.

**Data Cleanup & Wrangling: > 80% of time spent in Data Science**

## Step 4: Exploratory analysis

[[ go back to the top ]](Table-of-contents)

Now after spending entirely too much time tidying our data, we can start analyzing it!

Exploratory analysis is the step where we start delving deeper into the data set beyond the outliers and errors. We'll be looking to answer questions such as:

* How is my data distributed?
* Are there any correlations in my data?
* Are there any confounding factors that explain these correlations?

This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty — these charts are for internal use.

Let's return to that scatterplot matrix that we used earlier.
```python
sb.pairplot(iris_data_clean);
```
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that assume the data is normally distributed.

There's something strange going on with the petal measurements. Maybe it's something to do with the different `Iris` types. Let's color-code the data by class again to see if that clears things up.
```python
sb.pairplot(iris_data_clean, hue='class');
```
Sure enough, the strange distribution of the petal measurements exists because of the different species. This is actually great news for our classification task since it means that the petal measurements will make it easy to distinguish between `Iris-setosa` and the other `Iris` types.

Distinguishing `Iris-versicolor` and `Iris-virginica` will prove more difficult given how much their measurements overlap.

There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.

We can also make [**violin plots**](https://en.wikipedia.org/wiki/Violin_plot) of the data to compare the measurement distributions of the classes. Violin plots contain the same information as [box plots](https://en.wikipedia.org/wiki/Box_plot), but also scale the box according to the density of the data.
```python
plt.figure(figsize=(10, 10))

for column_index, column in enumerate(iris_data_clean.columns):
    if column == 'class':
        continue
    plt.subplot(2, 2, column_index + 1)
    sb.violinplot(x='class', y=column, data=iris_data_clean)
```
Enough flirting with the data. Let's get to modeling.

## Step 5: Classification

[[ go back to the top ]](Table-of-contents)

Wow, all this work and we *still* haven't modeled the data!

As tiresome as it can be, tidying and exploring our data is a vital component of any data analysis. If we had jumped straight to the modeling step, we would have created a faulty classification model.

Remember: **Bad data leads to bad models.** Always check your data first.

Assured that our data is now as clean as we can make it — and armed with some cursory knowledge of the distributions and relationships in our data set — it's time to make the next big step in our analysis: Splitting the data into training and testing sets.

A **training set** is a random subset of the data that we use to train our models.

A **testing set** is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforeseen data.

Especially in sparse data sets like ours, it's easy for models to **overfit** the data: The model will learn the training set so well that it won't be able to handle most of the cases it's never seen before. This is why it's important for us to build the model with the training set, but score it with the testing set.

Note that once we split the data into training and testing sets, we should treat the testing set like it no longer exists: We cannot use any information from the testing set to build our model or else we're cheating.

Let's set up our data first.
```python
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')

# We're using all four measurements as inputs
# Note that scikit-learn expects each entry to be a list of values, e.g.,
# [ [val1, val2, val3],
#   [val1, val2, val3],
#   ... ]
# such that our input data set is represented as a list of lists

# We can extract the data in this format from pandas like this:
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
                              'petal_length_cm', 'petal_width_cm']].values

# Similarly, we can extract the class labels
all_labels = iris_data_clean['class'].values

# Make sure that you don't mix up the order of the entries:
# all_inputs[5] should correspond to the class in all_labels[5]

# Here's what a subset of our inputs looks like:
all_inputs[:5]
all_labels[:5]
type(all_inputs)
type(all_labels)
```
Now our data is ready to be split.
```python
from sklearn.model_selection import train_test_split

all_inputs[:3]
iris_data_clean.head(3)
all_labels[:3]

# Here we split our data into training and testing sets
(training_inputs,
 testing_inputs,
 training_classes,
 testing_classes) = train_test_split(all_inputs, all_labels,
                                     test_size=0.25, random_state=1)

training_inputs[:5]
testing_inputs[:5]
testing_classes[:5]
training_classes[:5]
```
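Under the hood, `train_test_split` essentially shuffles row indices and slices the shuffled list, keeping each input paired with its label. A minimal standard-library sketch of the same idea (the `simple_split` helper is hypothetical, for illustration only):

```python
import random

def simple_split(inputs, labels, test_size=0.25, seed=1):
    """Shuffle row indices, then slice -- inputs and labels stay paired."""
    indices = list(range(len(inputs)))
    random.Random(seed).shuffle(indices)
    cut = int(len(indices) * (1 - test_size))
    train_idx, test_idx = indices[:cut], indices[cut:]
    return ([inputs[i] for i in train_idx], [inputs[i] for i in test_idx],
            [labels[i] for i in train_idx], [labels[i] for i in test_idx])

X = [[i] for i in range(8)]
y = [i % 2 for i in range(8)]
train_X, test_X, train_y, test_y = simple_split(X, y)
print(len(train_X), len(test_X))  # 6 2
```

Note the two index slices are disjoint by construction, which is exactly the "mutually exclusive" property we want from the training and testing sets.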
With our data split, we can start fitting models to our data. Our company's Head of Data is all about decision tree classifiers, so let's start with one of those.

Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data — each time getting closer to finding out the class of each entry — until they either classify the data set perfectly or simply can't differentiate a set of entries. Think of it like a game of [Twenty Questions](https://en.wikipedia.org/wiki/Twenty_Questions), except the computer is *much*, *much* better at it.

Here's an example decision tree classifier:

Notice how the classifier asks Yes/No questions about the data — whether a certain feature is <= 1.75, for example — so it can differentiate the records. This is the essence of every decision tree.

The nice part about decision tree classifiers is that they are **scale-invariant**, i.e., the scale of the features does not affect their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1 or 0 to 1,000; decision tree classifiers will work with them just the same.

There are several [parameters](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) that we can tune for decision tree classifiers, but for now let's use a basic decision tree classifier.
```python
from sklearn.tree import DecisionTreeClassifier

# Create the classifier
decision_tree_classifier = DecisionTreeClassifier()

# Train the classifier on the training set
decision_tree_classifier.fit(training_inputs, training_classes)

# Validate the classifier on the testing set using classification accuracy
decision_tree_classifier.score(testing_inputs, testing_classes)

# Sanity checks on the testing set size
150 * 0.25
len(testing_inputs)
37 / 38

# For comparison, a support vector machine classifier
from sklearn import svm

svm_classifier = svm.SVC(gamma='scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)
```
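The scale-invariance claim above is easy to verify: multiplying every feature by a constant rescales the split thresholds but leaves the tree structure, and therefore the predictions, unchanged. A quick sketch (this uses scikit-learn's bundled iris data rather than our cleaned CSV, purely for self-containment):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X, y = iris.data, iris.target

# Same random_state so any tie-breaking is resolved identically
tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(X * 1000, y)

# The partitions the tree considers are identical under scaling,
# so the fitted trees make the same predictions
print((tree_raw.predict(X) == tree_scaled.predict(X * 1000)).all())
```

Contrast this with distance-based models such as k-nearest neighbors or SVMs, where rescaling one feature can change the results dramatically.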
Heck yeah! Our model achieves 97% classification accuracy without much effort.

However, there's a catch: Depending on how our training and testing sets were sampled, our model can achieve anywhere from 80% to 100% accuracy:
```python
import matplotlib.pyplot as plt

# Here we randomly split the data into different training and testing sets 1000 times
model_accuracies = []

for repetition in range(1000):
    (training_inputs,
     testing_inputs,
     training_classes,
     testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)

    decision_tree_classifier = DecisionTreeClassifier()
    decision_tree_classifier.fit(training_inputs, training_classes)
    classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
    model_accuracies.append(classifier_accuracy)

plt.hist(model_accuracies);

100 / 38
```
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This phenomenon is known as **overfitting**: The model is learning to classify the training set so well that it doesn't generalize and perform well on data it hasn't seen before.

### Cross-validation

[[ go back to the top ]](Table-of-contents)

This problem is the main reason that most data scientists perform ***k*-fold cross-validation** on their models: Split the original data set into *k* subsets, use one of the subsets as the testing set, and the rest of the subsets are used as the training set. This process is then repeated *k* times such that each subset is used as the testing set exactly once.

10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data set looks something like this:

(each square is an entry in our data set)
```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def plot_cv(cv, features, labels):
    masks = []
    for train, test in cv.split(features, labels):
        mask = np.zeros(len(labels), dtype=bool)
        mask[test] = 1
        masks.append(mask)

    plt.figure(figsize=(15, 15))
    plt.imshow(masks, interpolation='none', cmap='gray_r')
    plt.ylabel('Fold')
    plt.xlabel('Row #')

plot_cv(StratifiedKFold(n_splits=10), all_inputs, all_labels)
```
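The fold bookkeeping itself is simple enough to sketch with the standard library: split the row indices into *k* contiguous chunks so that every row lands in exactly one test fold (the `kfold_test_indices` helper below is illustrative, not scikit-learn's implementation):

```python
def kfold_test_indices(n_samples, k):
    """Yield the test-set row indices for each of the k folds (no shuffling)."""
    base, extra = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        size = base + (1 if fold < extra else 0)
        yield list(range(start, start + size))
        start += size

folds = list(kfold_test_indices(10, 3))
print(folds)  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each fold's training set is simply every index *not* in that fold's test list, which is why each row is used for testing exactly once and for training *k* - 1 times.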
You'll notice that we used **Stratified *k*-fold cross-validation** in the code above. Stratified *k*-fold keeps the class proportions the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have 100% `Iris setosa` entries in one of the folds.)We can perform 10-fold cross-validation on our model with the following code:
```python
from scipy import stats
from sklearn.model_selection import cross_val_score

decision_tree_classifier = DecisionTreeClassifier()

# cross_val_score returns a list of the scores, which we can visualize
# to get a reasonable estimate of our classifier's performance
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)));

len(all_inputs.T[1])
print("Entropy for: ", stats.entropy(all_inputs.T[1]))

# We go through each measurement column and compute the entropy
# of the (non-missing) data in that column
def print_entropy(npdata):
    for i, col in enumerate(npdata.T):
        print("Entropy for column:", i, stats.entropy(col))

print_entropy(all_inputs)
```
```
Entropy for column: 0 4.9947332367061925
Entropy for column: 1 4.994187360273029
Entropy for column: 2 4.88306851089088
Entropy for column: 3 4.76945055275522
```
Now we have a much more consistent rating of our classifier's general classification accuracy.

### Parameter tuning

[[ go back to the top ]](Table-of-contents)

Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:
```python
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)

cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)));
```
the classification accuracy falls tremendously.

Therefore, we need to find a systematic method to discover the best parameters for our model and data set.

The most common method for model parameter tuning is **Grid Search**. The idea behind Grid Search is simple: explore a range of parameters and find the best-performing parameter combination. Focus your search on the best range of parameters, then repeat this process several times until the best parameters are discovered.

Let's tune our decision tree classifier. We'll stick to only two parameters for now, but it's possible to simultaneously explore dozens of parameters if we want.
```python
from sklearn.model_selection import GridSearchCV

decision_tree_classifier = DecisionTreeClassifier()

parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
                  'max_features': [1, 2, 3, 4]}

cross_validation = StratifiedKFold(n_splits=10)

grid_search = GridSearchCV(decision_tree_classifier,
                           param_grid=parameter_grid,
                           cv=cross_validation)

grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
```
```
Best score: 0.9664429530201343
Best parameters: {'max_depth': 3, 'max_features': 2}
```
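What `GridSearchCV` does above is an exhaustive enumeration, which can be written out by hand with `itertools.product`. In this sketch, `toy_score` is a hypothetical stand-in for the cross-validated accuracy that the real grid search would compute for each combination:

```python
from itertools import product

parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
                  'max_features': [1, 2, 3, 4]}

def toy_score(max_depth, max_features):
    # Stand-in scoring function, purely illustrative:
    # accuracy improves with depth (up to 3) and with more features
    return min(max_depth, 3) * 0.3 + max_features * 0.01

# Try every (max_depth, max_features) combination and keep the best
best = max(product(parameter_grid['max_depth'], parameter_grid['max_features']),
           key=lambda combo: toy_score(*combo))
print(best)  # (3, 4)
```

The real grid search is the same loop with `toy_score` replaced by a full 10-fold cross-validation run per combination, which is why its cost grows multiplicatively with each added parameter.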
Now let's visualize the grid search to see how the parameters interact.
```python
grid_search.cv_results_['mean_test_score']

grid_visualization = grid_search.cv_results_['mean_test_score']
grid_visualization.shape = (5, 4)

sb.heatmap(grid_visualization, cmap='Reds', annot=True)
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(5) + 0.5, grid_search.param_grid['max_depth'])
plt.xlabel('max_features')
plt.ylabel('max_depth');
```
Now we have a better sense of the parameter space: We know that we need a `max_depth` of at least 2 to allow the decision tree to make more than a one-off decision.

`max_features` doesn't really seem to make a big difference here as long as we have 2 of them, which makes sense since our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily separable from the rest based on a single feature.)

Let's go ahead and use a broad grid search to find the best settings for a handful of parameters.
```python
decision_tree_classifier = DecisionTreeClassifier()

parameter_grid = {'criterion': ['gini', 'entropy'],
                  'splitter': ['best', 'random'],
                  'max_depth': [1, 2, 3, 4, 5],
                  'max_features': [1, 2, 3, 4]}

cross_validation = StratifiedKFold(n_splits=10)

grid_search = GridSearchCV(decision_tree_classifier,
                           param_grid=parameter_grid,
                           cv=cross_validation)

grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
```
```
Best score: 0.9664429530201343
Best parameters: {'criterion': 'gini', 'max_depth': 3, 'max_features': 3, 'splitter': 'best'}
```
Now we can take the best classifier from the Grid Search and use that:
```python
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier
```
We can even visualize the decision tree with [GraphViz](http://www.graphviz.org/) to see how it's making the classifications:
```python
import sklearn.tree as tree

# Export the fitted tree in GraphViz .dot format
with open('iris_dtc.dot', 'w') as out_file:
    out_file = tree.export_graphviz(decision_tree_classifier, out_file=out_file)
```
(This classifier may look familiar from earlier in the notebook.)

Alright! We finally have our demo classifier. Let's create some visuals of its performance so we have something to show our company's Head of Data.
```python
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)

sb.boxplot(dt_scores)
sb.stripplot(dt_scores, jitter=True, color='black');
```
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?

We already know from previous projects that Random Forest classifiers usually work better than individual decision trees. A common problem that decision trees face is that they're prone to overfitting: They complexify to the point that they classify the training set near-perfectly, but fail to generalize to data they have not seen before.

**Random Forest classifiers** work around that limitation by creating a whole bunch of decision trees (hence "forest") — each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement) — and have the decision trees work together to make a more accurate classification.

Let that be a lesson for us: **Even in Machine Learning, we get better results when we work together!**

Let's see if a Random Forest classifier works better here.

The great part about scikit-learn is that the training, testing, parameter tuning, etc. process is the same for all models, so we only need to plug in the new classifier.
```python
from sklearn.ensemble import RandomForestClassifier

random_forest_classifier = RandomForestClassifier()

parameter_grid = {'n_estimators': [10, 25, 50, 100],
                  'criterion': ['gini', 'entropy'],
                  'max_features': [1, 2, 3, 4]}

cross_validation = StratifiedKFold(n_splits=10)

grid_search = GridSearchCV(random_forest_classifier,
                           param_grid=parameter_grid,
                           cv=cross_validation)

grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_
```
```
Best score: 0.9664429530201343
Best parameters: {'criterion': 'gini', 'max_features': 3, 'n_estimators': 25}
```
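The two ingredients of the Random Forest idea described above — bootstrap sampling (drawing rows with replacement) and combining the trees' answers by majority vote — can each be sketched in a few lines of standard-library Python. The helper names here are illustrative, not scikit-learn internals:

```python
import random
from collections import Counter

def bootstrap_sample(rows, rng):
    """Draw len(rows) rows *with* replacement -- the random subset each tree trains on."""
    return [rng.choice(rows) for _ in rows]

def majority_vote(predictions):
    """Combine the individual trees' predictions into the forest's answer."""
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(42)
sample = bootstrap_sample(list(range(10)), rng)
print(len(sample))  # 10 -- same size as the original, but repeats are likely

print(majority_vote(['Iris-setosa', 'Iris-virginica', 'Iris-setosa']))  # Iris-setosa
```

Because each tree sees a different bootstrap sample (and a random subset of features at each split), the trees' individual overfitting errors tend to be uncorrelated, and the vote averages much of that error away.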
Now we can compare their performance:
```python
random_forest_classifier = grid_search.best_estimator_

rf_df = pd.DataFrame({'accuracy': cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10),
                      'classifier': ['Random Forest'] * 10})
dt_df = pd.DataFrame({'accuracy': cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10),
                      'classifier': ['Decision Tree'] * 10})
both_df = rf_df.append(dt_df)

sb.boxplot(x='classifier', y='accuracy', data=both_df)
sb.stripplot(x='classifier', y='accuracy', data=both_df, jitter=True, color='black');
```
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there are hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set.

## Step 6: Reproducibility

[[ go back to the top ]](Table-of-contents)

Ensuring that our work is reproducible is the last and — arguably — most important step in any analysis. **As a rule, we shouldn't place much weight on a discovery that can't be reproduced.** As such, if our analysis isn't reproducible, we might as well not have done it.

Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it — both in text and code.

Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This typically goes at the top of our notebooks so our readers know what tools to use.

[Sebastian Raschka](http://sebastianraschka.com/) created a handy [notebook tool](https://github.com/rasbt/watermark) for this:
```python
!pip install watermark
%load_ext watermark

pd.show_versions()
%watermark -a 'RCS_April_2019' -nmv --packages numpy,pandas,sklearn,matplotlib,seaborn
```
```
RCS_April_2019 Wed Apr 17 2019

CPython 3.7.3
IPython 7.4.0

numpy 1.16.2
pandas 0.24.2
sklearn 0.20.3
matplotlib 3.0.3
seaborn 0.9.0

compiler   : MSC v.1915 64 bit (AMD64)
system     : Windows
release    : 10
machine    : AMD64
processor  : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
CPU cores  : 12
interpreter: 64bit
```
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline.
```python
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')

# Testing our data: Our analysis will stop here if any of these assertions are wrong

# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3

# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5

# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
                               (iris_data_clean['sepal_width_cm'].isnull()) |
                               (iris_data_clean['petal_length_cm'].isnull()) |
                               (iris_data_clean['petal_width_cm'].isnull())]) == 0

all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
                              'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values

# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)

# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')

# ...and show some of the predictions from the classifier
(training_inputs,
 testing_inputs,
 training_classes,
 testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)

random_forest_classifier.fit(training_inputs, training_classes)

for input_features, prediction, actual in zip(testing_inputs[:10],
                                              random_forest_classifier.predict(testing_inputs[:10]),
                                              testing_classes[:10]):
    print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
```

The same pipeline, wrapped in a function so it can be rerun on any file that passes the data tests:

```python
def processData(filename):
    # We can jump directly to working with the clean data because we saved our cleaned data set
    iris_data_clean = pd.read_csv(filename)

    # Testing our data: Our analysis will stop here if any of these assertions are wrong

    # We know that we should only have three classes
    assert len(iris_data_clean['class'].unique()) == 3

    # We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
    assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5

    # We know that our data set should have no missing measurements
    assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
                                   (iris_data_clean['sepal_width_cm'].isnull()) |
                                   (iris_data_clean['petal_length_cm'].isnull()) |
                                   (iris_data_clean['petal_width_cm'].isnull())]) == 0

    all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
                                  'petal_length_cm', 'petal_width_cm']].values
    all_labels = iris_data_clean['class'].values

    # This is the classifier that came out of Grid Search
    random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)

    # All that's left to do now is plot the cross-validation scores
    rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
    sb.boxplot(rf_classifier_scores)
    sb.stripplot(rf_classifier_scores, jitter=True, color='black')

    # ...and show some of the predictions from the classifier
    (training_inputs,
     testing_inputs,
     training_classes,
     testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)

    random_forest_classifier.fit(training_inputs, training_classes)

    for input_features, prediction, actual in zip(testing_inputs[:10],
                                                  random_forest_classifier.predict(testing_inputs[:10]),
                                                  testing_classes[:10]):
        print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))

    return rf_classifier_scores

myscores = processData('../data/iris-data-clean.csv')
myscores
```
## Example 2: Sensitivity analysis on a NetLogo model with SALib

This notebook provides a more advanced example of interaction between NetLogo and a Python environment, using the SALib library (Herman & Usher, 2017; available through the pip package manager) to sample and analyze a suitable experimental design for a Sobol global sensitivity analysis. All files used in the example are available from the pyNetLogo repository at https://github.com/quaquel/pyNetLogo.
```python
# Ensure compliance of code with both Python 2 and Python 3
from __future__ import division, print_function
try:
    from itertools import izip as zip
except ImportError:  # will be 3.x series
    pass

%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import pyNetLogo

# Import the sampling and analysis modules for a Sobol variance-based sensitivity analysis
from SALib.sample import saltelli
from SALib.analyze import sobol
```
BSD-3-Clause
docs/source/_docs/pyNetLogo demo - SALib sequential.ipynb
jasonrwang/pyNetLogo
SALib relies on a problem definition dictionary which contains the number of input parameters to sample, their names (which should here correspond to a NetLogo global variable), and the sampling bounds. Documentation for SALib can be found at https://salib.readthedocs.io/en/latest/.
```python
problem = {
    'num_vars': 6,
    'names': ['random-seed',
              'grass-regrowth-time',
              'sheep-gain-from-food',
              'wolf-gain-from-food',
              'sheep-reproduce',
              'wolf-reproduce'],
    'bounds': [[1, 100000],
               [20., 40.],
               [2., 8.],
               [16., 32.],
               [2., 8.],
               [2., 8.]]
}
```
We start by instantiating the wolf-sheep predation example model, specifying the `gui=False` flag to run in headless mode.
```python
netlogo = pyNetLogo.NetLogoLink(gui=False)
netlogo.load_model(r'Wolf Sheep Predation_v6.nlogo')
```
The SALib sampler will automatically generate an appropriate number of samples for Sobol analysis. To calculate first-order, second-order and total sensitivity indices, this gives a sample size of _n*(2p+2)_, where _p_ is the number of input parameters, and _n_ is a baseline sample size which should be large enough to stabilize the estimation of the indices. For this example, we use _n_ = 1000, for a total of 14000 experiments.

For more complex analyses, parallelizing the experiments can significantly improve performance. An additional notebook in the pyNetLogo repository demonstrates the use of the ipyparallel library; parallel processing for NetLogo models is also supported by the Exploratory Modeling Workbench (Kwakkel, 2017).
```python
n = 1000
param_values = saltelli.sample(problem, n, calc_second_order=True)
```
The sampler generates an input array of shape (_n*(2p+2)_, _p_) with rows for each experiment and columns for each input parameter.
```python
param_values.shape
```
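As a sanity check, the design size quoted above follows directly from the _n*(2p+2)_ formula for this problem definition:

```python
# Saltelli design size for the wolf-sheep problem described above
n = 1000  # baseline sample size
p = 6     # number of input parameters in `problem`

print(n * (2 * p + 2))  # 14000 experiments
```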
Assuming we are interested in the mean number of sheep and wolf agents over a timeframe of 100 ticks, we first create an empty dataframe to store the results.
```python
results = pd.DataFrame(columns=['Avg. sheep', 'Avg. wolves'])
```
We then simulate the model over the 14000 experiments, reading input parameters from the param_values array generated by SALib. The repeat_report command is used to track the outcomes of interest over time. To later compare performance with the ipyparallel implementation of the analysis, we also keep track of the elapsed runtime.
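The per-run setup reduces to translating one row of the sample array into NetLogo commands. A dependency-free sketch of that translation, with made-up parameter names and values mirroring the loop that follows:

```python
# Illustrative: turn one sample row into NetLogo command strings
names = ['random-seed', 'grass-regrowth-time', 'sheep-gain-from-food']
row = [42, 30.0, 4.0]
commands = []
for name, value in zip(names, row):
    if name == 'random-seed':
        # the NetLogo random seed uses its own primitive rather than `set`
        commands.append('random-seed {}'.format(value))
    else:
        commands.append('set {0} {1}'.format(name, value))
print(commands[1])  # set grass-regrowth-time 30.0
```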
import time t0=time.time() for run in range(param_values.shape[0]): #Set the input parameters for i, name in enumerate(problem['names']): if name == 'random-seed': #The NetLogo random seed requires a different syntax netlogo.command('random-seed {}'.format(param_values[run,i])) else: #Otherwise, assume the input parameters are global variables netlogo.command('set {0} {1}'.format(name, param_values[run,i])) netlogo.command('setup') #Run for 100 ticks and return the number of sheep and wolf agents at each time step counts = netlogo.repeat_report(['count sheep','count wolves'], 100) #For each run, save the mean value of the agent counts over time results.loc[run, 'Avg. sheep'] = counts['count sheep'].values.mean() results.loc[run, 'Avg. wolves'] = counts['count wolves'].values.mean() elapsed=time.time()-t0 #Elapsed runtime in seconds elapsed
The "to_csv" dataframe method provides a simple way of saving the results to disk. Pandas supports several more advanced storage options, such as serialization with msgpack or hierarchical HDF5 storage.
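As a dependency-free illustration of the same save-then-reload round trip, here is the pattern with only the standard library (the values are made up, and an in-memory buffer stands in for the file on disk):

```python
# Stdlib sketch of the CSV round trip performed below with pandas
import csv
import io

rows = [{'Avg. sheep': '120.5', 'Avg. wolves': '34.2'}]
buf = io.StringIO()  # stands in for 'Sobol_sequential.csv'
writer = csv.DictWriter(buf, fieldnames=['Avg. sheep', 'Avg. wolves'])
writer.writeheader()
writer.writerows(rows)

buf.seek(0)
restored = list(csv.DictReader(buf))
print(restored[0]['Avg. sheep'])  # 120.5
```

Note that `csv.DictReader` returns strings; pandas additionally infers numeric dtypes on read.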
results.to_csv('Sobol_sequential.csv') results = pd.read_csv('Sobol_sequential.csv', header=0, index_col=0) results.head(5)
We can then proceed with the analysis, first using a histogram to visualize output distributions for each outcome:
sns.set_style('white') sns.set_context('talk') fig, ax = plt.subplots(1,len(results.columns), sharey=True) for i, n in enumerate(results.columns): ax[i].hist(results[n], 20) ax[i].set_xlabel(n) ax[0].set_ylabel('Counts') fig.set_size_inches(10,4) fig.subplots_adjust(wspace=0.1) #plt.savefig('JASSS figures/SA - Output distribution.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/SA - Output distribution.png', dpi=300, bbox_inches='tight') plt.show()
Bivariate scatter plots can be useful to visualize relationships between each input parameter and the outputs. Taking the outcome for the average sheep count as an example, we obtain the following, using the scipy library to calculate the Pearson correlation coefficient (r) for each parameter:
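For reference, Pearson's r is the covariance divided by the product of the standard deviations. A dependency-free sketch on toy data (the scipy call below computes the same quantity):

```python
# Pearson's r from first principles, on illustrative data
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 3))  # 1.0
```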
%matplotlib import scipy nrow=2 ncol=3 fig, ax = plt.subplots(nrow, ncol, sharey=True) sns.set_context('talk') y = results['Avg. sheep'] for i, a in enumerate(ax.flatten()): x = param_values[:,i] sns.regplot(x, y, ax=a, ci=None, color='k',scatter_kws={'alpha':0.2, 's':4, 'color':'gray'}) pearson = scipy.stats.pearsonr(x, y) a.annotate("r: {:6.3f}".format(pearson[0]), xy=(0.15, 0.85), xycoords='axes fraction',fontsize=13) if divmod(i,ncol)[1]>0: a.get_yaxis().set_visible(False) a.set_xlabel(problem['names'][i]) a.set_ylim([0,1.1*np.max(y)]) fig.set_size_inches(9,9,forward=True) fig.subplots_adjust(wspace=0.2, hspace=0.3) #plt.savefig('JASSS figures/SA - Scatter.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/SA - Scatter.png', dpi=300, bbox_inches='tight') plt.show()
This indicates a positive relationship between the "sheep-gain-from-food" parameter and the mean sheep count, and negative relationships for the "wolf-gain-from-food" and "wolf-reproduce" parameters.We can then use SALib to calculate first-order (S1), second-order (S2) and total (ST) Sobol indices, to estimate each input's contribution to output variance. By default, 95% confidence intervals are estimated for each index.
Si = sobol.analyze(problem, results['Avg. sheep'].values, calc_second_order=True, print_to_console=False)
As a simple example, we first select and visualize the first-order and total indices for each input, converting the dictionary returned by SALib to a dataframe.
Si_filter = {k:Si[k] for k in ['ST','ST_conf','S1','S1_conf']} Si_df = pd.DataFrame(Si_filter, index=problem['names']) Si_df sns.set_style('white') fig, ax = plt.subplots(1) indices = Si_df[['S1','ST']] err = Si_df[['S1_conf','ST_conf']] indices.plot.bar(yerr=err.values.T,ax=ax) fig.set_size_inches(8,4) #plt.savefig('JASSS figures/SA - Indices.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/SA - Indices.png', dpi=300, bbox_inches='tight') plt.show()
The "sheep-gain-from-food" parameter has the highest ST index, indicating that it contributes over 50% of output variance when accounting for interactions with other parameters. However, the confidence bounds are overly broad due to the small _n_ value used for sampling, so a larger sample would be required for reliable results. For instance, the S1 index is estimated to be larger than ST for the "random-seed" parameter, which is an artifact of the small sample size. We can use a more sophisticated visualization to include the second-order interactions between inputs.
import itertools from math import pi def normalize(x, xmin, xmax): return (x-xmin)/(xmax-xmin) def plot_circles(ax, locs, names, max_s, stats, smax, smin, fc, ec, lw, zorder): s = np.asarray([stats[name] for name in names]) s = 0.01 + max_s * np.sqrt(normalize(s, smin, smax)) fill = True for loc, name, si in zip(locs, names, s): if fc=='w': fill=False else: ec='none' x = np.cos(loc) y = np.sin(loc) circle = plt.Circle((x,y), radius=si, ec=ec, fc=fc, transform=ax.transData._b, zorder=zorder, lw=lw, fill=True) ax.add_artist(circle) def filter(sobol_indices, names, locs, criterion, threshold): if criterion in ['ST', 'S1', 'S2']: data = sobol_indices[criterion] data = np.abs(data) data = data.flatten() # flatten in case of S2 # TODO:: remove nans filtered = ([(name, locs[i]) for i, name in enumerate(names) if data[i]>threshold]) filtered_names, filtered_locs = zip(*filtered) elif criterion in ['ST_conf', 'S1_conf', 'S2_conf']: raise NotImplementedError else: raise ValueError('unknown value for criterion') return filtered_names, filtered_locs def plot_sobol_indices(sobol_indices, criterion='ST', threshold=0.01): '''plot sobol indices on a radial plot Parameters ---------- sobol_indices : dict the return from SAlib criterion : {'ST', 'S1', 'S2', 'ST_conf', 'S1_conf', 'S2_conf'}, optional threshold : float only visualize variables with criterion larger than cutoff ''' max_linewidth_s2 = 15#25*1.8 max_s_radius = 0.3 # prepare data # use the absolute values of all the indices #sobol_indices = {key:np.abs(stats) for key, stats in sobol_indices.items()} # dataframe with ST and S1 sobol_stats = {key:sobol_indices[key] for key in ['ST', 'S1']} sobol_stats = pd.DataFrame(sobol_stats, index=problem['names']) smax = sobol_stats.max().max() smin = sobol_stats.min().min() # dataframe with s2 s2 = pd.DataFrame(sobol_indices['S2'], index=problem['names'], columns=problem['names']) s2[s2<0.0]=0. 
#Set negative values to 0 (artifact from small sample sizes) s2max = s2.max().max() s2min = s2.min().min() names = problem['names'] n = len(names) ticklocs = np.linspace(0, 2*pi, n+1) locs = ticklocs[0:-1] filtered_names, filtered_locs = filter(sobol_indices, names, locs, criterion, threshold) # setup figure fig = plt.figure() ax = fig.add_subplot(111, polar=True) ax.grid(False) ax.spines['polar'].set_visible(False) ax.set_xticks(ticklocs) ax.set_xticklabels(names) ax.set_yticklabels([]) ax.set_ylim(ymax=1.4) legend(ax) # plot ST plot_circles(ax, filtered_locs, filtered_names, max_s_radius, sobol_stats['ST'], smax, smin, 'w', 'k', 1, 9) # plot S1 plot_circles(ax, filtered_locs, filtered_names, max_s_radius, sobol_stats['S1'], smax, smin, 'k', 'k', 1, 10) # plot S2 for name1, name2 in itertools.combinations(zip(filtered_names, filtered_locs), 2): name1, loc1 = name1 name2, loc2 = name2 weight = s2.loc[name1, name2] #.loc replaces the deprecated .ix indexer lw = 0.5+max_linewidth_s2*normalize(weight, s2min, s2max) ax.plot([loc1, loc2], [1,1], c='darkgray', lw=lw, zorder=1) return fig from matplotlib.legend_handler import HandlerPatch class HandlerCircle(HandlerPatch): def create_artists(self, legend, orig_handle, xdescent, ydescent, width, height, fontsize, trans): center = 0.5 * width - 0.5 * xdescent, 0.5 * height - 0.5 * ydescent p = plt.Circle(xy=center, radius=orig_handle.radius) self.update_prop(p, orig_handle, legend) p.set_transform(trans) return [p] def legend(ax): some_identifiers = [plt.Circle((0,0), radius=5, color='k', fill=False, lw=1), plt.Circle((0,0), radius=5, color='k', fill=True), plt.Line2D([0,0.5], [0,0.5], lw=8, color='darkgray')] ax.legend(some_identifiers, ['ST', 'S1', 'S2'], loc=(1,0.75), borderaxespad=0.1, mode='expand', handler_map={plt.Circle: HandlerCircle()}) sns.set_style('whitegrid') fig = plot_sobol_indices(Si, criterion='ST', threshold=0.005) fig.set_size_inches(7,7) #plt.savefig('JASSS figures/Figure 8 - Interactions.pdf', bbox_inches='tight') #plt.savefig('JASSS figures/Figure 8 - Interactions.png', dpi=300, bbox_inches='tight') plt.show()
In this case, the sheep-gain-from-food variable has strong interactions with the wolf-gain-from-food and sheep-reproduce inputs in particular. The size of the ST and S1 circles corresponds to the normalized variable importances. Finally, the kill_workspace() function shuts down the NetLogo instance.
netlogo.kill_workspace()
1. Visualize The 10 Slowest Players
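The filter-sort-head pattern used throughout this section can be sketched without pandas. The records below are made-up (player, seconds added per point) pairs:

```python
# Illustrative: keep positive values, sort descending, take the top 10
records = [('A', 1.2), ('B', -0.3), ('C', 2.5), ('D', 0.8)]
slowest = sorted((r for r in records if r[1] > 0),
                 key=lambda r: r[1], reverse=True)[:10]
print(slowest[0][0])  # C
```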
most_slow_Players = players_time[players_time["seconds_added_per_point"] > 0].sort_values(by="seconds_added_per_point", ascending=False).head(10) most_slow_Players sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.barplot(x="seconds_added_per_point", y="player", data=most_slow_Players) ax.set_title("TOP 10 MOST SLOW PLAYERS", fontsize=17) plt.xlabel("Seconds", fontsize=17) plt.ylabel("Players", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
MIT
Tennis_Time_Data_Visualization.ipynb
Tinzyl/Tennis_Time_Data_Visualization
2. Visualize The 10 Fastest Players
most_fast_Players = players_time[players_time["seconds_added_per_point"] < 0].sort_values(by="seconds_added_per_point").head(10) most_fast_Players sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.barplot(x="seconds_added_per_point", y="player", data=most_fast_Players) ax.set_title("TOP 10 MOST FAST PLAYERS", fontsize=17) plt.xlabel("Seconds", fontsize=17) plt.ylabel("Players", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
3. Visualize The Time Of The Big 3
big_three_time = players_time[(players_time["player"] == "Novak Djokovic") | (players_time["player"] == "Roger Federer") | (players_time["player"] == "Rafael Nadal")] big_three_time sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.barplot(x="seconds_added_per_point", y="player", data=big_three_time) ax.set_title("TIME OF THE BIG THREE", fontsize=17) plt.xlabel("Seconds", fontsize=17) plt.ylabel("Players", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
4. Figure Out The Top 10 Surfaces That Take The Longest Time
longest_time_surfaces = events_time[events_time["seconds_added_per_point"] > 0].sort_values(by="seconds_added_per_point", ascending=False).head(10) longest_time_surfaces sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.barplot(x="seconds_added_per_point", y="tournament", hue="surface", data=longest_time_surfaces) ax.set_title("TOP 10 SURFACES THAT TAKE THE LONGEST TIME", fontsize=17) plt.xlabel("Seconds", fontsize=17) plt.ylabel("Tournament", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
5. Figure Out The Top 10 Surfaces That Take The Shortest Time
shortest_time_surfaces = events_time[events_time["seconds_added_per_point"] < 0].sort_values(by="seconds_added_per_point").head(10) shortest_time_surfaces sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax = sns.barplot(x="seconds_added_per_point", y="tournament", hue="surface", data=shortest_time_surfaces) ax.set_title("TOP 10 SURFACES THAT TAKE THE SHORTEST TIME", fontsize=17) plt.xlabel("Seconds", fontsize=17) plt.ylabel("Tournament", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
6. Figure Out How The Time For The Clay Surface Has Progressed Throughout The Years
years = events_time[~events_time["years"].str.contains("-")] sorted_years_clay = years[years["surface"] == "Clay"].sort_values(by="years") sorted_years_clay sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.lineplot(x="years", y="seconds_added_per_point", hue="surface", data=sorted_years_clay) ax.set_title("PROGRESSION OF TIME FOR THE CLAY SURFACE THROUGHOUT THE YEARS", fontsize=17) plt.xlabel("Years", fontsize=17) plt.ylabel("Seconds", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
7. Figure Out How The Time For The Hard Surface Has Progressed Throughout The Years
sorted_years_hard = years[years["surface"] == "Hard"].sort_values(by="years") sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.lineplot(x="years", y="seconds_added_per_point", hue="surface", data=sorted_years_hard) ax.set_title("PROGRESSION OF TIME FOR THE HARD SURFACE THROUGHOUT THE YEARS", fontsize=17) plt.xlabel("Years", fontsize=17) plt.ylabel("Seconds", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
8. Figure Out How The Time For The Carpet Surface Has Progressed Throughout The Years
sorted_years_carpet = years[years["surface"] == "Carpet"].sort_values(by="years") sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.lineplot(x="years", y="seconds_added_per_point", hue="surface", data=sorted_years_carpet) ax.set_title("PROGRESSION OF TIME FOR THE CARPET SURFACE THROUGHOUT THE YEARS", fontsize=17) plt.xlabel("Years", fontsize=17) plt.ylabel("Seconds", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
9. Figure Out How The Time For The Grass Surface Has Progressed Throughout The Years
sorted_years_grass = events_time[events_time["surface"] == "Grass"].sort_values(by="years").head(5) sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax= sns.lineplot(x="years", y="seconds_added_per_point", hue="surface", data=sorted_years_grass) ax.set_title("PROGRESSION OF TIME FOR THE GRASS SURFACE THROUGHOUT THE YEARS", fontsize=17) plt.xlabel("Years", fontsize=17) plt.ylabel("Seconds", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
10. Figure Out The Person Who Took The Most Time Serving In 2015
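The groupby-sum aggregation used here can be sketched with a plain dictionary (server names and times below are illustrative, not from the data set):

```python
# Stand-in for serve_time.groupby("server")["seconds_before_next_point"].agg("sum")
serve_times = [('Nadal', 30.1), ('Federer', 18.2), ('Nadal', 28.4)]
totals = {}
for server, secs in serve_times:
    totals[server] = totals.get(server, 0.0) + secs
print(round(totals['Nadal'], 1))  # 58.5
```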
serve_time serve_time_visualization = serve_time.groupby("server")["seconds_before_next_point"].agg("sum") serve_time_visualization serve_time_visual_data = serve_time_visualization.reset_index() serve_time_visual_data serve_time_visual_sorted = serve_time_visual_data.sort_values(by="seconds_before_next_point", ascending = False) sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax = sns.barplot(x="seconds_before_next_point", y="server", data=serve_time_visual_sorted) ax.set_title("PLAYERS TOTAL SERVING TIME(2015) ", fontsize=17) plt.xlabel("Seconds", fontsize=17) plt.ylabel("Player", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
BIG THREE TOTAL SERVING TIME IN 2015
big_three_total_serving_time = serve_time_visual_sorted[(serve_time_visual_sorted["server"] == "Roger Federer") | (serve_time_visual_sorted["server"] == "Rafael Nadal") | (serve_time_visual_sorted["server"] == "Novak Djokovic")] big_three_total_serving_time sns.set(style="darkgrid") plt.figure(figsize = (10,5)) ax = sns.barplot(x="seconds_before_next_point", y="server", data=big_three_total_serving_time) ax.set_title("BIG THREE TOTAL SERVING TIME(2015) ", fontsize=17) plt.xlabel("Seconds", fontsize=17) plt.ylabel("Player", fontsize=17) plt.yticks(size=17) plt.xticks(size=17) plt.show()
Data
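The resize step below fans a function over many files with a thread pool. Here is the same pattern in isolation, with a stand-in function (`fake_resize` is illustrative, not part of the notebook) so no image files are needed:

```python
# Pattern used below: ThreadPoolExecutor.map over a list of inputs
from concurrent.futures import ThreadPoolExecutor

def fake_resize(n):
    # stands in for Image.open(fn).resize((128,128)).save(...)
    return (n, 128, 128)

with ThreadPoolExecutor(8) as e:
    # e.map preserves input order even though work runs concurrently
    results = list(e.map(fake_resize, range(4)))
print(results[0])  # (0, 128, 128)
```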
PATH = Path('/home/giles/Downloads/fastai_data/salt/') MASKS_FN = 'train_masks.csv' META_FN = 'metadata.csv' masks_csv = pd.read_csv(PATH/MASKS_FN) meta_csv = pd.read_csv(PATH/META_FN) def show_img(im, figsize=None, ax=None, alpha=None): if not ax: fig,ax = plt.subplots(figsize=figsize) ax.imshow(im, alpha=alpha) ax.set_axis_off() return ax (PATH/'train_masks-128').mkdir(exist_ok=True) def resize_img(fn): Image.open(fn).resize((128,128)).save((fn.parent.parent)/'train_masks-128'/fn.name) files = list((PATH/'train_masks').iterdir()) with ThreadPoolExecutor(8) as e: e.map(resize_img, files) (PATH/'train-128').mkdir(exist_ok=True) def resize_img(fn): Image.open(fn).resize((128,128)).save((fn.parent.parent)/'train-128'/fn.name) files = list((PATH/'train').iterdir()) with ThreadPoolExecutor(8) as e: e.map(resize_img, files) TRAIN_DN = 'train-128' MASKS_DN = 'train_masks-128' sz = 32 bs = 64 nw = 16
Apache-2.0
notebooks/old/Salt_9-resne34-with-highLR-derper.ipynb
GilesStrong/Kaggle_TGS-Salt
TRAIN_DN = 'train' MASKS_DN = 'train_masks_png' sz = 128 bs = 64 nw = 16
class MatchedFilesDataset(FilesDataset): def __init__(self, fnames, y, transform, path): self.y=y assert(len(fnames)==len(y)) super().__init__(fnames, transform, path) def get_y(self, i): return open_image(os.path.join(self.path, self.y[i])) def get_c(self): return 0 x_names = np.array(glob(f'{PATH}/{TRAIN_DN}/*')) y_names = np.array(glob(f'{PATH}/{MASKS_DN}/*')) val_idxs = list(range(800)) ((val_x,trn_x),(val_y,trn_y)) = split_by_idx(val_idxs, x_names, y_names) aug_tfms = [RandomFlip(tfm_y=TfmType.CLASS)] tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS, aug_tfms=aug_tfms) datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, path=PATH) md = ImageData(PATH, datasets, bs, num_workers=16, classes=None) denorm = md.trn_ds.denorm x,y = next(iter(md.trn_dl)) x.shape,y.shape denorm = md.val_ds.denorm def show_aug_img(ims, idx, figsize=(5,5), normed=True, ax=None, nchannels=3): if ax is None: fig,ax = plt.subplots(figsize=figsize) if normed: ims = denorm(ims) else: ims = np.rollaxis(to_np(ims),1,nchannels+1) ax.imshow(np.clip(ims,0,1)[idx]) ax.axis('off') batches = [next(iter(md.aug_dl)) for i in range(9)] fig, axes = plt.subplots(3, 6, figsize=(18, 9)) for i,(x,y) in enumerate(batches): show_aug_img(x,1, ax=axes.flat[i*2]) show_aug_img(y,1, ax=axes.flat[i*2+1], nchannels=1, normed=False)
Simple upsample
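The dice metric defined below compares a binary prediction with the target mask: twice the intersection over the sum of the two areas. A pure-Python restatement on flat 0/1 lists (no thresholding step, since the inputs are already binary):

```python
# Same formula as the torch dice below, on flat binary lists
def dice_flat(pred, targs):
    inter = sum(p * t for p, t in zip(pred, targs))
    return 2.0 * inter / (sum(pred) + sum(targs))

print(round(dice_flat([1, 1, 0, 0], [1, 0, 0, 0]), 3))  # 0.667
```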
f = resnet34 cut,lr_cut = model_meta[f] def get_base(): layers = cut_model(f(True), cut) return nn.Sequential(*layers) def dice(pred, targs): pred = (pred>0.5).float() return 2. * (pred*targs).sum() / (pred+targs).sum()
U-net (ish)
class SaveFeatures(): features=None def __init__(self, m): self.hook = m.register_forward_hook(self.hook_fn) def hook_fn(self, module, input, output): self.features = output def remove(self): self.hook.remove() class UnetBlock(nn.Module): def __init__(self, up_in, x_in, n_out): super().__init__() up_out = x_out = n_out//2 self.x_conv = nn.Conv2d(x_in, x_out, 1) self.tr_conv = nn.ConvTranspose2d(up_in, up_out, 2, stride=2) self.bn = nn.BatchNorm2d(n_out) def forward(self, up_p, x_p): up_p = self.tr_conv(up_p) x_p = self.x_conv(x_p) cat_p = torch.cat([up_p,x_p], dim=1) return self.bn(F.relu(cat_p)) class Unet34(nn.Module): def __init__(self, rn): super().__init__() self.rn = rn self.sfs = [SaveFeatures(rn[i]) for i in [2,4,5,6]] self.up1 = UnetBlock(512,256,256) self.up2 = UnetBlock(256,128,256) self.up3 = UnetBlock(256,64,256) self.up4 = UnetBlock(256,64,256) self.up5 = UnetBlock(256,3,16) self.up6 = nn.ConvTranspose2d(16, 1, 1) def forward(self,x): inp = x x = F.relu(self.rn(x)) x = self.up1(x, self.sfs[3].features) x = self.up2(x, self.sfs[2].features) x = self.up3(x, self.sfs[1].features) x = self.up4(x, self.sfs[0].features) x = self.up5(x, inp) x = self.up6(x) return x[:,0] def close(self): for sf in self.sfs: sf.remove() class UnetModel(): def __init__(self,model,name='unet'): self.model,self.name = model,name def get_layer_groups(self, precompute): lgs = list(split_by_idxs(children(self.model.rn), [lr_cut])) return lgs + [children(self.model)[1:]] m_base = get_base() m = to_gpu(Unet34(m_base)) models = UnetModel(m) learn = ConvLearner(md, models) learn.opt_fn=optim.Adam learn.crit=nn.BCEWithLogitsLoss() learn.metrics=[accuracy_thresh(0.5),dice] learn.summary() [o.features.size() for o in m.sfs] learn.freeze_to(1) learn.lr_find() learn.sched.plot() lr=1e-2 wd=1e-7 lrs = np.array([lr/9,lr/3,lr]) learn.fit(lr,1,wds=wd,cycle_len=10,use_clr=(5,8)) learn.save('32urn-tmp') learn.load('32urn-tmp') learn.unfreeze() learn.bn_freeze(True) learn.fit(lrs/4, 1, wds=wd, cycle_len=20,use_clr=(20,10)) learn.sched.plot_lr() learn.save('32urn-0') learn.load('32urn-0') x,y = next(iter(md.val_dl)) py = to_np(learn.model(V(x))) show_img(py[0]>0.5); show_img(y[0]); show_img(x[0][0]); m.close()
64x64
sz=64 bs=64 tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS, aug_tfms=aug_tfms) datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, path=PATH) md = ImageData(PATH, datasets, bs, num_workers=16, classes=None) denorm = md.trn_ds.denorm m_base = get_base() m = to_gpu(Unet34(m_base)) models = UnetModel(m) learn = ConvLearner(md, models) learn.opt_fn=optim.Adam learn.crit=nn.BCEWithLogitsLoss() learn.metrics=[accuracy_thresh(0.5),dice] learn.freeze_to(1) learn.load('32urn-0') learn.fit(lr/2,1,wds=wd, cycle_len=10,use_clr=(10,10)) learn.sched.plot_lr() learn.save('64urn-tmp') learn.unfreeze() learn.bn_freeze(True) learn.load('64urn-tmp') learn.fit(lrs/4,1,wds=wd, cycle_len=8,use_clr=(20,8)) learn.sched.plot_lr() learn.save('64urn') learn.load('64urn') x,y = next(iter(md.val_dl)) py = to_np(learn.model(V(x))) show_img(py[0]>0.5); show_img(y[0]); show_img(x[0][0]); m.close()
128x128
sz=128 bs=64 tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS, aug_tfms=aug_tfms) datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, path=PATH) md = ImageData(PATH, datasets, bs, num_workers=16, classes=None) denorm = md.trn_ds.denorm m_base = get_base() m = to_gpu(Unet34(m_base)) models = UnetModel(m) learn = ConvLearner(md, models) learn.opt_fn=optim.Adam learn.crit=nn.BCEWithLogitsLoss() learn.metrics=[accuracy_thresh(0.5),dice] learn.load('64urn') learn.fit(lr/2,1, wds=wd, cycle_len=6,use_clr=(6,4)) learn.save('128urn-tmp') learn.load('128urn-tmp') learn.unfreeze() learn.bn_freeze(True) #lrs = np.array([lr/200,lr/30,lr]) learn.fit(lrs/5,1, wds=wd,cycle_len=8,use_clr=(20,8)) learn.sched.plot_lr() learn.sched.plot_loss() learn.save('128urn') learn.load('128urn') x,y = next(iter(md.val_dl)) py = to_np(learn.model(V(x))) show_img(py[0]>0.5); show_img(y[0]); show_img(x[0][0]); y.shape batches = [next(iter(md.aug_dl)) for i in range(9)] fig, axes = plt.subplots(3, 6, figsize=(18, 9)) for i,(x,y) in enumerate(batches): show_aug_img(x,1, ax=axes.flat[i*2]) show_aug_img(y,1, ax=axes.flat[i*2+1], nchannels=1, normed=False)
Test on original validation
x_names_orig = np.array(glob(f'{PATH}/train/*')) y_names_orig = np.array(glob(f'{PATH}/train_masks/*')) val_idxs_orig = list(range(800)) ((val_x_orig,trn_x_orig),(val_y_orig,trn_y_orig)) = split_by_idx(val_idxs_orig, x_names_orig, y_names_orig) sz=128 bs=64 tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS, aug_tfms=aug_tfms) datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, path=PATH) md = ImageData(PATH, datasets, bs, num_workers=16, classes=None) denorm = md.trn_ds.denorm m_base = get_base() m = to_gpu(Unet34(m_base)) models = UnetModel(m) learn = ConvLearner(md, models) learn.opt_fn=optim.Adam learn.crit=nn.BCEWithLogitsLoss() learn.metrics=[accuracy_thresh(0.5),dice] learn.load('128urn') probs = learn.predict() probs.shape _, y = learn.TTA(n_aug=1) y.shape idx=0 show_img(probs[idx]>0.5); show_img(probs[idx]); show_img(y[idx]); show_img(x[idx][0]);
Optimise threshold
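The threshold optimisation below is a simple sweep-and-argmax: score each candidate threshold and keep the best. The pattern in isolation, with a made-up quadratic `score` standing in for `iou_metric_batch`:

```python
# Sweep candidate thresholds and keep the argmax of the score
def score(t):
    # illustrative stand-in for iou_metric_batch; peaks at t = 0.3
    return -(t - 0.3) ** 2

candidates = [i / 10 for i in range(-5, 6)]
scores = [score(t) for t in candidates]
best = candidates[scores.index(max(scores))]
print(best)  # 0.3
```

In the real notebook the sweep is done twice: a coarse pass over a wide range, then a finer pass around the best coarse value.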
# src: https://www.kaggle.com/aglotero/another-iou-metric def iou_metric(y_true_in, y_pred_in, print_table=False): labels = y_true_in y_pred = y_pred_in true_objects = 2 pred_objects = 2 intersection = np.histogram2d(labels.flatten(), y_pred.flatten(), bins=(true_objects, pred_objects))[0] # Compute areas (needed for finding the union between all objects) area_true = np.histogram(labels, bins = true_objects)[0] area_pred = np.histogram(y_pred, bins = pred_objects)[0] area_true = np.expand_dims(area_true, -1) area_pred = np.expand_dims(area_pred, 0) # Compute union union = area_true + area_pred - intersection # Exclude background from the analysis intersection = intersection[1:,1:] union = union[1:,1:] union[union == 0] = 1e-9 # Compute the intersection over union iou = intersection / union # Precision helper function def precision_at(threshold, iou): matches = iou > threshold true_positives = np.sum(matches, axis=1) == 1 # Correct objects false_positives = np.sum(matches, axis=0) == 0 # Missed objects false_negatives = np.sum(matches, axis=1) == 0 # Extra objects tp, fp, fn = np.sum(true_positives), np.sum(false_positives), np.sum(false_negatives) return tp, fp, fn # Loop over IoU thresholds prec = [] if print_table: print("Thresh\tTP\tFP\tFN\tPrec.") for t in np.arange(0.5, 1.0, 0.05): tp, fp, fn = precision_at(t, iou) if (tp + fp + fn) > 0: p = tp / (tp + fp + fn) else: p = 0 if print_table: print("{:1.3f}\t{}\t{}\t{}\t{:1.3f}".format(t, tp, fp, fn, p)) prec.append(p) if print_table: print("AP\t-\t-\t-\t{:1.3f}".format(np.mean(prec))) return np.mean(prec) def iou_metric_batch(y_true_in, y_pred_in): batch_size = y_true_in.shape[0] metric = [] for batch in range(batch_size): value = iou_metric(y_true_in[batch], y_pred_in[batch]) metric.append(value) return np.mean(metric) thres = np.linspace(-1, 1, 10) thres_ioc = [iou_metric_batch(y, np.int32(probs > t)) for t in tqdm_notebook(thres)] plt.plot(thres, thres_ioc); best_thres = thres[np.argmax(thres_ioc)] best_thres, max(thres_ioc) thres = np.linspace(-0.5, 0.5, 50) thres_ioc = [iou_metric_batch(y, np.int32(probs > t)) for t in tqdm_notebook(thres)] plt.plot(thres, thres_ioc); best_thres = thres[np.argmax(thres_ioc)] best_thres, max(thres_ioc) show_img(probs[0]>best_thres);
Run on test
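The submission format encodes each binary mask as run-length pairs ("start length", column-major order, 1-indexed). A minimal sketch of that encoding on a flat binary list, mirroring the RLenc routine that appears in the cell below:

```python
# Run-length encode a flat 0/1 sequence into "start length" pairs (1-indexed)
def rle(flat):
    runs, r, pos = [], 0, 1
    for c in flat:
        if c == 0:
            if r:
                runs.append((pos, r))
                pos += r
                r = 0
            pos += 1
        else:
            r += 1
    if r:  # sequence ends inside a run
        runs.append((pos, r))
    return ' '.join('{} {}'.format(a, b) for a, b in runs)

print(rle([0, 1, 1, 0, 1]))  # 2 2 5 1
```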
(PATH/'test-128').mkdir(exist_ok=True) def resize_img(fn): Image.open(fn).resize((128,128)).save((fn.parent.parent)/'test-128'/fn.name) files = list((PATH/'test').iterdir()) with ThreadPoolExecutor(8) as e: e.map(resize_img, files) testData = np.array(glob(f'{PATH}/test-128/*')) class TestFilesDataset(FilesDataset): def __init__(self, fnames, y, transform, path): self.y=y assert(len(fnames)==len(y)) super().__init__(fnames, transform, path) def get_y(self, i): return open_image(os.path.join(self.path, self.fnames[i])) def get_c(self): return 0 tfms_from_model(resnet34, sz, crop_type=CropType.NO, tfm_y=TfmType.CLASS, aug_tfms=aug_tfms) datasets = ImageData.get_ds(TestFilesDataset, (trn_x,trn_y), (val_x,val_y), tfms, test=testData, path=PATH) md = ImageData(PATH, datasets, bs, num_workers=16, classes=None) denorm = md.trn_ds.denorm m_base = get_base() m = to_gpu(Unet34(m_base)) models = UnetModel(m) learn = ConvLearner(md, models) learn.opt_fn=optim.Adam learn.crit=nn.BCEWithLogitsLoss() learn.metrics=[accuracy_thresh(0.5),dice] learn.load('128urn') x,y = next(iter(md.test_dl)) py = to_np(learn.model(V(x))) show_img(py[6]>best_thres); show_img(py[6]); show_img(y[6]); probs = learn.predict(is_test=True) show_img(probs[12]>best_thres); show_img(probs[12]); show_img(y[12]); show_img(x[12][0]); with open(f'{PATH}/probs.pkl', 'wb') as fout: #Save results pickle.dump(probs, fout) probs.shape def resize_img(fn): return np.array(Image.fromarray(fn).resize((101,101))) resizePreds = np.array([resize_img(x) for x in probs]) resizePreds.shape show_img(resizePreds[12]); testData f'{PATH}/test' test_ids = next(os.walk(f'{PATH}/test'))[2] def RLenc(img, order='F', format=True): """ img is binary mask image, shape (r,c) order is down-then-right, i.e. Fortran format determines if the order needs to be preformatted (according to submission rules) or not returns run length as an array or string (if format is True) """ bytes = img.reshape(img.shape[0] * img.shape[1], order=order) runs = [] ## list of run lengths r = 0 ## the current run length pos = 1 ## count starts from 1 per WK for c in bytes: if (c == 0): if r != 0: runs.append((pos, r)) pos += r r = 0 pos += 1 else: r += 1 # if last run is unsaved (i.e. data ends with 1) if r != 0: runs.append((pos, r)) pos += r r = 0 if format: z = '' for rr in runs: z += '{} {} '.format(rr[0], rr[1]) return z[:-1] else: return runs pred_dict = {id_[:-4]:RLenc(np.round(resizePreds[i] > best_thres)) for i,id_ in tqdm_notebook(enumerate(test_ids))} sub = pd.DataFrame.from_dict(pred_dict,orient='index') sub.index.names = ['id'] sub.columns = ['rle_mask'] sub.to_csv('submission.csv') sub
AWS Elastic Kubernetes Service (EKS) Deep MNIST In this example we will deploy a tensorflow MNIST model in Amazon Web Services' Elastic Kubernetes Service (EKS). This tutorial will break down into the following sections: 1) Train a tensorflow model to predict mnist locally 2) Containerise the tensorflow model with our docker utility 3) Send some data to the docker model to test it 4) Install and configure AWS tools to interact with AWS 5) Use the AWS tools to create and setup EKS cluster with Seldon 6) Push and run docker image through the AWS Container Registry 7) Test our Elastic Kubernetes deployment by sending some data. Let's get started! 🚀🔥 Dependencies: * Helm v3.0.0+ * A Kubernetes cluster running v1.13 or above (minikube / docker-for-windows work well if enough RAM) * kubectl v1.14+ * EKS CLI v0.1.32 * AWS CLI v1.16.163 * Python 3.6+ * Python DEV requirements 1) Train a tensorflow model to predict mnist locally We will load the mnist images, together with their labels, and then train a tensorflow model to predict the right labels
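The model's final layer applies a softmax, which maps raw logits to a probability distribution over the ten digit classes. A dependency-free sketch of that mapping on toy logits:

```python
# Softmax as used in the model below: exponentiate, then normalize
from math import exp

def softmax(logits):
    exps = [exp(v) for v in logits]
    s = sum(exps)
    return [v / s for v in exps]

probs = softmax([1.0, 2.0, 3.0])
print(round(sum(probs), 6))  # 1.0
```

The outputs always sum to one, and larger logits get larger probabilities, which is why `tf.argmax(y, 1)` below recovers the predicted class.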
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
import tensorflow as tf

if __name__ == '__main__':

    x = tf.placeholder(tf.float32, [None, 784], name="x")

    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))

    y = tf.nn.softmax(tf.matmul(x, W) + b, name="y")

    y_ = tf.placeholder(tf.float32, [None, 10])

    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    init = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init)

    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

    saver = tf.train.Saver()
    saver.save(sess, "model/deep_mnist_model")
Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz 0.9194
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
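The loss minimised in the training cell above is softmax cross-entropy (`-tf.reduce_sum(y_ * tf.log(y))` applied to softmax outputs). For intuition only, here is a minimal pure-Python sketch of the two pieces; the helper names are illustrative, not from the notebook:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, one_hot):
    # Mirrors -sum(y_ * log(y)) from the training cell, for one example.
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs) if t)

probs = softmax([2.0, 1.0, 0.1])
loss = cross_entropy(probs, [1, 0, 0])
print(probs, loss)
```

The loss is small when the probability assigned to the true class is near 1 and grows without bound as it approaches 0, which is what drives the gradient-descent step above.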
## 2) Containerise the TensorFlow model with our Docker utility

First, make sure you have added the `.s2i/environment` configuration file in this folder with the following content:
!cat .s2i/environment
MODEL_NAME=DeepMnist API_TYPE=REST SERVICE_TYPE=MODEL PERSISTENCE=0
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
Now we can build a docker image named "deep-mnist" with the tag 0.1
!s2i build . seldonio/seldon-core-s2i-python36:1.5.0-dev deep-mnist:0.1
---> Installing application source... ---> Installing dependencies ... Looking in links: /whl Requirement already satisfied: tensorflow>=1.12.0 in /usr/local/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (1.13.1) Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.0.9) Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (0.2.2) Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (0.7.1) Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (0.7.1) Requirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.0.7) Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.12.0) Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.1.0) Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.19.0) Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (0.33.1) Requirement already satisfied: tensorboard<1.14.0,>=1.13.0 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.13.1) Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.16.2) Requirement already satisfied: protobuf>=3.6.1 in 
/usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (3.7.0) Requirement already satisfied: tensorflow-estimator<1.14.0rc0,>=1.13.0 in /usr/local/lib/python3.6/site-packages (from tensorflow>=1.12.0->-r requirements.txt (line 1)) (1.13.0) Requirement already satisfied: h5py in /usr/local/lib/python3.6/site-packages (from keras-applications>=1.0.6->tensorflow>=1.12.0->-r requirements.txt (line 1)) (2.9.0) Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow>=1.12.0->-r requirements.txt (line 1)) (3.0.1) Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow>=1.12.0->-r requirements.txt (line 1)) (0.15.0) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/site-packages (from protobuf>=3.6.1->tensorflow>=1.12.0->-r requirements.txt (line 1)) (40.8.0) Requirement already satisfied: mock>=2.0.0 in /usr/local/lib/python3.6/site-packages (from tensorflow-estimator<1.14.0rc0,>=1.13.0->tensorflow>=1.12.0->-r requirements.txt (line 1)) (2.0.0) Requirement already satisfied: pbr>=0.11 in /usr/local/lib/python3.6/site-packages (from mock>=2.0.0->tensorflow-estimator<1.14.0rc0,>=1.13.0->tensorflow>=1.12.0->-r requirements.txt (line 1)) (5.1.3) Url '/whl' is ignored. It is either a non-existing path or lacks a specific scheme. You are using pip version 19.0.3, however version 19.1.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Build completed successfully
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
## 3) Send some data to the Docker model to test it

We first run the Docker image we just created as a container called "mnist_predictor":
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 deep-mnist:0.1
5157ab4f516bd0dea11b159780f31121e9fb41df6394e0d6d631e6e0d572463b
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
Send some random features that conform to the contract
import matplotlib.pyplot as plt
import numpy as np  # needed below for np.sum

# This is the variable that was initialised at the beginning of the file
i = [0]
x = mnist.test.images[i]
y = mnist.test.labels[i]
plt.imshow(x.reshape((28, 28)), cmap='gray')
plt.show()
print("Expected label: ", np.sum(range(0, 10) * y), ". One hot encoding: ", y)

from seldon_core.seldon_client import SeldonClient
import math

# We now test the REST endpoint expecting the same result
endpoint = "0.0.0.0:5000"
batch = x
payload_type = "ndarray"

sc = SeldonClient(microservice_endpoint=endpoint)

# We use the microservice, instead of the "predict" function
client_prediction = sc.microservice(
    data=batch,
    method="predict",
    payload_type=payload_type,
    names=["tfidf"])

for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0, 10)):
    print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %")

!docker rm mnist_predictor --force
mnist_predictor
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
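The `np.sum(range(0,10) * y)` expression in the cell above decodes a one-hot label by summing index-times-indicator. A dependency-free sketch of the same trick (the digit 7 here is a hypothetical example):

```python
# Hypothetical one-hot vector for the digit 7, same shape as mnist.test.labels[i].
y = [0.0] * 10
y[7] = 1.0

# Sum of index * indicator recovers the class id, exactly as in the cell above.
label = int(sum(i * v for i, v in enumerate(y)))

# Equivalent and more direct: the argmax of the one-hot vector.
argmax_label = max(range(10), key=lambda i: y[i])

print(label, argmax_label)
```

Both expressions print `7` here; the argmax form is the more common idiom and also works for probability vectors that are not exactly one-hot.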
## 4) Install and configure AWS tools to interact with AWS

First we install the AWS CLI:
!pip install awscli --upgrade --user
Collecting awscli Using cached https://files.pythonhosted.org/packages/f6/45/259a98719e7c7defc9be4cc00fbfb7ccf699fbd1f74455d8347d0ab0a1df/awscli-1.16.163-py2.py3-none-any.whl Collecting colorama<=0.3.9,>=0.2.5 (from awscli) Using cached https://files.pythonhosted.org/packages/db/c8/7dcf9dbcb22429512708fe3a547f8b6101c0d02137acbd892505aee57adf/colorama-0.3.9-py2.py3-none-any.whl Collecting PyYAML<=3.13,>=3.10 (from awscli) Collecting botocore==1.12.153 (from awscli) Using cached https://files.pythonhosted.org/packages/ec/3b/029218966ce62ae9824a18730de862ac8fc5a0e8083d07d1379815e7cca1/botocore-1.12.153-py2.py3-none-any.whl Requirement already satisfied, skipping upgrade: docutils>=0.10 in /home/alejandro/miniconda3/envs/reddit-classification/lib/python3.7/site-packages (from awscli) (0.14) Collecting rsa<=3.5.0,>=3.1.2 (from awscli) Using cached https://files.pythonhosted.org/packages/e1/ae/baedc9cb175552e95f3395c43055a6a5e125ae4d48a1d7a924baca83e92e/rsa-3.4.2-py2.py3-none-any.whl Requirement already satisfied, skipping upgrade: s3transfer<0.3.0,>=0.2.0 in /home/alejandro/miniconda3/envs/reddit-classification/lib/python3.7/site-packages (from awscli) (0.2.0) Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.20; python_version >= "3.4" in /home/alejandro/miniconda3/envs/reddit-classification/lib/python3.7/site-packages (from botocore==1.12.153->awscli) (1.24.2) Requirement already satisfied, skipping upgrade: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /home/alejandro/miniconda3/envs/reddit-classification/lib/python3.7/site-packages (from botocore==1.12.153->awscli) (2.8.0) Requirement already satisfied, skipping upgrade: jmespath<1.0.0,>=0.7.1 in /home/alejandro/miniconda3/envs/reddit-classification/lib/python3.7/site-packages (from botocore==1.12.153->awscli) (0.9.4) Collecting pyasn1>=0.1.3 (from rsa<=3.5.0,>=3.1.2->awscli) Using cached 
https://files.pythonhosted.org/packages/7b/7c/c9386b82a25115cccf1903441bba3cbadcfae7b678a20167347fa8ded34c/pyasn1-0.4.5-py2.py3-none-any.whl Requirement already satisfied, skipping upgrade: six>=1.5 in /home/alejandro/miniconda3/envs/reddit-classification/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore==1.12.153->awscli) (1.12.0) Installing collected packages: colorama, PyYAML, botocore, pyasn1, rsa, awscli Successfully installed PyYAML-3.13 awscli-1.16.163 botocore-1.12.153 colorama-0.3.9 pyasn1-0.4.5 rsa-3.4.2
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
Configure the AWS CLI so it can talk to your AWS account (if you run into issues, make sure you have the permissions to create clusters):
%%bash
# You must make sure that the access key and secret are changed
aws configure << END_OF_INPUTS
YOUR_ACCESS_KEY
YOUR_ACCESS_SECRET
us-west-2
json
END_OF_INPUTS
AWS Access Key ID [****************SF4A]: AWS Secret Access Key [****************WLHu]: Default region name [eu-west-1]: Default output format [json]:
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
## Install eksctl

*IMPORTANT*: These instructions are for Linux.

Please follow the official installation instructions for eksctl at: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
!curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz
!chmod 755 ./eksctl
!./eksctl version
[ℹ] version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.32"} 
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
## 5) Use the AWS tools to create and set up an EKS cluster with Seldon

In this example we will create a cluster with 2 nodes, with a minimum of 1 and a maximum of 3. You can tweak this accordingly.

If you want to check the status of the deployment, you can go to AWS CloudFormation or to the EKS dashboard.

It will take 10-15 minutes (so feel free to go grab a ☕).

*IMPORTANT*: If you get errors in this step, they are most probably due to IAM role access requirements, which you will need to discuss with your administrator.
%%bash
./eksctl create cluster \
--name demo-eks-cluster \
--region us-west-2 \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3
Process is interrupted.
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
## Configure local kubectl

We now want to configure our local kubectl so we can actually reach the cluster we've just created:
!aws eks --region us-west-2 update-kubeconfig --name demo-eks-cluster
Updated context arn:aws:eks:eu-west-1:271049282727:cluster/deepmnist in /home/alejandro/.kube/config
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
And we can check that the context has been added to the kubectl config (contexts are basically the different k8s cluster connections).

You should be able to see the context as "...aws:eks:eu-west-1:27...". If it's not the active context, you can switch to it with `kubectl config use-context <context-name>`.
!kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE * arn:aws:eks:eu-west-1:271049282727:cluster/deepmnist arn:aws:eks:eu-west-1:271049282727:cluster/deepmnist arn:aws:eks:eu-west-1:271049282727:cluster/deepmnist docker-desktop docker-desktop docker-desktop docker-for-desktop docker-desktop docker-desktop gke_ml-engineer_us-central1-a_security-cluster-1 gke_ml-engineer_us-central1-a_security-cluster-1 gke_ml-engineer_us-central1-a_security-cluster-1
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
## Setup Seldon Core

Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions are [also available online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).

## Push the Docker image

In order for the EKS Seldon deployment to access the image we just built, we need to push it to the Elastic Container Registry (ECR).

If you have any issues, please follow the official AWS documentation: https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html

### First we create a registry

You can run the following command, and then see the result at https://us-west-2.console.aws.amazon.com/ecr/repositories
!aws ecr create-repository --repository-name seldon-repository --region us-west-2
{ "repository": { "repositoryArn": "arn:aws:ecr:us-west-2:271049282727:repository/seldon-repository", "registryId": "271049282727", "repositoryName": "seldon-repository", "repositoryUri": "271049282727.dkr.ecr.us-west-2.amazonaws.com/seldon-repository", "createdAt": 1558535798.0 } }
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
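The `repositoryUri` in the output above follows a fixed shape, `<account-id>.dkr.ecr.<region>.amazonaws.com/<repository>`. A small sketch of assembling it by hand; the account id below is a placeholder, not a real account:

```shell
AWS_ACCOUNT_ID="123456789012"   # placeholder, not a real account
AWS_REGION="us-west-2"
REPO_NAME="seldon-repository"

# Same shape as the repositoryUri returned by `aws ecr create-repository`.
IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${REPO_NAME}"
echo "${IMAGE_URI}"
```

This is the exact string the tag and push steps below construct from the `AWS_ACCOUNT_ID` and `AWS_REGION` variables.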
### Now prepare the Docker image

We need to first tag the Docker image before we can push it:
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"

if [ -z "$AWS_ACCOUNT_ID" ]; then
    echo "ERROR: Please provide a value for the AWS variables"
    exit 1
fi

docker tag deep-mnist:0.1 "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository"
_____no_output_____
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
We now log in to AWS through Docker so we can access the repository:
!`aws ecr get-login --no-include-email --region us-west-2`
WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/alejandro/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
### And push the image

Make sure you add your AWS account ID:
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"

if [ -z "$AWS_ACCOUNT_ID" ]; then
    echo "ERROR: Please provide a value for the AWS variables"
    exit 1
fi

docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository"
The push refers to repository [271049282727.dkr.ecr.us-west-2.amazonaws.com/seldon-repository] f7d0d000c138: Preparing 987f3f1afb00: Preparing 00d16a381c47: Preparing bb01f50d544a: Preparing fcb82c6941b5: Preparing 67290e35c458: Preparing b813745f5bb3: Preparing ffecb18e9f0b: Preparing f50f856f49fa: Preparing 80b43ad4adf9: Preparing 14c77983a1cf: Preparing a22a5ac18042: Preparing 6257fa9f9597: Preparing 578414b395b9: Preparing abc3250a6c7f: Preparing 13d5529fd232: Preparing 67290e35c458: Waiting b813745f5bb3: Waiting ffecb18e9f0b: Waiting f50f856f49fa: Waiting 80b43ad4adf9: Waiting 6257fa9f9597: Waiting 14c77983a1cf: Waiting a22a5ac18042: Waiting 578414b395b9: Waiting abc3250a6c7f: Waiting 13d5529fd232: Waiting 987f3f1afb00: Pushed fcb82c6941b5: Pushed bb01f50d544a: Pushed f7d0d000c138: Pushed ffecb18e9f0b: Pushed b813745f5bb3: Pushed f50f856f49fa: Pushed 67290e35c458: Pushed 14c77983a1cf: Pushed 578414b395b9: Pushed 80b43ad4adf9: Pushed 13d5529fd232: Pushed 6257fa9f9597: Pushed abc3250a6c7f: Pushed 00d16a381c47: Pushed a22a5ac18042: Pushed latest: digest: sha256:19aefaa9d87c1287eb46ec08f5d4f9a689744d9d0d0b75668b7d15e447819d74 size: 3691
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
## Running the model

We will now run the model. Let's first have a look at the file we'll be using to trigger it:
!cat deep_mnist.json
{ "apiVersion": "machinelearning.seldon.io/v1alpha2", "kind": "SeldonDeployment", "metadata": { "labels": { "app": "seldon" }, "name": "deep-mnist" }, "spec": { "annotations": { "project_name": "Tensorflow MNIST", "deployment_version": "v1" }, "name": "deep-mnist", "oauth_key": "oauth-key", "oauth_secret": "oauth-secret", "predictors": [ { "componentSpecs": [{ "spec": { "containers": [ { "image": "271049282727.dkr.ecr.us-west-2.amazonaws.com/seldon-repository:latest", "imagePullPolicy": "IfNotPresent", "name": "classifier", "resources": { "requests": { "memory": "1Mi" } } } ], "terminationGracePeriodSeconds": 20 } }], "graph": { "children": [], "name": "classifier", "endpoint": { "type" : "REST" }, "type": "MODEL" }, "name": "single-model", "replicas": 1, "annotations": { "predictor_version" : "v1" } } ] } }
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
Now let's trigger Seldon to run the model.

We basically have a JSON deployment file in which we want to replace the placeholder "REPLACE_FOR_IMAGE_AND_TAG" with the image you pushed:
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"

if [ -z "$AWS_ACCOUNT_ID" ]; then
    echo "ERROR: Please provide a value for the AWS variables"
    exit 1
fi

sed 's|REPLACE_FOR_IMAGE_AND_TAG|'"$AWS_ACCOUNT_ID"'.dkr.ecr.'"$AWS_REGION"'.amazonaws.com/seldon-repository|g' deep_mnist.json | kubectl apply -f -
error: unable to recognize "STDIN": Get https://461835FD3FF52848655C8F09FBF5EEAA.yl4.us-west-2.eks.amazonaws.com/api?timeout=32s: dial tcp: lookup 461835FD3FF52848655C8F09FBF5EEAA.yl4.us-west-2.eks.amazonaws.com on 1.1.1.1:53: no such host
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
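The `sed` substitution above can be checked in isolation by piping a one-line stand-in for the deployment JSON through the same expression; the account id below is a placeholder, and the `RESULT` variable exists only for this check:

```shell
AWS_ACCOUNT_ID="123456789012"   # placeholder, not a real account
AWS_REGION="us-west-2"

# Feed a single stand-in line through the same sed expression used above.
RESULT=$(echo '"image": "REPLACE_FOR_IMAGE_AND_TAG",' \
  | sed 's|REPLACE_FOR_IMAGE_AND_TAG|'"$AWS_ACCOUNT_ID"'.dkr.ecr.'"$AWS_REGION"'.amazonaws.com/seldon-repository|g')
echo "$RESULT"
```

The quoting pattern (`'...'"$VAR"'...'`) closes the single-quoted sed script around each variable so the shell expands it while the rest of the expression stays literal.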
And let's check that it's been created. You should see a pod called "deep-mnist-single-model...". We'll wait until its STATUS changes from "ContainerCreating" to "Running":
!kubectl get pods
NAME READY STATUS RESTARTS AGE ambassador-5475779f98-7bhcw 1/1 Running 0 21m ambassador-5475779f98-986g5 1/1 Running 0 21m ambassador-5475779f98-zcd28 1/1 Running 0 21m deep-mnist-single-model-42ed9d9-fdb557d6b-6xv2h 2/2 Running 0 18m
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
## Test the model

Now we can test the model. Let's first find out the URL that we'll have to use:
!kubectl get svc ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
a68bbac487ca611e988060247f81f4c1-707754258.us-west-2.elb.amazonaws.com
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
We'll use a random example from our dataset
import matplotlib.pyplot as plt
import numpy as np  # needed below for np.sum

# This is the variable that was initialised at the beginning of the file
i = [0]
x = mnist.test.images[i]
y = mnist.test.labels[i]
plt.imshow(x.reshape((28, 28)), cmap='gray')
plt.show()
print("Expected label: ", np.sum(range(0, 10) * y), ". One hot encoding: ", y)
_____no_output_____
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
We can now add the URL above to send our request:
from seldon_core.seldon_client import SeldonClient
import math
import numpy as np

host = "a68bbac487ca611e988060247f81f4c1-707754258.us-west-2.elb.amazonaws.com"
port = "80"  # Make sure you use the port above
batch = x
payload_type = "ndarray"

sc = SeldonClient(
    gateway="ambassador",
    ambassador_endpoint=host + ":" + port,
    namespace="default",
    oauth_key="oauth-key",
    oauth_secret="oauth-secret")

client_prediction = sc.predict(
    data=batch,
    deployment_name="deep-mnist",
    names=["text"],
    payload_type=payload_type)

print(client_prediction)
Success:True message: Request: data { names: "text" ndarray { ... 784 number_value entries for the 28x28 test image (mostly 0.0, plus the digit's grayscale intensities) ... } } [long protobuf request payload truncated for readability]
number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.29411765933036804 } values { number_value: 0.9843137860298157 } values { number_value: 0.9411765336990356 } values { number_value: 0.22352942824363708 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.07450980693101883 } values { number_value: 0.8666667342185974 } values { number_value: 0.9960784912109375 } values { number_value: 0.6509804129600525 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 
0.011764707043766975 } values { number_value: 0.7960785031318665 } values { number_value: 0.9960784912109375 } values { number_value: 0.8588235974311829 } values { number_value: 0.13725490868091583 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.14901961386203766 } values { number_value: 0.9960784912109375 } values { number_value: 0.9960784912109375 } values { number_value: 0.3019607961177826 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.12156863510608673 } values { number_value: 0.8784314393997192 } values { number_value: 0.9960784912109375 } values { number_value: 0.45098042488098145 } values { number_value: 0.003921568859368563 } values { number_value: 0.0 } values { number_value: 0.0 } values 
{ number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.5215686559677124 } values { number_value: 0.9960784912109375 } values { number_value: 0.9960784912109375 } values { number_value: 0.2039215862751007 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.2392157018184662 } values { number_value: 0.9490196704864502 } values { number_value: 0.9960784912109375 } values { number_value: 0.9960784912109375 } values { number_value: 0.2039215862751007 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { 
number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.4745098352432251 } values { number_value: 0.9960784912109375 } values { number_value: 0.9960784912109375 } values { number_value: 0.8588235974311829 } values { number_value: 0.1568627506494522 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.4745098352432251 } values { number_value: 0.9960784912109375 } values { number_value: 0.8117647767066956 } values { number_value: 0.07058823853731155 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { 
number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } values { number_value: 0.0 } } } } } Response: meta { puid: "l6bv1r38mmb32l0hbinln2jjcl" requestPath { key: "classifier" value: "271049282727.dkr.ecr.us-west-2.amazonaws.com/seldon-repository:latest" } } data { names: "class:0" names: "class:1" names: "class:2" names: "class:3" names: "class:4" names: "class:5" names: "class:6" names: "class:7" names: "class:8" names: "class:9" ndarray { values { list_value { values { number_value: 6.839015986770391e-05 } values { number_value: 9.376968534979824e-09 } values { number_value: 8.48581112222746e-05 } values { number_value: 0.0034086888190358877 } values { number_value: 2.3978568606253248e-06 } values { number_value: 2.0100669644307345e-05 } values { number_value: 3.0251623428512175e-08 } values { number_value: 0.9953710436820984 } values { number_value: 2.6070511012221687e-05 } values { number_value: 0.0010185304563492537 } } } } }
Apache-2.0
examples/models/aws_eks_deep_mnist/aws_eks_deep_mnist.ipynb
welcomemandeep/seldon-core
Let's visualise the probability for each label. It seems that the model correctly predicted the number 7.
for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0, 10)):
    print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %")
LABEL 0: 0.0068 % LABEL 1: 0.0000 % LABEL 2: 0.0085 % LABEL 3: 0.3409 % LABEL 4: 0.0002 % LABEL 5: 0.0020 % LABEL 6: 0.0000 % LABEL 7: 99.5371 % LABEL 8: 0.0026 % LABEL 9: 0.1019 %
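As a quick sanity check, the argmax over the per-label probabilities printed above should recover the predicted digit (the values below are copied from that output):

```python
# Per-label probabilities (in percent) copied from the printout above.
probs = [0.0068, 0.0000, 0.0085, 0.3409, 0.0002,
         0.0020, 0.0000, 99.5371, 0.0026, 0.1019]

# The predicted digit is the index of the largest probability.
predicted = max(range(len(probs)), key=probs.__getitem__)
print(f"Predicted digit: {predicted}")  # → Predicted digit: 7
```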
**Introduction to TinyAutoML**---TinyAutoML is a machine learning Python 3.9 library designed as an extension of scikit-learn. It builds an adaptable, auto-tuned pipeline to handle binary classification tasks.

In a few words, your data goes through two main preprocessing steps. The first is scaling and non-stationarity correction, which is followed by Lasso feature selection. Finally, one of the three MetaModels is fitted on the transformed data.

Let's import the library!
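The two preprocessing steps described above can be sketched conceptually with plain scikit-learn. This is an illustration of the idea only, not TinyAutoML's actual implementation (which also handles non-stationarity correction):

```python
# Conceptual sketch (not TinyAutoML internals): scaling followed by
# Lasso-based feature selection, chained in a scikit-learn Pipeline.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

preprocess = Pipeline([
    ("scale", StandardScaler()),                 # step 1: scaling
    ("select", SelectFromModel(LassoCV(cv=5))),  # step 2: Lasso feature selection
])
```

Fitting this pipeline on a dataset keeps only the features to which the cross-validated Lasso assigns non-negligible coefficients.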
%pip install TinyAutoML==0.2.3.3

from TinyAutoML.Models import *
from TinyAutoML import MetaPipeline
_____no_output_____
MIT
introduction-to-Tiny-AutoML.ipynb
thomktz/TinyAutoML
MetaModels

MetaModels inherit from the MetaModel abstract class. They all implement ensemble methods and are therefore based on EstimatorPools.

When training EstimatorPools, you are faced with a choice: doing parameter tuning on entire pipelines with the estimators on top, or training the estimators using the same pipeline and only tuning the top. The first case is what we will be calling **comprehensiveSearch**.

Moreover, as we will see in detail later, those EstimatorPools can be shared across MetaModels.

They are all initialised with these minimum arguments:

```python
MetaModel(comprehensiveSearch: bool = True, parameterTuning: bool = True, metrics: str = 'accuracy', nSplits: int = 10)
```

- `nSplits` corresponds to the number of splits of the cross-validation
- The other parameters are self-explanatory

**They need to be put in the MetaPipeline wrapper to work.**

**There are 3 MetaModels**

1- BestModel : selects the best performing model of the pool
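To make the idea concrete, here is a hedged sketch — plain scikit-learn, not TinyAutoML internals — of what BestModel automates: cross-validate each estimator in a small pool and keep the best one. `cv=10` mirrors the default `nSplits=10` above, and the pool here is only a subset of TinyAutoML's actual pool:

```python
# Illustrative sketch (not TinyAutoML's code): pick the estimator with
# the best mean cross-validated accuracy from a small pool.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
pool = {
    "random forest classifier": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=5000),
    "gaussian naive bayes": GaussianNB(),
}
scores = {name: cross_val_score(est, X, y, cv=10, scoring="accuracy").mean()
          for name, est in pool.items()}
best_name = max(scores, key=scores.get)
print(f"The best estimator is {best_name} "
      f"with a cross-validation accuracy of {scores[best_name]:.3f}")
```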
best_model = MetaPipeline(BestModel(comprehensiveSearch = False, parameterTuning = False))
_____no_output_____
2- OneRulerForAll : implements stacking, using a RandomForestClassifier by default. The user is free to use another classifier through the `ruler` argument
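The stacking idea itself can be sketched directly with scikit-learn's `StackingClassifier` (a hedged illustration, not OneRulerForAll's internals): base estimators feed their predictions to a final "ruler" estimator, a random forest by default:

```python
# Illustrative stacking sketch (not TinyAutoML's code): base estimators'
# predictions become features for a RandomForest "ruler" on top.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
base_estimators = [
    ("lr", LogisticRegression(max_iter=5000)),
    ("gnb", GaussianNB()),
]
stack = StackingClassifier(estimators=base_estimators,
                           final_estimator=RandomForestClassifier(random_state=0))
stack.fit(X, y)
print(f"Training accuracy: {stack.score(X, y):.3f}")
```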
orfa_model = MetaPipeline(OneRulerForAll(comprehensiveSearch=False, parameterTuning=False))
_____no_output_____
3- DemocraticModel : implements soft and hard voting models through the `voting` argument
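Soft versus hard voting can be illustrated with scikit-learn's `VotingClassifier` (again a hedged sketch of the idea, not DemocraticModel's code): `'soft'` averages the estimators' predicted probabilities, while `'hard'` takes a majority vote over predicted labels:

```python
# Illustrative voting sketch (not TinyAutoML's code).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
estimators = [
    ("rf", RandomForestClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=5000)),
    ("gnb", GaussianNB()),
]
# 'soft' averages predict_proba outputs; 'hard' is a majority vote.
soft = VotingClassifier(estimators=estimators, voting="soft").fit(X, y)
hard = VotingClassifier(estimators=estimators, voting="hard").fit(X, y)
print(f"soft: {soft.score(X, y):.3f}, hard: {hard.score(X, y):.3f}")
```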
democratic_model = MetaPipeline(DemocraticModel(comprehensiveSearch=False, parameterTuning=False, voting='soft'))
_____no_output_____
As of release v0.2.3.2 (13/04/2022) there are 5 models on which these MetaModels rely in the EstimatorPool:

- Random Forest Classifier
- Logistic Regression
- Gaussian Naive Bayes
- Linear Discriminant Analysis
- XGBoost

***

We'll use the breast_cancer dataset from sklearn as an example:
import pandas as pd
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
X = pd.DataFrame(data=cancer.data, columns=cancer.feature_names)
y = cancer.target

cut = int(len(y) * 0.8)
X_train, X_test = X[:cut], X[cut:]
y_train, y_test = y[:cut], y[cut:]
_____no_output_____
Let's train a BestModel first and reuse its Pool for the other MetaModels
best_model.fit(X_train,y_train)
INFO:root:Training models
INFO:root:The best estimator is random forest classifier with a cross-validation accuracy (in Sample) of 1.0
We can now extract the pool
pool = best_model.get_pool()
_____no_output_____
And use it when fitting the other MetaModels to skip the fitting of the underlying models:
orfa_model.fit(X_train, y_train, pool=pool)
democratic_model.fit(X_train, y_train, pool=pool)
INFO:root:Training models...
INFO:root:Training models...
Great! Let's look at the results with the scikit-learn classification report:
orfa_model.classification_report(X_test,y_test)
              precision    recall  f1-score   support

           0       0.96      1.00      0.98        26
           1       1.00      0.99      0.99        88

    accuracy                           0.99       114
   macro avg       0.98      0.99      0.99       114
weighted avg       0.99      0.99      0.99       114
Looking good! What about the ROC curve?
democratic_model.roc_curve(X_test,y_test)
_____no_output_____
Let's see how the estimators of the pool are doing individually:
best_model.get_scores(X_test,y_test)
_____no_output_____
Computer Vision Nanodegree Project: Image Captioning---In this notebook, you will train your CNN-RNN model. You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook. - the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.This notebook **will be graded**. Feel free to use the links below to navigate the notebook:- [Step 1](#step1): Training Setup- [Step 2](#step2): Train your Model- [Step 3](#step3): (Optional) Validate your Model Step 1: Training SetupIn this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**. Task 1Begin by setting the following variables:- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step. - `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary. - `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file. - `embed_size` - the dimensionality of the image and word embeddings. 
- `hidden_size` - the number of features in the hidden state of the RNN decoder. - `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). 
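The effect of `vocab_threshold` can be illustrated with a toy word-count example (pure Python, not the project's actual `Vocabulary` class): words occurring fewer than `vocab_threshold` times are excluded, so raising the threshold shrinks the vocabulary.

```python
# Toy illustration (not the project's Vocabulary class) of how a
# minimum word-count threshold controls vocabulary size.
from collections import Counter

counts = Counter("a a a b b c".split())  # {'a': 3, 'b': 2, 'c': 1}

def build_vocab(counts, vocab_threshold):
    """Keep only words that occur at least vocab_threshold times."""
    return {w for w, n in counts.items() if n >= vocab_threshold}

print(sorted(build_vocab(counts, 2)))  # → ['a', 'b']
print(sorted(build_vocab(counts, 3)))  # → ['a']
```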
If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.

Question 1**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.**Answer:** I used a pretrained ResNet-152 network to extract features (a deep CNN). In the literature other architectures like VGG16 are also used, but ResNet-152 is claimed to mitigate the vanishing gradient problem. I'm currently using 2 LSTM layers (as training is taking a lot of time); in the future I will experiment with more layers. vocab_threshold is 6; I tried 9 (meaning fewer elements in the vocab), but training seemed to converge faster with 6. Many papers suggest a batch_size of 64 or 128, so I went with 64. embed_size and hidden_size are both 512. I consulted several blogs and well-known papers like "Show, Attend and Tell" (Xu et al.), although I did not use attention for now.

(Optional) Task 2Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:- the images in the dataset have varying heights and widths, and - if using a pre-trained model, you must perform the corresponding appropriate normalization.

Question 2**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?**Answer:** I left the transform at its provided value. Empirically, these parameter values worked well in my past projects.

Task 3Next, you will specify a Python list containing the learnable parameters of the model. 
For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:

```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```

Question 3**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?**Answer:** Since the ResNet was pretrained, I trained only the embedding layer of the encoder and all layers of the decoder. The pretrained ResNet is already well suited for feature extraction, hence only the other parts of the architecture need to be trained.

Task 4Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).

Question 4**Question:** How did you select the optimizer used to train your model?**Answer:** I used the Adam optimizer, since in similar past projects it gave me better performance than SGD; I have found Adam to outperform vanilla SGD in almost all cases.
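The trainable-parameter choice above can be sketched with a minimal PyTorch example. The `backbone` and `embed` modules here are tiny hypothetical stand-ins for the pretrained ResNet and the encoder's embedding layer, used only to show the freezing mechanics:

```python
# Hedged sketch: freeze a "pretrained" backbone and train only the new
# layers, as described in the answer above. Both modules are stand-ins.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU())  # stand-in for the pretrained CNN
for p in backbone.parameters():
    p.requires_grad = False  # frozen: no gradients flow to the backbone

embed = nn.Linear(16, 4)  # new layer: stays trainable

# Only parameters with requires_grad=True are handed to the optimizer.
params = [p for p in embed.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-8)
```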
import nltk
nltk.download('punkt')
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math

## TODO #1: Select appropriate values for the Python variables below.
batch_size = 64          # batch size
vocab_threshold = 6      # minimum word count threshold
vocab_from_file = True   # if True, load existing vocab file
embed_size = 512         # dimensionality of image and word embeddings
hidden_size = 512        # number of features in hidden state of the RNN decoder
num_epochs = 3           # number of training epochs
save_every = 1           # determines frequency of saving model weights
print_every = 100        # determines window for printing average loss
log_file = 'training_log.txt'  # name of file with saved training loss and perplexity

# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
    transforms.Resize(256),                     # smaller edge of image resized to 256
    transforms.RandomCrop(224),                 # get 224x224 crop from random location
    transforms.RandomHorizontalFlip(),          # horizontally flip image with probability=0.5
    transforms.ToTensor(),                      # convert the PIL Image to a tensor
    transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
                         (0.229, 0.224, 0.225))])

# Build data loader.
data_loader = get_loader(transform=transform_train,
                         mode='train',
                         batch_size=batch_size,
                         vocab_threshold=vocab_threshold,
                         vocab_from_file=vocab_from_file)

# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)

# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)

# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)

# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()

# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())

# TODO #4: Define the optimizer.
optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-8)

# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
Done (t=1.07s)
creating index...
MIT
2_Training.ipynb
siddsrivastava/Image-captionin
Step 2: Train your ModelOnce you have executed the code cell in **Step 1**, the training procedure below should run without issue. It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works! You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:

```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```

While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :). A Note on Tuning HyperparametersTo figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information. However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models. For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. 
For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
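For reference, the "Perplexity" column in the training log is simply the exponential of the cross-entropy loss, so the two numbers in each log line carry the same information. For example, using the first logged loss from the run below:

```python
# Perplexity = exp(loss): reproducing the first entry of the training log.
import math

loss = 4.2137
perplexity = math.exp(loss)
print(f"Perplexity: {perplexity:5.4f}")  # ≈ 67.6, matching the first log entry
```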
import torch.utils.data as data
import numpy as np
import os
import requests
import time

# Open the training log file.
f = open(log_file, 'w')

old_time = time.time()
response = requests.request("GET",
                            "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
                            headers={"Metadata-Flavor": "Google"})

for epoch in range(1, num_epochs+1):

    for i_step in range(1, total_step+1):

        if time.time() - old_time > 60:
            old_time = time.time()
            requests.request("POST", "https://nebula.udacity.com/api/v1/remote/keep-alive",
                             headers={'Authorization': "STAR " + response.text})

        # Randomly sample a caption length, and sample indices with that length.
        indices = data_loader.dataset.get_train_indices()
        # Create and assign a batch sampler to retrieve a batch with the sampled indices.
        new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
        data_loader.batch_sampler.sampler = new_sampler

        # Obtain the batch.
        images, captions = next(iter(data_loader))

        # Move batch of images and captions to GPU if CUDA is available.
        images = images.to(device)
        captions = captions.to(device)

        # Zero the gradients.
        decoder.zero_grad()
        encoder.zero_grad()

        # Pass the inputs through the CNN-RNN model.
        features = encoder(images)
        outputs = decoder(features, captions)

        # Calculate the batch loss.
        loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))

        # Backward pass.
        loss.backward()

        # Update the parameters in the optimizer.
        optimizer.step()

        # Get training statistics.
        stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))

        # Print training statistics (on same line).
        print('\r' + stats, end="")
        sys.stdout.flush()

        # Print training statistics to file.
        f.write(stats + '\n')
        f.flush()

        # Print training statistics (on different line).
        if i_step % print_every == 0:
            print('\r' + stats)

    # Save the weights.
    if epoch % save_every == 0:
        torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
        torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))

# Close the training log file.
f.close()
Epoch [1/3], Step [100/6471], Loss: 4.2137, Perplexity: 67.6088
Epoch [1/3], Step [200/6471], Loss: 3.9313, Perplexity: 50.9752
...
Epoch [1/3], Step [6400/6471], Loss: 2.0924, Perplexity: 8.1044
Epoch [2/3], Step [100/6471], Loss: 2.1729, Perplexity: 8.7837
...
Epoch [2/3], Step [6400/6471], Loss: 1.8215, Perplexity: 6.1811
Epoch [3/3], Step [100/6471], Loss: 1.9881, Perplexity: 7.3014
...
Epoch [3/3], Step [6400/6471], Loss: 1.9052, Perplexity: 6.7209
Epoch [3/3], Step [6471/6471], Loss: 2.0248, Perplexity: 7.5750
MIT
2_Training.ipynb
siddsrivastava/Image-captionin
Step 3: (Optional) Validate your Model

To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.

If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.

The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/download) for the COCO dataset.
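As a rough sanity check before setting up the full coco-caption toolkit, the core of BLEU — clipped n-gram precision combined with a brevity penalty — can be sketched with only the standard library. This `sentence_bleu` helper is a hypothetical, simplified illustration (no smoothing), not part of the project code or the official scorer:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(references, candidate, max_n=4):
    """BLEU for one candidate caption against several reference captions."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        if not cand_counts:
            return 0.0
        # Clip each candidate n-gram count by its max count over the references.
        max_ref = Counter()
        for ref in refs:
            for gram, c in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        if clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / sum(cand_counts.values())))
    # Brevity penalty against the closest reference length.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / len(cand))
    return bp * math.exp(sum(log_precisions) / max_n)
```

The real evaluation should still go through coco-caption, which handles tokenization, smoothing, and corpus-level aggregation; this sketch is only for spot-checking a few generated captions.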
# (Optional) TODO: Validate your model.
Mount Google Drive to Colab
from google.colab import drive

drive.mount("/content/drive")
Mounted at /content/drive
MIT
Models/CNN_best.ipynb
DataMas/Deep-Learning-Image-Classification
Import libraries
import os
import random
import shutil
import time
import math

import numpy as np
import pandas as pd
import cv2
from PIL import Image, ImageOps

import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')

import tensorflow as tf
from keras import models, layers, optimizers, losses
from keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator

from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import LabelBinarizer, MinMaxScaler
Initialize basic working directories
directory = "drive/MyDrive/Datasets/Sign digits/Dataset"
trainDir = "train"
testDir = "test"

os.chdir(directory)
Augmented dataframes
augDir = "augmented/"

classNames_train = os.listdir(augDir+'train/')
classNames_test = os.listdir(augDir+'test/')

classes_train = []
data_train = []
paths_train = []

classes_test = []
data_test = []
paths_test = []

classes_val = []
data_val = []
paths_val = []

# Collect file paths and labels; the test folder of each class is split
# in half into a test set and a validation set.
for className in range(0, 10):
    temp_train = os.listdir(augDir+'train/'+str(className))
    temp_test = os.listdir(augDir+'test/'+str(className))
    for dataFile in temp_train:
        path_train = augDir+'train/'+str(className)+'/'+dataFile
        paths_train.append(path_train)
        classes_train.append(str(className))
    testSize = [i for i in range(math.floor(len(temp_test)/2), len(temp_test))]
    valSize = [i for i in range(0, math.floor(len(temp_test)/2))]
    for dataFile in testSize:
        path_test = augDir+'test/'+str(className)+'/'+temp_test[dataFile]
        paths_test.append(path_test)
        classes_test.append(str(className))
    for dataFile in valSize:
        path_val = augDir+'test/'+str(className)+'/'+temp_test[dataFile]
        paths_val.append(path_val)
        classes_val.append(str(className))

augTrain_df = pd.DataFrame({'fileNames': paths_train, 'labels': classes_train})
augTest_df = pd.DataFrame({'fileNames': paths_test, 'labels': classes_test})
augVal_df = pd.DataFrame({'fileNames': paths_val, 'labels': classes_val})

augTrain_df.head(10)

# Check the class balance of each split.
augTrain_df['labels'].hist(figsize=(10,5))
augTest_df['labels'].hist(figsize=(10,5))
augVal_df['labels'].hist(figsize=(10,5))

augTrainX = []
augTrainY = []
augTestX = []
augTestY = []
augValX = []
augValY = []

# Read images from the train set.
iter = -1
for path in augTrain_df['fileNames']:
    iter = iter + 1
    image = cv2.imread(path)
    augTrainX.append(image)
    label = augTrain_df['labels'][iter]
    augTrainY.append(label)

# Read images from the test set.
iter = -1
for path in augTest_df['fileNames']:
    iter = iter + 1
    image = cv2.imread(path)
    augTestX.append(image)
    augTestY.append(augTest_df['labels'][iter])

# Read images from the validation set.
iter = -1
for path in augVal_df['fileNames']:
    iter = iter + 1
    image = cv2.imread(path)
    augValX.append(image)
    augValY.append(augVal_df['labels'][iter])

augTrainX = np.array(augTrainX)
augTestX = np.array(augTestX)
augValX = np.array(augValX)

# Scale pixel values to [0, 1].
augTrainX = augTrainX / 255
augTestX = augTestX / 255
augValX = augValX / 255

# One-hot encode the output.
augTrainY = np_utils.to_categorical(augTrainY, 10)
augTestY = np_utils.to_categorical(augTestY, 10)
augValY = np_utils.to_categorical(augValY, 10)

train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_dataframe(dataframe=augTrain_df,
                                                    x_col="fileNames",
                                                    y_col="labels",
                                                    batch_size=16,
                                                    class_mode="categorical",
                                                    color_mode="grayscale",
                                                    target_size=(100,100),
                                                    shuffle=True)

validation_generator = validation_datagen.flow_from_dataframe(dataframe=augVal_df,
                                                              x_col="fileNames",
                                                              y_col="labels",
                                                              batch_size=16,
                                                              class_mode="categorical",
                                                              color_mode="grayscale",
                                                              target_size=(100,100),
                                                              shuffle=True)

test_generator = test_datagen.flow_from_dataframe(dataframe=augTest_df,
                                                  x_col="fileNames",
                                                  y_col="labels",
                                                  batch_size=16,
                                                  class_mode="categorical",
                                                  color_mode="grayscale",
                                                  target_size=(100,100),
                                                  shuffle=True)

model_best = models.Sequential()
model_best.add(layers.Conv2D(64, (3,3), input_shape=(100, 100, 1), padding='same', activation='relu'))
model_best.add(layers.BatchNormalization(momentum=0.1))
model_best.add(layers.MaxPooling2D(pool_size=(2,2)))
model_best.add(layers.Conv2D(32, (3,3), padding='same', activation='relu'))
model_best.add(layers.BatchNormalization(momentum=0.1))
model_best.add(layers.MaxPooling2D(pool_size=(2,2)))
model_best.add(layers.Conv2D(16, (3,3), padding='same', activation='relu'))
model_best.add(layers.BatchNormalization(momentum=0.1))
model_best.add(layers.MaxPooling2D(pool_size=(2,2)))
model_best.add(layers.Flatten())
model_best.add(layers.Dense(128, activation='relu'))
model_best.add(layers.Dropout(0.2))
model_best.add(layers.Dense(10, activation='softmax'))

model_best.summary()

print("[INFO] Model is training...")
time1 = time.time()  # to measure time taken

# Compile the model.
model_best.compile(loss='categorical_crossentropy',
                   optimizer=optimizers.Adam(learning_rate=1e-3),
                   metrics=['acc'])

history_best = model_best.fit(
    train_generator,
    steps_per_epoch=train_generator.samples/train_generator.batch_size,
    epochs=20,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples/validation_generator.batch_size,
)

print('Time taken: {:.1f} seconds'.format(time.time() - time1))  # to measure time taken
print("[INFO] Model is trained.")

score = model_best.evaluate(test_generator)
print('===Testing loss and accuracy===')
print('Test loss: ', score[0])
print('Test accuracy: ', score[1])

plt.plot(history_best.history['acc'])
plt.plot(history_best.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()

plt.plot(history_best.history['loss'])
plt.plot(history_best.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
We solve the logistic regression problem with l1 regularization:

$$F(w) = - \frac{1}{N}\sum\limits_{i=1}^N\left[y_i\ln(\sigma_w(x_i)) + (1 - y_i)\ln(1 - \sigma_w(x_i))\right] + \lambda\|w\|_1,$$

where $\lambda$ is the regularization parameter.

We solve the problem with the proximal gradient method. First, let us verify that for $\lambda = 0$ our solution coincides with that of gradient descent with Nesterov's step-length estimation.
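The key ingredient of the proximal gradient method for this objective is the proximal operator of the l1 term, which reduces to coordinate-wise soft-thresholding. A minimal sketch of one iteration (the helper names are illustrative; `OptimizeLassoProximal` is assumed to do something equivalent internally):

```python
import numpy as np

def soft_threshold(w, t):
    # prox of t * ||.||_1: shrink every coordinate toward zero by t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def ista_step(w, grad_f, alpha, lam):
    # One proximal gradient (ISTA) step for F = f + lam * ||w||_1:
    # a gradient step on the smooth part f, then the prox of the l1 part.
    return soft_threshold(w - alpha * grad_f, alpha * lam)
```

Soft-thresholding is what produces exact zeros in the solution, which is why the number of nonzero components is studied below.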
orac = make_oracle('a1a.txt', penalty='l1', reg=0)
orac1 = make_oracle('a1a.txt')

x, y = load_svmlight_file('a1a.txt', zero_based=False)
m = x[0].shape[1] + 1
w0 = np.zeros((m, 1))

optimizer = OptimizeLassoProximal()
optimizer1 = OptimizeGD()

point = optimizer(orac, w0)
point1 = optimizer1(orac1, w0, NesterovLineSearch())

np.allclose(point, point1)
Apache-2.0
HW_exam/.ipynb_checkpoints/Exam_Prazdnichnykh-checkpoint.ipynb
AntonPrazdnichnykh/HSE.optimization
Let us study the convergence rate of the method on the a1a.txt dataset ($\lambda = 0.001$).
def convergence_plot(xs, ys, xlabel, title=None):
    plt.figure(figsize=(12, 3))
    plt.xlabel(xlabel)
    plt.ylabel('F(w_{k+1}) - F(w_k)')
    plt.plot(xs, ys)
    plt.yscale('log')
    if title:
        plt.title(title)
    plt.tight_layout()
    plt.show()

orac = make_oracle('a1a.txt', penalty='l1', reg=0.001)
point = optimizer(orac, w0)
errs = optimizer.errs

title = 'lambda = 0.001'
convergence_plot(optimizer.times, errs, 'wall-clock time, s', title)
convergence_plot(optimizer.orac_calls, errs, 'number of oracle calls', title)
convergence_plot(list(range(1, optimizer.n_iter + 1)), errs, 'number of iterations', title)
Note that the stopping criterion $F(w_{k+1}) - F(w_k) \leq tol = 10^{-16}$ was used. From a mathematical standpoint this seems reasonable, since in the reals convergence of a sequence is equivalent to it being a Cauchy sequence. I also tried using $\|\nabla_w f(w_k)\|_2^2 / \|\nabla_w f(w_0)\|_2^2 \leq tol$ as a stopping criterion, where $f$ is the logistic regression loss without regularization ($F = f + reg$), but strictly speaking it is unclear whether this is valid, since it accounts for only part of the function.

The plots show that the method converges linearly.

Let us now study how the convergence rate and the number of nonzero components in the solution depend on the regularization parameter $\lambda$.
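The two stopping criteria discussed above can be written down directly; a small sketch (these function names are illustrative, not the optimizer's actual API):

```python
import numpy as np

def stop_by_value(F_prev, F_curr, tol=1e-16):
    # Stop when successive objective values differ by at most tol.
    return abs(F_curr - F_prev) <= tol

def stop_by_grad(grad_k, grad_0, tol=1e-16):
    # Relative squared gradient norm of the smooth part f only
    # (questionable for F = f + reg, as noted above).
    return float(grad_k @ grad_k) / float(grad_0 @ grad_0) <= tol
```
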
def plot(x, ys, ylabel, legend=False):
    plt.figure(figsize=(12, 3))
    plt.xlabel("lambda")
    plt.ylabel(ylabel)
    plt.plot(x, ys, 'o')
    plt.xscale('log')
    if legend:
        plt.legend()
    plt.tight_layout()
    plt.show()

lambdas = [10**(-i) for i in range(8, 0, -1)]
non_zeros = []
for reg in lambdas:
    orac = make_oracle('a1a.txt', penalty='l1', reg=reg)
    point = optimizer(orac, w0)
    convergence_plot(list(range(1, optimizer.n_iter + 1)), optimizer.errs,
                     'number of iterations', f"lambda = {reg}")
    non_zeros.append(len(np.nonzero(point)[0]))

plot(lambdas, non_zeros, '# nonzero components')
We see that the regularization parameter has almost no effect on the convergence rate (it remains linear), but the number of iterations decreases as the regularization parameter grows. From the last plot we also draw the expected conclusion that the number of nonzero components in the solution decreases as the regularization parameter grows.

Let us also plot the value of the objective function, and (once more) the stopping criterion, against the iteration number ($\lambda = 0.001$).
def value_plot(xs, ys, xlabel, title=None):
    plt.figure(figsize=(12, 3))
    plt.xlabel(xlabel)
    plt.ylabel('F(w_k)')
    plt.plot(xs, ys)
    # plt.yscale('log')
    if title:
        plt.title(title)
    plt.tight_layout()
    plt.show()

orac = make_oracle('a1a.txt', penalty='l1', reg=0.001)
point = optimizer(orac, w0)

title = 'lambda = 0.001'
value_plot(list(range(1, optimizer.n_iter + 1)), optimizer.values, 'number of iterations', title)
convergence_plot(list(range(1, optimizer.n_iter + 1)), optimizer.errs, 'number of iterations', title)
To confirm the conclusions above, let us also check them on the breast-cancer_scale dataset. Checking the equivalence of GD + Nesterov and Proximal with $\lambda = 0$:
orac = make_oracle('breast-cancer_scale.txt', penalty='l1', reg=0)
orac1 = make_oracle('breast-cancer_scale.txt')

x, y = load_svmlight_file('breast-cancer_scale.txt', zero_based=False)
m = x[0].shape[1] + 1
w0 = np.zeros((m, 1))

optimizer = OptimizeLassoProximal()
optimizer1 = OptimizeGD()

point = optimizer(orac, w0)
point1 = optimizer1(orac1, w0, NesterovLineSearch())

np.allclose(point, point1)
print(abs(orac.value(point) - orac1.value(point1)))
0.0001461093710795336