To perform binary classification on tabular data, SageMaker provides the following built-in algorithms:

- XGBoost Algorithm
- Linear Learner Algorithm
- K-Nearest Neighbors (k-NN) Algorithm

Create model 1: XGBoost model in SageMaker

Use the XGBoost built-in algorithm to build an XGBoost training container as shown in the following code example. You can retrieve the XGBoost built-in algorithm image URI automatically using the SageMaker `image_uris.retrieve` API (or the `get_image_uri` API if using Amazon SageMaker Python SDK version 1). To verify that the `image_uris.retrieve` API finds the correct URI, see Common parameters for built-in algorithms and look up XGBoost in the full list of built-in algorithm image URIs and available regions.

After specifying the XGBoost image URI, you can use the XGBoost container to construct an estimator using the SageMaker Estimator API and initiate a training job. This built-in algorithm mode does not incorporate your own XGBoost training script; it runs directly on the input datasets.

See https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html for more information.
```r
container <- sagemaker$image_uris$retrieve(framework='xgboost',
                                           region=session$boto_region_name,
                                           version='latest')
cat('XGBoost Container Image URL: ', container)

s3_output <- paste0('s3://', bucket, '/output_xgboost')

estimator1 <- sagemaker$estimator$Estimator(image_uri = container,
                                            role = role_arn,
                                            train_instance_count = 1L,
                                            train_instance_type = 'ml.m5.4xlarge',
                                            input_mode = 'File',
                                            output_path = s3_output,
                                            output_kms_key = NULL,
                                            base_job_name = NULL,
                                            sagemaker_session = NULL)
```
Apache-2.0
r_examples/r_sagemaker_binary_classification_algorithms/R_binary_classification_algorithms_comparison.ipynb
vllyakho/amazon-sagemaker-examples
How would an untuned model perform compared to a tuned model? Is it worth the effort? Before going deeper into XGBoost model tuning, let's highlight why you should tune your model: hyperparameter tuning improves the predictive performance of a model by choosing its hyperparameters in a principled way.

There are three common strategies for hyperparameter tuning: grid search, random search, and Bayesian search. Popular packages like scikit-learn implement grid search and random search; SageMaker uses Bayesian search.

We need to choose:

- a learning objective function to optimize during model training
- an eval_metric to use to evaluate model performance during validation
- a set of hyperparameters, and a range of values for each, to use when tuning the model automatically

The SageMaker XGBoost model can be tuned with many hyperparameters. The hyperparameters with the greatest effect on the XGBoost evaluation metrics are:

- alpha
- min_child_weight
- subsample
- eta
- num_round

The required hyperparameters are num_class (the number of classes, for multi-class classification problems) and num_round (the number of rounds to run the training for). All other hyperparameters are optional and fall back to default values if not specified.
```r
# Set the static hyperparameters (check the documentation for which are required and which are optional)
estimator1$set_hyperparameters(eval_metric='auc',
                               objective='binary:logistic',
                               num_round = 6L)

# Set the hyperparameter ranges (check which are integer and which are continuous parameters)
hyperparameter_ranges = list('eta' = sagemaker$parameter$ContinuousParameter(0, 1),
                             'min_child_weight' = sagemaker$parameter$ContinuousParameter(0, 10),
                             'alpha' = sagemaker$parameter$ContinuousParameter(0, 2),
                             'max_depth' = sagemaker$parameter$IntegerParameter(0L, 10L))
```
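To make the difference between the search strategies concrete, here is a small Python sketch of grid search and random search over two hyperparameters. The `objective` function is a made-up stand-in for a validation metric (higher is better), not anything SageMaker computes; Bayesian search, which SageMaker uses, additionally fits a surrogate model over past results to pick the next candidate, and is omitted here.

```python
import itertools
import random

def objective(eta, max_depth):
    """Made-up stand-in for a validation metric (higher is better)."""
    return -(eta - 0.3) ** 2 - (max_depth - 6) ** 2 / 100

# Grid search: evaluate every combination on a fixed grid.
grid = list(itertools.product([0.1, 0.3, 0.5], [2, 6, 10]))
best_grid = max(grid, key=lambda p: objective(*p))

# Random search: sample combinations uniformly from the ranges.
random.seed(0)
samples = [(random.uniform(0, 1), random.randint(1, 10)) for _ in range(9)]
best_random = max(samples, key=lambda p: objective(*p))

print(best_grid)  # (0.3, 6)
```

Grid search is exhaustive but scales poorly with the number of hyperparameters; random search covers continuous ranges better for the same budget, which is why both are common baselines.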
The evaluation metric that we will use for our binary classification problem is validation:auc, but you could use any other metric that suits your problem. Be careful to set objective_type to Maximize or Minimize according to the objective metric you have chosen.
```r
# Create a hyperparameter tuner
objective_metric_name = 'validation:auc'
tuner1 <- sagemaker$tuner$HyperparameterTuner(estimator1,
                                              objective_metric_name,
                                              hyperparameter_ranges,
                                              objective_type='Maximize',
                                              max_jobs=4L,
                                              max_parallel_jobs=2L)

# Define the data channels for the train and validation datasets
input_data <- list('train' = s3_train_input,
                   'validation' = s3_valid_input)

# Train the tuner
tuner1$fit(inputs = input_data,
           job_name = paste('tune-xgb', format(Sys.time(), '%Y%m%d-%H-%M-%S'), sep = '-'),
           wait=TRUE)
```
The output of the tuning job can be inspected in the SageMaker console if needed.

Calculate AUC for the test data on model 1

When we deploy the model, SageMaker automatically selects the training job with the best evaluation metric and loads the hyperparameters associated with it. One of the benefits of SageMaker is that we can easily deploy the model onto a different instance type than the one the notebook is running on, so we can deploy onto a more powerful or less powerful instance as needed.
```r
model_endpoint1 <- tuner1$deploy(initial_instance_count = 1L,
                                 instance_type = 'ml.t2.medium')
```
The serializer tells SageMaker what format the model expects its input data to be in.
```r
model_endpoint1$serializer <- sagemaker$serializers$CSVSerializer(content_type='text/csv')
```
We input the `iris_test` dataset without the labels into the model using the `predict` function and check its AUC value.
```r
# Prepare the test sample for input into the model
test_sample <- as.matrix(iris_test[-1])
dimnames(test_sample)[[2]] <- NULL

# Predict using the deployed model
predictions_ep <- model_endpoint1$predict(test_sample)
predictions_ep <- stringr::str_split(predictions_ep, pattern = ',', simplify = TRUE)
# Convert the returned strings to numbers before thresholding at 0.5
predictions_ep <- as.integer(as.numeric(predictions_ep) > 0.5)

# Add the predictions to the test dataset
iris_predictions_ep1 <- dplyr::bind_cols(predicted_flower = predictions_ep, iris_test)
iris_predictions_ep1

# Get the AUC
auc(roc(iris_predictions_ep1$predicted_flower, iris_test$Species))
```
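As a reference for what the AUC measures: it is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counted as one half. A minimal Python sketch (the labels and scores below are made up for illustration):

```python
def auc_score(labels, scores):
    """AUC as the Mann-Whitney statistic: P(random positive scores above
    a random negative), counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
print(auc_score(labels, scores))  # 0.888... (8 of 9 positive/negative pairs ranked correctly)
```

An AUC of 1 means every positive outranks every negative; 0.5 is chance level.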
Create model 2: Linear Learner in SageMaker

Linear models are supervised learning algorithms used for solving either classification or regression problems. For input, you give the model labeled examples (x, y): x is a high-dimensional vector and y is a numeric label. For binary classification problems, the label must be either 0 or 1.

The linear learner algorithm requires a data matrix, with rows representing the observations and columns representing the dimensions of the features, plus an additional column that contains the labels matching the data points. At a minimum, Amazon SageMaker Linear Learner requires you to specify input and output data locations and the objective type (classification or regression) as arguments. The feature dimension is also required. You can specify additional parameters in the HyperParameters string map of the request body; these parameters control the optimization procedure or specifics of the objective function that you train on, for example the number of epochs, regularization, and loss type.

See https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html for more information.
```r
container <- sagemaker$image_uris$retrieve(framework='linear-learner',
                                           region=session$boto_region_name,
                                           version='latest')
cat('Linear Learner Container Image URL: ', container)

s3_output <- paste0('s3://', bucket, '/output_glm')

estimator2 <- sagemaker$estimator$Estimator(image_uri = container,
                                            role = role_arn,
                                            train_instance_count = 1L,
                                            train_instance_type = 'ml.m5.4xlarge',
                                            input_mode = 'File',
                                            output_path = s3_output,
                                            output_kms_key = NULL,
                                            base_job_name = NULL,
                                            sagemaker_session = NULL)
```
For the text/csv input type, the first column is assumed to be the label, which is the target variable for prediction. predictor_type is the only hyperparameter that must be pre-defined for tuning; the rest are optional.

Normalization, or feature scaling, is an important preprocessing step for certain loss functions: it ensures that no single feature dominates the model being trained. Decision trees do not require normalization of their inputs, and since XGBoost is essentially an ensemble of decision trees, it does not require normalized inputs either. Generalized linear models, however, do require normalization of their input. The Amazon SageMaker Linear Learner algorithm has a normalization option to assist with this preprocessing step. If normalization is turned on, the algorithm first goes over a small sample of the data to learn the mean value and standard deviation of each feature and of the label. Each feature in the full dataset is then shifted to have zero mean and scaled to unit standard deviation.

Conveniently, we do not have to go back to our earlier steps to do this: normalization is built into the SageMaker Linear Learner algorithm as a hyperparameter, so there is no need to normalize the training data ourselves.
```r
estimator2$set_hyperparameters(predictor_type="binary_classifier",
                               normalize_data = TRUE)
```
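Conceptually, the normalization step is ordinary standardization: each feature is shifted by its mean and divided by its standard deviation. A minimal Python sketch of the transformation (the feature columns here are hypothetical; Linear Learner estimates the statistics from a small sample of the data rather than computing them exactly as below):

```python
import statistics

def standardize(columns):
    """Shift each feature column to mean 0 and scale it to unit standard deviation."""
    scaled = []
    for col in columns:
        mu = statistics.mean(col)
        sigma = statistics.pstdev(col)  # population standard deviation, for simplicity
        scaled.append([(x - mu) / sigma for x in col])
    return scaled

# Hypothetical dataset: two features on very different scales.
features = [[1.0, 2.0, 3.0], [100.0, 200.0, 300.0]]
scaled = standardize(features)
print(scaled[0])  # roughly [-1.2247, 0.0, 1.2247]
```

After the transformation both columns have identical scale, so neither can dominate the loss purely through its units.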
The tunable hyperparameters for Linear Learner are:

- wd
- l1
- learning_rate
- mini_batch_size
- use_bias
- positive_example_weight_mult

Be careful to check which parameters are integers and which are continuous, since mixing these up is a common source of errors. Also take care to give each hyperparameter a range that makes sense for your problem; for example, training jobs can fail if the mini-batch size is too big compared to the available training data.
```r
# Set the hyperparameter ranges
hyperparameter_ranges = list('wd' = sagemaker$parameter$ContinuousParameter(0.00001, 1),
                             'l1' = sagemaker$parameter$ContinuousParameter(0.00001, 1),
                             'learning_rate' = sagemaker$parameter$ContinuousParameter(0.00001, 1),
                             'mini_batch_size' = sagemaker$parameter$IntegerParameter(10L, 50L))
```
The evaluation metric we will use for this model is the objective loss, computed on the validation dataset.
```r
# Create a hyperparameter tuner
objective_metric_name = 'validation:objective_loss'
tuner2 <- sagemaker$tuner$HyperparameterTuner(estimator2,
                                              objective_metric_name,
                                              hyperparameter_ranges,
                                              objective_type='Minimize',
                                              max_jobs=4L,
                                              max_parallel_jobs=2L)

# Create a tuning job name
job_name <- paste('tune-linear', format(Sys.time(), '%Y%m%d-%H-%M-%S'), sep = '-')

# Define the data channels for the train and validation datasets
input_data <- list('train' = s3_train_input,
                   'validation' = s3_valid_input)

# Train the tuner; since we are using CSV files as input, we must specify the content type
tuner2$fit(inputs = input_data,
           job_name = job_name,
           wait=TRUE,
           content_type='csv')
```
Calculate AUC for the test data on model 2
```r
# Deploy the model onto an instance of your choosing
model_endpoint2 <- tuner2$deploy(initial_instance_count = 1L,
                                 instance_type = 'ml.t2.medium')
```
For inference, the Linear Learner algorithm supports the application/json, application/x-recordio-protobuf, and text/csv formats. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/LL-in-formats.html.
```r
# Specify the data formats for the input and output of the model
model_endpoint2$serializer <- sagemaker$serializers$CSVSerializer(content_type='text/csv')
model_endpoint2$deserializer <- sagemaker$deserializers$JSONDeserializer()
```
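To illustrate what the CSV serializer produces: each row of the input matrix becomes one comma-separated line, and the lines are joined with newlines to form the request body. A small Python sketch with hypothetical feature rows (no label column, as the endpoint expects):

```python
# Hypothetical feature rows, shaped like the iris test samples used below.
rows = [[5.1, 3.5, 1.4, 0.2],
        [6.2, 2.9, 4.3, 1.3]]

# Each row becomes one comma-separated line; rows are joined with newlines.
payload = "\n".join(",".join(str(v) for v in row) for row in rows)
print(payload)
```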
Linear Learner inference output is in JSON or RecordIO format (https://docs.aws.amazon.com/sagemaker/latest/dg/LL-in-formats.html). When you make predictions on new data, the contents of the response depend on the type of model you choose within Linear Learner. For regression (predictor_type='regressor'), the score is the prediction produced by the model. For classification (predictor_type='binary_classifier' or predictor_type='multiclass_classifier'), the model returns a score and a predicted_label. The predicted_label is the class predicted by the model, and the score measures the strength of that prediction. So, for binary classification, predicted_label is 0 or 1, and the score is a single floating point number that indicates how strongly the algorithm believes that the label should be 1.

To interpret the score in classification problems, you have to consider the loss function used. If the loss hyperparameter is logistic for binary classification or softmax_loss for multiclass classification, then the score can be interpreted as the probability of the corresponding class; these are the losses Linear Learner uses when `loss` is left at its default value of auto. But if `loss` is set to `hinge_loss`, the score cannot be interpreted as a probability, because hinge loss corresponds to a support vector classifier, which does not produce probability estimates. In the current example, since our loss is logistic for binary classification, we can interpret the score as the probability of the corresponding class.
```r
# Prepare the test data for input into the model
test_sample <- as.matrix(iris_test[-1])
dimnames(test_sample)[[2]] <- NULL

# Predict using the test data on the deployed model
predictions_ep <- model_endpoint2$predict(test_sample)

# Add the predictions to the test dataset
df <- data.frame(matrix(unlist(predictions_ep$predictions),
                        nrow=length(predictions_ep$predictions),
                        byrow=TRUE))
df <- df %>% dplyr::rename(score = X1, predicted_label = X2)
iris_predictions_ep2 <- dplyr::bind_cols(predicted_flower = df$predicted_label, iris_test)
iris_predictions_ep2

# Get the AUC
auc(roc(iris_predictions_ep2$predicted_flower, iris_test$Species))
```
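To make the probability interpretation concrete: with the logistic loss, the model's raw linear output is mapped through the sigmoid function, and thresholding the resulting probability at 0.5 gives the predicted label. A small Python sketch (the raw score below is hypothetical, not a value returned by the endpoint):

```python
import math

def sigmoid(z):
    """Map a raw linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

raw_score = 1.2  # hypothetical raw output of the linear model (w.x + b)
probability = sigmoid(raw_score)
predicted_label = int(probability > 0.5)
print(round(probability, 3), predicted_label)  # 0.769 1
```

A hinge-loss model skips this mapping entirely: its score is a signed margin, which is why it cannot be read as a probability.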
Create model 3: KNN in SageMaker

The Amazon SageMaker k-nearest neighbors (k-NN) algorithm is an index-based algorithm that uses a non-parametric method for classification or regression. For classification problems, the algorithm queries the k points that are closest to the sample point and returns the most frequent label among them as the predicted label. For regression problems, the algorithm queries the k closest points to the sample point and returns the average of their feature values as the predicted value.

Training with the k-NN algorithm has three steps: sampling, dimension reduction, and index building. Sampling reduces the size of the initial dataset so that it fits into memory. For dimension reduction, the algorithm decreases the feature dimension of the data to reduce the footprint of the k-NN model in memory and the inference latency. Two dimension reduction methods are provided: random projection and the fast Johnson-Lindenstrauss transform. Typically, you use dimension reduction for high-dimensional (d > 1000) datasets to avoid the "curse of dimensionality" that troubles the statistical analysis of data that becomes sparse as dimensionality increases. The main objective of k-NN training is to construct the index, which enables efficient lookups of the k nearest points to use for inference.

See https://docs.aws.amazon.com/sagemaker/latest/dg/k-nearest-neighbors.html for more information.
```r
container <- sagemaker$image_uris$retrieve(framework='knn',
                                           region=session$boto_region_name,
                                           version='latest')
cat('KNN Container Image URL: ', container)

s3_output <- paste0('s3://', bucket, '/output_knn')

estimator3 <- sagemaker$estimator$Estimator(image_uri = container,
                                            role = role_arn,
                                            train_instance_count = 1L,
                                            train_instance_type = 'ml.m5.4xlarge',
                                            input_mode = 'File',
                                            output_path = s3_output,
                                            output_kms_key = NULL,
                                            base_job_name = NULL,
                                            sagemaker_session = NULL)
```
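The inference step described above — query the k closest points and return their most frequent label — can be sketched in a few lines of plain Python. This toy version scans all points with no index structure, unlike SageMaker's optimized index-based implementation, and the 2-D data is made up:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    by_distance = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: class 0 clustered near the origin, class 1 near (5, 5).
train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = [0, 0, 0, 1, 1, 1]
print(knn_predict(train_X, train_y, (0.5, 0.5)))  # 0
print(knn_predict(train_X, train_y, (5.5, 5.5)))  # 1
```

The index built during SageMaker's training phase exists precisely to avoid the full scan in `sorted(...)` above at inference time.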
The hyperparameter `dimension_reduction_target` should not be set when `dimension_reduction_type` is left at its default value of `None`. If `dimension_reduction_target` is set without setting `dimension_reduction_type`, SageMaker will ask us to remove `dimension_reduction_target` from the specified hyperparameters and try again. In this tutorial we are not performing dimensionality reduction, since we only have 4 features, so `dimension_reduction_type` keeps its default value of `None`.
```r
estimator3$set_hyperparameters(feature_dim = 4L,
                               sample_size = 10L,
                               predictor_type = "classifier")
```
The Amazon SageMaker k-nearest neighbor model can be tuned with the following hyperparameters:

- k
- sample_size
```r
# Set the hyperparameter ranges
hyperparameter_ranges = list('k' = sagemaker$parameter$IntegerParameter(1L, 10L))

# Create a hyperparameter tuner
objective_metric_name = 'test:accuracy'
tuner3 <- sagemaker$tuner$HyperparameterTuner(estimator3,
                                              objective_metric_name,
                                              hyperparameter_ranges,
                                              objective_type='Maximize',
                                              max_jobs=2L,
                                              max_parallel_jobs=2L)

# Create a tuning job name
job_name <- paste('tune-knn', format(Sys.time(), '%Y%m%d-%H-%M-%S'), sep = '-')

# Define the data channels; k-NN requires a test channel and does not work without it
input_data <- list('train' = s3_train_input,
                   'test' = s3_valid_input)

# Train the tuner
tuner3$fit(inputs = input_data,
           job_name = job_name,
           wait=TRUE,
           content_type='text/csv;label_size=0')
```
Calculate AUC for the test data on model 3
```r
# Deploy the model onto an instance of your choosing
model_endpoint3 <- tuner3$deploy(initial_instance_count = 1L,
                                 instance_type = 'ml.t2.medium')
```
For inference, the k-NN algorithm accepts CSV, JSON, JSONLINES, and RecordIO request formats. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/kNN-inference-formats.html.
```r
# Specify the data formats for the input and output of the model
model_endpoint3$serializer <- sagemaker$serializers$CSVSerializer(content_type='text/csv')
model_endpoint3$deserializer <- sagemaker$deserializers$JSONDeserializer()
```
In k-NN, the input formats for inference are:

- CSV
- JSON
- JSONLINES
- RecordIO

The output formats for inference are:

- JSON
- JSONLINES
- Verbose JSON
- Verbose RecordIO-protobuf

Notice that there is no CSV output format for inference. See https://docs.aws.amazon.com/sagemaker/latest/dg/kNN-inference-formats.html for more details.
```r
# Prepare the test data for input into the model
test_sample <- as.matrix(iris_test[-1])
dimnames(test_sample)[[2]] <- NULL

# Predict using the test data on the deployed model
predictions_ep <- model_endpoint3$predict(test_sample)
```
We see that the output has been deserialized from JSON into an R list.
```r
predictions_ep
typeof(predictions_ep)

# Add the predictions to the test dataset
df <- data.frame(predicted_flower = unlist(predictions_ep$predictions))
iris_predictions_ep3 <- dplyr::bind_cols(predicted_flower = df$predicted_flower, iris_test)
iris_predictions_ep3

# Get the AUC
auc(roc(iris_predictions_ep3$predicted_flower, iris_test$Species))
```
Compare the AUC of the 3 models for the test data

- AUC of SageMaker XGBoost = 1
- AUC of SageMaker Linear Learner = 0.83
- AUC of SageMaker KNN = 1

Based on the AUC metric (the higher the better), both XGBoost and KNN perform equally well and better than the Linear Learner. We could also compare the 3 models with other binary classification metrics such as accuracy, F1 score, and misclassification error. Comparing only the AUC in this example, we could choose either the XGBoost model or the KNN model to move to production and close the other two. The chosen deployed model can then generate predictions of the flower species given only the sepal and petal measurements, and its performance can be tracked in Amazon CloudWatch.

Clean up

We delete the endpoints we created to free up resources.
```r
model_endpoint1$delete_model()
model_endpoint2$delete_model()
model_endpoint3$delete_model()

session$delete_endpoint(model_endpoint1$endpoint)
session$delete_endpoint(model_endpoint2$endpoint)
session$delete_endpoint(model_endpoint3$endpoint)
```
"""SPEECh: Scalable Probabilistic Estimates of EV ChargingCode first published in October 2021.Developed by Siobhan Powell (siobhan.powell@stanford.edu).""" This code produces the data set of driver group profiles that has been posted Siobhan Powell, October 2021
```python
from speech import DataSetConfigurations
from speech import SPEECh
from speech import SPEEChGeneralConfiguration
from speech import Plotting
from speech import LoadProfile
import os
os.chdir('..')
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

# Weekday profiles
total_evs = 10000
weekday_option = 'weekday'
# data = DataSetConfigurations('NewData', ng=9)
data = DataSetConfigurations('Original16', ng=16)
model = SPEECh(data)
config = SPEEChGeneralConfiguration(model)
config.num_evs(total_evs)
config.groups()

results = {}
n = 10000
for i in range(data.ng):
    j = data.cluster_reorder_dendtoac[i]
    config.group_configs[j].numbers(total_drivers=n)
    config.group_configs[j].load_gmms()
    model = LoadProfile(config, config.group_configs[j], weekday=weekday_option)
    model.calculate_load()
    (pd.DataFrame(model.load_segments_dict) / n).to_csv(
        'Output_Data/group' + str(int(i + 1)) + '_weekday.csv', index=None)
    results[i] = pd.DataFrame(model.load_segments_dict) / n

# Weekend profiles
total_evs = 10000
weekday_option = 'weekend'
# data = DataSetConfigurations('NewData', ng=9)
data = DataSetConfigurations('Original16', ng=16)
model = SPEECh(data)
config = SPEEChGeneralConfiguration(model)
config.num_evs(total_evs)
config.groups()

results = {}
n = 10000
for i in range(data.ng):
    j = data.cluster_reorder_dendtoac[i]
    config.group_configs[j].numbers(total_drivers=n)
    config.group_configs[j].load_gmms()
    model = LoadProfile(config, config.group_configs[j], weekday=weekday_option)
    model.calculate_load()
    (pd.DataFrame(model.load_segments_dict) / n).to_csv(
        'Output_Data/group' + str(int(i + 1)) + '_weekend.csv', index=None)
    results[i] = pd.DataFrame(model.load_segments_dict) / n
```
BSD-2-Clause
ProcessingForPaper/7_produce_dataset.ipynb
SiobhanPowell/speech
Testing:
```python
plt.plot(results[5])

test = pd.read_csv('Output_Data/group3_weekday.csv')
test2 = pd.read_csv('Output_Data/group3_weekend.csv')
plt.plot(test.sum(axis=1), label='weekday')
plt.plot(test2.sum(axis=1), label='weekend')
plt.legend()
plt.show()

test = pd.read_csv('Output_Data/group4_weekday.csv')
test2 = pd.read_csv('Output_Data/group4_weekend.csv')
plt.plot(test.sum(axis=1), label='weekday')
plt.plot(test2.sum(axis=1), label='weekend')
plt.legend()
plt.show()
```
This Notebook - Goals - FOR EDINA

**What?:**
- Standard classification method example/tutorial

**Who?:**
- Researchers in ML
- Students in computer science
- Teachers in ML/STEM

**Why?:**
- Demonstrate capability/simplicity of core scipy stack.
- Demonstrate common ML concept known to learners and used by researchers.

**Noteable features to exploit:**
- Use of pre-installed libraries: numpy, scikit-learn, matplotlib

**How?:**
- Clear to understand - minimise assumed knowledge
- Clear visualisations - concise explanations
- Recognisable/familiar - use standard methods
- Effective use of core libraries

Classification - K nearest neighbours

K nearest neighbours is a simple and effective way to deal with classification problems. This method classifies each sample based on the class of the points that are closest to it.

This is a supervised learning method, meaning that the data used contains information on some feature that the model should predict.

This notebook shows the process of classifying handwritten digits.

Import libraries

On Noteable, all the libraries required for this notebook are pre-installed, so they simply need to be imported:
```python
import numpy as np
import sklearn.datasets as ds
import sklearn.model_selection as ms
from sklearn import decomposition
from sklearn import neighbors
from sklearn import metrics
import matplotlib.pyplot as plt
%matplotlib inline
```
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
Data - Handwritten DigitsIn terms of data, [scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) has a loading function for some data regarding hand written digits.
```python
# get the digits data from scikit-learn into the notebook
digits = ds.load_digits()
```
The cell above loads the data as a [bunch object](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html), meaning that the data (in this case images of handwritten digits) and the target (the number that is written) can be split by accessing the attributes of the bunch object:
```python
# store data and targets separately
X = digits.data
y = digits.target

print("The data is of the shape", X.shape)
print("The target data is of the shape", y.shape)
```
The individual samples in the X array each represent an image. In this representation, 64 numbers are used to represent a greyscale value on an 8\*8 square. The images can be examined by using pyplot's [matshow](https://matplotlib.org/3.3.0/api/_as_gen/matplotlib.pyplot.matshow.html) function.The next cell displays the 17th sample in the dataset as an 8\*8 image.
```python
# create figure to display the 17th sample
fig = plt.matshow(digits.images[17], cmap=plt.cm.gray)
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
```
Suppose instead of viewing the 17th sample, we want to see the average of samples corresponding to a certain value.This can be done as follows (using 0 as an example):- All samples where the target value is 0 are located- The mean of these samples is taken- The resulting 64 long array is reshaped to be 8\*8 (for display)- The image is displayed
```python
# take samples with target=0
izeros = np.where(y == 0)

# take average across samples, reshape to visualise
zeros = np.mean(X[izeros], axis=0).reshape(8, 8)

# display
fig = plt.matshow(zeros, cmap=plt.cm.gray)
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
```
Fit and test the model Split the data Now that you have an understanding of the data, the model can be fitted.Fitting the model involves setting some of the data aside for testing, and allowing the model to "see" the target values corresponding to the training samples.Once the model has been fitted to the training data, the model will be tested on some data it has not seen before. The next cell uses [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) to shuffle all data, then set some data aside for testing later. For this example, $\frac{1}{4}$ of the data will be set aside for testing, and the model will be trained on the remaining training set.As before, X corresponds to data samples, and y corresponds to labels.
# split data to train and test sets X_train, X_test, y_train, y_test = \ ms.train_test_split(X, y, test_size=0.25, shuffle=True, random_state=22)
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
The data can be examined - here you can see that 1347 samples have been put into the training set, and 450 have been set aside for testing.
# print shape of data print("training samples:", X_train.shape) print("testing samples :", X_test.shape) print("training targets:", y_train.shape) print("testing targets :", y_test.shape)
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
Using PCA to visualise data Before diving into classifying, it is useful to visualise the data. Since each sample has 64 dimensions, some dimensionality reduction is needed in order to visualise the samples as points on a 2D map. One of the easiest ways of visualising high dimensional data is by principal component analysis (PCA). This maps the 64 dimensional image data onto a lower dimension map (here we will map to 2D) so it can be easily viewed on a screen. In this case, the 2 most important "components" are maintained.
# create PCA model with 2 components pca = decomposition.PCA(n_components=2)
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
The next step is to perform the PCA on the samples, and store the results.
# transform training data to 2 principal components X_pca = pca.fit_transform(X_train) # transform test data to 2 principal components T_pca = pca.transform(X_test) # check shape of result print(X_pca.shape) print(T_pca.shape)
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
As you can see from the above cell, the X_pca and T_pca data is now represented by only 2 elements per sample. The number of samples has remained the same. Now that there is a 2D representation of the data, it can be plotted on a regular scatter graph. Since the labels corresponding to each point are stored in the y_train variable, the plot can be colour coded by target value! Different coloured dots have different target values.
# choose the colours for each digit cmap_digits = plt.cm.tab10 # plot training data with labels plt.figure(figsize = (9,6)) plt.scatter(X_pca[:,0], X_pca[:,1], s=7, c=y_train, cmap=cmap_digits, alpha=0.7) plt.title("Training data coloured by target value") plt.colorbar();
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
Create and fit the model The scikit-learn library allows fitting of a k-NN model just as with PCA above. First, create the classifier:
# create model knn = neighbors.KNeighborsClassifier()
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
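The classifier created above uses scikit-learn's default of `n_neighbors=5`. If you want to check whether a different k suits this dataset better, one hedged approach is cross-validation; the sketch below loads the data itself for self-containment, whereas in the notebook you would reuse `X_train`, `y_train`, and the `ms` alias:

```python
from sklearn import datasets, neighbors, model_selection

# score a few candidate values of k with 5-fold cross-validation
digits = datasets.load_digits()
X, y = digits.data, digits.target

cv_scores = {}
for k in (1, 3, 5, 7):
    knn = neighbors.KNeighborsClassifier(n_neighbors=k)
    cv_scores[k] = model_selection.cross_val_score(knn, X, y, cv=5).mean()

for k, s in cv_scores.items():
    print(f"k={k}: mean accuracy {s:.3f}")
```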
The next step fits the k-NN model using the training data.
# fit model to training data knn.fit(X_train,y_train);
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
Test model Now use the data that was set aside earlier - this stage involves getting the model to "guess" the samples (this time without seeing their target values). Once the model has predicted the sample's class, a score can be calculated by checking how many samples the model guessed correctly.
# predict test data preds = knn.predict(X_test) # test model on test data score = round(knn.score(X_test,y_test)*100, 2) print("Score on test data: " + str(score) + "%")
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
98.44% is a really high score, one that is unlikely to be seen in real-life applications of the method. It can often be useful to visualise the results of your example. Below are plots showing: - The labels that the model predicted for the test data - The actual labels for the test data - The data points that were incorrectly labelled In this case, the predicted and actual plots are very similar, so these plots are not very informative. In other cases, this kind of visualisation may reveal patterns for you to explore further.
# plot 3 axes fig, axes = plt.subplots(2,2,figsize=(12,12)) # top left axis for predictions axes[0,0].scatter(T_pca[:,0], T_pca[:,1], s=5, c=preds, cmap=cmap_digits) axes[0,0].set_title("Predicted labels") # top right axis for actual targets axes[0,1].scatter(T_pca[:,0], T_pca[:,1], s=5, c=y_test, cmap=cmap_digits) axes[0,1].set_title("Actual labels") # bottom left axis coloured to show correct and incorrect axes[1,0].scatter(T_pca[:,0], T_pca[:,1], s=5, c=(preds==y_test)) axes[1,0].set_title("Incorrect labels") # bottom right axis not used axes[1,1].set_axis_off()
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
So which samples did the model get wrong? There were 7 samples that were misclassified. These can be displayed alongside their actual and predicted labels using the cell below:
# find the misclassified samples misclass = np.where(preds!=y_test)[0] # display misclassified samples r, c = 1, len(misclass) fig, axes = plt.subplots(r,c,figsize=(10,5)) for i in range(c): ax = axes[i] ax.matshow(X_test[misclass[i]].reshape(8,8),cmap=plt.cm.gray) ax.set_axis_off() act = y_test[misclass[i]] pre = preds[misclass[i]] strng = "actual: {a:.0f} \npredicted: {p:.0f}".format(a=act, p=pre) ax.set_title(strng)
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
Additionally, a confusion matrix can be used to identify which samples are misclassified by the model. This can help you identify whether there are classes that are commonly misidentified - for example, you may find that 8s are often mistaken for 1s.
# confusion matrix conf = metrics.confusion_matrix(y_test,preds) # figure f, ax = plt.subplots(figsize=(9,5)) im = ax.imshow(conf, cmap=plt.cm.RdBu) # set labels as ticks on axes ax.set_xticks(np.arange(10)) ax.set_yticks(np.arange(10)) ax.set_xticklabels(list(range(0,10))) ax.set_yticklabels(list(range(0,10))) ax.set_ylim(9.5,-0.5) # axes labels ax.set_ylabel("actual value") ax.set_xlabel("predicted value") ax.set_title("Digit classification confusion matrix") # display plt.colorbar(im).set_label(label="number of classifications")
_____no_output_____
BSD-3-Clause
GeneralExemplars/MLExemplars/Classification_k_NN_Notebook.ipynb
gaybro8777/Exemplars2020
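To turn the matrix into a concrete answer to "which digits get confused most often?", you can zero the diagonal and look for the largest remaining entry. A small sketch with a made-up 3-class matrix (the notebook's own `conf` from the cell above would work the same way):

```python
import numpy as np

# hypothetical 3-class confusion matrix; rows are actual, columns are predicted
conf = np.array([[48, 0, 2],
                 [1, 45, 4],
                 [0, 3, 47]])

off_diag = conf.copy()
np.fill_diagonal(off_diag, 0)  # ignore correct classifications
actual, predicted = np.unravel_index(off_diag.argmax(), off_diag.shape)
print(f"most frequent confusion: actual {actual} predicted as {predicted} "
      f"({off_diag[actual, predicted]} times)")
```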
Dependencies
!nvidia-smi !jupyter notebook list %env CUDA_VISIBLE_DEVICES=3 %matplotlib inline %load_ext autoreload %autoreload 2 import time from pathlib import Path import numpy as np import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.optim as optim import torchvision import torchvision.transforms as transforms from models import tiramisu from models import tiramisu_bilinear from models import tiramisu_m3 from models import unet from datasets import deepglobe from datasets import maroads from datasets import joint_transforms import utils.imgs import utils.training as train_utils # tensorboard from torch.utils.tensorboard import SummaryWriter
_____no_output_____
MIT
extract_training_images.ipynb
adriancampos/road-extraction
Dataset Download the DeepGlobe dataset from https://competitions.codalab.org/competitions/18467. Place it in datasets/deepglobe/dataset/train,test,valid. Download the Massachusetts Road Dataset from https://www.cs.toronto.edu/~vmnih/data/. Combine the training, validation, and test sets, process with `crop_dataset.ipynb` and place the output in datasets/maroads/dataset/map,sat
run = "expM.3.drop2.1" DEEPGLOBE_PATH = Path('datasets/', 'deepglobe/dataset') MAROADS_PATH = Path('datasets/', 'maroads/dataset') RESULTS_PATH = Path('.results/') WEIGHTS_PATH = Path('.weights/') RUNS_PATH = Path('.runs/') RESULTS_PATH.mkdir(exist_ok=True) WEIGHTS_PATH.mkdir(exist_ok=True) RUNS_PATH.mkdir(exist_ok=True) batch_size = 1 # TODO: Should be `MAX_BATCH_PER_CARD * torch.cuda.device_count()` (which in this case is 1 assuming max of 1 batch per card) # resize = joint_transforms.JointRandomCrop((300, 300)) normalize = transforms.Normalize(mean=deepglobe.mean, std=deepglobe.std) train_joint_transformer = transforms.Compose([ # resize, joint_transforms.JointRandomHorizontalFlip(), joint_transforms.JointRandomVerticalFlip(), joint_transforms.JointRandomRotate() ]) train_slice = slice(None,4000) test_slice = slice(4000,None) train_dset = deepglobe.DeepGlobe(DEEPGLOBE_PATH, 'train', slc = train_slice, joint_transform=train_joint_transformer, transform=transforms.Compose([ transforms.ColorJitter(brightness=.4,contrast=.4,saturation=.4), transforms.ToTensor(), normalize, ])) train_dset_ma = maroads.MARoads(MAROADS_PATH, joint_transform=train_joint_transformer, transform=transforms.Compose([ transforms.ColorJitter(brightness=.4,contrast=.4,saturation=.4), transforms.ToTensor(), normalize, ])) # print(len(train_dset_ma.imgs)) # print(len(train_dset_ma.msks)) train_dset_combine = torch.utils.data.ConcatDataset((train_dset, train_dset_ma)) # train_loader = torch.utils.data.DataLoader(train_dset, batch_size=batch_size, shuffle=True) # train_loader = torch.utils.data.DataLoader(train_dset_ma, batch_size=batch_size, shuffle=True) train_loader = torch.utils.data.DataLoader( train_dset_combine, batch_size=batch_size, shuffle=True) # resize_joint_transformer = transforms.Compose([ # resize # ]) resize_joint_transformer = None val_dset = deepglobe.DeepGlobe( DEEPGLOBE_PATH, 'valid', joint_transform=resize_joint_transformer, transform=transforms.Compose([ 
transforms.ToTensor(), normalize ])) val_loader = torch.utils.data.DataLoader( val_dset, batch_size=batch_size, shuffle=False) test_dset = deepglobe.DeepGlobe( DEEPGLOBE_PATH, 'train', joint_transform=resize_joint_transformer, slc = test_slice, transform=transforms.Compose([ transforms.ToTensor(), normalize ])) test_loader = torch.utils.data.DataLoader( test_dset, batch_size=batch_size, shuffle=False) print("Train: %d" %len(train_loader.dataset)) print("Val: %d" %len(val_loader.dataset.imgs)) print("Test: %d" %len(test_loader.dataset.imgs)) # print("Classes: %d" % len(train_loader.dataset.classes)) print((iter(train_loader))) inputs, targets = next(iter(train_loader)) print("Inputs: ", inputs.size()) print("Targets: ", targets.size()) # utils.imgs.view_image(inputs[0]) # utils.imgs.view_image(targets[0]) # utils.imgs.view_annotated(targets[0]) # print(targets[0]) for i,(image,label) in enumerate(iter(test_loader)): if i % 10 == 0: print("Processing image",i) im = image[0] # scale to [0,1] im -= im.min() im /= im.max() im = torchvision.transforms.ToPILImage()(im) im.save("ds_test/" + str(i) + ".png") label = label.float() la = torchvision.transforms.ToPILImage()(label) la.save("ds_test/" + str(i) + ".mask.png") print("Done!")
Train: 4909 Val: 1243 Test: 2226 <torch.utils.data.dataloader._SingleProcessDataLoaderIter object at 0x7f387e5c3dd0> Inputs: torch.Size([1, 3, 1024, 1024]) Targets: torch.Size([1, 1024, 1024]) Processing image 0 Processing image 10 Processing image 20 ... Processing image 2200 Processing image 2210 Processing image 2220 Done!
MIT
extract_training_images.ipynb
adriancampos/road-extraction
Fun with Python A very minimal example for the Pythia Foundations collection. A Python program can be a single line:
print('Hello interweb')
Hello interweb
Apache-2.0
jb_demo/foundations/Hello.ipynb
halehawk/sphinx-pythia-theme
Inference and Validation Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch. As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here: ```python testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) ``` The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
import torch from torchvision import datasets, transforms # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
_____no_output_____
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
Here I'll create a model like normal, using the same one from my solution for part 4.
from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x
_____no_output_____
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
model = Classifier() images, labels = next(iter(testloader)) # Get the class probabilities ps = torch.exp(model(images)) # Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples print(ps.shape)
torch.Size([64, 10])
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
top_p, top_class = ps.topk(1, dim=1) # Look at the most likely classes for the first 10 examples print(top_class[:10,:])
tensor([[4], [4], [4], [4], [4], [4], [4], [4], [4], [4]])
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape. If we do ```python equals = top_class == labels ``` `equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
equals = top_class == labels.view(*top_class.shape)
_____no_output_____
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
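You can see the broadcasting pitfall described above on a small scale. The sketch below uses tiny hypothetical tensors rather than the real batch of 64:

```python
import torch

top_class = torch.tensor([[1], [2], [0]])  # shape (3, 1), like the topk output
labels = torch.tensor([1, 0, 0])           # shape (3,), like the batch labels

wrong = top_class == labels                         # broadcasts to shape (3, 3)
right = top_class == labels.view(*top_class.shape)  # element-wise, shape (3, 1)

print(wrong.shape)   # torch.Size([3, 3])
print(right.shape)   # torch.Size([3, 1])
```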
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it were that simple. If you try `torch.mean(equals)`, you'll get an error ```RuntimeError: mean is not implemented for type torch.ByteTensor``` This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.
accuracy = torch.mean(equals.type(torch.FloatTensor)) print(f'Accuracy: {accuracy.item()*100}%')
Accuracy: 9.375%
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`: ```python # turn off gradients with torch.no_grad(): # validation pass here for images, labels in testloader: ... ``` >**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False)
_____no_output_____
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
Overfitting If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting. The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set, leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss. The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module. ```python class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x ``` During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. 
This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode. ```python # turn off gradients with torch.no_grad(): # set model to evaluation mode model.eval() # validation pass here for images, labels in testloader: ... # set model back to train mode model.train() ``` > **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
## TODO: Define your model with dropout added class MyClassifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) # no dropout on the output layer return x ## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy model = MyClassifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) epochs = 30 for e in range(epochs): running_loss = 0 for images, labels in trainloader: log_ps = model(images) loss = criterion(log_ps, labels) optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() else: with torch.no_grad(): model.eval() for images, labels in testloader: ps = torch.exp(model(images)) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy = torch.mean(equals.type(torch.FloatTensor)) print(f'Accuracy: {accuracy.item()*100}%') model.train()
Accuracy: 75.0% Accuracy: 90.625% Accuracy: 75.0% Accuracy: 78.125% Accuracy: 85.9375% Accuracy: 81.25% Accuracy: 73.4375% Accuracy: 75.0% ... Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 84.375% Accuracy: 85.9375%
Accuracy: 89.0625% Accuracy: 87.5% Accuracy: 78.125% Accuracy: 89.0625% Accuracy: 82.8125% Accuracy: 82.8125% Accuracy: 84.375% Accuracy: 81.25% Accuracy: 85.9375% Accuracy: 85.9375% Accuracy: 79.6875% Accuracy: 87.5% Accuracy: 79.6875% Accuracy: 85.9375% Accuracy: 85.9375% Accuracy: 87.5%
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
## Inference

Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
# Import helper module (should be in the repo)
import helper

# Test out your network!

model.eval()

dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)

# Calculate the class probabilities (softmax) for img
with torch.no_grad():
    output = model.forward(img)

ps = torch.exp(output)

# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
_____no_output_____
MIT
intro-to-pytorch/Part 5 - Inference and Validation (Exercises).ipynb
rodolfoams/deep-learning-v2-pytorch
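The `torch.exp(output)` step above works because the network's final layer produces log-probabilities (e.g. via `LogSoftmax`), so exponentiating recovers probabilities that sum to 1. A minimal plain-Python sketch of that relationship, with made-up scores for illustration:

```python
import math

# Hypothetical raw scores ("logits") for three classes
logits = [2.0, 1.0, 0.1]

# log-softmax: subtract log-sum-exp so that exp() of the result sums to 1
log_z = math.log(sum(math.exp(l) for l in logits))
log_probs = [l - log_z for l in logits]

# This mirrors ps = torch.exp(output) in the cell above
probs = [math.exp(lp) for lp in log_probs]

print(probs)
print(sum(probs))  # ~1.0
```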
## SoundPlayer Usage Instructions

The `SoundPlayer` class plays sounds contained within the Sounds directory in the project. Sounds can be played simply by initializing the class and calling the available methods. The sounds that are available to play are located in the folder `Sounds/`. Files of type .mp3 and .wav are supported.

### Getting a List of Available Sounds

To get a list of the sounds that are available to play, call the method `SoundPlayer.list_sounds()`.
from SoundPlayer import SoundPlayer

player = SoundPlayer()
player.list_sounds()
_____no_output_____
MIT
salemPi/SoundPlayerDocs.ipynb
kburgon/salem-candy-dispenser
### Playing a Sound

A sound can be played by calling the method `SoundPlayer.play(sound)`, where `sound` is the string name of the sound that will be played.
sound = player.list_sounds()[0]
player.play(sound)
playing name wlaugh.mp3
MIT
salemPi/SoundPlayerDocs.ipynb
kburgon/salem-candy-dispenser
### Rotating Through Sounds

When the method `SoundPlayer.play_rotated_sound()` is called, a different sound within the Sounds folder will be played each time.
for i in range(len(player.list_sounds())):
    player.play_rotated_sound()
playing name wlaugh.mp3 playing name hag_idle.mp3 playing name ghosts03.mp3
MIT
salemPi/SoundPlayerDocs.ipynb
kburgon/salem-candy-dispenser
## Scale-Out Data Preparation

Once we are done with preparing and featurizing the data locally, we can run the same steps on the full dataset in scale-out mode. The New York taxi cab data is about 300 GB in total, which is perfect for scale-out. Let's start by downloading the package we saved earlier to disk. Feel free to run the `new_york_taxi_cab.ipynb` notebook to generate the package yourself, in which case you may comment out the download code and set the `package_path` to where the package is saved.
from tempfile import mkdtemp
from os import path
from urllib.request import urlretrieve

dflow_root = mkdtemp()
dflow_path = path.join(dflow_root, "new_york_taxi.dprep")
print("Downloading Dataflow to: {}".format(dflow_path))
urlretrieve("https://dprepdata.blob.core.windows.net/demo/new_york_taxi_v2.dprep", dflow_path)
_____no_output_____
MIT
case-studies/new-york-taxi/new-york-taxi_scale-out.ipynb
Bhaskers-Blu-Org2/AMLDataPrepDocs
Let's load the package we just downloaded.
import azureml.dataprep as dprep

df = dprep.Dataflow.open(dflow_path)
_____no_output_____
MIT
case-studies/new-york-taxi/new-york-taxi_scale-out.ipynb
Bhaskers-Blu-Org2/AMLDataPrepDocs
Let's replace the datasources with the full dataset.
from uuid import uuid4

other_step = df._get_steps()[7].arguments['dataflows'][0]['anonymousSteps'][0]
other_step['id'] = str(uuid4())
other_step['arguments']['path']['target'] = 1
other_step['arguments']['path']['resourceDetails'][0]['path'] = 'https://wranglewestus.blob.core.windows.net/nyctaxi/yellow_tripdata*'

green_dsource = dprep.BlobDataSource("https://wranglewestus.blob.core.windows.net/nyctaxi/green_tripdata*")
df = df.replace_datasource(green_dsource)
_____no_output_____
MIT
case-studies/new-york-taxi/new-york-taxi_scale-out.ipynb
Bhaskers-Blu-Org2/AMLDataPrepDocs
Once we have replaced the datasource, we can now run the same steps on the full dataset. We will print the first 5 rows of the spark DataFrame. Since we are running on the full dataset, this might take a little while depending on your spark cluster's size.
spark_df = df.to_spark_dataframe()
spark_df.head(5)
_____no_output_____
MIT
case-studies/new-york-taxi/new-york-taxi_scale-out.ipynb
Bhaskers-Blu-Org2/AMLDataPrepDocs
## Regular Expressions

Regular expressions are text-matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).

If you're familiar with Perl, you'll notice that the syntax for regular expressions is very similar in Python. We will be using the re module with Python for this lecture.

Let's get started!

### Searching for Patterns in Text

One of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
import re

# List of patterns to search for
patterns = ['term1', 'term2']

# Text to parse
text = 'This is a string with term1, but it does not have the other term.'

for pattern in patterns:
    print('Searching for "%s" in:\n "%s"\n' % (pattern, text))
    
    # Check for match
    if re.search(pattern, text):
        print('Match was found. \n')
    else:
        print('No Match was found.\n')
Searching for "term1" in: "This is a string with term1, but it does not have the other term." Match was found. Searching for "term2" in: "This is a string with term1, but it does not have the other term." No Match was found.
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
Now we've seen that re.search() will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below:
# List of patterns to search for
pattern = 'term1'

# Text to parse
text = 'This is a string with term1, but it does not have the other term.'

match = re.search(pattern, text)

type(match)
_____no_output_____
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
This **Match** object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
# Show start of match
match.start()

# Show end
match.end()
_____no_output_____
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
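Beyond `start()` and `end()`, the match object exposes the other information mentioned above (the matched text, the pattern used, and the original input string). A small sketch using only the standard `re` module, with the same pattern and text as the cell above:

```python
import re

match = re.search('term1', 'This is a string with term1, but it does not have the other term.')

print(match.group())     # the matched text
print(match.span())      # (start, end) as a tuple
print(match.re.pattern)  # the regular expression that was used
print(match.string)      # the original input string
```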
### Split with regular expressions

Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
# Term to split on
split_term = '@'

phrase = 'What is the domain name of someone with the email: hello@gmail.com'

# Split the phrase
re.split(split_term, phrase)
_____no_output_____
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
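A couple of extra split variations worth experimenting with - limiting the number of splits with `maxsplit`, and splitting on any of several delimiters with a character set. A small sketch:

```python
import re

phrase = 'What is the domain name of someone with the email: hello@gmail.com'

# Limit the number of splits with maxsplit
print(re.split('o', phrase, maxsplit=2))

# Split on any of several delimiters using a character set
print(re.split('[@.]', 'hello@gmail.com'))
```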
Note how re.split() returns a list with the term to split on removed and the terms in the list are a split up version of the string. Create a couple more examples for yourself to make sure you understand!

### Finding all instances of a pattern

You can use re.findall() to find all the instances of a pattern in a string. For example:
# Returns a list of all matches
re.findall('match', 'test phrase match is in middle')
_____no_output_____
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
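Since re.findall() returns a plain list, counting occurrences of a pattern is just a matter of taking its length. A small sketch on an illustrative string:

```python
import re

text = 'test phrase match is in middle, match again'

matches = re.findall('match', text)
print(matches)
print(len(matches))  # 2
```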
### re Pattern Syntax

This will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond just simply finding where a single string occurred.

We can use *metacharacters* along with re to find specific types of patterns.

Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
def multi_re_find(patterns, phrase):
    '''
    Takes in a list of regex patterns
    Prints a list of all matches
    '''
    for pattern in patterns:
        print('Searching the phrase using the re check: %r' % (pattern))
        print(re.findall(pattern, phrase))
        print('\n')
_____no_output_____
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
#### Repetition Syntax

There are five ways to express repetition in a pattern:

1. A pattern followed by the meta-character * is repeated zero or more times.
2. Replace the * with + and the pattern must appear at least once.
3. Using ? means the pattern appears zero or one time.
4. For a specific number of occurrences, use {m} after the pattern, where **m** is replaced with the number of times the pattern should repeat.
5. Use {m,n} where **m** is the minimum number of repetitions and **n** is the maximum. Leaving out **n** ({m,}) means the value appears at least **m** times, with no maximum.

Now we will see an example of each of these using our multi_re_find function:
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'

test_patterns = ['sd*',     # s followed by zero or more d's
                 'sd+',     # s followed by one or more d's
                 'sd?',     # s followed by zero or one d's
                 'sd{3}',   # s followed by three d's
                 'sd{2,3}', # s followed by two to three d's
                ]

multi_re_find(test_patterns, test_phrase)
Searching the phrase using the re check: 'sd*' ['sd', 'sd', 's', 's', 'sddd', 'sddd', 'sddd', 'sd', 's', 's', 's', 's', 's', 's', 'sdddd'] Searching the phrase using the re check: 'sd+' ['sd', 'sd', 'sddd', 'sddd', 'sddd', 'sd', 'sdddd'] Searching the phrase using the re check: 'sd?' ['sd', 'sd', 's', 's', 'sd', 'sd', 'sd', 'sd', 's', 's', 's', 's', 's', 's', 'sd'] Searching the phrase using the re check: 'sd{3}' ['sddd', 'sddd', 'sddd', 'sddd'] Searching the phrase using the re check: 'sd{2,3}' ['sddd', 'sddd', 'sddd', 'sddd']
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
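The open-ended {m,} form from rule 5 isn't exercised by the test patterns above; a quick sketch on the same test phrase:

```python
import re

test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'

# s followed by at least two d's, with no upper bound
print(re.findall('sd{2,}', test_phrase))
```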
#### Character Sets

Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input [ab] searches for occurrences of either **a** or **b**.

Let's see some examples:
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'

test_patterns = ['[sd]',   # either s or d
                 's[sd]+'] # s followed by one or more s or d

multi_re_find(test_patterns, test_phrase)
Searching the phrase using the re check: '[sd]' ['s', 'd', 's', 'd', 's', 's', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 'd', 's', 'd', 's', 'd', 's', 's', 's', 's', 's', 's', 'd', 'd', 'd', 'd'] Searching the phrase using the re check: 's[sd]+' ['sdsd', 'sssddd', 'sdddsddd', 'sds', 'sssss', 'sdddd']
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
It makes sense that the first input [sd] returns every instance of s or d. Also, the second input s[sd]+ returns any full strings that begin with an s and continue with s or d characters until another character is reached.

#### Exclusion

We can use ^ to exclude terms by incorporating it into the bracket syntax notation. For example: [^...] will match any single character not in the brackets. Let's see some examples:
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
_____no_output_____
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
Use [^!.? ] to check for matches that are not a !,.,?, or space. Add a + to check that the match appears at least once. This basically translates into finding the words.
re.findall('[^!.? ]+',test_phrase)
_____no_output_____
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
#### Character Ranges

As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is [start-end].

Common use cases are to search for a specific range of letters in the alphabet. For instance, [a-f] would return matches with any occurrence of letters between a and f.

Let's walk through some examples:
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'

test_patterns = ['[a-z]+',      # sequences of lower case letters
                 '[A-Z]+',      # sequences of upper case letters
                 '[a-zA-Z]+',   # sequences of lower or upper case letters
                 '[A-Z][a-z]+'] # one upper case letter followed by lower case letters

multi_re_find(test_patterns, test_phrase)
Searching the phrase using the re check: '[a-z]+' ['his', 'is', 'an', 'example', 'sentence', 'ets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters'] Searching the phrase using the re check: '[A-Z]+' ['T', 'L'] Searching the phrase using the re check: '[a-zA-Z]+' ['This', 'is', 'an', 'example', 'sentence', 'Lets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters'] Searching the phrase using the re check: '[A-Z][a-z]+' ['This', 'Lets']
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
#### Escape Codes

You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. For example:

| Code | Meaning |
|------|---------|
| \d | a digit |
| \D | a non-digit |
| \s | whitespace (tab, space, newline, etc.) |
| \S | non-whitespace |
| \w | alphanumeric |
| \W | non-alphanumeric |

Escapes are indicated by prefixing the character with a backslash \. Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with r, eliminates this problem and maintains readability.

Personally, I think this use of r to escape a backslash is probably one of the things that block someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully after seeing these examples this syntax will become clear.
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'

test_patterns = [r'\d+', # sequence of digits
                 r'\D+', # sequence of non-digits
                 r'\s+', # sequence of whitespace
                 r'\S+', # sequence of non-whitespace
                 r'\w+', # alphanumeric characters
                 r'\W+', # non-alphanumeric
                ]

multi_re_find(test_patterns, test_phrase)
Searching the phrase using the re check: '\\d+' ['1233'] Searching the phrase using the re check: '\\D+' ['This is a string with some numbers ', ' and a symbol #hashtag'] Searching the phrase using the re check: '\\s+' [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] Searching the phrase using the re check: '\\S+' ['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', '#hashtag'] Searching the phrase using the re check: '\\w+' ['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', 'hashtag'] Searching the phrase using the re check: '\\W+' [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' #']
MIT
13-Advanced Python Modules/05-Regular Expressions - re.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
## Minimum Absolute Difference in an Array

![image](https://user-images.githubusercontent.com/50367487/83008403-0e8a6480-a050-11ea-8973-4b41088e6e7e.png)
#!/bin/python3

import math
import os
import random
import re
import sys

# Complete the minimumAbsoluteDifference function below.
def minimumAbsoluteDifference(arr):
    arr.sort(reverse=True)
    min_diff = arr[0] - arr[1]
    for i in range(1, len(arr) - 1):
        min_diff = min(min_diff, arr[i] - arr[i + 1])
    return min_diff

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')

    n = int(input())
    arr = list(map(int, input().rstrip().split()))
    result = minimumAbsoluteDifference(arr)

    fptr.write(str(result) + '\n')
    fptr.close()
_____no_output_____
MIT
Interview Preparation Kit/6. Greedy Algorithms/Minimum Absolute Difference in an Array.ipynb
Nam-SH/HackerRank
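Stripped of the HackerRank I/O harness, the core idea is: after sorting, the minimum absolute difference must occur between adjacent elements, so one linear pass over neighbors suffices. A self-contained sketch of that idea, with illustrative inputs:

```python
def min_abs_diff(arr):
    # After sorting, the closest pair is always adjacent,
    # so comparing each element to its neighbor finds the minimum.
    s = sorted(arr)
    return min(b - a for a, b in zip(s, s[1:]))

print(min_abs_diff([3, -7, 0]))          # 3
print(min_abs_diff([1, 5, 3, 19, 18, 25]))  # 1
```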
## Feature selection using SelectFromModel and LassoCV

Use the SelectFromModel meta-transformer along with Lasso to select the best couple of features from the Boston dataset.
# Author: Manoj Kumar <mks542@nyu.edu>
# License: BSD 3 clause

print(__doc__)

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

from sklearn.datasets import load_boston
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

# Load the boston dataset.
# boston = load_boston()
# X, y = boston['data'], boston['target']
ds1 = pd.read_csv("DS.csv")
ds2 = pd.read_csv("DS1.csv")
X = ds1
y = ds2

# We use the base estimator LassoCV since the L1 norm promotes sparsity of features.
clf = LassoCV()

# Set a minimum threshold of 0.25
sfm = SelectFromModel(clf, threshold=0.25)
sfm.fit(X, y)
n_features = sfm.transform(X).shape[1]

# Reset the threshold till the number of features equals two.
# Note that the attribute can be set directly instead of repeatedly
# fitting the metatransformer.
while n_features > 2:
    sfm.threshold += 0.1
    X_transform = sfm.transform(X)
    n_features = X_transform.shape[1]

# Plot the selected two features from X.
plt.title(
    "Features selected from Boston using SelectFromModel with "
    "threshold %0.3f." % sfm.threshold)
feature1 = X_transform[:, 0]
feature2 = X_transform[:, 1]
plt.plot(feature1, feature2, 'r.')
plt.xlabel("Feature number 1")
plt.ylabel("Feature number 2")
plt.ylim([np.min(feature2), np.max(feature2)])
plt.show()
Automatically created module for IPython interactive environment
CC-BY-3.0
Assignments/hw3/Failed_to_perform_with_dataset/HW3_feature_selection_from_Boston/plot_select_from_model_boston.ipynb
Leon23N/Leon23N.github.io
Lambda School Data Science

*Unit 4, Sprint 3, Module 2*

---

## Convolutional Neural Networks (Prepare)

> Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers. *Goodfellow, et al.*

### Learning Objectives

- Part 1: Describe convolution and pooling
- Part 2: Apply a convolutional neural network to a classification task
- Part 3: Use a pre-trained convolutional neural network for object detection

Modern __computer vision__ approaches rely heavily on convolutions as both a dimensionality reduction and feature extraction method. Before we dive into convolutions, let's talk about some of the common computer vision applications:

* Classification [(Hot Dog or Not Dog)](https://www.youtube.com/watch?v=ACmydtFDTGs)
* Object Detection [(YOLO)](https://www.youtube.com/watch?v=MPU2HistivI)
* Pose Estimation [(PoseNet)](https://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-with.html)
* Facial Recognition [Emotion Detection](https://www.cbronline.com/wp-content/uploads/2018/05/Mona-lIsa-test-570x300.jpg)
* and *countless* more

We are going to focus on classification and pre-trained object detection today. What are some of the applications of object detection?
from IPython.display import YouTubeVideo

YouTubeVideo('MPU2HistivI', width=600, height=400)
_____no_output_____
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolutional_Neural_Networks_Lecture.ipynb
SamH3pn3r/DS-Unit-4-Sprint-3-Deep-Learning
## Convolution & Pooling (Learn)

### Overview

Like neural networks themselves, CNNs are inspired by biology - specifically, the receptive fields of the visual cortex.

Put roughly, in a real brain the neurons in the visual cortex *specialize* to be receptive to certain regions, shapes, colors, orientations, and other common visual features. In a sense, the very structure of our cognitive system transforms raw visual input, and sends it to neurons that specialize in handling particular subsets of it.

CNNs imitate this approach by applying a convolution. A convolution is an operation on two functions that produces a third function, showing how one function modifies another. Convolutions have a [variety of nice mathematical properties](https://en.wikipedia.org/wiki/Convolution#Properties) - commutativity, associativity, distributivity, and more. Applying a convolution effectively transforms the "shape" of the input.

One common confusion - the term "convolution" is used to refer to both the process of computing the third (joint) function and the process of applying it. In our context, it's more useful to think of it as an application, again loosely analogous to the mapping from visual field to receptive areas of the cortex in a real animal.
from IPython.display import YouTubeVideo

YouTubeVideo('IOHayh06LJ4', width=600, height=400)
_____no_output_____
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolutional_Neural_Networks_Lecture.ipynb
SamH3pn3r/DS-Unit-4-Sprint-3-Deep-Learning
### Follow Along

Let's try to do some convolutions in `Keras`.

#### Convolution - an example

Consider blurring an image - assume the image is represented as a matrix of numbers, where each number corresponds to the color value of a pixel.
import imageio
import matplotlib.pyplot as plt
from skimage import color, io
from skimage.exposure import rescale_intensity

austen = io.imread('https://dl.airtable.com/S1InFmIhQBypHBL0BICi_austen.jpg')
austen_grayscale = rescale_intensity(color.rgb2gray(austen))
austen_grayscale

plt.imshow(austen_grayscale, cmap="gray");

import scipy.ndimage as nd
import numpy as np

horizontal_edge_convolution = np.array([[ 1,  1,  1,  1,  1],
                                        [ 0,  0,  0,  0,  0],
                                        [ 0,  0,  0,  0,  0],
                                        [ 0,  0,  0,  0,  0],
                                        [-1, -1, -1, -1, -1]])

vertical_edge_convolution = np.array([[1, 0, 0, 0, -1],
                                      [1, 0, 0, 0, -1],
                                      [1, 0, 0, 0, -1],
                                      [1, 0, 0, 0, -1],
                                      [1, 0, 0, 0, -1]])

austen_edges = nd.convolve(austen_grayscale, vertical_edge_convolution)
# austen_edges

plt.imshow(austen_edges, cmap="gray");
_____no_output_____
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolutional_Neural_Networks_Lecture.ipynb
SamH3pn3r/DS-Unit-4-Sprint-3-Deep-Learning
### Challenge

You will be expected to be able to describe convolution.

## CNNs for Classification (Learn)

### Overview

#### Typical CNN Architecture

![A Typical CNN](https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/Typical_cnn.png/800px-Typical_cnn.png)

The first stage of a CNN is, unsurprisingly, a convolution - specifically, a transformation that maps regions of the input image to neurons responsible for receiving them. The convolutional layer can be visualized as follows:

![Convolutional layer](https://upload.wikimedia.org/wikipedia/commons/6/68/Conv_layer.png)

The red represents the original input image, and the blue the neurons that correspond.

As shown in the first image, a CNN can have multiple rounds of convolutions, [downsampling](https://en.wikipedia.org/wiki/Downsampling_(signal_processing)) (a digital signal processing technique that effectively reduces the information by passing through a filter), and then eventually a fully connected neural network and output layer. Typical output layers for a CNN would be oriented towards classification or detection problems - e.g. "does this picture contain a cat, a dog, or some other animal?"

Why are CNNs so popular?

1. Compared to prior image learning techniques, they require relatively little image preprocessing (cropping/centering, normalizing, etc.)
2. Relatedly, they are *robust* to all sorts of common problems in images (shifts, lighting, etc.)

Actually training a cutting edge image classification CNN is nontrivial computationally - the good news is, with transfer learning, we can get one "off-the-shelf"!

### Follow Along
from tensorflow.keras import datasets
from tensorflow.keras.models import Sequential, Model  # <- May Use
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten

import matplotlib.pyplot as plt

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    # The CIFAR labels happen to be arrays,
    # which is why you need the extra index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

train_images[0].shape

# Setup Architecture
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))

model.summary()

# Compile Model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Fit Model
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels));

# Evaluate Model
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
- 25s - loss: 0.9412 - acc: 0.7020
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolutional_Neural_Networks_Lecture.ipynb
SamH3pn3r/DS-Unit-4-Sprint-3-Deep-Learning
### Challenge

You will apply CNNs to a classification task in the module project.

## CNNs for Object Detection (Learn)

### Overview

#### Transfer Learning - TensorFlow Hub

"A library for reusable machine learning modules"

This lets you quickly take advantage of a model that was trained with thousands of GPU hours. It also enables transfer learning - reusing a part of a trained model (called a module) that includes weights and assets, but also training the overall model some yourself with your own data. The advantages are fairly clear - you can use less training data, have faster training, and have a model that generalizes better.

https://www.tensorflow.org/hub/

**WARNING** - Dragons ahead!

![Dragon](https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Friedrich-Johann-Justin-Bertuch_Mythical-Creature-Dragon_1806.jpg/637px-Friedrich-Johann-Justin-Bertuch_Mythical-Creature-Dragon_1806.jpg)

TensorFlow Hub is very bleeding edge, and while there's a good amount of documentation out there, it's not always updated or consistent. You'll have to use your problem-solving skills if you want to use it!

### Follow Along
import numpy as np

from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions

def process_img_path(img_path):
    return image.load_img(img_path, target_size=(224, 224))

def img_contains_banana(img):
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    model = ResNet50(weights='imagenet')
    features = model.predict(x)
    results = decode_predictions(features, top=3)[0]
    print(results)
    for entry in results:
        if entry[1] == 'banana':
            return entry[2]
    return 0.0

import requests

image_urls = ["https://github.com/LambdaSchool/ML-YouOnlyLookOnce/raw/master/sample_data/negative_examples/example11.jpeg",
              "https://github.com/LambdaSchool/ML-YouOnlyLookOnce/raw/master/sample_data/positive_examples/example0.jpeg"]

for _id, img in enumerate(image_urls):
    r = requests.get(img)
    with open(f'example{_id}.jpg', 'wb') as f:
        f.write(r.content)

from IPython.display import Image

Image(filename='./example0.jpg', width=600)

img_contains_banana(process_img_path('example0.jpg'))

Image(filename='example1.jpg', width=600)

img_contains_banana(process_img_path('example1.jpg'))
[('n07753592', 'banana', 0.06643853), ('n03532672', 'hook', 0.06110267), ('n03498962', 'hatchet', 0.05880436)]
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolutional_Neural_Networks_Lecture.ipynb
SamH3pn3r/DS-Unit-4-Sprint-3-Deep-Learning
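`decode_predictions` returns, per image, a list of `(class_id, class_name, score)` triples like the one printed above. The label lookup inside `img_contains_banana` can be isolated as a small self-contained sketch (the triples below are illustrative values, not real model output):

```python
def score_for_label(results, label):
    """Return the score for `label` from a decode_predictions-style list of
    (class_id, class_name, score) triples, or 0.0 if the label is absent."""
    for _, name, score in results:
        if name == label:
            return score
    return 0.0

# Triples shaped like the ResNet50 output above (illustrative values).
results = [('n07753592', 'banana', 0.066),
           ('n03532672', 'hook', 0.061),
           ('n03498962', 'hatchet', 0.059)]

print(score_for_label(results, 'banana'))  # 0.066
print(score_for_label(results, 'zebra'))   # 0.0
```

This mirrors the loop in `img_contains_banana` without needing TensorFlow installed.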
Trump Charges
# https://www.metaculus.com/questions/6222/criminal-charges-against-trump/
trump_charges = compare_metaculus_vs_polymarket('https://polymarket.com/market/donald-trump-federally-charged-by-february-20th', 6222, actual=0)
trump_charges['data']
trump_charges['brier']
trump_charges['winnings']
plot_predictions(trump_charges, 'Criminal charges against Trump by 20 Feb?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
GOP Win
# https://www.metaculus.com/questions/5734/gop-to-hold-senate-on-feb-1st-2021/
gop_senate = compare_metaculus_vs_polymarket('https://polymarket.com/market/which-party-will-control-the-senate', 5734, actual=0)
gop_senate['data']
gop_senate['brier']
gop_senate['winnings']
plot_predictions(gop_senate, 'GOP Hold Senate for 2021?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
Trump Pardon
# https://www.metaculus.com/questions/5685/will-donald-trump-attempt-to-pardon-himself/
trump_pardon = compare_metaculus_vs_polymarket('https://polymarket.com/market/will-trump-pardon-himself-in-his-first-term', 5685, actual=0)
trump_pardon['data']
trump_pardon['brier']
trump_pardon['winnings']
plot_predictions(trump_pardon, 'Trump self-pardon?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
538 - Economist
# https://www.metaculus.com/questions/5503/comparing-538-and-economist-forecasts-in-2020/
economist_538 = compare_metaculus_vs_polymarket('https://polymarket.com/market/will-538-outperform-the-economist-in-forecasting-the-2020-presidential-election', 5503, actual=0)
economist_538['data']
economist_538['brier']
economist_538['winnings']
plot_predictions(economist_538, '538 prez forecast beat Economist?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
Biden in-person inauguration
## https://www.metaculus.com/questions/6293/biden-in-person-inauguration/
biden_in_person = compare_metaculus_vs_polymarket('https://polymarket.com/market/will-joe-biden-be-officially-inaugurated-as-president-in-person-outside-the-us-capitol-on-january-20th-2021', 6293, actual=1)
biden_in_person['data']
biden_in_person['brier']
biden_in_person['winnings']
plot_predictions(biden_in_person, 'Biden inaugurated in-person on 20 Jan 2021?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
Trump at Biden's Inauguration
## https://www.metaculus.com/questions/5825/trump-at-bidens-inauguration/
trump_attend = compare_metaculus_vs_polymarket('https://polymarket.com/market/will-donald-trump-attend-joe-biden-s-inauguration-ceremony-in-person', 5825, actual=0)
trump_attend['data']
trump_attend['brier']
trump_attend['winnings']
plot_predictions(trump_attend, 'Trump attend Biden\'s inauguration?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
Electoral Challenge
## https://www.metaculus.com/questions/5844/electoral-college-results-challenged/
challenge = compare_metaculus_vs_polymarket('https://polymarket.com/market/will-any-electoral-certificates-be-formally-challenged-in-congress', 5844, actual=1)
challenge['data']
challenge['brier']
challenge['winnings']
plot_predictions(challenge, 'Electoral college challenge?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
Trump Convict
## https://www.metaculus.com/questions/6303/trump-convicted-by-senate/
trump_convict = compare_metaculus_vs_polymarket('https://polymarket.com/market/will-the-senate-convict-donald-trump-on-impeachment-before-june-1-2021', 6303, actual=0)
trump_convict['data']
trump_convict['brier']
trump_convict['winnings']
plot_predictions(trump_convict, 'Senate convict Trump in 2021?').show()
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
Tokyo Olympics
# https://polymarket.com/market/will-the-tokyo-summer-olympics-be-cancelled-or-postponed
# https://www.metaculus.com/questions/5555/rescheduled-2020-olympics/
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
Brier
(challenge['brier'] + trump_attend['brier'] + biden_in_person['brier'] + economist_538['brier'] + trump_pardon['brier'] + gop_senate['brier'] + trump_charges['brier'] + trump_convict['brier']) / 8

(challenge['winnings'] + trump_attend['winnings'] + biden_in_person['winnings'] + economist_538['winnings'] + trump_pardon['winnings'] + gop_senate['winnings'] + trump_charges['winnings'] + trump_convict['winnings']) / 8
_____no_output_____
MIT
Metaculus vs. Polymarket.ipynb
rethinkpriorities/compare_forecast_markets
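The averages above combine per-question Brier scores produced by `compare_metaculus_vs_polymarket`, a helper defined elsewhere in the notebook. As a reminder of the metric itself (not of that helper's actual implementation), the Brier score for binary forecasts is the mean squared difference between each forecast probability and the realized 0/1 outcome:

```python
def brier_score(probs, outcome):
    """Mean squared error between forecast probabilities and the realized
    0/1 outcome. Lower is better; 0.0 is a perfect forecast."""
    return sum((p - outcome) ** 2 for p in probs) / len(probs)

print(brier_score([0.9, 0.8], 1))  # ~0.025
print(brier_score([0.5], 0))       # 0.25
```

A coin-flip forecast of 0.5 always scores 0.25, which makes it a useful baseline when comparing the two platforms.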
Training a ConvNet PyTorch

In this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torch.utils.data import sampler

import torchvision.datasets as dset
import torchvision.transforms as T

import numpy as np
import matplotlib.pyplot as plt

import timeit
from tqdm import tqdm

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
What's this PyTorch business?

You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.

For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook). Why?

* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.

How will I learn PyTorch?

If you've used Torch before, but are new to PyTorch, this tutorial might be of use: http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html

Otherwise, this notebook will walk you through much of what you need to do to train models in Torch. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.

Load Datasets

We load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.
class ChunkSampler(sampler.Sampler):
    """Samples elements sequentially from some offset.
    Arguments:
        num_samples: # of desired datapoints
        start: offset where we should start selecting from
    """
    def __init__(self, num_samples, start=0):
        self.num_samples = num_samples
        self.start = start

    def __iter__(self):
        return iter(range(self.start, self.start + self.num_samples))

    def __len__(self):
        return self.num_samples

NUM_TRAIN = 49000
NUM_VAL = 1000

cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True, transform=T.ToTensor())
loader_train = DataLoader(cifar10_train, batch_size=64, sampler=ChunkSampler(NUM_TRAIN, 0))

cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True, transform=T.ToTensor())
loader_val = DataLoader(cifar10_val, batch_size=64, sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))

cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True, transform=T.ToTensor())
loader_test = DataLoader(cifar10_test, batch_size=64)
Files already downloaded and verified Files already downloaded and verified Files already downloaded and verified
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
# Constant to control how frequently we print train loss
print_every = 100

# This is a little utility that we'll use to reset the model
# if we want to re-initialize all our parameters
def reset(m):
    if hasattr(m, 'reset_parameters'):
        m.reset_parameters()
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
Example Model

Some assorted tidbits

Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.

We'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where:

* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the width of the intermediate feature map in pixels

This is the right way to represent the data when we are doing something like a 2D convolution, which needs a spatial understanding of where the intermediate features are relative to each other. When we input data into fully connected affine layers, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "Flatten" operation to collapse the C x H x W values per representation into a single long vector. The Flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
class Flatten(nn.Module):
    def forward(self, x):
        N, C, H, W = x.size()  # read in N, C, H, W
        return x.view(N, -1)   # "flatten" the C * H * W values into a single vector per image

def out_dim(sz, filter_size, padding, stride):
    """
    Computes the size of a dimension after convolution.

    Input:
    - sz: Original size of the dimension
    - filter_size: Filter size applied in the convolution
    - padding: Padding applied to the original dimension
    - stride: Stride between the two applications of the convolution

    Returns:
    - out: The size of the dimension after the convolution is computed
    """
    return 1 + int((sz + 2 * padding - filter_size) / stride)

# Verify that CUDA is properly configured and you have a GPU available
if torch.cuda.is_available():
    dtype = torch.cuda.FloatTensor
    ltype = torch.cuda.LongTensor
else:
    dtype = torch.FloatTensor
    ltype = torch.LongTensor
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
The example model itself

The first step to training your own model is defining its architecture.

Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer one after the other.

In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function and the Adam optimizer being used. Make sure you understand why the parameters of the Linear layer are 5408 and 10.
# Here's where we define the architecture of the model...
simple_model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=7, stride=2),
    nn.ReLU(inplace=True),
    Flatten(),  # see above for explanation
    nn.Linear(5408, 10),  # affine layer: 32 * out_dim(32, 7, 0, 2)**2 = 5408 inputs, 10 output classes
)

# Set the type of all data in this model to be FloatTensor
simple_model.type(dtype)

loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.Adam(simple_model.parameters(), lr=1e-2)  # lr sets the learning rate of the optimizer
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
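The Linear layer's 5408 input size can be verified by hand: a 7x7 convolution with stride 2 and no padding maps each 32-pixel dimension to floor((32 - 7) / 2) + 1 = 13, and 32 filters give 32 * 13 * 13 = 5408 flattened features. A self-contained sketch of that arithmetic (duplicating the out_dim formula above so it runs on its own):

```python
def conv_out_dim(size, filter_size, padding, stride):
    # Standard convolution output-size formula.
    return 1 + (size + 2 * padding - filter_size) // stride

side = conv_out_dim(32, 7, 0, 2)  # spatial size after the 7x7, stride-2 conv
flat = 32 * side * side           # 32 filters, each side x side
print(side, flat)                 # 13 5408
```

The 10 on the other side of the Linear layer is simply the number of CIFAR-10 classes.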
PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note: what we call in the class "spatial batch norm" is called "BatchNorm2d" in PyTorch.

* Layers: http://pytorch.org/docs/nn.html
* Activations: http://pytorch.org/docs/nn.html#non-linear-activations
* Loss functions: http://pytorch.org/docs/nn.html#loss-functions
* Optimizers: http://pytorch.org/docs/optim.html#algorithms

Training a specific model

In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the PyTorch documentation and configuring your own model. Using the code provided above as guidance, and using the following PyTorch documentation, specify a model with the following architecture:

* 7x7 Convolutional Layer with 32 filters and stride of 1
* ReLU Activation Layer
* Spatial Batch Normalization Layer
* 2x2 Max Pooling layer with a stride of 2
* Affine layer with 1024 output units
* ReLU Activation Layer
* Affine layer from 1024 input units to 10 outputs

And finally, set up a **cross-entropy** loss function and the **RMSprop** learning rule.
n_Conv2d = out_dim(32, 7, 0, 1)
n_MaxPool2d = out_dim(n_Conv2d, 2, 0, 2)
n_Flatten = 32 * n_MaxPool2d**2

fixed_model_base = nn.Sequential(  # You fill this in!
    nn.Conv2d(3, 32, kernel_size=7, stride=1),
    nn.ReLU(inplace=True),
    nn.BatchNorm2d(32),
    nn.MaxPool2d(2, stride=2),
    Flatten(),  # see above for explanation
    nn.Linear(n_Flatten, 1024),  # affine layer
    nn.ReLU(inplace=True),
    nn.Linear(1024, 10),  # affine layer
)

fixed_model = fixed_model_base.type(dtype)
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
## Now we're going to feed a random batch into the model you defined and make sure the output is the right size
x = torch.randn(64, 3, 32, 32).type(dtype)
x_var = Variable(x.type(dtype))  # Construct a PyTorch Variable out of your input data
ans = fixed_model(x_var)         # Feed it through the model!

# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
GPU!

Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.

If this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.
import copy

fixed_model_gpu = copy.deepcopy(fixed_model_base).type(dtype)

x_gpu = torch.randn(64, 3, 32, 32).type(dtype)
x_var_gpu = Variable(x_gpu.type(dtype))  # Construct a PyTorch Variable out of your input data
ans = fixed_model_gpu(x_var_gpu)         # Feed it through the model!

# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
Run the following cell to evaluate the performance of the forward pass running on the CPU:
%%timeit
ans = fixed_model(x_var)
1000 loops, best of 3: 445 µs per loop
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
... and now the GPU:
%%timeit
# torch.cuda.synchronize()  # Make sure there are no pending GPU computations
ans = fixed_model_gpu(x_var_gpu)  # Feed it through the model!
# torch.cuda.synchronize()  # Make sure there are no pending GPU computations
1000 loops, best of 3: 448 µs per loop
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder, that is *torch.cuda.FloatTensor* (assigned to *dtype* in this notebook when CUDA is available).

Train the model.

Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the simple_model we provided above).

Make sure you understand how each PyTorch function used below corresponds to what you implemented in your custom neural network implementation.

Note that because we are not resetting the weights anywhere below, if you run the cell multiple times, you are effectively training multiple epochs (so your performance should improve).

First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function:
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.RMSprop(fixed_model_gpu.parameters(), lr=1e-3)

# This sets the model in "training" mode. This is relevant for some layers that may have different behavior
# in training mode vs testing mode, such as Dropout and BatchNorm.
fixed_model_gpu.train()

# Load one batch at a time.
for t, (x, y) in enumerate(tqdm(loader_train)):
    x_var = Variable(x.type(dtype))
    y_var = Variable(y.type(ltype))

    # This is the forward pass: predict the scores for each class, for each x in the batch.
    scores = fixed_model_gpu(x_var)

    # Use the correct y values and the predicted y values to compute the loss.
    loss = loss_fn(scores, y_var)

    if (t + 1) % print_every == 0:
        print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))

    # Zero out all of the gradients for the variables which the optimizer will update.
    optimizer.zero_grad()

    # This is the backwards pass: compute the gradient of the loss with respect to each
    # parameter of the model.
    loss.backward()

    # Actually update the parameters of the model using the gradients computed by the backwards pass.
    optimizer.step()
26%|██▌ | 198/766 [00:01<00:05, 105.99it/s]
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:
def train(model, loss_fn, optimizer, num_epochs=1, verbose=True):
    for epoch in range(num_epochs):
        if verbose:
            print('Starting epoch %d / %d' % (epoch + 1, num_epochs))
        model.train()
        for t, (x, y) in enumerate(loader_train):
            x_var = Variable(x.type(dtype))
            y_var = Variable(y.type(ltype))

            scores = model(x_var)
            loss = loss_fn(scores, y_var)
            if (t + 1) % print_every == 0 and verbose:
                print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def check_accuracy(model, loader, verbose=True):
    if verbose:
        if loader.dataset.train:
            print('Checking accuracy on validation set')
        else:
            print('Checking accuracy on test set')
    num_correct = 0
    num_samples = 0
    model.eval()  # Put the model in test mode (the opposite of model.train(), essentially)
    for x, y in loader:
        x_var = Variable(x.type(dtype), volatile=True)

        scores = model(x_var)
        _, preds = scores.data.cpu().max(1)
        num_correct += (preds == y).sum()
        num_samples += preds.size(0)
    acc = float(num_correct) / num_samples
    if verbose:
        print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
    return acc

torch.cuda.random.manual_seed(12345)
_____no_output_____
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17
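Stripped of the tensor bookkeeping, check_accuracy just counts predictions that match their labels. A tensor-free sketch of that core computation (not the PyTorch code itself):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match their labels."""
    correct = sum(int(p == y) for p, y in zip(preds, labels))
    return correct / len(labels)

print(accuracy([1, 0, 2, 2], [1, 0, 1, 2]))  # 0.75
```

In the real helper, `(preds == y).sum()` performs the same count batchwise on tensors.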
Check the accuracy of the model.

Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.

You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.

But don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.
fixed_model_gpu.apply(reset)
train(fixed_model_gpu, loss_fn, optimizer, num_epochs=5)
check_accuracy(fixed_model_gpu, loader_val)
Starting epoch 1 / 5 t = 100, loss = 1.3351 t = 200, loss = 1.4855 t = 300, loss = 1.4892 t = 400, loss = 1.2383 t = 500, loss = 1.2223 t = 600, loss = 1.3844 t = 700, loss = 1.1986 Starting epoch 2 / 5 t = 100, loss = 0.9178 t = 200, loss = 0.9722 t = 300, loss = 1.0708 t = 400, loss = 0.8852 t = 500, loss = 0.9199 t = 600, loss = 1.0414 t = 700, loss = 0.8921 Starting epoch 3 / 5 t = 100, loss = 0.6192 t = 200, loss = 0.6360 t = 300, loss = 0.5818 t = 400, loss = 0.7068 t = 500, loss = 0.6241 t = 600, loss = 0.7583 t = 700, loss = 0.4911 Starting epoch 4 / 5 t = 100, loss = 0.3345 t = 200, loss = 0.2367 t = 300, loss = 0.2881 t = 400, loss = 0.3601 t = 500, loss = 0.2190 t = 600, loss = 0.3616 t = 700, loss = 0.2555 Starting epoch 5 / 5 t = 100, loss = 0.1729 t = 200, loss = 0.1571 t = 300, loss = 0.1178 t = 400, loss = 0.1417 t = 500, loss = 0.2129 t = 600, loss = 0.3370 t = 700, loss = 0.2010 Checking accuracy on validation set Got 623 / 1000 correct (62.30)
MIT
assignment2/PyTorch.ipynb
aoboturov/cs237n-17