<h2> Supply and demand </h2> The market-clearing equilibrium price, $p^*$, satisfies... $$ D(p_t) = S(p_t^e). $$ Really this is also an equilibrium in beliefs, because we also require that $p_t = p_t^e$!
interactive_supply_demand_plot = ipywidgets.interact(
    cobweb.supply_demand_plot,
    D=ipywidgets.fixed(quantity_demand),
    S=ipywidgets.fixed(quantity_supply),
    a=cobweb.a_float_slider,
    b=cobweb.b_float_slider,
    gamma=cobweb.gamma_float_slider,
    p_bar=cobweb.p_bar_float_slider,
)
notebooks/cobweb-models.ipynb
davidrpugh/sfi-complexity-mooc
mit
<h2> Analyzing dynamics of the model via simulation... </h2> The model has no closed form solution (i.e., we cannot solve for a function that describes $p_t^e$ as a function of time and model parameters). BUT, we can simulate equation 7 above to better understand the dynamics of the model... We can simulate our model and plot time series for different parameter values. Questions for discussion... <ol> <li> Can you find a two-cycle? What does this mean?</li> <li> Can you find higher cycles? Perhaps a four-cycle? Maybe even a three-cycle?</li> <li> Do simulations with similar initial conditions converge or diverge over time? </li> </ol> Can we relate these things to other SFI MOOCs on non-linear dynamics and chaos? Surely yes!
model = functools.partial(adaptive_expectations, inverse_demand, quantity_supply)

interactive_time_series_plot = ipywidgets.interact(
    cobweb.time_series_plot,
    F=ipywidgets.fixed(model),
    X0=cobweb.initial_expected_price_slider,
    T=cobweb.T_int_slider,
    a=cobweb.a_float_slider,
    b=cobweb.b_float_slider,
    w=cobweb.w_float_slider,
    gamma=cobweb.gamma_float_slider,
    p_bar=cobweb.p_bar_float_slider,
)
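As a rough standalone illustration of the dynamics being simulated, here is a minimal sketch under *assumed* linear forms: demand $D(p) = a - b p$, supply $S(p^e) = \gamma (p^e - \bar{p})$, and adaptive expectations $p^e_{t+1} = w p_t + (1 - w) p^e_t$. The functional forms, parameter values, and the function name `simulate_cobweb` are all hypothetical — the notebook's actual `cobweb` module and `adaptive_expectations` may differ.

```python
import numpy as np

def simulate_cobweb(a, b, gamma, p_bar, w, p0, T):
    """Simulate expected prices under adaptive expectations (hypothetical linear forms)."""
    p_e = np.empty(T)
    p_e[0] = p0
    for t in range(T - 1):
        supply = gamma * (p_e[t] - p_bar)
        p_t = (a - supply) / b                   # price that clears the market: D(p_t) = S(p_e)
        p_e[t + 1] = w * p_t + (1 - w) * p_e[t]  # adaptive expectation update
    return p_e

prices = simulate_cobweb(a=2.0, b=1.0, gamma=0.5, p_bar=1.0, w=0.5, p0=0.1, T=50)
```

For these parameter values the map is a contraction, so the simulated path converges to the steady state $p^* = (a + \gamma \bar{p})/(b + \gamma)$; chaotic or cyclic behavior appears for other parameter choices.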
<h2> Forecast errors </h2> How do we measure forecast error? What does the distribution of forecast errors look like for different parameters? Could an agent learn to avoid chaos? Specifically, suppose an agent learned to tune the value of $w$ in order to minimize its mean forecast error. Would this eliminate chaotic dynamics?
interactive_forecast_error_plot = ipywidgets.interact(
    cobweb.forecast_error_plot,
    D_inverse=ipywidgets.fixed(inverse_demand),
    S=ipywidgets.fixed(quantity_supply),
    F=ipywidgets.fixed(model),
    X0=cobweb.initial_expected_price_slider,
    T=cobweb.T_int_slider,
    a=cobweb.a_float_slider,
    b=cobweb.b_float_slider,
    w=cobweb.w_float_slider,
    gamma=cobweb.gamma_float_slider,
    p_bar=cobweb.p_bar_float_slider,
)
LU decomposition: factoring a matrix into the product of a lower-triangular matrix and an upper-triangular matrix
import numpy as np

def LU(A):
    """Factor A into unit lower-triangular L and upper-triangular U (no pivoting)."""
    U = np.copy(A)
    m, n = A.shape
    L = np.eye(n)
    for k in range(n - 1):
        for j in range(k + 1, n):
            L[j, k] = U[j, k] / U[k, k]
            U[j, k:n] -= L[j, k] * U[k, k:n]
    return L, U

A = np.array([[2, 1, 1, 0],
              [4, 3, 3, 1],
              [8, 7, 9, 3],
              [6, 7, 9, 8]]).astype(float)  # np.float is removed in modern NumPy; use float

L, U = LU(A)
L
U
A
L @ U
np.allclose(A, L @ U)
fastai_notes/LinearAlgebra/speech03.ipynb
AutuanLiu/Python
mit
The LU factorization is useful! Solving Ax = b becomes LUx = b: 1. find A = LU 2. solve Ly = b 3. solve Ux = y
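The three steps above can be sketched with explicit triangular solves. This is a minimal illustration on a small hand-checkable example (no pivoting, so it assumes nonzero pivots); `forward_sub` and `back_sub` are illustrative helper names, not part of the notebook.

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L by forward substitution."""
    n = L.shape[0]
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U by back substitution."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Small worked example with a known factorization A = L @ U
L = np.array([[1., 0.], [3., 1.]])
U = np.array([[2., 1.], [0., 1.]])
A = L @ U                       # [[2, 1], [6, 4]]
b = np.array([5., 17.])

y = forward_sub(L, b)           # step 2: solve L y = b
x = back_sub(U, y)              # step 3: solve U x = y
```

Each triangular solve costs $O(n^2)$, which is why reusing one $O(n^3)$ factorization across many right-hand sides pays off.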
v = np.array([1, 2, 3])
v
v.shape
v1 = np.expand_dims(v, -1)   # add a trailing axis -> shape (3, 1)
v1
v1.shape
v2 = v[np.newaxis]           # add a leading axis -> shape (1, 3)
v2
v2.shape
v3 = v[:, np.newaxis]        # same result as expand_dims(v, -1)
v3
v3.shape
Broadcasting: When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions and works its way forward. Two dimensions are compatible when they are equal, or one of them is 1. Sparse matrices: The most common sparse storage formats are: coordinate-wise (scipy calls this COO), compressed sparse row (CSR), and compressed sparse column (CSC).
import sklearn
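Both ideas above can be demonstrated briefly. Note that sparse formats live in `scipy.sparse` rather than scikit-learn, so this sketch assumes SciPy is installed; the variable names are illustrative.

```python
import numpy as np
from scipy import sparse

# Broadcasting: shapes compared from trailing dims; equal-or-1 is compatible.
col = np.arange(3)[:, np.newaxis]   # shape (3, 1)
row = np.arange(4)                  # shape (4,)
grid = col * 10 + row               # broadcasts to shape (3, 4)

# Sparse formats: the same matrix stored three ways.
dense = np.array([[0, 0, 3],
                  [4, 0, 0],
                  [0, 5, 0]])
coo = sparse.coo_matrix(dense)      # coordinate-wise (row, col, value) triples
csr = coo.tocsr()                   # compressed sparse row: fast row ops / matvec
csc = coo.tocsc()                   # compressed sparse column: fast column ops
```

COO is convenient for building a matrix; CSR/CSC are what you convert to for arithmetic.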
Load Data For this section, I'll load the data into a pandas data frame using urlopen from the urllib.request module. Instead of downloading a csv, I implemented this method (inspired by a Python tutorial) where I grab the data straight from the UCI Machine Learning Repository using an http request.
# Loading data and cleaning dataset
UCI_data_URL = ('https://archive.ics.uci.edu/ml/machine-learning-databases'
                '/breast-cancer-wisconsin/wdbc.data')
notebooks/02_random_forest.ipynb
raviolli77/machineLearning_breastCancer_Python
mit
I do recommend keeping a static copy of the dataset as well. Next, I create a list with the appropriate column names, set them as the data frame's column names, and load the data into a pandas data frame.
names = ['id_number', 'diagnosis', 'radius_mean',
         'texture_mean', 'perimeter_mean', 'area_mean',
         'smoothness_mean', 'compactness_mean',
         'concavity_mean', 'concave_points_mean',
         'symmetry_mean', 'fractal_dimension_mean',
         'radius_se', 'texture_se', 'perimeter_se',
         'area_se', 'smoothness_se', 'compactness_se',
         'concavity_se', 'concave_points_se',
         'symmetry_se', 'fractal_dimension_se',
         'radius_worst', 'texture_worst',
         'perimeter_worst', 'area_worst',
         'smoothness_worst', 'compactness_worst',
         'concavity_worst', 'concave_points_worst',
         'symmetry_worst', 'fractal_dimension_worst']

dx = ['Benign', 'Malignant']

breast_cancer = pd.read_csv(urlopen(UCI_data_URL), names=names)
Cleaning We do some minor cleanup, like setting id_number as the data frame index and converting the diagnosis to the standard binary 1/0 representation using the map() function.
# Setting 'id_number' as our index
breast_cancer.set_index(['id_number'], inplace=True)

# Converted to binary to help later on with models and plots
breast_cancer['diagnosis'] = breast_cancer['diagnosis'].map({'M': 1, 'B': 0})
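As a tiny standalone illustration of the `map()` recoding used above (with a made-up four-row series, not the actual data):

```python
import pandas as pd

# Recode string labels to 0/1, mirroring the diagnosis conversion
s = pd.Series(['M', 'B', 'B', 'M'])
coded = s.map({'M': 1, 'B': 0})
```

Labels absent from the mapping dictionary would become NaN, which is worth keeping in mind when recoding messier columns.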
Missing Values Given the context of the data set, I know there is no missing data, but I still check every column for missing values, printing the total missing values per column.
breast_cancer.isnull().sum()
This will be used for the random forest model, where the id_number won't be relevant.
# For later use in CART models names_index = names[2:]
Let's preview the data set utilizing the head() function which will give the first 5 values of our data frame.
breast_cancer.head()
Next, we'll check the dimensions of the data set, where the first value is the number of patients and the second value is the number of features. We also print the data types of our columns; this is important because it is often an indicator of missing data, and it gives us context for any further cleanup.
print("Here's the dimensions of our data frame:\n", breast_cancer.shape)
print("Here's the data types of our columns:\n", breast_cancer.dtypes)
Class Imbalance The distribution of diagnosis is important because it brings up the discussion of Class Imbalance within machine learning and data mining applications. Class Imbalance refers to when one target class within a data set is outnumbered by the other target class (or classes). This can lead to misleading accuracy metrics, known as the accuracy paradox, so we have to make sure our target classes aren't imbalanced. We do so by creating a function that outputs the distribution of the target classes. NOTE: If your data set suffers from class imbalance I suggest reading documentation on upsampling and downsampling.
def print_target_perc(data_frame, col):
    """Function used to print class distribution for our data set"""
    try:
        # If the number of unique instances in column exceeds 20, print warning
        if data_frame[col].nunique() > 20:
            return print('Warning: there are {0} values in `{1}` column which '
                         'exceed the max of 20 for this function. '
                         'Please try a column with lower value counts!'
                         .format(data_frame[col].nunique(), col))
        # Stores value counts
        col_vals = data_frame[col].value_counts().sort_values(ascending=False)
        # Resets index to make index a column in data frame
        col_vals = col_vals.reset_index()

        # Create a function to output the percentage
        f = lambda x, y: 100 * (x / sum(y))
        for i in range(0, len(col_vals['index'])):
            print('`{0}` accounts for {1:.2f}% of the {2} column'
                  .format(col_vals['index'][i],
                          f(col_vals[col].iloc[i], col_vals[col]),
                          col))
    # Goes here if it can't find the column in the data frame
    except KeyError as e:
        raise KeyError('{0}: Not found. Please choose the right column name!'.format(e))

print_target_perc(breast_cancer, 'diagnosis')
Fortunately, this data set does not suffer from class imbalance. Next we will use a useful function that gives us standard descriptive statistics for each feature including mean, standard deviation, minimum value, maximum value, and range intervals.
breast_cancer.describe()
We can see through the maximum row that our data varies in distribution; this will be important when considering classification models. Standardization is an important pre-processing requirement for many classification models. Some models (like neural networks) can perform poorly if pre-processing isn't considered, so the describe() function can be a good indicator for whether standardization is needed. Fortunately, Random Forest does not require any pre-processing (for use of categorical data see sklearn's Encoding Categorical Data section). Creating Training and Test Sets We split the data set into our training and test sets, which will be (pseudo) randomly selected with an 80-20% split. We will use the training set to train our model, along with some optimization, and use our test set as the unseen data that will serve as a useful final metric for how well our model does. When using this method for machine learning, always be wary of utilizing your test set when creating models. Data leakage is a serious issue that is common in practice and can result in over-fitting. More on data leakage can be found in this Kaggle article.
feature_space = breast_cancer.iloc[:, breast_cancer.columns != 'diagnosis']
feature_class = breast_cancer.iloc[:, breast_cancer.columns == 'diagnosis']

training_set, test_set, class_set, test_class_set = train_test_split(
    feature_space, feature_class, test_size=0.20, random_state=42)
NOTE: What I mean by pseudo-random is that we want everyone who replicates this project to get the same results. So we use a random seed generator set to a number of our choosing; this makes the results the same for anyone who uses the same seed, which is great for reproducibility.
# Cleaning test sets to avoid future warning messages
class_set = class_set.values.ravel()
test_class_set = test_class_set.values.ravel()
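The reproducibility point is easy to demonstrate with NumPy alone: two generators seeded with the same number produce identical "random" output, which is exactly why a fixed random_state gives everyone the same split (a standalone sketch; the array size here is arbitrary):

```python
import numpy as np

# Same seed -> same permutation, hence a reproducible "random" split
idx_a = np.random.RandomState(42).permutation(10)
idx_b = np.random.RandomState(42).permutation(10)
same = bool((idx_a == idx_b).all())
```

A different seed, or no seed at all, would generally produce a different permutation on each run.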
Fitting Random Forest Now we will create the model, with no parameter tuning aside from setting the random seed. What I mean by parameter tuning is that different machine learning models take various parameters which have to be tuned by the person implementing the algorithm. Here's a brief overview of the parameters I will be tuning in this tutorial: + max_depth: the maximum depth of the trees in the forest. + bootstrap: whether or not to use bootstrap samples when building trees. + max_features: the maximum number of features considered when splitting a node (the main difference, mentioned previously, between bagged trees and Random Forest). Typically we want a value less than p, where p is the number of features in our dataset. + criterion: the metric used to assess the quality of splits in the decision trees; more on this later. Once we've instantiated our model we will go ahead and tune our parameters.
# Set the random state for reproducibility
fit_rf = RandomForestClassifier(random_state=42)
Hyperparameters Optimization Utilizing the GridSearchCV functionality, I create a dictionary with the parameters I am looking to optimize to create the best model for our data. Setting n_jobs to 3 tells the grid search to run 3 jobs in parallel, reducing the time the function takes to compute the best parameters. I included the timer to help see how long different jobs took, ultimately deciding on 3 parallel jobs. This will set parameters which I will then use to tune one final parameter: the number of trees in my forest.
np.random.seed(42)
start = time.time()

param_dist = {'max_depth': [2, 3, 4],
              'bootstrap': [True, False],
              'max_features': ['auto', 'sqrt', 'log2', None],
              'criterion': ['gini', 'entropy']}

cv_rf = GridSearchCV(fit_rf, cv=10,
                     param_grid=param_dist,
                     n_jobs=3)

cv_rf.fit(training_set, class_set)
print('Best Parameters using grid search: \n', cv_rf.best_params_)

end = time.time()
print('Time taken in grid search: {0: .2f}'.format(end - start))
Once we are given the best parameter combination, we set those parameters on our model. Notice how we didn't utilize the bootstrap: True parameter; this will make sense in the following section.
# Set best parameters given by grid search
fit_rf.set_params(criterion='gini',
                  max_features='log2',
                  max_depth=3)
Out of Bag Error Rate Another useful feature of Random Forest is the concept of the Out of Bag (OOB) error rate. When creating the forest, each tree is typically trained on roughly 2/3 of the data (a bootstrap sample), which leaves about 1/3 of unseen data that we can utilize in a way that is advantageous to our accuracy metrics without being as computationally expensive as cross validation. When calculating OOB, two parameters have to be changed as outlined below. Also, utilizing a for-loop across a multitude of forest sizes, we can calculate the OOB error rate and use it to assess how many trees are appropriate for our model! NOTE: Calculating the OOB score requires bootstrap=True; setting bootstrap=False would produce errors, as stated in this example. For the original analysis I compared Kth Nearest Neighbor, Random Forest, and Neural Networks, so most of the analysis was done to compare across different models.
fit_rf.set_params(warm_start=True, oob_score=True)

min_estimators = 15
max_estimators = 1000
error_rate = {}

for i in range(min_estimators, max_estimators + 1, 5):
    fit_rf.set_params(n_estimators=i)
    fit_rf.fit(training_set, class_set)

    oob_error = 1 - fit_rf.oob_score_
    error_rate[i] = oob_error

# Convert dictionary to a pandas series for easy plotting
oob_series = pd.Series(error_rate)

fig, ax = plt.subplots(figsize=(10, 10))
ax.set_facecolor('#fafafa')
oob_series.plot(kind='line', color='red')
plt.axhline(0.055, color='#875FDB', linestyle='--')
plt.axhline(0.05, color='#875FDB', linestyle='--')
plt.xlabel('n_estimators')
plt.ylabel('OOB Error Rate')
plt.title('OOB Error Rate Across various Forest sizes \n(From 15 to 1000 trees)')
The OOB error rate starts to oscillate at around 400 trees, so I will use my judgement and settle on 400 trees for my forest. Using the pandas series object I can easily find the OOB error rate for that estimator as follows:
print('OOB Error rate for 400 trees is: {0:.5f}'.format(oob_series[400]))
Utilizing the OOB error rate that was created with the model gives us an unbiased error rate. This can be helpful when cross validation and/or hyperparameter optimization prove too computationally expensive, since the OOB rate comes for free with the model estimation. For the sake of this tutorial I will also go over the other traditional machine learning methods, including the training and test error route, along with cross validation metrics. Traditional Training and Test Set Split In order for this methodology to work, we set the number of trees found using the OOB error rate, remove the warm_start and oob_score parameters, and include the bootstrap parameter.
fit_rf.set_params(n_estimators=400, bootstrap = True, warm_start=False, oob_score=False)
Training Algorithm Next we train the algorithm utilizing the training and target class set we had made earlier.
fit_rf.fit(training_set, class_set)
Variable Importance Once we have trained the model, we are able to assess the concept of variable importance. A downside to creating ensemble methods with Decision Trees is that we lose the interpretability a single tree gives. A single tree can outline important node splits along with the variables that were important at each split. Fortunately, ensemble methods utilizing CART models use a metric to evaluate the homogeneity of splits, and when creating ensembles these metrics can give insight into the important variables used in training the model. Two metrics that are used are gini impurity and entropy. The two metrics vary and, from reading documentation online, many people favor gini impurity due to the computational cost of entropy, since it requires calculating a logarithm. For more discussion I recommend reading this article. Here we define each metric: $$Gini\ Impurity = 1 - \sum_i p_i^2$$ $$Entropy = \sum_i -p_i * \log_2 p_i$$ where $p_i$ is defined as the proportion of subsamples that belong to a certain target class. Since we are utilizing Gini impurity, the impurity measure reaches 0 when all target class labels are the same. We access the feature importance of the model and use a helper function to output the importance of our variables in descending order.
def variable_importance(fit):
    """
    Purpose
    ----------
    Checks if model is a fitted CART model, then produces variable importance
    and respective indices in a dictionary.

    Parameters
    ----------
    * fit: Fitted model containing the attribute feature_importances_

    Returns
    ----------
    Dictionary containing arrays with importance score and index of columns
    ordered in descending order of importance.
    """
    try:
        if not hasattr(fit, 'fit'):
            return print("'{0}' is not an instantiated model from scikit-learn".format(fit))
        # Captures whether the model has been trained
        if not vars(fit)["estimators_"]:
            return print("Model does not appear to be trained.")
    except KeyError:
        raise KeyError("Model entered does not contain 'estimators_' attribute.")

    importances = fit.feature_importances_
    indices = np.argsort(importances)[::-1]
    return {'importance': importances, 'index': indices}

var_imp_rf = variable_importance(fit_rf)

# Create separate variables for each attribute
importances_rf = var_imp_rf['importance']
indices_rf = var_imp_rf['index']

def print_var_importance(importance, indices, name_index):
    """
    Purpose
    ----------
    Prints dependent variable names ordered from largest to smallest
    based on information gain for CART model.

    Parameters
    ----------
    * importance: Array returned from feature_importances_ for CART models
        organized by dataframe index
    * indices: Organized index of dataframe from largest to smallest
        based on feature_importances_
    * name_index: Name of columns included in model

    Returns
    ----------
    Prints feature importance in descending order
    """
    print("Feature ranking:")
    for f in range(0, indices.shape[0]):
        print("{0}. The feature '{1}' has a Mean Decrease in Impurity of {2:.5f}"
              .format(f + 1, name_index[indices[f]], importance[indices[f]]))

print_var_importance(importances_rf, indices_rf, names_index)
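The impurity definitions above are easy to check numerically. This is a standalone sketch (the helper names `gini` and `entropy` are illustrative, not part of the notebook's functions):

```python
import numpy as np

def gini(p):
    """Gini impurity for class proportions p (assumed to sum to 1)."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    """Entropy in bits; 0 * log(0) is treated as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

g_pure = gini([1.0, 0.0])     # homogeneous node -> impurity 0
g_even = gini([0.5, 0.5])     # maximally mixed two-class node -> 0.5
h_even = entropy([0.5, 0.5])  # maximally mixed two-class node -> 1 bit
```

Both measures are zero for a pure node and maximal for a 50/50 split, which is why either works as a split-quality criterion.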
We can see here that our top 5 variables were area_worst, perimeter_worst, concave_points_worst, concave_points_mean, radius_worst. This can give us great insight for further analysis like feature engineering, although we won't go into this during this tutorial. This step can help give insight to the practitioner and audience as to what variables played an important part to the predictions generated by the model. In our test case, this can help people in the medical field focus on the top variables and their relationship with breast cancer.
def variable_importance_plot(importance, indices, name_index):
    """
    Purpose
    ----------
    Prints bar chart detailing variable importance for CART model
    NOTE: feature_space list was created because the bar chart is transposed
    and the index would otherwise be in incorrect order.

    Parameters
    ----------
    * importance: Array returned from feature_importances_ for CART models
        organized by dataframe index
    * indices: Organized index of dataframe from largest to smallest
        based on feature_importances_
    * name_index: Name of columns included in model

    Returns:
    ----------
    Returns variable importance plot in descending order
    """
    index = np.arange(len(name_index))
    importance_desc = sorted(importance)
    feature_space = []
    for i in range(indices.shape[0] - 1, -1, -1):
        feature_space.append(name_index[indices[i]])

    fig, ax = plt.subplots(figsize=(10, 10))
    ax.set_facecolor('#fafafa')
    plt.title('Feature importances for Random Forest Model'
              '\nBreast Cancer (Wisconsin)')
    plt.barh(index, importance_desc, align="center", color='#875FDB')
    plt.yticks(index, feature_space)
    plt.ylim(-1, indices.shape[0])
    plt.xlim(0, max(importance_desc) + 0.01)
    plt.xlabel('Mean Decrease in Impurity')
    plt.ylabel('Feature')
    plt.show()
    plt.close()

variable_importance_plot(importances_rf, indices_rf, names_index)
The visual helps drive home the point of variable importance, since you can clearly see the difference in importance of variables for the ensemble method. Certain cutoff points can be chosen to reduce the number of included features, which can help the accuracy of the model, since we'd be removing what is considered noise within our feature space. Cross Validation Cross validation is a powerful tool used for estimating the predictive power of your model, and it performs better than the conventional training and test set split. With cross validation we are essentially creating multiple training and test sets, then averaging the scores to give us a less biased metric. In our case we create 10 sets within our data set, repeat the estimation we have done already on each, and then average the prediction error to give a more accurate representation of our model's predictive power, since the model's performance can vary significantly across different training and test sets. Suggested Reading: For a more concise explanation of cross validation I recommend reading An Introduction to Statistical Learning with Applications in R, specifically chapter 5.1! K-Fold Cross Validation Here we are employing K-Fold Cross Validation, more specifically 10 folds. We therefore create 10 subsets of the data, apply the training and test set methodology to each, and average the accuracy across all folds to give us our estimation. Within a Random Forest context, if your data set is significantly large you can choose to skip cross validation and use the OOB error rate as an unbiased metric to save computation, but for this tutorial I included it to show the different accuracy metrics available.
def cross_val_metrics(fit, training_set, class_set, estimator, print_results=True):
    """
    Purpose
    ----------
    Function helps automate cross validation processes while including
    option to print metrics or store in variable

    Parameters
    ----------
    fit: Fitted model
    training_set: Data frame containing 80% of original dataframe
    class_set: Data frame containing the respective target values
        for the training_set
    print_results: Boolean; if True prints the metrics, else returns
        the metrics as variables

    Returns
    ----------
    scores.mean(): Float representing cross validation score
    scores.std() / 2: Float representing the standard error (derived
        from the cross validation score's standard deviation)
    """
    my_estimators = {
        'rf': 'estimators_',
        'nn': 'out_activation_',
        'knn': '_fit_method'
    }
    try:
        # Captures whether first parameter is a model
        if not hasattr(fit, 'fit'):
            return print("'{0}' is not an instantiated model from scikit-learn".format(fit))
        # Captures whether the model has been trained
        if not vars(fit)[my_estimators[estimator]]:
            return print("Model does not appear to be trained.")
    except KeyError as e:
        raise KeyError("'{0}' does not correspond with the appropriate key inside "
                       "the estimators dictionary. Please refer to function to "
                       "check `my_estimators` dictionary.".format(estimator))

    n = KFold(n_splits=10)
    scores = cross_val_score(fit, training_set, class_set, cv=n)
    if print_results:
        for i in range(0, len(scores)):
            print("Cross validation run {0}: {1: 0.3f}".format(i, scores[i]))
        print("Accuracy: {0: 0.3f} (+/- {1: 0.3f})"
              .format(scores.mean(), scores.std() / 2))
    else:
        return scores.mean(), scores.std() / 2

cross_val_metrics(fit_rf, training_set, class_set, 'rf', print_results=True)
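Under the hood, K-Fold simply partitions the row indices into k disjoint folds, each used once as the test set. A minimal NumPy-only sketch of that idea (sklearn's KFold additionally supports shuffling and stratified variants; `kfold_indices` is an illustrative name):

```python
import numpy as np

def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs partitioning range(n) into k folds."""
    folds = np.array_split(np.arange(n), k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate(folds[:i] + folds[i + 1:])
        yield train, test

splits = list(kfold_indices(10, 5))   # 5 folds: each row is a test row exactly once
```

Averaging a score over these k train/test pairs is exactly what `cross_val_score` reports.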
Test Set Metrics Now we will utilize the test set that was created earlier to receive another evaluation metric for our model. Recall the importance of data leakage: we didn't touch the test set until now, after hyperparameter optimization was done. We create a confusion matrix showcasing the following metrics: | n = Sample Size | Predicted Benign | Predicted Malignant | |-----------------|------------------|---------------------| | Actual Benign | True Negative | False Positive | | Actual Malignant | False Negative | True Positive |
predictions_rf = fit_rf.predict(test_set)
Confusion Matrix Here we create a confusion matrix visual with seaborn, transposing the matrix when creating the heatmap.
def create_conf_mat(test_class_set, predictions):
    """Function returns confusion matrix comparing two arrays"""
    if (len(test_class_set.shape) != len(predictions.shape) == 1):
        return print('Arrays entered are not 1-D.\nPlease enter the correctly sized sets.')
    elif (test_class_set.shape != predictions.shape):
        return print('Number of values inside the Arrays are not equal to each other.\n'
                     'Please make sure the array has the same number of instances.')
    else:
        # Set Metrics
        test_crosstb_comp = pd.crosstab(index=test_class_set,
                                        columns=predictions)
        test_crosstb = test_crosstb_comp.values
        return test_crosstb

conf_mat = create_conf_mat(test_class_set, predictions_rf)
sns.heatmap(conf_mat, annot=True, fmt='d', cbar=False)
plt.xlabel('Predicted Values')
plt.ylabel('Actual Values')
plt.title('Actual vs. Predicted Confusion Matrix')
plt.show()

accuracy_rf = fit_rf.score(test_set, test_class_set)
print("Here is our mean accuracy on the test set:\n {0:.3f}"
      .format(accuracy_rf))

# Here we calculate the test error rate!
test_error_rate_rf = 1 - accuracy_rf
print("The test error rate for our model is:\n {0: .4f}"
      .format(test_error_rate_rf))
As you can see, we got a test set error rate very similar to our OOB error rate, which is a good sign for our model. ROC Curve Metrics The Receiver Operating Characteristic curve plots the False Positive Rate against the True Positive Rate across different thresholds. We will now graph these calculations. A curve hugging the top left corner of the plot indicates an ideal model, i.e. a False Positive Rate of 0 and a True Positive Rate of 1, whereas an ROC curve along the 45 degree diagonal is indicative of a model that is essentially randomly guessing. We also calculated the Area Under the Curve (AUC); the AUC is used as a metric for how well the model differentiates between those with the disease and those without it. A value closer to one means the model can correctly distinguish a randomly chosen pair of patients, one with and one without the disease.
# We grab the second array from the output which corresponds
# to the predicted probabilities of positive classes
# Ordered wrt fit.classes_, in our case [0, 1], where 1 is our positive class
predictions_prob = fit_rf.predict_proba(test_set)[:, 1]

fpr2, tpr2, _ = roc_curve(test_class_set, predictions_prob, pos_label=1)

auc_rf = auc(fpr2, tpr2)

def plot_roc_curve(fpr, tpr, auc, estimator, xlim=None, ylim=None):
    """
    Purpose
    ----------
    Function creates ROC Curve for respective model given selected parameters.
    Optional x and y limits to zoom into graph

    Parameters
    ----------
    * fpr: Array returned from sklearn.metrics.roc_curve for increasing
        false positive rates
    * tpr: Array returned from sklearn.metrics.roc_curve for increasing
        true positive rates
    * auc: Float returned from sklearn.metrics.auc (Area under Curve)
    * estimator: String representation of appropriate model, can only contain
        the following: ['knn', 'rf', 'nn']
    * xlim: Set upper and lower x-limits
    * ylim: Set upper and lower y-limits
    """
    my_estimators = {'knn': ['Kth Nearest Neighbor', 'deeppink'],
                     'rf': ['Random Forest', 'red'],
                     'nn': ['Neural Network', 'purple']}
    try:
        plot_title = my_estimators[estimator][0]
        color_value = my_estimators[estimator][1]
    except KeyError as e:
        raise KeyError("'{0}' does not correspond with the appropriate key inside "
                       "the estimators dictionary. Please refer to function to "
                       "check `my_estimators` dictionary.".format(estimator))

    fig, ax = plt.subplots(figsize=(10, 10))
    ax.set_facecolor('#fafafa')
    plt.plot(fpr, tpr, color=color_value, linewidth=1)
    plt.title('ROC Curve For {0} (AUC = {1: 0.3f})'
              .format(plot_title, auc))
    plt.plot([0, 1], [0, 1], 'k--', lw=2)  # Add diagonal line
    plt.plot([0, 0], [1, 0], 'k--', lw=2, color='black')
    plt.plot([1, 0], [1, 1], 'k--', lw=2, color='black')
    if xlim is not None:
        plt.xlim(*xlim)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.show()
    plt.close()

plot_roc_curve(fpr2, tpr2, auc_rf, 'rf',
               xlim=(-0.01, 1.05),
               ylim=(0.001, 1.05))
Our model did exceptionally well with an AUC over 0.90. Now we zoom in to showcase how close our ROC curve is to the ideal one.
plot_roc_curve(fpr2, tpr2, auc_rf, 'rf', xlim=(-0.01, 0.2), ylim=(0.85, 1.01))
Classification Report The classification report is available through sklearn.metrics; this report gives many important classification metrics including: + Precision: also the positive predictive value, the number of correct positive predictions divided by the number of correct positive predictions plus false positives, so $tp / (tp + fp)$ + Recall: also known as sensitivity, the number of correct positive predictions divided by the number of correct positive predictions plus false negatives, so $tp / (tp + fn)$, where $fn$ is the number of false negatives + f1-score: the weighted harmonic mean of precision and recall, where 1 is the best f1-score and 0 the worst, as defined by the documentation + support: the number of instances that have the given true target value Across the board we can see that our model provided great insight into classifying patients based on the FNA scans. An important metric to consider here is minimizing the false negative rate, since within this context it would be bad for the model to tell someone that they are cancer free when in reality they have cancer.
def print_class_report(predictions, alg_name):
    """
    Purpose
    ----------
    Function helps automate the report generated by the sklearn
    package. Useful for multiple model comparison

    Parameters:
    ----------
    predictions: The predictions made by the algorithm used
    alg_name: String containing the name of the algorithm used

    Returns:
    ----------
    Returns classification report generated from sklearn.
    """
    print('Classification Report for {0}:'.format(alg_name))
    # classification_report expects the true labels first, then the predictions
    report = classification_report(test_class_set, predictions,
                                   target_names = dx)
    print(report)
    return report

class_report = print_class_report(predictions_rf, 'Random Forest')
notebooks/02_random_forest.ipynb
raviolli77/machineLearning_breastCancer_Python
mit
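The metric definitions above are easy to verify by hand. Below is a minimal sketch that computes precision, recall, and the f1-score directly from the $tp$/$fp$/$fn$ counts; the two label lists are invented purely for illustration:

```python
# Toy ground truth and predictions (1 = positive class); values are made up
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)                           # tp / (tp + fp)
recall = tp / (tp + fn)                              # tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two

print(precision, recall, f1)  # 0.75 0.75 0.75
```

These hand-rolled numbers should match the corresponding column of sklearn's classification_report for the same inputs.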
You can check the version of the library. Because pandas is a fast-evolving library, you want to make sure that you have an up-to-date version.
pd.__version__
m02-history/lab02.ipynb
yy/dviz-course
mit
You also need matplotlib, which is used by pandas to plot figures. The following is the most common convention to import matplotlib library.
import matplotlib.pyplot as plt
m02-history/lab02.ipynb
yy/dviz-course
mit
Let's check its version too.
import matplotlib
matplotlib.__version__
m02-history/lab02.ipynb
yy/dviz-course
mit
Loading a CSV data file

Using pandas, you can read tabular data files in many formats and through many protocols. Pandas supports not only flat files such as .csv, but also various other formats including clipboard, Excel, JSON, HTML, Feather, Parquet, SQL, Google BigQuery, and so on. Moreover, you can pass a local file path or a URL. If it's on Amazon S3, just pass a URL like s3://path/to/file.csv. If it's on a webpage, then just use https://some/url.csv. Let's load a dataset about the location of pumps in John Snow's map. You can download the file to your computer and try to load it using the local path too.
pump_df = pd.read_csv('https://raw.githubusercontent.com/yy/dviz-course/master/data/pumps.csv')
m02-history/lab02.ipynb
yy/dviz-course
mit
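Since read_csv accepts any file-like object, not just paths and URLs, you can also feed it an in-memory buffer, which is handy for quick experiments. The tiny CSV below is invented for illustration:

```python
import io
import pandas as pd

csv_text = "X,Y\n8.6,17.9\n10.9,18.5\n13.3,17.4\n"   # made-up pump-like coordinates
toy_df = pd.read_csv(io.StringIO(csv_text))
print(toy_df.shape)  # (3, 2)
```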
df stands for "Data Frame", which is a fundamental data object in Pandas. You can take a look at the dataset by looking at the first few lines.
pump_df.head()
m02-history/lab02.ipynb
yy/dviz-course
mit
Q1: can you print only the first three lines? Refer to: http://pandas.pydata.org/pandas-docs/stable/index.html
# TODO: write your code here
m02-history/lab02.ipynb
yy/dviz-course
mit
You can also sample several rows randomly. If the data is sorted in some way, sampling may give you a more unbiased view of the dataset.
# Your code here
m02-history/lab02.ipynb
yy/dviz-course
mit
You can also figure out the number of rows in the dataset by running
len(pump_df)
m02-history/lab02.ipynb
yy/dviz-course
mit
Note that df.size does not give you the number of rows. It tells you the number of elements.
pump_df.size
m02-history/lab02.ipynb
yy/dviz-course
mit
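In other words, size equals rows times columns. A quick check on a small made-up frame:

```python
import pandas as pd

toy_df = pd.DataFrame({"X": [1.0, 2.0, 3.0], "Y": [4.0, 5.0, 6.0]})
print(len(toy_df))   # 3  (rows)
print(toy_df.size)   # 6  (rows * columns)
```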
You can also look at the shape of the dataset, as well as what columns it contains.
pump_df.shape # 13 rows and 2 columns
pump_df.columns
m02-history/lab02.ipynb
yy/dviz-course
mit
You can also check out basic descriptive statistics of the whole dataset by using describe() method.
pump_df.describe()
m02-history/lab02.ipynb
yy/dviz-course
mit
You can slice the data like a list
pump_df[:2]
pump_df[-2:]
pump_df[1:5]
m02-history/lab02.ipynb
yy/dviz-course
mit
or filter rows using some conditions.
pump_df[pump_df.X > 13]
m02-history/lab02.ipynb
yy/dviz-course
mit
Now let's load another CSV file that documents the cholera deaths. The URL is https://raw.githubusercontent.com/yy/dviz-course/master/data/deaths.csv

Q2: load the death dataset and inspect it

- Load this dataset as death_df.
- Show the first 2 rows.
- Show the total number of rows.
# TODO: Remove below dummy dataframe and write your code here. You probably want to create multiple cells. death_df = pd.DataFrame({"X": [2., 3.], "Y": [1., 2.]})
m02-history/lab02.ipynb
yy/dviz-course
mit
Some visualizations? Let's visualize them! Pandas actually provides a nice visualization interface that uses matplotlib under the hood. You can do many basic plots without learning matplotlib. So let's try.
death_df.plot()
m02-history/lab02.ipynb
yy/dviz-course
mit
This is not what we want! When asked to plot the data, pandas tries to figure out what we want based on the type of the data. However, that doesn't mean that it will successfully do so! Oh, by the way, depending on your environment, you may not see any plot. If you don't see anything, run the following command.
%matplotlib inline
m02-history/lab02.ipynb
yy/dviz-course
mit
Commands that start with % are called magic commands, which are available in IPython and Jupyter. The purpose of this command is to tell IPython / Jupyter to show the plot right here instead of trying to use other external viewers. Anyway, this doesn't seem like the plot we want. Instead of putting each row as a point in a 2D plane using X and Y as the coordinates, it just created a line chart. Let's fix it. Please take a look at the plot method documentation. How should we change the command? Which kind of plot do we want to draw? Yes, we want to draw a scatter plot using x and y as the Cartesian coordinates.
death_df.plot(x='X', y='Y', kind='scatter', label='Deaths')
m02-history/lab02.ipynb
yy/dviz-course
mit
I think I want to reduce the size of the dots and change the color to black. But it is difficult to find how to do that! It is sometimes quite annoying to figure out how to change how the visualization looks, especially when we use matplotlib. Unlike some other advanced tools, matplotlib does not provide a very coherent way to adjust your visualizations. That's one of the reasons why there are lots of visualization libraries that wrap matplotlib. Anyway, this is how you do it.
death_df.plot(x='X', y='Y', kind='scatter', label='Deaths', s=2, c='black')
m02-history/lab02.ipynb
yy/dviz-course
mit
Can we visualize both deaths and pumps?
death_df.plot(x='X', y='Y', s=2, c='black', kind='scatter', label='Deaths') pump_df.plot(x='X', y='Y', kind='scatter', c='red', s=8, label='Pumps')
m02-history/lab02.ipynb
yy/dviz-course
mit
Oh well, this is not what we want! We want to overlay them to see them together, right? How can we do that? Before going into that, we probably want to understand some key components of matplotlib figures.

Figure and Axes

Why do we have two separate plots? The reason is that, by default, the plot method creates a new *figure* instead of putting the plots inside a single figure. To avoid this, we need to create an Axes object ourselves and tell plot to use that axes. What is an axes? See this illustration.

<img src="https://matplotlib.org/1.5.1/_images/fig_map.png" alt="figure, axes, and axis" style="width: 500px;"/>

A figure can contain multiple axes (link), and an axes can contain multiple plots (link). Conveniently, when you call the plot method, it creates an axes and returns it to you.
ax = death_df.plot(x='X', y='Y', s=2, c='black', kind='scatter', label='Deaths') ax
m02-history/lab02.ipynb
yy/dviz-course
mit
This object contains all the information and objects in the plot we see. Whatever we want to do with this axes (e.g., changing x or y scale, overlaying other data, changing the color or size of symbols, etc.) can be done by accessing this object. Then you can pass this axes object to another plot to put both plots in the same axes. Note ax=ax in the second plot command. It tells the plot command where to draw the points.
ax = death_df.plot(x='X', y='Y', s=2, c='black', alpha=0.5, kind='scatter', label='Deaths') pump_df.plot(x='X', y='Y', kind='scatter', c='red', s=8, label='Pumps', ax=ax)
m02-history/lab02.ipynb
yy/dviz-course
mit
Although simply invoking the plot() command is quick and easy when doing an exploratory data analysis, it is usually better to be explicit about figure and axes objects. Here is the recommended way to create a plot: call the subplots() function (see https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.subplots.html) to get the figure and axes objects explicitly. As you can see below, subplots() creates an empty figure and returns the figure and axes objects to you. Then you can fill this empty canvas with your plots. Whatever manipulation you want to make to your figure (e.g., changing the size of the figure) or axes (e.g., drawing a new plot on it) can be done with the fig and ax objects. So whenever possible, use this method! Now, can you use this method to produce the same plot just above?
import matplotlib.pyplot as plt fig, ax = plt.subplots() # your code here
m02-history/lab02.ipynb
yy/dviz-course
mit
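Here is one possible sketch of the overlay using the explicit subplots() pattern. The two small frames below are fabricated stand-ins for death_df and pump_df so the snippet is self-contained, and the Agg backend is selected so it runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; not needed inside a notebook
import matplotlib.pyplot as plt
import pandas as pd

# Fabricated stand-ins for death_df and pump_df (coordinates are made up)
death_df = pd.DataFrame({"X": [13.6, 9.9, 14.7], "Y": [11.1, 12.6, 10.2]})
pump_df = pd.DataFrame({"X": [8.7, 13.4], "Y": [17.9, 17.4]})

fig, ax = plt.subplots()
death_df.plot(x="X", y="Y", kind="scatter", s=2, c="black", label="Deaths", ax=ax)
pump_df.plot(x="X", y="Y", kind="scatter", s=8, c="red", label="Pumps", ax=ax)
print(len(ax.collections))  # 2 -- both scatter layers live on the same axes
```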
Voronoi diagram Let's try the Voronoi diagram. You can use the scipy.spatial.Voronoi and scipy.spatial.voronoi_plot_2d from scipy, the scientific python library.
from scipy.spatial import Voronoi, voronoi_plot_2d
m02-history/lab02.ipynb
yy/dviz-course
mit
Take a look at the documentation of Voronoi and voronoi_plot_2d.

Q3: produce a Voronoi diagram that shows the deaths, pumps, and Voronoi cells
# you'll need this points = pump_df.values points # TODO: your code here
m02-history/lab02.ipynb
yy/dviz-course
mit
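A minimal sketch of the Voronoi computation on its own, using a few made-up 2-D points instead of the real pump coordinates; the plotting and overlay are left as the exercise:

```python
import numpy as np
from scipy.spatial import Voronoi

pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
vor = Voronoi(pts)

print(vor.vertices.shape[1])   # 2 -- Voronoi cell corners are 2-D points
print(len(vor.point_region))   # 5 -- one region per input point
```

Passing vor to voronoi_plot_2d (optionally with ax=...) draws the cell boundaries, after which the deaths and pumps scatter plots can be layered on the same axes.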
Saving the figure You can also save your figure into PDF, PNG, etc. If you run the following, the plot will not only be displayed here, but also be saved as foo.png.
import matplotlib.pyplot as plt plt.plot([1,2,3], [4,2,3]) plt.savefig('foo.png')
m02-history/lab02.ipynb
yy/dviz-course
mit
Q4: Save your Voronoi diagram. Make sure that your plot contains the scatterplot of deaths & pumps as well as the Voronoi cells
# TODO: your code here
m02-history/lab02.ipynb
yy/dviz-course
mit
Ok, that was a brief introduction to pandas and some simple visualizations. Now let's talk about the web a little bit.

HTML & CSS Basics

HTML review

Webpages are written in a standard markup language called HTML (HyperText Markup Language). The basic syntax of HTML consists of elements enclosed within &lt; and &gt; symbols. Markup tags often come in pairs: the opening tag without / and the closing tag with /. For instance, when we assign the title of the webpage, we write &lt;title&gt;This is the title of the page&lt;/title&gt;. You can find tutorials and references from many websites, including W3Schools. Here is an example of a simple HTML document (from the w3schools homepage):

```html
<!DOCTYPE html>
<html>
<title>HTML Tutorial</title>
<body>

<h1>This is a heading</h1>
<p>This is a paragraph.</p>

</body>
</html>
```

Here is a list of important tags and their descriptions.

&lt;html&gt; - Surrounds the entire document.
&lt;head&gt; - Contains information about the document. E.g. the title, metadata, scripts to load, stylesheets, etc.
&lt;title&gt; - Assigns title to the page. This is what you see in the tab and what you have when the page is bookmarked.
&lt;body&gt; - The main part of the document.
&lt;h1&gt;, &lt;h2&gt;, &lt;h3&gt;, ... - Headings (the smaller the number, the larger the size).
&lt;p&gt; - Paragraph. e.g., &lt;p&gt;Here is a paragraph&lt;/p&gt;
&lt;br&gt; - Line break.
&lt;em&gt; - Emphasized text.
&lt;strong&gt; - Bold font.
&lt;a&gt; - Defines a hyperlink and allows you to link out to other webpages. See examples
&lt;img&gt; - Places an image. See examples
&lt;ul&gt;, &lt;ol&gt;, &lt;li&gt; - Unordered lists with bullets, ordered lists with numbers, and each item in a list, respectively. See examples
&lt;table&gt; - Makes a table, specifying the contents of each cell. See examples
&lt;!-- --&gt; - Comments – will not be displayed.
&lt;span&gt; - Marks a certain part of text but will not necessarily change how it looks. CSS or Javascript can access it and change how it looks or behaves.
&lt;div&gt; - Similar to &lt;span&gt;, but used for a block that contains many elements.

CSS review

While HTML specifies the content and structure, it does not say how they should look. CSS (Cascading Style Sheets) is the primary language that is used for the look and formatting of a web document. In the context of creating visualization, CSS becomes critical when you create web-based (Javascript-based) visualizations. A CSS stylesheet consists of one or more selectors, properties and values. For example:

```css
body {
    background-color: white;
    color: steelblue;
}
```

Selectors are the HTML elements to which the specific styles (combination of properties and values) will be applied. In the above example, all text within the body tags will be in steelblue. There are three ways to include CSS code in HTML. This is called "referencing".

Embed CSS in HTML - You can place the CSS code within style tags inside the head tags. This way you can keep everything within a single HTML file, but it does make the code lengthy.

```html
<head>
    <style type="text/css">
        .description {
            font: 16px times-new-roman;
        }
        .viz {
            font: 10px sans-serif;
        }
    </style>
</head>
```

Reference an external stylesheet from HTML - This is a much cleaner way, but it results in the creation of another file. To do this, you can copy the CSS code into a text file and save it as a `.css` file in the same folder as the HTML file. In the document head in the HTML code, you can then do the following:

```html
<head>
    <link rel="stylesheet" href="main.css">
</head>
```

Attach inline styles - You can also directly attach the styles in-line along with the main HTML code in the body. This makes it easy to customize specific elements but makes the code very messy, because the design and content get mixed up.
```html
<p style="color: green; font-size:36px; font-weight:bold;">Inline styles can be handy sometimes.</p>
```

The %%html magic command in jupyter

You can use the built-in %%html magic command in a jupyter notebook to render the cell as a block of HTML. You just need to add %%html at the beginning of the code cell; this command explicitly tells jupyter that the code in this cell will be html. You can find more about magic commands in jupyter here: https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-html Below is an example of how to render html code in a jupyter code cell:
%%html <!DOCTYPE html> <html> <head> <style> .para { font: 20px times-new-roman; color: green; padding: 10px; border: 1px solid black; } </style> </head> <body> <p class='para'>Hello World!</p> <!-- You can also add an image in your html code <img src='location'/> --> </body> </html>
m02-history/lab02.ipynb
yy/dviz-course
mit
Exercise: Data Exploration

First we will do a rough exploration of the dataset: how many respondents fall into each category, and what proportion of them have an annual income of more than \$50,000? In the code cell below, you will need to compute the following quantities:

The total number of records, 'n_records'
The number of individuals with an annual income of more than \$50,000, 'n_greater_50k'.
The number of individuals with an annual income of at most \$50,000, 'n_at_most_50k'.
The percentage of individuals with an annual income of more than \$50,000, 'greater_percent'.

Hint: You may need to look at the table generated above to understand how the 'income' entries are formatted.
# TODO: total number of records
n_records = len(data['income'])

# TODO: number of individuals making more than $50,000
n_greater_50k = len(data[data['income'] == '>50K'])

# TODO: number of individuals making at most $50,000
n_at_most_50k = len(data[data['income'] == '<=50K'])

# TODO: percentage of individuals making more than $50,000
greater_percent = (n_greater_50k + 0.0) / n_records * 100

# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
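The same counting pattern works on any frame with an income column; below, a tiny fabricated example whose label strings mirror the real dataset's format:

```python
import pandas as pd

toy = pd.DataFrame({"income": [">50K", "<=50K", "<=50K", ">50K", "<=50K"]})  # made up

n_records = len(toy)
n_greater_50k = int((toy["income"] == ">50K").sum())
n_at_most_50k = int((toy["income"] == "<=50K").sum())
greater_percent = 100.0 * n_greater_50k / n_records
print(greater_percent)  # 40.0
```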
Exercise: Data Preprocessing

From the table in the exploration above, we can see that several attributes are non-numeric for every record. Learning algorithms typically expect numeric input, which requires non-numeric features (called categorical variables) to be converted. One popular way to convert categorical variables is the one-hot encoding scheme. One-hot encoding creates a "dummy" variable for each possible category of each non-numeric feature. For example, suppose someFeature has three possible values: A, B, or C. We encode this feature into someFeature_A, someFeature_B and someFeature_C.

| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |

In addition, we need to convert the non-numeric target label 'income' to numeric values so that the learning algorithm works properly. Since this label has only two possible categories ("<=50K" and ">50K"), we do not need one-hot encoding; we can simply encode the two classes as 0 and 1. In the code cell below you will implement the following:
- Use pandas.get_dummies() to apply one-hot encoding to the 'features_raw' data.
- Convert the target label 'income_raw' to numeric entries.
  - Map "<=50K" to 0 and ">50K" to 1.
# TODO: use pandas.get_dummies() to one-hot encode the 'features_raw' data
features = pd.get_dummies(features_raw)

income_mapping = {
    '<=50K': 0,
    '>50K': 1
}

# TODO: encode 'income_raw' to numeric values
income = income_raw.map(income_mapping)

# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))

# Uncomment the following line to see the encoded feature names
#print encoded
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
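The dummy-variable table above is exactly what pandas.get_dummies produces. A self-contained check on the someFeature example (column values invented):

```python
import pandas as pd

raw = pd.DataFrame({"someFeature": ["B", "C", "A"]})
encoded = pd.get_dummies(raw)

print(list(encoded.columns))  # ['someFeature_A', 'someFeature_B', 'someFeature_C']
# Dummy columns are uint8 or bool depending on the pandas version,
# so cast to int for a version-independent view of the 0/1 matrix.
print(encoded.astype(int).values.tolist())
```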
Evaluating Model Performance

In this section we will try four different algorithms and determine which one models the data best. Three of them will be supervised learners of your choice, and the fourth is called a naive predictor.

Metrics and the Naive Predictor

CharityML knows from its research that individuals with an annual income of more than \$50,000 are the most likely to donate. For this reason CharityML is particularly interested in accurately predicting who makes more than \$50,000. It would therefore seem appropriate to use accuracy as a metric for evaluating a model. In addition, identifying someone who does not make more than \$50,000 as someone who does would be detrimental to CharityML, since they want to find users willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is more important than its ability to recall all of those respondents. We can use the F-beta score as a metric that considers both precision and recall:

$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$

In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the F$_{0.5}$ score (or simply the F-score).

Looking at the distribution of the classes (those making at most \$50,000 and those making more), it is clear that most respondents do not make more than \$50,000. This can greatly affect accuracy, since we could simply predict "this person does not make more than \$50,000" and generally be right, without even looking at the data! Making such a statement is called naive, since we have no information to substantiate the claim. It is always important to consider a naive predictor for your data, as it helps establish a benchmark for whether a model is performing well. That said, using such a prediction would be pointless: if we predicted that everyone's income was below \$50,000, CharityML would identify no donors at all.

Question 1 - Naive Predictor Performance

If we chose a model that always predicts that a respondent's annual income is more than \$50,000, what would that model's accuracy and F-score be on this dataset?

Note: You must assign your computed results to 'accuracy' and 'fscore' in the code cell below; these values will be used later. Note that you cannot use scikit-learn here; you need to implement the computation yourself from the formula.
# TODO: calculate the accuracy
accuracy = n_greater_50k / float(n_records)
precision = accuracy
recall = 1.0

# TODO: calculate the F-score using the formula above with beta = 0.5
fscore = (1 + 0.5**2) * precision * recall / ( 0.5 ** 2 * precision + recall)

# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
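The F-beta formula can also be checked directly in plain Python. The precision value 0.25 below is invented; for the naive "everyone earns more than \$50,000" predictor, recall is 1.0 by construction and precision equals the fraction of positives in the data:

```python
def fbeta(precision, recall, beta):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(fbeta(0.25, 1.0, 0.5), 4))  # 0.2941
print(fbeta(1.0, 1.0, 0.5))             # 1.0 for a perfect predictor
```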
Supervised Learning Models

The following supervised learning models are currently available in scikit-learn for you to choose from:
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression

Question 2 - Model Application

List three models from the supervised learning models above that are appropriate for this problem; you will test each of these algorithms on the census data. For each algorithm you choose:
- Describe one real-world application of the model. (You may need to do a little research for this, and cite your sources.)
- What are the strengths of the model? When does it perform best?
- What are the weaknesses of the model? When does it perform poorly?
- Given the characteristics of our dataset, why is the model a good candidate for this problem?

Answer: Decision Tree; Support Vector Machine; Logistic Regression

Exercise - Creating a Training and Predicting Pipeline

To properly evaluate the performance of each model you have chosen, it is important to create a training and testing pipeline that allows you to quickly and efficiently train models using various sizes of training data and perform predictions on the testing data. The functionality you implement here will be used in the following section. In the code cell below, you will implement the following:
- Import fbeta_score and accuracy_score from sklearn.metrics.
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test set, recording the prediction time.
- Predict on the first 300 training points.
- Calculate the accuracy for both the training and testing data.
- Calculate the F-score for both the training and testing data.
# TODO: import the two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score

def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
    '''
    inputs:
       - learner: the learning algorithm to be trained and predicted on
       - sample_size: the size of samples (number) to be drawn from training set
       - X_train: features training set
       - y_train: income training set
       - X_test: features testing set
       - y_test: income testing set
    '''

    results = {}

    # TODO: fit the learner to the training data using slicing with 'sample_size'
    start = time()  # record the start time
    learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
    end = time()  # record the end time

    # TODO: calculate the training time
    results['train_time'] = end - start

    # TODO: get predictions on the test set,
    #       then get predictions on the first 300 training samples
    start = time()  # record the start time
    predictions_test = learner.predict(X_test)
    predictions_train = learner.predict(X_train[0:300])
    end = time()  # record the end time

    # TODO: calculate the prediction time
    results['pred_time'] = end - start

    # TODO: compute the accuracy on the first 300 training samples
    results['acc_train'] = accuracy_score(y_train[0:300], predictions_train)

    # TODO: compute the accuracy on the test set
    results['acc_test'] = accuracy_score(y_test, predictions_test)

    # TODO: compute the F-score on the first 300 training samples
    results['f_train'] = fbeta_score(y_train[0:300], predictions_train, average='macro', beta=0.5)

    # TODO: compute the F-score on the test set
    results['f_test'] = fbeta_score(y_test, predictions_test, average='macro', beta=0.5)

    # Success
    print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)

    # Return the results
    return results
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
Exercise: Initial Model Evaluation

In the code cell below, you will need to implement the following:
- Import the three supervised learning models you discussed in the previous section.
- Initialize the three models and store them in 'clf_A', 'clf_B' and 'clf_C'.
  - Set a random_state for each model, if one is available.
  - Note: Use the default settings for each model here; you will tune the parameters of one of the models in a later section.
- Calculate the number of records equal to 1%, 10%, and 100% of the training data, and store those values in 'samples'.

Note: Depending on the algorithms you chose, the implementation below may take some time to run!
# TODO: import three supervised learning models from sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# TODO: initialize the three models
clf_A = LogisticRegression(random_state=0)
clf_B = SVC(kernel='linear', C=1.0, random_state=0)
clf_C = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)

# TODO: calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = int(len(X_train) * 0.01)
samples_10 = int(len(X_train) * 0.1)
samples_100 = int(len(X_train) * 1)

# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
    clf_name = clf.__class__.__name__
    results[clf_name] = {}
    for i, samples in enumerate([samples_1, samples_10, samples_100]):
        results[clf_name][i] = \
        train_predict(clf, samples, X_train, y_train, X_test, y_test)

# Visualize the evaluation results of the three chosen models
vs.evaluate(results, accuracy, fscore)
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
Improving Results

In this final section, you will choose the best of the three supervised learning models to use on the data. You will then perform a grid search optimization over the entire training set (X_train and y_train) by tuning at least one parameter to obtain a better F-score than the untuned model.

Question 3 - Choosing the Best Model

Based on the evaluation you performed earlier, in one to two paragraphs, explain to CharityML which of the three models you believe to be most appropriate for the task of identifying respondents with an annual income of more than \$50,000.

Hint: Your answer should include discussion of the metrics, prediction/training time, and the algorithm's suitability for this data.

Answer:

Question 4 - Describing the Model in Layman's Terms

In one to two paragraphs, explain to CharityML, in terms a non-expert can understand, how the final model works. You should explain the main characteristics of the chosen model, for example how the model is trained and how it makes a prediction. Avoid using advanced mathematical or technical jargon; do not use formulas or specific algorithm terminology.

Answer:

Exercise: Model Tuning

Tune the parameters of the chosen model. Use grid search (GridSearchCV) to tune at least one important parameter of the model, trying at least 3 different values for it. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import sklearn.model_selection.GridSearchCV and sklearn.metrics.make_scorer.
- Initialize the classifier you have chosen and store it in clf.
  - Set a random_state if one is available.
- Create a dictionary of the parameters you wish to tune for this model.
  - For example: parameters = {'parameter' : [list of values]}.
  - Note: If your learner has a max_features parameter, do not tune it!
- Use make_scorer to create an fbeta_score scoring object (with $\beta = 0.5$).
- Run a grid search on the classifier clf using 'scorer' as the scoring method, and store the result in grid_obj.
- Fit the grid search object to the training data (X_train, y_train), and store the result in grid_fit.

Note: Depending on the parameter list you choose, the implementation below may take some time to run!
# TODO: import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.svm import SVC

# TODO: initialize the classifier
clf = SVC(random_state=1)

# TODO: create the list of parameters you wish to tune
parameters = [{
    'C': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0],
    'kernel': ['linear']
},{
    'C': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0],
    'gamma': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0],
    'kernel': ['rbf']
}]

# TODO: create an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=0.5)

# TODO: perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(
    estimator=clf,
    param_grid=parameters,
    scoring=scorer,
    cv=10
)

# TODO: fit the grid search object to the training data and find the optimal parameters
grid_obj.fit(X_train, y_train)

# Get the best estimator
best_clf = grid_obj.best_estimator_

# Make predictions using the unoptimized model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)

# Report the scores before and after tuning
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
Question 5 - Final Model Evaluation

What are your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do your optimized results compare to the naive predictor you found in Question 1?

Note: Fill in your results in the table below, then provide discussion in the answer box.

Results:

| Metric | Naive Predictor | Unoptimized Model | Optimized Model |
| :------------: | :-----------------: | :---------------: | :-------------: |
| Accuracy | | | |
| F-score | | | example |

Answer:

Feature Importance

An important task when performing supervised learning on a dataset like the census data we use here is determining which features provide the most predictive power. By focusing on the relationship between a few crucial features and the target label, we can understand the phenomenon more simply, which is useful in many situations. In the context of this project, that means we wish to select a small number of features that strongly predict whether a respondent's annual income is more than \$50,000.

Choose a scikit-learn classifier that has a feature_importance_ attribute (a function that ranks the importance of the features according to the chosen classifier), for example AdaBoost or Random Forest. In the next Python code cell, fit this classifier to the training data and use the attribute to determine the five most important features of this census dataset.

Question 6 - Feature Relevance Observation

When exploring the data, we saw that each record in this census dataset has thirteen available features. Of these thirteen, which five features do you believe are most important for prediction, and how would you rank them? Why?

Answer:

Exercise - Extracting Feature Importance

Choose a scikit-learn supervised learning classifier with a feature_importance_ attribute, which ranks the importance of each feature when making predictions based on the chosen algorithm. In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three models used earlier.
- Train this supervised model on the entire training set.
- Extract the feature importances using '.feature_importances_'.
# TODO: import a supervised learning model that has 'feature_importances_'
from sklearn.ensemble import RandomForestClassifier

# TODO: train the supervised model on the training set
model = RandomForestClassifier(n_estimators=10000, random_state=0)
model.fit(X_train, y_train)

# TODO: extract the feature importances
importances = model.feature_importances_

# Plot
vs.feature_plot(importances, X_train, y_train)
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
Question 7 - Extracting Feature Importance

Observe the visualization created above, which displays the five features considered most relevant for predicting whether a respondent's annual income is more than \$50,000. How do these five features compare to the features you discussed in Question 6? If your answers were close, how does this visualization confirm your thoughts? If they were not close, why do you think these features are more relevant?

Answer:

Feature Selection

How would the model perform if we only used a subset of the available features? With fewer features to train on, we expect training and prediction times to be lower, from the perspective of the evaluation metrics. From the visualization above, we can see that the top five most important features contribute more than half of the importance of all the features in the data. This hints that we can try to reduce the feature space and simplify the information the model needs to learn. The code cell below will take the optimized model you found earlier and train it on the same training set using only the five most important features.
# Import functionality for cloning a model
from sklearn.base import clone

# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]

# Train the "best" model found from the earlier grid search on the reduced data
clf = (clone(best_clf)).fit(X_train_reduced, y_train)

# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)

# Report the final model's scores for each version of the data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))
Udacity-ML/finding_donors-master/finding_donors.ipynb
quoniammm/happy-machine-learning
mit
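The column-selection trick used above, np.argsort(importances)[::-1][:5], simply means "indices of the largest importances first". A tiny demonstration with invented importances:

```python
import numpy as np

importances = np.array([0.05, 0.40, 0.10, 0.30, 0.15])  # made-up values
columns = np.array(["a", "b", "c", "d", "e"])

top3 = columns[np.argsort(importances)[::-1][:3]]
print(top3)  # ['b' 'd' 'e']
```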
Handling Data over time

There's a widespread trend in solar physics at the moment for correlation over actual science, so being able to handle data over time spans is a skill we all need to have. Python has ample support for this, so let's have a look at what we can use.

<section class="objectives panel panel-warning">
<div class="panel-heading">
<h2><span class="fa fa-certificate"></span> Learning Objectives </h2>
</div>
<br/>
<ul>
<li> Understand and use SunPy lightcurve. </li>
<li> Create a pandas dataframe. </li>
<li> Utilise the datetime package </li>
<li> Use the pandas dataframe to plot the data within it. </li>
</ul>
<br/>
</section>

SunPy Lightcurve

SunPy provides a lightcurve object to handle this type of time series data. The module has a number of instruments associated with it, including:

GOES XRS LightCurve
SDO EVE LightCurve for level 0CS data
Proba-2 LYRA LightCurve
NOAA Solar Cycle monthly indices.
Nobeyama Radioheliograph Correlation LightCurve.
RHESSI X-ray Summary LightCurve.

We're going to examine the lightcurves created by a solar flare on June 7th 2011. Let's begin with the import statements:
from __future__ import print_function, division import numpy as np import sunpy from sunpy import lightcurve as lc import matplotlib.pyplot as plt %matplotlib inline import warnings warnings.filterwarnings('ignore')
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Now let's create some lightcurves.
goes_lightcurve = lc.GOESLightCurve.create('2011-06-07 06:00','2011-06-07 08:00') hsi_lightcurve = lc.RHESSISummaryLightCurve.create('2011-06-07 06:00','2011-06-07 08:00')
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
For LYRA, the server only allows you to download an entire day of data at a time. We can match this to the rest of the data by using the truncate function.
lyra_lightcurve_fullday = lc.LYRALightCurve.create('2011-06-07') lyra_lightcurve = lyra_lightcurve_fullday.truncate('2011-06-07 06:00','2011-06-07 08:00')
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Part of the advantage of using these inbuilt functions is that we can get a quick look at our data using short commands:
fig = goes_lightcurve.peek() fig = lyra_lightcurve.peek()
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Accessing and using the data

More custom plots can be made easily by accessing the data held in the lightcurve object. Both the time information and the data are contained within the lightcurve.data attribute, which is a pandas DataFrame. We can see what data is contained in the DataFrame by finding which keys it contains, and also asking what's in the metadata:
print(lyra_lightcurve.data.keys()) print(lyra_lightcurve.meta)
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Notice that the metadata information is stored in something called an OrderedDict.

<section class="objectives panel panel-info">
<div class="panel-heading">
<h2><span class="fa fa-thumb-tack"></span> On Dictionaries </h2>
</div>
<br/>
We can create keyword-data pairs to form a dictionary (shock horror) of values. In this case we have defined some strings and numbers to represent temperatures across Europe:
<pre>
temps = {'Brussles': 9, 'London': 3, 'Barcelona': 13, 'Rome': 16}
temps['Rome']
16
</pre>
We can also find out what keywords are associated with a given dictionary. In this case:
<pre>
temps.keys()
dict_keys(['London', 'Barcelona', 'Rome', 'Brussles'])
</pre>
<br/>
</section>

We can use these keys to plot specific parameters from the lightcurve. Aluminium is 'CHANNEL3' and Zirconium is 'CHANNEL4'. These measurements are taken by instruments which detect events at different energy levels.
plt.figure(1, figsize=(10,5)) plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL3'], color='blue', label='Al filter') plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL4'], color='red', label='Zr filter') plt.ylabel('Flux (Wm^2)') plt.show()
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Analysing Lightcurve data

We can assess the degree to which the LYRA curves correlate with each other:
cross_correlation = np.correlate(lyra_lightcurve.data['CHANNEL3'], lyra_lightcurve.data['CHANNEL4']) print(cross_correlation)
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
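With equal-length inputs and its default mode='valid', np.correlate returns a single number, the inner product of the two series, so on its own it measures alignment but is not normalised. A sketch on two made-up signals:

```python
import numpy as np

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 2.0, 4.0, 2.0, 0.0])  # b is just 2 * a

cc = np.correlate(a, b)
print(cc)  # [12.] -- equal to sum(a * b)
```

To compare shapes rather than amplitudes, one would normalise each series (e.g. subtract the mean and divide by the standard deviation) before correlating.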
Pandas

In its own words, Pandas is "a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive". Pandas has two forms of structures: the 1D Series and the 2D DataFrame. It also has its own functions associated with it. It is also amazing. Lightcurve uses these built-in Pandas structures, so we can find out things like the maximum of the curves:
# max time argument taken from long and short GOES channels
max_t_goes_long = goes_lightcurve.data['xrsb'].idxmax()
max_t_goes_short = goes_lightcurve.data['xrsa'].idxmax()

# max time argument taken from the LYRA channel 3 & 4 curves
max_t_lyra_al = lyra_lightcurve.data['CHANNEL3'].idxmax()
max_t_lyra_zr = lyra_lightcurve.data['CHANNEL4'].idxmax()

print('GOES long: ', max_t_goes_long)
print('GOES short: ', max_t_goes_short)
print('LYRA Al: ', max_t_lyra_al)
print('LYRA Zr: ', max_t_lyra_zr)
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
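Note that idxmax returns the index label of the maximum, here a timestamp, rather than the maximum value itself (which max would give). A toy series with an invented DatetimeIndex:

```python
import pandas as pd

times = pd.date_range("2011-06-07 06:00", periods=4, freq="30min")
flux = pd.Series([1.0, 5.0, 3.0, 2.0], index=times)  # made-up flux values

print(flux.idxmax())  # 2011-06-07 06:30:00 -- the time of the peak
print(flux.max())     # 5.0 -- the peak value itself
```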
So let's plot them on the graph.
# create figure with raw curves plt.figure(1, figsize=(10,5)) plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL3'], color='blue', label='Al filter') plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL4'], color='red', label='Zr filter') plt.ylabel('Flux (Wm^2)') # max lines plt.axvline(max_t_lyra_al,color='blue',linestyle='dashed', linewidth=2) plt.axvline(max_t_lyra_zr,color='red',linestyle='dashed') plt.axvline(max_t_goes_long,color='green',linestyle='dashed',linewidth=2)
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Reading in Tabulated data

Now we have seen a little of what Pandas can do, let's read in some of our own data. In this case we are going to use data from Bennett et al. 2015, ApJ, a truly ground-breaking work. Now, the data we are reading in here is a structured array.
data = np.genfromtxt('macrospicules.csv', skip_header=1, dtype=None, delimiter=',')
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Now, the above line imports information on some solar features over a sample time period. Specifically we have: maximum length, lifetime, and the time at which they occurred. Now, if we type data[0], what will happen?
data[0]
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
This is the first row of the array, containing the first element of each of our three properties. This particular example is a structured array, so the columns can carry names and properties of their own. We can ask what the titles of these columns are by using the dtype attribute:
data.dtype.names
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
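For illustration, here is a tiny structured array built by hand (the field names and values are hypothetical); genfromtxt with dtype=None produces the same kind of object by inferring per-column types:

```python
import numpy as np

# A tiny structured array built by hand (field names are hypothetical)
rec = np.array([(25.3, 14.1), (30.0, 9.7)],
               dtype=[('max_len', 'f8'), ('ltime', 'f8')])

print(rec.dtype.names)  # the column titles
print(rec['max_len'])   # one whole column, selected by name
```

Indexing with an integer gives a row, while indexing with a field name gives a column — exactly the behaviour we saw with data[0] and data['max_len'].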
Unhelpful, so let's give them something more recognisable. We can use the docs to look up the syntax and change the names of the column labels. <br/> <section class="objectives panel panel-success"> <div class="panel-heading"> <h2><span class="fa fa-pencil"></span> Google your troubles away </h2> </div> <br/> So the docs are [here](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.genfromtxt.html). Find the syntax to change the names to better represent maximum length, lifetime and the point in time at which they occurred. <br/> </section>
data.dtype.names = ('max_len', 'ltime', 'sample_time')
data['max_len']
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
DataFrame

Now, a pandas DataFrame takes two arguments as a minimum: index and data. In this case the index will be our time within the sample, and the maximum length and lifetime will be our data. Pandas reads a dictionary when we want to input multiple data columns, so we need to make a dictionary of our data and read that into a pandas DataFrame. <section class="objectives panel panel-info"> <div class="panel-heading"> <h2><span class="fa fa-thumb-tack"></span> Dictionaries </h2> </div> <br/> So we covered dictionaries earlier. We can create keyword-data pairs to form a dictionary (shock horror) of values. In this case: <pre> temps = {'Brussels': 9, 'London': 3, 'Barcelona': 13, 'Rome': 16} temps['Rome'] 16 </pre> We can also find out what keywords are associated with a given dictionary. In this case: <pre> temps.keys() dict_keys(['London', 'Barcelona', 'Rome', 'Brussels']) </pre> <br/> </section> First, let's import Pandas:
import pandas as pd

d = {'max_len': data['max_len'], 'ltime': data['ltime']}
df = pd.DataFrame(data=d, index=data['sample_time'])
print(df)
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Datetime Objects

Notice that the time for the sample is in a strange format. It is a string containing the date in YYYY-MM-DD and the time in HH:MM:SS.mmmmmm. Datetime objects have their own set of methods associated with them; Python appreciates that these are built this way and can use them for the indexing easily. We can use this module to create date objects (representing just year, month, day). We can also get information about universal time, such as the time and date today. NOTE: Datetime objects are NOT strings. They are objects which print out as strings.
import datetime

print(datetime.datetime.now())
print(datetime.datetime.utcnow())

lunchtime = datetime.time(12, 30)
the_date = datetime.date(2005, 7, 14)
dinner = datetime.datetime.combine(the_date, lunchtime)

print("When is dinner? {}".format(dinner))
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
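Datetime objects also support arithmetic: subtracting two of them yields a datetime.timedelta, and strftime() formats them back into strings. A small sketch with invented flare timings:

```python
import datetime

# Invented flare timings, purely for illustration
flare_start = datetime.datetime(2011, 6, 7, 6, 16)
flare_peak = datetime.datetime(2011, 6, 7, 6, 41)

rise_time = flare_peak - flare_start  # a datetime.timedelta
print(rise_time)
print(flare_peak.strftime('%Y-%m-%d %H:%M'))  # datetime -> string
```

This timedelta arithmetic is what makes datetime indexes so much more useful than plain strings for time series work.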
Looking back at when we discussed the first element of data, the format of the time index was awkward to use, so let's do something about that.
print(df.index[0])
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
This is a string and Python will just treat it as such. We need to use datetime to pick this string apart and change it into an object we can use. To the docs! So we use the formatting commands to match up with the string we have.
dt_obj = datetime.datetime.strptime(df.index[0], '%Y-%m-%dT%H:%M:%S.%f')
print(dt_obj)
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Now the next logical step would be to make a for loop, iterate over the index and reassign it. HOWEVER, there is almost always a better way. Pandas has a to_datetime() method that we can feed the index:
df.index = pd.to_datetime(df.index)
df
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Much easier. Note that the format of the table has now changed: the index entries are now pandas-specific datetime objects, and look like this:
df.index[0]
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
This means we can bin data according to time
l_bins = df['max_len'].groupby(by=[df.index.year, df.index.month])
print(len(l_bins))
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Here we have used the groupby command to take the 'max_len' column, called via its dictionary key, and create bins for our data according to year and then month. The object l_bins has mean, max, std etc. methods in the same way as the numpy arrays we handled the other day.
l_mean = l_bins.mean()
l_std = l_bins.std()
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
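To make the groupby step concrete, here is a minimal sketch on a toy DataFrame (the dates and lengths are invented), grouped by (year, month) in the same way:

```python
import pandas as pd

# Toy DataFrame with a datetime index, mimicking the macrospicule table
# (lengths are invented for illustration)
idx = pd.to_datetime(['2010-06-01', '2010-06-15', '2010-07-03', '2011-06-20'])
toy = pd.DataFrame({'max_len': [10.0, 14.0, 8.0, 12.0]}, index=idx)

# Group one column by (year, month), then aggregate each bin
bins = toy['max_len'].groupby(by=[toy.index.year, toy.index.month])
m = bins.mean()
print(m)
```

Each row of the result is one (year, month) bin, so the two June 2010 events land in the same bin and are averaged together.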
Let's not forget that l_bins is a collection of bins, so when we print out l_mean we get:
print(l_mean)
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Now we have all this data, we can build a lovely bar graph with error bars and wonderful things like that. Remember, these pandas objects have functions associated with them, and one of them is a plot command.
fig, ax = plt.subplots()
fig.autofmt_xdate()
l_mean.plot(kind='bar', ax=ax, yerr=l_std, grid=False, legend=False)
plt.show()
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Note that the date on the x-axis is a little messed up; we can fix it with fig.autofmt_xdate(). <section class="objectives panel panel-success"> <div class="panel-heading"> <h2><span class="fa fa-pencil"></span> How do the lifetimes change? </h2> </div> <br/> Now that we have the plot for the maximum length, make a bar graph of the lifetimes of the features. <br/> </section> <br/> <section class="objectives panel panel-success"> <div class="panel-heading"> <h2><span class="fa fa-pencil"></span> Exoplanet Data </h2> </div> <br/> Now, to all the astronomers out there, let us process some real data. We have some txt files containing the lightcurve from a recent paper. Can you process the data and show us the planet? HINT: You'll need to treat this data slightly differently. The date here is in Julian Day, so you will need to use [these](http://docs.astropy.org/en/v1.1.1/api/astropy.time.Time.html) docs to convert it to a sensible datetime object, before you make the DataFrame. <br/> </section>
import numpy as np
from astropy.time import Time

ep_data = np.loadtxt('data/XO1_wl_transit_FLUX.txt')

ep_dict = {'flux': ep_data[:, 1], 'err_flux': ep_data[:, 2]}

# The first column is in Julian Day, so convert it with astropy
# before using it as the DataFrame index
t = Time(ep_data[:, 0], format='jd')
UTC = t.datetime

ep_df = pd.DataFrame(data=ep_dict, index=UTC)
ep_df
09-Time-Series-Data.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
2. Project 5 data set import

This cell:
* Creates a feature file name list of the car/non-car supplied files
* Creates a label (y) vector
* Randomly splits the dataset into a train and a validation dataset
import glob
import numpy as np
from sklearn.model_selection import train_test_split

cars = glob.glob("./dataset/vehicles/*/*.png")
non_cars = glob.glob("./dataset/non-vehicles/*/*.png")

# feature list
X = []

# Append cars and non-cars image file paths to the feature list
for car in cars:
    X.append(car)
for non_car in non_cars:
    X.append(non_car)
X = np.array(X)

# Generate y vector (Cars = 1, Non-Cars = -1)
y = np.concatenate([np.ones(len(cars)), np.zeros(len(non_cars)) - 1])

# Randomly split the file paths into a validation set and a training set
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.1)

print("Loading done!")
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
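As a quick sketch of what train_test_split does, here it is applied to a toy feature list (the file names are fabricated, and the sizes mirror the test_size=0.1 split used above):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins: 20 fake "file paths" with 10 car / 10 non-car labels
X = np.array(['img_%02d.png' % i for i in range(20)])
y = np.concatenate([np.ones(10), np.zeros(10) - 1])

# test_size=0.1 reserves 10% of the samples for validation;
# features and labels stay paired through the shuffle
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.1, random_state=0)
print(len(X_tr), len(X_va))
```

Splitting the file paths rather than the decoded images keeps memory use low: images are only loaded later, by the generator or loader.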
3. Print a dataset summary

This cell:
* Prints a summary of the dataset
import matplotlib.image as mpimg
%matplotlib inline

if display_output == 1:
    # Load the first image to get its size
    train_img = mpimg.imread(X_train[0])

    # Dataset image shape
    image_shape = train_img.shape

    # Number of unique classes/labels there are in the dataset
    n_classes = np.unique(y_train).size

    print("Number of training examples =", X_train.shape[0])
    print("Number of validation examples =", X_valid.shape[0])
    print("Image data shape =", image_shape)
    print("Number of classes =", n_classes)
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
4. Exploratory visualization of the dataset

This cell:
* Defines the show_dataset_classes_histogram() function
* Defines the show_sample() function
* Shows a histogram distribution of the dataset classes
* Shows a random sample of the dataset
import matplotlib.pyplot as plt
import random


def show_dataset_classes_histogram(labels_train, labels_valid):
    f, ax = plt.subplots(figsize=(5, 5))

    # Generate histogram and bins
    hist_train, bins = np.histogram(labels_train, 2)
    hist_valid, bins = np.histogram(labels_valid, 2)

    # Bar width
    width = 1.0 * (bins[1] - bins[0])

    ax.bar([-1, 1], hist_train, width=width, label="Train")
    ax.bar([-1, 1], hist_valid, width=width, label="Valid")
    ax.set_xlabel('Classes')
    ax.set_ylabel('Number of occurrences')
    ax.set_title('Histogram of the data set')
    ax.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
    f.tight_layout()
    plt.savefig("./output_images/histogram_dataset.png")


def show_sample(features, labels, preprocess=0, sample_num=1, sample_index=-1):
    col_num = 2

    # Create training sample + histogram plot
    f, axarr = plt.subplots(sample_num, col_num, figsize=(col_num * 4, sample_num * 3))

    index = sample_index - 1
    for i in range(0, sample_num, 1):
        if sample_index == -1:
            index = random.randint(0, len(features))
        else:
            index = index + 1

        if labels[index] == 1:
            label_str = "Car"
        else:
            label_str = "Non-Car"

        image = (mpimg.imread(features[index]) * 255).astype(np.uint8)
        if preprocess == 1:
            image = image_preprocessing(image)

        axarr[i, 0].set_title('%s' % label_str)
        axarr[i, 0].imshow(image)

        hist, bins = np.histogram(image.flatten(), 256, [0, 256])
        cdf = hist.cumsum()
        cdf_normalized = cdf * hist.max() / cdf.max()
        axarr[i, 1].plot(cdf_normalized, color='b')
        axarr[i, 1].plot(hist, color='r')
        axarr[i, 1].legend(('cdf', 'histogram'), loc='upper left')

        axarr[i, 0].axis('off')

    # Tweak spacing to prevent clipping of title labels
    f.tight_layout()

    if preprocess == 1:
        plt.savefig("./output_images/dataset_sample_preprocessed.png")
    else:
        plt.savefig("./output_images/dataset_sample.png")


if display_output == 1:
    show_dataset_classes_histogram(y_train, y_valid)
    show_sample(X_train, y_train, sample_num=6, sample_index=110)
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
5. Get Model

As we have seen in the "intro to convolutional networks" lesson, a nice property of a convolutional filter is that it reuses its weights while sliding across the image and feature maps; the number of weights is thus not dependent on the input image size. Therefore, it is possible to train a fully convolutional network to classify small images (64x64) as an image classifier (like we have done for Project 2) and output the result on one neuron. In our case the output will be whether there is a car in the image or not (tanh = 1 or tanh = -1).

The weights resulting from the training can then be reused in the same fully convolutional network to build an output feature map from larger images. This feature map can be seen as a heatmap in which each pixel represents the output of the originally trained ConvNet for a section of the input image. These pixels thus give the "car probability" for a specific location in the input image.

This cell:
* Gets an all-convolutional Keras model to train from the file model.py.
from model import get_model
from keras.layers import Flatten

# Get the "base" ConvNet model
model = get_model()

# Flatten out the last layer for training
model.add(Flatten())

# Print out model summary
if display_output == 1:
    model.summary()
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
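The claim that the number of weights is independent of the input size can be checked with a naive, pure-numpy 'valid' cross-correlation — a simplified stand-in for a Keras Conv2D layer, not the model.py implementation itself. The same 3x3 kernel slides over a 64x64 or a 128x128 image; only the output feature map grows:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Naive 'valid' cross-correlation: the SAME kernel weights are reused
    # at every position, so the weight count never depends on image size
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((3, 3)) / 9.0  # 9 weights, fixed
small_map = conv2d_valid(np.zeros((64, 64)), kernel)
large_map = conv2d_valid(np.zeros((128, 128)), kernel)
print(small_map.shape, large_map.shape)  # only the feature map grows
```

This is exactly why the classifier trained on 64x64 patches can later be slid over a full camera frame to produce a car-probability heatmap.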
6. Declare generators/load dataset

This cell:
* Declares a Keras generator (keras_generator()) to enable training on low-end hardware
* Declares a loader (loader()) to load all the dataset in memory (faster training, higher-end hardware required)
from sklearn.utils import shuffle


def keras_generator(features, labels, batch_size=32):
    num_features = len(features)

    # Loop forever so the generator never terminates
    while 1:
        # Shuffle the input sample (sklearn's shuffle returns copies,
        # so the result must be reassigned)
        features, labels = shuffle(features, labels)

        for offset in range(0, num_features, batch_size):
            # File path subset
            batch_features = features[offset:offset + batch_size]
            batch_labels = labels[offset:offset + batch_size]

            imgs = []
            for feature in batch_features:
                image = (mpimg.imread(feature) * 255).astype(np.uint8)

                # Image preprocessing
                # none..

                imgs.append(image)

            # Convert images to numpy arrays
            X = np.array(imgs, dtype=np.uint8)
            y = np.array(batch_labels)

            yield shuffle(X, y)


def loader(features, labels):
    for iterable in keras_generator(features, labels, batch_size=len(features)):
        return iterable


# Prepare generator functions / dataset
if use_generator == 1:
    # Use the generator function
    train_generator = keras_generator(X_train, y_train, batch_size=32)
    validation_generator = keras_generator(X_valid, y_valid, batch_size=32)
else:
    # Load all the preprocessed images in memory
    train_set = loader(X_train, y_train)
    validation_set = loader(X_valid, y_valid)
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
7. Train Model and save the best weights

This cell:
* Defines a training visualization function (plot_train_results())
* Compiles the Keras model. The Adam optimizer is chosen and Mean Squared Error is used as a loss function
* Declares and configures a Keras checkpointer to save the weights if the loss becomes lower than the lowest loss to date. The checkpointer is called via the callbacks parameter and is executed after each epoch
* Trains the model either with a generator or not
* Outputs a figure of the training/validation accuracy vs the epochs
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam


def plot_train_results(history_object):
    f, ax = plt.subplots(figsize=(10, 5))
    ax.plot(history_object.history['acc'])
    ax.plot(history_object.history['val_acc'])
    ax.set_ylabel('Model accuracy')
    ax.set_xlabel('Epoch')
    ax.set_title('Model accuracy vs epochs')
    plt.legend(['training accuracy', 'validation accuracy'], bbox_to_anchor=(1.01, 1.0))
    f.tight_layout()
    plt.savefig("./output_images/accuracy_over_epochs.png")


if train_model == 1:
    # Compile the model using an Adam optimizer
    model.compile(optimizer=Adam(), loss='mse', metrics=['accuracy'])

    # Saves the model weights after each epoch if the validation loss decreased
    filepath = './weights/best-weights.hdf5'
    checkpointer = ModelCheckpoint(filepath=filepath, monitor='val_loss', verbose=1,
                                   save_best_only=True, mode='min')

    # Train the model, with or without a generator
    if use_generator == 1:
        history_object = model.fit_generator(train_generator,
                                             steps_per_epoch=len(X_train),
                                             epochs=epoch_num,
                                             verbose=train_verbose_style,
                                             callbacks=[checkpointer],
                                             validation_data=validation_generator,
                                             validation_steps=len(X_valid))
    else:
        history_object = model.fit(train_set[0], train_set[1],
                                   batch_size=64,
                                   epochs=epoch_num,
                                   verbose=train_verbose_style,
                                   callbacks=[checkpointer],
                                   validation_data=(validation_set[0], validation_set[1]))

    if display_output == 1:
        plot_train_results(history_object)
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
Testing the classifier

8. Load weights

This cell:
* Loads the best weights saved by the Keras checkpointer (code cell #7 above).
# Load the weights
model.load_weights('./weights/best-weights.hdf5')
print("Weights loaded!")
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
9. Test the classifier on random images

This cell:
* Picks random images from the dataset
* Infers the label using the trained model
* Measures the inference time
* Shows a figure of the images and predicted label; the title color changes according to the correctness of the inference (green = correct, red = incorrect).
if display_output == 1:
    import matplotlib.pyplot as plt
    %matplotlib inline
    import time
    import numpy as np

    sample_num = 12
    col_num = 4
    row_num = int(sample_num / col_num)

    # Create training sample + histogram plot
    f, axarr = plt.subplots(row_num, col_num, figsize=(col_num * 4, row_num * 3))

    for i in range(sample_num):
        # Pick a random image from the validation set
        index = np.random.randint(validation_set[0].shape[0])

        # Add one dimension to the image to fit the CNN input shape
        sample = np.reshape(validation_set[0][index], (1, 64, 64, 3))

        # Record starting time
        start_time = time.time()

        # Infer the label
        inference = model.predict(sample, batch_size=64, verbose=0)

        # Print time difference
        print("Image %2d inference time : %.4f s" % (i, time.time() - start_time))

        # Extract inference value
        inference = inference[0][0]

        # Show the image
        color_str = 'green'
        if inference >= 0.0:
            title_str = "Car: {:4.2f}".format(inference)
            if validation_set[1][index] != 1:
                color_str = 'red'
        else:
            title_str = "No Car: {:4.2f}".format(inference)
            if validation_set[1][index] != -1:
                color_str = 'red'

        axarr[int(i / col_num), i % col_num].imshow(validation_set[0][index])
        axarr[int(i / col_num), i % col_num].set_title(title_str, color=color_str)
        axarr[int(i / col_num), i % col_num].axis('off')

    f.tight_layout()
    plt.savefig("./output_images/inference.png")
CarND-Vehicle-Detection-P5/P5-TrainModel.ipynb
mlandry1/CarND
mit
3. Create a Radical Pilot Session
session = rp.Session()
print("Session id %s" % session.uid)

c = rp.Context('ssh')
c.user_id = ""
session.add_context(c)
02_pilot/Radical_Pilot_YARN_Stampede.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0