look at correlations
# correlation heatmap
corrmat = data[features_to_examine].corr()
corrmat.head()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True)
f.tight_layout()
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
The correlations appear as expected, except for cars_per_hh. Maybe cars_per_hh reflects the size of the household more than income. Might want to try cars per adult instead.
print(data.columns)
#'pct_amer_native','pct_alaska_native',
x_cols = ['bedrooms', 'bathrooms', 'sqft', 'age_of_head_med', 'income_med',
          'pct_white', 'pct_black', 'pct_any_native', 'pct_asian', 'pct_pacific',
          'pct_other_race', 'pct_mixed_race', 'pct_mover', 'pct_owner',
          'avg_hh_size', 'cars_per_hh']
y_col = 'ln_rent...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
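One way to test the cars-per-adult idea above would be to derive the feature directly. A minimal sketch, assuming a hypothetical `adults_per_hh` column is available or derivable from the census fields (it is not in the original feature list):

```python
import pandas as pd

# Hypothetical illustration: derive cars per adult from household-level fields.
# 'adults_per_hh' is an assumed column -- substitute whatever the census data provides.
df = pd.DataFrame({'cars_per_hh': [1.5, 2.0, 0.8],
                   'adults_per_hh': [2.0, 2.5, 1.0]})
df['cars_per_adult'] = df['cars_per_hh'] / df['adults_per_hh']
```

This would then replace cars_per_hh in x_cols.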
Comparison of models

Try a linear model

We'll start with a linear model to use as the baseline.
from sklearn import linear_model
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was renamed model_selection

# create training and testing datasets.
# this creates a test set that is 30% of total obs.
X_train, X_test, y_train, y_test = train_test_split(
    data_notnull[x_cols], data_notnull[y_col], test_size=.3, random_state=201)
regr = linear_model.Linea...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
The residuals look pretty normally distributed. I wonder if inclusion of all these race variables is leading to overfitting. If so, we'd have small error on training set and large error on test set.
print("Training set. Mean squared error: %.5f" % np.mean((regr.predict(X_train) - y_train) ** 2),
      '| Variance score: %.5f' % regr.score(X_train, y_train))
print("Test set. Mean squared error: %.5f" % np.mean((regr.predict(X_test) - y_test) ** 2),
      '| Variance score: %.5f' % regr.score(X_test, y_test))
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
Try Ridge Regression (linear regression with regularization)

Since the training error and test error are about the same, and since we're using few features, overfitting probably isn't a problem. If it were a problem, we would want to try a regression with regularization. Let's try it just for the sake of demonstratio...
from sklearn.linear_model import Ridge

# try a range of different regularization terms.
for a in [10, 1, 0.1, .01, .001, .00001]:
    ridgereg = Ridge(alpha=a)
    ridgereg.fit(X_train, y_train)
    print('\n alpha:', a)
    print("Mean squared error: %.5f" % np.mean((ridgereg.predict(X_test) - y_test) ** 2), '| Varianc...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
As expected, Ridge regression doesn't help much. The best way to improve the model at this point is probably to add more features.

Random Forest
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

def RMSE(y_actual, y_predicted):
    return np.sqrt(mean_squared_error(y_actual, y_predicted))

def cross_val_rf(X, y, max_f='auto', n_trees=50, cv_method='kfold', k=5):
    """E...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
We can use k-fold validation if we believe the samples are independently and identically distributed. That's probably fine right now because we have only 1.5 months of data, but later we may have some time-dependent processes in these time-series data. If we do use k-fold, I think we should shuffle the samples, because ...
# without parameter tuning
cross_val_rf(df_X, df_y)

# tune the parameters
rf_results = optimize_rf(df_X, df_y, max_n_trees=100, n_step=20)
# this is sufficient; very little improvement after n_trees=100.
#rf_results2 = optimize_rf(df_X,df_y, max_n_trees = 500, n_step=100)
rf_results

ax = rf_results.plot()
ax.set_...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
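The shuffling suggested above is not scikit-learn's default; a minimal sketch of a shuffled k-fold split (toy data, independent of the cross_val_rf helper):

```python
from sklearn.model_selection import KFold
import numpy as np

# toy feature matrix: 10 samples, 2 features
X = np.arange(20).reshape(10, 2)

# shuffle=False (the default) would split samples in their original order
kf = KFold(n_splits=5, shuffle=True, random_state=0)
test_sizes = [len(test_idx) for _, test_idx in kf.split(X)]
```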
Using m=sqrt(n_features) and log2(n_features) gives similar performance, and a slight improvement over m = n_features. After about 100 trees the error levels off. One of the nice things about random forest is that using additional trees doesn't lead to overfitting, so we could use more, but it's not necessary. Now we c...
random_forest = RandomForestRegressor(n_estimators=100, max_features='sqrt',
                                      criterion='mse', max_depth=None)
random_forest.fit(df_X, df_y)
predict_y = random_forest.predict(df_X)
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
The 'importance' score provides an ordered qualitative ranking of the features. It is calculated from the improvement in MSE each feature provides at the splits where it is used.
# plot the importances
rf_o = pd.DataFrame({'features': x_cols, 'importance': random_forest.feature_importances_})
rf_o = rf_o.sort_values(by='importance', ascending=False)
plt.figure(1, figsize=(12, 6))
plt.xticks(range(len(rf_o)), rf_o.features, rotation=45)
plt.plot(range(len(rf_o)), rf_o.importance, "o")
plt.title('Featur...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
It's not surprising that sqft is the most important predictor, although it is strange that cars_per_hh is the second most important. I would have expected income to be higher in the list. If we don't think the samples are i.i.d., it's better to use time-series CV.
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
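The tscv object above is only constructed, not used; a minimal sketch of what its splits look like on toy data (each test fold comes strictly after its training fold in time):

```python
from sklearn.model_selection import TimeSeriesSplit
import numpy as np

# toy time-ordered data: 6 samples, 2 features
X = np.arange(12).reshape(6, 2)

tscv = TimeSeriesSplit(n_splits=5)
splits = list(tscv.split(X))
for train_idx, test_idx in splits:
    # training indices always precede the test indices
    assert train_idx.max() < test_idx.min()
```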
Try Boosted Forest
from sklearn.ensemble import GradientBoostingRegressor

def cross_val_gb(X, y, cv_method='kfold', k=5, **params):
    """Estimate gradient boosting regressor using cross validation.

    Args:
        X (DataFrame): features data
        y (Series): target data
        cv_method (str): how to split the data ('kfold' ...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
tune parameters

This time we'll use Grid Search in scikit-learn. This conducts an exhaustive search through the given parameter grid to find the best combination for the given estimator.
from sklearn.model_selection import GridSearchCV

param_grid = {'learning_rate': [.1, .05, .02, .01],
              'max_depth': [2, 4, 6],
              'min_samples_leaf': [3, 5, 9, 17],
              'max_features': [1, .3, .1]
              }
est = GradientBoostingRegressor(n_estimators=1000)
gs_cv = GridSearchCV(est, par...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
Let's use partial_dependence to look at feature interactions. Look at the four most important features.
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble.partial_dependence import partial_dependence

df_X.columns
features = [0, 1, 2, 15, 4, 5, 14, 12]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features, feature_names=names,
                                   grid_resolution=50, figsize...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
The partial dependence plots show how predicted values vary with the given covariate, "controlling for" the influence of other covariates (Friedman, 2001). In the top three plots above, we can see non-linear relationships between the features and predicted values. Bedrooms generally has a positive influence on rent, bu...
features = [(0, 1), (0, 2), (4, 2), (4, 15), (14, 15)]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features, feature_names=names,
                                   grid_resolution=50, figsize=(9, 6))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9)  # tight_layout causes overlap with suptitl...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
Reading epochs from a raw FIF file

This script shows how to read the epochs from a raw file given a list of events. For illustration, we compute the evoked responses for both MEG and EEG data by averaging all the epochs.
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#          Matti Hämäläinen <msh@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)

import mne
from mne import io
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
0.21/_downloads/a2556d0f8dd65be2930ab61c812863c7/plot_read_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5

# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)

# Set up pick list: EEG + MEG - ...
0.21/_downloads/a2556d0f8dd65be2930ab61c812863c7/plot_read_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show result
evoked.plot(time_unit='s')
0.21/_downloads/a2556d0f8dd65be2930ab61c812863c7/plot_read_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Import Example Data
dat = pd.read_table('../example_data/ST000015_log.tsv')
dat.set_index('Name', inplace=True)
dat[:3]
notebook/pca_testing.ipynb
secimTools/SECIMTools
mit
Use R to calculate PCA

Mi has looked at this already, but I wanted to include the R example here for completeness. Here are the two R methods for computing PCA:
%%R -i dat
# First method uses princomp to calculate PCA using eigenvalues and eigenvectors
pr = princomp(dat)
#str(pr)
loadings = pr$loadings
scores = pr$scores
#summary(pr)

%%R -i dat
pr = prcomp(dat)
#str(pr)
loadings = pr$rotation
scores = pr$x
sd = pr$sdev
#summary(pr)
notebook/pca_testing.ipynb
secimTools/SECIMTools
mit
Use Python to calculate PCA

scikit-learn has a PCA package that we will use. It uses the SVD method, so its results match R's prcomp. http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html

Generate PCA with default settings
# Initiate PCA class
pca = PCA()

# Fit the model and transform data
scores = pca.fit_transform(dat)

# Get loadings
loadings = pca.components_

# R also outputs the following in their summaries
sd = loadings.std(axis=0)
propVar = pca.explained_variance_ratio_
cumPropVar = propVar.cumsum()
notebook/pca_testing.ipynb
secimTools/SECIMTools
mit
I compared these results with prcomp and they are identical; note that the Python version formats the data in scientific notation.

Build output tables that match the original PCA script

Build comment block

At the top of each output file, the original R version includes the standard deviation and the proportion of varia...
# Labels used for the comment block
labels = np.array(['#Std. deviation', '#Proportion of variance explained',
                   '#Cumulative proportion of variance explained'])

# Stack the data into a matrix
data = np.vstack([sd, propVar, cumPropVar])

# Add the labels to the first position in the matrix
block = np.column_stack([label...
notebook/pca_testing.ipynb
secimTools/SECIMTools
mit
The resulting dataframe has a bit more than 10M ratings, as expected.
data.shape
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Basic data stats:
data.apply('nunique')
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Preparing data

As always, you first need to define a data model that will provide a common interface for all recommendation algorithms used in experiments:
data_model = RecommenderData(data, 'userid', 'movieid', 'rating',
                             custom_order='timestamp', seed=0)
data_model.fields
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Setting seed=0 ensures controllable randomization when sampling test data, which enhances reproducibility; custom_order allows selecting observations for evaluation based on their timestamp rather than on rating value (more on that later). Let's look at the default configuration of the data model:
data_model.get_configuration()
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
By default, Polara samples 20% of users and marks them for test (test_ratio attribute). These users would be excluded from the training dataset if the warm_start attribute remained set to True (strong generalization test). However, in the iALS case such a setting would require running an additional half-step optimization (fold...
data_model.holdout_size = 1        # hold out 1 item from every test user
data_model.random_holdout = False  # take items with the latest timestamp
data_model.warm_start = False      # standard case
data_model.prepare()
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
The holdout_size attribute controls how many user preferences will be used for evaluation. The current configuration instructs the data model to hold out one item from every test user. The random_holdout=False setting, along with the custom_order input argument of the data model, makes sure that only the latest rated item is taken for ev...
from polara.recommender import defaults

defaults.memory_hard_limit = 2
defaults.max_test_workers = None  # None is the default value
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
common config evaluation

To avoid undesired effects related to positivity bias, models will be trained only on interactions with ratings not lower than 4. This can be achieved by setting the feedback_threshold attribute of models to 4. For the same reason, during the evaluation only items rated with rating...
defaults.switch_positive = 4
init_config = {'feedback_threshold': 4}  # alternatively could set defaults.feedback_threshold
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
The default metric for tuning hyper-parameters and selecting the best model will be Mean Reciprocal Rank (MRR).
target_metric = 'mrr'
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
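As a reminder of what the target metric measures, here is a hand-rolled MRR over toy rankings (a sketch, not Polara's internal implementation): for each test user, take the reciprocal of the rank of the first relevant recommendation, then average over users.

```python
def mean_reciprocal_rank(first_relevant_ranks):
    """first_relevant_ranks: 1-based rank of the first relevant item per test user."""
    return sum(1.0 / r for r in first_relevant_ranks) / len(first_relevant_ranks)

# three users whose held-out item appeared at positions 1, 2 and 4
mrr = mean_reciprocal_rank([1, 2, 4])  # (1 + 0.5 + 0.25) / 3
```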
model tuning

All MF models will be tested on the following grid of rank values (number of latent features):
max_rank = 150
rank_grid = range(10, max_rank + 1, 10)
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Creating and tuning models

We will start with a simple PureSVD model and use it as a baseline in comparison with its own scaling modification and the iALS algorithm.

PureSVD

On one hand, tuning SVD is limited due to its strict least-squares formulation, which doesn't leave much freedom compared with more general matrix ...
try:
    # import package to track grid search progress
    from ipypb import track  # lightweight progressbar, doesn't depend on widgets
except ImportError:
    from tqdm import tqdm_notebook as track

from polara import SVDModel
from polara.evaluation.pipelines import find_optimal_svd_rank

%matplotlib inline

psvd = SVDM...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Note that in this case most of the time is spent on evaluation rather than on model computation. The model was computed only once. You can verify this by calling the psvd.training_time list attribute and seeing that it contains only one entry:
psvd.training_time
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Let's see how quality of recommendations changes with rank (number of latent features).
ax = psvd_rank_scores.plot(ylim=(0, None))
ax.set_xlabel('# of latent factors')
ax.set_ylabel(target_metric.upper());
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Scaled PureSVD model We will employ a simple scaling trick over the rating matrix $R$ that was proposed by the authors of the EIGENREC model [Nikolakopoulos2019]: $R \rightarrow RD^{f-1},$ where $D$ is a diagonal scaling matrix with elements corresponding to the norm of the matrix columns (or square root of the number ...
from polara.recommender.models import ScaledSVD
from polara.evaluation.pipelines import find_optimal_config  # generic routine for grid-search
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
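The transformation $R \rightarrow RD^{f-1}$ amounts to rescaling each column of the rating matrix by a power of its norm. A minimal sketch with scipy sparse matrices (independent of Polara's own implementation):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

# toy 2x3 rating matrix
R = csr_matrix(np.array([[4., 0., 5.],
                         [0., 3., 4.]]))
f = 0.4  # scaling factor, in the range explored below

col_norms = np.sqrt(R.power(2).sum(axis=0)).A1  # Euclidean norm of each column
D = diags(np.power(col_norms, f - 1))           # diagonal matrix D^{f-1}
R_scaled = (R @ D).tocsr()                      # R -> R D^{f-1}
```

With f = 1 the matrix is unchanged; smaller f progressively downweights popular (high-norm) columns.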
Now we have to compute the SVD model for every value of $f$. However, we can still avoid computing the model for each rank value by virtue of rank truncation.
def fine_tune_scaledsvd(model, ranks, scale_params, target_metric, config=None):
    rev_ranks = sorted(ranks, key=lambda x: -x)  # descending order helps avoid model recomputation
    param_grid = [(s1, r) for s1 in scale_params for r in rev_ranks]
    param_names = ('col_scaling', 'rank')
    return find_optimal_co...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
We already know an approximate range of values for the scaling factor. You may also want to play with other values, especially when working with a different dataset.
ssvd = ScaledSVD(data_model)  # create model
scaling = [0.2, 0.4, 0.6, 0.8]
ssvd_best_config, ssvd_scores = fine_tune_scaledsvd(ssvd, rank_grid, scaling, target_met...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Note that during this grid search the model was computed only len(scaling) = 4 times; the other points were found via rank truncation. Let's see how quality changes with different values of the scaling parameter $f$.
for cs in scaling:
    cs_scores = ssvd_scores.xs(cs, level='col_scaling')
    ax = cs_scores.plot(label=f'col_scaling: {cs}')

ax.set_title(f'Recommendations quality for {ssvd.method} model')
ax.set_ylim(0, None)
ax.set_ylabel(target_metric.upper())
ax.legend();

ssvd_rank_scores = ssvd_scores.xs(ssvd_best_config['col_...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
The optimal set of hyper-parameters:
ssvd_best_config
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
iALS

Using the implicit library in Polara is almost as simple as using SVD-based models. Make sure you have it installed in your Python environment (follow the instructions at https://github.com/benfred/implicit ).
import os; os.environ["MKL_NUM_THREADS"] = "1"  # as required by implicit

import numpy as np

from polara.recommender.external.implicit.ialswrapper import ImplicitALS
from polara.evaluation.pipelines import random_grid, set_config
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
defining hyper-parameter grid

The hyper-parameter space in this case is much broader. We will start by adjusting all hyper-parameters except the rank value; then, once an optimal config is found, we will perform a full grid search over the range of rank values defined by rank_grid.
als_params = dict(alpha = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100],
                  epsilon = [0.01, 0.03, 0.1, 0.3, 1],
                  weight_func = [None, np.sign, np.sqrt, np.log2, np.log10],
                  regularization = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3],
                  rank = [40]  # enforce ...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
To keep computation time reasonable, the grid search is performed over 60 random points, which is enough to get within 5% of the optimum with 95% confidence. The grid is generated with the built-in random_grid function.
als_param_grid, als_param_names = random_grid(als_params, n=60)
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
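The "60 points" figure comes from a standard random-search argument: the chance that at least one of n uniform draws lands in the top 5% of configurations is $1 - 0.95^n$, which first exceeds 95% at n = 59, so 60 draws suffice.

```python
n = 60
coverage = 1 - 0.95 ** n  # P(at least one of n draws lands in the top 5%)
```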
random grid search
ials = ImplicitALS(data_model)  # create model

ials_best_config, ials_grid_scores = find_optimal_config(ials,
                                                         als_param_grid,   # hyper-parameters grid
                                                         als_param_names,  # hyper-parameters' names
                                                         ...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
rank tuning

In contrast to the SVD-based algorithms, iALS requires recomputing the model for every new rank value; therefore, in addition to the previous 60 runs, the model will be computed len(rank_grid) more times, once for each rank value.
ials_best_rank, ials_rank_scores = find_optimal_config(ials, rank_grid, 'rank',
                                                       target_metric,
                                                       # con...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Let's combine the best rank value with other optimal parameters:
ials_best_config.update(ials_best_rank)
ials_best_config
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
visualizing rank tuning results

We can now see how all three algorithms compare to each other.
def plot_rank_scores(scores):
    ax = None
    for sc in scores:
        ax = sc.sort_index().plot(label=sc.name, ax=ax)
    ax.set_ylim(0, None)
    ax.set_title('Recommendations quality')
    ax.set_xlabel('# of latent factors')
    ax.set_ylabel(target_metric.upper())
    ax.legend()
    return ax

plot_rank_score...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
It can be seen that scaling (the PureSVD-s line) has a significant impact on the quality of recommendations. This is, however, a preliminary result, which is yet to be verified via cross-validation.

Models comparison

The results above were computed only with a single train-test split corresponding to a single fold. In...
from polara.evaluation import evaluation_engine as ee
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Fixing optimal configurations:
set_config(psvd, {'rank': psvd_best_rank})
set_config(ssvd, ssvd_best_config)
set_config(ials, ials_best_config)
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Performing 5-fold CV:
models = [psvd, ssvd, ials]
metrics = ['ranking', 'relevance', 'experience']

# run experiments silently
data_model.verbose = False
for model in models:
    model.verbose = False

# perform cross-validation on models, report scores according to metrics
cv_results = ee.run_cv_experiment(models, ...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
The output contains results for all folds:
cv_results.head()
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
plotting results

We will plot average scores and their confidence intervals. The following function will do this based on raw input from CV:
def plot_cv_results(scores, subplot_size=(6, 3.5)):
    scores_mean = scores.mean(level='model')
    scores_errs = ee.sample_ci(scores, level='model')

    # remove top-level columns with classes of metrics (for convenience)
    scores_mean.columns = scores_mean.columns.get_level_values(1)
    scores_errs.columns = scor...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
The difference between PureSVD and iALS is not significant. <div class="alert alert-block alert-success">In contrast, the advantage of the scaled version of PureSVD, denoted as `PureSVD-s`, over the other models is much more pronounced, making it a clear favorite.</div> Interestingly, the difference is especially pronounc...
import pandas as pd

timings = {}
for model in models:
    timings[f'{model.method} rank {model.rank}'] = model.training_time[-5:]

time_df = pd.DataFrame(timings)
time_df.mean().plot.bar(yerr=time_df.std(), rot=0,
                        title='Computation time for optimal config, s');
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
PureSVD-s compares favorably to iALS, even though it requires a higher rank value, which results in a longer training time compared with PureSVD. Another interesting measure is how long it takes each model to achieve approximately the same quality. Note that all models give approximately the same quality at t...
fixed_rank_timings = {}
for model in models:
    model.rank = ials_best_config['rank']
    model.build()
    fixed_rank_timings[model.method] = model.training_time[-1]

pd.Series(fixed_rank_timings).plot.bar(rot=0, title=f'Rank {ials.rank} computation time, s')
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
By all means, computing SVD on this dataset is much faster than ALS. This may, however, vary on other datasets due to a different sparsity structure. Nevertheless, you can still expect that SVD-based models will perform well due to the usage of highly optimized BLAS and LAPACK routines.

Bonus: scaling for iALS

Yo...
from polara.recommender.models import ScaledMatrixMixin

class ScaledIALS(ScaledMatrixMixin, ImplicitALS):
    pass  # similarly to how PureSVD is extended to its scaled version

sals = ScaledIALS(data_model)
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
To save time, we will reuse the optimal scaling configuration found by tuning the scaled version of PureSVD. Alternatively, you could include the scaling parameters in the grid-search step by extending the als_param_grid and als_param_names variables. However, taking the configuration of PureSVD-s should be a good en...
sals_best_config, sals_param_scores = find_optimal_config(sals, als_param_grid,
                                                          als_param_names,
                                                          target_metric,
                                                          ...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
visualizing rank tuning results
plot_rank_scores([ssvd_rank_scores, sals_rank_scores, ials_rank_scores, psvd_rank_scores]);
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
There seems to be no difference between the original and scaled versions of iALS. Let's verify this with a CV experiment.

cross-validation

You only need to perform CV computations for the new model. The configuration of the data will be the same as previously, as the data_model instance ensures a reproducible data state.
sals_best_config.update(sals_best_rank)
sals_best_config

set_config(sals, sals_best_config)
sals.verbose = False

sals_cv_results = ee.run_cv_experiment([sals], metrics=metrics, iterator=track)

plot_cv_results(cv_results.append(sals_cv_resu...
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
Evfro/polara
mit
Hyperparameter configuration
# Convolutional Layer 1.
filter_size1 = 3
num_filters1 = 32

# Convolutional Layer 2.
filter_size2 = 3
num_filters2 = 32

# Convolutional Layer 3.
filter_size3 = 3
num_filters3 = 64

# Fully-connected layer.
fc_size = 128  # Number of neurons in fully-connected layer.

# Number of color channels for the ima...
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Data loading
data = dataset.read_train_sets(train_path, img_size, classes, validation_size=validation_size)
test_images, test_ids = dataset.read_test_set(test_path, img_size)

print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(test_images)))
print("- Validation-set:\...
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Plotting function

Function used to plot 9 images in a 3x3 grid (or fewer, depending on how many images are passed), writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None):
    if len(images) == 0:
        print("no images to show")
        return
    else:
        random_indices = random.sample(range(len(images)), min(len(images), 9))
        images, cls_true = zip(*[(images[i], cls_true[i]) for i in random_indices])
        ...
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Display some random images to check that the data loaded correctly
# Get some random images and their labels from the train set.
images, cls_true = data.train.images, data.train.cls

# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
TensorFlow graph model

It mainly consists of the following parts:
* Placeholder variables used for inputting data to the graph.
* Variables that are going to be optimized so as to make the convolutional network perform better.
* The mathematical formulas for the convolutional network.
* A cost measure that can be used to guide the optimization of the variable...
def new_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

def new_biases(length):
    return tf.Variable(tf.constant(0.05, shape=[length]))
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Helper function for creating a new convolutional layer

This function defines part of the model. The input is assumed to be 4-dimensional:
1. Image number.
2. Y-axis (height) of each image.
3. X-axis (width) of each image.
4. Channels of each image.
The channels may be the color channels of the original image, or the channels of feature maps produced by an earlier layer.
The output is likewise 4-dimensional:
1. Image number, same as input.
2. Height of each image (halved if 2x2 max-pooling is used).
3. Width of each image, likewise.
4. Channels produced...
def new_conv_layer(input,              # The previous layer.
                   num_input_channels, # Num. channels in prev. layer.
                   filter_size,        # Width and height of each filter.
                   num_filters,        # Number of filters.
                   use_pooling=True):  # Use 2x2 max-p...
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Helper function for flattening a convolutional layer

A convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim so it can be used as input to the fully-connected layer.
def flatten_layer(layer):
    # Get the shape of the input layer.
    layer_shape = layer.get_shape()

    # The shape of the input layer is assumed to be:
    # layer_shape == [num_images, img_height, img_width, num_channels]

    # The number of features is: img_height * img_width * num_channels
    # We can use a fu...
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
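The reshape that flatten_layer performs can be checked with a toy tensor; a numpy sketch (the notebook itself does it in TensorFlow, but the arithmetic is the same): a 4-dim batch becomes 2-dim with num_features = height * width * channels.

```python
import numpy as np

# toy batch shaped like a conv-layer output: [num_images, height, width, channels]
batch = np.zeros((8, 7, 7, 64))

num_features = 7 * 7 * 64               # height * width * channels = 3136
flat = batch.reshape(-1, num_features)  # -> [num_images, num_features]
```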
Helper function for creating a fully-connected layer

This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here; we are just adding the mathematical formulas to the TensorFlow graph. It is assumed that the input is a 2-dim tensor of shape [num_images, num_inputs]. The output is a 2-dim tens...
def new_fc_layer(input,          # The previous layer.
                 num_inputs,     # Num. inputs from prev. layer.
                 num_outputs,    # Num. outputs.
                 use_relu=True): # Use Rectified Linear Unit (ReLU)?

    # Create new weights and biases.
    weights = new_weights(shape=[num_inputs,...
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Placeholder definitions

Placeholder variables serve as the input to the TensorFlow computational graph; we may change them each time we execute the graph. We call this feeding the placeholder variables, and it is demonstrated further below. First we define the placeholder variable for the input images. This allows us to change ...
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes.
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Convolutional layer 1

Create the first convolutional layer. It takes x_image as input and creates num_filters1 different filters, each having width and height equal to filter_size1. Finally, we wish to down-sample the image so it is half the size, using 2x2 max-pooling.
layer_conv1, weights_conv1 = \
    new_conv_layer(input=x_image,
                   num_input_channels=num_channels,
                   filter_size=filter_size1,
                   num_filters=num_filters1,
                   use_pooling=True)
layer_conv1
05_Image_recognition_and_classification/cnn.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Convolutional layers 2 and 3

Create the second and third convolutional layers, which take as input the output from the first and second convolutional layers respectively. The number of input channels corresponds to the number of filters in the previous convolutional layer.
layer_conv2, weights_conv2 = \
    new_conv_layer(input=layer_conv1,
                   num_input_channels=num_filters1,
                   filter_size=filter_size2,
                   num_filters=num_filters2,
                   use_pooling=True)
layer_conv2

layer_conv3, weights_conv3 = \
    new_conv_layer(input=la...
Flatten layer. The convolutional layers output 4-dim tensors. We now wish to use these as input to a fully-connected network, which requires the tensors to be reshaped or flattened to 2-dim tensors.
layer_flat, num_features = flatten_layer(layer_conv3) layer_flat num_features
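What flatten_layer computes can be sketched with NumPy; the conv-output shape (10, 8, 8, 36) below is a made-up example, and num_features is the product of all dimensions except the batch:

```python
import numpy as np

# Hypothetical conv output: 10 images, 8x8 spatial, 36 channels.
conv_out = np.zeros((10, 8, 8, 36))

# num_features counts everything except the batch dimension.
num_features = int(np.prod(conv_out.shape[1:]))  # 8 * 8 * 36 = 2304

# -1 keeps the batch dimension arbitrary, mirroring tf.reshape(layer, [-1, num_features]).
flat = conv_out.reshape(-1, num_features)
print(flat.shape)  # (10, 2304)
```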
Fully-connected layer 1. Add a fully-connected layer to the network. The input is the flattened output of the last convolutional layer. The number of neurons or nodes in the fully-connected layer is fc_size. ReLU is used so we can learn non-linear relations.
layer_fc1 = new_fc_layer(input=layer_flat, num_inputs=num_features, num_outputs=fc_size, use_relu=True)
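The fully-connected ReLU layer reduces to a matrix product plus bias, clipped at zero; a minimal NumPy sketch with randomly initialized (untrained) weights:

```python
import numpy as np

def fc_relu(x, W, b):
    """Fully-connected layer with ReLU: y = max(0, x @ W + b)."""
    return np.maximum(0.0, x @ W + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2304))            # batch of 4 flattened images
W = rng.standard_normal((2304, 128)) * 0.05   # untrained illustrative weights
b = np.zeros(128)

out = fc_relu(x, W, b)
print(out.shape)  # (4, 128)
```

The element-wise max with zero is what makes the layer non-linear; without it, stacked fully-connected layers collapse into a single linear map.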
Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.
layer_fc1
Fully-connected layer 2. Add another fully-connected layer that outputs vectors of length num_classes for determining which of the classes the input image belongs to. Note that ReLU is not used in this layer.
layer_fc2 = new_fc_layer(input=layer_fc1, num_inputs=fc_size, num_outputs=num_classes, use_relu=False) layer_fc2
Predicted class. The second fully-connected layer estimates how likely it is that the input image belongs to each of the 2 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the elements sum to one. This is calculated using the so-called softmax function and the result is stored in y_pred.
y_pred = tf.nn.softmax(layer_fc2)
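What tf.nn.softmax does per row can be sketched in NumPy; subtracting the row maximum first improves numerical stability without changing the result:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax; shifting by the row max avoids overflow in exp."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0], [-1.0, 3.0]])  # illustrative raw layer_fc2 outputs
probs = softmax(logits)
print(probs.sum(axis=1))  # each row sums to 1
```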
The class-number is the index of the largest element.
y_pred_cls = tf.argmax(y_pred, axis=1)
Cost-function to be optimized. To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true. The cross-entropy is a performance measure used in classification; it is always positive and equals zero only when the predicted output exactly matches the desired output, so the goal of optimization is to minimize it. Note that the built-in TensorFlow function uses the values of layer_fc2 directly rather than y_pred, because it also calculates the softmax internally.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true)
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cros...
cost = tf.reduce_mean(cross_entropy)
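The per-image cross-entropy from raw logits, and the reduce_mean that turns it into a scalar cost, can be sketched together in NumPy (the logits and one-hot labels here are illustrative):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Per-example cross-entropy from raw logits and one-hot labels (log-sum-exp form)."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)

logits = np.array([[4.0, 0.0], [0.0, 4.0]])
labels = np.array([[1.0, 0.0], [1.0, 0.0]])  # second example is mis-classified on purpose

per_example = softmax_cross_entropy(logits, labels)
cost = per_example.mean()  # the single scalar fed to the optimizer
print(per_example[0] < per_example[1])  # True: a correct prediction gives a lower loss
```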
Optimization method. Now that we have a cost measure that must be minimized, we can create an optimizer. In this case it is the AdamOptimizer, which is an advanced form of Gradient Descent. Note that optimization is not performed at this point. In fact, nothing is calculated at all; we just add the optimizer-object to the TensorFlow graph for later execution.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
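Adam is gradient descent with per-parameter adaptive step sizes; a one-parameter sketch of a single update, using the same default hyper-parameters as TensorFlow's AdamOptimizer:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, then the parameter step."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (running mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentred variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0])
m, v = np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, grad=np.array([2.0]), m=m, v=v, t=1)
print(theta)  # moved slightly opposite to the gradient direction
```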
Performance measures. We need a few more performance measures to display the progress to the user. This is a vector of booleans indicating whether the predicted class equals the true class of each image.
y_true_cls = tf.argmax(y_true, axis=1) correct_prediction = tf.equal(y_pred_cls, y_true_cls)
This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
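Both performance ops reduce to elementary operations — a boolean comparison followed by a mean over the type-cast booleans (the class vectors here are made up):

```python
import numpy as np

y_pred_cls = np.array([0, 1, 1, 0])  # hypothetical predicted classes
y_true_cls = np.array([0, 1, 0, 0])  # hypothetical true classes

correct_prediction = (y_pred_cls == y_true_cls)          # vector of booleans
accuracy = correct_prediction.astype(np.float32).mean()  # False -> 0, True -> 1
print(accuracy)  # 0.75
```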
Compiling and running the TensorFlow graph. Create TensorFlow session. Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
Initialize variables. The variables for weights and biases must be initialized before we start optimizing them.
session.run(tf.global_variables_initializer())
Helper-function to perform optimization iterations. It takes a long time to calculate the gradient of the model using the entirety of a large dataset. We therefore only use a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then have to perform more optimization iterations.
train_batch_size = batch_size def print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss): # Calculate the accuracy on the training-set. acc = session.run(accuracy, feed_dict=feed_dict_train) val_acc = session.run(accuracy, feed_dict=feed_dict_validate) msg = "Epoch {0} --- Training Accura...
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
# Counter for total number of iterations performed so far. total_iterations = 0 def optimize(num_iterations): # Ensure we update the global variable rather than a local copy. global total_iterations # Start-time used for printing time-usage below. start_time = time.time() best_val_loss = floa...
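The batch-selection step inside the loop can be sketched as drawing train_batch_size random examples from a hypothetical training array (the array shapes and the image size 128 are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
train_images = np.zeros((1000, 128 * 128 * 3))  # hypothetical flattened training set
train_labels = np.zeros((1000, 2))              # matching one-hot labels

train_batch_size = 32
# Draw a batch of distinct random training examples.
idx = rng.choice(len(train_images), size=train_batch_size, replace=False)
x_batch, y_batch = train_images[idx], train_labels[idx]
# feed_dict_train = {x: x_batch, y_true: y_batch} would then be passed to session.run.
print(x_batch.shape[0])  # 32
```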
Helper-function to plot example errors. Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors(cls_pred, correct): # cls_pred is an array of the predicted class-number for # all images in the test-set. # correct is a boolean array whether the predicted class # is equal to the true class for each image in the test-set. # Negate the boolean array. incorrect = (corr...
Helper-function to plot the confusion matrix.
def plot_confusion_matrix(cls_pred): # cls_pred is an array of the predicted class-number for # all images in the test-set. # Get the true classifications for the test-set. cls_true = data.valid.cls # Get the confusion matrix using sklearn. cm = confusion_matrix(y_true=cls_true, ...
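A confusion matrix simply counts, for each (true class, predicted class) pair, how many images fall in that cell; a minimal NumPy version, equivalent in spirit to sklearn's confusion_matrix:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """cm[i, j] = number of examples with true class i predicted as class j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion_matrix([0, 0, 1, 1, 1], [0, 1, 1, 1, 0], num_classes=2)
print(cm)
# [[1 1]
#  [1 2]]
```

The diagonal holds the correctly classified counts, so its sum divided by the total is exactly the accuracy computed earlier.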
Helper-function to show the performance. Function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
def print_validation_accuracy(show_example_errors=False, show_confusion_matrix=False): # Number of images in the test-set. num_test = len(data.valid.images) # Allocate an array for the predicted classes which # will be calculated in batches and filled into this array. cls_p...
Performance after 1 optimization iteration.
optimize(num_iterations=1) print_validation_accuracy()
Performance after 100 optimization iterations. After 100 optimization iterations, the model should have significantly improved its classification accuracy.
optimize(num_iterations=99) # We already performed 1 iteration above. print_validation_accuracy(show_example_errors=True)
Performance after 1,000 optimization iterations.
optimize(num_iterations=900) # We performed 100 iterations above. print_validation_accuracy(show_example_errors=True)
Performance after 10,000 optimization iterations.
optimize(num_iterations=9000) # We performed 1000 iterations above. print_validation_accuracy(show_example_errors=True, show_confusion_matrix=True)
Visualization of weights and convolutional layers. In trying to understand why the convolutional neural network can recognize images, we will now visualize the weights of the convolutional filters and the resulting output images. Helper-function for plotting convolutional weights.
def plot_conv_weights(weights, input_channel=0): # Assume weights are TensorFlow ops for 4-dim variables # e.g. weights_conv1 or weights_conv2. # Retrieve the values of the weight-variables from TensorFlow. # A feed-dict is not necessary because nothing is calculated. w = session.run(weights) ...
Helper-function for plotting the output of a convolutional layer.
def plot_conv_layer(layer, image): # Assume layer is a TensorFlow op that outputs a 4-dim tensor # which is the output of a convolutional layer, # e.g. layer_conv1 or layer_conv2. image = image.reshape(img_size_flat) # Create a feed-dict containing just one image. # Note that we don't need...
Input images. Helper-function for plotting an image.
def plot_image(image): plt.imshow(image.reshape(img_size, img_size, num_channels), interpolation='nearest') plt.show()
Plot an image from the test-set which will be used as an example below.
image1 = test_images[0] plot_image(image1)
Plot another example image from the test-set.
image2 = test_images[13] plot_image(image2)
Convolutional layer 1. Now plot the filter-weights for the first convolutional layer. Note that positive weights are red and negative weights are blue.
plot_conv_weights(weights=weights_conv1)
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to about half the resolution of the original input image.
plot_conv_layer(layer=layer_conv1, image=image1)
The following images are the results of applying the convolutional filters to the second image.
plot_conv_layer(layer=layer_conv1, image=image2)
Convolutional layer 2. Now plot the filter-weights for the second convolutional layer. There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first input channel.
plot_conv_weights(weights=weights_conv2, input_channel=0)
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
plot_conv_weights(weights=weights_conv2, input_channel=1)