Alright, now we can simply call the create() method to start the pattern loading/structuring process!
mvp.create()
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
After calling create(), the Mvp-object has a couple of 'new' attributes! Let's check them out.
print("The attribute .X represents our samples-by-features matrix of shape %s" % (mvp.X.shape,))
print("The attribute .y represents our targets (y) of shape %s" % (mvp.y.shape,))
As you can see, these are exactly the patterns (X) and labels (y) which we created manually earlier in our workshop (except that X contains fewer features due to the setting remove_zeros=True). We can also inspect the names of the patterns (as parsed from the design.con file):
print(mvp.contrast_labels)
Feature extraction / selection

Another feature of skbold is that it offers some neuroimaging-specific transformers (implemented the same way as scikit-learn transformers). Let's look at, for example, the ClusterThreshold class - a transformer that applies a (3D) cluster-thresholding procedure on top of univariate feature selection and subsequently returns the cluster-average values as features (i.e. a dimensionality reduction procedure that reduces voxels to cluster-averages). Let's check it out:
from skbold.feature_extraction import ClusterThreshold
from sklearn.feature_selection import f_classif

clt = ClusterThreshold(mvp=mvp, min_score=10, selector=f_classif)
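Conceptually, the transform can be sketched as follows. This is a simplified, hypothetical re-implementation (using scipy.ndimage.label for the 3D clustering), not skbold's actual code, but it shows the three steps: univariate scoring, thresholding, and averaging within spatial clusters:

```python
import numpy as np
from scipy.ndimage import label
from sklearn.feature_selection import f_classif

def cluster_threshold_sketch(X, y, mask_shape, min_score=10):
    """Simplified sketch of cluster-thresholding.

    X: (n_samples, n_voxels) pattern matrix for voxels in a 3D grid
    mask_shape: shape of the 3D volume the voxels live in
    """
    # 1. Univariate feature scores (F-values per voxel)
    scores, _ = f_classif(X, y)
    # 2. Threshold: keep voxels whose score exceeds min_score
    passed = (scores > min_score).reshape(mask_shape)
    # 3. Find spatially contiguous 3D clusters among surviving voxels
    clusters, n_clusters = label(passed)
    # 4. Average the pattern values within each cluster
    X_3d = X.reshape(X.shape[0], *mask_shape)
    return np.stack([X_3d[:, clusters == i].mean(axis=1)
                     for i in range(1, n_clusters + 1)], axis=1)
```

The result has one column per surviving cluster instead of one per voxel, which is exactly the dimensionality reduction described above.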
We initialized our ClusterThreshold object to perform an initial threshold at min_score=10. The voxels that "survived" this threshold are subsequently clustered and averaged within clusters. Below, we'll show that the API is exactly the same as scikit-learn's transformers:
from sklearn.model_selection import train_test_split

# Let's cross-validate our ClusterThresholding procedure (which you should always do!)
X_train, X_test, y_train, y_test = train_test_split(mvp.X, mvp.y, test_size=0.25)
print("Shape of X_train before cluster-thresholding: %s" % (X_train.shape,))
print("Shape of X_test before cluster-thresholding: %s" % (X_test.shape,), '\n')

# Let's fit and transform
clt.fit(X_train, y_train)
X_train_clt = clt.transform(X_train)
print("Shape of X_train after cluster-thresholding: %s" % (X_train_clt.shape,))

X_test_clt = clt.transform(X_test)
print("Shape of X_test after cluster-thresholding: %s" % (X_test_clt.shape,))
Skbold has many more transformers, such as RoiIndexer, which indexes patterns given a certain mask/ROI. It doesn't matter whether the patterns are in EPI-space and the mask/ROI is in MNI-space; skbold registers the mask/ROI from one space to the other accordingly. (It needs FSL for this, and as we don't know whether you have this installed, we won't showcase it here.)

Model evaluation / Feature visualization

Apart from the Mvp object, skbold also includes the MvpResults object, which is designed to keep track of your pipeline's performance and feature weights across different folds. You can keep track of any (scikit-learn) metric (like f1-score, accuracy, ROC-AUC-score, etc.). Keeping track of feature weights comes in three 'flavors': the option "ufs", which keeps track of the univariate feature scores (only applicable if you actually use univariate feature selection in your pipeline); the option "fwm" ('feature weight mapping', see this article), which keeps track of the raw feature weights; and the option "forward", which keeps track of the corresponding forward model weights given the feature weights (see this article). Anyway, let's look at how we should initialize such an object:
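The "forward" option refers to the standard transformation from backward (decoding) weights to forward (activation) patterns, as described by Haufe et al. (2014): for a linear decoder with weight vector w, the activation pattern is proportional to cov(X) @ w. A minimal NumPy sketch of that transformation (not skbold's actual implementation):

```python
import numpy as np

def forward_weights(X, w):
    """Convert backward (decoding) weights w into a forward (activation)
    pattern: a = cov(X) @ w, following Haufe et al. (2014)."""
    Xc = X - X.mean(axis=0)                      # center the features
    cov_X = (Xc.T @ Xc) / (X.shape[0] - 1)       # sample covariance of X
    return cov_X @ w
```

The forward pattern is the quantity that is interpretable as "where the signal is", which is why it is the recommended flavor for feature visualization.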
from skbold.postproc import MvpResults
from sklearn.metrics import accuracy_score, f1_score

mvpr = MvpResults(mvp=mvp, n_iter=5, feature_scoring='forward', confmat=True,
                  accuracy=accuracy_score, f1=f1_score)
Importantly, the MvpResults class needs a Mvp object upon initialization to extract some meta-data, and it needs to know how many folds (n_iter) we're going to keep track of (here we assume we'll do 5-fold CV). We also indicate that we want to keep track of the confusion matrices across folds (confmat=True), and after that we specify a couple of metrics (you can indicate any number of metrics in the form name_metric=sklearn_metric_function). Now we can build a ML pipeline and update our MvpResults object after each fold. Importantly, as you'll see below, to correctly extract feature weights, you need to pass a Pipeline object when calling the method update(). Let's first import some things for our pipeline and define the pipeline and CV-scheme:
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold

pipe_line = Pipeline([('scaler', StandardScaler()),
                      ('ufs', SelectKBest(score_func=f_classif, k=1000)),
                      ('clf', SVC(kernel='linear'))])

skf = StratifiedKFold(n_splits=5)
Now we can implement our analysis and simply call mvpr.update() after each fold:
for i, (train_idx, test_idx) in enumerate(skf.split(mvp.X, mvp.y)):
    print("Processing fold %i / %i" % (i+1, skf.n_splits))
    X_train, X_test = mvp.X[train_idx], mvp.X[test_idx]
    y_train, y_test = mvp.y[train_idx], mvp.y[test_idx]
    pipe_line.fit(X_train, y_train)
    pred = pipe_line.predict(X_test)
    mvpr.update(test_idx=test_idx, y_pred=pred, pipeline=pipe_line)
We can check out the results of our analysis by calling the compute_scores() method:
performance, feature_scores = mvpr.compute_scores()
This prints out the mean and standard deviation of our metrics across folds and the number of voxels that were part of the analysis. We can check out the per-fold performance by looking at the first returned variable (here: performance):
performance
Also, we can check out the feature scores (here: the "forward" model corresponding to the classifier weights), which are returned here as feature_scores. This is a nibabel Nifti-object, which we can visualize using matplotlib:
import os.path as op  # needed for the paths below
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(20, 5))
scores_3d = feature_scores.get_data()
background = op.join('..', 'data', 'pi0070', 'wm.feat', 'reg', 'example_func.nii.gz')
background = nib.load(background).get_data()

for i, slce in enumerate(np.arange(20, 60, 5)):
    plt.subplot(2, 4, (i+1))
    plt.title('X = %i' % slce, fontsize=20)
    to_plot = np.ma.masked_where(scores_3d[slce, :, :] == 0, scores_3d[slce, :, :])
    plt.imshow(background[slce, :, :].T, origin='lower', cmap='gray')
    plt.imshow(to_plot.T, origin='lower', cmap='hot')
    plt.axis('off')

plt.tight_layout()
plt.show()
You almost always have more than one subject, so what you can do is loop over subjects, initialize a new MvpResults object for every subject, and store them in a list. Once the loop over subjects is completed, simply initialize a MvpAverageResults object and call compute_statistics(), which we'll show below:
from glob import glob
from skbold.postproc import MvpAverageResults

feat_dirs = glob(op.join('..', 'data', 'pi*', 'wm.feat'))
n_folds = 5
mvp_results_list = []

for feat_dir in feat_dirs:
    print("Subject: %s" % feat_dir)
    mvp = MvpWithin(source=feat_dir, read_labels=read_labels, ref_space=ref_space,
                    statistic=statistic, remove_zeros=remove_zeros, mask=mask_file)
    mvp.create()
    mvpr = MvpResults(mvp=mvp, n_iter=5, feature_scoring='forward', confmat=True,
                      accuracy=accuracy_score, f1=f1_score)

    for train_idx, test_idx in skf.split(mvp.X, mvp.y):
        X_train, X_test = mvp.X[train_idx], mvp.X[test_idx]
        y_train, y_test = mvp.y[train_idx], mvp.y[test_idx]
        pipe_line.fit(X_train, y_train)
        pred = pipe_line.predict(X_test)
        mvpr.update(test_idx=test_idx, y_pred=pred, pipeline=pipe_line)

    mvp_results_list.append(mvpr)

mvpr_average = MvpAverageResults()
subjects = [op.basename(op.dirname(f)) for f in feat_dirs]
results = mvpr_average.compute_statistics(mvp_results_list, identifiers=subjects,
                                          metric='f1', h0=.5)
results
In this equation:

- $\epsilon$ is the single particle energy.
- $\mu$ is the chemical potential, which is related to the total number of particles.
- $k$ is the Boltzmann constant.
- $T$ is the temperature in Kelvin.

In the cell below, typeset this equation using LaTeX:

\begin{equation}
F \left(\epsilon\right) = \frac{1}{e^{\frac{\epsilon - \mu}{kT}} + 1}
\end{equation}

Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    H = 1 / (np.exp((energy - mu) / kT) + 1)
    return H

assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0, 1.0, 10), 1.0, 10.0),
    np.array([0.52497919, 0.5222076, 0.51943465, 0.5166605, 0.51388532,
              0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5]))
midterm/InteractEx06.ipynb
LimeeZ/phys292-2015-work
mit
Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Use energies over the range $[0,10.0]$ and a suitable number of points. Choose an appropriate x and y limit for your visualization. Label your x and y axis and the overall visualization. Customize your plot in 3 other ways to make it effective and beautiful.
def plot_fermidist(mu, kT):
    energy = np.linspace(0., 10.0, 50)
    plt.plot(energy, fermidist(energy, mu, kT))
    plt.title('The Fermi Distribution')
    plt.grid(True)
    plt.xlabel('Energy [eV]')
    plt.ylabel('F (unitless)')

plot_fermidist(4.0, 1.0)

assert True  # leave this for grading the plot_fermidist function
Use interact with plot_fermidist to explore the distribution: For mu use a floating point slider over the range $[0.0,5.0]$. for kT use a floating point slider over the range $[0.1,10.0]$.
interact(plot_fermidist, mu = [0.0,5.0], kT=[0.1,10.0]);
Just for fun, let's create a lambda to find and show nearest neighbor images
show_neighbors = lambda i: get_images_from_ids(knn_model.query(image_train[i:i+1]))['image'].show()
show_neighbors(8)
show_neighbors(26)

auto_data = image_train[image_train['label'] == 'automobile']
cat_data = image_train[image_train['label'] == 'cat']
dog_data = image_train[image_train['label'] == 'dog']
bird_data = image_train[image_train['label'] == 'bird']

auto_model = graphlab.nearest_neighbors.create(auto_data, features=['deep_features'], label='id')
cat_model = graphlab.nearest_neighbors.create(cat_data, features=['deep_features'], label='id')
dog_model = graphlab.nearest_neighbors.create(dog_data, features=['deep_features'], label='id')
bird_model = graphlab.nearest_neighbors.create(bird_data, features=['deep_features'], label='id')

cat = image_test[0:1]
cat_model_query = cat_model.query(cat)
cat_neighbors = get_images_from_ids(cat_model_query)
#cat_neighbors
cat_neighbors['image'].show()
cat_model_query['distance'].mean()

dog_model_query = dog_model.query(cat)
dog_neighbor = get_images_from_ids(dog_model_query)
dog_neighbor['image'].show()
dog_model_query['distance'].mean()

image_test_auto = image_test[image_test['label'] == 'automobile']
image_test_cat = image_test[image_test['label'] == 'cat']
image_test_dog = image_test[image_test['label'] == 'dog']
image_test_bird = image_test[image_test['label'] == 'bird']

dog_cat_neighbors = cat_model.query(image_test_dog, k=1)
dog_dog_neighbors = dog_model.query(image_test_dog, k=1)
dog_bird_neighbors = bird_model.query(image_test_dog, k=1)
dog_auto_neighbors = auto_model.query(image_test_dog, k=1)

dog_distances = graphlab.SFrame()
dog_distances['dog-dog'] = dog_dog_neighbors['distance']
dog_distances['dog-cat'] = dog_cat_neighbors['distance']
dog_distances['dog-bird'] = dog_bird_neighbors['distance']
dog_distances['dog-auto'] = dog_auto_neighbors['distance']
dog_distances

def is_dog_correct(row):
    for col_name in dog_distances.column_names():
        if row['dog-dog'] > row[col_name]:
            return 0
    return 1

num_correct = dog_distances.apply(is_dog_correct)
num_correct.sum()

cat_model_query
cat_model.query(cat)
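GraphLab's nearest_neighbors model does the distance computation for us; the underlying query is straightforward, though. A framework-independent NumPy sketch of the same Euclidean nearest-neighbour lookup over deep-feature vectors (hypothetical helper, not GraphLab's implementation):

```python
import numpy as np

def nearest_neighbors(query, features, k=5):
    """Return indices and Euclidean distances of the k rows of
    `features` closest to the single feature vector `query`."""
    dists = np.linalg.norm(features - query, axis=1)  # distance to every row
    idx = np.argsort(dists)[:k]                       # k smallest distances
    return idx, dists[idx]
```

This is the same operation cat_model.query(cat) performs: compute distances between the query's deep features and every training example's deep features, then return the closest rows.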
dato/deeplearning/Deep Features for Image Retrieval.ipynb
jrrembert/cybernetic-organism
gpl-2.0
As of Mon 12th of Oct running on devel branch of GPy 0.8.8
import GPy
GPy.plotting.change_plotting_library('plotly')
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
Gaussian process regression tutorial

Nicolas Durrande 2013, with edits by James Hensman and Neil D. Lawrence

In this tutorial we will cover the basics of building a 1-dimensional and a 2-dimensional Gaussian process regression model, also known as a kriging model. We first import the libraries we will need:
import numpy as np
1-dimensional model

For this toy example, we assume we have the following inputs and outputs:
X = np.random.uniform(-3., 3., (20, 1))
Y = np.sin(X) + np.random.randn(20, 1)*0.05
Note that the observations Y include some noise. The first step is to define the covariance kernel we want to use for the model. We choose here a Gaussian kernel (i.e. RBF or squared exponential):
kernel = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
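For intuition, the RBF kernel evaluates $k(x, x') = \sigma^2 \exp\left(-\frac{\|x - x'\|^2}{2\ell^2}\right)$, with variance $\sigma^2$ and lengthscale $\ell$. A minimal NumPy sketch of the covariance matrix this produces (a simplified stand-in for what GPy.kern.RBF computes internally):

```python
import numpy as np

def rbf_K(X1, X2, variance=1.0, lengthscale=1.0):
    """RBF (squared-exponential) covariance between the rows of X1 and X2."""
    # pairwise squared Euclidean distances via broadcasting
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)
```

Nearby inputs get covariance close to `variance`; distant inputs get covariance close to zero, with `lengthscale` controlling how fast that decay happens.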
The parameter input_dim stands for the dimension of the input space. The parameters variance and lengthscale are optional, and default to 1. Many other kernels are implemented, type GPy.kern.<tab> to see a list
# type GPy.kern.<tab> here:
GPy.kern.BasisFuncKernel?
The inputs required for building the model are the observations and the kernel:
m = GPy.models.GPRegression(X,Y,kernel)
By default, some observation noise is added to the model. The functions display and plot give insight into the model we have just built:
from IPython.display import display
display(m)

fig = m.plot()
GPy.plotting.show(fig, filename='basic_gp_regression_notebook')
The above cell shows our GP regression model before optimization of the parameters. The shaded region corresponds to ~95% confidence intervals (i.e. +/- 2 standard deviations). The default values of the kernel parameters may not be optimal for the current data (for example, the confidence intervals seem too wide in the previous figure). A common approach is to find the values of the parameters that maximize the likelihood of the data. It is as easy as calling m.optimize in GPy:
m.optimize(messages=True)
If we want to perform some restarts to try to improve the result of the optimization, we can use the optimize_restarts function. This selects random (drawn from $N(0,1)$) initializations for the parameter values, optimizes each, and sets the model to the best solution found.
m.optimize_restarts(num_restarts = 10)
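The restart logic itself is generic. A sketch of the same idea with scipy.optimize (not GPy's implementation, and with a made-up toy objective): draw several starting points from $N(0,1)$, optimize from each, and keep the best result:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_restarts_sketch(objective, n_params, num_restarts=10, seed=0):
    """Optimize `objective` from several random N(0,1) starting points
    and return the best (lowest-objective) solution found."""
    rng = np.random.RandomState(seed)
    best = None
    for _ in range(num_restarts):
        x0 = rng.randn(n_params)      # random initialization
        res = minimize(objective, x0)
        if best is None or res.fun < best.fun:
            best = res
    return best

# a multimodal toy objective with two local minima
best = optimize_restarts_sketch(lambda x: (x[0]**2 - 1)**2 + 0.1*x[0], 1)
```

Because the likelihood surface of a GP can be multimodal in the hyperparameters, restarting from several initializations reduces the chance of getting stuck in a poor local optimum.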
In this simple example, the objective function (usually!) has only one local minimum, and all of the solutions found are the same. Once again, we can use print(m) and m.plot() to look at the resulting model. This time, the parameter values have been optimized against the log likelihood (a.k.a. the log marginal likelihood): the fit should be much better.
display(m)
fig = m.plot()
GPy.plotting.show(fig, filename='basic_gp_regression_notebook_optimized')
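The quantity maximized above, the log marginal likelihood, has a closed form for GP regression: $\log p(y|X) = -\tfrac{1}{2} y^\top (K + \sigma^2 I)^{-1} y - \tfrac{1}{2} \log |K + \sigma^2 I| - \tfrac{n}{2} \log 2\pi$. A minimal NumPy sketch of this computation, using the standard Cholesky-based formula (not GPy's internal code):

```python
import numpy as np

def log_marginal_likelihood(K, y, noise_var):
    """GP log marginal likelihood:
    -1/2 y^T (K + s^2 I)^{-1} y - 1/2 log|K + s^2 I| - n/2 log(2*pi)."""
    n = len(y)
    Ky = K + noise_var * np.eye(n)
    L = np.linalg.cholesky(Ky)     # stable solve and log-determinant
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))      # 1/2 log|Ky|
            - 0.5 * n * np.log(2 * np.pi))
```

m.optimize adjusts the kernel parameters (which determine K) and the noise variance to maximize exactly this quantity.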
New plotting in GPy 0.9 and later

The new plotting allows you to plot the density of a GP object in a more fine-grained way, by plotting more percentiles of the distribution, color-coded by their opacity:
display(m)
fig = m.plot(plot_density=True)
GPy.plotting.show(fig, filename='basic_gp_regression_density_notebook_optimized')
2-dimensional example

Here is a 2-dimensional example:
# sample inputs and outputs
X = np.random.uniform(-3., 3., (50, 2))
Y = np.sin(X[:, 0:1]) * np.sin(X[:, 1:2]) + np.random.randn(50, 1)*0.05

# define kernel
ker = GPy.kern.Matern52(2, ARD=True) + GPy.kern.White(2)

# create simple GP model
m = GPy.models.GPRegression(X, Y, ker)

# optimize and plot
m.optimize(messages=True, max_f_eval=1000)
fig = m.plot()
display(GPy.plotting.show(fig, filename='basic_gp_regression_notebook_2d'))
display(m)
The flag ARD=True in the definition of the Matern kernel specifies that we want one lengthscale parameter per dimension (i.e. the GP is not isotropic). Note that for 2-d plotting, only the mean is shown.

Plotting slices

To see the uncertainty associated with the above predictions, we can plot slices through the surface. This is done by passing the optional fixed_inputs argument to the plot function. fixed_inputs is a list of tuples containing which of the inputs to fix, and to which value. To get horizontal slices of the above GP, we'll fix the second (index 1) input to -1, 0, and 1.5:
slices = [-1, 0, 1.5]
figure = GPy.plotting.plotting_library().figure(3, 1, shared_xaxes=True,
                                                subplot_titles=('slice at -1',
                                                                'slice at 0',
                                                                'slice at 1.5'))
for i, y in zip(range(3), slices):
    canvas = m.plot(figure=figure, fixed_inputs=[(1, y)], row=(i+1), plot_data=False)

GPy.plotting.show(canvas, filename='basic_gp_regression_notebook_slicing')
A few things to note:

* we've also passed the optional figure argument (together with row), to make the GP plot on a particular subplot
* the data look strange here: we're seeing slices of the GP, but all the data are displayed, even though they might not be close to the current slice.

To get vertical slices, we simply fix the other input. We'll turn the display of data off also:
slices = [-1, 0, 1.5]
figure = GPy.plotting.plotting_library().figure(3, 1, shared_xaxes=True,
                                                subplot_titles=('slice at -1',
                                                                'slice at 0',
                                                                'slice at 1.5'))
for i, y in zip(range(3), slices):
    canvas = m.plot(figure=figure, fixed_inputs=[(0, y)], row=(i+1), plot_data=False)

GPy.plotting.show(canvas, filename='basic_gp_regression_notebook_slicing_vertical')
<h3>II. Preprocessing</h3>

We process the missing values first, dropping columns which have a large number of missing values and imputing values for those that have only a few missing values. The one-class SVM exercise has a more detailed version of these steps.
# dropping columns which have large number of missing entries
m = map(lambda x: sum(secom[x].isnull()), xrange(secom.shape[1]))
m_200thresh = filter(lambda i: (m[i] > 200), xrange(secom.shape[1]))
secom_drop_200thresh = secom.dropna(subset=[m_200thresh], axis=1)
dropthese = [x for x in secom_drop_200thresh.columns.values if \
             secom_drop_200thresh[x].std() == 0]
secom_drop_200thresh.drop(dropthese, axis=1, inplace=True)
print 'The SECOM data set now has {} variables.'\
    .format(secom_drop_200thresh.shape[1])

# imputing missing values for the random forest
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
secom_imp = imp.fit_transform(secom_drop_200thresh)
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
<h3>III. GBM: baseline vs using sample_weight</h3>

We will first compare baseline results with the performance of a model where the sample_weight is used. As discussed in previous exercises, the <i>Matthews correlation coefficient (MCC)</i> is used instead of the <i>Accuracy</i> to compute the score.
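Unlike accuracy, the MCC uses all four cells of the binary confusion matrix: $MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$, which is why it is a better score for an imbalanced data set like SECOM. A short sketch computing it directly from the confusion matrix, cross-checked against sklearn's implementation:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def mcc_from_confusion(y_true, y_pred):
    """Matthews correlation coefficient computed directly from the
    binary confusion matrix (uses all four cells, unlike accuracy)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

A classifier that labels everything as the majority class gets high accuracy on SECOM but an MCC near zero, since TP (or TN) is zero and the numerator collapses.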
# split data into train and holdout sets
# stratify the sample used for modeling to preserve the class proportions
X_train, X_test, y_train, y_test = tts(secom_imp, y,
                                       test_size=0.2, stratify=y, random_state=5)

# function to test GBC parameters
def GBC(params, weight):
    clf = GradientBoostingClassifier(**params)
    if weight:
        sample_weight = np.array([14 if i == -1 else 1 for i in y_train])
        clf.fit(X_train, y_train, sample_weight)
    else:
        clf.fit(X_train, y_train)
    print_results(clf, X_train, X_test)

# function to print results
def print_results(clf, X_train, X_test):
    # training data results
    print 'Training set results:'
    y_pred = clf.predict(X_train)
    print 'The train set MCC: {0:4.3f}'\
        .format(matthews_corrcoef(y_train, y_pred))

    # test set results
    print '\nTest set results:'
    acc = clf.score(X_test, y_test)
    print("Accuracy: {:.4f}".format(acc))
    y_pred = clf.predict(X_test)
    print '\nThe confusion matrix: '
    cm = confusion_matrix(y_test, y_pred)
    print cm
    print '\nThe test set MCC: {0:4.3f}'\
        .format(matthews_corrcoef(y_test, y_pred))
<h4>A) Baseline</h4>
params = {'n_estimators': 800, 'max_depth': 3, 'subsample': 0.8,
          'max_features': 'sqrt', 'learning_rate': 0.019,
          'min_samples_split': 2, 'random_state': SEED}
GBC(params, 0)
<h4>B) Sample weight</h4>
# RUN 1: using the same parameters as the baseline
params = {'n_estimators': 800, 'max_depth': 3, 'subsample': 0.8,
          'max_features': 'sqrt', 'learning_rate': 0.019,
          'min_samples_split': 2, 'random_state': SEED}
GBC(params, 1)

# RUN 2: manually selecting parameters to optimize the train/test MCC with sample weights
params = {'n_estimators': 800, 'max_depth': 3, 'subsample': 0.7,
          'max_features': 'log2', 'learning_rate': 0.018,
          'min_samples_split': 2, 'random_state': SEED}
GBC(params, 1)
In the baseline case (where we do not adjust the weights), we get a high MCC score for the training set (0.97). The test MCC is 0.197, so there is a large gap between the train and test MCC. When sample weights are used, we get a test set MCC of 0.242 after tuning the parameters. The tuning parameters play a big role in the performance of the GBM. In Section III (D) we will look at the MCC trend over the number of estimators. This will allow us to adjust the complexity of the model. In Sections IV and V we will use the hyperopt and GridSearchCV modules to select the hyperparameters.

<h4>C) Feature importance</h4>

Here we compute the feature importance (using the baseline parameters) for the GBM. We also compare the results with the feature importance for the Random Forest.
params = {'n_estimators': 800, 'max_depth': 3, 'subsample': 0.8,
          'max_features': 'sqrt', 'learning_rate': 0.019,
          'min_samples_split': 2, 'random_state': SEED}

# GBM
clf = GradientBoostingClassifier(**params)
clf.fit(X_train, y_train)
gbm_importance = clf.feature_importances_
gbm_ranked_indices = np.argsort(clf.feature_importances_)[::-1]

# Random Forest
rf = RandomForestClassifier(n_estimators=100, random_state=7)
rf.fit(X_train, y_train)
rf_importance = rf.feature_importances_
rf_ranked_indices = np.argsort(rf.feature_importances_)[::-1]

# printing results in a table
importance_results = pd.DataFrame(index=range(1, 16),
    columns=pd.MultiIndex.from_product([['GBM', 'RF'], ['Feature #', 'Importance']]))
importance_results.index.name = 'Rank'
importance_results.loc[:, 'GBM'] = list(zip(gbm_ranked_indices[:15],
                                            gbm_importance[gbm_ranked_indices[:15]]))
importance_results.loc[:, 'RF'] = list(zip(rf_ranked_indices[:15],
                                           rf_importance[rf_ranked_indices[:15]]))
print importance_results
Roughly half of the top fifteen most important features for the GBM were also in the top fifteen computed for the Random Forest classifier. There are complex interactions between the parameters, so we do not expect the two classifiers to give the same results. In Section IV, where we optimize the hyperparameters, the nvar (number of variables) parameter will select variables according to their ranked importance.

<h4>D) Number of estimators -- model complexity</h4>

The GBM sequentially fits the function, and the number of steps (the number of estimators) is specified by the ntree parameter. At some stage, the model complexity increases to the point where we start overfitting the data. Here we will plot the MCC as a function of the number of estimators for the train set and test set to determine a good value for ntree. When <i>Accuracy</i> is used as the score, a plot of <i>Classification Error</i> (= <i>1 - Accuracy</i>) versus the <i>Number of Trees</i> can be used as a diagnostic to determine bias and overfitting. The term <i>(1 - MCC)</i>, however, is hard to interpret since MCC is calculated with all four terms of the confusion matrix. (I did make those plots out of curiosity, and the trends are similar to those seen in an "Error vs. no. of estimators" plot, an example of which is presented on Slide 3 of this Hastie/Tibshirani lecture.)
# function to compute MCC vs number of trees
def GBC_trend(weight):
    base_params = {'max_depth': 3, 'subsample': 0.8, 'max_features': 'sqrt',
                   'learning_rate': 0.019, 'min_samples_split': 2,
                   'random_state': SEED}
    mcc_train = []
    mcc_test = []
    for i in range(500, 1600, 100):
        params = dict(base_params)
        ntrees = {'n_estimators': i}
        params.update(ntrees)
        clf = GradientBoostingClassifier(**params)
        if weight:
            sample_weight = np.array([14 if i == -1 else 1 for i in y_train])
            clf.fit(X_train, y_train, sample_weight)
        else:
            clf.fit(X_train, y_train)
        y_pred_train = clf.predict(X_train)
        mcc_train.append(matthews_corrcoef(y_train, y_pred_train))
        y_pred_test = clf.predict(X_test)
        mcc_test.append(matthews_corrcoef(y_test, y_pred_test))
    return mcc_train, mcc_test

# plot
fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(7, 4))
plt.style.use('ggplot')

mcc_train, mcc_test = GBC_trend(0)
ax[0].plot(range(500, 1600, 100), mcc_train, color='magenta', label='train MCC')
ax[0].plot(range(500, 1600, 100), mcc_test, color='olive', label='test MCC')
ax[0].set_title('For default weights')

mcc_train, mcc_test = GBC_trend(1)
ax[1].plot(range(500, 1600, 100), mcc_train, color='magenta', label='train MCC')
ax[1].plot(range(500, 1600, 100), mcc_test, color='olive', label='test MCC')
ax[1].set_title('For sample weights')

ax[0].set_ylabel('MCC')
fig.text(0.5, 0.04, 'Boosting Iterations', ha='center', va='center')
plt.xlim(500, 1500)
plt.ylim(-0.1, 1.1)
plt.legend(bbox_to_anchor=(1.05, 0), loc='lower left');
Default weight option (left): After about 900 iterations, the GBM models all the training data perfectly. At the same time, there is a large gap in the holdout data classification results. This is a classic case of overfitting.

Sample weight option (right): This plot was constructed using the same parameters as the default weight plot. This may account for some of the bias (lower values for MCC) we see for the training data line. From III B, we can see that even when we tune the parameters for the sample weight model, the MCC for the training set is lower than for the default weight model. This gives us some indication that using sample weights adds bias.

<h3>IV. Hyperparameter optimization using Hyperopt</h3>

The python hyperopt module by James Bergstra uses Bayesian optimization (based on a tree-structured Parzen density estimate -- TPE) <a href="#ref1">[1]</a> to automatically select the best hyperparameters. This blog post provides examples of how this can be used with <i>sklearn</i> classifiers. We will use the <i>MCC</i> instead of the <i>Accuracy</i> for the cross-validation score. Note that since the hyperopt function fmin minimizes its objective, we need to negate the MCC to maximize its value.
# defining the MCC metric to assess cross-validation
def mcc_score(y_true, y_pred):
    mcc = matthews_corrcoef(y_true, y_pred)
    return mcc

mcc_scorer = make_scorer(mcc_score, greater_is_better=True)

# convert to DataFrame for easy indexing of number of variables (nvar)
X_train = pd.DataFrame(X_train)
X_test = pd.DataFrame(X_test)

from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

def hyperopt_train_test(params):
    weight = params['weight']
    X_ = X_train.loc[:, gbm_ranked_indices[:params['nvar']]]
    del params['nvar'], params['weight']
    clf = GradientBoostingClassifier(**params)
    if weight:
        sample_weight = np.array([14 if i == -1 else 1 for i in y_train])
        return cross_val_score(clf, X_, y_train, scoring=mcc_scorer,
                               fit_params={'sample_weight': sample_weight}).mean()
    else:
        return cross_val_score(clf, X_, y_train, scoring=mcc_scorer).mean()

space = {
    'n_estimators': hp.choice('n_estimators', range(700, 1300, 100)),
    'max_features': hp.choice('max_features', ['sqrt', 'log2']),
    'max_depth': hp.choice('max_depth', range(2, 5)),
    'subsample': hp.choice('subsample', [0.6, 0.7, 0.8, 0.9]),
    'min_samples_split': hp.choice('min_samples_split', [2, 3]),
    'learning_rate': hp.choice('learning_rate', [0.01, 0.015, 0.018, 0.02, 0.025]),
    'nvar': hp.choice('nvar', [200, 300, 400]),
    'weight': hp.choice('weight', [0, 1]),  # select sample_weight or default
    'random_state': SEED
}

def f(params):
    mcc = hyperopt_train_test(params)
    # with a negative sign, we maximize the MCC
    return {'loss': -mcc, 'status': STATUS_OK}
<h4>Run 1</h4>
start = time()
trials = Trials()
best = fmin(f, space, algo=tpe.suggest, max_evals=100, trials=trials)
print("HyperoptCV took %.2f seconds." % (time() - start))
print '\nBest parameters (by index):'
print best
We will apply the optimal hyperparameters selected via hyperopt above to the GBM classifier. The optimal parameters include nvar= 200 and the use of sample_weight.
params = {'n_estimators': 1200, 'max_depth': 3, 'subsample': 0.7,
          'max_features': 'log2', 'learning_rate': 0.018,
          'min_samples_split': 3, 'random_state': SEED}

train_ = X_train.loc[:, gbm_ranked_indices[:200]]
test_ = X_test.loc[:, gbm_ranked_indices[:200]]

clf = GradientBoostingClassifier(**params)
sample_weight = np.array([14 if i == -1 else 1 for i in y_train])
clf.fit(train_, y_train, sample_weight)
print_results(clf, train_, test_)
<h4>Run 2</h4> I repeated the run with hyperopt a few times and in each case the optimal parameters include nvar= 200 and the use of sample_weight. There is a great deal of variability among the remaining parameters selected across the runs. This is an example of a second run.
trials = Trials()
best = fmin(f, space, algo=tpe.suggest, max_evals=100, trials=trials)
print '\nBest parameters (by index):'
print best

params = {'n_estimators': 700, 'max_depth': 4, 'subsample': 0.8,
          'max_features': 'log2', 'learning_rate': 0.018,
          'min_samples_split': 2, 'random_state': SEED}

train_ = X_train.loc[:, gbm_ranked_indices[:200]]
test_ = X_test.loc[:, gbm_ranked_indices[:200]]

clf = GradientBoostingClassifier(**params)
sample_weight = np.array([14 if i == -1 else 1 for i in y_train])
clf.fit(train_, y_train, sample_weight)
print_results(clf, train_, test_)
<h4> Run 3 -- default weight</h4> For both Run 1 and Run 2, hyperopt selected the sample_weight option. This was the case for most of the runs though there were a few instances in which the default option was selected. This is an example:
start = time() trials = Trials() best = fmin(f, space, algo=tpe.suggest, max_evals=100, trials=trials) print("HyperoptCV took %.2f seconds." % (time() - start)) print('\nBest parameters (by index):') print(best) params = {'n_estimators': 1000, 'max_depth': 2, 'subsample': 0.9, 'max_features' : 'log2', 'learning_rate': 0.025, 'min_samples_split': 3, 'random_state': SEED} train_ = X_train.loc[:, gbm_ranked_indices[:200]] test_ = X_test.loc[:, gbm_ranked_indices[:200]] clf = GradientBoostingClassifier(**params) clf.fit(train_, y_train) print_results(clf, train_, test_)
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
The results from hyperopt (tested over ten or more runs) were quite variable, so no firm conclusions can be drawn. Seeding the random_state parameter should give reproducible results; since that was not the case here, this needs further investigation. <h3>V. Grid search with cross-validation</h3>
# cv function def GBMCV(weight): clf = GradientBoostingClassifier(random_state=SEED) param_grid = {"n_estimators": [800, 900, 1000, 1200], "max_depth": [2, 3], "subsample": [0.6, 0.7, 0.8], "min_samples_split": [2, 3], "max_features": ["sqrt", "log2"], "learning_rate": [0.01, 0.015, 0.018, 0.02, 0.025]} # run grid search if weight: sample_weight = np.array([14 if i == -1 else 1 for i in y_train]) grid_search = GridSearchCV(clf, param_grid=param_grid, scoring=mcc_scorer, \ fit_params={'sample_weight': sample_weight}) else: grid_search = GridSearchCV(clf, param_grid=param_grid, scoring=mcc_scorer) start = time() grid_search.fit(X_train, y_train) # print results print("GridSearchCV took %.2f seconds for %d candidate parameter settings." % (time() - start, len(grid_search.grid_scores_))) print('The best parameters:') print('{}\n'.format(grid_search.best_params_)) print('Results for model fitted with the best parameter:') y_true, y_pred = y_test, grid_search.predict(X_test) print(classification_report(y_true, y_pred)) print('The confusion matrix: ') cm = confusion_matrix(y_true, y_pred) print(cm) print('\nThe Matthews correlation coefficient: {0:4.3f}'.format(matthews_corrcoef(y_test, y_pred))) # CV with default weights print("CV with default weights:") GBMCV(0) # CV with sample weights GBMCV(1)
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
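The grid search above scores candidates with mcc_scorer, a Matthews-correlation scorer. As a self-contained sketch (pure NumPy, independent of the notebook's scorer object), MCC can be computed directly from the 2×2 confusion-matrix counts:

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for labels coded as -1 / 1."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == -1) & (y_pred == -1))
    fp = np.sum((y_true == -1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == -1))
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = np.array([1, 1, 1, -1, -1, -1])
y_pred = np.array([1, 1, -1, -1, -1, 1])
print(round(mcc(y_true, y_pred), 3))  # 0.333
```

MCC runs from -1 to 1 and, unlike accuracy, stays informative on imbalanced data like this set.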
Interact basics Write a print_sum function that prints the sum of its arguments a and b.
def print_sum(a, b): print(a + b)
assignments/assignment05/InteractEx01.ipynb
sraejones/phys202-2015-work
mit
Use the interact function to interact with the print_sum function. a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1 b should be an integer slider over the interval [-8, 8] with step sizes of 2.
w = interactive(print_sum, a = (-10.0,10.0,0.1), b = (-8, 8, 2)) display(w) w.result assert True # leave this for grading the print_sum exercise
assignments/assignment05/InteractEx01.ipynb
sraejones/phys202-2015-work
mit
Use the interact function to interact with the print_string function. s should be a textbox with the initial value "Hello World!". length should be a checkbox with an initial value of True.
w = interactive(print_string, s="Hello World!", length=True) w assert True # leave this for grading the print_string exercise
assignments/assignment05/InteractEx01.ipynb
sraejones/phys202-2015-work
mit
For this example, we will read in a reflectance tile in ENVI format. NEON provides an h5 plugin for ENVI
img = envi.open('../data/Hyperspectral/NEON_D02_SERC_DP3_368000_4306000_reflectance.hdr', '../data/Hyperspectral/NEON_D02_SERC_DP3_368000_4306000_reflectance.dat')
code/Python/remote-sensing/hyperspectral-data/classification_kmeans_pca_py.ipynb
mjones01/NEON-Data-Skills
agpl-3.0
c contains 5 groups of spectral curves with 360 bands (the # of bands we've kept after removing the water vapor windows and the last 10 noisy bands). Let's plot these spectral classes:
%matplotlib inline import pylab pylab.figure() for i in range(c.shape[0]): pylab.plot(c[i]) pylab.title('Spectral Classes from K-Means Clustering') pylab.xlabel('Bands (with Water Vapor Windows Removed)') pylab.ylabel('Reflectance') pylab.show() #%matplotlib notebook view = imshow(img_subset, bands=(58,34,19),stretch=0.01, classes=m) view.set_display_mode('overlay') view.class_alpha = 0.5 #set transparency view.show_data
code/Python/remote-sensing/hyperspectral-data/classification_kmeans_pca_py.ipynb
mjones01/NEON-Data-Skills
agpl-3.0
Downloading the atomic data
# the data is automatically downloaded download_atom_data('kurucz_cd23_chianti_H_He')
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Downloading the example file
!curl -O https://raw.githubusercontent.com/tardis-sn/tardis/master/docs/models/examples/tardis_example.yml
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Running the simulation (long output)
#TARDIS now uses the data in the data repo sim = run_tardis('tardis_example.yml')
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Plotting the Spectrum
%pylab inline spectrum = sim.runner.spectrum spectrum_virtual = sim.runner.spectrum_virtual spectrum_integrated = sim.runner.spectrum_integrated figure(figsize=(10,6)) plot(spectrum.wavelength, spectrum.luminosity_density_lambda, label='normal packets') plot(spectrum.wavelength, spectrum_virtual.luminosity_density_lambda, label='virtual packets') plot(spectrum.wavelength, spectrum_integrated.luminosity_density_lambda, label='formal integral') xlabel(r'Wavelength [$\AA$]') ylabel(r'Luminosity [erg/s/$\AA$]') legend() xlim(3000, 9000)
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Multiple concurrent RDataFrame runs If your analysis needs multiple RDataFrames to run (for example multiple dataset samples, data vs simulation etc.), the ROOT.RDF.RunGraphs function lets you trigger all of their event loops concurrently.
ROOT.EnableImplicitMT() treename1 = "myDataset" filename1 = "data/collections_dataset.root" treename2 = "dataset" filename2 = "data/example_file.root" df1 = ROOT.RDataFrame(treename1, filename1) df2 = ROOT.RDataFrame(treename2, filename2) h1 = df1.Histo1D("px") h2 = df2.Histo1D("a") ROOT.RDF.RunGraphs((h1, h2)) c = ROOT.TCanvas() h1.Draw() c.Draw() c = ROOT.TCanvas() h2.Draw() c.Draw()
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Distributed RDataFrame An RDataFrame analysis written in Python can be executed both locally - possibly in parallel on the cores of the machine - and distributedly by offloading computations to external resources, including Spark and Dask clusters. This feature is enabled by the architecture depicted below, which shows that RDataFrame computation graphs can be mapped to different kinds of resources via backends. In this notebook we will exercise the Spark backend, which divides an RDataFrame input dataset in logical ranges and submits computations for each of those ranges to Spark resources. <img src="images/DistRDF_architecture.png" alt="Distributed RDataFrame"> Create a Spark context In order to work with a Spark cluster we need a SparkContext object, which represents the connection to that cluster and allows to configure execution-related parameters (e.g. number of cores, memory). When running this notebook from SWAN, a SparkContext object is already created for us when connecting to the selected cluster via the graphical interface. Alternatively, we could create a SparkContext as described in the Spark documentation.
import pyspark sc = pyspark.SparkContext.getOrCreate()
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Create a ROOT dataframe We now create an RDataFrame based on the same dataset seen in the exercise rdataframe-dimuon. A Spark RDataFrame receives two extra parameters: the number of partitions to apply to the dataset (npartitions) and the SparkContext object (sparkcontext). Besides that detail, a Spark RDataFrame is not different from a local RDataFrame: the analysis presented in this notebook would not change if we wanted to execute it locally.
# Use a Spark RDataFrame RDataFrame = ROOT.RDF.Experimental.Distributed.Spark.RDataFrame df = RDataFrame("h42", "https://root.cern/files/h1big.root", npartitions=4, sparkcontext=sc)
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Run your analysis unchanged From now on, the rest of your application can be written exactly as we have seen with local RDataFrame. The goal of the distributed RDataFrame module is to support all the traditional RDataFrame operations (those that make sense in a distributed context at least). Currently only a subset of those operations is available; it can be found in the corresponding section of the documentation.
%%time df1 = df.Filter("nevent > 1") df2 = df1.Define("mpt","sqrt(xpt*xpt + ypt*ypt)") c = df2.Count() m = df2.Mean("mpt") print(f"Number of events after processing: {c.GetValue()}") print(f"Mean of column 'mpt': {m.GetValue()}")
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Now we'll fit the multiband periodogram model to this data. For more information on the model, refer to the VanderPlas and Ivezic paper mentioned above.
from gatspy.periodic import LombScargleMultiband model = LombScargleMultiband(Nterms_base=1, Nterms_band=0) model.fit(t, y, dy, filts) periods = np.linspace(period - 0.1, period + 0.1, 2000) power = model.periodogram(periods) plt.plot(periods, power, lw=1) plt.xlim(periods[0], periods[-1]);
examples/MultiBand.ipynb
nhuntwalker/gatspy
bsd-2-clause
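The best-fit period is typically read off as the argmax of the periodogram. A minimal sketch with a synthetic power array (no gatspy required; the Gaussian-bump toy data and its peak location are assumptions for illustration):

```python
import numpy as np

# Toy periodogram: a Gaussian bump centred on the true period
periods = np.linspace(0.1, 1.0, 901)      # grid with a step of 0.001
true_period = 0.55
power = np.exp(-((periods - true_period) / 0.01) ** 2)

best_period = periods[np.argmax(power)]   # period of maximum power
print(round(best_period, 3))  # 0.55
```

The same `periods[np.argmax(power)]` idiom applies directly to the `periods` and `power` arrays produced by `model.periodogram` above.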
We can see what the multiterm model looks like by plotting it over the data:
def plot_model(model, lcid): t, y, dy, filts = rrlyrae.get_lightcurve(lcid) model.fit(t, y, dy, filts) tfit = np.linspace(0, period, 1000) for filt in 'ugriz': mask = (filts == filt) eb = plt.errorbar(t[mask] % period, y[mask], dy[mask], fmt='.', label=filt) yfit = model.predict(tfit, filt, period=period) plt.plot(tfit, yfit, color=eb[0].get_color()) plt.gca().invert_yaxis() plt.legend(ncol=3, loc='upper left') plot_model(LombScargleMultiband(Nterms_base=1, Nterms_band=0), lcid)
examples/MultiBand.ipynb
nhuntwalker/gatspy
bsd-2-clause
If we'd like to do a higher-order multiterm model, we can simply adjust the number of terms in the base and band models:
plot_model(LombScargleMultiband(Nterms_base=4, Nterms_band=1), lcid)
examples/MultiBand.ipynb
nhuntwalker/gatspy
bsd-2-clause
Now we have the data loaded nicely into a Pandas dataframe and we can look at some of the basics of the data.
display('Number of rows: {}'.format(len(df))) display('Unique SSIDs: {}'.format(len(df['SSID'].unique()))) display('Unique MACs: {}'.format(len(df['MAC'].unique()))) display('Number of Auth Mode types: {}'.format(len(df['AuthMode'].unique()))) def auth_filter(x): if 'WPA2' in x: return 'WPA2' elif 'WPA' in x: return 'WPA' elif 'WEP' in x: return 'WEP' else: return 'OPEN' df['AuthMode'].apply(auth_filter).value_counts().plot(kind='barh')
JHU Wifi.ipynb
ThaWeatherman/jhu_wifi
mit
So there are a significant number of open networks, but the overall majority use WPA2. That's good for the University but not so great for attackers. Of course, there could be a way around that via WPS. How many networks use that?
def wps(x): if 'WPS' in x: return 'WPS' else: return 'Not WPS' df['AuthMode'].apply(wps).value_counts().plot(kind='barh')
JHU Wifi.ipynb
ThaWeatherman/jhu_wifi
mit
Over 500 networks use WPS! Using a tool like Reaver an attacker could easily breach those networks. This is just some basic insights into the data. We could look further into the different forms of WPA/WPA2 authentication, but for an attacker these insights are enough. Using the above function for extracting WPS networks, an attacker could determine the locations of those networks and mount an attack on each of them.
s = df['AuthMode'].apply(wps) wps_entries = df.loc[s[s == 'WPS'].index] wps_entries.head()
JHU Wifi.ipynb
ThaWeatherman/jhu_wifi
mit
Olympic Marathon Data <table> <tr> <td width="70%"> - Gold medal times for Olympic Marathon since 1896. - Marathons before 1924 didn’t have a standardised distance. - Present results using pace per km. - In 1904 Marathon was badly organised leading to very slow times. </td> <td width="30%"> <img class="" src="http://inverseprobability.com/talks/slides/../slides/diagrams/Stephen_Kiprotich.jpg" style="width:100%"> <small>Image from Wikimedia Commons <a href="http://bit.ly/16kMKHQ" class="uri">http://bit.ly/16kMKHQ</a></small> </td> </tr> </table> The first thing we will do is load a standard data set for regression modelling. The data consists of the pace of Olympic Gold Medal Marathon winners for the Olympics from 1896 to present. First we load in the data and plot.
import numpy as np import pods data = pods.datasets.olympic_marathon_men() x = data['X'] y = data['Y'] offset = y.mean() scale = np.sqrt(y.var()) import matplotlib.pyplot as plt import teaching_plots as plot import mlai xlim = (1875,2030) ylim = (2.5, 6.5) yhat = (y-offset)/scale fig, ax = plt.subplots(figsize=plot.big_wide_figsize) _ = ax.plot(x, y, 'r.',markersize=10) ax.set_xlabel('year', fontsize=20) ax.set_ylabel('pace min/km', fontsize=20) ax.set_xlim(xlim) ax.set_ylim(ylim) mlai.write_figure(filename='olympic-marathon.svg', directory='./datasets')
notebooks/pods/datasets/olympic-marathon.ipynb
sods/ods
bsd-3-clause
Interpreting the FT of an image An image can be understood as the superposition of two-dimensional harmonic functions (sines and cosines) of different frequencies and directions. The FT tells us which sines and cosines are needed (in terms of their frequency, direction and amplitude) to form the image. Let us first build an image corresponding to a sine in the horizontal direction
tam = 256 # matrix size dx = 0.01 # resolution (m/pixel) x = np.arange(-dx*tam/2,dx*tam/2,dx) # spatial coordinates X , Y = np.meshgrid(x,x) # two-dimensional grid A1 = 1. # amplitude in arbitrary units f1 = 1. # spatial frequency (1/m) g1 = A1*np.sin(2*np.pi*f1*X) # image in the spatial domain ftg1 = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(g1)))*dx**2 # Fourier transform, frequency domain plt.figure(figsize=(15,15)) plt.subplot(1,2,1) plt.imshow(abs(g1), cmap='gray') plt.title('Spatial domain') plt.subplot(1,2,2) plt.imshow(abs(ftg1), cmap='gray') plt.title('Frequency-domain amplitude')
Fourier/Tarea_Fourier/FT-2D.ipynb
cosmolejo/Fisica-Experimental-3
gpl-3.0
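To label the frequency axis in physical units (cycles per metre), `np.fft.fftfreq` gives the coordinates of the deltas; applied per dimension it yields the axes of the 2D frequency domain shown above. A one-dimensional sketch (the frequency is chosen to fall exactly on a bin, an assumption of this toy setup):

```python
import numpy as np

N, dx = 100, 0.01                    # samples and resolution (m/pixel)
x = np.arange(N) * dx                # spatial coordinates over 1 m
f1 = 5.0                             # spatial frequency (1/m), exactly on a bin
g = np.sin(2 * np.pi * f1 * x)

F = np.fft.fft(g)
freqs = np.fft.fftfreq(N, d=dx)      # frequency axis in cycles per metre
peak = freqs[np.argmax(np.abs(F[:N // 2]))]  # positive-frequency peak
print(peak)  # 5.0
```

The peak lands at exactly f1 because the sine completes a whole number of cycles over the sampled window; otherwise the power leaks into neighbouring bins.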
Note that only two (approximate) Dirac deltas appear in the frequency domain. By analogy with the one-dimensional case, these two points correspond to the frequency of the sine in the spatial domain. Note also that the points appear along the horizontal direction, indicating that the direction of the two-dimensional sine is horizontal. Finally, observe that the origin sits at the centre of the image (for square matrices this is not entirely exact), while position (0,0) of the matrix is at the top-left corner. Now, let us plot the superposition of a horizontal sine and a vertical sine of the same frequency
tam = 256 # matrix size dx = 0.01 # resolution (m/pixel) x = np.arange(-dx*tam/2,dx*tam/2,dx) # spatial coordinates X , Y = np.meshgrid(x,x) # two-dimensional grid A1 = 1. # amplitude in arbitrary units f1 = 1. # spatial frequency (1/m) gx = A1*np.sin(2*np.pi*f1*X) # sine in the horizontal direction gy = A1*np.sin(2*np.pi*f1*Y) # sine in the vertical direction g = gx + gy # superposition ftg = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(g)))*dx**2 # Fourier transform, frequency domain plt.figure(figsize=(15,15)) plt.subplot(1,2,1) plt.imshow(g, cmap='gray') plt.title('Spatial domain') plt.subplot(1,2,2) plt.imshow(abs(ftg), cmap='gray') plt.title('Frequency-domain amplitude')
Fourier/Tarea_Fourier/FT-2D.ipynb
cosmolejo/Fisica-Experimental-3
gpl-3.0
Observe that deltas now appear along the vertical direction, accounting for the sine in the vertical direction. Let us do one last example, including a cosine in the diagonal direction whose amplitude is twice that of the other sines. In addition, the sine in the horizontal direction has a frequency twice that of the cosine, and the sine in the vertical direction four times greater.
tam = 256 # matrix size dx = 0.01 # resolution (m/pixel) x = np.arange(-dx*tam/2,dx*tam/2,dx) # spatial coordinates X , Y = np.meshgrid(x,x) # two-dimensional grid A1 = 1. # amplitude in arbitrary units f1 = 1. # spatial frequency (1/m) gx = A1*np.sin(2*np.pi*2*f1*X) # sine in the horizontal direction gy = A1*np.sin(2*np.pi*4*f1*Y) # sine in the vertical direction gd = 2*A1*np.cos(2*np.pi*f1*(X + Y)) # cosine in the diagonal direction g = gx + gy + gd # superposition ftg = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(g)))*dx**2 # Fourier transform, frequency domain plt.figure(figsize=(15,15)) plt.subplot(1,2,1) plt.imshow(g, cmap='gray') plt.title('Spatial domain') plt.subplot(1,2,2) plt.imshow(abs(ftg), cmap='gray') plt.title('Frequency-domain amplitude')
Fourier/Tarea_Fourier/FT-2D.ipynb
cosmolejo/Fisica-Experimental-3
gpl-3.0
SVM Classification Now try classification with SVM.
from sklearn.svm import SVC svm = SVC(random_state=42) svm.fit(Xtrain, ytrain) ypredSVM = svm.predict(Xtest) print(classification_report(ytest, ypredSVM, target_names=['QSOs', 'stars']))
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
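"Completeness" and "efficiency" below are the recall and precision of the quasar class, which can be read directly off the confusion-matrix counts. A minimal sketch with made-up labels (0 = QSO, 1 = star, matching the notebook's convention; the toy arrays are illustrative, not the survey data):

```python
import numpy as np

# completeness = recall = TP / (TP + FN); efficiency = precision = TP / (TP + FP)
y_true = np.array([0, 0, 0, 0, 1, 1])   # 0 = QSO, 1 = star
y_pred = np.array([0, 0, 0, 1, 1, 1])

tp = np.sum((y_true == 0) & (y_pred == 0))  # QSOs recovered as QSOs
fn = np.sum((y_true == 0) & (y_pred == 1))  # QSOs missed
fp = np.sum((y_true == 1) & (y_pred == 0))  # stars contaminating the sample
completeness = tp / (tp + fn)
efficiency = tp / (tp + fp)
print(completeness, efficiency)  # 0.75 1.0
```

These are the same numbers classification_report prints as recall and precision for the 'QSOs' row.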
Pretty good. 81% completeness and 89% efficiency. Do it again with scaled data to see if that makes any difference. It doesn't seem to for colors alone, but might for other attributes?
from sklearn.svm import SVC svm = SVC(random_state=42) svm.fit(XStrain, yStrain) ySpredSVM = svm.predict(XStest) print(classification_report(yStest, ySpredSVM, target_names=['QSOs', 'stars'])) ypredCVSVM = cross_val_predict(svm, XS, y) data['ypred'] = ypredCVSVM qq = ((data['shenlabel']==0) & (data['ypred']==0)) ss = ((data['shenlabel']==1) & (data['ypred']==1)) qs = ((data['shenlabel']==0) & (data['ypred']==1)) sq = ((data['shenlabel']==1) & (data['ypred']==0)) dataqq = data[qq] datass = data[ss] dataqs = data[qs] datasq = data[sq] print len(dataqq), "quasars selected as quasars" print len(dataqs), "quasars selected as stars" print len(datasq), "stars selected as quasars" fig = plt.figure(figsize=(6, 6)) bins=np.linspace(0,5.5,100) plt.hist(dataqq['zspec'],bins=bins,label='qq',color='b',alpha=0.5) plt.hist(dataqs['zspec'],bins=bins,label='qs',color='r',alpha=0.5) plt.hist(datasq['zspec'],bins=bins,label='sq',color='g',alpha=0.5) #plt.xlim([-4,8]) #plt.ylim([-1.5,1.5]) plt.legend() plt.xlabel('zspec') plt.ylabel('N')
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
Random Forest Classification Now we'll try a DecisionTree, a RandomForest, and an ExtraTrees classifier. Note that n_jobs=-1 is supposed to let it use multiple processors when available, but I'm honestly not sure how that works (and I'm also not convinced it isn't causing problems, as sometimes when I use it I get a warning that it is setting n_jobs=1 instead).
# Random Forests, etc. from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import ExtraTreesClassifier from sklearn.tree import DecisionTreeClassifier clfDTC = DecisionTreeClassifier(max_depth=None, min_samples_split=2, random_state=42) clfRFC = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=42, n_jobs=-1) clfETC = ExtraTreesClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=42) clfDTC.fit(Xtrain, ytrain) clfRFC.fit(Xtrain, ytrain) clfETC.fit(Xtrain, ytrain) ypredDTC = clfDTC.predict(Xtest) ypredRFC = clfRFC.predict(Xtest) ypredETC = clfETC.predict(Xtest) print(clfDTC.feature_importances_) print(clfRFC.feature_importances_) print(clfETC.feature_importances_) print(classification_report(ytest, ypredDTC, target_names=['QSOs', 'stars'])) print(classification_report(ytest, ypredRFC, target_names=['QSOs', 'stars'])) print(classification_report(ytest, ypredETC, target_names=['QSOs', 'stars'])) #Decision tree on scaled data clfDTC = DecisionTreeClassifier(max_depth=None, min_samples_split=2, random_state=42) clfRFC = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=42, n_jobs=-1) clfETC = ExtraTreesClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=42) clfDTC.fit(XStrain, yStrain) clfRFC.fit(XStrain, yStrain) clfETC.fit(XStrain, yStrain) ypredDTC = clfDTC.predict(XStest) ypredRFC = clfRFC.predict(XStest) ypredETC = clfETC.predict(XStest) print(clfDTC.feature_importances_) print(clfRFC.feature_importances_) print(clfETC.feature_importances_) print(classification_report(yStest, ypredDTC, target_names=['QSOs', 'stars'])) print(classification_report(yStest, ypredRFC, target_names=['QSOs', 'stars'])) print(classification_report(yStest, ypredETC, target_names=['QSOs', 'stars'])) ypredCVRFC = cross_val_predict(clfRFC, XS, y) data['ypred'] = ypredCVRFC qq = ((data['shenlabel']==0) & (data['ypred']==0)) ss = ((data['shenlabel']==1) & (data['ypred']==1)) qs = ((data['shenlabel']==0) & (data['ypred']==1)) sq = ((data['shenlabel']==1) & (data['ypred']==0)) dataqq = data[qq] datass = data[ss] dataqs = data[qs] datasq = data[sq] print(len(dataqq), "quasars selected as quasars") print(len(dataqs), "quasars selected as stars") print(len(datasq), "stars selected as quasars") fig = plt.figure(figsize=(6, 6)) bins=np.linspace(0,5.5,100) plt.hist(dataqq['zspec'],bins=bins,label='qq',color='b',alpha=0.5) plt.hist(dataqs['zspec'],bins=bins,label='qs',color='r',alpha=0.5) plt.hist(datasq['zspec'],bins=bins,label='sq',color='g',alpha=0.5) #plt.xlim([-4,8]) #plt.ylim([-1.5,1.5]) plt.legend() plt.xlabel('zspec') plt.ylabel('N')
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
Bagging Now we'll try a bagging classifier, based on K Nearest Neighbors. I did some playing around with max_samples and max_features (both of which run from 0 to 1) and found 0.5 and 1.0 to work best. Note that you have to give 1.0 in decimal otherwise it takes it to be 1 feature instead of 100% of them.
# Bagging from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier bagging = BaggingClassifier(KNeighborsClassifier(), max_samples=0.5, max_features=1.0, random_state=42, n_jobs=-1) bagging.fit(Xtrain, ytrain) ypredBag = bagging.predict(Xtest) print(classification_report(ytest, ypredBag, target_names=['QSOs', 'stars'])) # Bagging from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier bagging = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), max_samples=0.5, max_features=1.0, random_state=42, n_jobs=-1) bagging.fit(Xtrain, ytrain) ypredBag = bagging.predict(Xtest) print("7 neighbors") print(classification_report(ytest, ypredBag, target_names=['QSOs', 'stars']))
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
This seems to work better than the RandomForest. Might be worth trying to optimize the parameters for this. First try n_neighbors=7. The default is 5. Now do the same with the scaled data.
# Bagging Scaled data; 7 neighbors from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier bagging = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), max_samples=0.5, max_features=1.0, random_state=42, n_jobs=-1) bagging.fit(XStrain, yStrain) ypredBag = bagging.predict(XStest) print(classification_report(yStest, ypredBag, target_names=['QSOs', 'stars']))
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
Overall: 83% Completeness and 85% Efficiency.
data['ypred'] = ypredCVBAG qq = ((data['labels']==0) & (data['ypred']==0)) ss = ((data['labels']==1) & (data['ypred']==1)) qs = ((data['labels']==0) & (data['ypred']==1)) sq = ((data['labels']==1) & (data['ypred']==0)) dataqq = data[qq] datass = data[ss] dataqs = data[qs] datasq = data[sq] print(len(dataqq), "quasars selected as quasars") print(len(dataqs), "quasars selected as stars") print(len(datasq), "stars selected as quasars") fig = plt.figure(figsize=(6, 6)) bins=np.linspace(0,5.5,100) plt.hist(dataqq['zspec'],bins=bins,label='qq',color='b',alpha=0.5) plt.hist(dataqs['zspec'],bins=bins,label='qs',color='r',alpha=0.5) plt.hist(datasq['zspec'],bins=bins,label='sq',color='g',alpha=0.5) #plt.xlim([-4,8]) #plt.ylim([-1.5,1.5]) plt.legend() plt.xlabel('zspec') plt.ylabel('N')
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
Produce the plots:
# define figure and axes fig = plt.figure(figsize=(15,5)) ax0 = fig.add_subplot(131) ax1 = fig.add_subplot(132) ax2 = fig.add_subplot(133) # figure A: predicted probabilities vs. empirical probs hist, bin_edges = np.histogram(X,bins=100) p = [np.sum(y[np.where((X>=bin_edges[i]) & (X<bin_edges[i+1]))[0]])/np.max([hist[i],1]) for i in np.arange(len(bin_edges)-1)] bar_pos = np.arange(len(p)) bar_width = np.diff(bin_edges) ax0.bar(bin_edges[0:-1], p, width=bar_width, align='edge', alpha=0.5) r = np.arange(X.min(),X.max(),.1) s = 1/(1+np.exp(-cofs[0]*r)) ax0.plot(r,s,'r') ax0.set_xlabel('Scaled rank difference',fontsize=12) ax0.set_ylabel('Probability that higher ranked wins',fontsize=12) ax0.set_title('Logistic fit to empirical probabilities',fontsize=12) ax0.legend(['Logistic probability curve','Empirical probability hist.']) # figure B: probabilities predicted by odds market ProbW = 1/training.PSW ProbL = 1/training.PSL idx = (training.winner_rank_points>training.loser_rank_points) odds_prob=np.where(idx,ProbW,ProbL) t = pd.DataFrame({'X':X,'odds_prob':odds_prob}) ts = t.sort_values('X') ax1.plot(ts['X'],ts['odds_prob'],'.b') ax1.plot(r,s,'r') ax1.set_xlabel('Scaled rank difference',fontsize=12) ax1.set_ylabel('Probability higher ranked wins',fontsize=12) ax1.set_title('Probabilities implied by odds market.',fontsize=12) ax1.legend(['Odds market probabilities','Logistic probability curve']) # Fig C: variance in odds probabilities as a function of rank difference x_odds = ts['X'].values.reshape(len(ts),-1) y_odds = ts['odds_prob'].values hist, bin_edges = np.histogram(x_odds,bins=10) stds = [np.std(y_odds[np.where((X>=bin_edges[i]) & (X<bin_edges[i+1]))]) for i in np.arange(len(bin_edges)-1)] reg = lm.LinearRegression() reg.fit(bin_edges[0:-1].reshape(10,1),stds) yv=reg.predict(bin_edges[0:-1].reshape(10,1)) ax2.plot(bin_edges[0:-1],stds,'*b') ax2.plot(bin_edges[0:-1],yv,'r') ax2.set_xlabel('Scaled rank difference',fontsize=12) ax2.set_ylabel('Stdev of market prob.',fontsize=12) ax2.set_title('Trends in stdev of implied probabilities',fontsize=12) ax2.legend(['Stdev of binned market-probs.','Regression line'])
results/DI_plot1.ipynb
carltoews/tennis
gpl-3.0
Data augmentation for images
def pre_process_image(image, training): # This function takes a single image as input, # and a boolean whether to build the training or testing graph. if training: # Randomly crop the input image. image = tf.random_crop(image, size=[img_size_cropped, img_size_cropped, num_channels]) # Randomly flip the image horizontally. image = tf.image.random_flip_left_right(image) # Randomly adjust hue, contrast and saturation. image = tf.image.random_hue(image, max_delta=0.05) image = tf.image.random_contrast(image, lower=0.3, upper=1.0) image = tf.image.random_brightness(image, max_delta=0.2) image = tf.image.random_saturation(image, lower=0.0, upper=2.0) # Some of these functions may overflow and result in pixel # values beyond the [0, 1] range. It is unclear from the # documentation of TensorFlow whether this is # intended. A simple solution is to limit the range. image = tf.minimum(image, 1.0) image = tf.maximum(image, 0.0) else: # Crop the input image around the centre so it is the same # size as images that are randomly cropped during training. image = tf.image.resize_image_with_crop_or_pad(image, target_height=img_size_cropped, target_width=img_size_cropped) return image
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
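The same augmentation pipeline (random crop, random horizontal flip, photometric jitter, then clipping to [0, 1]) can be sketched framework-free in NumPy; the crop size and jitter range below are illustrative assumptions, not the notebook's exact values:

```python
import numpy as np

def augment(image, crop, rng):
    """Random crop + horizontal flip + brightness jitter, clipped to [0, 1]."""
    h, w, _ = image.shape
    top = rng.integers(0, h - crop + 1)      # random crop origin
    left = rng.integers(0, w - crop + 1)
    out = image[top:top + crop, left:left + crop, :]
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                # random horizontal flip
    out = out + rng.uniform(-0.2, 0.2)       # brightness shift (may overflow)
    return np.clip(out, 0.0, 1.0)            # keep pixels in the valid range

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
aug = augment(img, 24, rng)
print(aug.shape)  # (24, 24, 3)
```

As in the TensorFlow version above, the final clip is what guards against the jitter pushing pixel values outside [0, 1].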
Creating Main Processing https://github.com/google/prettytensor/blob/master/prettytensor/pretty_tensor_image_methods.py
def main_network(images, training): images = tf.cast(images, tf.float32) x_pretty = pt.wrap(images) if training: phase = pt.Phase.train else: phase = pt.Phase.infer # Can't wrap it to pretty tensor because # 'Layer' object has no attribute 'local_response_normalization' normalize = lambda x: pt.wrap( tf.nn.local_response_normalization(x, depth_radius=5.0, bias=2.0, alpha=1e-4, beta=0.75)) with pt.defaults_scope(activation_fn=tf.nn.relu, phase=phase): layers = [] for i in ["up", "down"]: first_conv = x_pretty.\ conv2d(kernel=11, depth=48, name='conv_1_' + i) first_conv_norm = normalize(first_conv) first_conv_norm_pool = first_conv_norm.\ max_pool(kernel=3, stride=2, edges='VALID', name='pool_1' + i) second_conv = first_conv_norm_pool.\ conv2d(kernel=5, depth=128, bias=tf.ones_initializer(), name='conv_2_' + i) second_conv_norm = normalize(second_conv) second_conv_norm_pooled = pt.wrap(second_conv_norm).\ max_pool(kernel=3, stride=2, edges='VALID', name='pool_2_' + i) layers.append(second_conv_norm_pooled) first_interlayer = pt.wrap(tf.concat([layers[-2], layers[-1]], axis=0)) for i in ["up", "down"]: cur_layer = first_interlayer.\ conv2d(kernel=3, depth=192, name='conv_3_' + i).\ conv2d(kernel=3, depth=192, bias=tf.ones_initializer(), name='conv_4_' + i).\ conv2d(kernel=3, depth=128, bias=tf.ones_initializer(), name='conv_5_' + i).\ max_pool(kernel=3, stride=2, edges='VALID', name='pool_3_' + i) layers.append(cur_layer) second_interlayer = pt.wrap(tf.concat([layers[-2], layers[-1]], axis=1)) y_pred, loss = second_interlayer.\ flatten().\ fully_connected(2048, name='fully_conn_1').\ dropout(0.5, name='dropout_1').\ fully_connected(2048, name='fully_conn_2').\ dropout(0.5, name='dropout_2').\ fully_connected(10, name='fully_conn_3').\ softmax_classifier(num_classes=num_classes, labels=y_true) return y_pred, loss main_network(images=images, training=True)
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
weights_conv1 = get_weights_variable(layer_name='conv1_1') weights_conv2 = get_weights_variable(layer_name='conv2_2') with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(weights_conv1).shape) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(weights_conv2).shape)
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Get the output of the convoluational layers so we can plot them later.
output_conv1 = get_layer_output(layer_name='conv1_1') output_conv2 = get_layer_output(layer_name='conv2_2')
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Restore or initialize variables Training this neural network may take a long time, especially if you do not have a GPU. We therefore save checkpoints during training so we can continue training at another time (e.g. during the night), and also for performing analysis later without having to train the neural network every time we want to use it. If you want to restart the training of the neural network, you have to delete the checkpoints first. This is the directory used for the checkpoints.
save_dir = 'checkpoints_alex_net/'
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Function for selecting a random batch of images from the training-set.
def random_batch(): num_images = len(images_train) # Create a random index. idx = np.random.choice(num_images, size=train_batch_size, replace=False) # Use the random index to select random images and labels. x_batch = images_train[idx, :, :, :] y_batch = labels_train[idx, :] return x_batch, y_batch
Optimization

The progress is printed every 100 iterations. A checkpoint is saved every 1000 iterations and also after the last iteration.
def optimize(num_iterations): start_time = time.time() for i in range(num_iterations): # Get a batch of training examples. # x_batch now holds a batch of images and # y_true_batch are the true labels for those images. x_batch, y_true_batch = random_batch() # Put the batch into a dict with the proper names # for placeholder variables in the TensorFlow graph. feed_dict_train = {x: x_batch, y_true: y_true_batch} # Run the optimizer using this batch of training data. # TensorFlow assigns the variables in feed_dict_train # to the placeholder variables and then runs the optimizer. # We also want to retrieve the global_step counter. i_global, _ = session.run([global_step, optimizer], feed_dict=feed_dict_train) # Print status to screen every 100 iterations (and last). if (i_global % 100 == 0) or (i == num_iterations - 1): # Calculate the accuracy on the training-batch. batch_acc = session.run(accuracy, feed_dict=feed_dict_train) # Print status. msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}" print(msg.format(i_global, batch_acc)) # Save a checkpoint to disk every 1000 iterations (and last). if (i_global % 1000 == 0) or (i == num_iterations - 1): # Save all variables of the TensorFlow graph to a # checkpoint. Append the global_step counter # to the filename so we save the last several checkpoints. saver.save(session, save_path=save_path, global_step=global_step) print("Saved checkpoint.") end_time = time.time() time_dif = end_time - start_time print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Calculating classifications

This function calculates the predicted classes of images and also returns a boolean array indicating whether each image is correctly classified. The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes, you can try lowering the batch-size.
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256

def predict_cls(images, labels, cls_true):
    num_images = len(images)

    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_images, dtype=int)

    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.

    # The starting index for the next batch is denoted i.
    i = 0

    while i < num_images:
        # The ending index for the next batch is denoted j.
        j = min(i + batch_size, num_images)

        # Create a feed-dict with the images and labels
        # between index i and j.
        feed_dict = {x: images[i:j, :],
                     y_true: labels[i:j, :]}

        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)

        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j

    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)

    return correct, cls_pred
Helper-function for plotting convolutional weights
def plot_conv_weights(weights, input_channel=0):
    # Assume weights are TensorFlow ops for 4-dim variables
    # e.g. weights_conv1 or weights_conv2.

    # Retrieve the values of the weight-variables from TensorFlow.
    # A feed-dict is not necessary because nothing is calculated.
    w = session.run(weights)

    # Print statistics for the weights.
    print("Min: {0:.5f}, Max: {1:.5f}".format(w.min(), w.max()))
    print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))

    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(w)
    w_max = np.max(w)
    abs_max = max(abs(w_min), abs(w_max))

    # Number of filters used in the conv. layer.
    num_filters = w.shape[3]

    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot all the filter-weights.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filter-weights.
        if i < num_filters:
            # Get the weights for the i'th filter of the input channel.
            # The format of this 4-dim tensor is
            # [filter_height, filter_width, in_channels, out_channels].
            img = w[:, :, input_channel, i]

            # Plot image.
            ax.imshow(img, vmin=-abs_max, vmax=abs_max,
                      interpolation='nearest', cmap='seismic')

        ax.set_xticks([])
        ax.set_yticks([])

    plt.show()
Helper-function for plotting the output of convolutional layers
def plot_layer_output(layer_output, image):
    # Assume layer_output is a 4-dim tensor
    # e.g. output_conv1 or output_conv2.

    # Create a feed-dict which holds the single input image.
    # Note that TensorFlow needs a list of images,
    # so we just create a list with this one image.
    feed_dict = {x: [image]}

    # Retrieve the output of the layer after inputting this image.
    values = session.run(layer_output, feed_dict=feed_dict)

    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    values_min = np.min(values)
    values_max = np.max(values)

    # Number of image channels output by the conv. layer.
    num_images = values.shape[3]

    # Number of grid-cells to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_images))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot the output channels of the layer.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid image-channels.
        if i < num_images:
            # Get the images for the i'th output channel.
            img = values[0, :, :, i]

            # Plot image.
            ax.imshow(img, vmin=values_min, vmax=values_max,
                      interpolation='nearest', cmap='binary')

        ax.set_xticks([])
        ax.set_yticks([])

    plt.show()
Examples of distorted input images

In order to artificially inflate the number of images available for training, the neural network uses pre-processing with random distortions of the input images. This should hopefully make the neural network more flexible at recognizing and classifying images. This is a helper-function for plotting distorted input images.
def plot_distorted_image(image, cls_true): # Repeat the input image 9 times. image_duplicates = np.repeat(image[np.newaxis, :, :, :], 9, axis=0) # Create a feed-dict for TensorFlow. feed_dict = {x: image_duplicates} # Calculate only the pre-processing of the TensorFlow graph # which distorts the images in the feed-dict. result = session.run(distorted_images, feed_dict=feed_dict) plot_images(images=result, cls_true=np.repeat(cls_true, 9))
Perform optimization
tf.summary.FileWriter('./graphs', session.graph)
optimize(num_iterations=10000)
<div id='intro' /> Introduction Back to TOC In our last Jupyter Notebook we learned how to solve 1D equations. Now we'll go to the next level and learn how to solve not just <i>one</i> equation, but a <i>system</i> of linear equations: a set of $n$ equations involving $n$ variables, all of which must be satisfied at the same time. You probably know how to solve small 2D systems with methods such as substitution and reduction, but in practical real-life situations you are very likely to face problems of much larger dimension. As usual, we'll present some useful methods for solving systems of linear equations below. <div id='DM' /> Direct Methods Back to TOC Firstly, we will study direct methods. These compute the exact solution of the system (hence the name <i>direct</i>), limited only by the loss of numerical precision due to the arithmetic operations performed by the computer. Their counterpart is the iterative methods, which compute an approximate solution that converges iteratively toward the exact one. <div id='lu' /> LU decomposition Back to TOC Given a square, non-singular matrix $A \in \mathbb{R}^{n \times n}$, the goal of this method is to find a factorization $A = L U$ where $L,U \in \mathbb{R}^{n \times n}$ are lower and upper triangular matrices respectively. The algorithm is essentially a modified version of Gaussian Elimination: it iterates through the first $n-1$ columns, zeroing out all the entries below the main diagonal by means of row operations.
def lu_decomp(A, show=False, print_precision=2): N,_ = A.shape U = np.copy(A) L = np.identity(N) if show: print('Initial matrices') print('L = '); print(np.array_str(L, precision=print_precision, suppress_small=True)) print('U = '); print(np.array_str(U, precision=print_precision, suppress_small=True)) print('----------------------------------------') #iterating through columns for j in range(N-1): #iterating through rows for i in range(j+1,N): L[i,j] = U[i,j]/U[j,j] U[i] -= L[i,j]*U[j] if show: print('L = '); print(np.array_str(L, precision=print_precision, suppress_small=True)) print('U = '); print(np.array_str(U, precision=print_precision, suppress_small=True)) print('----------------------------------------') return L,U
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
Once the decomposition is done, solving a linear system like $A x = b$ is straightforward: $$A x = b \rightarrow L U x = b \ \ \text{ if we set } \ \ U x = c \rightarrow L c = b \ \ \text{ (solve for c) } \ \rightarrow U x = c$$ and as you might know, solving lower and upper triangular systems can be easily performed by forward-substitution and back-substitution respectively.
""" Solves a linear system A x = b, where A is a triangular (upper or lower) matrix """ def solve_triangular(A, b, upper=True): n = b.shape[0] x = np.zeros_like(b) if upper==True: #perform back-substitution x[-1] = (1./A[-1,-1]) * b[-1] for i in range(n-2, -1, -1): x[i] = (1./A[i,i]) * (b[i] - np.sum(A[i,i+1:] * x[i+1:])) else: #perform forward-substitution x[0] = (1./A[0,0]) * b[0] for i in range(1,n): x[i] = (1./A[i,i]) * (b[i] - np.sum(A[i,:i] * x[:i])) return x def solve_lu(A, b, show=False, print_precision=2): L,U = lu_decomp(A, show, print_precision=print_precision) # L.c = b with c = U.x c = solve_triangular(L, b, upper=False) x = solve_triangular(U, c) return x
which is a very good result! This method has two important facts to be noted: Computing the LU decomposition requires $2n^3/3$ floating point operations. Can you check that? When computing the LU decomposition you can see the instruction L[i,j] = U[i,j]/U[j,j]. Here we divide an entry below the main diagonal by the pivot value. What happens if the pivot equals 0? How can we prevent that? Answer: PALU. <div id='palu' /> PALU decomposition Back to TOC As you might've noted previously, LU has a problem when a pivot has the value of $0$. To handle this problem, we add row permutations to the original LU algorithm. The procedure is as follows: When visiting row $j$, search for $\max(|a_{j,j}|,\ |a_{j+1,j}|,\ \ldots,\ |a_{N-1,j}|,\ |a_{N,j}|)$ (the maximum between the pivot and the entries below it). If that maximum is $|a_{k,j}| \neq |a_{j,j}|$, permute rows $j$ and $k$, making $a_{k,j}$ the new pivot. To keep track of all the permutations performed, we use the permutation matrix $P$. It's initially an identity matrix which permutes its rows in the same way the algorithm does on the resulting matrix.
#permutation between rows i and j on matrix A def row_perm(A, i, j): tmp = np.copy(A[i]) A[i] = A[j] A[j] = tmp def palu_decomp(A, show=False, print_precision=2): N,_ = A.shape P = np.identity(N) L = np.zeros((N,N)) U = np.copy(A) if show: print('Initial matrices') print('P = '); print(np.array_str(P, precision=print_precision, suppress_small=True)) print('L = '); print(np.array_str(L+np.eye(N), precision=print_precision, suppress_small=True)) print('U = '); print(np.array_str(U, precision=print_precision, suppress_small=True)) print('----------------------------------------') #iterating through columns for j in range(N-1): #determine the new pivot p_index = np.argmax(np.abs(U[j:,j])) if p_index != 0: row_perm(P, j, j+p_index) row_perm(U, j, j+p_index) row_perm(L, j, j+p_index) if show: print('A permutation has been made') print('P = '); print(np.array_str(P, precision=print_precision, suppress_small=True)) print('L = '); print(np.array_str(L+np.eye(N), precision=print_precision, suppress_small=True)) print('U = '); print(np.array_str(U, precision=print_precision, suppress_small=True)) print('----------------------------------------') #iterating through rows for i in range(j+1,N): L[i,j] = U[i,j]/U[j,j] U[i] -= L[i,j]*U[j] if show: print('P = '); print(np.array_str(P, precision=print_precision, suppress_small=True)) print('L = '); print(np.array_str(L+np.eye(N), precision=print_precision, suppress_small=True)) print('U = '); print(np.array_str(U, precision=print_precision, suppress_small=True)) print('----------------------------------------') np.fill_diagonal(L,1) return P,L,U
The procedure to solve the system $Ax=b$ remains almost the same. We have to add the efect of the permutation matrix $P$: $$A x = b \rightarrow P A x = P b \rightarrow L U x = b' \ \ \text{ if we set } \ \ U x = c \rightarrow L c = b' \ \ \text{ (solve for c) } \ \rightarrow U x = c$$
def solve_palu(A, b, show=False, print_precision=2): P,L,U = palu_decomp(A, show, print_precision=print_precision) # A.x = b -> P.A.x = P.b = b' -> L.U.x = b' b = np.dot(P,b) # L.c = b' with c = U.x c = solve_triangular(L, b, upper=False) x = solve_triangular(U, c) return x
Let's test this new method against the LU and NumPy solvers
palu_sol = solve_palu(A, b, show=True, print_precision=4) np.linalg.norm(palu_sol - lu_sol) np.linalg.norm(palu_sol - np_sol) P,L,U = palu_decomp(A) print('P: ',P) print('L: ',L) print('U: ',U)
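To make the zero-pivot issue concrete, here is a matrix on which plain LU fails at the very first step, while a pivoted factorization handles it without trouble. SciPy's `scipy.linalg.lu` is used here (rather than the functions defined above) so the snippet is self-contained:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0., 1.],
              [2., 3.]])
# Plain LU would compute L[1,0] = A[1,0] / A[0,0] and divide by zero.
# Partial pivoting first swaps the rows so the largest entry in the
# column becomes the pivot.
P, L, U = lu(A)    # SciPy's convention: A = P @ L @ U
```

After the row swap, the pivot is $2$ instead of $0$ and the elimination proceeds normally.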
Here are some questions about PALU: 1. How much computational complexity has been added to the original $2n^3/3$ of LU? 2. Clearly PALU is more robust than LU, but given a non-singular matrix $A$, will it always be possible to perform the PALU decomposition? <div id='cholesky' /> Cholesky Back to TOC This is another direct method, only applicable to symmetric positive-definite matrices. In order to try this algorithm we have to create matrices of this kind. The next function generates random symmetric positive-definite matrices.
""" Randomly generates an nxn symmetric positive- definite matrix A. """ def generate_spd_matrix(n, flag=True): if flag: A = np.random.random((n,n)) # Constructing symmetry A += A.T # A = np.dot(A.T,A) # Another way #symmetric+diagonally dominant -> symmetric positive-definite deltas = 0.1*np.random.random(n) row_sum = A.sum(axis=1)-np.diag(A) np.fill_diagonal(A, row_sum+deltas) else: B = np.random.random((n,n)) # A way to make sure the quadratic form is greater or equal to zero: # this means x^T*B^T\B*x >= ||B*x||, but if B is singular, it could be zero. A = np.dot(B.T,B) # To avoid a being singular, we just add a positive diagonal matrix A = A + np.eye(n) return A
<div id='im' /> Iterative Methods Back to TOC
""" Randomly generates an nxn strictly diagonally dominant matrix A. """ def generate_dd_matrix(n): A = np.random.random((n,n)) deltas = 0.1*np.random.random(n) row_sum = A.sum(axis=1)-np.diag(A) np.fill_diagonal(A, row_sum+deltas) return A """ Computes relative error between each row on X matrix and y vector. """ def error(X, y): D = X-y err = np.linalg.norm(D, axis=1, ord=np.inf) return err
<div id='jacobi' /> Jacobi Back to TOC
""" Iterative methods implementations returns an array X with the the solutions at each iteration """ def jacobi(A, b, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices D = np.diag(A) Dinv = D**-1 R = A - np.diag(D) # R = (L+U) for i in range(1, n_iter): # X[i] = Dinv*(b - np.dot(R, X[i-1])) # v1.12 ri = b - np.dot(A, X[i-1]) X[i] = X[i-1]+Dinv*ri # = np.dot(np.linalg.inv(D),ri) return X def jacobi_M(A): L = np.tril(A,-1) U = np.triu(A,1) D = np.diag(np.diag(A)) M = -np.dot(np.linalg.inv(D),L+U) return M
$\mathbf{x}_{n+1}=M\,\mathbf{x}_{n}+\widehat{\mathbf{b}}$ Now let's solve the same linear system with the Jacobi method!
jac_sol = jacobi(A,b, n_iter=50) jac_err = error(jac_sol, np_sol) it = np.linspace(1, 50, 50) plt.figure(figsize=(12,6)) plt.semilogy(it, jac_err, marker='o', linestyle='--', color='b') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for Jacobi method') plt.show() Mj = jacobi_M(A) print(np.linalg.norm(Mj)) np.linalg.eigvals(Mj) np.abs(np.linalg.eigvals(Mj)) np.max(np.abs(np.linalg.eigvals(Mj)))
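The last lines above compute the spectral radius of the Jacobi iteration matrix for our system; the method converges exactly when that radius is below $1$. Strict diagonal dominance is a simple sufficient condition for this, as the following self-contained check (independent of the helper functions defined earlier) illustrates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6
A = rng.random((n, n))
np.fill_diagonal(A, A.sum(axis=1))   # diagonal now exceeds each off-diagonal row sum

D = np.diag(np.diag(A))
M = -np.linalg.inv(D) @ (A - D)      # Jacobi iteration matrix M = -D^{-1}(L+U)
rho = np.max(np.abs(np.linalg.eigvals(M)))
```

Because each row of $|M|$ sums to less than $1$ for a strictly diagonally dominant matrix, $\|M\|_\infty < 1$ and therefore $\rho(M) < 1$.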
<div id='gaussseidel' /> Gauss Seidel Back to TOC
def gauss_seidel(A, b, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices R = np.tril(A) # R=(L+D) U = A-R for i in range(1, n_iter): #X[i] = solve_triangular(R, b-np.dot(U, X[i-1]), upper=False) # v1.11 X[i] = X[i-1]+solve_triangular(R, b-np.dot(A, X[i-1]), upper=False) return X def gauss_seidel_M(A): L = np.tril(A,-1) U = np.triu(A,1) D = np.diag(np.diag(A)) M = -np.dot(np.linalg.inv(L+D),U) return M
Now let's solve the same linear system with the Gauss-Seidel method!
gauss_seidel_sol = gauss_seidel(A,b)
gauss_seidel_err = error(gauss_seidel_sol, np_sol)

plt.figure(figsize=(12,6))
plt.semilogy(it, gauss_seidel_err, marker='o', linestyle='--', color='r')
plt.grid(True)
plt.xlabel('Iterations')
plt.ylabel('Error')
plt.title('Infinity norm error for Gauss-Seidel method')
plt.show()
<div id='sor' /> SOR Back to TOC
def sor(A, b, w=1.05, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices R = np.tril(A) #R=(L+D) U = A-R # v1.11 L = np.tril(A,-1) D = np.diag(np.diag(A)) M = L+D/w for i in range(1, n_iter): #X_i = solve_triangular(R, b-np.dot(U, X[i-1]), upper=False) #X[i] = w*X_i + (1-w)*X[i-1] # v1.11 X[i] = X[i-1]+solve_triangular(M, b-np.dot(A, X[i-1]), upper=False) return X def sor_M(A,w=1.05): L = np.tril(A,-1) U = np.triu(A,1) D = np.diag(np.diag(A)) M = np.dot(np.linalg.inv(w*L + D),((1-w)*D -w*U)) return M
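One brute-force answer to the question below about choosing $\omega$: scan a grid of values and keep the one minimizing the spectral radius of the SOR iteration matrix, since that radius governs the asymptotic convergence rate. The helper names here (`sor_spectral_radius`, `best_omega`) are my own, and this is only a sketch; closed-form optimal $\omega$ exists for special matrix classes:

```python
import numpy as np

def sor_spectral_radius(A, w):
    # Spectral radius of M = (wL + D)^{-1} ((1-w)D - wU)
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    D = np.diag(np.diag(A))
    M = np.linalg.inv(w * L + D) @ ((1 - w) * D - w * U)
    return np.max(np.abs(np.linalg.eigvals(M)))

def best_omega(A, omegas):
    # Pick the grid value with the smallest spectral radius.
    radii = [sor_spectral_radius(A, w) for w in omegas]
    return omegas[int(np.argmin(radii))]
```

Since $\omega = 1$ reduces SOR to Gauss-Seidel, including it in the grid guarantees the chosen $\omega$ is at least as good (in spectral radius) as Gauss-Seidel.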
Here are some questions about SOR: - Why can averaging the current solution with the Gauss-Seidel solution improve convergence? - Why do we use $\omega > 1$ and not $\omega < 1$? - Could you describe a method to find the best value of $\omega$ (the one which optimizes convergence)? - Would it be a better option to re-compute $\omega$ at each iteration? <div id='ca' /> Convergence Analysis Back to TOC Let's see convergence plots all together
plt.figure(figsize=(12,6)) plt.semilogy(it, jac_err, marker='o', linestyle='--', color='b', label='Jacobi') plt.semilogy(it, gauss_seidel_err, marker='o', linestyle='--', color='r', label='Gauss-Seidel') plt.semilogy(it, sor_err, marker='o', linestyle='--', color='g', label='SOR') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for all methods') plt.legend(loc=0) plt.show()
<div id='examples' /> Examples Back to TOC <div id='hilbertMatrix' /> Hilbert Matrix Back to TOC
N=20
F_errors=np.zeros(N+1)
B_errors=np.zeros(N+1)
kappas=np.zeros(N+1)
my_range=np.arange(5,N+1)
for n in my_range:
    A=hilbert(n)
    x_exact=np.ones(n)
    b=np.dot(A,x_exact)
    x=np.linalg.solve(A,b)
    F_errors[n]=np.linalg.norm(x-x_exact)/np.linalg.norm(x_exact)
    kappas[n]=np.linalg.cond(A,2)
    B_errors[n]=np.linalg.norm(b-A @ x)/np.linalg.norm(b)

f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5), sharey = False)
ax1.semilogy(my_range, 10.**(-16+np.log10(kappas[my_range])), marker='o', linestyle='--', color='b',label='Estimated Forward error')
ax1.semilogy(my_range, F_errors[my_range], marker='o', linestyle='--', color='r',label=r'Forward error: $||\mathbf{x}-\mathbf{x}_a||_2/||\mathbf{x}||$')
ax1.semilogy(my_range, B_errors[my_range], marker='o', linestyle='--', color='g',label=r'Backward error: $||\mathbf{b}-A\,\mathbf{x}_a||_2/||\mathbf{b}||$')
ax1.semilogy(my_range, kappas[my_range]*B_errors[my_range], marker='o', linestyle='--', color='m', label=r'$\kappa(A_n)\,||\mathbf{b}-A\,\mathbf{x}_a||_2/||\mathbf{b}||$')
ax1.set_title('Errors')
ax1.grid(True)
ax1.set_xlabel('$n$')
ax1.legend(loc='best')
ax2.semilogy(my_range, kappas[my_range], marker='s', linestyle='--', color='k',label=r'$\kappa(A_n)$')
ax2.set_title(r'$\kappa(A_n)$')
ax2.set_xlabel('$n$')
ax2.grid(True)
plt.show()
Recall: $\dfrac{1}{\kappa(A)}\dfrac{\|\mathbf{b}-A\,\mathbf{x}_a\|}{\|\mathbf{b}\|} \leq \dfrac{\|\mathbf{x}-\mathbf{x}_a\|}{\|\mathbf{x}\|} \leq \kappa(A)\,\dfrac{\|\mathbf{b}-A\,\mathbf{x}_a\|}{\|\mathbf{b}\|}$ Let's solve a linear system of equations with $H_{200}$:
n = 200 # Generating matrix A = hilbert(n) # Defining the 'exact' solution x_exact = np.ones(n) # If we know the exact solution, we can compute the RHS just by multiplying 'A' by 'x_exact' b = A @ x_exact # Using the NumPy routine to solve the linear system of equations. x = np.linalg.solve(A,b) # A.x = A.1 = b
Now, we compute the condition number of $A=H_{200}$
kappa=np.linalg.cond(A,2) print(np.log10(kappa))
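As a smaller, self-contained illustration of the error bound recalled above (with $n=8$, chosen arbitrarily): the backward error stays near machine precision, while the forward error is inflated by roughly $\kappa(A)$:

```python
import numpy as np
from scipy.linalg import hilbert

n = 8
A = hilbert(n)
x_exact = np.ones(n)
b = A @ x_exact
x = np.linalg.solve(A, b)

# Relative forward and backward errors, and the condition number that links them.
forward  = np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact)
backward = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
kappa    = np.linalg.cond(A, 2)
```

Even though the residual is tiny, the computed solution can lose roughly $\log_{10}\kappa(A)$ digits of accuracy, exactly what the plots above showed as $n$ grows.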