Alright, now we can simply call the create() method to start the pattern loading/structuring process!
mvp.create()
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
After calling create(), the Mvp-object has a couple of 'new' attributes! Let's check them out.
print("The attribute .X represents our samples-by-features matrix of shape %s" % (mvp.X.shape,)) print("The attribute .y represents our targets (y) of shape %s" % (mvp.y.shape,))
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
As you can see, these are exactly the patterns (X) and labels (y) which we created manually earlier in our workshop (except that X contains fewer features due to the setting remove_zeros=True). We can also inspect the names of the patterns (as parsed from the design.con file):
print(mvp.contrast_labels)
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
Feature extraction / selection Another feature of skbold is that it offers some neuroimaging-specific transformers (implemented the same way as scikit-learn transformers). Let's look at, for example, the ClusterThreshold class - a transformer that applies a (3D) cluster-thresholding procedure on top of univariate featu...
from skbold.feature_extraction import ClusterThreshold
from sklearn.feature_selection import f_classif
clt = ClusterThreshold(mvp=mvp, min_score=10, selector=f_classif)
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
We initialized our ClusterThreshold object to perform an initial threshold at min_score=10. The voxels that "survived" this threshold are subsequently clustered and averaged within clusters. Below, we'll show that the API is exactly the same as scikit-learn's transformers:
from sklearn.model_selection import train_test_split
# Let's cross-validate our ClusterThresholding procedure (which you should always do!)
X_train, X_test, y_train, y_test = train_test_split(mvp.X, mvp.y, test_size=0.25)
print("Shape of X_train before cluster-thresholding: %s" % (X_train.shape,))
print("Shape of X_te...
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
Skbold has many more transformers, such as RoiIndexer, which indexes patterns given a certain mask/ROI. It doesn't matter whether the patterns are in EPI-space and the mask/ROI is in MNI-space; skbold registers the mask/ROI from one space to the other accordingly. (It needs FSL for this, and as we don't know whether yo...
from skbold.postproc import MvpResults
from sklearn.metrics import accuracy_score, f1_score
mvpr = MvpResults(mvp=mvp, n_iter=5, feature_scoring='forward', confmat=True,
                  accuracy=accuracy_score, f1=f1_score)
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
Importantly, the MvpResults class needs a Mvp object upon initialization to extract some meta-data and it needs to know how many folds (n_iter) we're going to keep track of (here we assume we'll do 5-fold CV). We also indicate that we want to keep track of the confusion-matrices across folds (confmat=True) and after th...
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import f_classif, SelectKBest
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold
pipe_line = Pipeline([('scaler', StandardScaler()), ('ufs', SelectKBe...
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
Now we can implement our analysis and simply call mvpr.update() after each fold:
for i, (train_idx, test_idx) in enumerate(skf.split(mvp.X, mvp.y)):
    print("Processing fold %i / %i" % (i+1, skf.n_splits))
    X_train, X_test = mvp.X[train_idx], mvp.X[test_idx]
    y_train, y_test = mvp.y[train_idx], mvp.y[test_idx]
    pipe_line.fit(X_train, y_train)
    pred = pipe_line.predict(X_test)
    mvpr...
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
We can check out the results of our analysis by calling the compute_scores() method:
performance, feature_scores = mvpr.compute_scores()
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
This prints out the mean and standard deviation of our metrics across folds and the number of voxels that were part of the analysis. We can check out the per-fold performance by looking at the first returned variable (here: performance):
performance
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
Also, we can check out the feature-scores (here: the "forward" model corresponding to the classifier weights), which is returned here as feature_scores. This is a nibabel Nifti-object, which we can check out using matplotlib:
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(20, 5))
scores_3d = feature_scores.get_data()
background = op.join('..', 'data', 'pi0070', 'wm.feat', 'reg', 'example_func.nii.gz')
background = nib.load(background).get_data()
for i, slce in enumerate(np.a...
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
Almost always you have more than one subject, so what you can do is loop over subjects, initialize a new MvpResults object for every subject, and store them in a separate list. Once the loop over subjects is completed, simply initialize a MvpAverageResults and call compute_statistics(), which we'll show below:
from glob import glob
feat_dirs = glob(op.join('..', 'data', 'pi*', 'wm.feat'))
n_folds = 5
mvp_results_list = []
for feat_dir in feat_dirs:
    print("Subject: %s" % feat_dir)
    mvp = MvpWithin(source=feat_dir, read_labels=read_labels, ref_space=ref_space, statistic=statistic, ...
tutorial/ICON2017_tutorial.ipynb
lukassnoek/ICON2017
mit
In this equation: $\epsilon$ is the single-particle energy, $\mu$ is the chemical potential (related to the total number of particles), $k$ is the Boltzmann constant, and $T$ is the temperature in Kelvin. In the cell below, typeset this equation using LaTeX: \begin{equation} F\left(\epsilon\right) = \frac{1}{e^{(\epsilon-\mu)/kT}+1} \end{equation}
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    return 1 / (np.exp((energy - mu) / kT) + 1)

assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0), np.array([ 0.52497919, 0.52220...
midterm/InteractEx06.ipynb
LimeeZ/phys292-2015-work
mit
Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Use energies over the range $[0,10.0]$ and a suitable number of points. Choose appropriate x and y limits for your visualization. Label your x and y axis and...
def plot_fermidist(mu, kT):
    energy = np.linspace(0.0, 10.0, 50)
    plt.plot(energy, fermidist(energy, mu, kT))
    plt.title('The Fermi Distribution')
    plt.grid(True)
    plt.xlabel('Energy [eV]')
    plt.ylabel('F (unitless)')

plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plo...
midterm/InteractEx06.ipynb
LimeeZ/phys292-2015-work
mit
Use interact with plot_fermidist to explore the distribution: For mu use a floating point slider over the range $[0.0,5.0]$. For kT use a floating point slider over the range $[0.1,10.0]$.
interact(plot_fermidist, mu = [0.0,5.0], kT=[0.1,10.0]);
midterm/InteractEx06.ipynb
LimeeZ/phys292-2015-work
mit
Just for fun, let's create a lambda to find and show nearest neighbor images
show_neighbors = lambda i: get_images_from_ids(knn_model.query(image_train[i:i+1]))['image'].show()
show_neighbors(8)
show_neighbors(26)
auto_data = image_train[image_train['label'] == 'automobile']
cat_data = image_train[image_train['label'] == 'cat']
dog_data = image_train[image_train['label'] == 'dog']
bird_data ...
dato/deeplearning/Deep Features for Image Retrieval.ipynb
jrrembert/cybernetic-organism
gpl-2.0
As of Mon 12th of Oct running on devel branch of GPy 0.8.8
GPy.plotting.change_plotting_library('plotly')
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
Gaussian process regression tutorial Nicolas Durrande 2013 with edits by James Hensman and Neil D. Lawrence We will see in this tutorial the basics for building a 1 dimensional and a 2 dimensional Gaussian process regression model, also known as a kriging model. We first import the libraries we will need:
import numpy as np
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
1-dimensional model For this toy example, we assume we have the following inputs and outputs:
X = np.random.uniform(-3.,3.,(20,1)) Y = np.sin(X) + np.random.randn(20,1)*0.05
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
Note that the observations Y include some noise. The first step is to define the covariance kernel we want to use for the model. We choose here a kernel based on the Gaussian kernel (i.e. RBF or squared exponential):
kernel = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
The parameter input_dim stands for the dimension of the input space. The parameters variance and lengthscale are optional, and default to 1. Many other kernels are implemented; type GPy.kern.<tab> to see a list.
#type GPy.kern.<tab> here:
GPy.kern.BasisFuncKernel?
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
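To make the roles of variance and lengthscale concrete, here is a plain-NumPy sketch of what the squared-exponential covariance computes (illustrative only; GPy evaluates this internally, and the function name rbf_kernel is ours):

```python
import numpy as np

def rbf_kernel(X1, X2, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance: k(x, x') = variance * exp(-|x - x'|^2 / (2 * lengthscale^2))."""
    sqdist = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

X = np.linspace(-3, 3, 5).reshape(-1, 1)
K = rbf_kernel(X, X)  # 5x5 covariance matrix; diagonal equals the variance
```

A larger lengthscale makes distant inputs more correlated (smoother functions); the variance scales the overall amplitude.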
The inputs required for building the model are the observations and the kernel:
m = GPy.models.GPRegression(X,Y,kernel)
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
By default, some observation noise is added to the model. The functions display and plot give an insight into the model we have just built:
from IPython.display import display
display(m)
fig = m.plot()
GPy.plotting.show(fig, filename='basic_gp_regression_notebook')
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
The above cell shows our GP regression model before optimization of the parameters. The shaded region corresponds to ~95% confidence intervals (i.e. +/- 2 standard deviations). The default values of the kernel parameters may not be optimal for the current data (for example, the confidence intervals seem too wide on the p...
m.optimize(messages=True)
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
If we want to perform some restarts to try to improve the result of the optimization, we can use the optimize_restarts function. This selects random (drawn from $N(0,1)$) initializations for the parameter values, optimizes each, and sets the model to the best solution found.
m.optimize_restarts(num_restarts = 10)
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
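The restart logic itself is generic; a minimal sketch of the idea (using scipy.optimize.minimize on a toy multimodal objective, not GPy's actual implementation):

```python
import numpy as np
from scipy.optimize import minimize

def optimize_with_restarts(objective, n_restarts=10, seed=0):
    """Draw initial values from N(0, 1), optimize each, keep the best solution."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        x0 = rng.standard_normal(1)       # random initialization
        res = minimize(objective, x0)      # local optimization
        if best is None or res.fun < best.fun:
            best = res                     # keep the lowest objective value
    return best

# A multimodal objective where a single local search can get stuck.
f = lambda x: float(np.sin(3 * x[0]) + 0.1 * x[0]**2)
best = optimize_with_restarts(f)
```

With several restarts the deepest basin is found with high probability, which is exactly why optimize_restarts is useful for non-convex GP likelihoods.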
In this simple example, the objective function (usually!) has only one local minimum, and each of the found solutions is the same. Once again, we can use print(m) and m.plot() to look at the resulting model. This time, the parameter values have been optimized against the log likelihood (aka the log mar...
display(m)
fig = m.plot()
GPy.plotting.show(fig, filename='basic_gp_regression_notebook_optimized')
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
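For reference, the objective being maximized has the standard GP form (a textbook expression, not taken from the notebook; $K$ is the kernel matrix evaluated at the inputs, $\sigma^2$ the observation-noise variance, and $n$ the number of data points):

\begin{equation} \log p(\mathbf{y}\mid X) = -\tfrac{1}{2}\,\mathbf{y}^{\top}\left(K+\sigma^2 I\right)^{-1}\mathbf{y} \;-\; \tfrac{1}{2}\log\left|K+\sigma^2 I\right| \;-\; \tfrac{n}{2}\log 2\pi \end{equation}

The first term rewards data fit, the second penalizes model complexity, which is why maximizing it trades the two off automatically.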
New plotting of GPy 0.9 and later The new plotting allows you to plot the density of a GP object in a more fine-grained way, by plotting more percentiles of the distribution, color-coded by their opacity.
display(m)
fig = m.plot(plot_density=True)
GPy.plotting.show(fig, filename='basic_gp_regression_density_notebook_optimized')
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
2-dimensional example Here is a 2 dimensional example:
# sample inputs and outputs
X = np.random.uniform(-3.,3.,(50,2))
Y = np.sin(X[:,0:1]) * np.sin(X[:,1:2]) + np.random.randn(50,1)*0.05
# define kernel
ker = GPy.kern.Matern52(2, ARD=True) + GPy.kern.White(2)
# create simple GP model
m = GPy.models.GPRegression(X,Y,ker)
# optimize and plot
m.optimize(messages=True, max_f_...
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
The flag ARD=True in the definition of the Matern kernel specifies that we want one lengthscale parameter per dimension (i.e. the GP is not isotropic). Note that for 2-d plotting, only the mean is shown. Plotting slices To see the uncertainty associated with the above predictions, we can plot slices through the surface...
slices = [-1, 0, 1.5]
figure = GPy.plotting.plotting_library().figure(3, 1, shared_xaxes=True,
                                                  subplot_titles=('slice at -1', 'slice at 0', 'slice at 1.5', ...
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
A few things to note: * we've also passed the optional ax argument, to make the GP plot on a particular subplot * the data look strange here: we're seeing slices of the GP, but all the data are displayed, even though they might not be close to the current slice. To get vertical slices, we simply fixed the other inpu...
slices = [-1, 0, 1.5]
figure = GPy.plotting.plotting_library().figure(3, 1, shared_xaxes=True,
                                                  subplot_titles=('slice at -1', 'slice at 0', 'slice at 1.5', ...
GPy/basic_gp.ipynb
SheffieldML/notebook
bsd-3-clause
<h3>II. Preprocessing </h3> We process the missing values first, dropping columns which have a large number of missing values and imputing values for those that have only a few missing values. The one-class SVM exercise has a more detailed version of these steps.
# dropping columns which have a large number of missing entries
m = map(lambda x: sum(secom[x].isnull()), xrange(secom.shape[1]))
m_200thresh = filter(lambda i: (m[i] > 200), xrange(secom.shape[1]))
secom_drop_200thresh = secom.dropna(subset=[m_200thresh], axis=1)
dropthese = [x for x in secom_drop_200thresh.columns.va...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
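On a toy frame, the two steps (drop heavily-missing columns, impute the rest) can be sketched like this (the threshold and column names are illustrative, not the SECOM ones):

```python
import numpy as np
import pandas as pd

# Toy frame: one column mostly missing, one with a few gaps, one complete.
df = pd.DataFrame({
    'a': [1.0, np.nan, np.nan, np.nan],   # mostly missing -> drop
    'b': [1.0, 2.0, np.nan, 4.0],         # few missing -> impute
    'c': [5.0, 6.0, 7.0, 8.0],            # complete
})

# Drop columns whose missing count exceeds a threshold (here 2).
keep = df.columns[df.isnull().sum() <= 2]
clean = df[keep]

# Impute remaining gaps with the column mean.
clean = clean.fillna(clean.mean())
```

The real exercise uses a threshold of 200 missing entries; the logic is the same.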
<h3>III. GBM: baseline vs using sample_weight</h3> We will first compare baseline results with the performance of a model where the sample_weight is used. As discussed in previous exercises, the <i>Matthews correlation coefficient (MCC)</i> is used instead of the <i>Accuracy</i> to compute the score.
# split data into train and holdout sets
# stratify the sample used for modeling to preserve the class proportions
X_train, X_test, y_train, y_test = tts(secom_imp, y,
                                       test_size=0.2, stratify=y, random_state=5)
# function to test GBC parameters
def GBC(params, weight):
    ...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
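Why MCC rather than accuracy? On imbalanced data a majority-class classifier scores high accuracy but zero MCC; a minimal illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Imbalanced toy labels: 95 negatives, 5 positives.
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)   # classifier that always predicts the majority class

acc = accuracy_score(y_true, y_pred)     # high accuracy despite learning nothing
mcc = matthews_corrcoef(y_true, y_pred)  # 0.0 -- no correlation with the truth
```

Accuracy comes out at 0.95 while MCC is 0, which is exactly the failure mode MCC guards against on the SECOM data.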
<h4>A) Baseline</h4>
params = {'n_estimators': 800, 'max_depth': 3, 'subsample': 0.8,
          'max_features': 'sqrt', 'learning_rate': 0.019,
          'min_samples_split': 2, 'random_state': SEED}
GBC(params, 0)
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
<h4>B) Sample weight</h4>
# RUN 1: using the same parameters as the baseline
params = {'n_estimators': 800, 'max_depth': 3, 'subsample': 0.8,
          'max_features': 'sqrt', 'learning_rate': 0.019,
          'min_samples_split': 2, 'random_state': SEED}
GBC(params, 1)

# RUN 2: manually selecting parameters to optimize the train/test MCC with sample w...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
In the baseline case (where we do not adjust the weights), we get a high MCC score for the training set (0.97). The test MCC is 0.197, so there is a large gap between the train and test MCC. When sample weights are used, we get a test set MCC of 0.242 after tuning the parameters. The tuning parameters play a big role i...
params = {'n_estimators': 800, 'max_depth': 3, 'subsample': 0.8,
          'max_features': 'sqrt', 'learning_rate': 0.019,
          'min_samples_split': 2, 'random_state': SEED}
# GBM
clf = GradientBoostingClassifier(**params)
clf.fit(X_train, y_train)
gbm_importance = clf.feature_importances_
gbm_ranked_indices = np.argsort...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
Roughly half the top fifteen most important features for the GBM were also the top fifteen computed for the Random Forest classifier. There are complex interactions between the parameters so we do not expect the two classifiers to give the same results. In Section IV where we optimize the hyperparameters, the nvar (num...
# function to compute MCC vs number of trees
def GBC_trend(weight):
    base_params = {'max_depth': 3, 'subsample': 0.8, 'max_features': 'sqrt',
                   'learning_rate': 0.019, 'min_samples_split': 2, 'random_state': SEED}
    mcc_train = []
    mcc_test = []
    for i in range(500, 1600, 100):
        p...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
Default weight option (left): After about 900 iterations, the GBM models all the training data perfectly. At the same time, there is a large gap in the holdout data classification results. This is a classic case of overfitting. Sample weight option (right): This plot was constructed using the same parameters as the ...
# defining the MCC metric to assess cross-validation
def mcc_score(y_true, y_pred):
    return matthews_corrcoef(y_true, y_pred)

mcc_scorer = make_scorer(mcc_score, greater_is_better=True)
# convert to DataFrame for easy indexing of number of variables (nvar)
X_train = pd.DataFrame(X_train)
X_test...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
<h4>Run 1</h4>
start = time()
trials = Trials()
best = fmin(f, space, algo=tpe.suggest, max_evals=100, trials=trials)
print("HyperoptCV took %.2f seconds." % (time() - start))
print '\nBest parameters (by index):'
print best
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
We will apply the optimal hyperparameters selected via hyperopt above to the GBM classifier. The optimal parameters include nvar= 200 and the use of sample_weight.
params = {'n_estimators': 1200, 'max_depth': 3, 'subsample': 0.7,
          'max_features': 'log2', 'learning_rate': 0.018,
          'min_samples_split': 3, 'random_state': SEED}
train_ = X_train.loc[:, gbm_ranked_indices[:200]]
test_ = X_test.loc[:, gbm_ranked_indices[:200]]
clf = GradientBoostingClassifier(**params)
sampl...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
<h4>Run 2</h4> I repeated the run with hyperopt a few times and in each case the optimal parameters include nvar= 200 and the use of sample_weight. There is a great deal of variability among the remaining parameters selected across the runs. This is an example of a second run.
trials = Trials()
best = fmin(f, space, algo=tpe.suggest, max_evals=100, trials=trials)
print '\nBest parameters (by index):'
print best
params = {'n_estimators': 700, 'max_depth': 4, 'subsample': 0.8,
          'max_features': 'log2', 'learning_rate': 0.018,
          'min_samples_split': 2, 'random_state': SEED}
train_ = X...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
<h4> Run 3 -- default weight</h4> For both Run 1 and Run 2, hyperopt selected the sample_weight option. This was the case for most of the runs, though there were a few instances in which the default option was selected. This is an example:
start = time()
trials = Trials()
best = fmin(f, space, algo=tpe.suggest, max_evals=100, trials=trials)
print("HyperoptCV took %.2f seconds." % (time() - start))
print '\nBest parameters (by index):'
print best
params = {'n_estimators': 1000, 'max_depth': 2, 'subsample': 0.9,
          'max_features': 'log2', 'learnin...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
The results from hyperopt (tested over ten or more runs) were quite variable, and no conclusions can be made. By seeding the random_state parameter, we should be able to get reproducible results, but since this was not the case here, we will need to investigate further. <h3>V. Grid search with cross-validation</h3>
# cv function
def GBMCV(weight):
    clf = GradientBoostingClassifier(random_state=SEED)
    param_grid = {"n_estimators": [800, 900, 1000, 1200],
                  "max_depth": [2, 3],
                  "subsample": [0.6, 0.7, 0.8],
                  "min_samples_split": [2, 3],
                  "max_features"...
secomdata_gbm.ipynb
Meena-Mani/SECOM_class_imbalance
mit
Interact basics Write a print_sum function that prints the sum of its arguments a and b.
def print_sum(a, b):
    print(a + b)
assignments/assignment05/InteractEx01.ipynb
sraejones/phys202-2015-work
mit
Use the interact function to interact with the print_sum function. a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1. b should be an integer slider over the interval [-8, 8] with step sizes of 2.
w = interactive(print_sum, a=(-10.0, 10.0, 0.1), b=(-8, 8, 2))
display(w)
w.result
assert True # leave this for grading the print_sum exercise
assignments/assignment05/InteractEx01.ipynb
sraejones/phys202-2015-work
mit
Use the interact function to interact with the print_string function. s should be a textbox with the initial value "Hello World!". length should be a checkbox with an initial value of True.
w = interactive(print_string, s="Hello World!", length=True)
w
assert True # leave this for grading the print_string exercise
assignments/assignment05/InteractEx01.ipynb
sraejones/phys202-2015-work
mit
For this example, we will read in a reflectance tile in ENVI format. NEON provides an h5 plugin for ENVI
img = envi.open('../data/Hyperspectral/NEON_D02_SERC_DP3_368000_4306000_reflectance.hdr',
                '../data/Hyperspectral/NEON_D02_SERC_DP3_368000_4306000_reflectance.dat')
code/Python/remote-sensing/hyperspectral-data/classification_kmeans_pca_py.ipynb
mjones01/NEON-Data-Skills
agpl-3.0
c contains 5 groups of spectral curves with 360 bands (the # of bands we've kept after removing the water vapor windows and the last 10 noisy bands). Let's plot these spectral classes:
%matplotlib inline
import pylab
pylab.figure()
pylab.hold(1)
for i in range(c.shape[0]):
    pylab.plot(c[i])
pylab.title('Spectral Classes from K-Means Clustering')
pylab.xlabel('Bands (with Water Vapor Windows Removed)')
pylab.ylabel('Reflectance')
pylab.show()
#%matplotlib notebook
view = imshow(img_subset, bands=(58...
code/Python/remote-sensing/hyperspectral-data/classification_kmeans_pca_py.ipynb
mjones01/NEON-Data-Skills
agpl-3.0
Downloading the atomic data
# the data is automatically downloaded
download_atom_data('kurucz_cd23_chianti_H_He')
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Downloading the example file
!curl -O https://raw.githubusercontent.com/tardis-sn/tardis/master/docs/models/examples/tardis_example.yml
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Running the simulation (long output)
# TARDIS now uses the data in the data repo
sim = run_tardis('tardis_example.yml')
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Plotting the Spectrum
%pylab inline
spectrum = sim.runner.spectrum
spectrum_virtual = sim.runner.spectrum_virtual
spectrum_integrated = sim.runner.spectrum_integrated
figure(figsize=(10,6))
plot(spectrum.wavelength, spectrum.luminosity_density_lambda, label='normal packets')
plot(spectrum.wavelength, spectrum_virtual.luminosity_density_la...
docs/quickstart/quickstart.ipynb
kaushik94/tardis
bsd-3-clause
Multiple concurrent RDataFrame runs If your analysis needs multiple RDataFrames to run (for example multiple dataset samples, data vs simulation, etc.), ROOT.RDF.RunGraphs lets you trigger their event loops concurrently.
ROOT.EnableImplicitMT()
treename1 = "myDataset"
filename1 = "data/collections_dataset.root"
treename2 = "dataset"
filename2 = "data/example_file.root"
df1 = ROOT.RDataFrame(treename1, filename1)
df2 = ROOT.RDataFrame(treename2, filename2)
h1 = df1.Histo1D("px")
h2 = df2.Histo1D("a")
ROOT.RDF.RunGraphs((h1, h2))
c ...
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Distributed RDataFrame An RDataFrame analysis written in Python can be executed both locally - possibly in parallel on the cores of the machine - and distributedly by offloading computations to external resources, including Spark and Dask clusters. This feature is enabled by the architecture depicted below, which shows...
import pyspark
sc = pyspark.SparkContext.getOrCreate()
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Create a ROOT dataframe We now create an RDataFrame based on the same dataset seen in the exercise rdataframe-dimuon. A Spark RDataFrame receives two extra parameters: the number of partitions to apply to the dataset (npartitions) and the SparkContext object (sparkcontext). Besides that detail, a Spark RDataFrame is no...
# Use a Spark RDataFrame
RDataFrame = ROOT.RDF.Experimental.Distributed.Spark.RDataFrame
df = RDataFrame("h42", "https://root.cern/files/h1big.root", npartitions=4, sparkcontext=sc)
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Run your analysis unchanged From now on, the rest of your application can be written exactly as we have seen with local RDataFrame. The goal of the distributed RDataFrame module is to support all the traditional RDataFrame operations (those that make sense in a distributed context at least). Currently only a subset of ...
%%time
df1 = df.Filter("nevent > 1")
df2 = df1.Define("mpt", "sqrt(xpt*xpt + ypt*ypt)")
c = df.Count()
m = df2.Mean("mpt")
print(f"Number of events after processing: {c.GetValue()}")
print(f"Mean of column 'mpt': {m.GetValue()}")
SoftwareCarpentry/09-rdataframe-advanced.ipynb
root-mirror/training
gpl-2.0
Now we'll fit the multiband periodogram model to this data. For more information on the model, refer to the VanderPlas and Ivezic paper mentioned above.
from gatspy.periodic import LombScargleMultiband
model = LombScargleMultiband(Nterms_base=1, Nterms_band=0)
model.fit(t, y, dy, filts)
periods = np.linspace(period - 0.1, period + 0.1, 2000)
power = model.periodogram(periods)
plt.plot(periods, power, lw=1)
plt.xlim(periods[0], periods[-1]);
examples/MultiBand.ipynb
nhuntwalker/gatspy
bsd-2-clause
We can see what the multiterm model looks like by plotting it over the data:
def plot_model(model, lcid):
    t, y, dy, filts = rrlyrae.get_lightcurve(lcid)
    model.fit(t, y, dy, filts)
    tfit = np.linspace(0, period, 1000)
    for filt in 'ugriz':
        mask = (filts == filt)
        eb = plt.errorbar(t[mask] % period, y[mask], dy[mask], fmt='.', label=filt)
        yfit = model.pre...
examples/MultiBand.ipynb
nhuntwalker/gatspy
bsd-2-clause
If we'd like to do a higher-order multiterm model, we can simply adjust the number of terms in the base and band models:
plot_model(LombScargleMultiband(Nterms_base=4, Nterms_band=1), lcid)
examples/MultiBand.ipynb
nhuntwalker/gatspy
bsd-2-clause
Now we have the data loaded nicely into a Pandas dataframe and we can look at some of the basics of the data.
display('Number of rows: {}'.format(len(df)))
display('Unique SSIDs: {}'.format(len(df['SSID'].unique())))
display('Unique MACs: {}'.format(len(df['MAC'].unique())))
display('Number of Auth Mode types: {}'.format(len(df['AuthMode'].unique())))

def auth_filter(x):
    if 'WPA2' in x:
        return 'WPA2'
    elif 'WPA...
JHU Wifi.ipynb
ThaWeatherman/jhu_wifi
mit
So there are a significant number of open networks, but the overall majority use WPA2. That's good for the University but not so great for attackers. Of course, there could be a way around that via WPS. How many networks use that?
def wps(x):
    if 'WPS' in x:
        return 'WPS'
    else:
        return 'Not WPS'

df['AuthMode'].apply(wps).value_counts().plot(kind='barh')
JHU Wifi.ipynb
ThaWeatherman/jhu_wifi
mit
Over 500 networks use WPS! Using a tool like Reaver an attacker could easily breach those networks. This is just some basic insights into the data. We could look further into the different forms of WPA/WPA2 authentication, but for an attacker these insights are enough. Using the above function for extracting WPS networ...
s = df['AuthMode'].apply(wps)
wps_entries = df.loc[s[s == 'WPS'].index]
wps_entries.head()
JHU Wifi.ipynb
ThaWeatherman/jhu_wifi
mit
Olympic Marathon Data
- Gold medal times for the Olympic Marathon since 1896.
- Marathons before 1924 didn't have a standardised distance.
- Present results using pace per km.
- In 1904 the Marathon was badly organised, leading to very slow times.
import numpy as np
import pods
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
offset = y.mean()
scale = np.sqrt(y.var())
import matplotlib.pyplot as plt
import teaching_plots as plot
import mlai
xlim = (1875, 2030)
ylim = (2.5, 6.5)
yhat = (y - offset) / scale
fig, ax = plt.subplots(figsize=plo...
notebooks/pods/datasets/olympic-marathon.ipynb
sods/ods
bsd-3-clause
Interpretation of the FT of an image An image can be understood as the superposition of two-dimensional harmonic functions (sines and cosines) of different frequencies and directions. The FT gives us information about the sines and cosines needed (in terms of their frequency, direction and amplitude) to form the...
tam = 256  # matrix size
dx = 0.01  # resolution (m/pixel)
x = np.arange(-dx*tam/2, dx*tam/2, dx)  # spatial coordinates
X, Y = np.meshgrid(x, x)  # two-dimensional space
A1 = 1.  # amplitude in arbitrary units
f1 = 1.  # spatial frequency (1/m)
g1 = A1*np.sin(2*np.pi*f1*X)  # image in the "spatial" domain
ftg1 ...
Fourier/Tarea_Fourier/FT-2D.ipynb
cosmolejo/Fisica-Experimental-3
gpl-3.0
Note that only approximately two Dirac deltas appear in frequency space. Analogously to the one-dimensional case, those two points correspond to the frequency of the sine in the spatial domain. Note also that the points lie along the horizontal direction, indicating that the direction of the two-dimensional sine is h...
tam = 256  # matrix size
dx = 0.01  # resolution (m/pixel)
x = np.arange(-dx*tam/2, dx*tam/2, dx)  # spatial coordinates
X, Y = np.meshgrid(x, x)  # two-dimensional space
A1 = 1.  # amplitude in arbitrary units
f1 = 1.  # spatial frequency (1/m)
gx = A1*np.sin(2*np.pi*f1*X)  # sine in the horizontal direction
gy = A...
Fourier/Tarea_Fourier/FT-2D.ipynb
cosmolejo/Fisica-Experimental-3
gpl-3.0
Observe that deltas now appear in the vertical direction, accounting for the sine in the vertical direction. Let's do one last example, including a cosine in the diagonal direction with an amplitude twice that of the other sines. In addition, the sine in the horizontal direction has a frequency twice...
tam = 256  # matrix size
dx = 0.01  # resolution (m/pixel)
x = np.arange(-dx*tam/2, dx*tam/2, dx)  # spatial coordinates
X, Y = np.meshgrid(x, x)  # two-dimensional space
A1 = 1.  # amplitude in arbitrary units
f1 = 1.  # spatial frequency (1/m)
gx = A1*np.sin(2*np.pi*2*f1*X)  # sine in the horizontal direction
gy =...
Fourier/Tarea_Fourier/FT-2D.ipynb
cosmolejo/Fisica-Experimental-3
gpl-3.0
SVM Classification Now try classification with SVM.
from sklearn.svm import SVC
svm = SVC(random_state=42)
svm.fit(Xtrain, ytrain)
ypredSVM = svm.predict(Xtest)
print(classification_report(ytest, ypredSVM, target_names=['QSOs', 'stars']))
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
Pretty good. 81% completeness and 89% efficiency. Do it again with scaled data to see if that makes any difference. It doesn't seem to for colors alone, but might for other attributes?
from sklearn.svm import SVC
svm = SVC(random_state=42)
svm.fit(XStrain, yStrain)
ySpredSVM = svm.predict(XStest)
print(classification_report(yStest, ySpredSVM, target_names=['QSOs', 'stars']))
ypredCVSVM = cross_val_predict(svm, XS, y)
data['ypred'] = ypredCVSVM
qq = ((data['shenlabel']==0) & (data['ypred']==0))
ss...
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
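In this context, completeness corresponds to recall and efficiency to precision (assuming the usual astronomy usage of the terms); a minimal check with toy labels:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Toy labels: 10 positives (e.g. QSOs), 10 negatives; the classifier
# recovers 8 of the 10 positives and makes 1 false positive.
y_true = np.array([1]*10 + [0]*10)
y_pred = np.array([1]*8 + [0]*2 + [1]*1 + [0]*9)

completeness = recall_score(y_true, y_pred)   # fraction of real positives recovered
efficiency = precision_score(y_true, y_pred)  # fraction of selected objects that are real
```

Here completeness is 8/10 = 0.8 and efficiency 8/9, mirroring how the percentages quoted above are computed.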
Random Forest Classification Now we'll try a DecisionTree, a RandomForest, and an ExtraTrees classifier. Note that n_jobs=-1 is supposed to allow it to use multiple processors if it can, but I'm honestly not sure how that works (and also not convinced that it isn't causing problems, as sometimes when I use it I get a w...
# Random Forests, etc.
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
clfDTC = DecisionTreeClassifier(max_depth=None, min_samples_split=2, random_state=42)
clfRFC = RandomForestClassifier(n_estimators=10, max_depth=N...
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
Bagging Now we'll try a bagging classifier, based on K Nearest Neighbors. I did some playing around with max_samples and max_features (both of which run from 0 to 1) and found 0.5 and 1.0 to work best. Note that you have to give 1.0 in decimal otherwise it takes it to be 1 feature instead of 100% of them.
# Bagging from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier bagging = BaggingClassifier(KNeighborsClassifier(), max_samples=0.5, max_features=1.0, random_state=42, n_jobs=-1) bagging.fit(Xtrain, ytrain) ypredBag = bagging.predict(Xtest) print(c...
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
This seems to work better than the RandomForest, so it might be worth optimizing its parameters. First try n_neighbors=7 (the default is 5), then do the same with the scaled data.
# Bagging Scaled data; 7 neighbors from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier bagging = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), max_samples=0.5, max_features=1.0, random_state=42, n_jobs=-1) bagging.fit(XStrain, yStrain) ypredBag = bagging.predict(XS...
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
Overall: 83% completeness and 85% efficiency.
data['ypred'] = ypredCVBAG qq = ((data['labels']==0) & (data['ypred']==0)) ss = ((data['labels']==1) & (data['ypred']==1)) qs = ((data['labels']==0) & (data['ypred']==1)) sq = ((data['labels']==1) & (data['ypred']==0)) dataqq = data[qq] datass = data[ss] dataqs = data[qs] datasq = data[sq] print len(dataqq), "quasars...
highz_clustering/classification/.ipynb_checkpoints/SpIESHighzCandidateSelection-checkpoint.ipynb
JDTimlin/QSO_Clustering
mit
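The quoted completeness and efficiency follow directly from the four confusion-matrix counts computed above. A quick sketch of the arithmetic; the counts below are placeholders chosen to reproduce the quoted 83%/85%, not the notebook's actual numbers.

```python
# Placeholder confusion-matrix counts (hypothetical, for illustration only).
n_qq = 830   # true quasars classified as quasars (true positives)
n_sq = 170   # true quasars classified as stars   (false negatives)
n_qs = 146   # stars classified as quasars        (false positives)

completeness = n_qq / (n_qq + n_sq)   # fraction of real quasars recovered
efficiency = n_qq / (n_qq + n_qs)     # purity of the selected quasar sample
print(round(completeness, 2), round(efficiency, 2))
```

In scikit-learn's `classification_report`, completeness corresponds to recall and efficiency to precision for the quasar class.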
Produce the plots:
# define figure and axes fig = plt.figure(figsize=(15,5)) ax0 = fig.add_subplot(131) ax1 = fig.add_subplot(132) ax2 = fig.add_subplot(133) # figure A: predicted probabilities vs. empirical probs hist, bin_edges = np.histogram(X,bins=100) p = [np.sum(y[np.where((X>=bin_edges[i]) & (X<bin_edges[i+1]))[0]])/np.max([hist[...
results/DI_plot1.ipynb
carltoews/tennis
gpl-3.0
Data augmentation for images
def pre_process_image(image, training): # This function takes a single image as input, # and a boolean whether to build the training or testing graph. if training: # Randomly crop the input image. image = tf.random_crop(image, size=[img_size_cropped, img_size_cropped, num_channels]) ...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Creating Main Processing https://github.com/google/prettytensor/blob/master/prettytensor/pretty_tensor_image_methods.py
def main_network(images, training): images = tf.cast(images, tf.float32) x_pretty = pt.wrap(images) if training: phase = pt.Phase.train else: phase = pt.Phase.infer # Can't wrap it to pretty tensor because # 'Layer' object has no attribute 'local_response_normaliz...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
weights_conv1 = get_weights_variable(layer_name='conv1_1') weights_conv2 = get_weights_variable(layer_name='conv2_2') with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(weights_conv1).shape) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Get the output of the convolutional layers so we can plot them later.
output_conv1 = get_layer_output(layer_name='conv1_1') output_conv2 = get_layer_output(layer_name='conv2_2')
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Restore or initialize variables Training this neural network may take a long time, especially if you do not have a GPU. We therefore save checkpoints during training so we can continue training at another time (e.g. during the night), and also for performing analysis later without having to train the neural network eve...
save_dir = 'checkpoints_alex_net/'
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Function for selecting a random batch of images from the training-set.
def random_batch(): num_images = len(images_train) # Create a random index. idx = np.random.choice(num_images, size=train_batch_size, replace=False) # Use the random index to select random images and labels. x_batch = images_train[idx, :, :, :]...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Optimization The progress is printed every 100 iterations. A checkpoint is saved every 1000 iterations and also after the last iteration.
def optimize(num_iterations): start_time = time.time() for i in range(num_iterations): # Get a batch of training examples. # x_batch now holds a batch of images and # y_true_batch are the true labels for those images. x_batch, y_true_batch = random_batch() # Put the bat...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Calculating classifications This function calculates the predicted classes of images and also returns a boolean array indicating whether the classification of each image is correct. The calculation is done in batches because it might otherwise use too much RAM. If your computer crashes, try lowering the batch size.
# Split the data-set in batches of this size to limit RAM usage. batch_size = 256 def predict_cls(images, labels, cls_true): num_images = len(images) # Allocate an array for the predicted classes which # will be calculated in batches and filled into this array. cls_pred = np.zeros(shape=num_images, dt...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
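The batching scheme described above can be sketched framework-independently: walk the data in fixed-size slices and fill a preallocated prediction array. Here `predict_fn` is a hypothetical stand-in for the notebook's `session.run(y_pred_cls, feed_dict=...)` call.

```python
import numpy as np

# Split the data-set into batches of this size to limit RAM usage.
batch_size = 256

def predict_in_batches(images, predict_fn):
    n = len(images)
    # Preallocate the output; each batch fills its own slice.
    cls_pred = np.zeros(n, dtype=np.int64)
    i = 0
    while i < n:
        j = min(i + batch_size, n)          # the last batch may be smaller
        cls_pred[i:j] = predict_fn(images[i:j])
        i = j
    return cls_pred

# Dummy predictor standing in for the TensorFlow session call.
fake_model = lambda batch: np.zeros(len(batch), dtype=np.int64)
preds = predict_in_batches(np.zeros((1000, 8)), fake_model)
print(preds.shape)
```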
Helper-function for plotting convolutional weights
def plot_conv_weights(weights, input_channel=0): # Assume weights are TensorFlow ops for 4-dim variables # e.g. weights_conv1 or weights_conv2. # Retrieve the values of the weight-variables from TensorFlow. # A feed-dict is not necessary because nothing is calculated. w = session.run(weights) ...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Helper-function for plotting the output of convolutional layers
def plot_layer_output(layer_output, image): # Assume layer_output is a 4-dim tensor # e.g. output_conv1 or output_conv2. # Create a feed-dict which holds the single input image. # Note that TensorFlow needs a list of images, # so we just create a list with this one image. feed_dict = {x: [image...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Examples of distorted input images In order to artificially inflate the number of images available for training, the neural network uses pre-processing with random distortions of the input images. This should hopefully make the neural network more flexible at recognizing and classifying images. This is a helper-functio...
def plot_distorted_image(image, cls_true): # Repeat the input image 9 times. image_duplicates = np.repeat(image[np.newaxis, :, :, :], 9, axis=0) # Create a feed-dict for TensorFlow. feed_dict = {x: image_duplicates} # Calculate only the pre-processing of the TensorFlow graph # which distorts t...
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Perform optimization
tf.summary.FileWriter('./graphs', sess.graph) # if False: optimize(num_iterations=10000)
seminar_3/.ipynb_checkpoints/AlexNet-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
<div id='intro' /> Introduction Back to TOC In our last Jupyter Notebook we learned how to solve 1D equations. Now, we'll go to the next level and will learn how to solve not just <i>one</i> equation, but a <i>system</i> of linear equations. This is a set of $n$ equations involving $n$ variables wherein all the equati...
def lu_decomp(A, show=False, print_precision=2): N,_ = A.shape U = np.copy(A) L = np.identity(N) if show: print('Initial matrices') print('L = '); print(np.array_str(L, precision=print_precision, suppress_small=True)) print('U = '); print(np.array_str(U, precision=print_precision...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
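The core of the Doolittle elimination loop can be condensed to a few lines; this is a minimal sketch without pivoting (it fails whenever a zero pivot appears), verified by multiplying the factors back together.

```python
import numpy as np

def lu(A):
    """Minimal Doolittle LU without pivoting: A = L U."""
    n = A.shape[0]
    L, U = np.identity(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # breaks if the pivot U[j, j] is 0
            U[i] -= L[i, j] * U[j]
    return L, U

A = np.array([[4., 3.], [6., 3.]])
L, U = lu(A)
print(np.allclose(L @ U, A))   # the factorization reproduces A
```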
Once the decomposition is done, solving a linear system like $A x = b$ is straightforward: $$A x = b \rightarrow L U x = b \ \ \text{ if we set } \ \ U x = c \rightarrow L c = b \ \ \text{ (solve for c) } \ \rightarrow U x = c$$ and as you might know, solving lower and upper triangular systems can be easily performed ...
""" Solves a linear system A x = b, where A is a triangular (upper or lower) matrix """ def solve_triangular(A, b, upper=True): n = b.shape[0] x = np.zeros_like(b) if upper==True: #perform back-substitution x[-1] = (1./A[-1,-1]) * b[-1] for i in range(n-2, -1, -1): x[i] =...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
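The back-substitution half of the triangular solver can be checked against NumPy's general solver. A sketch for the upper-triangular case $Ux = c$; the matrix is made well-conditioned by boosting the diagonal.

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for upper-triangular U, from the last row upward."""
    n = b.shape[0]
    x = np.zeros_like(b, dtype=float)
    for i in range(n - 1, -1, -1):
        # Subtract the already-known tail, then divide by the diagonal entry.
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

rng = np.random.RandomState(0)
U = np.triu(rng.rand(5, 5)) + 5 * np.identity(5)   # safely nonzero diagonal
b = rng.rand(5)
x = back_substitution(U, b)
print(np.allclose(x, np.linalg.solve(U, b)))
```

Forward substitution for $Lc = b$ is the mirror image, sweeping from the first row downward.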
which is a very good result! Two important facts about this method should be noted: Computing the LU decomposition requires $2n^3/3$ floating-point operations. Can you verify that? When computing the LU decomposition you can see the instruction L[i,j] = U[i,j]/U[j,j]. Here we divide an entry below the main diagonal by the pi...
#permutation between rows i and j on matrix A def row_perm(A, i, j): tmp = np.copy(A[i]) A[i] = A[j] A[j] = tmp def palu_decomp(A, show=False, print_precision=2): N,_ = A.shape P = np.identity(N) L = np.zeros((N,N)) U = np.copy(A) if show: print('Initial matrices') print...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
The procedure to solve the system $Ax=b$ remains almost the same. We have to add the effect of the permutation matrix $P$: $$A x = b \rightarrow P A x = P b \rightarrow L U x = b' \ \ \text{ if we set } \ \ U x = c \rightarrow L c = b' \ \ \text{ (solve for c) } \ \rightarrow U x = c$$
def solve_palu(A, b, show=False, print_precision=2): P,L,U = palu_decomp(A, show, print_precision=print_precision) # A.x = b -> P.A.x = P.b = b' -> L.U.x = b' b = np.dot(P,b) # L.c = b' with c = U.x c = solve_triangular(L, b, upper=False) x = solve_triangular(U, c) return x
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
Let's test this new method against the LU and NumPy solvers
palu_sol = solve_palu(A, b, show=True, print_precision=4) np.linalg.norm(palu_sol - lu_sol) np.linalg.norm(palu_sol - np_sol) P,L,U = palu_decomp(A) print('P: ',P) print('L: ',L) print('U: ',U)
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
Here are some questions about PALU: 1. How much computational complexity has been added to the original $2n^3/3$ of LU? 2. Clearly PALU is more robust than LU, but given a non-singular matrix $A$, will it always be possible to perform the PALU decomposition? <div id='cholesky' /> Cholesky Back to TOC This is another dir...
""" Randomly generates an nxn symmetric positive- definite matrix A. """ def generate_spd_matrix(n, flag=True): if flag: A = np.random.random((n,n)) # Constructing symmetry A += A.T # A = np.dot(A.T,A) # Another way #symmetric+diagonally dominant -> symmetric positive-defini...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
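Once a symmetric positive-definite matrix is in hand, NumPy's built-in routine returns the lower-triangular Cholesky factor $L$ with $A = L L^T$, which we can verify directly. A small sketch using the same symmetric-plus-diagonally-dominant construction idea as above.

```python
import numpy as np

rng = np.random.RandomState(0)
B = rng.random((5, 5))
A = B @ B.T + 5 * np.identity(5)   # symmetric positive-definite by construction

L = np.linalg.cholesky(A)          # lower-triangular factor
print(np.allclose(L @ L.T, A))
```

`np.linalg.cholesky` raises `LinAlgError` if the matrix is not positive-definite, so it also serves as a cheap SPD test.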
<div id='im' /> Iterative Methods Back to TOC
""" Randomly generates an nxn strictly diagonally dominant matrix A. """ def generate_dd_matrix(n): A = np.random.random((n,n)) deltas = 0.1*np.random.random(n) row_sum = A.sum(axis=1)-np.diag(A) np.fill_diagonal(A, row_sum+deltas) return A """ Computes relative error between each row on X matrix...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
<div id='jacobi' /> Jacobi Back to TOC
""" Iterative methods implementations returns an array X with the the solutions at each iteration """ def jacobi(A, b, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices D = np.diag(A) Dinv = D**-1 R = A - np.diag...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
$\mathbf{x}_{n+1}=M\,\mathbf{x}_{n}+\widehat{\mathbf{b}}$ Now let's solve the same linear system with the Jacobi method!
jac_sol = jacobi(A,b, n_iter=50) jac_err = error(jac_sol, np_sol) it = np.linspace(1, 50, 50) plt.figure(figsize=(12,6)) plt.semilogy(it, jac_err, marker='o', linestyle='--', color='b') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for Jacobi method') plt.show() Mj = jaco...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
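Jacobi converges for any initial guess exactly when the spectral radius of its iteration matrix $M = -D^{-1}(L+U)$ is below 1, and strict diagonal dominance guarantees this. A quick sketch checking that condition on a randomly generated diagonally dominant matrix:

```python
import numpy as np

rng = np.random.RandomState(1)
A = rng.random((6, 6))
np.fill_diagonal(A, A.sum(axis=1))   # make A strictly diagonally dominant

D = np.diag(np.diag(A))
R = A - D                            # off-diagonal part, i.e. L + U
M = -np.linalg.solve(D, R)           # iteration matrix -D^{-1}(L + U)
rho = np.abs(np.linalg.eigvals(M)).max()
print(rho < 1)                       # guaranteed here by diagonal dominance
```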
<div id='gaussseidel' /> Gauss Seidel Back to TOC
def gauss_seidel(A, b, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices R = np.tril(A) # R=(L+D) U = A-R for i in range(1, n_iter): #X[i] = solve_triangular(R, b-np.dot(U, X[i-1]), upper=False) #...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
Now let's solve the same linear system with the Gauss-Seidel method!
gauss_seidel_sol = gauss_seidel(A,b) gauss_seidel_err = error(gauss_seidel_sol, np_sol) plt.figure(figsize=(12,6)) plt.semilogy(it, gauss_seidel_err, marker='o', linestyle='--', color='r') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for Gauss method') plt.show()
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
<div id='sor' /> SOR Back to TOC
def sor(A, b, w=1.05, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices R = np.tril(A) #R=(L+D) U = A-R # v1.11 L = np.tril(A,-1) D = np.diag(np.diag(A)) M = L+D/w for i in range(1, n_iter): ...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
Here are some questions about SOR: - Why can averaging the current solution with the Gauss-Seidel solution improve convergence? - Why do we use $\omega > 1$ and not $\omega < 1$? - Could you describe a method to find the best value of $\omega$ (the one which optimizes convergence)? - Would it be a better option to re-c...
plt.figure(figsize=(12,6)) plt.semilogy(it, jac_err, marker='o', linestyle='--', color='b', label='Jacobi') plt.semilogy(it, gauss_seidel_err, marker='o', linestyle='--', color='r', label='Gauss-Seidel') plt.semilogy(it, sor_err, marker='o', linestyle='--', color='g', label='SOR') plt.grid(True) plt.xlabel('Iterations'...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
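One brute-force answer to the question of finding the best $\omega$ is to scan a grid of values and pick the one minimizing the spectral radius of the SOR iteration matrix $M(\omega) = (D + \omega L)^{-1}\left((1-\omega)D - \omega U\right)$, since a smaller radius means faster asymptotic convergence. A sketch on a random diagonally dominant matrix:

```python
import numpy as np

rng = np.random.RandomState(2)
A = rng.random((6, 6))
np.fill_diagonal(A, A.sum(axis=1))      # diagonally dominant test matrix

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

def spectral_radius(w):
    # SOR iteration matrix M(w) = (D + w L)^{-1} ((1 - w) D - w U)
    M = np.linalg.solve(D + w * L, (1 - w) * D - w * U)
    return np.abs(np.linalg.eigvals(M)).max()

omegas = np.linspace(0.1, 1.9, 37)      # SOR requires 0 < w < 2
radii = [spectral_radius(w) for w in omegas]
best = omegas[int(np.argmin(radii))]
print(best, min(radii))
```

The grid includes $\omega = 1$ (plain Gauss-Seidel), so for this matrix the minimum radius is guaranteed to be below 1.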
<div id='examples' /> Examples Back to TOC <div id='hilbertMatrix' /> Hilbert Matrix Back to TOC
N=20 F_errors=np.zeros(N+1) B_errors=np.zeros(N+1) kappas=np.zeros(N+1) my_range=np.arange(5,N+1) for n in my_range: A=hilbert(n) x_exact=np.ones(n) b=np.dot(A,x_exact) x=np.linalg.solve(A,b) F_errors[n]=np.linalg.norm(x-x_exact)/np.linalg.norm(x_exact) kappas[n]=np.linalg.cond(A,2) B_errors...
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
Recall: $\dfrac{1}{\kappa(A)}\dfrac{\|\mathbf{b}-A\,\mathbf{x}_a\|}{\|\mathbf{b}\|} \leq \dfrac{\|\mathbf{x}-\mathbf{x}_a\|}{\|\mathbf{x}\|} \leq \kappa(A)\,\dfrac{\|\mathbf{b}-A\,\mathbf{x}_a\|}{\|\mathbf{b}\|}$ Let's solve a linear system of equations with $H_{200}$:
n = 200 # Generating matrix A = hilbert(n) # Defining the 'exact' solution x_exact = np.ones(n) # If we know the exact solution, we can compute the RHS just by multiplying 'A' by 'x_exact' b = A @ x_exact # Using the NumPy routine to solve the linear system of equations. x = np.linalg.solve(A,b) # A.x = A.1 = b
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause
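The Hilbert matrix $H_{ij} = 1/(i+j+1)$ (0-based indices) is the classic ill-conditioned example: $\kappa(H_n)$ grows roughly exponentially with $n$, quickly exhausting double precision. A self-contained sketch (building $H_n$ with NumPy rather than `scipy.linalg.hilbert`, so the snippet has no SciPy dependency):

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix H[i, j] = 1 / (i + j + 1), 0-based indices."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

# Condition numbers explode as n grows.
kappas = [np.linalg.cond(hilbert(n)) for n in (4, 8, 12)]
print([f"{k:.1e}" for k in kappas])
```

By $n \approx 13$ the condition number reaches $\sim 10^{16}$, the limit of double-precision arithmetic, which is why the forward errors observed above stop being meaningful.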
Now, we compute the condition number of $A=H_{200}$
kappa=np.linalg.cond(A,2) print(np.log10(kappa))
SC1v2/04a_linear_systems_of_equations.ipynb
tclaudioe/Scientific-Computing
bsd-3-clause