Dataset schema: markdown (string, lengths 0–1.02M) · code (string, lengths 0–832k) · output (string, lengths 0–1.02M) · license (string, lengths 3–36) · path (string, lengths 6–265) · repo_name (string, lengths 6–127)
Let's first compute the regular correlation function. We'll need some radial bins. We'll also need to tell Corrfunc that we're working with a periodic box, and the number of parallel threads. Then we can go ahead and compute the real-space correlation function xi(r) from the pair counts, DD(r) (documentation here: http...
rmin = 40.0
rmax = 150.0
nbins = 22
r_edges = np.linspace(rmin, rmax, nbins+1)
r_avg = 0.5*(r_edges[1:]+r_edges[:-1])
periodic = True
nthreads = 1
dd_res = DD(1, nthreads, r_edges, x, y, z, boxsize=boxsize, periodic=periodic)
dr_res = DD(0, nthreads, r_edges, x, y, z, X2=x_rand, Y2=y_rand, Z2=z_rand, boxsize=boxsize, ...
_____no_output_____
MIT
example_theory.ipynb
abbyw24/Corrfunc
We can use these pair counts to compute the Landy-Szalay 2pcf estimator (Landy & Szalay 1993). Let's define a function, as we'll want to reuse this:
def landy_szalay(nd, nr, dd, dr, rr):
    # Normalize the pair counts
    dd = dd/(nd*nd)
    dr = dr/(nd*nr)
    rr = rr/(nr*nr)
    xi_ls = (dd-2*dr+rr)/rr
    return xi_ls
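As a quick sanity check of the estimator (toy counts, not the catalog above; the function is repeated so the snippet is self-contained): if the normalized DD, DR, and RR counts all agree, xi should come out to zero.

```python
import numpy as np

def landy_szalay(nd, nr, dd, dr, rr):
    # Normalize the pair counts
    dd = dd/(nd*nd)
    dr = dr/(nd*nr)
    rr = rr/(nr*nr)
    return (dd - 2*dr + rr)/rr

# Toy pair counts: an unclustered field with nd = nr should give xi = 0
nd, nr = 100, 100
dd = np.array([40., 60.])
dr = np.array([40., 60.])
rr = np.array([40., 60.])
print(landy_szalay(nd, nr, dd, dr, rr))  # -> [0. 0.]
```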
Let's unpack the pair counts from the Corrfunc results object, and plot the resulting correlation function: (Note that if you use weights, you need to multiply by the 'weightavg' column.)
dd = np.array([x['npairs'] for x in dd_res], dtype=float) dr = np.array([x['npairs'] for x in dr_res], dtype=float) rr = np.array([x['npairs'] for x in rr_res], dtype=float) xi_ls = landy_szalay(nd, nr, dd, dr, rr) plt.figure(figsize=(8,5)) plt.plot(r_avg, xi_ls, marker='o', ls='None', color='grey', label='Standard est...
Great, we can even see the baryon acoustic feature at ~100 $h^{-1}$Mpc! Continuous-function estimator: Tophat basis Now we'll use the continuous-function estimator to compute the same correlation function, but in a continuous representation. First we'll use a tophat basis, to achieve the equivalent (but more correct!...
proj_type = 'tophat'
nprojbins = nbins
Currently the continuous-function estimator is only implemented in DD(s,mu) ('DDsmu'), the redshift-space correlation function, which bins pairs by the separation s and by mu, the cosine of the angle to the line of sight. But we can simply set the number of mu bins to 1, and mumax to 1 (the maximum of the cosine), to achieve the equivalent of ...
nmubins = 1
mumax = 1.0
Then we just need to give Corrfunc all this info, and unpack the continuous results! The first returned object is still the regular Corrfunc results object (we could have just used this in our above demo of the standard result).
dd_res, dd_proj, _ = DDsmu(1, nthreads, r_edges, mumax, nmubins, x, y, z, boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins)
dr_res, dr_proj, _ = DDsmu(0, nthreads, r_edges, mumax, nmubins, x, y, z, X2=x_rand, Y2=y_rand, Z2=z_rand, boxsiz...
We can now compute the amplitudes of the correlation function from these continuous pair counts. The compute_amps function uses the Landy-Szalay formulation of the estimator, but adapted for continuous bases. (Note that you have to pass some values twice, as this is flexible enough to translate to cross-correlations be...
amps = compute_amps(nprojbins, nd, nd, nr, nr, dd_proj, dr_proj, dr_proj, rr_proj, qq_proj)
Computing amplitudes (Corrfunc/utils.py)
With these amplitudes, we can evaluate our correlation function at any set of radial separations! Let's make a fine-grained array and evaluate. We need to pass 'nprojbins' and 'proj_type'. Because we will be evaluating our tophat function at the new separations, we also need to give it the original bins.
r_fine = np.linspace(rmin, rmax, 2000)
xi_proj = evaluate_xi(nprojbins, amps, len(r_fine), r_fine, nbins, r_edges, proj_type)
Evaluating xi (Corrfunc/utils.py)
Let's check out the results, compared with the standard estimator!
plt.figure(figsize=(8,5))
plt.plot(r_fine, xi_proj, color='steelblue', label='Tophat estimator')
plt.plot(r_avg, xi_ls, marker='o', ls='None', color='grey', label='Standard estimator')
plt.xlabel(r'r ($h^{-1}$Mpc)')
plt.ylabel(r'$\xi$(r)')
plt.legend()
We can see that we're getting "the same" result, but continuously, with the hard bin edges made clear. Analytically computing the random term Because we're working with a periodic box, we don't actually need a random catalog. We can analytically compute the RR term, as well as the QQ matrix. We'll need the volume of the...
volume = boxsize**3
rr_ana, qq_ana = qq_analytic(rmin, rmax, nd, volume, nprojbins, nbins, r_edges, proj_type)
Evaluating qq_analytic (Corrfunc/utils.py)
We also don't need to use the Landy-Szalay estimator (we don't have a DR term!). To get the amplitudes we can just use the naive estimator, $\frac{\text{DD}}{\text{RR}}-1$. In our formulation, the RR term in the denominator becomes the inverse QQ term, so we have QQ$^{-1}$ $\cdot$ (DD-RR).
numerator = dd_proj - rr_ana
amps_ana, *_ = np.linalg.lstsq(qq_ana, numerator, rcond=None)  # Use linalg.lstsq instead of actually computing inverse!
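On a small toy system we can confirm that `np.linalg.lstsq` reproduces the explicit-inverse solution QQ$^{-1}$ $\cdot$ (DD-RR); the matrix below is a stand-in for illustration, not the actual QQ.

```python
import numpy as np

# Toy symmetric positive-definite "QQ" matrix and a toy numerator vector
qq = np.array([[4.0, 1.0],
               [1.0, 3.0]])
num = np.array([1.0, 2.0])

amps_lstsq, *_ = np.linalg.lstsq(qq, num, rcond=None)  # least-squares solve
amps_inv = np.linalg.inv(qq) @ num                     # explicit inverse

print(np.allclose(amps_lstsq, amps_inv))  # -> True
```

For well-conditioned square systems both agree, but `lstsq` avoids forming the inverse and is numerically safer.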
Now we can go ahead and evaluate the correlation function at our fine separations.
xi_ana = evaluate_xi(nbins, amps_ana, len(r_fine), r_fine, nbins, r_edges, proj_type)
Evaluating xi (Corrfunc/utils.py)
We'll compare this to computing the analytic correlation function with standard Corrfunc:
xi_res = Corrfunc.theory.xi(boxsize, nthreads, r_edges, x, y, z) xi_theory = np.array([x['xi'] for x in xi_res], dtype=float) plt.figure(figsize=(8,5)) plt.plot(r_fine, xi_ana, color='blue', label='Tophat basis') plt.plot(r_avg, xi_theory, marker='o', ls='None', color='grey', label='Standard Estimator') plt.xlabel(r'r ...
Once again, the standard and continuous correlation functions line up exactly. The correlation function looks smoother, as we didn't have to deal with a non-exact random catalog to estimate the window function. Continuous-function estimator: Cubic spline basis Now we can make things more interesting! Let's choose a cu...
proj_type = 'generalr'
kwargs = {'order': 3}  # 3: cubic spline
projfn = 'quadratic_spline.dat'
nprojbins = int(nbins/2)
spline.write_bases(rmin, rmax, nprojbins, projfn, ncont=1000, **kwargs)
Let's check out the basis functions:
bases = np.loadtxt(projfn)
bases.shape
r = bases[:,0]
plt.figure(figsize=(8,5))
for i in range(1, len(bases[0])):
    plt.plot(r, bases[:,i], color='red', alpha=0.5)
plt.xlabel(r'r ($h^{-1}$Mpc)')
The bases on the ends are different so that they have the same normalization. We'll use the analytic version of the estimator, making sure to pass the basis file:
dd_res_spline, dd_spline, _ = DDsmu(1, nthreads, r_edges, mumax, nmubins, x, y, z, boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins, projfn=projfn)
volume = boxsize**3
# nbins and r_edges won't be used here because we passed projfn, but they're needed for compatib...
Evaluating qq_analytic (Corrfunc/utils.py) Evaluating xi (Corrfunc/utils.py)
Let's compare the results:
plt.figure(figsize=(8,5))
plt.plot(r_fine, xi_ana_spline, color='red', label='Cubic spline basis')
plt.plot(r_fine, xi_ana, color='blue', label='Tophat basis')
plt.plot(r_avg, xi_theory, marker='o', ls='None', color='grey', label='Standard estimator')
plt.xlabel(r'r ($h^{-1}$Mpc)')
plt.ylabel(r'$\xi$(r)')
plt.legend()
We can see that the spline basis function produced a completely smooth correlation function; no hard-edged bins! It also captured that baryon acoustic feature (which we expect to be a smooth peak). This basis function is a bit noisy and likely has some non-physical features - but so does the tophat / standard basis! In ...
os.remove(projfn)
The IPython magic line below will convert this notebook to a regular Python script.
#!jupyter nbconvert --to script example_theory.ipynb
Data Analysis - FIB-SEM Datasets. Goal: identify changes that occurred across different time points
import os, sys, glob
import re
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
import matplotlib.pyplot as plt
import pprint
_____no_output_____
CC-BY-4.0
data_inference_3.ipynb
eufmike/fibsem_seg_dl
01 Compile data into single .csv file for each label
mainpath = 'D:\PerlmutterData'
folder = 'segmentation_compiled_export'
data_folder = 'data'
path = os.path.join(mainpath, folder, data_folder)
print(path)
folders = ['cell_membrane', 'nucleus', 'mito', 'cristae', 'inclusion', 'ER']
target_list = glob.glob(os.path.join(path, 'compile', '*.csv'))
target_list = [os.path...
02 Load data 02-01 Calculate mean and total volume for mito, cristae, ER and inclusion
df_mito = pd.read_csv(os.path.join(path, 'compile', 'mito' + '.csv'))
df_mito['Volume3d_µm^3'] = df_mito['Volume3d']/1e9
df_mito['Area3d_µm^2'] = df_mito['Area3d']/1e6
df_mito_sum_grouped = df_mito.groupby(['day', 'filename']).sum().reset_index()
df_mito_mean_grouped = df_mito.groupby(['day', 'filename']).mean().reset...
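The groupby-sum / groupby-mean pattern above can be illustrated on a toy table (the values here are hypothetical, not from the FIB-SEM data):

```python
import pandas as pd

# Toy per-object table: a few objects per (day, filename) group
df = pd.DataFrame({
    'day':           [0, 0, 0, 7],
    'filename':      ['a', 'a', 'b', 'c'],
    'Volume3d_µm^3': [1.0, 2.0, 4.0, 8.0],
})

# Total volume per file, and mean object volume per file
total = df.groupby(['day', 'filename']).sum().reset_index()
mean  = df.groupby(['day', 'filename']).mean().reset_index()

print(total['Volume3d_µm^3'].tolist())  # -> [3.0, 4.0, 8.0]
print(mean['Volume3d_µm^3'].tolist())   # -> [1.5, 4.0, 8.0]
```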
02-02 Calculate the total volume for cell membrane and nucleus
df_nucleus = pd.read_csv(os.path.join(path, 'compile', 'nucleus' + '.csv')) df_nucleus['Volume3d_µm^3'] = df_nucleus['Volume3d']/1e9 df_nucleus['Area3d_µm^2'] = df_nucleus['Area3d']/1e6 df_nucleus_sum_grouped = df_nucleus.groupby(['day', 'filename']).sum().reset_index() df_cell_membrane = pd.read_csv(os.path.join(path,...
02-03 Calculate the volume of cytoplasm
df_cyto = pd.DataFrame()
df_cyto['filename'] = df_cell_membrane_sum_grouped['filename']
df_cyto['Volume3d_µm^3'] = df_cell_membrane_sum_grouped['Volume3d_µm^3'] - df_nucleus_sum_grouped['Volume3d_µm^3']
display(df_cyto)
02-03 Omit unhealthy data or data with poor quality
omit_data = ['data_d00_batch02_loc02', 'data_d17_batch01_loc01_01', 'data_d17_batch01_loc01_02']
for omit in omit_data:
    df_mito = df_mito.loc[df_mito['filename'] != omit + '_mito.csv']
    df_mito_sum_grouped = df_mito_sum_grouped.loc[df_mito_sum_grouped['filename'] != omit + '_mito.csv']
    ...
02-04 Compile total volume of mito, cristae, ER and inclusion into one table: 1. raw value, 2. normalized by the total volume of cytoplasm
df_sum_compiled = pd.DataFrame() df_sum_compiled[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']] df_sum_compiled['day'] = df_sum_compiled['day'].astype('int8') df_sum_compiled[['mito_Volume3d_µm^3', 'mito_Area3d_µm^2']] = df_mito_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_sum_compiled[['c...
02-05 Compile mean volume of mito, cristae, ER and inclusion into one table: 1. raw value, 2. normalized by the total volume of cytoplasm
df_mean_compiled = pd.DataFrame() df_mean_compiled[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']] df_mean_compiled['day'] = df_mean_compiled['day'].astype('int8') df_mean_compiled[['mito_Volume3d_µm^3', 'mito_Area3d_µm^2']] = df_mito_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']] df_mean_compi...
02-06 Distribution
# mito maxval = df_mito['Volume3d_µm^3'].max() minval = df_mito['Volume3d_µm^3'].min() print(maxval) print(minval) bins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 100) # bins = np.linspace(500000000, minval + (maxval - minval)* 1, num = 50) days = [0, 7, 14, 21] nrows = 4 ncols =...
261.036 1e-06
03 Load Data from Auto Skeletonization of Mitochondria 03-01
mainpath = 'D:\PerlmutterData' folder = 'segmentation_compiled_export' data_folder = 'data' path = os.path.join(mainpath, folder, data_folder) print(path) folders = ['skeleton_output'] subcat = ['nodes', 'points', 'segments_s'] target_list = glob.glob(os.path.join(path, 'compile', '*.csv')) target_list = [os.path.bas...
04 Average of total size
mito_mean = df_mito_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index() mito_sem = df_mito_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index() mito_sem = mito_sem.fillna(0) mito_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_mean.csv')) mito_...
Harvesting data from Home. This is an example of how my original recipe for [harvesting data from The Bulletin](Harvesting-data-from-the-Bulletin.ipynb) can be modified for other journals. If you'd like a pre-harvested dataset of all the Home covers (229 images in a 3.3gb zip file), open this link using your preferred Bi...
# Let's import the libraries we need.
import requests
from bs4 import BeautifulSoup
import time
import json
import os
import re

# Create a directory for this journal
# Edit as necessary for a new journal
data_dir = '../../data/Trove/Home'
os.makedirs(data_dir, exist_ok=True)
_____no_output_____
MIT
Trove/Cookbook/Harvesting-data-from-the-Home.ipynb
wragge/ozglam-workbench
Getting the issue data. Each issue of a digitised journal like Home has its own unique identifier. You've probably noticed them in the urls of Trove resources. They look something like this: `nla.obj-362409353`. Once we have the identifier for an issue we can easily download the contents, but how do we get a complete list of...
# This is just the url we found above, with a slot into which we can insert the startIdx value # If you want to download data from another journal, just change the nla.obj identifier to point to the journal. start_url = 'https://nla.gov.au/nla.obj-362409353/browse?startIdx={}&rows=20&op=c' # The initial startIdx value ...
Cleaning up the metadata. So far we've just grabbed the complete issue details as a single string. It would be good to parse this string so that we have the dates, volume and issue numbers in separate fields. As is always the case, there's a bit of variation in the way this information is recorded. The code below tries ...
import arrow
from arrow.parser import ParserError

issues_data = []
# Loop through the issues
for issue in issues:
    issue_data = {}
    issue_data['id'] = issue['id']
    issue_data['pages'] = int(issue['pages'])
    print(issue['details'])
    try:
        # This pattern looks for details in the form: Vol. 2 No. 3 ...
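A minimal sketch of the kind of pattern matching described above; the regex and the sample string here are illustrative assumptions, not the notebook's actual pattern:

```python
import re

# Hypothetical details string in the form the text describes: "Vol. 2 No. 3 ..."
details = 'Vol. 2 No. 3 (6 Feb 1929)'

# Capture the volume and issue numbers
match = re.search(r'Vol\.?\s*(\d+)\s*No\.?\s*(\d+)', details)
if match:
    volume = int(match.group(1))
    number = int(match.group(2))

print(volume, number)  # -> 2 3
```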
Save as CSV. Now the issues data is in a nice, structured form, we can load it into a Pandas dataframe. This allows us to do things like find the total number of pages digitised. We can also save the metadata as a CSV.
import pandas as pd

# Convert issues metadata into a dataframe
df = pd.DataFrame(issues_data, columns=['id', 'label', 'volume', 'number', 'date', 'pages'])
# Find the total number of pages
df['pages'].sum()
# Save metadata as a CSV.
df.to_csv('{}/home_issues.csv'.format(data_dir), index=False)
Download front covers. Options for downloading images, PDFs and text are described in the [harvesting data from the Bulletin](Harvesting-data-from-the-Bulletin.ipynb) notebook. In this recipe we'll just download the front covers (because they're awesome). The code below checks to see if an image has already been saved be...
import zipfile
import io

# Prepare a directory to save the images into
output_dir = data_dir + '/images'
os.makedirs(output_dir, exist_ok=True)
# Loop through the issue metadata
for issue in issues_data:
    print(issue['id'])
    id = issue['id']
    # Check to see if the first page of this issue has already been down...
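The check-before-downloading step can be sketched like this; the filename pattern is an assumption for illustration, not necessarily the one Trove uses:

```python
import os

def needs_download(output_dir, issue_id):
    """Return True unless the first-page image for this issue is already on disk.
    The '{id}-1.jpg' filename pattern is hypothetical."""
    filename = os.path.join(output_dir, '{}-1.jpg'.format(issue_id))
    return not os.path.exists(filename)

# Usage sketch:
# if needs_download(output_dir, issue['id']):
#     ... fetch and save the cover image ...
```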
"Old Skool Image Classification"> "A blog on how to manually create features from an Image for a classification task."- toc: true- branch: master- badges: true- comments: false- categories: [CV, image classification, feature engineering, pyTorch, CIFAR10]- image: images/blog1.png- hide: false- search_exclude: true Int...
%matplotlib inline
from torchvision import datasets
import PIL
from skimage.feature import local_binary_pattern, greycomatrix, greycoprops
from skimage.filters import gabor
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader
import torch.nn.functional as F
import numpy as np
imp...
_____no_output_____
Apache-2.0
_notebooks/2020-06-30-Classical.ipynb
S-B-Iqbal/Reflexione
Data Loading
#collapse #collapse-output trainDset = datasets.CIFAR10(root="./cifar10/", train=True, download=True) testDset = datasets.CIFAR10(root = "./cifar10/", train=False, download=True) # Looking at a single image #collapse img = trainDset[0][0] # PIL Image img_grey = img.convert('L') # convert to Grey-scale img_arr = np.ar...
Local Binary Patterns (LBP). LBP is helpful in extracting the "local" structure of the image. It does so by encoding the local neighbourhood after it has been maximally simplified, i.e. binarized. If we want to perform LBP on a coloured image, we need to do so individually on each channel (Red/Blue/Green).
#collapse
feat_lbp = local_binary_pattern(img_arr, 8, 1, 'uniform')
feat_lbp = np.uint8((feat_lbp/feat_lbp.max())*255)  # converting to uint8
lbp_img = PIL.Image.fromarray(feat_lbp)  # Convert from array
plt.imshow(lbp_img, cmap='gray')

# Energy, Entropy
def get_lbp(img):
    """Function to implement Local Binary Patte...
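The energy and entropy features mentioned above can be computed from the normalized histogram of the LBP image. A minimal sketch in NumPy; the 256-bin choice is an assumption for uint8 LBP codes:

```python
import numpy as np

def lbp_energy_entropy(feat_lbp, n_bins=256):
    """Energy and Shannon entropy of an LBP image's normalized histogram (sketch)."""
    hist, _ = np.histogram(feat_lbp, bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()                              # normalized histogram
    energy = np.sum(p**2)                              # sum of squared probabilities
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))    # ignore empty bins
    return energy, entropy

# A constant image occupies a single bin: energy 1, entropy 0
e, h = lbp_energy_entropy(np.zeros((8, 8)))
print(e, h)
```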
Co-Occurrence Matrix. Intuitively, if we were to extract information of a pixel in an image and also record its neighbouring pixels and their intensities, we would be able to capture both spatial and relative information. This is where co-occurrence matrices are useful. They extract the representation of joint probability of...
def creat_cooccur(img_arr, *args, **kwargs): """Implements extraction of features from Co-Occurance Matrix""" gCoMat = greycomatrix(img_arr, [2], [0], 256, symmetric=True, normed=True) contrast = greycoprops(gCoMat, prop='contrast') dissimilarity = greycoprops(gCoMat, prop='dissimilarity') homogeneity = greyc...
[Gabor Filter](https://en.wikipedia.org/wiki/Gabor_filter#Applications_of_2-D_Gabor_filters_in_image_processing)
gf_real, gf_img = gabor(img_arr, frequency=0.6) gf =(gf_real**2 + gf_img**2)//2 # Displaying the filter response fig, ax = plt.subplots(1,3) ax[0].imshow(gf_real,cmap='gray') ax[1].imshow(gf_img,cmap='gray') ax[2].imshow(gf,cmap='gray') def get_gabor(img, N, *args, **kwargs): """Gabor Feature extraction""" gf_...
Feature Extraction
# Generate Training Data # Extract features from all images label = [] featLength = 2+5+2 # LBP, Co-occurance, Gabor trainFeats = np.zeros((len(trainDset), featLength)) testFeats = np.zeros((len(testDset), featLength)) label = [trainDset[tr][1] for tr in tqdm.tqdm_notebook(range(len(trainFeats)))] trainLabel = np.ar...
Normalize Features
# Normalizing the train features to the range [0,1]
trMaxs = np.amax(trainFeats, axis=0)  # Finding maximum along each column
trMins = np.amin(trainFeats, axis=0)  # Finding minimum along each column
trMaxs_rep = np.tile(trMaxs, (50000, 1))  # Repeating the maximum value along the rows
trMins_rep = np.tile(trMins, (50000, 1))  # Rep...
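The same min-max scaling can be written with NumPy broadcasting instead of `np.tile`; a toy sketch with hypothetical values:

```python
import numpy as np

# Toy feature matrix: rows are samples, columns are features
feats = np.array([[1.0, 10.0],
                  [3.0, 20.0],
                  [5.0, 40.0]])

mins = feats.min(axis=0)                      # per-column minimum
maxs = feats.max(axis=0)                      # per-column maximum
feats_norm = (feats - mins) / (maxs - mins)   # broadcasting replaces np.tile

print(feats_norm[:, 0].tolist())  # -> [0.0, 0.5, 1.0]
```

Broadcasting avoids materialising the tiled 50000-row copies of the min/max vectors.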
Save Data
with open("TrainFeats.pckl", "wb") as f:
    pickle.dump(trainFeatsNorm, f)
with open("TrainLabel.pckl", "wb") as f:
    pickle.dump(trainLabel, f)
with open("TestFeats.pckl", "wb") as f:
    pickle.dump(testFeatsNorm, f)
with open("TestLabel.pckl", "wb") as f:
    pickle.dump(testLabel, f)
print("files Saved!")
Classification with SoftMax Regression Data Preparation
########################## ### SETTINGS ########################## # Device DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Hyperparameters random_seed = 123 learning_rate = 0.01 num_epochs = 100 batch_size = 64 # Architecture num_features = 9 num_classes = 10 ########################## ##...
Define Model
##########################
### MODEL
##########################

class SoftmaxRegression(torch.nn.Module):
    def __init__(self, num_features, num_classes):
        super(SoftmaxRegression, self).__init__()
        self.linear = torch.nn.Linear(num_features, num_classes)
        # self.linear.weight.detach()...
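The forward pass of this model is just a linear layer followed by a softmax over the class logits. A plain NumPy sketch of that mapping; the weights below are random placeholders, not the trained model's:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
W = rng.normal(size=(9, 10))   # hypothetical weights: 9 features -> 10 classes
b = np.zeros(10)
x = rng.normal(size=(4, 9))    # a batch of 4 feature vectors

probas = softmax(x @ W + b)
print(probas.shape)            # (4, 10); each row sums to 1
```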
Define Training Routine
# Manual seed for deterministic data loader torch.manual_seed(random_seed) def compute_accuracy(model, data_loader): correct_pred, num_examples = 0, 0 for features, targets in data_loader: features = features.float().view(-1, 9).to(DEVICE) targets = targets.to(DEVICE) logits, prob...
Model Performance
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt

plt.plot(epoch_costs)
plt.ylabel('Avg Cross Entropy Loss\n(approximated by averaging over minibatches)')
plt.xlabel('Epoch')
plt.show()

print(f'Train accuracy: {(compute_accuracy(model, train_loader)): .2f}%')
print(f'Test accuracy: {(compute_accur...
Train accuracy: 24.92% Test accuracy: 25.29%
Comments: - This was a demonstration of how we can use manually crafted features in Image Classification tasks. - The model can be improved in several ways: - Tweaking the parameters to modify features generated for **_LBP, Co-Occurrence Matrix and Gabor Filter_** - Extending the parameters for Red, Blue and Green ch...
TensorFlow 2.0+ Low Level APIs Convert Example. This example demonstrates the workflow to build a model using TensorFlow 2.0+ low-level APIs and convert it to the Core ML `.mlmodel` format using the `coremltools.converters.tensorflow` converter. For more examples, refer to the `test_tf_2x.py` file. Note: - This notebook was tested wit...
import tensorflow as tf
import numpy as np
import coremltools

print(tf.__version__)
print(coremltools.__version__)
WARNING: Logging before flag parsing goes to stderr. W1101 14:02:33.174557 4762860864 __init__.py:74] TensorFlow version 2.0.0 detected. Last version known to be fully compatible is 1.14.0 .
BSD-3-Clause
examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb
DreamChaserMXF/coremltools
Using Low-Level APIs
# construct a toy model with low level APIs root = tf.train.Checkpoint() root.v1 = tf.Variable(3.) root.v2 = tf.Variable(2.) root.f = tf.function(lambda x: root.v1 * root.v2 * x) # save the model saved_model_dir = './tf_model' input_data = tf.constant(1., shape=[1, 1]) to_save = root.f.get_concrete_function(input_data...
0 assert nodes deleted ['Func/StatefulPartitionedCall/input/_2:0', 'StatefulPartitionedCall/mul/ReadVariableOp:0', 'statefulpartitionedcall_args_1:0', 'Func/StatefulPartitionedCall/input/_3:0', 'StatefulPartitionedCall/mul:0', 'StatefulPartitionedCall/ReadVariableOp:0', 'statefulpartitionedcall_args_2:0'] 6 nodes delet...
Using Control Flow
# construct a TensorFlow 2.0+ model with tf.function()
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def control_flow(x):
    if x <= 0:
        return 0.
    else:
        return x * 3.

to_save = tf.Module()
to_save.control_flow = control_flow
saved_model_dir = './tf_model'
tf.saved_model.save(to_sa...
Using `tf.keras` Subclassing APIs
class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.dense1 = tf.keras.layers.Dense(4) self.dense2 = tf.keras.layers.Dense(5) @tf.function def call(self, input_data): return self.dense2(self.dense1(input_data)) keras_model = MyModel() inpu...
0 assert nodes deleted ['my_model/StatefulPartitionedCall/args_3:0', 'Func/my_model/StatefulPartitionedCall/input/_2:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_11:0', 'my_model/StatefulPartitionedCall/args_4:0', 'Func/my_model/StatefulPartitionedCall/input/_4:0', 'Func/my_model/StatefulPa...
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand...
# consider the following list with 4 elements
L = [1,-2,0,5]
print(L)
_____no_output_____
Apache-2.0
math/Math20_Vectors.ipynb
QPoland/basics-of-quantum-computing-pl
Vectors can be in horizontal or vertical shape. We show this list as a four dimensional row vector (horizontal) or a column vector (vertical): $$ u = \mypar{1~~-2~~0~~5} ~~~\mbox{ or }~~~ v =\mymatrix{r}{1 \\ -2 \\ 0 \\ 5}, ~~~\mbox{ respectively.}$$ Remark that we do not need to use any comma in vector representation...
# 3 * v
v = [1,-2,0,5]
print("v is",v)
# we use the same list for the result
for i in range(len(v)):
    v[i] = 3 * v[i]
print("3v is",v)

# -0.6 * u
# reinitialize the list v
v = [1,-2,0,5]
for i in range(len(v)):
    v[i] = -0.6 * v[i]
print("-0.6v is",v)
Summation of vectors. Two vectors (with the same dimension) can be summed up. The summation of two vectors is a vector: the numbers on the same entries are added up. $$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}. ~~~~~~~ \mbox{Then, }~~ u+v = \myrvector{-3 \\ -2 \\ 0 ...
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
result = []
for i in range(len(u)):
    result.append(u[i]+v[i])
print("u+v is",result)

# print the result vector similarly to a column vector
print()  # print an empty line
print("the elements of u+v are")
for j in range(len(result)):
    print(result[j])
Task 1 Create two 7-dimensional vectors $u$ and $ v $ as two different lists in Python having entries randomly picked between $-10$ and $10$. Print their entries.
from random import randrange
#
# your solution is here
#
#r=randrange(-10,11) # randomly pick a number from the list {-10,-9,...,-1,0,1,...,9,10}
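One possible sketch of a solution, following the hint in the cell above:

```python
from random import randrange

# Pick each of the 7 entries uniformly from {-10,...,10}
u = [randrange(-10, 11) for _ in range(7)]
v = [randrange(-10, 11) for _ in range(7)]

print("u is", u)
print("v is", v)
```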
click for our solution Task 2 By using the same vectors, find the vector $ (3 u-2 v) $ and print its entries. Here $ 3u $ and $ 2v $ means $u$ and $v$ are multiplied by $3$ and $2$, respectively.
#
# your solution is here
#
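A sketch of one possible solution (re-creating random u and v here so the snippet is self-contained):

```python
from random import randrange

u = [randrange(-10, 11) for _ in range(7)]
v = [randrange(-10, 11) for _ in range(7)]

# entry-wise 3u - 2v
result = [3*u[i] - 2*v[i] for i in range(7)]
print("3u-2v is", result)
```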
click for our solution Visualization of vectors We can visualize the vectors with dimension at most 3. For simplicity, we give examples of 2-dimensional vectors. Consider the vector $ v = \myvector{1 \\ 2} $. A 2-dimensional vector can be represented on the two-dimensional plane by an arrow starting from the origin $ ...
v = [-1,-3,5,3,1,2]
length_square = 0
for i in range(len(v)):
    print(v[i],":square ->",v[i]**2)  # print each entry and its square value
    length_square = length_square + v[i]**2  # sum up the square of each entry
length = length_square ** 0.5  # take the square root of the summation of the squares of all entries
pri...
Task 3 Let $ u = \myrvector{1 \\ -2 \\ -4 \\ 2} $ be a four dimensional vector.Verify that $ \norm{4 u} = 4 \cdot \norm{u} $ in Python. Remark that $ 4u $ is another vector obtained from $ u $ by multiplying it with 4.
#
# your solution is here
#
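A sketch of one way to verify the identity numerically:

```python
u = [1, -2, -4, 2]
u4 = [4*x for x in u]  # the vector 4u

norm = lambda w: sum(x**2 for x in w) ** 0.5
print(norm(u4), 4*norm(u))  # -> 20.0 20.0, so ||4u|| = 4·||u||
```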
click for our solution Notes: When a vector is multiplied by a number, its length is also multiplied by the same number. But we should be careful with the sign. Consider the vector $ -3 v $. It has the same length as $ 3v $, but its direction is opposite. So, when calculating the length of $ -3 v $, we use absolut...
#
# your solution is here
#
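For instance, we can check numerically that the length of $-3v$ equals $|-3|$ times the length of $v$:

```python
v = [1, -2, 0, 5]
v_scaled = [-3*x for x in v]  # the vector -3v

norm = lambda w: sum(x**2 for x in w) ** 0.5
print(norm(v_scaled), abs(-3) * norm(v))  # the two values agree
```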
External data> Helper functions used to download and extract common time series datasets.
#export from tsai.imports import * from tsai.utils import * from tsai.data.validation import * #export from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df from sktime.utils.validation.panel import check_X from sktime.utils.data_io import TsFileParseException #export from fastai.data.external import...
_____no_output_____
Apache-2.0
nbs/012_data.external.ipynb
clancy0614/tsai
Models using 3D convolutions. This module focuses on preparing the data of the UCF101 dataset to be used with the core functions. Refs: [understanding-1d-and-3d-convolution](https://towardsdatascience.com/understanding-1d-and-3d-convolution-neural-network-keras-9d8f76e29610)
#export import torch import torch.nn as nn import torchvision # used to download the model import torch.nn.functional as F from torch.autograd import Variable import math #export def conv3x3x3(in_channels, out_channels, stride=1): # 3x3x3 convolution with padding return nn.Conv3d( in_channels, ...
Converted 01_dataset_ucf101.ipynb. Converted 02_avi.ipynb. Converted 04_data_augmentation.ipynb. Converted 05_models.ipynb. Converted 06_models-resnet_3d.ipynb. Converted 07_utils.ipynb. Converted 10_run-baseline.ipynb. Converted 11_run-sequence-convlstm.ipynb. Converted 12_run-sequence-3d.ipynb. Converted 14_fastai_se...
Apache-2.0
06_models-resnet_3d.ipynb
andreamunafo/actions-in-videos
OUTDATED, the examples moved to the gallery. See https://empymod.github.io/emg3d-gallery ---- 3D with tri-axial anisotropy: comparison between `emg3d` and `SimPEG`. `SimPEG` is an open source python package for simulation and gradient based parameter estimation in geophysical applications, see https://simpeg.xyz. We can us...
import time import emg3d import discretize import numpy as np import SimPEG, pymatsolver from SimPEG.EM import FDEM from SimPEG import Mesh, Maps from SimPEG.Survey import Data import matplotlib.pyplot as plt from timeit import default_timer from contextlib import contextmanager from datetime import datetime, timedelta...
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Model and survey parameters
# Depths (0 is sea-surface) water_depth = 1000 target_x = np.r_[-500, 500] target_y = target_x target_z = -water_depth + np.r_[-400, -100] # Resistivities res_air = 2e8 res_sea = 0.33 res_back = [1., 2., 3.] # Background in x-, y-, and z-directions res_target = 100. freq = 1.0 src = [-100, 100, 0, 0, -900, -900]
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Mesh and source-field
# skin depth skin_depth = 503/np.sqrt(res_back[0]/freq) print(f"\nThe skin_depth is {skin_depth} m.\n") cs = 100 # 100 m min_width of cells pf = 1.15 # Padding factor x- and y-directions pfz = 1.35 # z-direction npadx = 12 # Nr of padding in x- and y-directions npadz = 9 # z-d...
The skin_depth is 503.0 m. Receiver locations: [-1950. -1850. -1750. -1650. -1550. -1450. -1350. -1250. -1150. -1050. -950. -850. -750. -650. -550. -450. -350. -250. -150. -50. 50. 150. 250. 350. 450. 550. 650. 750. 850. 950. 1050. 1150. 1250. 1350. 1450. 1550. 1650. 1750. ...
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
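The factor 503 in the cell above comes from the standard plane-wave skin-depth approximation. A short derivation, assuming a non-magnetic medium ($\mu = \mu_0$), with resistivity $\rho$ in $\Omega$m and frequency $f$ in Hz:

$$\delta = \sqrt{\frac{2}{\omega \mu_0 \sigma}} = \sqrt{\frac{\rho}{\pi f \mu_0}} \approx 503.3\,\sqrt{\frac{\rho}{f}}\ \text{m}.$$

For $\rho = 1\ \Omega$m and $f = 1$ Hz this gives the $\approx 503$ m printed above.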
Create model
# Layered_background res_x = res_air*np.ones(mesh.nC) res_x[mesh.gridCC[:, 2] <= 0] = res_sea res_y = res_x.copy() res_z = res_x.copy() res_x[mesh.gridCC[:, 2] <= -water_depth] = res_back[0] res_y[mesh.gridCC[:, 2] <= -water_depth] = res_back[1] res_z[mesh.gridCC[:, 2] <= -water_depth] = res_back[2] res_x_bg = res_x...
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Calculate `emg3d`
%memit em3_tg = emg3d.solver.solver(mesh, pmodel, sfield, verb=3, nu_pre=0, semicoarsening=True) %memit em3_bg = emg3d.solver.solver(mesh, pmodel_bg, sfield, verb=3, nu_pre=0, semicoarsening=True)
:: emg3d START :: 21:03:43 :: MG-cycle : 'F' sslsolver : False semicoarsening : True [1 2 3] tol : 1e-06 linerelaxation : False [0] maxit : 50 nu_{i,1,c,2} : 0, 0, 1, 2 verb : 3 Original grid : 64 x 64 x 32 => 131,072 cells Coa...
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Calculate `SimPEG`
# Set up the PDE prob = FDEM.Problem3D_e(mesh, sigmaMap=Maps.IdentityMap(mesh), Solver=Solver) # Set up the receivers rx_locs = Mesh.utils.ndgrid([rec_x, np.r_[0], np.r_[-water_depth]]) rx_list = [ FDEM.Rx.Point_e(orientation='x', component="real", locs=rx_locs), FDEM.Rx.Point_e(orientation='x', component="im...
peak memory: 10460.16 MiB, increment: 9731.63 MiB SimPEG runtime: 0:03:53
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Plot result
ix1, ix2 = 12, 12 iy = 32 iz = 13 mesh.vectorCCx[ix1], mesh.vectorCCx[-ix2-1], mesh.vectorNy[iy], mesh.vectorNz[iz] plt.figure(figsize=(9, 6)) plt.subplot(221) plt.title('|Real(response)|') plt.semilogy(rec_x/1e3, np.abs(em3_bg.fx[ix1:-ix2, iy, iz].real)) plt.semilogy(rec_x/1e3, np.abs(em3_tg.fx[ix1:-ix2, iy, iz].rea...
_____no_output_____
Apache-2.0
1c_3D_triaxial_SimPEG.ipynb
empymod/emg3d-examples
Portfolio Optimization The portfolio optimization problem is a combinatorial optimization problem that seeks the optimal combination of assets based on the balance between risk and return. Cost Function The cost function for solving the portfolio optimization problem is $$E = -\sum \mu_i q_i + \gamma \sum \delta_{i,j}q...
import numpy as np from blueqat import vqe from blueqat.pauli import I, X, Y, Z from blueqat.pauli import from_qubo from blueqat.pauli import qubo_bit as q from blueqat import Circuit
_____no_output_____
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
Use the following as return data
asset_return = np.diag([-0.026,-0.031,-0.007,-0.022,-0.010,-0.055]) print(asset_return)
[[-0.026 0. 0. 0. 0. 0. ] [ 0. -0.031 0. 0. 0. 0. ] [ 0. 0. -0.007 0. 0. 0. ] [ 0. 0. 0. -0.022 0. 0. ] [ 0. 0. 0. 0. -0.01 0. ] [ 0. 0. 0. 0. 0. -0.055]]
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
Use the following as risk data
asset_risk = [[0,0.0015,0.0012,0.0018,0.0022,0.0012],[0,0,0.0017,0.0022,0.0005,0.0019],[0,0,0,0.0040,0.0032,0.0024],[0,0,0,0,0.0012,0.0076],[0,0,0,0,0,0.0021],[0,0,0,0,0,0]] np.asarray(asset_risk)
_____no_output_____
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
It is then converted to a Hamiltonian and calculated. In addition, this time there is the constraint of selecting two out of six assets, which is implemented using an XY mixer.
#convert qubo to pauli qubo = asset_return + np.asarray(asset_risk)*0.5 hamiltonian = from_qubo(qubo) init = Circuit(6).x[0,1] mixer = I()*0 for i in range(5): for j in range(i+1, 6): mixer += (X[i]*X[j] + Y[i]*Y[j])*0.5 step = 1 result = vqe.Vqe(vqe.QaoaAnsatz(hamiltonian, step, init, mixer)).run() prin...
_____no_output_____
Apache-2.0
tutorial/318_portfolio.ipynb
Blueqat/blueqat-tutorials
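With only $\binom{6}{2} = 15$ admissible portfolios, the QAOA answer can be cross-checked by brute force. A sketch (not part of the original notebook) that rebuilds the same `qubo` matrix and evaluates $E(q) = q^\top Q q$ for every two-asset selection:

```python
import numpy as np
from itertools import combinations

# Same return and risk data as in the cells above
asset_return = np.diag([-0.026, -0.031, -0.007, -0.022, -0.010, -0.055])
asset_risk = np.array([[0, 0.0015, 0.0012, 0.0018, 0.0022, 0.0012],
                       [0, 0,      0.0017, 0.0022, 0.0005, 0.0019],
                       [0, 0,      0,      0.0040, 0.0032, 0.0024],
                       [0, 0,      0,      0,      0.0012, 0.0076],
                       [0, 0,      0,      0,      0,      0.0021],
                       [0, 0,      0,      0,      0,      0]])
qubo = asset_return + asset_risk * 0.5

def energy(selection):
    # Binary vector q with ones at the selected assets;
    # diagonal terms contribute returns, off-diagonal terms risk
    q = np.zeros(6)
    q[list(selection)] = 1
    return q @ qubo @ q

best = min(combinations(range(6), 2), key=energy)
print(best, energy(best))   # lowest-energy two-asset portfolio: assets (1, 5)
```

The VQE result above should agree with this exhaustive minimum.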
Bootstrap-based confidence intervals
import numpy as np import pandas as pd %pylab inline
Populating the interactive namespace from numpy and matplotlib
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Loading the data. Verizon telecommunications repair times. Verizon is the primary regional telecommunications company (Incumbent Local Exchange Carrier, ILEC) in the western part of the US. Because of this, the company is required to provide telecommunications equipment repair service not only for its own customers, but also for clie...
data = pd.read_csv('verizon.txt', sep='\t') data.shape data.head() data.Group.value_counts() pylab.figure(figsize(12, 5)) pylab.subplot(1,2,1) pylab.hist(data[data.Group == 'ILEC'].Time, bins = 20, color = 'b', range = (0, 100), label = 'ILEC') pylab.legend() pylab.subplot(1,2,2) pylab.hist(data[data.Group == 'CLEC']....
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Bootstrap
def get_bootstrap_samples(data, n_samples): indices = np.random.randint(0, len(data), (n_samples, len(data))) samples = data[indices] return samples def stat_intervals(stat, alpha): boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)]) return boundaries
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
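A quick sanity check of these two helpers on synthetic data (the helpers are repeated here so the sketch is self-contained): bootstrap the median of a known distribution and confirm the 95% interval brackets the true median of 10.

```python
import numpy as np

def get_bootstrap_samples(data, n_samples):
    # Resample the data with replacement, n_samples times
    indices = np.random.randint(0, len(data), (n_samples, len(data)))
    return data[indices]

def stat_intervals(stat, alpha):
    # Percentile interval covering (1 - alpha) of the bootstrap distribution
    return np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])

np.random.seed(0)
data = np.random.normal(loc=10.0, scale=2.0, size=500)
medians = np.array([np.median(s) for s in get_bootstrap_samples(data, 1000)])
lo, hi = stat_intervals(medians, 0.05)
print(lo, hi)   # 95% CI for the median; should bracket 10
```

The interval always brackets the sample median here, and with 500 draws it should also contain the true median.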
Interval estimate of the median
ilec_time = data[data.Group == 'ILEC'].Time.values clec_time = data[data.Group == 'CLEC'].Time.values np.random.seed(0) ilec_median_scores = list(map(np.median, get_bootstrap_samples(ilec_time, 1000))) clec_median_scores = list(map(np.median, get_bootstrap_samples(clec_time, 1000))) print("95% confidence interval for the ILEC me...
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Point estimate of the difference between medians
print("difference between medians:", np.median(clec_time) - np.median(ilec_time))
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Interval estimate of the difference between medians
delta_median_scores = list(map(lambda x: x[1] - x[0], zip(ilec_median_scores, clec_median_scores))) print("95% confidence interval for the difference between medians", stat_intervals(delta_median_scores, 0.05))
_____no_output_____
MIT
StatsForDataAnalysis/stat.bootstrap_intervals.ipynb
alexsubota/PythonScripts
Imports
from geopy.geocoders import Nominatim from geopy.distance import distance from pprint import pprint import pandas as pd import random from typing import List, Tuple from dotenv import dotenv_values random.seed(123) config = dotenv_values(".env")
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Cities
country = "Ukraine" cities = ["Lviv", "Chernihiv", "Dnipropetrovs'k", "Uzhgorod", "Kharkiv", "Odesa", "Poltava", "Kiev", "Zhytomyr", "Khmelnytskyi", "Vinnytsia","Cherkasy", "Zaporizhia", "Ternopil", "Sumy"]
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
1) Get direct (straight-line) distances using the geopy API
def get_distinct_distances(cities: list, country: str) -> pd.DataFrame: df = pd.DataFrame(index = cities, columns= cities) geolocator = Nominatim(user_agent=config["USER_AGENT"], timeout = 10000) coordinates = dict() for city in cities: location = geolocator.geocode(city + " " + country) ...
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
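geopy's `distance` computes a geodesic distance from two coordinate pairs; the idea can be illustrated offline with the simpler haversine (great-circle) formula. The city coordinates below are approximate values assumed for illustration only:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance on a sphere of Earth's mean radius
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

lviv = (49.84, 24.03)   # approximate coordinates, assumed
kiev = (50.45, 30.52)
print(haversine_km(*lviv, *kiev))   # roughly 470 km
```

This straight-line distance is what makes a useful admissible heuristic for A* later: it never exceeds the true road distance.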
Save the file locally
df_distinct.to_csv("data/direct_distances.csv")
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
2) Get route distance using Openrouteservice API
import openrouteservice from pprint import pprint def get_route_dataframe(coordinates: dict)->pd.DataFrame: client = openrouteservice.Client(key=config['API_KEY']) cities = list(coordinates.keys()) df = pd.DataFrame(index = cities, columns= cities) for origin in range(len(coordinates.keys())): ...
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Save the file locally
df_routes.to_csv("data/route_distances.csv")
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
A* Algorithm
class AStar(): def __init__(self, cities: list[str], country: str, distances: pd.DataFrame, heuristics: pd.DataFrame): self.cities = cities self.country = country self.distances = distances self.heuristics = heuristics def generate_map(self, low, high) -> dict[str, list[str]...
{'Cherkasy': ['Chernihiv', 'Odesa', 'Zhytomyr', 'Ternopil'], 'Chernihiv': ["Dnipropetrovs'k", 'Poltava', 'Cherkasy', 'Kharkiv'], "Dnipropetrovs'k": ['Lviv', 'Chernihiv', 'Poltava'], 'Kharkiv': ['Chernihiv', 'Sumy', 'Vinnytsia'], 'Khmelnytskyi': ['Uzhgorod', 'Odesa', 'Ternopil'], 'Kiev': ['Lviv', 'Poltava', 'Sumy',...
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
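The `AStar` class above wires the search to real route distances and straight-line heuristics; the core search itself can be sketched independently with `heapq`. All graph data below is made up for illustration; `h` is an admissible heuristic (a lower bound on the remaining cost to the goal):

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, edge_cost), ...]}, h: {node: heuristic-to-goal}."""
    frontier = [(h[start], 0.0, start, [start])]   # (f = g + h, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float('inf')):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h[nbr], ng, nbr, path + [nbr]))
    return None, float('inf')

# Toy example with made-up distances
graph = {'A': [('B', 4), ('C', 2)], 'B': [('D', 5)], 'C': [('B', 1), ('D', 8)], 'D': []}
h = {'A': 6, 'B': 4, 'C': 5, 'D': 0}
print(a_star(graph, h, 'A', 'D'))   # → (['A', 'C', 'B', 'D'], 8)
```

Because `h` never overestimates, the first time the goal is popped from the priority queue the path is optimal.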
Display the Graph
import networkx as nx import matplotlib.pyplot as plt Map = a_star.generate_map(low, high) graph = nx.Graph() graph.add_nodes_from(Map.keys()) for origin, destinations in Map.items(): graph.add_weighted_edges_from(([(origin, destination, weight) for destination, weight in zip(destinations, [distances[origin][dest] ...
Solution:Vinnytsia,Dnipropetrovs'k,Poltava Distance:705.274
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Draw the solution
edges = [(path[i-1], path[i]) for i in range(1, len(path))] plt.figure(figsize = (30, 18)) nx.draw_networkx_nodes(graph, pos, node_color="yellow",label="blue", node_size = 1500) nx.draw_networkx_labels(graph, pos, font_color="blue") nx.draw_networkx_edges(graph, pos, edge_color='blue', arrows=False) nx.draw_networkx_ed...
_____no_output_____
MIT
a_star.ipynb
Google-Developer-Student-Club-KPI/a-star
Check that we get the right proportion of NMACs
num_enc = 10000 num_nmac = 0 NMAC_R = 500 for _ in range(num_enc): tca = 50 st, int_act_gen = mc_encounter(tca) ac0, ac1, prev_a = st for _ in range(tca): ac0 = advance_ac(ac0, a_int('NOOP')) ac1 = advance_ac(ac1, next(int_act_gen)) obs = state_to_obs(State(ac0, ac1, a_int...
_____no_output_____
MIT
notebooks/encounter_test.ipynb
osmanylc/deep-rl-collision-avoidance
MNIST handwritten digits classification with nearest neighbors In this notebook, we'll use [nearest-neighbor classifiers](http://scikit-learn.org/stable/modules/neighbors.htmlnearest-neighbors-classification) to classify MNIST digits using scikit-learn (version 0.20 or later required). First, the needed imports.
%matplotlib inline from pml_utils import get_mnist, show_failures import numpy as np from sklearn import neighbors, __version__ from sklearn.metrics import accuracy_score, confusion_matrix, classification_report import matplotlib.pyplot as plt import seaborn as sns sns.set() from distutils.version import LooseVersi...
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Then we load the MNIST data. The first time, we need to download the data, which can take a while.
X_train, y_train, X_test, y_test = get_mnist('MNIST') print('MNIST data loaded: train:',len(X_train),'test:',len(X_test)) print('X_train:', X_train.shape) print('y_train:', y_train.shape) print('X_test', X_test.shape) print('y_test', y_test.shape)
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
The training data (`X_train`) is a matrix of size (60000, 784), i.e. it consists of 60000 digits expressed as 784 sized vectors (28x28 images flattened to 1D). `y_train` is a 60000-dimensional vector containing the correct classes ("0", "1", ..., "9") for each training digit.Let's take a closer look. Here are the first...
pltsize=1 plt.figure(figsize=(10*pltsize, pltsize)) for i in range(10): plt.subplot(1,10,i+1) plt.axis('off') plt.imshow(X_train[i,:].reshape(28, 28), cmap="gray") plt.title('Class: '+y_train[i])
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
1-NN classifier Initialization Let's first create a 1-NN classifier. Note that with nearest-neighbor classifiers there is no internal (parameterized) model, and therefore no learning is required. Instead, calling the `fit()` function simply stores the samples of the training data in a suitable data structure.
%%time n_neighbors = 1 clf_nn = neighbors.KNeighborsClassifier(n_neighbors) clf_nn.fit(X_train, y_train)
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
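Since `fit()` only stores the training set, 1-NN prediction reduces to finding the closest stored vector and copying its label. A minimal NumPy sketch on tiny made-up data:

```python
import numpy as np

def predict_1nn(X_train, y_train, X_test):
    # For each test point, find the index of the nearest training point
    # (squared Euclidean distance; same argmin as the true distance)
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[d2.argmin(axis=1)]

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array(['a', 'a', 'b'])
X_test = np.array([[0.2, 0.1], [4.8, 5.1]])
print(predict_1nn(X_train, y_train, X_test))   # → ['a' 'b']
```

The full pairwise-distance computation is also why inference is slow: every test sample must be compared against all 60,000 training samples.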
Inference Now let's try to classify some test samples with it.
%%time pred_nn = clf_nn.predict(X_test[:200,:])
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
We observe that the classifier is rather slow, and classifying the whole test set would take quite some time. What is the reason for this?The accuracy of the classifier:
print('Predicted', len(pred_nn), 'digits with accuracy:', accuracy_score(y_test[:len(pred_nn)], pred_nn))
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Faster 1-NN classifier Initialization One way to make our 1-NN classifier faster is to use less training data:
%%time n_neighbors = 1 n_data = 1024 clf_nn_fast = neighbors.KNeighborsClassifier(n_neighbors) clf_nn_fast.fit(X_train[:n_data,:], y_train[:n_data])
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Inference Now we can use the classifier created with the reduced data to classify our whole test set in a reasonable amount of time.
%%time pred_nn_fast = clf_nn_fast.predict(X_test)
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
However, the classification accuracy is now not as good:
print('Predicted', len(pred_nn_fast), 'digits with accuracy:', accuracy_score(y_test, pred_nn_fast))
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Confusion matrix We can compute the confusion matrix to see which digits get mixed up the most:
labels=[str(i) for i in range(10)] print('Confusion matrix (rows: true classes; columns: predicted classes):'); print() cm=confusion_matrix(y_test, pred_nn_fast, labels=labels) print(cm); print()
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
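Under the hood, a confusion matrix is just a count table indexed by (true class, predicted class). A NumPy-only sketch on made-up labels, mirroring the row/column convention printed above:

```python
import numpy as np

def confusion(y_true, y_pred, labels):
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1   # rows: true class, columns: predicted class
    return cm

y_true = ['0', '1', '1', '2', '2', '2']
y_pred = ['0', '1', '2', '2', '2', '1']
print(confusion(y_true, y_pred, ['0', '1', '2']))
# → [[1 0 0]
#    [0 1 1]
#    [0 1 2]]
```

Diagonal entries are correct classifications; off-diagonal entries show which classes get confused with which.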
Plotted as an image:
plt.matshow(cm, cmap=plt.cm.gray) plt.xticks(range(10)) plt.yticks(range(10)) plt.grid(None) plt.show()
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts