Dataset columns: markdown (string, length 0–1.02M), code (string, 0–832k), output (string, 0–1.02M), license (string, 3–36), path (string, 6–265), repo_name (string, 6–127).
Binary image
An image whose pixels take only two possible values: black and white. Depending on whether the range is real-valued (float32) or integer (uint8), these values are {0,1} or {0,255}. In a binary image we typically separate what matters to us (**foreground**) from what does not (**background**). More formally, ...
img_tr = img_gray > 127  # every pixel greater than 127 becomes True (i.e. 1), the rest False (0)
plt.imshow(img_tr, 'gray')
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
OpenCV provides the threshold method: its first parameter is the image to binarize, the second is the threshold value, the third is the value assigned to a pixel that exceeds the threshold (255 = white), and the last is the threshold type (in this case, binary thresholding).
ret, image_bin = cv2.threshold(img_gray, 100, 255, cv2.THRESH_BINARY)  # ret is the threshold value, image_bin is the binary image
print(ret)
plt.imshow(image_bin, 'gray')
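The effect of binary thresholding is easy to sketch with plain numpy (a toy equivalent of `cv2.THRESH_BINARY`, not OpenCV itself):

```python
import numpy as np

def threshold_binary(img, thresh, maxval=255):
    """Toy numpy sketch of cv2.threshold(..., cv2.THRESH_BINARY):
    pixels above thresh become maxval, the rest become 0."""
    return np.where(img > thresh, maxval, 0).astype(np.uint8)

img = np.array([[50, 120], [200, 100]], dtype=np.uint8)  # tiny example image
print(threshold_binary(img, 100))  # 50 and 100 map to 0; 120 and 200 map to 255
```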
Otsu threshold
Otsu's method is used to automatically find a threshold value for binarizing an image.
ret, image_bin = cv2.threshold(img_gray, 0, 255, cv2.THRESH_OTSU)  # ret is the computed threshold, image_bin is the binary image
print("Otsu's threshold: " + str(ret))
plt.imshow(image_bin, 'gray')
Adaptive threshold
In some cases a single global threshold does not give good results. A good example is an image with varying illumination, where a global threshold practically destroys the part of the image that is too bright or too dark. Adaptive thresholding is a different approach, where for every pixel of the image ...
image_ada = cv2.imread('images/sonnet.png')
image_ada = cv2.cvtColor(image_ada, cv2.COLOR_BGR2GRAY)
plt.imshow(image_ada, 'gray')

ret, image_ada_bin = cv2.threshold(image_ada, 100, 255, cv2.THRESH_BINARY)
plt.imshow(image_ada_bin, 'gray')
The global threshold produced poor results. We improve them with an adaptive threshold. The second-to-last parameter of the adaptiveThreshold method is the key one: it is the size of the block of neighboring pixels (e.g. 15x15) from which the local threshold is computed.
# adaptive threshold where the local threshold = mean of the neighboring pixels
image_ada_bin = cv2.adaptiveThreshold(image_ada, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 15, 5)
plt.figure()  # needed when several images are shown in one cell
plt.imshow(image_ada_bin, 'gray')
# adaptive threshold where the ...
Histogram
We can use a **histogram**, which gives us information about the distribution of pixel intensities. It is very useful when a threshold for global thresholding has to be chosen.
Pseudo-code of a histogram for a grayscale image:
```
initialize a zero vector of 256 elements
for every pixel in the image:
    take the initial ...
```
def hist(image):
    height, width = image.shape[0:2]
    x = range(0, 256)
    y = np.zeros(256)
    for i in range(0, height):
        for j in range(0, width):
            pixel = image[i, j]
            y[pixel] += 1
    return (x, y)

x, y = hist(img_gray)
plt.plot(x, y, 'b')
plt.show()
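The same histogram can be computed without the double loop. A vectorized sketch using `np.bincount`, which counts the occurrences of each intensity value:

```python
import numpy as np

def hist_fast(image):
    # counts of each intensity 0..255; same result as the loop version
    y = np.bincount(image.ravel(), minlength=256)
    return range(256), y

img = np.array([[0, 0], [255, 7]], dtype=np.uint8)  # tiny example image
x, y = hist_fast(img)
print(y[0], y[7], y[255])  # 2 1 1
```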
Using matplotlib:
plt.hist(img_gray.ravel(), 256, [0, 256])  # 256 bins over [0, 256] so that intensity 255 gets its own bin
plt.show()
Using OpenCV:
hist_full = cv2.calcHist([img_gray], [0], None, [256], [0, 256])  # 256 bins over [0, 256]
plt.plot(hist_full)
plt.show()
Suppose the face pixels have values between 100 and 200.
img_tr = (img_gray > 100) * (img_gray < 200)
plt.imshow(img_tr, 'gray')
Converting from grayscale to RGB
This is actually a trivial operation that copies the original grayscale image into each color channel (RGB). It is handy when something done in the grayscale model needs to be combined with an RGB image.
img_tr_rgb = cv2.cvtColor(img_tr.astype('uint8'), cv2.COLOR_GRAY2RGB)
plt.imshow(img * img_tr_rgb)  # multiply the original RGB image by the image with the extracted face pixels
Morphological operations
A large family of image-processing operations based on shapes, i.e. **structuring elements**. In a morphological operation, the value of each pixel of the resulting image is based on comparing the corresponding pixel of the original image with its neighborhood. The size and shap...
kernel = np.ones((3, 3))  # a 3x3 block structuring element
print(kernel)
Erosion
Morphological erosion sets the pixel of the resulting image at coordinates ```(i,j)``` to the **minimum** value of all pixels in the neighborhood of pixel ```(i,j)``` in the original image. In essence, erosion shrinks regions of white pixels and grows regions of black pixels. It is often used to remove noise (in the form of small regions of w...
plt.imshow(cv2.erode(image_bin, kernel, iterations=1), 'gray')
Dilation
Morphological dilation sets the pixel of the resulting image at coordinates ```(i,j)``` to the **maximum** value of all pixels in the neighborhood of pixel ```(i,j)``` in the original image. In essence, dilation grows regions of white pixels and shrinks regions of black pixels. Handy for emphasizing regions of interest.![images/dilat...
# a different structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 5))  # MORPH_ELLIPSE, MORPH_RECT...
print(kernel)
plt.imshow(cv2.dilate(image_bin, kernel, iterations=5), 'gray')  # 5 iterations
Opening and closing
**```opening = erosion + dilation```** — remove noise with erosion, then restore the original shape with dilation.
**```closing = dilation + erosion```** — close small holes among the white pixels.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
print(kernel)

img_ero = cv2.erode(image_bin, kernel, iterations=1)
img_open = cv2.dilate(img_ero, kernel, iterations=1)
plt.imshow(img_open, 'gray')

img_dil = cv2.dilate(image_bin, kernel, iterations=1)
img_close = cv2.erode(img_dil, kernel, iterations=1)
pl...
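What erosion, dilation, and opening do can be reproduced on a tiny binary array with plain numpy (a minimal sketch assuming a 3x3 square structuring element and zero padding; the cells above use cv2 for the real work):

```python
import numpy as np

def erode(img):
    """3x3 min filter: a pixel stays 1 only if its whole 3x3 neighborhood is 1."""
    p = np.pad(img, 1)  # zero padding around the border
    return np.min([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def dilate(img):
    """3x3 max filter: a pixel becomes 1 if any pixel in its 3x3 neighborhood is 1."""
    p = np.pad(img, 1)
    return np.max([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1   # a 3x3 white square (the shape we want to keep)
img[0, 0] = 1       # one isolated noise pixel

opened = dilate(erode(img))  # opening = erosion + dilation
print(opened[0, 0])  # → 0: the noise pixel is removed, the square survives
```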
An example of edge detection on a binary image using dilation and erosion:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
image_edges = cv2.dilate(image_bin, kernel, iterations=1) - cv2.erode(image_bin, kernel, iterations=1)
plt.imshow(image_edges, 'gray')
Blur
An image is blurred by replacing each pixel with the mean of its neighboring pixels, say in a 5 x 5 neighborhood. The kernel k below is a uniform blur kernel. This is a simpler version of Gaussian blur.
from scipy import signal

k_size = 5
k = (1. / (k_size * k_size)) * np.ones((k_size, k_size))  # normalize so the kernel sums to 1
image_blur = signal.convolve2d(img_gray, k)
plt.imshow(image_blur, 'gray')
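The effect of the uniform kernel can be checked by hand on a tiny array (a dependency-free sketch of a mean filter; with a properly normalized kernel, a constant image stays constant):

```python
import numpy as np

def box_blur(img, k_size=3):
    """Mean filter: each output pixel is the average of its
    k_size x k_size neighborhood (border handled by edge replication)."""
    pad = k_size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k_size, j:j + k_size].mean()
    return out

img = np.full((5, 5), 10.0)   # constant test image
blurred = box_blur(img)
print(blurred[2, 2])  # → 10.0: averaging a constant region changes nothing
```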
Regions and region extraction
Put simply, a region is a set of mutually connected white pixels, where connected means they lie in each other's immediate neighborhood. Two kinds of connectivity are distinguished: **4-connectivity** and **8-connectivity**:![images/48connectivity.png](images/48connectivity.png)The proc...
# load the image and convert to RGB
img_barcode = cv2.cvtColor(cv2.imread('images/barcode.jpg'), cv2.COLOR_BGR2RGB)
plt.imshow(img_barcode)
Say we want to extract only the barcode lines from the image. To begin, we perform some standard operations, such as converting to grayscale and applying an adaptive threshold.
img_barcode_gs = cv2.cvtColor(img_barcode, cv2.COLOR_RGB2GRAY)  # convert to grayscale
plt.imshow(img_barcode_gs, 'gray')

# ret, image_barcode_bin = cv2.threshold(img_barcode_gs, 80, 255, cv2.THRESH_BINARY)
image_barcode_bin = cv2.adaptiveThreshold(img_barcode_gs, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 35, 10...
Finding contours/regions
Contours, i.e. regions in an image, are roughly speaking groups of black pixels. The OpenCV method findContours finds all such groups of pixels, i.e. regions. The second return value of the method, contours, is the list of contours found in the image. These contours can then be drawn with the drawContours...
contours, hierarchy = cv2.findContours(image_barcode_bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
img = img_barcode.copy()
cv2.drawContours(img, contours, -1, (255, 0, 0), 1)
plt.imshow(img)
Region properties
Every region found has its own characteristic properties: area, perimeter, convex hull, convexity, bounding rectangle, angle... These properties can be extremely useful when only certain regions exhibiting some property need to be extracted from the image. For all properties see thi...
contours_barcode = []  # only the contours belonging to the barcode will end up here

for contour in contours:  # for every contour
    center, size, angle = cv2.minAreaRect(contour)  # find the minimum-area rectangle enclosing the whole contour
    width, height = size
    if width > 3 and width < 30 and height > 300 and...
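The size-based filtering logic above can be illustrated on hypothetical (width, height) pairs (toy values with an assumed upper height bound of 500; in the notebook they come from cv2.minAreaRect):

```python
# hypothetical rectangle sizes (width, height), as minAreaRect would return them
sizes = [(5, 400), (2, 350), (10, 320), (50, 310), (8, 100)]

# keep only tall, thin rectangles -- the same kind of condition used for barcode lines
# (the 500 upper bound is an assumption; the original cell is truncated)
barcode_like = [s for s in sizes
                if 3 < s[0] < 30 and 300 < s[1] < 500]
print(barcode_like)  # → [(5, 400), (10, 320)]
```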
Ensemble models from machine learning: an example of wave runup and coastal dune erosion
Tomas Beuzen1, Evan B. Goldstein2, Kristen D. Splinter1
1Water Research Laboratory, School of Civil and Environmental Engineering, UNSW Sydney, NSW, Australia
2Department of Geography, Environment, and Sustainability, University of ...
# Required imports
# Standard computing packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Gaussian Process tools
from sklearn.metrics import mean_squared_error
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
# ...
MIT
paper_code/Beuzen_et_al_2019_code.ipynb
TomasBeuzen/BeuzenEtAl_2019_NHESS_GP_runup_model
2. Load and Visualize Data In this section, we will load and visualise the wave, beach slope, and runup data we will use to develop the Gaussian process (GP) runup predictor.
# Read in .csv data file as a pandas dataframe
df = pd.read_csv('../data_repo_temporary/lidar_runup_data_for_GP_training.csv', index_col=0)
# Print the size and head of the dataframe
print('Data size:', df.shape)
df.head()

# This cell plots histograms of the data
# Initialize the figure and axes
fig, axes = plt.subplots...
3. Develop GP Runup Predictor
In this section we will develop the GP runup predictor.
We standardize the data for use in the GP by removing the mean and scaling to unit variance. This does not really affect GP performance but improves computational efficiency (see sklearn documentation for more information).
A kernel mu...
# Define features and response data
X = df.drop(columns=df.columns[-1])  # Drop the last column to retain input features (Hs, Tp, slope)
y = df[[df.columns[-1]]]  # The last column is the predictand (R2)
# Standardize data for use in the GP
scaler = StandardScaler()
scaler.fit(X)  # Fit the scaler to the training data
X_scaled = scaler.transform(X)  # Scale training data
# Specify the kernel to use in the GP
kernel = RBF(0.1, (1e-2, 1e2)) + WhiteKernel(1, (1e-2, 1e2))

# Train GP model on training dataset
gp = GaussianProcessRegressor(kernel=kernel,
                              n_restarts_optimizer=9,
                              normalize_y=True,
                              random_st...
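The standardization step above is simple to verify by hand. A numpy sketch of what StandardScaler computes per column (toy feature matrix, not the wave data):

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])  # toy feature matrix

# subtract the column mean, divide by the column standard deviation
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # ≈ [0. 0.]
print(X_scaled.std(axis=0))   # [1. 1.]
```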
4. Test GP Runup Predictor
This section shows how the GP runup predictor performs on 50 test samples not previously used in training.
# Read in .csv test data file as a pandas dataframe
df_test = pd.read_csv('../data_repo_temporary/lidar_runup_data_for_GP_testing.csv', index_col=0)
# Print the size and head of the dataframe
print('Data size:', df_test.shape)
df_test.head()

# Predict the data
X_test = df_test.drop(columns=df.columns[-1])  # Drop the las...
GP RMSE on test data = 0.22
GP bias on test data = 0.07
5. Explore GP Prediction Uncertainty
This section explores how we can draw random samples from the GP to explain scatter in the runup predictions. We randomly draw 100 samples from the GP and calculate how much of the scatter in the runup predictions is captured by the ensemble envelope for different ensemble sizes. T...
# Draw 100 samples from the GP model using the testing dataset
GP_draws = gp.sample_y(X_test, n_samples=100, random_state=123).squeeze()  # Draw 100 random samples from the GP

# Initialize result arrays
perc_ens = np.zeros((100, 100))  # Initialize ensemble capture array
perc_err = np.zeros((100,))  # Initialise arbitra...
Let's go through this bit by bit.

```python
class Network(nn.Module):
```

Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class...
# Text version of model architecture
model = Network()
model

# # Common way to define model using PyTorch
# import torch.nn.functional as F
# class Network(nn.Module):
#     def __init__(self):
#         super().__init__()
#         # Inputs to hidden layer linear transformation
#         self.hidden = nn.Linear(784, ...
Sequential(
  (0): Linear(in_features=784, out_features=128, bias=True)
  (1): ReLU()
  (2): Linear(in_features=128, out_features=64, bias=True)
  (3): ReLU()
  (4): Linear(in_features=64, out_features=10, bias=True)
  (5): Softmax()
)
MIT
NN using PyTorch.ipynb
Spurryag/PyTorch-Scholarship-Programme-Solutions
Subplots
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np

plt.subplot?

plt.figure()
# subplot with 1 row, 2 columns, and current axis is 1st subplot axes
plt.subplot(1, 2, 1)
linear_data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
plt.plot(linear_data, '-o')
exponential_data = linear_data**2

# subplot with 1 row,...
MIT
AppliedDataScienceWithPython/Week3.ipynb
MikeBeaulieu/coursework
Histograms
# create 2x2 grid of axis subplots
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True)
axs = [ax1, ax2, ax3, ax4]

# draw n = 10, 100, 1000, and 10000 samples from the normal distribution and plot corresponding histograms
for n in range(0, len(axs)):
    sample_size = 10**(n+1)
    sample = np.random.normal(loc...
Box and Whisker Plots
import pandas as pd

normal_sample = np.random.normal(loc=0.0, scale=1.0, size=10000)
random_sample = np.random.random(size=10000)
gamma_sample = np.random.gamma(2, size=10000)

df = pd.DataFrame({'normal': normal_sample,
                   'random': random_sample,
                   'gamma': gamma_sample})
df.describ...
Heatmaps
plt.figure()
Y = np.random.normal(loc=0.0, scale=1.0, size=10000)
X = np.random.random(size=10000)
_ = plt.hist2d(X, Y, bins=25)

plt.figure()
_ = plt.hist2d(X, Y, bins=100)

# add a colorbar legend
plt.colorbar()
Animations
import matplotlib.animation as animation

n = 100
x = np.random.randn(n)

# create the function that will do the plotting, where curr is the current frame
def update(curr):
    # check if animation is at the last frame, and if so, stop the animation
    if curr == n:
        a.event_source.stop()
    plt.cla()
    bi...
Interactivity
plt.figure()
data = np.random.rand(10)
plt.plot(data)

def onclick(event):
    plt.cla()
    plt.plot(data)
    plt.gca().set_title('Event at pixels {},{} \nand data {},{}'.format(event.x, event.y, event.xdata, event.ydata))

# tell mpl_connect we want to pass a 'button_press_event' into onclick when the event is detec...
Question 1Assume that $f(\cdot)$ is an infinitely smooth and continuous scalar function. Suppose that $a\in \mathbb{R}$ is a given constant in the domain of the function $f$ and that $h>0$ is a given parameter assumed to be small. Consider the following numerical approximation of a first derivative,$$ f'(a) \approx c_...
from numpy import sin, cos, linspace, absolute  # needed if pylab has not been imported

x0 = 1.2
f0 = sin(x0)
fp = cos(x0)
fpp = -sin(x0)
fppp = -cos(x0)

i = linspace(-20, 0, 40)
h = 10.0**i
fp_approx = (sin(x0 + h) - f0)/h                           # forward difference
fp_center_diff_approx = (sin(x0 + h) - sin(x0 - h))/(2*h)   # centered difference
err = absolute(fp - fp_approx)
err2 = absolute(fp - fp_center_diff_approx)
d_err = h/2*absolute(fpp)
d2_err = h**2/6*absolute...
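The two approximations have different error orders, which a quick numerical check confirms: the forward-difference error shrinks like O(h), while the centered difference shrinks like O(h^2):

```python
from math import sin, cos

x0, h = 1.2, 1e-3
forward = (sin(x0 + h) - sin(x0)) / h            # O(h) accurate
centered = (sin(x0 + h) - sin(x0 - h)) / (2 * h)  # O(h^2) accurate
exact = cos(x0)

err_fwd = abs(exact - forward)
err_cen = abs(exact - centered)
print(err_fwd > err_cen)  # → True: the centered scheme is far more accurate
```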
Apache-2.0
Homework 2 Solutions.ipynb
newby-jay/MATH381-Fall2021-JupyterNotebooks
Building and using data schemas for computer vision
This tutorial illustrates how to use raymon profiling to guard image quality in your production system. The image data is taken from [Kaggle](https://www.kaggle.com/ravirajsinh45/real-life-industrial-dataset-of-casting-product) and is courtesy of PILOT TECHNOCAST, Sha...
%load_ext autoreload
%autoreload 2

from PIL import Image
from pathlib import Path
The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload
MIT
examples/2-profiling-vision.ipynb
raymon-ai/raymon
First, let's load some data. In this tutorial, we'll take the example of quality inspection in manufacturing. The purpose of our system may be to determine whether a manufactured part passes the required quality checks. These checks may measure the roundness of the part, the smoothness of the edges, the smoothness of th...
DATA_PATH = Path("../raymon/tests/sample_data/castinginspection/ok_front/")
LIM = 150

def load_data(dpath, lim):
    files = dpath.glob("*.jpeg")
    images = []
    for n, fpath in enumerate(files):
        if n == lim:
            break
        img = Image.open(fpath)
        images.append(img)
    return images

l...
Constructing and building a profile
For this tutorial, we'll construct a profile that checks the image sharpness and calculates an outlier score on the image. This way, we hope to be alerted when something seems off with the input data. Just like in the case of structured data, we need to start by specifying a pro...
from raymon import ModelProfile, InputComponent
from raymon.profiling.extractors.vision import Sharpness, DN2AnomalyScorer

profile = ModelProfile(
    name="casting-inspection",
    version="0.0.1",
    components=[
        InputComponent(name="sharpness", extractor=Sharpness()),
        InputComponent(name="outliers...
Use the profile to check new data
We can save the schema to JSON, load it again (in your production system), and use it to validate incoming data.
profile.save(".")
profile = ModelProfile.load("casting-inspection@0.0.1.json")
tags = profile.validate_input(loaded_data[-1])
tags
As you can see, all the extracted feature values are returned. This is useful when you want to track feature distributions on your monitoring backend (which is what happens on the Raymon.ai platform). Also note that these features are not necessarily the ones going into your ML model.

Corrupting inputs
Let's see wha...
from PIL import ImageFilter

img_blur = loaded_data[-1].copy().filter(ImageFilter.GaussianBlur(radius=5))
img_blur
profile.validate_input(img_blur)
As can be seen, every feature extractor now gives rise to two tags: one with the feature value and one with a schema error, indicating that the data has failed both sanity checks. Awesome. We can visualize this datum while inspecting the profile.
profile.view(poi=img_blur, mode="external")
Hierarchical clustering
# required imports
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram

# Implement hierarchical clustering
modelo = AgglomerativeClustering(distance_threshold=0, n_clusters=None, linkage="single")
modelo.fit_predict(x)
# clusters.children_

# Plotting the dendrogram...
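Single linkage, used above, defines the distance between two clusters as the distance between their two closest members. A dependency-free sketch on 1-D toy points (hypothetical data; the cell above relies on sklearn's AgglomerativeClustering):

```python
def single_linkage(points):
    """Repeatedly merge the two clusters whose closest members are nearest.
    Returns the merge history as (distance, merged cluster) pairs."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between the closest pair of members
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = sorted(clusters[i] + clusters[j])
        merges.append((d, merged))
        clusters.pop(j)  # j > i, so remove j first
        clusters.pop(i)
        clusters.append(merged)
    return merges

history = single_linkage([0.0, 0.5, 4.0, 4.2])
print([round(d, 2) for d, _ in history])  # → [0.2, 0.5, 3.5]
```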
MIT
clustering/k_means.ipynb
JVBravoo/Learning-Machine-Learning
This notebook demonstrates how to perform regression analysis using scikit-learn and the watson-machine-learning-client package. Some familiarity with Python is helpful. This notebook is compatible with Python 3.7. You will use the sample data set, **sklearn.datasets.load_boston**, which is available in scikit-learn, to p...
username = 'PASTE YOUR USERNAME HERE'
password = 'PASTE YOUR PASSWORD HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "password": password,
    "url": url,
    "instance_id": 'openshift',
    "version": '3.5'
}
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found here.
!pip install -U ibm-watson-machine-learning

from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)
2020-12-08 12:44:04,591 - root - WARNING - scikit-learn version 0.23.2 is not supported. Minimum required version: 0.17. Maximum required version: 0.19.2. Disabling scikit-learn conversion API. 2020-12-08 12:44:04,653 - root - WARNING - Keras version 2.2.5 detected. Last version known to be fully compatible of Keras is...
Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.
- Click New Deployment Space
- Create an empty space
- Go to space `Settings` tab
- Copy `space_id` and paste...
space_id = 'PASTE YOUR SPACE ID HERE'
You can use the `list` method to print all existing spaces.
client.spaces.list(limit=10)
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** you will be using.
client.set.default_space(space_id)
2. Load and explore data
The sample data set contains Boston house prices. The data set can be found here.
In this section, you will learn how to:
- [2.1 Explore Data](dataset)
- [2.2 Check the correlations between predictors and the target](corr)

2.1 Explore data
In this subsection, you will perform exploratory data a...
!pip install --upgrade scikit-learn==0.23.1 seaborn

import sklearn
from sklearn import datasets
import pandas as pd

boston_data = datasets.load_boston()
Let's check the names of the predictors.
print(boston_data.feature_names)
['CRIM' 'ZN' 'INDUS' 'CHAS' 'NOX' 'RM' 'AGE' 'DIS' 'RAD' 'TAX' 'PTRATIO' 'B' 'LSTAT']
**Tip:** Run `print(boston_data.DESCR)` to view a detailed description of the data set.
print(boston_data.DESCR)
.. _boston_dataset:

Boston house prices dataset
---------------------------

**Data Set Characteristics:**

    :Number of Instances: 506
    :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.
    :Attribute Information (in order):
        - CRIM     per c...
Create a pandas DataFrame and display some descriptive statistics.
boston_pd = pd.DataFrame(boston_data.data)
boston_pd.columns = boston_data.feature_names
boston_pd['PRICE'] = boston_data.target
The `describe` method generates summary statistics of the numerical predictors.
boston_pd.describe()
2.2 Check the correlations between predictors and the target
import seaborn as sns
%matplotlib inline

corr_coeffs = boston_pd.corr()
sns.heatmap(corr_coeffs, xticklabels=corr_coeffs.columns, yticklabels=corr_coeffs.columns);
3. Build a scikit-learn linear regression model
In this section, you will learn how to:
- [3.1 Split data](prep)
- [3.2 Create a scikit-learn pipeline](pipe)
- [3.3 Train the model](train)

3.1 Split data
In this subsection, you will split the data set into:
- Train data set
- Test data set
from sklearn.model_selection import train_test_split

X = boston_pd.drop('PRICE', axis=1)
y = boston_pd['PRICE']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=5)

print('Number of training records: ' + str(X_train.shape[0]))
print('Number of test records: ' + str(X_test.sh...
Number of training records: 339
Number of test records: 167
Your data has been successfully split into two data sets:
- The train data set, the larger of the two, is used for training.
- The test data set will be used for model evaluation.

3.2 Create a scikit-learn pipeline
In this subsection, you will create a scikit-learn pipeline. First, ...
from sklearn.pipeline import Pipeline
from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
Standardize the features by removing the mean and by scaling to unit variance.
scaler = preprocessing.StandardScaler()
Next, define the regressor you want to use. This notebook uses the Linear Regression model.
lr = LinearRegression()
Build the pipeline. A pipeline consists of a transformer (Standard Scaler) and an estimator (Linear Regression model).
pipeline = Pipeline([('scaler', scaler), ('lr', lr)])
3.3 Train the model
Now, you can use the **pipeline** and **train data** you defined previously to train your linear regression model.
model = pipeline.fit(X_train, y_train)
Check the model quality.
y_pred = model.predict(X_test)
mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
print('MSE: ' + str(mse))
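MSE is just the mean of the squared residuals, which is easy to verify on toy numbers (a sketch of what `mean_squared_error` computes, not the Boston data):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0])  # toy targets
y_hat = np.array([2.0, 5.0, 4.0])   # toy predictions

mse = np.mean((y_true - y_hat) ** 2)  # ((-1)^2 + 0^2 + 2^2) / 3
print(mse)  # → 5/3 ≈ 1.667
```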
MSE: 28.530458765974625
Plot the scatter plot of prices vs. predicted prices.
import matplotlib.pyplot as plt
plt.style.use('ggplot')

plt.title('Predicted prices vs prices')
plt.ylabel('Prices')
plt.xlabel('Predicted prices')
plot = plt.scatter(y_pred, y_test)
**Note:** You can tune your model to achieve better accuracy. To keep this example simple, the tuning section is omitted.

4. Save the model in the WML repository
In this section, you will learn how to use the common Python client to manage your model in the WML repository.
sofware_spec_uid = client.software_specifications.get_id_by_name("default_py3.7")
metadata = {
    client.repository.ModelMetaNames.NAME: 'Boston house price',
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sofware_spec_ui...
Get information about all of the models in the WML repository.
models_details = client.repository.list_models()
5. Deploy the model via Core ML
In this section, you will learn how to use the WML client to create a **virtual** deployment via `Core ML`. You will also learn how to use `download_url` to download a Core ML model for your Xcode project.
- [5.1 Create a virtual deployment for the model](create)
- [5.2 Download the C...
metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Virtual deployment of Boston model",
    client.deployments.ConfigurationMetaNames.VIRTUAL: {"export_format": "coreml"}
}

created_deployment = client.deployments.create(model_uid, meta_props=metadata)
####################################################################################### Synchronous deployment creation for uid: '9b319604-4b55-4a86-8728-51572eeeb761' started ####################################################################################### initializing......................... ready ----...
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Now, you can define and print the download endpoint. You can use this endpoint to download the Core ML model. 5.2 Download the `Core ML` file from the deployment
client.deployments.list()
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Download the virtual deployment content: Core ML model.
deployment_uid = client.deployments.get_uid(created_deployment) deployment_content = client.deployments.download(deployment_uid)
---------------------------------------------------------- Successfully downloaded deployment file: mlartifact.tar.gz ----------------------------------------------------------
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Use the code in the cell below to create the download link.
from ibm_watson_machine_learning.utils import create_download_link create_download_link(deployment_content)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
**Note:** You can use Xcode to preview the model's metadata (after unzipping). 5.3 Test the `Core ML` model Use the following steps to run a test against the downloaded Core ML model.
!pip install --upgrade coremltools
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Use ``coremltools`` to load the model and check some basic metadata. First, extract the model.
from ibm_watson_machine_learning.utils import extract_mlmodel_from_archive extracted_model_path = extract_mlmodel_from_archive('mlartifact.tar.gz', model_uid)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
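As a rough illustration (not the WML implementation), `extract_mlmodel_from_archive` essentially opens the downloaded `tar.gz` and pulls out the `.mlmodel` member; the stdlib `tarfile` module is enough to peek inside such an archive. The file names below are made up for the demo:

```python
import os
import tarfile
import tempfile

# Sketch only: list the members of an mlartifact.tar.gz-style archive.
def list_archive(path):
    with tarfile.open(path, "r:gz") as tar:
        return [m.name for m in tar.getmembers()]

# Build a tiny demo archive so the sketch runs without a WML deployment.
tmpdir = tempfile.mkdtemp()
payload = os.path.join(tmpdir, "model.mlmodel")
with open(payload, "wb") as f:
    f.write(b"demo")
archive = os.path.join(tmpdir, "mlartifact.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(payload, arcname="model.mlmodel")

print(list_archive(archive))  # ['model.mlmodel']
```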
Load the model and check the description.
import coremltools loaded_model = coremltools.models.MLModel(extracted_model_path) print(loaded_model.get_spec())
specificationVersion: 1 description { input { name: "input" type { multiArrayType { shape: 13 dataType: DOUBLE } } } output { name: "prediction" type { doubleType { } } } predictedFeatureName: "prediction" metadata { shortDescription: "\'de...
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Copyright Netherlands eScience Center **Function: Computing AMET with Surface & TOA flux** **Author: Yang Liu** **First Built: 2019.08.09** **Last Update: 2019.09.09** Description: This notebook aims to compute AMET with TOA/surface flux fields from the NorESM model. The NorESM model is launche...
%matplotlib inline import numpy as np import sys sys.path.append("/home/ESLT0068/NLeSC/Computation_Modeling/Bjerknes/Scripts/META") import scipy as sp import pygrib import time as tttt from netCDF4 import Dataset,num2date import os import meta.statistics import meta.visualizer # constants constant = {'g' : 9.80616, ...
_____no_output_____
Apache-2.0
Packing/AMET_MPIESM_MPI.ipynb
geek-yang/JointAnalysis
Romania* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Romania.ipynb)
import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview("Romania", weeks=5); overview("Romania"); compare_plot("Romania", normalise=Tru...
_____no_output_____
CC-BY-4.0
ipynb/Romania.ipynb
oscovida/oscovida.github.io
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Romania.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how...
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Romania.ipynb
oscovida/oscovida.github.io
Demonstration that GMVRFIT reduces to GMVPFIT (or equivalent) for polynomial cases. Development of a fitting function (greedy+linear, based on mvpolyfit and gmvpfit) that handles rational functions.
# Low-level import from numpy import * from numpy.linalg import pinv,lstsq # Setup ipython environment %load_ext autoreload %autoreload 2 %matplotlib inline # Setup plotting backend import matplotlib as mpl mpl.rcParams['lines.linewidth'] = 0.8 mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 12 mpl.r...
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Package Development (positive/learning.py) Setup test data
################################################################################ h = 3 Q = 25 x = h*linspace(-1,1,Q) y = h*linspace(-1,1,Q) X,Y = meshgrid(x,y) # X += np.random.random( X.shape )-0.5 # Y += np.random.random( X.shape )-0.5 zfun = lambda xx,yy: 50 + (1.0 + 0.5*xx*yy + xx**2 + yy**2 ) numerator_symbols...
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Initiate class object for fitting
foo = mvrfit( domain, scalar_range, numerator_symbols, denominator_symbols, verbose=True )
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Plot using class method
foo.plot()
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Generate python string for fit model
print(foo.__str_python__(precision=8))
f = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 )
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Use greedy algorithm
star = gmvrfit( domain, scalar_range, verbose=True ) star.plot() star.bin['pgreedy_result'].plot() star.bin['ngreedy_result'].plot()
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Commands for plotting. These are used so that the usual "plot" command will use matplotlib.
# commands for plotting, "plot" works with matplotlib def mesh2triang(mesh): xy = mesh.coordinates() return tri.Triangulation(xy[:, 0], xy[:, 1], mesh.cells()) def mplot_cellfunction(cellfn): C = cellfn.array() tri = mesh2triang(cellfn.mesh()) return plt.tripcolor(tri, facecolors=C) def mplot_fun...
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
Annulus This is the field in an annulus. We specify boundary conditions and solve the problem.
r1 = 1 # inner circle radius r2 = 10 # outer circle radius # shapes of inner/outer boundaries are circles c1 = Circle(Point(0.0, 0.0), r1) c2 = Circle(Point(0.0, 0.0), r2) domain = c2 - c1 # solve between circles res = 20 mesh = generate_mesh(domain, res) class outer_boundary(SubDomain): def inside(self, x, o...
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
Plotting with matplotlib. Now the usual "plot" commands will work for plotting the mesh and the function.
plot(mesh) # usual Fenics command, will use matplotlib plot(u) # usual Fenics command, will use matplotlib
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
If you want to do usual "matplotlib" stuff then you still need "plt." prefix on commands.
plt.figure() plt.subplot(1,2,1) plot(mesh) plt.xlabel('x') plt.ylabel('y') plt.subplot(1,2,2) plot(u) plt.title('annulus solution')
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
Plotting along a line. It turns out that the solution "u" is a function that can be evaluated at a point. So in the next cell we loop along a line and build a vector of values for plotting. You just need to give it coordinates $u(x,y)$.
y = np.linspace(r1,r2*0.99,100) uu = [] for i in range(len(y)): yy = y[i] uu.append(u(0.0,yy)) #evaluate u along y axis plt.figure() plt.plot(y,uu) plt.grid(True) plt.xlabel('y') plt.ylabel('V')
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
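The same looping pattern works for any callable field, not just a FEniCS solution. A minimal sketch with a stand-in `u(x, y)` (an illustrative 1/r potential, since evaluating the real solution needs the mesh above):

```python
import math

# Stand-in field: an illustrative 1/r potential instead of the FEniCS solution u
u = lambda x, y: 1.0 / math.hypot(x, y)

r1, r2 = 1.0, 10.0
ys = [r1 + (0.99 * r2 - r1) * i / 99 for i in range(100)]  # points along the y axis
uu = [u(0.0, yy) for yy in ys]  # evaluate the field at (0, y)
print(uu[0])  # value at the inner radius r1: 1.0
```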
Handwritten Digits Recognition 02 - TensorFlow. From the table below, we see that the MNIST database is far larger than the scikit-learn database, which we modelled in the previous notebook. Both the number of samples and the size of each sample are significantly higher. The good news is that, with TensorFlow and Keras, we can build neu...
import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt print("TensorFlow Version", tf.__version__) if tf.test.is_gpu_available(): print("Device:", tf.test.gpu_device_name())
TensorFlow Version 2.1.0 Device: /device:GPU:0
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Now, load the MNIST database using TensorFlow. From the output, we can see that the images are 28x28. The database contains 60,000 training and 10,000 testing images. There are no missing entries.
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data() print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape)
(60000, 28, 28) (60000,) (10000, 28, 28) (10000,)
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Before we get our hands dirty with all the hardwork, let's just take a moment and look at some digits in the dataset. The digits displayed are the first eight digits in the set. We can see that the image quality is quite high, significantly better than the ones in scikit-learn digits set.
fig, axes = plt.subplots(2, 4) for i, ax in zip(range(8), axes.flatten()): ax.imshow(X_train[i], cmap=plt.cm.gray_r, interpolation='nearest') ax.set_title("Number %d" % y_train[i]) ax.set_axis_off() fig.suptitle("Image of Digits in MNIST Database") plt.show()
_____no_output_____
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Training a convolutional neural network with TensorFlow. Each pixel in the images is stored as an integer ranging from 0 to 255. The CNN requires us to normalize these values to be between 0 and 1. We also add a channel dimension so that the images can be fed into the CNN. Also, convert the labels (*y_train, y_test*) to one-hot ...
# Normalize the images and add a channel dimension x_train = X_train.reshape((60000, 28, 28, 1)).astype('float32') / 255 x_test = X_test.reshape((10000, 28, 28, 1)).astype('float32') / 255 # Convert to one-hot encoding from keras.utils import np_utils y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test...
Using TensorFlow backend.
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
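For reference, one-hot encoding itself is simple to write out; a minimal NumPy sketch of what `np_utils.to_categorical` produces (the helper name `to_one_hot` is ours):

```python
import numpy as np

def to_one_hot(labels, num_classes=10):
    # Row i is all zeros except a 1 in column labels[i]
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(to_one_hot([3, 0, 9]).shape)  # (3, 10)
```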
This is the structure of the convolutional neural network. We have two convolution layers to extract features, along with two pooling layers to reduce the dimension of the features. The dropout layer discards 20% of the data to prevent overfitting. The multi-dimensional data is then flattened into vectors. The two den...
model = keras.Sequential([ keras.layers.Conv2D(32, (5,5), activation = 'relu'), keras.layers.MaxPool2D(pool_size = (2,2)), keras.layers.Conv2D(32, (5,5), activation = 'relu'), keras.layers.MaxPool2D(pool_size = (2,2)), keras.layers.Dropout(rate = 0.2), keras.layers.Flatten(), ...
10000/10000 - 1s - loss: 0.0290 - accuracy: 0.9921 Test accuracy: 0.9921
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
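We can check the flattened size by hand. Assuming 'valid' convolutions (no padding) with stride 1 and non-overlapping 2x2 pooling, as the cell above uses, the spatial size shrinks as follows:

```python
# Quick sanity check of spatial sizes through the network described above
def conv_out(n, k):
    return n - k + 1  # 'valid' convolution output size

def pool_out(n, p):
    return n // p  # non-overlapping max-pooling

n = 28               # MNIST input width/height
n = conv_out(n, 5)   # 24 after the first 5x5 conv
n = pool_out(n, 2)   # 12 after 2x2 max-pooling
n = conv_out(n, 5)   # 8 after the second 5x5 conv
n = pool_out(n, 2)   # 4 after 2x2 max-pooling
flat = n * n * 32    # 512 values enter the dense layers
print(n, flat)  # 4 512
```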
The accuracy of the CNN on the training set is 99.46% and its performance on the testing set is 99.21%. No overfitting; we have a robust model! Saving the trained model. Below is the summary of the model. It is amazing that we have trained 109,930 parameters! Now, save this model so we don't have to train it again in the future.
# Show the model architecture model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) multiple 832 ____________________________________...
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Just like in the previous notebook, we can save this model as well.
model.save("CNN_model.h5")
_____no_output_____
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Let's play with a fun fake dataset. This dataset contains a few features and a dependent variable which says whether we are ever going to graduate. Importing a few libraries
from sklearn import datasets,model_selection import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import numpy as np from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from ipywidgets import interactive from sklearn.preprocessing import MinMaxScaler from ...
_____no_output_____
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
Then, we will load our fake dataset and split it into two parts: one for training and one for testing.
student = pd.read_csv('LionForests-Bot/students2.csv') feature_names = list(student.columns)[:-1] class_names=["Won't graduate",'Will graduate (eventually)'] X = student.iloc[:, 0:-1].values y = student.iloc[:, -1].values x_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.3,random_sta...
_____no_output_____
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
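Under the hood, `train_test_split` just shuffles indices and cuts them at the requested ratio. A minimal pure-Python sketch (the helper below is ours, not sklearn's):

```python
import random

def train_test_split(X, y, test_size=0.3, seed=0):
    # Minimal sketch of sklearn's model_selection.train_test_split:
    # shuffle indices once, then cut at the requested ratio.
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(round(len(X) * (1 - test_size)))
    train, test = idx[:cut], idx[cut:]
    return ([X[i] for i in train], [X[i] for i in test],
            [y[i] for i in train], [y[i] for i in test])

X = [[i] for i in range(10)]
y = list(range(10))
x_tr, x_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, seed=42)
print(len(x_tr), len(x_te))  # 7 3
```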
We also scale our data to the range [0,1] so that the interpretations are comparable later.
scaler = MinMaxScaler() scaler.fit(x_train) x_train = scaler.transform(x_train) x_test = scaler.transform(x_test)
_____no_output_____
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
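Per feature, `MinMaxScaler` applies `(x - min) / (max - min)`, with min and max taken from the training set only. A tiny worked example on one illustrative feature column:

```python
# What MinMaxScaler computes per feature: (x - min) / (max - min),
# with min and max taken from the training column.
x_train_col = [1.0, 3.0, 5.0]  # one illustrative feature column
lo, hi = min(x_train_col), max(x_train_col)
scaled = [(v - lo) / (hi - lo) for v in x_train_col]
print(scaled)  # [0.0, 0.5, 1.0]
```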
Now, we will train a linear model, called logistic regression, on our dataset and evaluate its performance.
#lin_model = LogisticRegression(solver="newton-cg",penalty='l2',max_iter=1000,C=100,random_state=0) lin_model = LogisticRegression(solver="liblinear",penalty='l1',max_iter=1000,C=10,random_state=0) lin_model.fit(x_train, y_train) predicted_train = lin_model.predict(x_train) predicted_test = lin_model.predict(x_test) pr...
Logistic Regression Model Performance: Accuracy in Train Set 0.8414285714285714 Accuracy in Test Set 0.85
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
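Once trained, logistic regression is just a weighted sum pushed through a sigmoid; the model predicts the positive class when that sum is above 0, i.e. when the probability is above 0.5. A minimal sketch with a made-up weight and input:

```python
import math

def predict(w, b, x):
    # Linear score, then the logistic sigmoid turns it into a probability
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    return int(p > 0.5), p

label, prob = predict([1.0], 0.0, [2.0])  # made-up weight and input
print(label)  # 1
```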
To globally interpret this model, we will plot the weight of each variable/feature.
weights = lin_model.coef_ model_weights = pd.DataFrame({ 'features': list(feature_names),'weights': list(weights[0])}) #model_weights = model_weights.sort_values(by='weights', ascending=False) #Normal sort model_weights = model_weights.reindex(model_weights['weights'].abs().sort_values(ascending=False).index) #Sort by ...
Number of features: 5
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
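The sorting trick in the cell above, ranking by absolute weight, is worth spelling out: the sign of a weight gives the direction of influence, its magnitude the strength. A sketch with hypothetical feature names and weights (for illustration only):

```python
# Hypothetical feature names and weights, for illustration only
weights = {"grade_avg": 2.1, "attendance": -1.7, "clubs": 0.5,
           "age": -0.3, "siblings": 0.05}

# Rank by |weight|: strongest influence first, regardless of direction
ranked = sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
for name, w in ranked:
    print(f"{name:>10s}  {w:+.2f}")
```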
import tensorflow as tf import matplotlib.pyplot as plt fashion = tf.keras.datasets.fashion_mnist (train_data, train_lable), (test_data, test_lable)= fashion.load_data() train_data= train_data/255 test_data=test_data/255 model = tf.keras.Sequential([ tf.keras.layers.Flatten(), ...
313/313 [==============================] - 1s 2ms/step - loss: 0.3598 - accuracy: 0.8886
MIT
fashion.ipynb
rajeevak40/Course_AWS_Certified_Machine_Learning
Using CNN
(train_data1, train_lable1), (test_data1, test_lable1)= fashion.load_data() train_data1=train_data1.reshape(60000, 28,28,1) test_data1= test_data1.reshape(10000,28,28,1) train_data1= train_data1/255 test_data1=test_data1/255 model = tf.keras.Sequential([ tf.keras.layers.Conv2D(128,(3,3), ...
_____no_output_____
MIT
fashion.ipynb
rajeevak40/Course_AWS_Certified_Machine_Learning
Linear Regression Implementation from Scratch:label:`sec_linear_scratch`Now that you understand the key ideas behind linear regression,we can begin to work through a hands-on implementation in code.In this section, (**we will implement the entire method from scratch,including the data pipeline, the model,the loss func...
%matplotlib inline import random import tensorflow as tf from d2l import tensorflow as d2l
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Generating the DatasetTo keep things simple, we will [**construct an artificial datasetaccording to a linear model with additive noise.**]Our task will be to recover this model's parametersusing the finite set of examples contained in our dataset.We will keep the data low-dimensional so we can visualize it easily.In t...
def synthetic_data(w, b, num_examples): #@save """Generate y = Xw + b + noise.""" X = tf.zeros((num_examples, w.shape[0])) X += tf.random.normal(shape=X.shape) y = tf.matmul(X, tf.reshape(w, (-1, 1))) + b y += tf.random.normal(shape=y.shape, stddev=0.01) y = tf.reshape(y, (-1, 1)) return X,...
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
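For readers without TensorFlow at hand, the same generator can be sketched in plain NumPy (an equivalent under the same w, b, and noise assumptions, not the book's code):

```python
import numpy as np

def synthetic_data(w, b, num_examples):
    # NumPy sketch of the TensorFlow synthetic_data above: y = Xw + b + noise
    X = np.random.normal(size=(num_examples, len(w)))
    y = X @ w.reshape(-1, 1) + b
    y += np.random.normal(scale=0.01, size=y.shape)  # small Gaussian noise
    return X, y

true_w, true_b = np.array([2.0, -3.4]), 4.2
features, labels = synthetic_data(true_w, true_b, 1000)
print(features.shape, labels.shape)  # (1000, 2) (1000, 1)
```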
Note that [**each row in `features` consists of a 2-dimensional data exampleand that each row in `labels` consists of a 1-dimensional label value (a scalar).**]
print('features:', features[0],'\nlabel:', labels[0])
features: tf.Tensor([ 0.8627048 -0.8168014], shape=(2,), dtype=float32) label: tf.Tensor([8.699112], shape=(1,), dtype=float32)
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning