When the number of samples to be explained is 100, `"auto"` selects `"v2"` as the most appropriate TreeSHAP algorithm.
# number of samples to be explained
num_sample = 100

# compute SHAP values via FastTreeSHAP v0 (i.e., original TreeSHAP)
shap_explainer = fasttreeshap.TreeExplainer(rf_model, algorithm = "v0")
shap_values_v0 = shap_explainer(test.iloc[:num_sample]).values

# compute SHAP values via FastTreeSHAP v1
shap_explainer = fast...
BSD-2-Clause
notebooks/FastTreeSHAP_Census_Income.ipynb
linkedin/FastTreeSHAP
When the number of samples to be explained is 50, `"auto"` selects `"v1"` as the most appropriate TreeSHAP algorithm.
# number of samples to be explained
num_sample = 50

# compute SHAP values via FastTreeSHAP v0 (i.e., original TreeSHAP)
shap_explainer = fasttreeshap.TreeExplainer(rf_model, algorithm = "v0")
shap_values_v0 = shap_explainer(test.iloc[:num_sample]).values

# compute SHAP values via FastTreeSHAP v1
shap_explainer = fastt...
Automatic algorithm selection in very large models

In very large models, `"auto"` selects `"v1"` instead of `"v2"` when a potential memory risk is detected.
n_estimators = 200  # number of trees in random forest model
max_depth = 20      # maximum depth of any trees in random forest model

# train a random forest model
rf_model = RandomForestClassifier(n_estimators = n_estimators, max_depth = max_depth, random_state = 0)
rf_model.fit(train, label_train)

# estimated memory usage...
Importing Libraries
import pandas as pd
import numpy as np
import shutil
import os
import networkx as nx
BSL-1.0
Create Sorted Graph.ipynb
AhmadTaha96/Facebook-Friendship-Prediction
Create Sorted Graph

We will now design features based on geometric information in the graph: an embedding for each node that, given its neighborhood, captures the node's place within the graph.
train_graph = nx.read_edgelist("Data/train graph.csv", comments = 's', create_using = nx.DiGraph(), nodetype = int, delimiter = ",")
We are going to use the KarateClub library, specifically to import the NetMF embedding algorithm. KarateClub does not accept a scattered graph (one whose node indices are unsorted), so we have to build a new graph from our train graph.
# to get back from sorted graph to train graph
base_nodes = dict()
# to move from train graph to sorted graph
reflection = dict()
in_edges = dict()
out_edges = dict()
sorted_graph = nx.DiGraph()
i = 0
for node in train_graph.nodes():
    base_nodes[i] = node
    reflection[node] = i
    in_edges[i] = train_graph.predecesso...
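Since the relabeling cell is truncated, here is a self-contained pure-Python sketch of the same idea; the edge list is a made-up stand-in for the train graph (networkx's `relabel_nodes` could also do this directly on the graph):

```python
# Pure-Python sketch of the relabeling: map arbitrary node ids to 0..n-1,
# keeping dictionaries to move between the two labelings.
edges = [(10, 3), (3, 7), (7, 10)]  # hypothetical stand-in for the train-graph edges

nodes = []
for u, v in edges:
    for node in (u, v):
        if node not in nodes:
            nodes.append(node)

reflection = {node: i for i, node in enumerate(nodes)}    # train graph -> sorted graph
base_nodes = {i: node for node, i in reflection.items()}  # sorted graph -> train graph
sorted_edges = [(reflection[u], reflection[v]) for u, v in edges]
print(sorted_edges)  # node ids are now consecutive integers starting at 0
```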
Neural Network
import numpy as np
import pickle
import gzip

def load_data():
    """
    MNIST comprises three parts ->
        Training Data: 50,000 entries
        Validation Data: 10,000 entries
        Test Data: 10,000 entries
    One Entry of Input = 28 * 28 = 784
    One Entry of Output = 0 - 9 {integer}
    train...
Epoch 0: 5280 / 10000 Epoch 1: 5560 / 10000 Epoch 2: 5614 / 10000 Epoch 3: 5656 / 10000 Epoch 4: 6032 / 10000 Epoch 5: 6389 / 10000 Epoch 6: 6429 / 10000 Epoch 7: 6440 / 10000 Epoch 8: 7232 / 10000 Epoch 9: 7268 / 10000
MIT
Neural Network.ipynb
utkarshg6/neuralnetworks
Data Analytics Interactive Distribution Transformations in Python Michael Pyrcz, Associate Professor, University of Texas at Austin [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?us...
import numpy as np               # ndarrays for gridded data
import pandas as pd              # DataFrames for tabular data
import matplotlib.pyplot as plt  # plotting
from scipy import stats          # summary statistics
import math                      # trigonometry etc.
i...
MIT
Interactive_Distribution_Transformations.ipynb
GeostatsGuy/InteractivePython
Set the Random Number Seed

Set the random number seed so that we have a repeatable workflow.
seed = 73073
Loading Tabular Data

Here's the command to load our comma-delimited data file into a Pandas DataFrame object. For fun, try misspelling the name; you will get an ugly, long error.
data_url = "https://raw.githubusercontent.com/GeostatsGuy/GeoDataSets/master/sample_data.csv"
df = pd.read_csv(data_url)  # load our data table
It worked; we loaded our file into our DataFrame called 'df'. But how do you really know that it worked? Visualizing the DataFrame would be useful, and we already learned about these methods in this demo (https://git.io/fNgRW). We can preview the DataFrame by printing a slice or by utilizing the 'head' DataFrame member fu...
df.head(n=6) # we could also use this command for a table preview
Calculating and Plotting a CDF by Hand

Let's demonstrate the calculation and plotting of a non-parametric CDF by hand:

1. make a copy of the feature as a 1D array (ndarray from NumPy)
2. sort the data in ascending order
3. assign cumulative probabilities based on the tail assumptions
4. plot cumulative probability vs. value
por = df['Porosity'].copy(deep = True).values  # make a deep copy of the feature from the DataFrame
print('The ndarray has a shape of ' + str(por.shape) + '.')
por = np.sort(por)  # sort the data in ascending order
n = por.shape[0]    # get the number of data samples
cp...
The ndarray has a shape of (261,).
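The truncated cell can be sketched end to end as follows; the synthetic porosity values and the mid-point tail convention are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(73073)
por = rng.normal(0.15, 0.03, size=261)  # synthetic stand-in for the porosity feature
por = np.sort(por)                      # step 2: sort in ascending order
n = por.shape[0]
cprob = (np.arange(n) + 0.5) / n        # step 3: cumulative probabilities, never exactly 0 or 1
print(por.shape, cprob[0], cprob[-1])
```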
Transformation to a Parametric Distribution

We can transform our data feature distribution to any parametric distribution with this workflow:

1. Calculate the cumulative probability value of each of our data values, $p_{\alpha} = F_x(x_\alpha)$, $\forall$ $\alpha = 1,\ldots, n$.
2. Apply the inverse of the target paramet...
y = np.zeros(n)
for i in range(0, n):
    y[i] = norm.ppf(cprob[i], loc=0.0, scale=1.0)

plt.subplot(121)
plt.plot(por, cprob, alpha = 0.2, c = 'black')  # plot piecewise linear interpolation
plt.scatter(por, cprob, s = 10, alpha = 1.0, c = 'red', edgecolor = 'black')  # plot the CDF points
plt.grid(); plt.xlim([0.05,0.25]); ...
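The core of the transform is the inverse standard-normal CDF, `scipy.stats.norm.ppf`; a minimal sketch, assuming the mid-point cumulative-probability convention:

```python
import numpy as np
from scipy.stats import norm

n = 261
cprob = (np.arange(n) + 0.5) / n          # cumulative probabilities from step 1
y = norm.ppf(cprob, loc=0.0, scale=1.0)   # step 2: map each probability to a Gaussian quantile
# symmetric probabilities give a sample mean very close to 0
print(float(y.mean()))
```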
Let's make an interactive version of this plot to visualize the transformation.
# widgets and dashboard
l = widgets.Text(value=' Data Analytics, Distribution Transformation, Prof. Michael Pyrcz, The University of Texas at Austin', layout=Layout(width='950px', height='30px'))
data_index = widgets.IntSlider(min=1, max = n-1, value=1.0, step = 10.0, de...
Interactive Data Analytics Distribution Transformation Demonstration

Michael Pyrcz, Associate Professor, The University of Texas at Austin

Select any data value and observe the distribution transform by mapping through cumulative probability.

The Inputs

* **data_index** - the data index from 1 to n in the sorted ascen...
display(ui, interactive_plot) # display the interactive plot
Distribution Transform to a Non-Parametric Distribution

We can apply the mapping through cumulative probabilities to transform from any distribution to any other distribution.

* let's make a new data set by randomly sampling from the previous one and adding error

Then we can demonstrate transforming this dataset to match...
n_sample = 30
df_sample = df.sample(n_sample, random_state = seed)
df_sample = df_sample.copy(deep = True)  # make a deep copy of the feature from the DataFrame
df_sample['Porosity'] = df_sample['Porosity'].values + np.random.normal(loc = 0.0, scale = 0.01, size = n_sample)
df_sample = df_sample.s...
The sample ndarray has a shape of (30,).
Let's transform the values and show them on the target distribution.
y_sample = np.zeros(n_sample)
for i in range(0, n_sample):
    y_sample[i] = np.percentile(por, cprob_sample[i]*100, interpolation = 'linear')  # piecewise linear interpolation of inverse of target CDF

plt.subplot(121)
plt.plot(por_sample, cprob_sample, alpha = 0.2, c = 'black')  # plot piecewise linear interpolation...
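A self-contained sketch of the quantile mapping, with synthetic stand-in arrays; `np.percentile` with its default (linear) interpolation plays the role of the inverse of the target empirical CDF:

```python
import numpy as np

rng = np.random.default_rng(73073)
por = np.sort(rng.normal(0.15, 0.03, size=261))        # target distribution (original data)
por_sample = np.sort(rng.normal(0.15, 0.04, size=30))  # noisier sample to transform
n_sample = por_sample.shape[0]
cprob_sample = (np.arange(n_sample) + 0.5) / n_sample

# map each sample's cumulative probability through the target's quantiles
y_sample = np.percentile(por, cprob_sample * 100)
print(y_sample.min() >= por.min(), y_sample.max() <= por.max())
```

Because the probabilities stay strictly inside (0, 1), every transformed value lands inside the range of the target data.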
Let's make an interactive version of this plot to visualize the transformation.
# widgets and dashboard
l_sample = widgets.Text(value=' Data Analytics, Distribution Transformation, Prof. Michael Pyrcz, The University of Texas at Austin', layout=Layout(width='950px', height='30px'))
data_index_sample = widgets.IntSlider(min=1, max = n_sample, value=1...
Interactive Data Analytics Distribution Transformation Demonstration

Michael Pyrcz, Associate Professor, The University of Texas at Austin

Select any data value and observe the distribution transform by mapping through cumulative probability.

The Inputs

* **data_index** - the data index from 1 to n in the sorted ascen...
display(ui_sample, interactive_plot_s) # display the interactive plot
To summarize, let's look at a DataFrame with the original noisy sample and the values transformed to match the original distribution.

* we're making and showing a table of original values, $x_{\beta}$ $\forall$ $\beta = 1, \ldots, n_{sample}$, and the transformed values, $y_{\beta}$ $\forall$ $\beta = 1, \ldots, n_{sample}$.
df_sample['Transformed_Por'] = y_sample
df_sample.head(n=n_sample)
**This notebook is an exercise in the [Python](https://www.kaggle.com/learn/python) course. You can reference the tutorial at [this link](https://www.kaggle.com/colinmorris/hello-python).**--- Welcome to your first set of Python coding problems. If this is your first time using Kaggle Notebooks, welcome! Notebooks ar...
print("You've successfully run some Python code")
print("Congratulations!")
print("Hello_World!")
Apache-2.0
Python-Kaggle/exercise-syntax-variables-and-numbers.ipynb
rhazra-003/30-Days-of-ML-Kaggle
Try adding another line of code in the cell above and re-running it. Now let's get a little fancier: Add a new code cell by clicking on an existing code cell, hitting the escape key, and then hitting the `a` or `b` key. The `a` key will add a cell above the current cell, and `b` adds a cell below. Great! Now you know ...
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex1 import *
print("Setup complete! You're ready to start question 0.")
0. *This is a silly question intended as an introduction to the format we use for hands-on exercises throughout all Kaggle courses.*

**What is your favorite color?**

To complete this question, create a variable called `color` in the cell below with an appropriate value. The function call `q0.check()` (which we've alread...
# create a variable called color with an appropriate value on the line below
# (Remember, strings in Python must be enclosed in 'single' or "double" quotes)
color = "Blue"

# Check your answer
q0.check()
Didn't get the right answer? How do you not even know your own favorite color?! Delete the `#` in the line below to make one of the lines run. You can choose between getting a hint or the full answer by choosing which line to remove the `#` from. Removing the `#` is called uncommenting, because it changes that line from a ...
#q0.hint()
q0.solution()
The upcoming questions work the same way. The only things that will change are the question numbers. For the next question, you'll call `q1.check()`, `q1.hint()`, `q1.solution()`; for question 2, you'll call `q2.check()`, and so on.

1. Complete the code below. In case it's helpful, here is the table of available arithme...
pi = 3.14159  # approximate
diameter = 3

# Create a variable called 'radius' equal to half the diameter
radius = diameter / 2

# Create a variable called 'area', using the formula for the area of a circle: pi times the radius squared
area = pi * radius * radius

# Check your answer
q1.check()

# Uncomment and run the lines be...
2. Add code to the following cell to swap variables `a` and `b` (so that `a` refers to the object previously referred to by `b` and vice versa).
########### Setup code - don't touch this part ######################
# If you're curious, these are examples of lists. We'll talk about
# them in depth a few lessons from now. For now, just know that they're
# yet another type of Python object, like int or float.
a = [1, 2, 3]
b = [3, 2, 1]
q2.store_original_ids()
##...
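One common solution (not necessarily the course's official answer) uses tuple unpacking:

```python
a = [1, 2, 3]
b = [3, 2, 1]

# Tuple unpacking evaluates the right-hand side first, then assigns,
# so the two names are swapped without a temporary variable.
a, b = b, a
print(a, b)  # [3, 2, 1] [1, 2, 3]
```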
3a. Add parentheses to the following expression so that it evaluates to 1.
(5 - 3) // 2

#q3.a.hint()

# Check your answer (Run this code cell to receive credit!)
q3.a.solution()
3b. 🌶️ Questions marked with a spicy pepper, like this one, are a bit harder.

Add parentheses to the following expression so that it evaluates to 0.
8 - (3 * 2) - (1 + 1)

#q3.b.hint()

# Check your answer (Run this code cell to receive credit!)
q3.b.solution()
4. Alice, Bob, and Carol have agreed to pool their Halloween candy and split it evenly among themselves. For the sake of their friendship, any candies left over will be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1.

Write an arithmetic expression below to calculate how ...
# Variables representing the number of candies collected by alice, bob, and carol
alice_candies = 121
bob_candies = 77
carol_candies = 109

# Your code goes here! Replace the right-hand side of this assignment with an expression
# involving alice_candies, bob_candies, and carol_candies
to_smash = (alice_candies + bob_...
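The leftover count is exactly what the modulus operator computes; a sketch of one possible answer:

```python
alice_candies = 121
bob_candies = 77
carol_candies = 109

# Total candies modulo 3 gives the remainder after an even three-way split
to_smash = (alice_candies + bob_candies + carol_candies) % 3
print(to_smash)  # 307 % 3 = 1
```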
Route_Dynamics Example
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
sys.path.append(module_path)
from route_dynamics.route_energy import longi_dynam_model as ldm
from route_dynamics.route_riders import route_riders as ride
from route_dynamics.route_visualizer import visualizer as vis
import matplotlib.pyplot as plt...
MIT
examples/spring_quarter_example.ipynb
SacPec/Route_Dynamics_S-dev
Layerwise Sequential Unit Variance (LSUV) Getting the MNIST data and a CNN
x_train, y_train, x_valid, y_valid = get_data()
x_train, x_valid = normalize_to(x_train, x_valid)
train_ds, valid_ds = Dataset(x_train, y_train), Dataset(x_valid, y_valid)

nh, bs = 50, 512
c = y_train.max().item() + 1
loss_func = F.cross_entropy

data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
mnist_view = view_tfm(1,28,2...
Apache-2.0
dev_course/dl2/07a_lsuv.ipynb
rohitgr7/fastai_docs
Now we're going to look at the paper [All You Need is a Good Init](https://arxiv.org/pdf/1511.06422.pdf), which introduces *Layer-wise Sequential Unit-Variance* (*LSUV*). We initialize our neural net with the usual technique, then we pass a batch through the model and check the outputs of the linear and convolutional l...
run.fit(2, learn)
train: [1.73625, tensor(0.3975, device='cuda:0')] valid: [1.68747265625, tensor(0.5652, device='cuda:0')] train: [0.356792578125, tensor(0.8880, device='cuda:0')] valid: [0.13243565673828125, tensor(0.9588, device='cuda:0')]
Now we recreate our model and we'll try again with LSUV. Hopefully, we'll get better results!
learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)
Helper function to get one batch of a given dataloader, with the callbacks called to preprocess it.
#export
def get_batch(dl, run):
    run.xb, run.yb = next(iter(dl))
    for cb in run.cbs: cb.set_runner(run)
    run('begin_batch')
    return run.xb, run.yb

xb, yb = get_batch(data.train_dl, run)
We only want the outputs of convolutional or linear layers. To find them, we need a recursive function. We can use `sum(list, [])` to concatenate the lists the function finds (`sum` applies the + operator between the elements of the list you pass it, beginning with the initial state in the second argument).
#export
def find_modules(m, cond):
    if cond(m): return [m]
    return sum([find_modules(o, cond) for o in m.children()], [])

def is_lin_layer(l):
    lin_layers = (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear, nn.ReLU)
    return isinstance(l, lin_layers)

mods = find_modules(learn.model, lambda o: isinstance(o,ConvLay...
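The `sum(list, [])` flattening trick can be seen in isolation:

```python
# sum() folds + over the list's elements, starting from the initial value [],
# so a list of lists is concatenated into one flat list.
nested = [[1, 2], [3], [4, 5]]
flat = sum(nested, [])
print(flat)  # [1, 2, 3, 4, 5]
```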
This is a helper function to grab the mean and std of the output of a hooked layer.
def append_stat(hook, mod, inp, outp):
    d = outp.data
    hook.mean, hook.std = d.mean().item(), d.std().item()

mdl = learn.model.cuda()
So now we can look at the mean and std of the conv layers of our model.
with Hooks(mods, append_stat) as hooks:
    mdl(xb)
    for hook in hooks: print(hook.mean, hook.std)
0.3813672363758087 0.6907835006713867 0.3570525348186493 0.651114284992218 0.28284627199172974 0.5356632471084595 0.2487572282552719 0.42617663741111755 0.15965904295444489 0.2474386990070343
We first adjust the bias terms to make the means 0, then we adjust the standard deviations to make the stds 1 (with a threshold of 1e-3). The `mdl(xb) is not None` clause is just there to pass `xb` through `mdl` and compute all the activations so that the hooks get updated.
#export
def lsuv_module(m, xb):
    h = Hook(m, append_stat)

    while mdl(xb) is not None and abs(h.mean)  > 1e-3: m.bias -= h.mean
    while mdl(xb) is not None and abs(h.std-1) > 1e-3: m.weight.data /= h.std

    h.remove()
    return h.mean, h.std
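The same iterate-until-unit-variance idea can be sketched framework-free with NumPy; the layer shapes, iteration count, and thresholds below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 3.0, size=(64, 32))  # deliberately badly scaled weights
b = np.zeros(64)
x = rng.normal(size=(512, 32))         # one batch of inputs

for _ in range(10):
    out = x @ W.T + b
    if abs(out.mean()) > 1e-3:
        b -= out.mean()                # shift bias until the output mean is ~0
    if abs(out.std() - 1) > 1e-3:
        W /= out.std()                 # rescale weights until the output std is ~1

out = x @ W.T + b
print(round(float(out.mean()), 3), round(float(out.std()), 3))
```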
We execute that initialization on all the conv layers in order:
for m in mods: print(lsuv_module(m, xb))
(0.17071205377578735, 1.0) (0.08888687938451767, 1.0000001192092896) (0.1499888300895691, 0.9999999403953552) (0.15749432146549225, 1.0) (0.3106708824634552, 1.0)
Note that the mean doesn't stay exactly at 0, since we change the standard deviation afterwards by scaling the weight. Training then begins on a better footing.
%time run.fit(2, learn)
train: [0.42438078125, tensor(0.8629, device='cuda:0')] valid: [0.14604696044921875, tensor(0.9548, device='cuda:0')] train: [0.128675537109375, tensor(0.9608, device='cuda:0')] valid: [0.09168212280273437, tensor(0.9733, device='cuda:0')] CPU times: user 4.09 s, sys: 504 ms, total: 4.6 s Wall time: 4.61 s
LSUV is particularly useful for more complex and deeper architectures that are hard to initialize to get unit variance at the last layer.

Export
!python notebook2script.py 07a_lsuv.ipynb
Converted 07a_lsuv.ipynb to exp/nb_07a.py
Linear Models: Multiple explanatory variables

Introduction

In this chapter we will explore fitting a linear model to data when you have multiple explanatory (predictor) variables. The aims of this chapter are[$^{[1]}$](fn1):

* Learning to build and fit a linear model that includes several explanatory variables
* Learnin...
load('../data/mammals.Rdata')
MIT
content/notebooks/16-MulExpl.ipynb
zoeydy/CMEESamraat
Look back at the end of the previous chapter to see how you saved the RData file. If `mammals.Rdata` is missing, just import the data again using `read.csv` and add the `log C Value` column to the imported data frame again (go back to the [ANOVA chapter](15-anova.ipynb) and have a look if you have forgotten how).Use `l...
str(mammals)
'data.frame': 379 obs. of 10 variables: $ Binomial : Factor w/ 379 levels "Acinonyx jubatus",..: 1 2 3 4 5 6 7 8 9 10 ... $ meanCvalue : num 2.56 2.64 3.75 3.7 3.98 4.69 2.15 2.43 2.73 2.92 ... $ Order : Factor w/ 21 levels "Artiodactyla",..: 2 17 17 17 1 1 4 17 17 17 ... $ AdultBodyMass_g: num...
[Previously](14-regress.ipynb), we asked if carnivores or herbivores had larger genomes. Now we want to ask questions like: do ground-dwelling carnivores have larger genomes than arboreal or flying omnivores? We need to look at plots within groups.Before we do that, there is a lot of missing data in the data frame and ...
mammals <- subset(mammals, select = c(GroundDwelling, TrophicLevel, logCvalue))
mammals <- na.omit(mammals)
str(mammals)
'data.frame': 259 obs. of 3 variables: $ GroundDwelling: Factor w/ 2 levels "No","Yes": 2 2 2 2 2 1 2 1 1 1 ... $ TrophicLevel : Factor w/ 3 levels "Carnivore","Herbivore",..: 1 2 2 2 3 3 3 2 2 3 ... $ logCvalue : num 0.94 1.322 1.381 1.545 0.888 ... - attr(*, "na.action")= 'omit' Named int [1:120] 2 4 7 9 1...
Boxplots within groups

[Previously](14-regress.ipynb), we used the `subset` option to fit a model just to dragonflies. You can use `subset` with plots too.

$\star$ Add `par(mfrow=c(1,2))` to your script to split the graphics into two panels.

$\star$ Copy over and modify the code from the [ANOVA chapter](15-anova.ipynb) t...
library(lattice)
bwplot(logCvalue ~ TrophicLevel | GroundDwelling, data = mammals)
The code `logCvalue ~ TrophicLevel | GroundDwelling` means plot the relationship between genome size and trophic level, but group within levels of ground dwelling. We are using the function `bwplot`, which is provided by `lattice` to create box and whisker plots.$\star$ Create the lattice plots above from within your s...
groups <- list(mammals$GroundDwelling, mammals$TrophicLevel)
groupMeans <- tapply(mammals$logCvalue, groups, FUN = mean)
print(groupMeans)
Carnivore Herbivore Omnivore No 0.9589465 1.012459 1.191760 Yes 1.2138170 1.297662 1.299017
$\star$ Copy this code into your script and run it.

Use this code and the script from the [ANOVA chapter](15-anova.ipynb) to get the set of standard errors for the groups, `groupSE`:
seMean <- function(x){
    # get rid of missing values
    x <- na.omit(x)
    # calculate the standard error
    se <- sqrt(var(x)/length(x))
    # tell the function to report the standard error
    return(se)
}

groups <- list(mammals$GroundDwelling, mammals$TrophicLevel)
groupMeans <- tapply(mammals$logCvalue, groups, FUN=mean)
print...
Carnivore Herbivore Omnivore No 0.04842209 0.03418613 0.02410400 Yes 0.05975510 0.02787009 0.03586826
Now we can use `barplot`. The default option for a barplot of a table is to create a stacked barplot, which is not what we want. The option `beside=TRUE` makes the bars for each column appear side by side. Once again, we save the midpoints of the bars to add the error bars. The other options in the code below change the col...
# get upper and lower standard error height
upperSE <- groupMeans + groupSE
lowerSE <- groupMeans - groupSE

# create barplot
barMids <- barplot(groupMeans, ylim=c(0, max(upperSE)), beside=TRUE,
                  ylab='log C value (pg)', col=c('white', 'grey70'))
arrows(barMids, upperSE, barMids, lowerSE, ang=90, code=3, len=0....
$\star$ Generate the barplot above and then edit your script to change the colours and error bar lengths to your taste.

Plotting means and confidence intervals

We'll use the `plotmeans` function again as an exercise to change graph settings and to prepare figures for reports and write-ups. This is the figure you should ...
model <- lm(logCvalue ~ TrophicLevel + GroundDwelling, data = mammals)
We're going to do things right this time and check the model diagnostics before we rush into interpretation.
library(repr); options(repr.plot.res = 100, repr.plot.width = 7, repr.plot.height = 8)  # Change plot size
par(mfrow=c(2,2))
plot(model)
library(repr); options(repr.plot.res = 100, repr.plot.width = 6, repr.plot.height = 6)  # Change plot size
Examine these diagnostic plots. There are six predicted values now - three trophic levels for each of the two levels of ground dwelling. Those plots look ok so now we can look at the analysis of variance table:
anova(model)
*Ignore the $p$ values*! Yes, they're highly significant but we want to understand the model, not rubber stamp it with 'significant'.The sums of squares for the variables are both small compared to the residual sums of squares — there is lots of unexplained variation. We can calculate the $r^2$ as explained sums of squ...
summary(model)
Starting at the bottom of this output, `summary` has again calculated $r^2$ for us and also an $F$ statistic for the whole model, which matches the calculation above.The other important bits are the four coefficients. The intercept is now the reference level for two variables: it is the mean for carnivores that are not...
# data frame of combinations of variables
gd <- rep(levels(mammals$GroundDwelling), times = 3)
print(gd)
tl <- rep(levels(mammals$TrophicLevel), each = 2)
print(tl)
predVals <- data.frame(GroundDwelling = gd, TrophicLevel = tl)
Now that we have the data frame of values we want, we can use `predict`. Just as when we created log values, we can save the output back into a new column in the data frame:
predVals$predict <- predict(model, newdata = predVals)
print(predVals)
GroundDwelling TrophicLevel predict 1 No Carnivore 0.9797572 2 Yes Carnivore 1.1892226 3 No Herbivore 1.0563447 4 Yes Herbivore 1.2658102 5 No Omnivore 1.1524491 6 Yes Omnivore 1.3619145
Note that these are in the same order as the bars from your barplot.

$\star$ Make a copy of the barplot and arrows code from above and modify it
barMids <- barplot(groupMeans, ylim=c(0, 1.4), ylab='log C value (pg)',
                   beside=TRUE, col=c('white', 'grey70'))
arrows(barMids, upperSE, barMids, lowerSE, ang=90, code=3, len=0.1)
points(barMids, predVals$predict, col='red', pch=12)
PYNQ Microblaze Libraries in C---- Aim/s* Explore the various libraries that ship with PYNQ Microblaze.* Try the example using the Grove ADC connector.* Print from the Microblaze using `pyprintf`. References* [PYNQ](http://pynq.readthedocs.io)* [Grove](https://pynq.readthedocs.io/en/latest/pynq_libraries/grove.html) ...
from pynq.overlays.base import BaseOverlay
base = BaseOverlay('base.bit')
BSD-3-Clause
board/RFSoC2x2/base/notebooks/microblaze/microblaze_c_libraries.ipynb
Zacarhay/RFSoC2x2-PYNQ
In the next cell, `PMOD_G4_B` and `PMOD_G4_A` are 6 and 2, respectively.
%%microblaze base.PMODA
#include <i2c.h>
#include <pmod_grove.h>

int read_adc() {
    i2c device = i2c_open(PMOD_G4_B, PMOD_G4_A);
    unsigned char buf[2];
    buf[0] = 0;
    i2c_write(device, 0x50, buf, 1);
    i2c_read(device, 0x50, buf, 2);
    return ((buf[0] & 0x0F) << 8) | buf[1];
}

read_adc()
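The return expression packs a 12-bit reading from the two bytes read over I2C; the bit manipulation can be checked in plain Python (the sample bytes are made up):

```python
def decode_adc(buf):
    # The low nibble of the first byte holds the high 4 bits of the
    # 12-bit reading; the second byte holds the low 8 bits.
    return ((buf[0] & 0x0F) << 8) | buf[1]

print(hex(decode_adc([0x8A, 0xFF])))  # 0xaff: the top nibble 0x8 is masked off
```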
We can use the `gpio` and `timer` components in concert to flash an LED connected to G1. The `timer` header provides PWM and program-delay functionality, although only one of the two can be used at a time.
%%microblaze base.PMODA
#include <timer.h>
#include <gpio.h>
#include <pmod_grove.h>

void flash_led() {
    gpio led = gpio_open(PMOD_G1_A);
    gpio_set_direction(led, GPIO_OUT);
    int state = 0;
    while (1) {
        gpio_write(led, state);
        state = !state;
        delay_ms(500);
    }
}

flash_led()
---- `pyprintf`

The `pyprintf` library exposes a single `pyprintf` function which acts much like a regular `printf` but forwards its arguments to Python for formatting and display. This results in far lower code overhead than a regular `printf`, and it does not require access to standard input and output.
%%microblaze base.PMODA
#include <pyprintf.h>

int test_print(float value) {
    pyprintf("Printing %f from the microblaze!\n", value);
    return 0;
}

test_print(1.5)
Printing 1.500000 from the microblaze!
import tensorflow as tf
MIT
Tensorflow/TensorflowPrac15_Making_new_layers_and_model_via_subclassing.ipynb
Vinaypatil-Ev/vinEvPy-GoCoLab
Making new layers and models via subclassing
class CustomLayer(tf.keras.layers.Layer):
    def __init__(self, units=32, input_shape=32, name=None):
        super(CustomLayer, self).__init__()
        winit = tf.random_normal_initializer()
        self.w = tf.Variable(winit(shape=(input_shape, units), dtype="float32"), trainable=True)
        binit = tf.zeros_...
Instead of `tf.Variable`, use the built-in method `add_weight`:
class CustomLayer2(tf.keras.layers.Layer):
    def __init__(self, units=32, input_shape=32, name=None):
        super(CustomLayer2, self).__init__()
        self.w = self.add_weight(shape=(input_shape, units), initializer="random_normal", trainable=True)
        self.b = self.add_weight(shape=(units,), initializer="...
For an unknown input shape, use the `build` method:
class CustomLayer3(tf.keras.layers.Layer):
    def __init__(self, units):
        super(CustomLayer3, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer = "random_normal",
            ...
_____no_output_____
MIT
Tensorflow/TensorflowPrac15_Making_new_layers_and_model_via_subclassing.ipynb
Vinaypatil-Ev/vinEvPy-GoCoLab
A layer composed of other layers
class CustomCompositeLayer(tf.keras.layers.Layer): def __init__(self, units=1): super(CustomCompositeLayer, self).__init__() self.l1 = CustomLayer3(32) self.l2 = CustomLayer3(32) self.l3 = CustomLayer3(units) def call(self, inputs): x = self.l1(inputs) ...
_____no_output_____
MIT
Tensorflow/TensorflowPrac15_Making_new_layers_and_model_via_subclassing.ipynb
Vinaypatil-Ev/vinEvPy-GoCoLab
The `add_loss` method inside `call`
class ActivityRegularizer(tf.keras.layers.Layer): def __init__(self, rate): super(ActivityRegularizer, self).__init__() self.rate = rate def call(self, inputs): self.add_loss(self.rate * tf.reduce_sum(inputs)) return inputs class LayerWithKernelRegularizer(tf.keras.lay...
_____no_output_____
MIT
Tensorflow/TensorflowPrac15_Making_new_layers_and_model_via_subclassing.ipynb
Vinaypatil-Ev/vinEvPy-GoCoLab
Variational autoencoder model
import tensorflow as tf class Sampling(tf.keras.layers.Layer): def call(self, inputs): x, y = inputs batch = tf.shape(x)[0] dim = tf.shape(x)[1] epsilon = tf.keras.backend.random_normal(shape=(batch, dim)) return x + tf.exp(0.5 * y) * epsilon class Encoder(tf.keras.l...
_____no_output_____
MIT
Tensorflow/TensorflowPrac15_Making_new_layers_and_model_via_subclassing.ipynb
Vinaypatil-Ev/vinEvPy-GoCoLab
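The `Sampling` layer above implements the VAE reparameterization trick: z = mean + exp(0.5·log_var)·ε. A numpy sketch of the same arithmetic (the variable names are mine, not from the notebook):

```python
import numpy as np

def reparameterize(z_mean, z_log_var, epsilon):
    # z = mu + sigma * eps, with sigma = exp(0.5 * log_var)
    return z_mean + np.exp(0.5 * z_log_var) * epsilon

z_mean = np.array([[0.5, -1.0]])
z_log_var = np.zeros((1, 2))      # log_var = 0  ->  sigma = 1
eps = np.array([[0.1, 0.2]])
z = reparameterize(z_mean, z_log_var, eps)
```

With log_var = 0 the standard deviation is 1, so the sample is simply the mean plus the noise; with ε = 0 the sample collapses to the mean.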
Substantial class imbalance for the normal/abnormal task. Given this, we'll derive weights for a weighted binary cross entropy loss function.
train_abnl = pd.read_csv(data_path/'train-abnormal.csv', header=None, names=['Case', 'Abnormal'], dtype={'Case': str, 'Abnormal': np.int64}) print(train_abnl.shape) train_abnl.head() w = train_abnl.Abnormal.sum() / train_abnl.shape[0] print(w) weights = Tensor([w, 1-w]) pr...
_____no_output_____
Apache-2.0
MRNet_fastai_example.ipynb
nswitanek/mrnet-fastai
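The weighting scheme in the cell above is simple: if a fraction w of the training cases is abnormal, the normal class gets weight w and the abnormal class gets weight 1−w, so the rarer class contributes more to the loss. A standalone sketch with made-up labels (the counts below are invented, not from the MRNet data):

```python
import numpy as np

# hypothetical 0/1 abnormal flags for ten training cases
labels = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1])

w = labels.sum() / labels.shape[0]   # fraction of abnormal cases
weights = np.array([w, 1 - w])       # [weight for class 0, weight for class 1]
```

Here 8 of 10 cases are abnormal, so the majority (abnormal) class is down-weighted to 0.2 and the minority class up-weighted to 0.8.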
Load previously created files: - `df_abnl` -> master `df` for use with the Data Block API; also contains the number of slices per series - `slice_stats` -> `dict` stored as `json` with the mean and max of slices per series
df_abnl = pd.read_pickle('df_abnl.pkl') df_abnl.head() with open('slice_stats.json', 'r') as file: stats = json.load(file) stats max_slc = stats['sagittal']['max'] print(max_slc)
51
Apache-2.0
MRNet_fastai_example.ipynb
nswitanek/mrnet-fastai
MRNet implementation: modified from the original [paper](https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002699) to (sort of) work with `fastai`
il = MR3DImageList.from_df(df_abnl, sag_path, suffix='.npy') il.items[0] il sd = il.split_from_df(col=2) sd ll = sd.label_from_df(cols=1) ll # tfms = get_transforms() bs = 1 data = ll.databunch(bs=bs) learn = mrnet_learner(data, MRNet(), opt_func=optim.Adam, loss_func=WtBCELoss(weights), callbacks...
_____no_output_____
Apache-2.0
MRNet_fastai_example.ipynb
nswitanek/mrnet-fastai
Accuracy is terrible, but what do you expect out of a single linear layer...?
learn.unfreeze() learn.summary()
_____no_output_____
Apache-2.0
MRNet_fastai_example.ipynb
nswitanek/mrnet-fastai
https://pbpython.com/effective-matplotlib.html
# setup imports and read in some data: import pandas as pd import matplotlib.pyplot as plt from matplotlib.ticker import FuncFormatter #%matplotlib notebook df = pd.read_excel("https://github.com/chris1610/pbpython/blob/master/data/sample-salesv3.xlsx?raw=true") df.head() # summarize the data so we can see the total...
_____no_output_____
MIT
styling/effective_matplotlib.ipynb
TillMeineke/machine_learning
Linear algebra
import numpy as np A = np.array([[1,2,3], [4,5,6]]) A A.shape E = np.eye(5) E np.zeros((2,3)) ones = np.ones((2,3)) ones -1 * ones 5 + ones v = np.array([1,2,3]) ones + v np.sin(A) B = np.array([[1,2,1],[2,2,3],[4,5,5]]) B.shape A.shape C = np.ones((3,3)) B + C B * C == B A.dot(B) B v = B[:,0] v v = np.ar...
_____no_output_____
MIT
2021_2022/live/01_linear_regression.ipynb
MATF-RI/Materijali-sa-vezbi
Linear regression
# Ax = b A = np.array([[2,0], [-1,1], [0,2]]) b = np.array([2,0,-2]) x = LA.inv(A.T.dot(A)).dot(A.T).dot(b) x x, rss, _, _ = LA.lstsq(A, b, rcond=None) x rss
_____no_output_____
MIT
2021_2022/live/01_linear_regression.ipynb
MATF-RI/Materijali-sa-vezbi
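The two methods in the cell above should agree: solving the normal equations AᵀA x = Aᵀb yields the same minimizer as `np.linalg.lstsq`. A quick self-contained check with the same A and b:

```python
import numpy as np

A = np.array([[2., 0.], [-1., 1.], [0., 2.]])
b = np.array([2., 0., -2.])

# normal equations: x = (A^T A)^{-1} A^T b
x_normal = np.linalg.inv(A.T @ A) @ A.T @ b

# library least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For this system both routes give x = (2/3, −2/3); in practice `lstsq` is preferred since it avoids explicitly inverting AᵀA.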
Task 1: Determine the coefficients w0 and w1 so that the function f(x) = w0 + w1·x best approximates, in the least-squares sense, the set of points (0, 1.2), (0.5, 2.05), (1, 2.9) and (−0.5, 0.1) in the plane.
# Aw = w0 + w1x x = np.array([0, 0.5, 1, -0.5]) y = np.array([1.2, 2.05, 2.9, 0.1]) ones = np.ones(4) A = np.vstack((ones, x)).T LA.lstsq(A, y, rcond=None)
_____no_output_____
MIT
2021_2022/live/01_linear_regression.ipynb
MATF-RI/Materijali-sa-vezbi
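Worked out, the least-squares line for the four points above is f(x) = 1.1 + 1.85x. A self-contained numpy check:

```python
import numpy as np

x = np.array([0, 0.5, 1, -0.5])
y = np.array([1.2, 2.05, 2.9, 0.1])

# design matrix with a column of ones for the intercept
A = np.vstack((np.ones_like(x), x)).T
(w0, w1), *_ = np.linalg.lstsq(A, y, rcond=None)
```

As a sanity check, the residuals of the fitted line sum to zero and are orthogonal to x, which is exactly the least-squares optimality condition.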
Task 2: Determine the values of the coefficients a and b so that the function f(x) = a + b·sin x approximates, in the least-squares sense, the set of points (2, 2.6), (−1.22, −1.7), (8.32, 2.5) and (4.23, −1.6) in the plane. Give an estimate of the error. Plot the set of points and draw the resulting function.
x = np.array([2,-1.22,8.32,4.23]) y = np.array([2.6,-1.7,2.5,-1.6]) A = np.vstack((ones, np.sin(x))).T solution, rss, _, _ = LA.lstsq(A, y, rcond=None) a, b = solution print(a, b) from matplotlib import pyplot as plt xs = np.linspace(-5, 10, 100) plt.plot(xs, a + b * np.sin(xs)) plt.plot(x, y, 'o')
_____no_output_____
MIT
2021_2022/live/01_linear_regression.ipynb
MATF-RI/Materijali-sa-vezbi
Task 3: The file social_reach.csv contains advertising prices for different demographic groups, given in thousands of euros per 1000 views. Each of the three columns denotes a different advertising platform (for example, the platforms could be Facebook, Instagram, or YouTube). Each row denotes a different demographic gro...
import pandas as pd df = pd.read_csv('social_reach.csv') df y = 1000 * np.ones(10) y A = df[['web1', 'web2', 'web3']] A LA.lstsq(A, y, rcond=None)
_____no_output_____
MIT
2021_2022/live/01_linear_regression.ipynb
MATF-RI/Materijali-sa-vezbi
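social_reach.csv isn't reproduced here, so as a sketch of the same computation here is the least-squares solve with a made-up 3×3 reach matrix (every number below is invented): find the spend per platform so that each demographic group receives about 1000 views.

```python
import numpy as np

# hypothetical views per unit of spend: rows = demographic groups, cols = platforms
A = np.array([[10., 2., 1.],
              [3., 8., 2.],
              [1., 4., 9.]])
target = np.full(3, 1000.)   # desired views for each group

spend, *_ = np.linalg.lstsq(A, target, rcond=None)
```

Because this toy matrix is square and invertible, `lstsq` recovers the spend exactly; with more groups than platforms, as in the real data, the solution only minimizes the residual.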
Task 4: Each row in the file advertising.csv contains information about the prices, in thousands of dollars, of advertising services in a particular market. The first column refers to the price of television advertising, the second to radio, and the third to newspapers. The fourth column refers to the total sales of the products that were advertised in the given media. ...
from sklearn.linear_model import LinearRegression import pandas as pd from matplotlib import pyplot as plt df = pd.read_csv('advertising.csv') df.head() tv = df['TV'] sales = df['Sales'] plt.scatter(tv, sales) radio = df['Radio'] plt.scatter(radio, sales) newspaper = df['Newspaper'] plt.scatter(newspaper, sales) X = df...
_____no_output_____
MIT
2021_2022/live/01_linear_regression.ipynb
MATF-RI/Materijali-sa-vezbi
Convolutional Neural Networks Project: Write an Algorithm for a Dog Identification App ---In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is r...
import numpy as np from glob import glob # load filenames for human and dog images human_files = np.array(glob("lfw/*/*")) dog_files = np.array(glob("dogImages/*/*/*")) # print number of images in each dataset print('There are %d total human images.' % len(human_files)) print('There are %d total dog images.' % len(do...
There are 13233 total human images. There are 8351 total dog images.
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
Step 1: Detect HumansIn this section, we use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/ope...
import cv2 import matplotlib.pyplot as plt %matplotlib inline # extract pre-trained face detector face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml') # load color (BGR) image img = cv2.imread(human_files[4]) # conv...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter. In the above code, `faces` is a numpy array of detected faces, where each row corresponds ...
# returns "True" if face is detected in image stored at img_path def face_detector(img_path): img = cv2.imread(img_path) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray) return len(faces) > 0
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
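The assessment asked for in the next question boils down to counting detector hits over a list of files. The bookkeeping can be sketched with the detector stubbed out (no OpenCV needed; `detection_rate` and the fake detector are my own illustrative names):

```python
def detection_rate(paths, detector):
    """Percentage of paths on which detector returns True."""
    hits = sum(1 for p in paths if detector(p))
    return 100.0 * hits / len(paths)

# stub detector: pretend a face is found whenever the filename contains 'human'
fake_detector = lambda path: 'human' in path
paths = ['human_01.jpg', 'human_02.jpg', 'dog_01.jpg', 'human_03.jpg']
rate = detection_rate(paths, fake_detector)
```

Running the real `face_detector` over `human_files_short` and `dog_files_short` through the same helper gives the two percentages the question asks for.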
(IMPLEMENTATION) Assess the Human Face Detector__Question 1:__ Use the code cell below to test the performance of the `face_detector` function. - What percentage of the first 100 images in `human_files` have a detected human face? - What percentage of the first 100 images in `dog_files` have a detected human face? I...
human_files_short = human_files[:100] dog_files_short = dog_files[:100] print(len(human_files_short)) print(len(dog_files_short))
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
__Answer:__ (You can print out your results and/or write your percentages in this cell)
from tqdm import tqdm human_files_short = human_files[:100] dog_files_short = dog_files[:100] #-#-# Do NOT modify the code above this line. #-#-# ## TODO: Test the performance of the face_detector algorithm ## on the images in human_files_short and dog_files_short. human_detected = 0 dog_detected = 0 num_files = l...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this...
### (Optional) ### TODO: Test performance of another face detection algorithm. ### Feel free to use as many code cells as needed.
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
--- Step 2: Detect DogsIn this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images. Obtain Pre-trained VGG-16 ModelThe code cell below downloads the VGG-16 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a ve...
import torch import torchvision.models as models # define VGG16 model VGG16 = models.vgg16(pretrained=True) # check if CUDA is available use_cuda = torch.cuda.is_available() # move model to GPU if CUDA is available if use_cuda: VGG16 = VGG16.cuda() use_cuda VGG16
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image. (IMPLEMENTATION) Making Predictions with a Pre-trained ModelIn the next code cell, you will write a function that accepts a path to an image (such as...
from PIL import Image import torchvision.transforms as transforms # Set PIL to be tolerant of image files that are truncated. from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True def VGG16_predict(img_path): ''' Use pre-trained VGG-16 model to obtain index corresponding to predicted ImageNet ...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
(IMPLEMENTATION) Write a Dog DetectorWhile looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` t...
### returns "True" if a dog is detected in the image stored at img_path def dog_detector(img_path): ## COMPLETED: Complete the function. prediction = VGG16_predict(img_path) return 151 <= prediction <= 268
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
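The range test above can be exercised without a model by feeding it class indices directly; per the ImageNet class list referenced earlier, dog breeds occupy keys 151 through 268 inclusive, so both endpoints must pass:

```python
def is_dog_index(idx):
    # ImageNet classes 151-268 (inclusive) are dog breeds
    return 151 <= idx <= 268

# boundary check: just below, both ends, just above
results = [is_dog_index(i) for i in (150, 151, 268, 269)]
```

An off-by-one here (e.g. `151 < idx < 268`) would silently drop two breeds, which is why testing the boundaries matters.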
(IMPLEMENTATION) Assess the Dog Detector__Question 2:__ Use the code cell below to test the performance of your `dog_detector` function. - What percentage of the images in `human_files_short` have a detected dog? - What percentage of the images in `dog_files_short` have a detected dog? __Answer:__
### COMPLETED: Test the performance of the dog_detector function ### on the images in human_files_short and dog_files_short. # human_files_short human_detected = 0 dog_detected = 0 num_files = len(human_files_short) for i in range(0, num_files): human_path = human_files_short[i] dog_path = dog_files_short[i] ...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.htmlinception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.htmlid3), etc). Please use t...
### (Optional) ### COMPLETED: Report the performance of another pre-trained network. ### Feel free to use as many code cells as needed. ResNet50 = models.resnet50(pretrained=True) if use_cuda: ResNet50.cuda() # Performance variables human_files_ResNet50 = 0 dogs_files_ResNet50 = 0 num_files = len(human_files_sho...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
Percentage of dogs detected

| model | human_files_short | dog_files_short |
| --- | --- | --- |
| VGG-16 | 0.0% | 91.0% |
| ResNet-50 | 1.0% | 95.0% |
| Inception v3 | 1.0% | 92.0% |

--- Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we ne...
import torch import torchvision.models as models torch.cuda.empty_cache() # check if CUDA is available use_cuda = torch.cuda.is_available() import os from torchvision import datasets import torchvision.transforms as transforms ### COMPLETED: Write data loaders for training, validation, and test sets ## Specify approp...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
**Question 3:** Describe your chosen procedure for preprocessing the data. - How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?- Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not? **...
import torch.nn as nn import torch.nn.functional as F # define the CNN architecture class Net(nn.Module): ### COMPLETED: choose an architecture, and complete the class def __init__(self): super(Net, self).__init__() ## Define layers of a CNN # Convolutional layers self.conv1 = n...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. __Answer:__ Outline:
- Input: a fixed-size 224x224 RGB image
- Kernel size: 3x3, the smallest size that captures the notion of left/right, up/down, and center
- Padding: 1 for a 3x3 kernel, to keep the same spatial...
import torch.optim as optim ### COMPLETED: select loss function criterion_scratch = nn.CrossEntropyLoss() ### COMPLETED: select optimizer optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.01)
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
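`nn.CrossEntropyLoss` combines a log-softmax with a negative log-likelihood of the target class. A numpy sketch of the single-sample case (function and variable names are mine):

```python
import numpy as np

def cross_entropy(logits, target):
    # log-softmax followed by negative log-likelihood of the target class
    shifted = logits - logits.max()                 # shift for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[target]

# uniform logits over 3 classes -> loss is log(3)
loss_uniform = cross_entropy(np.zeros(3), 0)
# a confident, correct prediction -> loss near zero
loss_confident = cross_entropy(np.array([10., 0., 0.]), 0)
```

This is why an untrained classifier over 133 dog breeds starts near log(133) ≈ 4.89, a useful sanity check on the first training epoch.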
(IMPLEMENTATION) Train and Validate the ModelTrain and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_scratch.pt'`.
# the following import is required for training to be robust to truncated images from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path): """returns trained model""" # initialize tracker for minimum validation loss vali...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
(IMPLEMENTATION) Test the ModelTry out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
def test(loaders, model, criterion, use_cuda): # monitor test loss and accuracy test_loss = 0. correct = 0. total = 0. model.eval() for batch_idx, (data, target) in enumerate(loaders['test']): # move to GPU if use_cuda: data, target = data.cuda(), target.cuda() ...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
--- Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set. (IMPLEMENTATION) Specify Data Loaders for the Dog DatasetUse the code cell below to write thre...
import os import numpy as np from PIL import Image import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms, models ## COMPLETED: Specify data loaders # number of subprocesses to use for data loading num_workers = 0 # how many samples pe...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
(IMPLEMENTATION) Model ArchitectureUse transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable `model_transfer`.
import torchvision.models as models import torch.nn as nn ## COMPLETED: Specify model architecture #Check if CUDA is available use_cuda = torch.cuda.is_available() #Load the pretrained model from pytorch model_transfer = models.vgg16(pretrained=True) # Freeze training for all "features" layers for param in model_tran...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem. __Answer:__ I chose the VGG16 model because we have already worked on this model in previous exercises and it seems to work cor...
criterion_transfer = nn.CrossEntropyLoss() optimizer_transfer = optim.SGD(model_transfer.classifier.parameters(), lr= 0.001)
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
(IMPLEMENTATION) Train and Validate the ModelTrain and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_transfer.pt'`.
n_epochs=7 # define loader scratch loaders_transfer = {'train': train_loader, 'valid': valid_loader, 'test': test_loader} # train the model model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt') ...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
(IMPLEMENTATION) Test the ModelTry out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
(IMPLEMENTATION) Predict Dog Breed with the ModelWrite a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc) that is predicted by your model.
### COMPLETED: Write a function that takes a path to an image as input ### and returns the dog breed that is predicted by the model. # list of class names by index, i.e. a name can be accessed like class_names[0] #class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes] class_names = [item...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
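The list comprehension above strips the numeric prefix from class-directory names like `001.Affenpinscher` and replaces underscores with spaces. In isolation (the sample names are illustrative):

```python
# directory-style class names: "NNN.Breed_name"
raw = ['001.Affenpinscher', '002.Afghan_hound']

# drop the 4-character "NNN." prefix, then turn underscores into spaces
class_names = [item[4:].replace('_', ' ') for item in raw]
```

Note the slice `[4:]` assumes the prefix is always exactly three digits plus a dot, which holds for this dataset's 133 breed folders.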
--- Step 5: Write your AlgorithmWrite an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,- if a __dog__ is detected in the image, return the predicted breed.- if a __human__ is detected in the image, return the resembling dog breed.- if __ne...
### COMPLETED: Write your algorithm. ### Feel free to use as many code cells as needed. def run_app(img_path): ## handle cases for a human face, dog, and neither breed, prediction = predict_breed_transfer(img_path) #if it's a dog if dog_detector(img_path): plt.imshow(Image.open(img_path)) ...
_____no_output_____
MIT
dog_app.ipynb
blackcisne10/Dog-Breed-Classifier
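The branching in `run_app` is independent of the models: dog → predicted breed, human → resembling breed, otherwise an error. A sketch of that dispatch with the detectors injected as parameters (the stub lambdas and message strings are my own):

```python
def classify(img_path, dog_detector, face_detector, predict_breed):
    # same dispatch as run_app: dog -> breed, human -> resembling breed, else error
    if dog_detector(img_path):
        return 'Dog detected: ' + predict_breed(img_path)
    elif face_detector(img_path):
        return 'Human detected, resembling: ' + predict_breed(img_path)
    return 'Error: neither a dog nor a human was detected'

# exercise the human branch with stubbed detectors
msg = classify('x.jpg', lambda p: False, lambda p: True, lambda p: 'Beagle')
```

Checking the dog detector first matters: dog photos occasionally trigger the face detector too, so the order of the branches decides how such ambiguous images are labeled.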