Backpropagation Now that feedforward can be done, the next step is to decide how the parameters should change so that they minimize the cost function. Recall that the chosen cost function for this problem is $$ c(x, P) = \sum_i \big(g_t'(x_i, P) - \big( -\gamma g_t(x_i, P) \big)\big)^2 $$ In order to minimize it, an optima...
# The trial solution using the neural network:
def g_trial(x, params, g0=10):
    return g0 + x*neural_network(params, x)

# The right side of the ODE:
def g(x, g_trial, gamma=2):
    return -gamma*g_trial

# The cost function:
def cost_function(P, x):
    # Evaluate the trial function with the current para...
doc/src/NeuralNet/diffeq.ipynb
CompPhysics/MachineLearning
cc0-1.0
Gradient Descent The idea of the gradient descent algorithm is to update the parameters in the direction where the cost function decreases, that is, towards a minimum. In general, the update of some parameters $\vec \omega$ given a cost function defined by those parameters, $c(x, \vec \omega)$, goes as follows: $$ \vec \omega...
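The update rule above can be sketched in plain Python. This is a minimal, illustrative sketch, not the notebook's implementation: `gradient_descent` and `grad_cost` are hypothetical names, and the toy cost below is chosen only so that the minimum is known in advance.

```python
# Minimal gradient-descent sketch: repeatedly step the parameters
# against the gradient of the cost, omega <- omega - lmb * grad_cost(omega).
def gradient_descent(omega, grad_cost, lmb=0.001, num_iter=1000):
    for _ in range(num_iter):
        omega = [w - lmb*g for w, g in zip(omega, grad_cost(omega))]
    return omega

# Toy cost c(w) = (w0 - 3)^2 + (w1 + 1)^2 with gradient [2(w0-3), 2(w1+1)];
# its minimum is at w = [3, -1].
grad = lambda w: [2*(w[0] - 3), 2*(w[1] + 1)]
w_min = gradient_descent([0.0, 0.0], grad, lmb=0.1, num_iter=200)
print(w_min)
```

In the notebook the gradient of the cost with respect to the weights is obtained automatically via Autograd rather than written by hand as here.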
def solve_ode_neural_network(x, num_neurons_hidden, num_iter, lmb):
    ## Set up initial weights and biases

    # For the hidden layer
    p0 = npr.randn(num_neurons_hidden, 2)

    # For the output layer
    p1 = npr.randn(1, num_neurons_hidden + 1)  # +1 since bias is included

    P = [p0, p1]

    print('I...
An implementation of a Deep Neural Network As previously stated, a Deep Neural Network (DNN) follows the same concept of a neural network, but having more than one hidden layer. Suppose that the network has $N_{\text{hidden}}$ hidden layers where the $l$-th layer has $N_{\text{hidden}}^{(l)}$ neurons. The input is stil...
def deep_neural_network(deep_params, x):
    # N_hidden is the number of hidden layers
    N_hidden = np.size(deep_params) - 1  # -1 since params consist of parameters to all the hidden layers AND the output layer

    # Assumes input x being a one-dimensional array
    num_values = np.size(x)
    x = x.resha...
Backpropagation This step is very similar for the neural network. The idea in this step is the same as for the neural network, but with more parameters to update for. Again there is no need for computing the gradients analytically since Autograd does the work for us.
# The trial solution using the deep neural network:
def g_trial_deep(x, params, g0=10):
    return g0 + x*deep_neural_network(params, x)

# The same cost function as for the neural network, but calls deep_neural_network instead.
def cost_function_deep(P, x):
    # Evaluate the trial function with the current param...
Solving the ODE Finally, having set up the networks we are ready to use them to solve the ODE problem. If possible, it is always useful to have an analytical solution at hand to test if the implementations gives reasonable results. As a recap, the equation to solve is $$ g'(x) = -\gamma g(x) $$ where $g(0) = g_0$ wi...
def g_analytic(x, gamma=2, g0=10):
    return g0*np.exp(-gamma*x)
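The analytic solution can be verified numerically against the ODE itself. A small self-contained sketch (plain Python, assuming the notebook's defaults $\gamma = 2$, $g_0 = 10$): a central finite difference of $g$ should satisfy $g'(x) + \gamma g(x) \approx 0$.

```python
import math

def g_analytic(x, gamma=2, g0=10):
    # Same closed form as in the notebook: g(x) = g0 * exp(-gamma*x)
    return g0*math.exp(-gamma*x)

# Central finite difference g'(x) ~ (g(x+h) - g(x-h)) / (2h);
# the residual g'(x) + gamma*g(x) should be close to zero.
h = 1e-6
max_residual = max(
    abs((g_analytic(x + h) - g_analytic(x - h))/(2*h) + 2*g_analytic(x))
    for x in [0.0, 0.25, 0.5, 0.75, 1.0])
print(max_residual)
```

The residual is limited only by finite-difference and floating-point error, so it should be many orders of magnitude below the solution values.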
Using a neural network The code below solves the ODE using a neural network. The number of values for the input $\vec x$ is 10, the number of hidden neurons in the hidden layer is 10, and the step size used in gradient descent is $\lambda = 0.001$. The program updates the weights and biases in the network num_iter times. Finall...
npr.seed(15)

## Decide the values of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)

## Set up the initial parameters
num_hidden_neurons = 10
num_iter = 10000
lmb = 0.001

P = solve_ode_neural_network(x, num_hidden_neurons, num_iter, lmb)

res = g_trial(x, P)
res_analytical = g_analytic(x)

plt.figu...
Using a deep neural network
npr.seed(15)

## Decide the values of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)

## Set up the initial parameters
num_hidden_neurons = np.array([10, 10])
num_iter = 10000
lmb = 0.001

P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)

res = g_trial_deep(x, P)
res_analytical ...
Single Deletions Perform all single gene deletions on a model
growth_rates, statuses = cobra.flux_analysis.single_gene_deletion(cobra_model)
documentation_builder/deletions.ipynb
jerkos/cobrapy
lgpl-2.1
These can also be done for only a subset of genes
growth_rates, statuses = cobra.flux_analysis.single_gene_deletion(cobra_model, cobra_model.genes[:20]) pandas.DataFrame.from_dict({"growth_rates": growth_rates, "status": statuses})
This can also be done for reactions
growth_rates, statuses = cobra.flux_analysis.single_reaction_deletion(cobra_model, cobra_model.reactions[:20]) pandas.DataFrame.from_dict({"growth_rates": growth_rates, "status": statuses})
Double Deletions Double deletions run in a similar way. Passing in return_frame=True will cause them to format the results as a pandas DataFrame
cobra.flux_analysis.double_gene_deletion(cobra_model, cobra_model.genes[-10:], return_frame=True)
By default, the double deletion function will automatically use multiprocessing, splitting the task over up to 4 cores if they are available. The number of cores can be manually specified as well. Restricting it to a single core will disable use of the multiprocessing library, which often aids debugging.
start = time()  # start timer
cobra.flux_analysis.double_gene_deletion(ecoli_model, ecoli_model.genes[:100], number_of_processes=2)
t1 = time() - start
print("Double gene deletions for 100 genes completed in %.2f sec with 2 cores" % t1)

start = time()  # start timer
cobra.flux_analysis.double_gene_deletion(ecoli_m...
Double deletions can also be run for reactions
cobra.flux_analysis.double_reaction_deletion(cobra_model, cobra_model.reactions[:10], return_frame=True)
Check the titles of the first 10 papers.
corpus[0:10]
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
YangDS/recommenders
gpl-3.0
Next we’ll build a TF/IDF matrix for each paper:
from sklearn.feature_extraction.text import TfidfVectorizer

tf = TfidfVectorizer(analyzer='word', ngram_range=(1, 3), min_df=0, stop_words='english')
tfidf_matrix = tf.fit_transform([content for file, content in corpus[:]])
Next we’ll write a function that will find us the top n similar papers based on cosine similarity:
from sklearn.metrics.pairwise import linear_kernel

def find_similar(tfidf_matrix, index, top_n=5):
    cosine_similarities = linear_kernel(tfidf_matrix[index:index+1], tfidf_matrix).flatten()
    related_docs_indices = [i for i in cosine_similarities.argsort()[::-1] if i != index]
    return [(index, cosine_similar...
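The cosine similarity behind this ranking can be sketched by hand for small dense vectors. This is a toy cross-check, not the sklearn implementation: `cosine_similarity` and `find_similar_dense` are illustrative names, and the three-document example is made up.

```python
import math

def cosine_similarity(u, v):
    # dot(u, v) / (||u|| * ||v||)
    dot = sum(a*b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a*a for a in u))
    norm_v = math.sqrt(sum(b*b for b in v))
    return dot / (norm_u * norm_v)

def find_similar_dense(rows, index, top_n=5):
    # Rank all other rows by cosine similarity to rows[index]
    sims = [cosine_similarity(rows[index], row) for row in rows]
    order = sorted(range(len(rows)), key=lambda i: sims[i], reverse=True)
    return [(i, sims[i]) for i in order if i != index][:top_n]

docs = [[1.0, 0.0, 1.0],   # doc 0
        [1.0, 0.0, 0.9],   # doc 1: nearly identical to doc 0
        [0.0, 1.0, 0.0]]   # doc 2: orthogonal to doc 0
result = find_similar_dense(docs, 0, top_n=2)
print(result)
```

Because TfidfVectorizer L2-normalises its rows, `linear_kernel` (a plain dot product) on the TF-IDF matrix gives the same ranking as this explicit cosine.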
Let’s try it out:
corpus[1619]

for index, score in find_similar(tfidf_matrix, 1619):
    print(score, corpus[index])
It’s pretty good for finding duplicate papers!
corpus[1599]

for index, score in find_similar(tfidf_matrix, 1599):
    print(score, corpus[index])
But sometimes it identifies duplicates that aren’t identical:
corpus[5784]

for index, score in find_similar(tfidf_matrix, 5784):
    print(score, corpus[index])
Finally, let us create a CSV file containing the top 5 similar papers for each paper.
import csv

with open("./output/similarities.csv", "w") as similarities_file:
    writer = csv.writer(similarities_file, delimiter=",")
    for me_index, item in enumerate(corpus):
        similar_documents = [(corpus[index], score) for index, score in find_similar(tfidf_matrix, me_index)]
        me = corpus[me_ind...
Getting started with a quick-and-easy k-nearest neighbor classifier This is a tiny example to show the idea of KNN for recommender development.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# First let's create a dataset called X, with 6 records and 2 features each.
X = np.array([[-1, 2], [4, -4], [-2, 1], [-1, 3], [-3, 2], [-1, 4]])

# Next we will instantiate a nearest neighbor object, and call it nbrs. Then we will fit it to dataset X.
...
Imagine you have a new incoming data point. It contains the values -2 and 4. To search object X and identify the most similar record, all you need to do is call the kneighbors() function on the new incoming data p
print(nbrs.kneighbors([[-2, 4]]))
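What kneighbors returns can be cross-checked by hand: compute the Euclidean distance from the query to every record and sort. A plain-Python sketch over the same six points (the variable names here are illustrative):

```python
import math

X = [[-1, 2], [4, -4], [-2, 1], [-1, 3], [-3, 2], [-1, 4]]
query = [-2, 4]

# Euclidean distance from the query to every record, then sort by distance
dists = [math.sqrt(sum((q - p)**2 for q, p in zip(query, point))) for point in X]
nearest = sorted(range(len(X)), key=lambda i: dists[i])[:2]
print(nearest, [round(dists[i], 3) for i in nearest])
```

The closest record is index 5 ([-1, 4], distance 1.0), followed by index 3 ([-1, 3], distance √2), matching the indices kneighbors reports.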
Linear models Assume you have data points with measurements y at positions x as well as measurement errors y_err. How can you use statsmodels to fit a straight line model to this data? For an extensive discussion see Hogg et al. (2010), "Data analysis recipes: Fitting a model to data" ... we'll use the example data giv...
data = """
  x    y  y_err
201  592  61
244  401  25
 47  583  38
287  402  15
203  495  21
 58  173  15
210  479  27
202  504  14
198  510  30
158  416  16
165  393  14
201  442  25
157  317  52
131  311  16
166  400  34
160  337  31
186  423  42
125  334  26
218  533  16
146  344  22
"""

try:
    fr...
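The underlying chi-square fit has a closed form for a straight line $y = a + b x$ with weights $w_i = 1/\sigma_i^2$. As a hand-rolled sketch of what the statsmodels call computes (illustrative function name, not the library API):

```python
def wls_line_fit(x, y, y_err):
    # Minimize chi2 = sum_i w_i*(y_i - a - b*x_i)^2 with weights w_i = 1/y_err_i^2
    w = [1.0/e**2 for e in y_err]
    S   = sum(w)
    Sx  = sum(wi*xi for wi, xi in zip(w, x))
    Sy  = sum(wi*yi for wi, yi in zip(w, y))
    Sxx = sum(wi*xi*xi for wi, xi in zip(w, x))
    Sxy = sum(wi*xi*yi for wi, xi, yi in zip(w, x, y))
    delta = S*Sxx - Sx*Sx
    a = (Sxx*Sy - Sx*Sxy)/delta  # intercept
    b = (S*Sxy - Sx*Sy)/delta    # slope
    return a, b

# Sanity check on data lying exactly on y = 1 + 2x: the fit recovers a=1, b=2.
a, b = wls_line_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0], [1.0, 1.0, 1.0])
print(a, b)
```

These are the standard normal-equation formulas for weighted linear least squares; statsmodels' WLS solves the same problem with richer diagnostics.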
examples/notebooks/chi2_fitting.ipynb
phobson/statsmodels
bsd-3-clause
Construct a 5x3 matrix, uninitialized:
x = torch.Tensor(5, 3)
print(x)
pytorch/PyTorch60MinBlitz.ipynb
WNoxchi/Kaukasos
mit
NOTE: torch.Size is in fact a tuple, so it supports all tuple operations. b. Operations There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation. Addition: syntax 1
y = torch.rand(5, 3)
print(x + y)
Addition: providing an output tensor as argument
result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
print(result)
NOTE: Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y), x.t_(), will change x. You can use standard NumPy-like indexing with all the bells and whistles!
print(x[:, 1])
Resizing: If you want to resize/reshape a tensor, you can use torch.view:
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())

# testing out valid resizings
for i in range(256):
    for j in range(256):
        try:
            y = x.view(i, j)
            print(y.size())
        except RuntimeError:
            ...
Read later: 100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described here. 2. NumPy Bridge Converting a Torch Tensor to a NumPy array and vice versa is a breeze. The Torch Tensor and NumPy array will share their underlying memory loca...
a = torch.ones(5)
print(a)

b = a.numpy()
print(b)
See how the numpy array changed in value
a.add_(1)
print(a)
print(b)
ahh, cool b. Converting NumPy Array to Torch Tensor See how changing the NumPy array changed the Torch Tensor automatically
import numpy as np

a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
All the Tensors on the CPU except a CharTensor support converting to NumPy and back. 3. CUDA Tensors Tensors can be moved onto GPU using the .cuda method.
# let's run this cell only if CUDA is available
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y
II. Autograd: Automatic Differentiation Central to all neural networks in PyTorch is the autograd package. Let's first briefly visit this, and we'll then go to training our first neural network. The autograd package provides automatic differentiation for all operations on Tensors. It's a define-by-run framework, which...
import torch
from torch.autograd import Variable
Create a variable:
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
print(x.grad_fn)
Do an operation on the variable:
y = x + 2
print(y)
y was created as a result of an operation, so it has a grad_fn.
print(y.grad_fn)
Do more operations on y:
z = y * y * 3
out = z.mean()
print(z, out)
2. Gradients let's backprop now: out.backward() is equivalent to doing out.backward(torch.Tensor([1.0]))
# x = torch.autograd.Variable(torch.ones(2, 2), requires_grad=True)
# x.grad.data.zero_()  # re-zero gradients of x if rerunning
# y = x + 2
# z = y**2*3
# out = z.mean()
# print(out.backward())
# print(x.grad)

out.backward()
print gradients d(out)/dx:
print(x.grad)
You should've gotten a matrix of 4.5. Let's call the out Variable "$o$". We have that $o = \tfrac{1}{4} \sum_i z_i$, with $z_i = 3(x_i + 2)^2$ and $z_i\bigr\rvert_{x_i=1} = 27$. $\Rightarrow$ $\tfrac{\partial o}{\partial x_i} = \tfrac{3}{2}(x_i+2)$ $\Rightarrow$ $\tfrac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \tfrac{9}{2} = 4.5$ You can do a ...
x = torch.randn(3)
x = Variable(x, requires_grad=True)

y = x*2
while y.data.norm() < 1000:
    y = y*2
print(y)

gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
Read Later: Documentation of Variable and Function is at http://pytorch.org/docs/autograd III. Neural Networks Neural networks can be constructed using the torch.nn package. Now that you had a glimpse of autograd, nn depends on autograd to define models and differentiate them. An nn.Module contains layers, and a metho...
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, ...
You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. --ah that makes sense-- You can use any of the Tensor operations in the forward function. The learnable parameters of a model are returned by net.parameters()
params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight
The input to the forward is an autograd.Variable, and so is the output. NOTE: Expected input size to this net (LeNet) is 32x32. To use this network on MNIST data, resize the images from the dataset to 32x32.
input = Variable(torch.randn(1, 1, 32, 32))  # batch_size x channels x height x width
out = net(input)
print(out)
NOTE: torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are mini-batches of samples, and not a single sample. For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width. If you have a single sample, just use input.unsqueeze(0) to add a fake batch...
output = net(input)
target = Variable(torch.arange(1, 11))  # a dummy target example
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
Now, if you follow loss in the backward direction, using its .grad_fn attribute, you'll see a graph of computations that looks like this: input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d -> view -> linear -> relu -> linear -> relu -> linear -> MSELos...
print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU
3. Backpropagation To backpropagate the error all we have to do is loss.backward(). You need to clear the existing gradients though, or else gradients will be accumulated. Now we'll call loss.backward(), and have a look at conv1's bias gradients before and after the backward.
net.zero_grad()  # zeroes gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
Now we've seen how to use loss functions. Read Later: The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is here. The only thing left to learn is: * updating the weights of the network 4. Update the Weights The...
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
However, as you use neural networks, you want to use various different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, we built a small package: torch.optim that implements all these methods. Using it is very simple:
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()  # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()  # Does the update
NOTE: Observe how gradient buffers had to be manually set to zero using optimizer.zero_grad(). This is because gradients are accumulated as explained in the Backprop section. IV. Training a Classifier This is it. You've seen how to define neural networks, compute loss, and make updates to the weights of a network. N...
import torch
import torchvision
import torchvision.transforms as transforms
The outputs of torchvision datasets are PIL Images with values in the range [0, 1]. We transform them to Tensors of normalized range [-1, 1]:
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch...
Let's view some of the training images:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # un-normalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = data...
2.2 Define a Convolutional Neural Network Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel as originally defined)
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)  # ConvNet
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        sel...
2.3 Define a Loss Function and Optimizer We'll use Classification Cross-Entropy Loss and SGD with Momentum
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
2.4 Train the Network This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.
for epoch in range(2):  # loop over dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)

        # zero the para...
2.5 Test the Network on Test Data We trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all. We'll check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add t...
dataiter = iter(testloader)
images, labels = dataiter.next()

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
Okay, now let's see what the neural network thinks these examples above are:
outputs = net(Variable(images))
The outputs are energies (confidences) for the 10 classes. Let's get the index of the highest confidence:
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
Seems pretty good. Now to look at how the network performs on the whole dataset.
correct = 0
total = 0
for data in testloader:
    images, labels = data
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()

print(f'Accuracy of network on 10,000 test images: {np.round(100*correct/total, 2)}%')
That looks way better than chance for 10 different choices (10%). Seems like the network learnt something. What classes performed well and which didn't?:
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
    images, labels = data
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    c = (predicted == labels).squeeze()
    for i in range(4):
        label = labels[i]
        class...
Cool, so what next? How do we run these neural networks on the GPU? 3. Training on GPU Just like how you transfer a Tensor onto a GPU, you transfer the neural net onto the GPU. This will recursively go over all modules and convert their parameters and buffers to CUDA tensors:
net.cuda()
Remember that you'll have to send the inputs and targets at every step to the GPU too:
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
Why don't we notice a MASSIVE speedup compared to CPU? Because the network is very small. Exercise: Try increasing the width of your network (argument 2 of the first nn.Conv2d, and argument 1 of the second nn.Conv2d - they need to be the same number), see what kind of speedup you get. Goals achieved: * Understanding Py...
model.gpu()
Then, you can copy all your tensors to the GPU:
mytensor = my_tensor.gpu()
Please note that just calling mytensor.gpu() won't copy the tensor to the GPU. You need to assign it to a new tensor and use that tensor on the GPU. It's natural to execute your forward and backward propagations on multiple GPUs. However, PyTorch will only use one GPU by default. You can easily run your operations on m...
model = nn.DataParallel(model)
That's the core behind this tutorial. We'll explore it in more detail below. 1. Imports and Parameters Import PyTorch modules and define parameters.
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader

# Parameters and DataLoaders
input_size = 5
output_size = 2

batch_size = 30
data_size = 100
2. Dummy DataSet Make a dummy (random) dataset. You just need to implement __getitem__
class RandomDataset(Dataset):

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomData...
3. Simple Model For the demo, our model just gets an input, performs a linear operation, and gives an output. However, you can use DataParallel on any model (CNN, RNN, CapsuleNet, etc.) We've placed a print statement inside the model to monitor the size of input and output tensors. Please pay attention to what is print...
class Model(nn.Module):
    # Our model

    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("  In Model; input size", input.size(), ...
4. Create Model and DataParallel This is the core part of this tutorial. First, we need to make a model instance and check if we have multiple GPUs. If we have multiple GPUs, we can wrap our model using nn.DataParallel. Then we can put our model on GPUs by model.gpu()
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)

if torch.cuda.is_available():
    model.cuda()
5. Run the Model Now we can see the sizes of input and output tensors.
for data in rand_loader:
    if torch.cuda.is_available():
        input_var = Variable(data.cuda())
    else:
        input_var = Variable(data)

    output = model(input_var)
    print("Outside: input size", input_var.size(), "output_size", output.size())
Nice! We have the basic useful columns area, job groups but also data on the salaries and the age of the recruitee (a proxy for experience I guess). According to the documentation 90% of the employees earn more than the MINIMUM_SALARY, while only 10% earn more than the MAXIMUM_SALARY. Concerning the age groups, how man...
salaries.AGE_GROUP_NAME.unique()
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
bayesimpact/bob-emploi
gpl-3.0
Only two… Maybe it is enough. Let's do a basic check on the areas. First what about the granularity?
salaries.AREA_TYPE_NAME.unique()
Ok! We have 3 different levels, whole country, regions and departments. Is everyone there? Let's start by the departments!
salaries[salaries.AREA_TYPE_CODE == 'D'].AREA_CODE.sort_values().unique().size
We could expect 101 (metropolitan departments) or 103 (metropolitan departments + overseas collectivities). Here Mayotte is missing. What about the regions?
salaries[salaries.AREA_TYPE_CODE == 'R'].AREA_CODE.sort_values().unique().size
Oh… We expected 18! Again, Mayotte is missing. The first overview revealed some negative salaries. The documentation states that when there is not enough data, the value is -1, while when the data is unavailable it is marked as -2. Let's have a basic description of the salaries. How many missing or uninformative salary...
len(salaries[salaries.MINIMUM_SALARY < 0]) / len(salaries) * 100
Around 13% of the job groups for a given area (market) don't have salary data! That is a bit more than nothing! When salaries are less than 0, are the minimum salary and the maximum salary always the same?
invalid_rows = salaries[(salaries.MAXIMUM_SALARY < 0) | (salaries.MINIMUM_SALARY < 0)]
all(invalid_rows.MINIMUM_SALARY == invalid_rows.MAXIMUM_SALARY)
Yeahh!! They are exactly the same… How convenient! So let's get a basic overview of the salaries.
valid_salaries = salaries[salaries.MAXIMUM_SALARY > 0]
valid_salaries[['MAXIMUM_SALARY', 'MINIMUM_SALARY']].describe()
Because the minimum MINIMUM_SALARY is lower than the French minimum wage (~1400), we think that these data gather both full-time and part-time offers. It can be scary to deliver them as such to our users… Anyway, no weird or missing values here. It may be overkill, but we'll see if the maximum salary is always greater or ...
all(salaries.MAXIMUM_SALARY >= salaries.MINIMUM_SALARY)
Great! Last but not least: do we cover every job group?
salaries.PCS_PROFESSION_CODE.unique().size
According to INSEE documentation, we should expect around 500 job groups. ~85% of them are covered. But, then how many of these job groups have valid data?
valid_salaries.PCS_PROFESSION_CODE.unique().size
Yeahh!! All of them!! Currently, in Bob, we are mostly using ROME classification. Then, we are interested in the number of ROME job groups covered by this dataset. First, we need to download the mapping between PCS and ROME classifications.
pcs_to_rome = pd.read_csv(path.join(DATA_FOLDER, 'crosswalks/passage_pcs_romev3.csv'))
pcs_to_rome.head()
Quite concise, isn't it!
pcs_to_rome[pcs_to_rome['PCS'].isin(salaries.PCS_PROFESSION_CODE.unique())]\ .ROME.unique().size
Impressive! We have a ~97% coverage for ROME job groups. What about the granularity of this coverage? Coverage at the regions level.
region_professions = salaries[salaries.AREA_TYPE_CODE == 'R']\
    .PCS_PROFESSION_CODE.unique()
pcs_to_rome[pcs_to_rome['PCS']\
    .isin(region_professions)]\
    .ROME.unique().size
Exactly the same… Let's have a look at the ROME job groups coverage at the department level.
department_professions = salaries[salaries.AREA_TYPE_CODE == 'D']\
    .PCS_PROFESSION_CODE.unique()
pcs_to_rome[pcs_to_rome['PCS']\
    .isin(department_professions)]\
    .ROME.unique().size
Again, no difference. Everything is going well so far!

## Global Overview and Comparison with Scraped Data

Actually, we have multiple sources of data for salaries: the IMT and the FHS (more or less the history of Pôle Emploi statistics). The FHS dataset provides jobseekers' salary expectations. A notebook has been written before t...
salaries['mean_senior_salary'] = salaries[['MINIMUM_SALARY', 'MAXIMUM_SALARY']].sum(axis=1).div(2)
valid_salaries = salaries[salaries.MAXIMUM_SALARY > 0]
stats_within_departments = valid_salaries[valid_salaries.AREA_TYPE_CODE == 'D']\
    .groupby('AREA_NAME')\
    .mean_senior_salary.agg({'mean', 'std'})\
    .sort_va...
Within a department, ~30% of the jobs offer a salary greater than 4200€ or less than 1800€. How variable are the salaries within job groups?
stats_within_jobgroups = valid_salaries[valid_salaries.AREA_TYPE_CODE == 'D']\
    .groupby('PCS_PROFESSION_CODE')\
    .mean_senior_salary.agg({'mean', 'std'})\
    .sort_values('std', ascending=False)
stats_within_jobgroups.plot(kind='box');
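The earlier claim that ~30% of department-level salaries fall outside the 1800-4200€ band is just the mean of a boolean mask; a toy check with made-up values:

```python
import numpy as np

# Made-up department-level mean salaries in euros; the share outside the
# 1800-4200 band mirrors the ~30% figure quoted above.
mean_salaries = np.array([1500, 1900, 2500, 3000, 4300, 2200, 2100, 5000, 1950, 2600])
# A boolean mask averaged with mean() gives the fraction of True values.
outside_band = np.mean((mean_salaries < 1800) | (mean_salaries > 4200))
print(f"{outside_band:.0%} outside the 1800-4200 euro band")
```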
As expected, the dispersion within a job group is smaller than within a department (the standard deviation is below 1000€ most of the time). Still, why not look at some examples of highly variable job groups?
valid_salaries[
    (valid_salaries.AREA_TYPE_CODE == 'D') &
    (valid_salaries.PCS_PROFESSION_CODE.isin(stats_within_jobgroups.index))]\
    .drop_duplicates().PCS_PROFESSION_NAME.to_frame().head(5)\
    .style.set_properties(**{'width': '500px'})
Salespeople have the most variable salaries. That seems sensible! What about the conformity of the API data with the scraped data? According to the website (on October 2nd, 2017), a nurse in the Isère department, younger than 35 years old, could expect a salary between 1850€ and 4050€. Note that ROM...
nurse_pcs = ['431a', '431b', '431c', '431d', '431f', '431g']
valid_salaries[(valid_salaries.AREA_NAME == "ISERE")
               & (valid_salaries.PCS_PROFESSION_CODE.isin(nurse_pcs)
               & (valid_salaries.AGE_GROUP_NAME == 'Moins de 35 ans'))]\
    [['MAXIMUM_SALARY', 'MINIMUM_SALARY', 'PCS_PROFESSION_CODE', 'PCS_PROFESSION_...
<table align="left">
  <td>
    <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/matching_engine/intro-swivel.ipynb">
      <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
    </a>
  </td>
  <...
import os

# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")

# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = "--user"

!pip3 install {U...
notebooks/official/matching_engine/intro-swivel.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
## Before you begin

### Set up your Google Cloud project

The following steps are required, regardless of your notebook environment.

1. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. Make sure that billing is enabled for your project. ...
import os

PROJECT_ID = ""

# Get your Google Cloud project ID and project number from gcloud
if not os.getenv("IS_TESTING"):
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID: ", PROJECT_ID)
    shell_output = !gcloud projects list ...
### Create a Cloud Storage bucket

The following steps are required, regardless of your notebook environment.

When you submit a built-in Swivel job using the Cloud SDK, you need a Cloud Storage bucket for storing the input dataset and pipeline artifacts (the trained model). Set the name of your Cloud Storage bucket below. I...
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}
REGION = "us-central1"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
### Service Account

If you don't know your service account, try to get it using the gcloud command by executing the second cell below.
SERVICE_ACCOUNT = "[your-service-account]"  # @param {type:"string"}

if (
    SERVICE_ACCOUNT == ""
    or SERVICE_ACCOUNT is None
    or SERVICE_ACCOUNT == "[your-service-account]"
):
    # Get your GCP project id from gcloud
    shell_output = !gcloud auth list 2>/dev/null
    SERVICE_ACCOUNT = shell_output[2].split...
### Import libraries and define constants

Define constants used in this tutorial.
SOURCE_DATA_PATH = "{}/swivel".format(BUCKET_NAME)
PIPELINE_ROOT = "{}/pipeline_root".format(BUCKET_NAME)
Import packages used in this tutorial.
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from sklearn.metrics.pairwise import cosine_similarity
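The `cosine_similarity` imported above will later compare embedding vectors; what it computes can be sketched with plain NumPy (the vectors below are toy values, not real Swivel outputs):

```python
import numpy as np

# Two toy embedding vectors. Cosine similarity, which sklearn's
# cosine_similarity computes pairwise, depends only on the angle between
# vectors, not on their magnitude: b is just 2*a, so the similarity is 1.
a = np.array([1.0, 0.0, 1.0])
b = np.array([2.0, 0.0, 2.0])
cos_sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(round(cos_sim, 4))
```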
## Copy and configure the Swivel template

Download the Swivel template and configuration script.
!gsutil cp gs://cloud-samples-data/vertex-ai/matching-engine/swivel/pipeline/* .
Change your pipeline configurations:

- pipeline_suffix: Suffix of your pipeline name (lowercase letters and hyphens are allowed).
- machine_type: e.g. n1-standard-16.
- accelerator_count: Number of GPUs in each machine.
- accelerator_type: e.g. NVIDIA_TESLA_P100, NVIDIA_TESLA_V100.
- region: e.g. us-east1 (optional, default is us-centr...
YOUR_PIPELINE_SUFFIX = "swivel-pipeline-movie"  # @param {type:"string"}
MACHINE_TYPE = "n1-standard-16"  # @param {type:"string"}
ACCELERATOR_COUNT = 2  # @param {type:"integer"}
ACCELERATOR_TYPE = "NVIDIA_TESLA_V100"  # @param {type:"string"}

BUCKET = BUCKET_NAME[5:]  # remove "gs://" for the following command.

!chm...
Both swivel_pipeline_basic.json and swivel_pipeline.json are generated.

## Create the Swivel job for MovieLens items embeddings

You will submit the pipeline job by passing the compiled spec to the create_run_from_job_spec() method. Note that you are passing a parameter_values dict that specifies the pipeline input paramet...
# Copy the MovieLens sample dataset
! gsutil cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/movielens_25m/train/* {SOURCE_DATA_PATH}/movielens_25m

# MovieLens items embedding sample
PARAMETER_VALUES = {
    "embedding_dim": 100,  # <---CHANGE THIS (OPTIONAL)
    "input_base": "{}/movielens_25m/train"....
Submit the pipeline to Vertex AI:
# Instantiate PipelineJob object
pl = aiplatform.PipelineJob(
    display_name=YOUR_PIPELINE_SUFFIX,
    # Whether or not to enable caching
    # True = always cache pipeline step result
    # False = never cache pipeline step result
    # None = defer to cache option for each pipeline component in the pipeline defini...
After the job is submitted successfully, you can view its details (including the run name that you'll need below) and logs.

## Use TensorBoard to check the model

You may use TensorBoard to check the model training process. In order to do that, you need to find the path to the trained model artifact. After the job finishes...
! gsutil -m cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/models/movielens/model {SOURCE_DATA_PATH}/movielens_model

SAVEDMODEL_DIR = os.path.join(SOURCE_DATA_PATH, "movielens_model/model")
LOGS_DIR = os.path.join(SOURCE_DATA_PATH, "movielens_model/tensorboard")
When the training starts, you can view the logs in TensorBoard:
import sys

# If on Google Cloud Notebooks, then don't execute this code.
if not IS_GOOGLE_CLOUD_NOTEBOOK:
    if "google.colab" in sys.modules:
        # Load the TensorBoard notebook extension.
        %load_ext tensorboard
For Google Cloud Notebooks, you can do the following:

1. Open Cloud Shell from the Google Cloud Console.
2. Install dependencies: pip3 install tensorflow tensorboard-plugin-profile
3. Run the following command: tensorboard --logdir {LOGS_DIR}.

You will see a message "TensorBoard 2.x.0 at http://localhost:<PORT>/ (Press CTRL+C ...
ENDPOINT_NAME = "swivel_embedding"  # <---CHANGE THIS (OPTIONAL)
MODEL_VERSION_NAME = "movie-tf2-cpu-2.4"  # <---CHANGE THIS (OPTIONAL)

aiplatform.init(project=PROJECT_ID, location=REGION)

# Create a model endpoint
endpoint = aiplatform.Endpoint.create(display_name=ENDPOINT_NAME)

# Upload the trained model to Model ...