Backpropagation
Now that feedforward can be done, the next step is to decide how the parameters should change such that they minimize the cost function.
Recall that the chosen cost function for this problem is
$$
c(x, P) = \sum_i \big(g_t'(x_i, P) - \big( -\gamma g_t(x_i, P) \big)\big)^2
$$
In order to minimize it, an optimization method must be chosen.
Here, gradient descent with a constant step size is used.
Before looking at the gradient descent method, let us set up the cost function along with the right side of the ODE and the trial solution.
|
# The trial solution using the deep neural network:
def g_trial(x,params, g0 = 10):
return g0 + x*neural_network(params,x)
# The right side of the ODE:
def g(x, g_trial, gamma = 2):
return -gamma*g_trial
# The cost function:
def cost_function(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial(x,P)
# Find the derivative w.r.t x of the neural network
d_net_out = elementwise_grad(neural_network,1)(P,x)
# Find the derivative w.r.t x of the trial function
d_g_t = elementwise_grad(g_trial,0)(x,P)
# The right side of the ODE
func = g(x, g_t)
err_sqr = (d_g_t - func)**2
cost_sum = np.sum(err_sqr)
return cost_sum
|
doc/src/NeuralNet/diffeq.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
Gradient Descent
The idea of the gradient descent algorithm is to update the parameters in the direction where the cost function decreases, that is, towards a minimum.
In general, the update of some parameters $\vec \omega$ given a cost function $c(x, \vec \omega)$ goes as follows:
$$
\vec \omega_{\text{new} } = \vec \omega - \lambda \nabla_{\vec \omega} c(x, \vec \omega)
$$
for a number of iterations or until $ \big|\big| \vec \omega_{\text{new} } - \vec \omega \big|\big|$ is smaller than some given tolerance.
The value of $\lambda$ decides how large steps the algorithm takes in the direction of $\nabla_{\vec \omega} c(x, \vec \omega)$. The notation $\nabla_{\vec \omega}$ denotes the gradient with respect to the elements in $\vec \omega$.
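As a small illustration of the update rule, here is a minimal one-parameter sketch with the gradient coded by hand for brevity (in the notebook, Autograd supplies it); the toy cost $c(w) = (w - 3)^2$ is a made-up example with its minimum at $w = 3$:

```python
# Hand-coded gradient of the toy cost c(w) = (w - 3)^2
def cost_grad(w):
    return 2.0*(w - 3.0)

w = 0.0        # initial guess for the parameter
lmb = 0.1      # step size lambda
for _ in range(100):
    # the gradient descent update: w_new = w - lambda * dc/dw
    w = w - lmb * cost_grad(w)

print(w)       # converges towards 3.0
```

Each iteration shrinks the distance to the minimum by a constant factor, which is exactly the behaviour the constant-step-size scheme above describes.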
In our case, we have to minimize the cost function $c(x, P)$ with respect to the two sets of weights and biases, that is for the hidden layer $P_{\text{hidden} }$ and for the output layer $P_{\text{output} }$.
This means that $P_{\text{hidden} }$ and $P_{\text{output} }$ are updated by
$$
\begin{aligned}
P_{\text{hidden},\text{new}} &= P_{\text{hidden}} - \lambda \nabla_{P_{\text{hidden}}} c(x, P) \\
P_{\text{output},\text{new}} &= P_{\text{output}} - \lambda \nabla_{P_{\text{output}}} c(x, P)
\end{aligned}
$$
It might look cumbersome to set up the correct expressions for finding these gradients. Luckily, Autograd comes to the rescue.
|
def solve_ode_neural_network(x, num_neurons_hidden, num_iter, lmb):
## Set up initial weights and biases
# For the hidden layer
p0 = npr.randn(num_neurons_hidden, 2 )
# For the output layer
p1 = npr.randn(1, num_neurons_hidden + 1 ) # +1 since bias is included
P = [p0, p1]
print('Initial cost: %g'%cost_function(P, x))
## Start finding the optimal weights using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_grad = grad(cost_function,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of two arrays;
# one for the gradient w.r.t P_hidden and
# one for the gradient w.r.t P_output
cost_grad = cost_function_grad(P, x)
P[0] = P[0] - lmb * cost_grad[0]
P[1] = P[1] - lmb * cost_grad[1]
print('Final cost: %g'%cost_function(P, x))
return P
|
doc/src/NeuralNet/diffeq.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
An implementation of a Deep Neural Network
As previously stated, a Deep Neural Network (DNN) follows the same concept as a neural network, but has more than one hidden layer. Suppose that the network has $N_{\text{hidden}}$ hidden layers where the $l$-th layer has $N_{\text{hidden}}^{(l)}$ neurons. The input is still assumed to be an array of size $1 \times N$. The network must now optimize its output with respect to the collection of weights and biases $P = \big\{P_{\text{input} }, \ P_{\text{hidden} }^{(1)}, \ P_{\text{hidden} }^{(2)}, \ \dots , \ P_{\text{hidden} }^{(N_{\text{hidden}})}, \ P_{\text{output} }\big\}$.
Feedforward
The feedforward step is similar to that of the neural network, but now considers more than one hidden layer.
The $i$-th neuron at layer $l$ receives the result $\vec{x}_j^{(l-1),\text{hidden} }$ from the $j$-th neuron at layer $l-1$. The $i$-th neuron at layer $l$ weights all of the elements in $\vec{x}_j^{(l-1),\text{hidden} }$ with a weight vector $\vec{w}_{i}^{(l), \ \text{hidden} }$ with as many weights as there are elements in $\vec{x}_j^{(l-1),\text{hidden} }$, and adds a bias $b_i^{(l), \ \text{hidden} }$:
$$
\begin{aligned}
z_{i,j}^{(l),\ \text{hidden}} &= b_i^{(l), \ \text{hidden}} + \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T\vec{x}_j^{(l-1),\text{hidden} } \\
&=
\begin{pmatrix}
b_i^{(l), \ \text{hidden}} & \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T
\end{pmatrix}
\begin{pmatrix}
1 \\
\vec{x}_j^{(l-1),\text{hidden} }
\end{pmatrix}
\end{aligned}
$$
The output from the $i$-th neuron at the hidden layer $l$ becomes a vector $\vec{z}_{i}^{(l),\ \text{hidden}}$:
$$
\begin{aligned}
\vec{z}_{i}^{(l),\ \text{hidden}} &= \Big( b_i^{(l), \ \text{hidden}} + \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T\vec{x}_1^{(l-1),\text{hidden} }, \ \dots \ , \ b_i^{(l), \ \text{hidden}} + \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T\vec{x}_{N_{\text{hidden}}^{(l-1)}}^{(l-1),\text{hidden} } \Big) \\
&=
\begin{pmatrix}
b_i^{(l), \ \text{hidden}} & \big(\vec{w}_{i}^{(l), \ \text{hidden}}\big)^T
\end{pmatrix}
\begin{pmatrix}
1 & 1 & \dots & 1 \\
\vec{x}_{1}^{(l-1),\text{hidden} } & \vec{x}_{2}^{(l-1),\text{hidden} } & \dots & \vec{x}_{N_{\text{hidden}}^{(l-1)}}^{(l-1),\text{hidden} }
\end{pmatrix}
\end{aligned}
$$
|
def deep_neural_network(deep_params, x):
# N_hidden is the number of hidden layers
N_hidden = len(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
# Due to multiple hidden layers, define a variable referencing to the
# output of the previous layer:
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P, find the correct weights and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output
|
doc/src/NeuralNet/diffeq.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
Backpropagation
This step is very similar to that for the neural network; the idea is the same, but with more parameters to update. Again there is no need to compute the gradients analytically, since Autograd does the work for us.
|
# The trial solution using the deep neural network:
def g_trial_deep(x,params, g0 = 10):
return g0 + x*deep_neural_network(params,x)
# The same cost function as for the neural network, but calls deep_neural_network instead.
def cost_function_deep(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial_deep(x,P)
# Find the derivative w.r.t x of the neural network
d_net_out = elementwise_grad(deep_neural_network,1)(P,x)
# Find the derivative w.r.t x of the trial function
d_g_t = elementwise_grad(g_trial_deep,0)(x,P)
# The right side of the ODE
func = g(x, g_t)
err_sqr = (d_g_t - func)**2
cost_sum = np.sum(err_sqr)
return cost_sum
def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
# num_hidden_neurons is now a list of number of neurons within each hidden layer
# Find the number of hidden layers:
N_hidden = np.size(num_neurons)
## Set up initial weights and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 )
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: %g'%cost_function_deep(P, x))
## Start finding the optimal weights using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_deep_grad = grad(cost_function_deep,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases
# in the hidden layers and output layers evaluated at x.
cost_deep_grad = cost_function_deep_grad(P, x)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_deep_grad[l]
print('Final cost: %g'%cost_function_deep(P, x))
return P
|
doc/src/NeuralNet/diffeq.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
Solving the ODE
Finally, having set up the networks we are ready to use them to solve the ODE problem.
If possible, it is always useful to have an analytical solution at hand to test whether the implementation gives reasonable results.
As a recap, the equation to solve is
$$
g'(x) = -\gamma g(x)
$$
where $g(0) = g_0$ with $\gamma$ and $g_0$ being some chosen values.
Solving this analytically yields
$$
g(x) = g_0\exp(-\gamma x)
$$
By making the analytical solution available in our program, it is possible to check the performance of our neural networks.
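As a quick sanity check (a small numerical sketch, not part of the original notebook), one can verify that the analytical solution indeed satisfies $g'(x) = -\gamma g(x)$ by comparing a central-difference derivative against $-\gamma g(x)$:

```python
import numpy as np

# Check numerically that g(x) = g0*exp(-gamma*x) satisfies g'(x) = -gamma*g(x)
gamma, g0 = 2.0, 10.0
x = np.linspace(0, 1, 5)
g = g0*np.exp(-gamma*x)

eps = 1e-6
# Central-difference approximation of g'(x)
g_prime = (g0*np.exp(-gamma*(x + eps)) - g0*np.exp(-gamma*(x - eps))) / (2*eps)
max_resid = np.max(np.abs(g_prime + gamma*g))
print(max_resid)   # should be close to zero
```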
|
def g_analytic(x, gamma = 2, g0 = 10):
return g0*np.exp(-gamma*x)
|
doc/src/NeuralNet/diffeq.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
Using neural network
The code below solves the ODE using a neural network. The number of values for the input $\vec x$ is 10, the number of neurons in the hidden layer is 10, and the step size used in gradient descent is $\lambda = 0.001$. The program updates the weights and biases in the network num_iter times. Finally, it plots the results from the neural network along with the analytical solution. Feel free to experiment with different values and see how the performance of the network changes!
|
npr.seed(15)
## Decide the values of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)
## Set up the initial parameters
num_hidden_neurons = 10
num_iter = 10000
lmb = 0.001
P = solve_ode_neural_network(x, num_hidden_neurons, num_iter, lmb)
res = g_trial(x,P)
res_analytical = g_analytic(x)
plt.figure(figsize=(10,10))
plt.title('Performance of neural network solving an ODE compared to the analytical solution')
plt.plot(x, res_analytical)
plt.plot(x, res[0,:])
plt.legend(['analytical','nn'])
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()
|
doc/src/NeuralNet/diffeq.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
Using a deep neural network
|
npr.seed(15)
## Decide the values of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)
## Set up the initial parameters
num_hidden_neurons = np.array([10,10])
num_iter = 10000
lmb = 0.001
P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)
res = g_trial_deep(x,P)
res_analytical = g_analytic(x)
plt.figure(figsize=(10,10))
plt.title('Performance of a deep neural network solving an ODE compared to the analytical solution')
plt.plot(x, res_analytical)
plt.plot(x, res[0,:])
plt.legend(['analytical','dnn'])
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()
|
doc/src/NeuralNet/diffeq.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
Single Deletions
Perform all single gene deletions on a model
|
growth_rates, statuses = cobra.flux_analysis.single_gene_deletion(cobra_model)
|
documentation_builder/deletions.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
These can also be done for only a subset of genes
|
growth_rates, statuses = cobra.flux_analysis.single_gene_deletion(cobra_model, cobra_model.genes[:20])
pandas.DataFrame.from_dict({"growth_rates": growth_rates, "status": statuses})
|
documentation_builder/deletions.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
This can also be done for reactions
|
growth_rates, statuses = cobra.flux_analysis.single_reaction_deletion(cobra_model, cobra_model.reactions[:20])
pandas.DataFrame.from_dict({"growth_rates": growth_rates, "status": statuses})
|
documentation_builder/deletions.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
Double Deletions
Double deletions run in a similar way. Passing in return_frame=True will cause them to format the results as a pandas DataFrame.
|
cobra.flux_analysis.double_gene_deletion(cobra_model, cobra_model.genes[-10:], return_frame=True)
|
documentation_builder/deletions.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
By default, the double deletion function will automatically use multiprocessing, splitting the task over up to 4 cores if they are available. The number of cores can be manually specified as well. Using a single core will disable the multiprocessing library, which often aids debugging.
|
start = time() # start timer()
cobra.flux_analysis.double_gene_deletion(ecoli_model, ecoli_model.genes[:100], number_of_processes=2)
t1 = time() - start
print("Double gene deletions for 100 genes completed in %.2f sec with 2 cores" % t1)
start = time() # start timer()
cobra.flux_analysis.double_gene_deletion(ecoli_model, ecoli_model.genes[:100], number_of_processes=1)
t2 = time() - start
print("Double gene deletions for 100 genes completed in %.2f sec with 1 core" % t2)
print("Speedup of %.2fx" % (t2/t1))
|
documentation_builder/deletions.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
Double deletions can also be run for reactions
|
cobra.flux_analysis.double_reaction_deletion(cobra_model, cobra_model.reactions[:10], return_frame=True)
|
documentation_builder/deletions.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
Check the titles of the first 10 papers.
|
corpus[0:10]
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
Next we’ll build a TF/IDF matrix for each paper:
|
from sklearn.feature_extraction.text import TfidfVectorizer
tf = TfidfVectorizer(analyzer='word', ngram_range=(1,3), min_df = 0, stop_words = 'english')
tfidf_matrix = tf.fit_transform([content for file, content in corpus[:]])
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
Next we’ll write a function that will find us the top n similar papers based on cosine similarity:
|
from sklearn.metrics.pairwise import linear_kernel
def find_similar(tfidf_matrix, index, top_n = 5):
cosine_similarities = linear_kernel(tfidf_matrix[index:index+1], tfidf_matrix).flatten()
related_docs_indices = [i for i in cosine_similarities.argsort()[::-1] if i != index]
return [(index, cosine_similarities[index]) for index in related_docs_indices][0:top_n]
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
Let’s try it out:
|
corpus[1619]
for index, score in find_similar(tfidf_matrix, 1619):
print(score, corpus[index])
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
It’s pretty good for finding duplicate papers!
|
corpus[1599]
for index, score in find_similar(tfidf_matrix, 1599):
print (score, corpus[index])
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
But sometimes it identifies duplicates that aren’t identical:
|
corpus[5784]
for index, score in find_similar(tfidf_matrix, 5784):
print (score, corpus[index])
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
Finally, let us create a CSV file containing the top 5 similar papers for each paper.
|
import csv
with open("./output/similarities.csv", "w") as similarities_file:
writer = csv.writer(similarities_file, delimiter = ",")
for me_index, item in enumerate(corpus):
similar_documents = [(corpus[index], score) for index, score in find_similar(tfidf_matrix, me_index)]
me = corpus[me_index]
document_id = me[0].split("/")[2].split(".")[0]
for ((raw_similar_document_id, title), score) in similar_documents:
similar_document_id = raw_similar_document_id.split("/")[2].split(".")[0]
writer.writerow([document_id, me[1], similar_document_id, title, score])
# Print the top 5 similar papers for each paper, as long as the similarity is greater than 0.5.
items = []
with open("./output/similarities.csv", "r") as similarities_file:
reader = csv.reader(similarities_file, delimiter = ",")
for row in reader:
lst = list(row)
lst[4] = float(lst[4])
items.append(tuple(lst))
by_similarity = sorted(items, key = lambda x: x[4], reverse = True)
for similar_item in [item for item in by_similarity if 0.5 < item[4] < 0.99]:
print (similar_item)
print ('\n')
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
Getting started with a quick-and-easy k-nearest neighbor classifier
This is a tiny example to show the idea of KNN for recommender development.
|
import numpy as np
from sklearn.neighbors import NearestNeighbors
# First let's create a dataset called X, with 6 records and 2 features each.
X = np.array([[-1, 2], [4, -4], [-2, 1], [-1, 3], [-3, 2], [-1, 4]])
# Next we will instantiate a nearest neighbor object, and call it nbrs. Then we will fit it to dataset X.
nbrs = NearestNeighbors(n_neighbors=3, algorithm='ball_tree').fit(X)
# Let's find the k-neighbors of each point in object X. To do that we call the kneighbors() function on object X.
distances, indices = nbrs.kneighbors(X)
# Let's print out the indices of neighbors for each record in object X.
indices
distances
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
Imagine you have a new incoming data point containing the values -2 and 4. To search object X and identify the most
similar records, all you need to do is call the kneighbors() function on the new incoming data point.
|
print(nbrs.kneighbors([[-2, 4]]))
|
notebooks/Example 1 Content-based Recommender with TF-IDF.ipynb
|
YangDS/recommenders
|
gpl-3.0
|
Linear models
Assume you have data points with measurements y at positions x as well as measurement errors y_err.
How can you use statsmodels to fit a straight line model to this data?
For an extensive discussion see Hogg et al. (2010), "Data analysis recipes: Fitting a model to data" ... we'll use the example data given by them in Table 1.
So the model is f(x) = a * x + b and on Figure 1 they print the result we want to reproduce ... the best-fit parameter and the parameter errors for a "standard weighted least-squares fit" for this data are:
* a = 2.24 +- 0.11
* b = 34 +- 18
|
data = """
x y y_err
201 592 61
244 401 25
47 583 38
287 402 15
203 495 21
58 173 15
210 479 27
202 504 14
198 510 30
158 416 16
165 393 14
201 442 25
157 317 52
131 311 16
166 400 34
160 337 31
186 423 42
125 334 26
218 533 16
146 344 22
"""
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
data = pd.read_csv(StringIO(data), sep=r'\s+', engine='python').astype(float)
# Note: for the results we compare with the paper here, they drop the first four points
data = data[4:]
|
examples/notebooks/chi2_fitting.ipynb
|
phobson/statsmodels
|
bsd-3-clause
|
Construct a 5x3 matrix, uninitialized:
|
x = torch.Tensor(5, 3)
print(x)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
NOTE: torch.Size is in fact a tuple, so it supports all tuple operations.
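For instance (a small sketch of the tuple-like behaviour):

```python
import torch

x = torch.rand(5, 3)
size = x.size()        # a torch.Size, which subclasses tuple
rows, cols = size      # tuple-style unpacking works
print(rows, cols)      # 5 3
print(len(size))       # 2
print(size == (5, 3))  # True -- compares like an ordinary tuple
```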
b. Operations
There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation.
Addition: syntax 1
|
y = torch.rand(5, 3)
print(x + y)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Addition: providing an output tensor as argument
|
result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
print(result)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
NOTE: Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y) and x.t_() will change x.
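A small sketch of those in-place operations mutating a tensor directly:

```python
import torch

x = torch.ones(2, 2)
y = torch.ones(2, 2) * 3

x.add_(1)      # in-place add: x is now all 2s
x.copy_(y)     # in-place copy: x now holds y's values (all 3s)
x.t_()         # in-place transpose
print(x)
```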
You can use standard NumPy-like indexing with all the bells and whistles!
|
print(x[:, 1])
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Resizing: If you want to resize/reshape a tensor, you can use torch.view:
|
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
# testing out valid resizings
for i in range(256):
for j in range(256):
try:
y = x.view(i,j)
print(y.size())
except RuntimeError:
pass
# if you make either i or j = -1, then it basically
# lists all compatible resizings
x.view(16)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Read later:
100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described here.
2. NumPy Bridge
Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other.
a. Converting a Torch Tensor to a NumPy Array
|
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
See how the numpy array changed in value
|
a.add_(1)
print(a)
print(b)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
ahh, cool
b. Converting NumPy Array to Torch Tensor
See how changing the NumPy array changed the Torch Tensor automatically
|
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
All the Tensors on the CPU except a CharTensor support converting to NumPy and back.
3. CUDA Tensors
Tensors can be moved onto GPU using the .cuda method.
|
# let's run this cell only if CUDA is available
if torch.cuda.is_available():
x = x.cuda()
y = y.cuda()
x + y
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
II. Autograd: Automatic Differentiation
Central to all neural networks in PyTorch is the autograd package. Let's first briefly visit this, and we'll then go to training our first neural network.
The autograd package provides automatic differentiation for all operations on Tensors. It's a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different.
Let's see this in simpler terms with some examples.
1. Variable
autograd.Variable is the central class of this package. It wraps a Tensor, and supports nearly all operations defined on it. Once you finish your computation you can call .backward() and have all gradients computed automatically.
You can access the raw tensor through the .data attribute, while the gradient wrt this variable is accumulated into .grad.
<img src="http://pytorch.org/tutorials/_images/Variable.png" alt="autograd.Variable">
There's one more class which is very important for the autograd implementation: a Function.
Variable and Function are interconnected and build up an acyclic graph that encodes a complete history of computation. Each variable has a .grad_fn attribute that references a Function that has created the Variable (except for variables created by the user - their grad_fn is None).
If you want to compute the derivatives, you can call .backward() on a Variable. If the Variable is a scalar (i.e. it holds a one-element datum), you don't need to specify any arguments to .backward(); however, if it has more elements, you need to specify a grad_output argument that is a tensor of matching shape.
|
import torch
from torch.autograd import Variable
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Create a variable:
|
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
print(x.grad_fn)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Do an operation on the variable:
|
y = x + 2
print(y)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
y was created as a result of an operation, so it has a grad_fn.
|
print(y.grad_fn)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Do more operations on y:
|
z = y * y * 3
out = z.mean()
print(z, out)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
2. Gradients
Let's backprop now: out.backward() is equivalent to doing out.backward(torch.Tensor([1.0])).
|
# x = torch.autograd.Variable(torch.ones(2,2), requires_grad=True)
# x.grad.data.zero_() # re-zero gradients of x if rerunning
# y = x + 2
# z = y**2*3
# out = z.mean()
# print(out.backward())
# print(x.grad)
out.backward()
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
print gradients d(out)/dx:
|
print(x.grad)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
You should've gotten a matrix of 4.5. Let's call the out Variable "$o$". We have that $o = \tfrac{1}{4} \sum_i z_i$, $z_i = 3(x_i + 2)^2$ and $z_i\bigr\rvert_{x_i=1}=27$.
$\Rightarrow$ $\tfrac{\partial o}{\partial x_i}=\tfrac{3}{2}(x_i+2)$ $\Rightarrow$ $\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5$
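That derivative can also be checked numerically (a small sketch, not part of the tutorial): perturb one entry of the 2x2 matrix of ones and take a central difference of $o$:

```python
# o as a function of one entry x00, the other three entries fixed at 1
def o(x00):
    xs = [x00, 1.0, 1.0, 1.0]               # the 2x2 input, flattened
    return sum(3.0*(x + 2.0)**2 for x in xs) / 4.0

eps = 1e-6
# Central-difference approximation of d o / d x00 at x00 = 1
numeric = (o(1.0 + eps) - o(1.0 - eps)) / (2.0*eps)
print(numeric)   # approximately 4.5
```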
You can do a lot of crazy things with autograd.
|
x = torch.randn(3)
x = Variable(x, requires_grad=True)
y = x*2
while y.data.norm() < 1000:
y = y*2
print(y)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Read Later:
Documentation of Variable and Function is at http://pytorch.org/docs/autograd
III. Neural Networks
Neural networks can be constructed using the torch.nn package.
Now that you've had a glimpse of autograd, note that nn depends on autograd to define models and differentiate them. An nn.Module contains layers, and a method forward(input) that returns the output.
For example, look at this network that classifies digit images:
<img src="http://pytorch.org/tutorials/_images/mnist.png" alt=convnet>
It's a simple feed-forward network. It takes the input, feeds it through several layers, one after the other, and then finally gives the output.
A typical training procedure for a neural network is as follows:
Define the neural network that has some learnable parameters (or weights)
Iterate over a dataset of inputs
Process input through the network
Compute the loss (how far the output is from being correct)
Propagate gradients back into the network's parameters
Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient
1. Define the Network
Let's define this network:
|
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 5) # ConvNet
self.conv2 = nn.Conv2d(6, 16, 5)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10) # classification layer
## NOTE: ahh, so the forward pass in PyTorch is just you defining
# what all the activation functions are going to be?
# Im seeing this pattern of tensor_X = actvnFn(layer(tensor_X))
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
# ahh, just get total number of features by
# multiplying dimensions ('tensor "volume"')
net = Net()
print(net)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. --ah that makes sense-- You can use any of the Tensor operations in the forward function.
The learnable parameters of a model are returned by net.parameters()
|
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
The input to the forward is an autograd.Variable, and so is the output. NOTE: Expected input size to this net (LeNet) is 32x32. To use this network on MNIST data, resize the images from the dataset to 32x32.
|
input = Variable(torch.randn(1, 1, 32, 32)) # batch_size x channels x height x width
out = net(input)
print(out)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
NOTE: torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are mini-batches of samples, and not a single sample.
For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width.
If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.
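A quick sketch of that fake batch dimension:

```python
import torch

sample = torch.randn(1, 32, 32)   # a single image: channels x height x width
batch = sample.unsqueeze(0)       # add a fake batch dimension at position 0
print(sample.size())              # torch.Size([1, 32, 32])
print(batch.size())               # torch.Size([1, 1, 32, 32])
```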
Before proceeding further, let's recap all the classes we've seen so far.
Recap:
* torch.Tensor - A multi-dimensional array.
* autograd.Variable - Wraps a Tensor and records the history of operations applied to it. Has the same API as a Tensor, with some additions like backward(). Also holds the gradient wrt the tensor.
* nn.Module - Neural network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
* nn.Parameter - A kind of Variable, that's automatically registered as a parameter when assigned as an attribute to a Module.
* autograd.Function - Implements forward and backward definitions of an autograd operation. Every Variable operation creates at least a single Function node that connects to functions that created a Variable and encodes its history.
At this point, we've covered:
* Defining a neural network
* Processing inputs and calling backward.
Still Left:
* Computing the loss
* Updating the weights of the network
2. Loss Function
A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.
There are several different loss functions under the nn package. A simple loss is: nn.MSELoss which computes the mean-squared error between input and target.
For example:
|
output = net(input)
target = Variable(torch.arange(1, 11)) # a dummy target example
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Now, if you follow loss in the backward direction, using its .grad_fn attribute, you'll see a graph of computations that looks like this:
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> view -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss
So, when we call loss.backward(), the whole graph is differentiated wrt the loss, and all Variables in the graph will have their .grad Variable accumulated with the gradient.
For illustration, let's follow a few steps backward:
|
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
3. Backpropagation
To backpropagate the error, all we have to do is call loss.backward(). You need to clear the existing gradients first, though, or else new gradients will be accumulated onto the existing ones.
Now we'll call loss.backward() and have a look at conv1's bias gradients before and after the backward pass.
|
net.zero_grad() # zeroes gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Now we've seen how to use loss functions.
Read Later:
The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is here.
The only thing left to learn is:
* updating the weights of the network
4. Update the Weights
The simplest update rule used in practice is Stochastic Gradient Descent (SGD):
weight = weight - learning_rate * gradient
We can implement this in simple Python code:
|
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
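The update rule `weight = weight - learning_rate * gradient` can be watched converging on a toy problem (a hypothetical quadratic, not a real network), which makes the role of the learning rate concrete:

```python
def sgd_step(w, grad, lr=0.01):
    """One plain SGD update: move against the gradient, scaled by lr."""
    return w - lr * grad

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = 0.0
for _ in range(1000):
    w = sgd_step(w, 2 * (w - 3))
print(round(w, 4))  # 3.0 -- the minimizer of f
```

With a larger lr the iterates can overshoot or diverge; with a tiny lr they crawl, which is why lr is the first hyperparameter to tune.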
However, as you use neural networks, you want to use various different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, we built a small package: torch.optim that implements all these methods. Using it is very simple:
|
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
NOTE: Observe how the gradient buffers had to be manually set to zero using optimizer.zero_grad(). This is because gradients are accumulated, as explained in the Backprop section.
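The accumulation behaviour this note describes can be mimicked with a toy stand-in (illustrative only; real PyTorch gradients live on each parameter's .grad tensor):

```python
class Param:
    """Toy stand-in for a parameter whose .grad accumulates like in PyTorch."""
    def __init__(self):
        self.grad = 0.0

    def backward(self, g):
        self.grad += g      # accumulate, as repeated loss.backward() calls do

    def zero_grad(self):
        self.grad = 0.0     # what optimizer.zero_grad() does per parameter

p = Param()
p.backward(1.5)
p.backward(1.5)
print(p.grad)  # 3.0 -- two backward passes without zeroing add up
p.zero_grad()
p.backward(1.5)
print(p.grad)  # 1.5 -- zeroing first gives the gradient of a single pass
```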
IV. Training a Classifier
This is it. You've seen how to define neural networks, compute loss, and make updates to the weights of a network.
Now you might be thinking..
1. What about Data?
Generally, when you have to deal with image, text, audio, or video data, you can use standard Python packages that load the data into a numpy array. Then you can convert this array into a torch.*Tensor.
For images, packages such as Pillow (PIL) and OpenCV are useful.
For audio, packages such as scipy and librosa are useful.
For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful.
Specifically for vision, we have created a package called torchvision, which has data loaders for common datasets such as ImageNet, CIFAR10, MNIST, etc., and data transformers for images, viz., torchvision.datasets and torch.utils.data.DataLoader.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial we'll use the CIFAR10 dataset. It has the classes ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels.
<img src="http://pytorch.org/tutorials/_images/cifar10.png" alt="CIFAR10 grid">
2. Training an Image Classifier
We'll do the following steps in order:
Load and normalize the CIFAR10 training and test datasets using torchvision
Define a Convolutional Neural Network
Define a loss function
Train the network on training data
Test the network on test data
2.1 Loading and Normalizing CIFAR10
Using torchvision, it's extremely easy to load CIFAR10:
|
import torch
import torchvision
import torchvision.transforms as transforms
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
The outputs of torchvision datasets are PIL images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1]:
|
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
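The [0, 1] to [-1, 1] mapping is just the per-channel affine transform (x - mean) / std that transforms.Normalize applies, here with mean = std = 0.5. A quick plain-Python check of the endpoints:

```python
def normalize(x, mean=0.5, std=0.5):
    """Per-channel transform applied by transforms.Normalize: (x - mean) / std."""
    return (x - mean) / std

# ToTensor yields values in [0, 1]; Normalize((0.5, ...), (0.5, ...)) maps
# the endpoints to -1 and 1, and the midpoint to 0.
print(normalize(0.0), normalize(0.5), normalize(1.0))  # -1.0 0.0 1.0
```

The inverse, `x / 2 + 0.5`, is exactly the "un-normalize" line used in the imshow helper below.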
Let's view some of the training images:
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # un-normalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
2.2 Define a Convolutional Neural Network
Copy the neural network from the Neural Networks section above and modify it to take 3-channel images (instead of the 1-channel images it was originally defined for).
|
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5) # ConvNet
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10) # classification layer
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16*5*5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
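The 16 * 5 * 5 in fc1 comes from tracking the spatial size through the layers: 5x5 convolutions with stride 1 and no padding, each followed by a 2x2 max-pool. A small sketch of that arithmetic for 32x32 CIFAR10 inputs:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output side length of a square conv or pool layer."""
    return (size + 2 * padding - kernel) // stride + 1

s = 32                  # CIFAR10 images are 32x32
s = conv_out(s, 5)      # conv1, 5x5 kernel -> 28
s = conv_out(s, 2, 2)   # 2x2 max-pool, stride 2 -> 14
s = conv_out(s, 5)      # conv2 -> 10
s = conv_out(s, 2, 2)   # pool -> 5
print(s, 16 * s * s)    # 5 400 -- hence x.view(-1, 16*5*5)
```

Feeding an input size that doesn't reduce to 5x5 here is the classic cause of shape-mismatch errors at the first fully connected layer.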
2.3 Define a Loss Function and Optimizer
We'll use Classification Cross-Entropy Loss and SGD with Momentum
|
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
2.4 Train the Network
This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.
|
for epoch in range(2): # loop over dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
#wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch+1}, {i+1:5d}] loss: {running_loss/2000:.3f}')
running_loss = 0.0
print('Finished Training | Entraînement Terminé.')
# Python 3.6 string formatting refresher
tmp = np.random.random()
print("%.3f" % tmp)
print("{:.3f}".format(tmp))
print(f"{tmp:.3f}")
print("%d" % 52)
print("%5d" % 52)
print(f'{52:5d}')
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
2.5 Test the Network on Test Data
We trained the network for 2 passes over the training dataset. But we need to check whether the network has learnt anything at all.
We'll check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
Okay, first step. Let's display an image from the test set to get familiar.
|
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:>5}' for j in range(4)))  # f-string equivalent
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Okay, now let's see what the neural network thinks these examples above are:
|
outputs = net(Variable(images))
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
The outputs are energies (confidences) for the 10 classes. Let's get the index of the highest confidence:
|
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Seems pretty good.
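Per row, torch.max(outputs.data, 1) returns the highest energy and its index; the index part is a plain argmax, sketched here with made-up energies (hypothetical numbers, not real network output):

```python
def argmax(scores):
    """Index of the highest score -- what torch.max(outputs, 1) returns per row."""
    best = 0
    for i, v in enumerate(scores):
        if v > scores[best]:
            best = i
    return best

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
energies = [0.1, -0.3, 2.2, 0.5, -1.0, 0.0, 0.4, -0.2, 1.1, 0.3]
print(classes[argmax(energies)])  # bird -- index 2 holds the highest energy
```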
Now to look at how the network performs on the whole dataset.
|
correct = 0
total = 0
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print(f'Accuracy of network on 10,000 test images: {np.round(100*correct/total,2)}%')
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
That looks way better than chance for 10 different choices (10%). Seems like the network learnt something.
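The whole-dataset loop above boils down to counting matches between predictions and labels. A plain-Python sketch with hypothetical predictions (made-up numbers, not the real 10,000-image run):

```python
def accuracy(predicted, labels):
    """Percentage of positions where the prediction matches the label."""
    correct = sum(p == l for p, l in zip(predicted, labels))
    return 100.0 * correct / len(labels)

# Hypothetical class indices over 8 test samples:
preds = [3, 1, 1, 0, 9, 9, 5, 7]
truth = [3, 8, 1, 0, 9, 2, 5, 7]
print(accuracy(preds, truth))  # 75.0 -- well above the 10% chance level
```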
What classes performed well and which didn't?:
|
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i]
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Cool, so what next?
How do we run these neural networks on the GPU?
3. Training on GPU
Just like how you transfer a Tensor onto a GPU, you transfer the neural net onto the GPU. This will recursively go over all modules and convert their parameters and buffers to CUDA tensors:
|
net.cuda()
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Remember that you'll have to send the inputs and targets at every step to the GPU too:
|
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Why don't we notice a MASSIVE speedup compared to CPU? Because the network is very small.
Exercise: Try increasing the width of your network (argument 2 of the first nn.Conv2d, and argument 1 of the second nn.Conv2d - they need to be the same number), see what kind of speedup you get.
Goals achieved:
* Understanding PyTorch's Tensor library and neural networks at a high level.
* Train a small neural network to classify images
4. Training on Multiple GPUs
If you want to see even more MASSIVE speedups using all your GPUs, please checkout Optional: Data Parallelism.
5. Where Next?
Train neural nets to play video games
Train a SotA ResNet network on ImageNet
Train a face generator using Generative Adversarial Networks
Train a word-level language model using Recurrent LSTM networks
More examples
More tutorials
Discuss PyTorch on the Forums
Chat with other users on Slack
V. Optional: Data Parallelism
In this tutorial we'll learn how to use multiple GPUs using DataParallel.
It's very easy to use GPUs with PyTorch. You can put the model on a GPU:
|
model.cuda()
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Then, you can copy all your tensors to the GPU:
|
mytensor = my_tensor.cuda()
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Please note that calling the transfer method on a tensor won't move it in place. You need to assign the result to a new variable and use that tensor on the GPU.
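A toy class (not a real tensor) makes the out-of-place behaviour concrete; the hypothetical gpu() method stands in for the real transfer call:

```python
class FakeTensor:
    """Toy object whose gpu() returns a *new* object, mimicking the out-of-place
    semantics of tensor device transfers."""
    def __init__(self, device="cpu"):
        self.device = device

    def gpu(self):
        return FakeTensor(device="gpu")   # out-of-place: original unchanged

t = FakeTensor()
t.gpu()              # result discarded -- t is still on the CPU
print(t.device)      # cpu
t = t.gpu()          # reassignment is required to actually use the GPU copy
print(t.device)      # gpu
```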
It's natural to execute your forward and backward propagations on multiple GPUs. However, PyTorch will only use one GPU by default. You can easily run your operations on multiple GPUs by making your model run in parallel using DataParallel:
|
model = nn.DataParallel(model)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
That's the core behind this tutorial. We'll explore it in more detail below.
1. Imports and Parameters
Import PyTorch modules and define parameters.
|
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
# Parameters and DataLoaders
input_size = 5
output_size = 2
batch_size = 30
data_size = 100
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
2. Dummy DataSet
Make a dummy (random) dataset. You just need to implement the __getitem__ and __len__ methods.
|
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
rand_loader = DataLoader(dataset=RandomDataset(input_size, 100),
batch_size=batch_size, shuffle=True)
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
3. Simple Model
For the demo, our model just gets an input, performs a linear operation, and gives an output. However, you can use DataParallel on any model (CNN, RNN, CapsuleNet, etc.)
We've placed a print statement inside the model to monitor the size of input and output tensors. Please pay attention to what is printed at batch rank 0.
|
class Model(nn.Module):
# Our model
def __init__(self, input_size, output_size):
super(Model, self).__init__()
self.fc = nn.Linear(input_size, output_size)
def forward(self, input):
output = self.fc(input)
print(" In Model; input size", input.size(),
"output size", output.size())
return output
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
4. Create Model and DataParallel
This is the core part of this tutorial. First, we need to make a model instance and check if we have multiple GPUs. If we have multiple GPUs, we can wrap our model using nn.DataParallel. Then we can put our model on the GPU with model.cuda().
|
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model = nn.DataParallel(model)
if torch.cuda.is_available():
model.cuda()
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
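DataParallel scatters the input along dim 0, as the [30, xxx] -> [10, ...] comment above indicates. Assuming torch.chunk-style splitting (pieces of ceil(n / k) until the batch is exhausted), the chunk sizes can be sketched as:

```python
import math

def scatter_sizes(batch, n_gpus):
    """Chunk sizes when splitting a batch along dim 0, torch.chunk-style:
    pieces of ceil(batch / n_gpus) until the batch is exhausted."""
    step = math.ceil(batch / n_gpus)
    sizes = []
    remaining = batch
    while remaining > 0:
        sizes.append(min(step, remaining))
        remaining -= step
    return sizes

print(scatter_sizes(30, 3))  # [10, 10, 10] -- an even split, as in the comment
print(scatter_sizes(10, 3))  # [4, 4, 2] -- the last chunk can be smaller
```

This is why the "Run the Model" output below shows smaller per-GPU input sizes inside the model than the batch size seen outside it.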
5. Run the Model
Now we can see the sizes of input and output tensors.
|
for data in rand_loader:
if torch.cuda.is_available():
input_var = Variable(data.cuda())
else:
input_var = Variable(data)
output = model(input_var)
print("Outside: input size", input_var.size(),
"output_size", output.size())
|
pytorch/PyTorch60MinBlitz.ipynb
|
WNoxchi/Kaukasos
|
mit
|
Nice! We have the basic useful columns: area, job groups, but also data on the salaries and the age of the new hire (a proxy for experience, I guess). According to the documentation, 90% of the employees earn more than the MINIMUM_SALARY, while only 10% earn more than the MAXIMUM_SALARY.
Concerning the age groups, how many categories do we have and what are they?
|
salaries.AGE_GROUP_NAME.unique()
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Only two… Maybe it is enough.
Let's do a basic check on the areas.
First what about the granularity?
|
salaries.AREA_TYPE_NAME.unique()
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Ok! We have 3 different levels, whole country, regions and departments.
Is everyone there? Let's start by the departments!
|
salaries[salaries.AREA_TYPE_CODE == 'D'].AREA_CODE.sort_values().unique().size
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
We could expect 101 (metropolitan + overseas departments) or 103 (adding other overseas collectivities). Here, Mayotte is missing.
What about the regions?
|
salaries[salaries.AREA_TYPE_CODE == 'R'].AREA_CODE.sort_values().unique().size
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Oh… We expected 18! Again, Mayotte is missing.
The first overview revealed some negative salaries. The documentation states that when there is not enough data, the value is -1, while when the data is unavailable it is marked as -2.
Let's have a basic description of the salaries.
How many missing or uninformative salary data do we have?
We start with the minimum salary
|
len(salaries[salaries.MINIMUM_SALARY < 0]) / len(salaries) * 100
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
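The one-liner above just counts sentinel values; a plain-Python equivalent on a made-up MINIMUM_SALARY column (toy numbers, not the real dataset) shows the computation:

```python
# Toy MINIMUM_SALARY column using the documented sentinels
# (-1: not enough data, -2: unavailable); positive values are monthly salaries.
minimum_salary = [1600, -1, 2100, -2, 1850, 3000, -1, 2400]

missing_pct = 100 * sum(s < 0 for s in minimum_salary) / len(minimum_salary)
print(missing_pct)  # 37.5 on this toy column (the real dataset shows ~13%)
```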
Around 13% of the job groups for a given area (market) don't have salary data! That is a bit more than nothing!
When salaries are less than 0, are the minimum and maximum salaries always the same?
|
invalid_rows = salaries[(salaries.MAXIMUM_SALARY < 0) | (salaries.MINIMUM_SALARY < 0)]
all(invalid_rows.MINIMUM_SALARY == invalid_rows.MAXIMUM_SALARY)
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Yeahh!! They are exactly the same… How convenient!
So let's get a basic overview of the salaries.
|
valid_salaries = salaries[salaries.MAXIMUM_SALARY > 0]
valid_salaries[['MAXIMUM_SALARY', 'MINIMUM_SALARY']].describe()
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Because the minimum MINIMUM_SALARY is lower than the French minimum wage (~1400), we think that these data gather both full-time and part-time offers. It can be scary to deliver them as such to our users…
Anyway, no weird or missing values here.
It may be overkill, but we'll check whether the maximum salary is always greater than or equal to the minimum salary.
|
all(salaries.MAXIMUM_SALARY >= salaries.MINIMUM_SALARY)
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Great!
Last but not least: do we cover every job group?
|
salaries.PCS_PROFESSION_CODE.unique().size
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
According to INSEE documentation, we should expect around 500 job groups. ~85% of them are covered.
But, then how many of these job groups have valid data?
|
valid_salaries.PCS_PROFESSION_CODE.unique().size
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Yeahh!! All of them!!
Currently, in Bob, we are mostly using the ROME classification, so we are interested in the number of ROME job groups covered by this dataset.
First, we need to download the mapping between PCS and ROME classifications.
|
pcs_to_rome = pd.read_csv(path.join(DATA_FOLDER, 'crosswalks/passage_pcs_romev3.csv'))
pcs_to_rome.head()
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Quite concise, isn't it!
|
pcs_to_rome[pcs_to_rome['PCS'].isin(salaries.PCS_PROFESSION_CODE.unique())]\
.ROME.unique().size
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Impressive! We have a ~97% coverage for ROME job groups.
What about the granularity of this coverage?
Coverage at the regions level.
|
region_professions = salaries[salaries.AREA_TYPE_CODE == 'R']\
.PCS_PROFESSION_CODE.unique()
pcs_to_rome[pcs_to_rome['PCS']\
.isin(region_professions)]\
.ROME.unique().size
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Exactly the same…
Let's have a look at the ROME job groups coverage at the department level.
|
department_professions = salaries[salaries.AREA_TYPE_CODE == 'D']\
.PCS_PROFESSION_CODE.unique()
pcs_to_rome[pcs_to_rome['PCS']\
.isin(department_professions)]\
.ROME.unique().size
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Again, no difference.
Everything is going well so far!
Global Overview and Comparison with Scraped Data
Actually, we have multiple sources of data for salaries: the IMT and the FHS (more or less Pôle Emploi's statistics history). The FHS dataset provides jobseekers' salary expectancies. A notebook has been written before to investigate the distribution of these expected salaries.
An analysis of the IMT salary data has been done before.
The main conclusions of this notebook were:
- for a given job group, the salary was quite consistent across the French territory (you'll earn almost the same as a deliverer in Lyon or in Paris).
- there is a high variation in salaries within a single department.
Does this still stand?
How variable are the salaries within departments?
|
salaries['mean_senior_salary'] = salaries[['MINIMUM_SALARY', 'MAXIMUM_SALARY']].sum(axis=1).div(2)
valid_salaries = salaries[salaries.MAXIMUM_SALARY > 0]
stats_within_departments = valid_salaries[valid_salaries.AREA_TYPE_CODE == 'D']\
.groupby('AREA_NAME')\
.mean_senior_salary.agg(['mean', 'std'])\
.sort_values('mean', ascending=False)
stats_within_departments.plot(kind='box');
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Within a department, ~30% of the jobs propose a salary greater than 4200€ or less than 1800€.
How variable are the salaries within job groups?
|
stats_within_jobgroups = valid_salaries[valid_salaries.AREA_TYPE_CODE == 'D'].groupby('PCS_PROFESSION_CODE')\
.mean_senior_salary.agg(['mean', 'std'])\
.sort_values('std', ascending=False)
stats_within_jobgroups.plot(kind='box');
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
As expected, within a job group the dispersion is smaller than within a department (standard deviation most of the time below 1000€).
Still, why not look at some examples of highly variable job groups?
|
valid_salaries[
(valid_salaries.AREA_TYPE_CODE == 'D') &
(valid_salaries.PCS_PROFESSION_CODE.isin(stats_within_jobgroups.index))]\
.drop_duplicates().PCS_PROFESSION_NAME.to_frame().head(5)\
.style.set_properties( **{'width': '500px'})
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
Salespeople are the ones with the most highly variable salaries. That seems sensible!
What about the conformity of API data with scraped data?
According to the website (on the 2nd of October 2017), a nurse in the Isère department, younger than 35 years old, could expect a salary between 1850€ and 4050€.
Note that ROME code for nurse is "J1502" which corresponds to 6 PCS classifications (431a, 431b, 431c, 431d, 431f and 431g).
|
nurse_pcs = ['431a', '431b', '431c', '431d', '431f', '431g']
valid_salaries[(valid_salaries.AREA_NAME == "ISERE") \
& (valid_salaries.PCS_PROFESSION_CODE.isin(nurse_pcs) \
& (valid_salaries.AGE_GROUP_NAME == 'Moins de 35 ans'))] \
[['MAXIMUM_SALARY', 'MINIMUM_SALARY', 'PCS_PROFESSION_CODE', 'PCS_PROFESSION_NAME']]
|
data_analysis/notebooks/datasets/imt/salaries_imt_api.ipynb
|
bayesimpact/bob-emploi
|
gpl-3.0
|
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/matching_engine/intro-swivel.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/matching_engine/intro-swivel.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This notebook demonstrates how to train an embedding with the Submatrix-wise Vector Embedding Learner (Swivel) using Vertex Pipelines. The purpose of the embedding learner is to compute cooccurrences between tokens in a given dataset and to use the cooccurrences to generate embeddings.
Vertex AI provides a pipeline template
for training with Swivel, so you don't need to design your own pipeline or write
your own training code.
You will need to provide a bucket where the dataset will be stored.
Note: you may incur charges for training, storage or usage of other GCP products (Dataflow) in connection with testing this SDK.
Dataset
You will use the following sample datasets in the public bucket gs://cloud-samples-data/vertex-ai/matching-engine/swivel:
movielens_25m: A movie rating dataset for the items input type that you can use to create embeddings for movies. This dataset is processed so that each line contains the movies that received the same rating from the same user. The directory also includes movies.csv, which maps the movie ids to their names.
wikipedia: A text corpus dataset created from a Wikipedia dump that you can use to create word embeddings.
Objective
In this notebook, you will learn how to train custom embeddings using Vertex Pipelines and deploy the model for serving. The steps performed include:
Setup: Importing the required libraries and setting your global variables.
Configure parameters: Setting the appropriate parameter values for the pipeline job.
Train on Vertex Pipelines: Create a Swivel job to Vertex Pipelines using pipeline template.
Deploy on Vertex Prediction: Importing and deploying the trained model to a callable endpoint.
Predict: Calling the deployed endpoint using online prediction.
Cleaning up: Deleting resources created by this tutorial.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Dataflow
Cloud Storage
Learn about Vertex AI
pricing, Cloud Storage
pricing, and Dataflow pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install additional package dependencies not installed in your notebook environment, such as google-cloud-aiplatform, tensorboard-plugin-profile. Use the latest major GA version of each package.
|
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
!pip3 install {USER_FLAG} --upgrade pip
!pip3 install {USER_FLAG} --upgrade scikit-learn
!pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile
!pip3 install {USER_FLAG} --upgrade tensorflow
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Dataflow API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
|
import os
PROJECT_ID = ""
# Get your Google Cloud project ID and project number from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
shell_output = !gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)" 2>/dev/null
PROJECT_NUMBER = shell_output[0]
print("Project number: ", PROJECT_NUMBER)
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a built-in Swivel job using the Cloud SDK, you need a Cloud Storage bucket for storing the input dataset and pipeline artifacts (the trained model).
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
|
from datetime import datetime

# Timestamp used to make the fallback bucket name unique.
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")

BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}
REGION = "us-central1"  # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
|
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].split()[1]
print("Service Account:", SERVICE_ACCOUNT)
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Import libraries and define constants
Define constants used in this tutorial.
|
SOURCE_DATA_PATH = "{}/swivel".format(BUCKET_NAME)
PIPELINE_ROOT = "{}/pipeline_root".format(BUCKET_NAME)
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Import packages used in this tutorial.
|
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from sklearn.metrics.pairwise import cosine_similarity
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Copy and configure the Swivel template
Download the Swivel template and configuration script.
|
!gsutil cp gs://cloud-samples-data/vertex-ai/matching-engine/swivel/pipeline/* .
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Change your pipeline configurations:
pipeline_suffix: Suffix of your pipeline name (lowercase and hyphen are allowed).
machine_type: e.g. n1-standard-16.
accelerator_count: Number of GPUs in each machine.
accelerator_type: e.g. NVIDIA_TESLA_P100, NVIDIA_TESLA_V100.
region: e.g. us-east1 (optional, default is us-central1)
network_name: e.g., my_network_name (optional, otherwise it uses "default" network).
VPC Network peering, subnetwork and private IP address configuration
Executing the following cell will generate two files:
1. swivel_pipeline_basic.json: The basic template allows public IPs and default network for the Dataflow job, and doesn't require setting up VPC Network peering for Vertex AI and you will use it in this notebook sample.
1. swivel_pipeline.json: This template enables private IPs and subnet configuration for the Dataflow job, also requires setting up VPC Network peering for the Vertex custom training. This template includes the following args:
* "--subnetwork=regions/%REGION%/subnetworks/%NETWORK_NAME%",
* "--no_use_public_ips",
* \"network\": \"projects/%PROJECT_NUMBER%/global/networks/%NETWORK_NAME%\"
**WARNING:** To specify private IPs and configure the VPC network, you need to set up VPC Network peering for Vertex AI for your subnetwork (e.g. the "default" network in "us-central1") before submitting the following job. This is required for using private IP addresses for Dataflow and Vertex AI.
|
YOUR_PIPELINE_SUFFIX = "swivel-pipeline-movie" # @param {type:"string"}
MACHINE_TYPE = "n1-standard-16" # @param {type:"string"}
ACCELERATOR_COUNT = 2 # @param {type:"integer"}
ACCELERATOR_TYPE = "NVIDIA_TESLA_V100" # @param {type:"string"}
BUCKET = BUCKET_NAME[5:] # remove "gs://" for the following command.
!chmod +x swivel_template_configuration*
!./swivel_template_configuration_basic.sh -pipeline_suffix {YOUR_PIPELINE_SUFFIX} -project_number {PROJECT_NUMBER} -project_id {PROJECT_ID} -machine_type {MACHINE_TYPE} -accelerator_count {ACCELERATOR_COUNT} -accelerator_type {ACCELERATOR_TYPE} -pipeline_root {BUCKET}
!./swivel_template_configuration.sh -pipeline_suffix {YOUR_PIPELINE_SUFFIX} -project_number {PROJECT_NUMBER} -project_id {PROJECT_ID} -machine_type {MACHINE_TYPE} -accelerator_count {ACCELERATOR_COUNT} -accelerator_type {ACCELERATOR_TYPE} -pipeline_root {BUCKET}
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Both swivel_pipeline_basic.json and swivel_pipeline.json are generated.
Create the Swivel job for MovieLens items embeddings
You will submit the pipeline job by constructing an aiplatform.PipelineJob from the compiled spec and calling its submit() method. Note that you pass a parameter_values dict that specifies the pipeline input parameters to use.
The following table shows the runtime parameters required by the Swivel job:
| Parameter      | Data type | Description                                                        | Required               |
|----------------|-----------|--------------------------------------------------------------------|------------------------|
| embedding_dim  | int       | Dimensions of the embeddings to train.                              | No - Default is 100    |
| input_base     | string    | Cloud Storage path where the input data is stored.                  | Yes                    |
| input_type     | string    | Type of the input data. Can be either 'text' (for the wikipedia sample) or 'items' (for the movielens sample). | Yes |
| max_vocab_size | int       | Maximum vocabulary size to generate embeddings for.                 | No - Default is 409600 |
| num_epochs     | int       | Number of epochs for training.                                      | No - Default is 20     |
In short, the items input type means that each line of your input data should be a sequence of space-separated item IDs; each line is tokenized by splitting on whitespace. The text input type means that each line of your input data should be equivalent to a sentence; each line is tokenized by lowercasing and then splitting on whitespace.
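The two tokenization behaviors can be sketched as follows (an illustrative approximation, not the pipeline's actual tokenizer):

```python
def tokenize(line: str, input_type: str) -> list:
    """Approximate Swivel input tokenization for the two input types."""
    if input_type == "text":
        # Sentences are lowercased before splitting
        line = line.lower()
    # Both input types split on whitespace
    return line.split()

print(tokenize("1196 589 2571", "items"))            # item IDs kept as-is
print(tokenize("The Shawshank Redemption", "text"))  # lowercased words
```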
|
# Copy the MovieLens sample dataset
! gsutil cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/movielens_25m/train/* {SOURCE_DATA_PATH}/movielens_25m
# MovieLens items embedding sample
PARAMETER_VALUES = {
"embedding_dim": 100, # <---CHANGE THIS (OPTIONAL)
"input_base": "{}/movielens_25m/train".format(SOURCE_DATA_PATH),
"input_type": "items", # For movielens sample
"max_vocab_size": 409600, # <---CHANGE THIS (OPTIONAL)
"num_epochs": 5, # <---CHANGE THIS (OPTIONAL)
}
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Submit the pipeline to Vertex AI:
|
# Instantiate PipelineJob object
pl = aiplatform.PipelineJob(
display_name=YOUR_PIPELINE_SUFFIX,
# Whether or not to enable caching
# True = always cache pipeline step result
# False = never cache pipeline step result
# None = defer to cache option for each pipeline component in the pipeline definition
enable_caching=False,
# Local or GCS path to a compiled pipeline definition
template_path="swivel_pipeline_basic.json",
# Dictionary containing input parameters for your pipeline
parameter_values=PARAMETER_VALUES,
# GCS path to act as the pipeline root
pipeline_root=PIPELINE_ROOT,
)
# Submit the Pipeline to Vertex AI
# Optionally you may specify the service account below: submit(service_account=SERVICE_ACCOUNT)
# You must have iam.serviceAccounts.actAs permission on the service account to use it
pl.submit()
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
After the job is submitted successfully, you can view its details (including run name that you'll need below) and logs.
Use TensorBoard to check the model
You may use TensorBoard to check the model training process. To do that, you need the path to the trained model artifact. After the job finishes successfully (typically a few hours), you can view the trained model output path in the Vertex ML Metadata browser. It has the following format:
{BUCKET_NAME}/pipeline_root/{PROJECT_NUMBER}/swivel-{TIMESTAMP}/EmbTrainerComponent_-{SOME_NUMBER}/model/
You may copy this path into SAVEDMODEL_DIR below.
Alternatively, you can download a pretrained model to {SOURCE_DATA_PATH}/movielens_model and proceed. This pretrained model is for demo purposes and is not optimized for production use.
|
import os

! gsutil -m cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/models/movielens/model {SOURCE_DATA_PATH}/movielens_model
SAVEDMODEL_DIR = os.path.join(SOURCE_DATA_PATH, "movielens_model/model")
LOGS_DIR = os.path.join(SOURCE_DATA_PATH, "movielens_model/tensorboard")
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
When the training starts, you can view the logs in TensorBoard:
|
import sys

# If on Google Cloud Notebooks, skip this cell; otherwise (e.g., on Colab),
# load the TensorBoard notebook extension and start TensorBoard.
if not IS_GOOGLE_CLOUD_NOTEBOOK and "google.colab" in sys.modules:
    # Load the TensorBoard notebook extension.
    %load_ext tensorboard
    %tensorboard --logdir $LOGS_DIR
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
For Google Cloud Notebooks, you can do the following:
1. Open Cloud Shell from the Google Cloud Console.
2. Install dependencies: pip3 install tensorflow tensorboard-plugin-profile
3. Run the following command: tensorboard --logdir {LOGS_DIR}. You will see a message "TensorBoard 2.x.0 at http://localhost:<PORT>/ (Press CTRL+C to quit)" as the output. Take note of the port number.
4. Click on the Web Preview button to view the TensorBoard dashboard and profiling results. You need to configure Web Preview's port to match the port number from step 3.
Deploy the embedding model for online serving
To deploy the trained model, you will perform the following steps:
* Create a model endpoint (if needed).
* Upload the trained model as a Model resource.
* Deploy the Model to the endpoint.
|
ENDPOINT_NAME = "swivel_embedding" # <---CHANGE THIS (OPTIONAL)
MODEL_VERSION_NAME = "movie-tf2-cpu-2.4" # <---CHANGE THIS (OPTIONAL)
aiplatform.init(project=PROJECT_ID, location=REGION)
# Create a model endpoint
endpoint = aiplatform.Endpoint.create(display_name=ENDPOINT_NAME)
# Upload the trained model to Model resource
model = aiplatform.Model.upload(
display_name=MODEL_VERSION_NAME,
artifact_uri=SAVEDMODEL_DIR,
serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-4:latest",
)
# Deploy the Model to the Endpoint
model.deploy(
endpoint=endpoint,
machine_type="n1-standard-2",
)
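After deployment, predictions against the endpoint return embedding vectors for item IDs. As suggested by the cosine_similarity import earlier, you can compare two items' embeddings like this (a minimal sketch using hypothetical random vectors in place of real prediction output):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical 100-dimensional embeddings standing in for two items'
# prediction results (embedding_dim=100 matches PARAMETER_VALUES above)
rng = np.random.RandomState(42)
emb_a = rng.rand(1, 100)
emb_b = rng.rand(1, 100)

# cosine_similarity expects 2-D arrays of shape (n_samples, n_features)
sim = cosine_similarity(emb_a, emb_b)[0][0]
print(f"cosine similarity: {sim:.4f}")  # value in [-1, 1]; higher = more similar
```

In practice you would replace the random vectors with the embeddings returned by endpoint.predict() for the item IDs you want to compare.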
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|