Finally, let's grab and print the top 2 rated movies for each user.
top_movies = tf.nn.top_k(users_ratings_new, num_recommendations)[1]
top_movies
for i in range(num_users):
    movie_names = [movies[index] for index in top_movies[i]]
    print('{}: {}'.format(users[i], movie_names))
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
turbomanage/training-data-analyst
apache-2.0
Load signal In this tutorial, the signal is imported from a .wav file. The tutorial Audio signal basic operations gives more information about the syntax of the import and the other supported file types. You can use any .wav file to perform the tutorial or you can download the pink noise signal from MOSQITO that is use...
# Define path to the .wav file
# To be replaced by your own path
path = "../validations/sq_metrics/loudness_zwst/input/ISO_532_1/Test signal 5 (pinknoise 60 dB).wav"

# load signal
sig, fs = load(path, wav_calib=2 * 2 ** 0.5)

# plot signal
t = np.linspace(0, (len(sig) - 1) / fs, len(sig))
plt.figure(1)
plt.plot(t, sig, c...
tutorials/tuto_sharpness_din.ipynb
Eomys/MoSQITo
apache-2.0
Compute sharpness of the whole signal The acoustic sharpness is computed using the following command line. In addition to the signal (as ndarray) and the sampling frequency, the function takes one input argument: "weighting", to specify the weighting function to be used ('din' by default, 'aures', 'bismarck' or 'fas...
sharpness = sharpness_din_st(sig, fs, weighting="din")
tutorials/tuto_sharpness_din.ipynb
Eomys/MoSQITo
apache-2.0
The function returns the sharpness of the signal:
print("Sharpness = {:.1f} acum".format(sharpness))
tutorials/tuto_sharpness_din.ipynb
Eomys/MoSQITo
apache-2.0
Compute sharpness per signal segments To compute the sharpness for successive, possibly overlapping, time segments, you can use the sharpness_din_perseg function. It accepts two more input parameters: - nperseg: to define the length of each segment - noverlap: to define the number of points to overlap between segments
sharpness, time_axis = sharpness_din_perseg(sig, fs, nperseg=8192 * 2, noverlap=4096, weighting="din")
plt.figure(2)
plt.plot(time_axis, sharpness, color=COLORS[0])
plt.xlabel("Time [s]")
plt.ylabel("S_din [acum]")
plt.ylim((0, 3))
tutorials/tuto_sharpness_din.ipynb
Eomys/MoSQITo
apache-2.0
Compute sharpness from loudness In case you have already computed the loudness of a signal, you can use the sharpness_din_from_loudness function to compute the sharpness. It takes the loudness and the specific loudness as input. The loudness can be computed per time segment or not.
N, N_specific, bark_axis, time_axis = loudness_zwst_perseg(
    sig, fs, nperseg=8192 * 2, noverlap=4096
)
sharpness = sharpness_din_from_loudness(N, N_specific, weighting='din')
plt.figure(3)
plt.plot(time_axis, sharpness, color=COLORS[0])
plt.xlabel("Time [s]")
plt.ylabel("S_din [acum]")
plt.ylim((0, 3))
tutorials/tuto_sharpness_din.ipynb
Eomys/MoSQITo
apache-2.0
Compute sharpness from spectrum The commands below show how to compute the stationary sharpness from a frequency spectrum, either in complex or amplitude values, using the functions from MOSQITO. Note that only stationary values can be computed from a frequency input. The input spectrum can be either ...
# Compute spectrum
n = len(sig)
spec = np.abs(2 / np.sqrt(2) / n * fft(sig)[0:n//2])
freqs = fftfreq(n, 1/fs)[0:n//2]

# Compute sharpness (sharpness is expressed in acum)
S = sharpness_din_freq(spec, freqs)
print("Sharpness_din = {:.1f} acum".format(S))
tutorials/tuto_sharpness_din.ipynb
Eomys/MoSQITo
apache-2.0
from datetime import date

print("Tutorial generation date:", date.today().strftime("%B %d, %Y"))
tutorials/tuto_sharpness_din.ipynb
Eomys/MoSQITo
apache-2.0
Network creation and initialization is very similar to C++: - networks are created using the make_net(name) factory function - the net.set(key,value) method is used to set up parameters - the .setLearningRate(lr,mom) method is used to set learning rate and momentum - .initialize() is called to create the network As in C++, t...
net = clstm.make_net_init("lstm1", "ninput=1:nhidden=4:noutput=2")
print net
net.setLearningRate(1e-4, 0.9)
print clstm.network_info(net)
misc/lstm-delay.ipynb
MichalBusta/clstm
apache-2.0
You can navigate the network structure as you would in C++. You can use similar methods to create more complex network architectures than possible with make_net.
print net.sub.size()
print net.sub[0]
print net.sub[0].name
misc/lstm-delay.ipynb
MichalBusta/clstm
apache-2.0
This cell generally illustrates how to invoke the CLSTM library from Python: - net.inputs, net.outputs, net.d_inputs, and net.d_outputs are Sequence types - Sequence objects can be converted to rank 3 arrays using the .array() method - The values in a Sequence can be set with the .aset(array) method
N = 20
xs = array(randn(N, 1, 1) < 0.2, 'f')
net.inputs.aset(xs)
net.forward()
misc/lstm-delay.ipynb
MichalBusta/clstm
apache-2.0
Here is a training loop that generates a delayed-by-one copy of a random input sequence and trains the network to learn this task.
N = 20
test = array(rand(N) < 0.3, 'f')
plot(test, '--', c="black")
ntrain = 30000
for i in range(ntrain):
    xs = array(rand(N) < 0.3, 'f')
    ys = roll(xs, 1)
    ys[0] = 0
    ys = array([1-ys, ys], 'f').T.copy()
    net.inputs.aset(xs.reshape(N, 1, 1))
    net.forward()
    net.d_outputs.aset(ys.reshape(N, 2, 1) - net.outpu...
misc/lstm-delay.ipynb
MichalBusta/clstm
apache-2.0
Installs DOcplex if needed
import sys
try:
    import docplex.mp
except:
    if hasattr(sys, 'real_prefix'):
        # we are in a virtual env.
        !pip install docplex
    else:
        !pip install --user docplex
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
If either CPLEX or docplex were installed in the steps above, you will need to restart your Jupyter kernel for the changes to be taken into account. Step 2: Set up the prescriptive model Create the model All objects of the model belong to one model instance.
# first import the Model class from docplex.mp
from docplex.mp.model import Model

# create one model instance, with a name
m = Model(name='telephone_production')
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Define the decision variables The continuous variable desk represents the production of desk telephones. The continuous variable cell represents the production of cell phones.
# by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.continuous_var(name='desk')
cell = m.continuous_var(name='cell')
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Set up the constraints - Desk and cell phone production must each be at least 100 - Assembly time is limited - Painting time is limited.
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100)

# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100)

# constraint #3: assembly time limit
ct_assembly = m.add_constraint(0.2 * desk + 0.4 * cell <= 400)

# constraint #4: painting time ...
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the objective We want to maximize the expected revenue.
m.maximize(12 * desk + 20 * cell)
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
A few remarks about how we formulated the mathematical model in Python using DOcplex: - all arithmetic operations (+, *, -) are done using Python operators - comparison operators used in writing linear constraints are Python comparison operators too. Print information about the model We can print information about the mo...
m.print_information()
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Graphical representation of a Linear Problem A simple 2-dimensional LP (with 2 decision variables) can be represented graphically using a x- and y-axis. This is often done to demonstrate optimization concepts. To do this, follow these steps: - Assign one variable to the x-axis and the other to the y-axis. - Draw each...
s = m.solve()
m.print_solution()
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region. Multiple Optimal Solutions It is possible that an LP has multiple optimal solutions. At least one optimal solution will be at a vertex. By default, the CPLEX® Optimizer repo...
# create a new model, copy of m
im = m.copy()

# get the 'desk' variable of the new model from its name
idesk = im.get_var_by_name('desk')

# add a new (infeasible) constraint
im.add_constraint(idesk >= 1100);

# solve the new problem; we expect a result of None as the model is now infeasible
ims = im.solve()
if ims is Non...
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Correcting infeasible models To correct an infeasible model, you must use your knowledge of the real-world situation you are modeling. If you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example...
overtime = m.continuous_var(name='overtime', ub=40)
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Modify the assembly time constraint by adding overtime to its right-hand side. Note: this operation modifies the model by performing a side-effect on the constraint object. DOcplex allows dynamic editing of model elements.
ct_assembly.rhs = 400 + overtime
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Last, modify the objective expression to subtract the penalization term, using the Python - operator.
m.maximize(12*desk + 20 * cell - 2 * overtime)
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
And solve again using DOcplex:
s2 = m.solve()
m.print_solution()
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Unbounded Variable vs. Unbounded model A variable is unbounded when one or both of its bounds is infinite. A model is unbounded when its objective value can be increased or decreased without limit. The fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be conf...
print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))
print('* cell variable has reduced cost: {0}'.format(cell.reduced_cost))
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Default optimality criteria for CPLEX optimizer Because CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs. The default optimality tolerance is 1e-6, with the optimality criterion for the simplest form of an LP then being: $$ c - y^{t}A > -10^{-6} $$ You can ad...
# revert soft constraints
ct_assembly.rhs = 440
s3 = m.solve()

# now get slack value for assembly constraint: expected value is 40
print('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))

# get slack value for painting time constraint, expected value is 0.
print('* slack value for pa...
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Degeneracy It is possible that multiple non-optimal solutions with the same objective value exist. As the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known a...
m.parameters.lpmethod = 4
m.solve(log_output=True)
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Euler's method Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation $$ \frac{dy}{dx} = f(y(x), x) $$ with the initial condition: $$ y(x_0)=y_0 $$ Euler's method performs updates using the equations: $$ y_{n+1} = y_n + h f(y_n,x_n) $$ $$ h = x_{n+1}...
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list...
assignments/assignment10/ODEsEx01.ipynb
LimeeZ/phys292-2015-work
mit
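The stub above is left for the reader to complete. A minimal sketch of one possible `solve_euler` implementation of the update equations above (an assumption, not the assignment's official solution) could look like:

```python
import numpy as np

def solve_euler(derivs, y0, x):
    """Euler's method: y[n+1] = y[n] + h * f(y[n], x[n]), with h = x[n+1] - x[n]."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y

# Sanity check on dy/dx = y with y(0) = 1, whose exact solution is e^x.
x = np.linspace(0, 1, 101)
y = solve_euler(lambda y, x: y, 1.0, x)
```

With step size h = 0.01 the endpoint lands close to e, with the O(h) error expected of Euler's method.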
The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation: $$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$ Write a function solve_midpoint that implements the midpoint met...
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarr...
assignments/assignment10/ODEsEx01.ipynb
LimeeZ/phys292-2015-work
mit
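As with Euler's method above, a possible `solve_midpoint` sketch following the stated update equation (an assumption, not the official solution) is:

```python
import numpy as np

def solve_midpoint(derivs, y0, x):
    """Midpoint method: evaluate f at a half-step estimate before updating."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        # Half-step Euler estimate, then a full step using the midpoint slope.
        y_mid = y[n] + 0.5 * h * derivs(y[n], x[n])
        y[n + 1] = y[n] + h * derivs(y_mid, x[n] + 0.5 * h)
    return y

# Sanity check on dy/dx = y, y(0) = 1: exact solution is e^x.
x = np.linspace(0, 1, 11)
y = solve_midpoint(lambda y, x: y, 1.0, x)
```

Even with the coarse step h = 0.1, the endpoint is noticeably closer to e than Euler's method would be, reflecting the method's O(h^2) accuracy.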
You are now going to solve the following differential equation: $$ \frac{dy}{dx} = x + 2y $$ which has the analytical solution: $$ y(x) = 0.25 e^{2x} - 0.5 x - 0.25 $$ First, write a solve_exact function that computes the exact solution and follows the specification described in the docstring:
def solve_exact(x):
    """compute the exact solution to dy/dx = x + 2y.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions at y[i] = y(x[i]).
    """
    # YOUR CODE HERE
    raise NotImp...
assignments/assignment10/ODEsEx01.ipynb
LimeeZ/phys292-2015-work
mit
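One way to fill in the `solve_exact` stub (a sketch under the assumption y(0) = 0, which the given analytical solution satisfies) is to evaluate the formula directly; note y' = 0.5 e^{2x} - 0.5 equals x + 2y term by term:

```python
import numpy as np

def solve_exact(x):
    """Exact solution y(x) = 0.25*exp(2x) - 0.5x - 0.25 of dy/dx = x + 2y."""
    x = np.asarray(x, dtype=float)
    return 0.25 * np.exp(2 * x) - 0.5 * x - 0.25
```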
In the following cell you are going to solve the above ODE using four different algorithms: Euler's method Midpoint method odeint Exact Here are the details: Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$). Define the derivs function for the above differential equation. Using the...
# YOUR CODE HERE
raise NotImplementedError()

assert True  # leave this for grading the plots
assignments/assignment10/ODEsEx01.ipynb
LimeeZ/phys292-2015-work
mit
For this example, we need two factors: a 10-day mean close price factor, and a 30-day one:
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)
notebooks/tutorials/2_pipeline_lesson4/notebook.ipynb
quantopian/research_public
apache-2.0
Then, let's create a percent difference factor by combining our mean_close_30 factor with our mean_close_10 factor.
percent_difference = (mean_close_10 - mean_close_30) / mean_close_30
notebooks/tutorials/2_pipeline_lesson4/notebook.ipynb
quantopian/research_public
apache-2.0
In this example, percent_difference is still a Factor even though it's composed as a combination of more primitive factors. We can add percent_difference as a column in our pipeline. Let's define make_pipeline to create a pipeline with percent_difference as a column (and not the mean close factors):
def make_pipeline():
    mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
    mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)

    percent_difference = (mean_close_10 - mean_close_30) / mean_close_30

    return Pipeline(
        columns={
            ...
notebooks/tutorials/2_pipeline_lesson4/notebook.ipynb
quantopian/research_public
apache-2.0
Let's see what the new output looks like:
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
notebooks/tutorials/2_pipeline_lesson4/notebook.ipynb
quantopian/research_public
apache-2.0
This notebook uses TF2.x. Please check your tensorflow version using the cell below.
# Show the currently installed version of TensorFlow
print("TensorFlow version: ", tf.version.VERSION)
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Train a model for MNIST without quantization aware training
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture.
# TODO: Your co...
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Clone and fine-tune pre-trained model with quantization aware training Define the model You will apply quantization aware training to the whole model and see this in the model summary. All layers are now prefixed by "quant". Note that the resulting model is quantization aware but not quantized (e.g. the weights are flo...
import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.quantize_model

# q_aware stands for quantization aware.
q_aware_model = quantize_model(model)

# `quantize_model` requires a recompile.
q_aware_model.compile(optimizer='adam',
                      loss=tf.keras.losses.SparseCategoric...
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Train and evaluate the model against baseline To demonstrate fine tuning after training the model for just an epoch, fine tune with quantization aware training on a subset of the training data.
train_images_subset = train_images[0:1000]  # out of 60000
train_labels_subset = train_labels[0:1000]

q_aware_model.fit(train_images_subset, train_labels_subset,
                  batch_size=500, epochs=1, validation_split=0.1)
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
For this example, there is minimal to no loss in test accuracy after quantization aware training, compared to the baseline.
_, baseline_model_accuracy = model.evaluate(
    test_images, test_labels, verbose=0)

_, q_aware_model_accuracy = q_aware_model.evaluate(
    test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy)
print('Quant test accuracy:', q_aware_model_accuracy)
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create quantized model for TFLite backend After this, you have an actually quantized model with int8 weights and uint8 activations.
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

quantized_tflite_model = converter.convert()
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
See persistence of accuracy from TF to TFLite Define a helper function to evaluate the TF Lite model on the test dataset.
import numpy as np

def evaluate_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for i, test_image in enumerate(test_images):
        if i % 100...
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
You evaluate the quantized model and see that the accuracy from TensorFlow persists to the TFLite backend.
interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()

test_accuracy = evaluate_model(interpreter)

print('Quant TFLite test_accuracy:', test_accuracy)
print('Quant TF test accuracy:', q_aware_model_accuracy)
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
See 4x smaller model from quantization You create a float TFLite model and then see that the quantized TFLite model is 4x smaller.
# Create float TFLite model.
# TODO: Your code goes here

# Measure sizes of models.
_, float_file = tempfile.mkstemp('.tflite')
_, quant_file = tempfile.mkstemp('.tflite')

with open(quant_file, 'wb') as f:
    f.write(quantized_tflite_model)

with open(float_file, 'wb') as f:
    f.write(float_tflite_model)

print("Floa...
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We're going to need os, numpy, matplotlib, skimage, PyTorch and torchvision. We also import some helper utilities from scikit-learn, OpenCV and BatchUp for convenience.
import os, time, glob, tqdm
import numpy as np
from matplotlib import pyplot as plt
import torch, torch.nn as nn, torch.nn.functional as F
import torchvision
import skimage.transform, skimage.util
from skimage.util import montage
from sklearn.model_selection import StratifiedShuffleSplit
import cv2
from batchup import ...
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Data loading We are loading images from a folder of files, so we could approach this a number of ways. Our dataset consists of 25,000 images so we could load them all into memory then access them from there. It would work, but it wouldn't scale. I'd prefer to demonstrate an approach that is more scalable and useful out...
TRAIN_PATH = r'E:\datasets\dogsvscats\train'
TEST_PATH = r'E:\datasets\dogsvscats\test1'

# Get the paths of the images
trainval_image_paths = glob.glob(os.path.join(TRAIN_PATH, '*.jpg'))
tests_image_paths = glob.glob(os.path.join(TEST_PATH, '*.jpg'))
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Okay. We have our image paths. Now we need to create our ground truths. Luckily the filename of each file starts with either cat. or dog. indicating which it is. We will assign dogs a class of 1 and cats a class of 0.
# The ground truth classifications are given by the filename having either a 'dog.' or 'cat.' prefix
# Use:
# 0: cat
# 1: dog
trainval_y = [(1 if os.path.basename(p).lower().startswith('dog.') else 0)
              for p in trainval_image_paths]
trainval_y = np.array(trainval_y).astype(np.int32)
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Split into training and validation We use Scikit-Learn StratifiedShuffleSplit for this.
# We only want one split, with 10% of the data for validation
splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=12345)

# Get the training set and validation set sample indices
train_ndx, val_ndx = next(splitter.split(trainval_y, trainval_y))

print('{} training, {} validation'.format(len(train_...
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Define a function for loading a mini-batch of images Given a list of indices into the train_image_paths list we must: - load each one - scale each one to the fixed size that we need - standardise each image (subtract mean, divide by standard deviation)
MODEL_MEAN = np.array([0.485, 0.456, 0.406])
MODEL_STD = np.array([0.229, 0.224, 0.225])

TARGET_SIZE = 64

def img_to_net(img):
    """
    Convert an image from image format; shape (height, width, channel) range [0-1]
    to network format; shape (channel, height, width), standardised by
    mean MODEL_MEAN and st...
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
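The standardisation and layout conversion described above can be sketched in plain NumPy, with `MODEL_MEAN`/`MODEL_STD` as in the cell above; `net_to_img` here is an assumed inverse helper for display (the notebook's real versions may differ in detail):

```python
import numpy as np

MODEL_MEAN = np.array([0.485, 0.456, 0.406])
MODEL_STD = np.array([0.229, 0.224, 0.225])

def img_to_net(img):
    """(H, W, C) float image in [0, 1] -> standardised (C, H, W) float32."""
    img = (img - MODEL_MEAN) / MODEL_STD          # per-channel standardisation
    return img.transpose(2, 0, 1).astype(np.float32)

def net_to_img(x):
    """Inverse of img_to_net: (C, H, W) -> (H, W, C) image clipped to [0, 1]."""
    img = x.transpose(1, 2, 0) * MODEL_STD + MODEL_MEAN
    return np.clip(img, 0.0, 1.0)

# Round-trip a random image to check the pair of conversions.
img = np.random.default_rng(0).random((4, 4, 3))
restored = net_to_img(img_to_net(img))
```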
Show an image to check our code so far:
plt.imshow(net_to_img(load_image(trainval_image_paths[0])))
plt.show()
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Looks okay. Make a BatchUp data source BatchUp can extract mini-batches from data sources that have an array-like interface. We must first define an image accessor that looks like an array. We do this by implementing __len__ and __getitem__ methods:
class ImageAccessor (object):
    def __init__(self, paths):
        """
        Constructor

        paths - the list of paths of the images that we are to access
        """
        self.paths = paths

    def __len__(self):
        """
        The length of this array
        """
        return len(s...
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Now we make ArrayDataSource instances for the training and validation sets. These provide methods for getting mini-batches that we will use for training.
# image accessor
trainval_X = ImageAccessor(trainval_image_paths)

train_ds = data_source.ArrayDataSource([trainval_X, trainval_y], indices=train_ndx)
val_ds = data_source.ArrayDataSource([trainval_X, trainval_y], indices=val_ndx)
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Process mini-batches in background threads We want to do all the image loading in background threads so that the images are ready for the main thread that must feed the GPU with data to work on. BatchUp provides worker pools for this purpose.
# A pool with 4 threads
pool = work_pool.WorkerThreadPool(4)
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Wrap our training and validation data sources so that they generate mini-batches in parallel background threads
train_ds = pool.parallel_data_source(train_ds)
val_ds = pool.parallel_data_source(val_ds)
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Build the network Now we will define a class for the pet classifier network.
class PetClassifier (nn.Module):
    def __init__(self):
        super(PetClassifier, self).__init__()

        # First two convolutional layers: 48 filters, 3x3 convolution, 1 pixel padding
        self.conv1_1 = nn.Conv2d(3, 48, kernel_size=3, padding=1)
        self.conv1_2 = nn.Conv2d(48, 48, kernel_size=3, padding=...
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Set up loss and optimizer
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(pet_net.parameters(), lr=1e-3)
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Train the network Define settings for training:
NUM_EPOCHS = 50
BATCH_SIZE = 128
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
The training loop:
print('Training...')
for epoch_i in range(NUM_EPOCHS):
    t1 = time.time()

    # TRAIN
    pet_net.train()
    train_loss = 0.0
    n_batches = 0

    # Ask train_ds for batches of size `BATCH_SIZE` and shuffled in random order
    for i, (batch_X, batch_y) in enumerate(train_ds.batch_iterator(batch_size=BATCH_SI...
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Apply to some example images from the test set
# Number of samples to try
N_TEST = 15

# Shuffle test sample indices
rng = np.random.RandomState(12345)
test_ndx = rng.permutation(len(tests_image_paths))

# Select first `N_TEST` samples
test_ndx = test_ndx[:N_TEST]

for test_i in test_ndx:
    # Load the image
    X = load_image(tests_image_paths[test_i])
    w...
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
Britefury/deep-learning-tutorial-pydata2016
mit
Prepare the data Download and process data
urlretrieve("http://files.grouplens.org/datasets/movielens/ml-1m.zip", "movielens.zip")
ZipFile("movielens.zip", "r").extractall()

ratings_data = pd.read_csv(
    "ml-1m/ratings.dat",
    sep="::",
    names=["user_id", "movie_id", "rating", "unix_timestamp"],
)
ratings_data["movie_id"] = ratings_data["movie_id"].app...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Create train and eval data splits
random_selection = np.random.rand(len(ratings_data.index)) <= 0.85
train_data = ratings_data[random_selection]
eval_data = ratings_data[~random_selection]

train_data.to_csv("train_data.csv", index=False, sep="|", header=False)
eval_data.to_csv("eval_data.csv", index=False, sep="|", header=False)

print(f"Train data spl...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Define dataset metadata and hyperparameters
csv_header = list(ratings_data.columns)
user_vocabulary = list(ratings_data.user_id.unique())
movie_vocabulary = list(ratings_data.movie_id.unique())
target_feature_name = "rating"
learning_rate = 0.001
batch_size = 128
num_epochs = 3
base_embedding_dim = 64
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Train and evaluate the model
def get_dataset_from_csv(csv_file_path, batch_size=128, shuffle=True):
    return tf.data.experimental.make_csv_dataset(
        csv_file_path,
        batch_size=batch_size,
        column_names=csv_header,
        label_name=target_feature_name,
        num_epochs=1,
        header=False,
        field_delim="|",
        ...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Experiment 1: baseline collaborative filtering model Implement embedding encoder
def embedding_encoder(vocabulary, embedding_dim, num_oov_indices=0, name=None):
    return keras.Sequential(
        [
            StringLookup(
                vocabulary=vocabulary, mask_token=None, num_oov_indices=num_oov_indices
            ),
            layers.Embedding(
                input_dim=len(vocabulary)...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Implement the baseline model
def create_baseline_model():
    # Receive the user as an input.
    user_input = layers.Input(name="user_id", shape=(), dtype=tf.string)

    # Get user embedding.
    user_embedding = embedding_encoder(
        vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, name="user"
    )(user_input)

    # Receive ...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Notice that the number of trainable parameters is 623,744
history = run_experiment(baseline_model)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "eval"], loc="upper left")
plt.show()
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Experiment 2: memory-efficient model Implement Quotient-Remainder embedding as a layer The Quotient-Remainder technique works as follows. For a vocabulary and an embedding size embedding_dim, instead of creating a vocabulary_size X embedding_dim embedding table, we create two num_buckets X embedding_dim embedding ...
class QREmbedding(keras.layers.Layer):
    def __init__(self, vocabulary, embedding_dim, num_buckets, name=None):
        super(QREmbedding, self).__init__(name=name)
        self.num_buckets = num_buckets

        self.index_lookup = StringLookup(
            vocabulary=vocabulary, mask_token=None, num_oov_indices=0
        ...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
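The index arithmetic at the heart of the Quotient-Remainder trick can be illustrated with a tiny NumPy sketch (hypothetical sizes; the technique's papers combine the two lookups e.g. by element-wise product, which is assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embedding_dim, num_buckets = 1000, 8, 32

# Two small tables replace one vocab_size x embedding_dim table.
q_table = rng.normal(size=(vocab_size // num_buckets + 1, embedding_dim))
r_table = rng.normal(size=(num_buckets, embedding_dim))

def qr_embedding(indices):
    q = indices // num_buckets      # quotient index into the first table
    r = indices % num_buckets       # remainder index into the second table
    return q_table[q] * r_table[r]  # combine, e.g. by element-wise product

vecs = qr_embedding(np.array([0, 1, 999]))
```

Every vocabulary index maps to a unique (quotient, remainder) pair, so distinct items get distinct embeddings while the parameter count drops from vocab_size * embedding_dim to roughly 2 * num_buckets * embedding_dim.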
Implement Mixed Dimension embedding as a layer In the mixed dimension embedding technique, we train full-dimension embedding vectors for the frequently queried items, and reduced-dimension embedding vectors for less frequent items, plus a projection weights matrix to bring low dimension embeddings t...
class MDEmbedding(keras.layers.Layer):
    def __init__(
        self, blocks_vocabulary, blocks_embedding_dims, base_embedding_dim, name=None
    ):
        super(MDEmbedding, self).__init__(name=name)
        self.num_blocks = len(blocks_vocabulary)

        # Create vocab to block lookup.
        keys = []
        ...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
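The block structure can be sketched in NumPy (a simplification of the MDEmbedding layer above: blocks are modeled as index ranges rather than vocabulary lists, and all names and sizes are illustrative):

```python
import numpy as np

def md_embedding(index, block_tables, block_projections, block_ranges):
    """Mixed-dimension lookup: each block stores embeddings at its own
    (smaller) dimension; a per-block projection matrix lifts them to the
    shared base dimension."""
    for table, proj, (lo, hi) in zip(block_tables, block_projections, block_ranges):
        if lo <= index < hi:
            return table[index - lo] @ proj
    raise IndexError(index)

base_dim = 8
rng = np.random.default_rng(0)
# Three popularity blocks (hypothetical sizes) with dims 8, 4 and 2.
sizes_dims = [(5, 8), (10, 4), (20, 2)]
tables = [rng.normal(size=(n, d)) for n, d in sizes_dims]
projs = [rng.normal(size=(d, base_dim)) for _, d in sizes_dims]
ranges = [(0, 5), (5, 15), (15, 35)]
vec = md_embedding(7, tables, projs, ranges)
print(vec.shape)  # (8,)
```

Regardless of which block an index falls into, the output is always base_dim wide, so downstream layers are unaffected by the per-block compression.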
Implement the memory-efficient model In this experiment, we are going to use the Quotient-Remainder technique to reduce the size of the user embeddings, and the Mixed Dimension technique to reduce the size of the movie embeddings. While in the paper an alpha-power rule is used to determine the dimensions of the embed...
movie_frequencies = ratings_data["movie_id"].value_counts() movie_frequencies.hist(bins=10)
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
You can see that we can group the movies into three blocks, and assign them 64, 32, and 16 embedding dimensions, respectively. Feel free to experiment with different numbers of blocks and dimensions.
sorted_movie_vocabulary = list(movie_frequencies.keys()) movie_blocks_vocabulary = [ sorted_movie_vocabulary[:400], # high popularity movies block sorted_movie_vocabulary[400:1700], # normal popularity movies block sorted_movie_vocabulary[1700:], # low popularity movies block ] movie_blocks_embedding_d...
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Notice that the number of trainable parameters is 117,968, which is more than 5x less than the number of parameters in the baseline model.
history = run_experiment(memory_efficient_model) plt.plot(history.history["loss"]) plt.plot(history.history["val_loss"]) plt.title("model loss") plt.ylabel("loss") plt.xlabel("epoch") plt.legend(["train", "eval"], loc="upper left") plt.show()
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
keras-team/keras-io
apache-2.0
Convolution: Naive forward pass The core of a convolutional network is the convolution operation. In the file neural_network/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. You don't have to worry too much about efficiency at this point; just write the code in whatev...
x_shape = (2, 3, 4, 4) w_shape = (3, 3, 4, 4) x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape) w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape) b = np.linspace(-0.1, 0.2, num=3) conv_param = {'stride': 2, 'pad': 1} out, _ = conv_forward_naive(x, w, b, conv_param) correct_out = np.arra...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
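One possible shape of the naive forward pass (a sketch under the same (N, C, H, W) conventions, not the assignment's reference solution):

```python
import numpy as np

def conv_forward_naive(x, w, b, conv_param):
    """Naive forward pass for a convolution layer.

    x: input of shape (N, C, H, W)
    w: filters of shape (F, C, HH, WW)
    b: biases of shape (F,)
    conv_param: dict with 'stride' and 'pad'
    """
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    hs, ws = i * stride, j * stride
                    window = xp[n, :, hs:hs + HH, ws:ws + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out, (x, w, b, conv_param)

# Sanity check: a 1x1 filter with stride 1 and no padding is just a
# per-pixel scale-and-shift, so the output must equal 2*x + 0.5 here.
x = np.arange(18, dtype=float).reshape(2, 1, 3, 3)
out, _ = conv_forward_naive(x, np.full((1, 1, 1, 1), 2.0), np.array([0.5]),
                            {'stride': 1, 'pad': 0})
print(np.allclose(out, 2.0 * x + 0.5))  # True
```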
Aside: Image processing via convolutions As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conve...
from scipy.misc import imread, imresize kitten, puppy = imread('../skynet/datasets/kitten.jpg'), imread('../skynet/datasets/puppy.jpg') # kitten is wide, and puppy is already square d = kitten.shape[1] - kitten.shape[0] kitten_cropped = kitten[:, d//2:-d//2, :] img_size = 200 # Make this smaller if it runs too slow...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Convolution: Naive backward pass Implement the backward pass for the convolution operation in the function conv_backward_naive in the file neural_network/layers.py. Again, you don't need to worry too much about computational efficiency. When you are done, run the following to check your backward pass with a numeric gra...
x = np.random.randn(4, 3, 5, 5) w = np.random.randn(2, 3, 3, 3) b = np.random.randn(2,) dout = np.random.randn(4, 2, 5, 5) conv_param = {'stride': 1, 'pad': 1} dx_num = eval_numerical_gradient_array( lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array( lambda w...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
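A sketch of the backward pass (assuming a cache of (x, w, b, conv_param) from the forward pass; again not the reference solution): each output gradient is scattered back onto its input window for dx and accumulated against that window for dw.

```python
import numpy as np

def conv_backward_naive(dout, cache):
    """Naive backward pass; cache is (x, w, b, conv_param) from the forward."""
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    dxp = np.zeros_like(xp)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    hs, ws = i * stride, j * stride
                    window = xp[n, :, hs:hs + HH, ws:ws + WW]
                    dw[f] += window * dout[n, f, i, j]
                    dxp[n, :, hs:hs + HH, ws:ws + WW] += w[f] * dout[n, f, i, j]
    dx = dxp[:, :, pad:pad + H, pad:pad + W]  # strip the padding
    return dx, dw, db

# 1x1-filter check: with out = 2*x we expect dx = 2*dout,
# dw = sum(x * dout) and db = sum(dout).
x = np.arange(18, dtype=float).reshape(2, 1, 3, 3)
cache = (x, np.full((1, 1, 1, 1), 2.0), np.zeros(1), {'stride': 1, 'pad': 0})
dx, dw, db = conv_backward_naive(np.ones((2, 1, 3, 3)), cache)
print(dx[0, 0, 0, 0], dw.item(), db.item())  # 2.0 153.0 18.0
```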
Max pooling: Naive forward Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file neural_network/layers.py. Again, don't worry too much about computational efficiency. Check your implementation by running the following:
x_shape = (2, 3, 4, 4) x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape) pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2} out, _ = max_pool_forward_naive(x, pool_param) correct_out = np.array([[[[-0.26315789, -0.24842105], [-0.20421053, -0.18947368]], ...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
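A minimal sketch of the forward pass (illustrative only; the loops over the output grid can be reduced because the pooling is per-channel):

```python
import numpy as np

def max_pool_forward_naive(x, pool_param):
    """Naive max-pooling over an (N, C, H, W) input."""
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i * stride:i * stride + ph,
                       j * stride:j * stride + pw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max per (n, c) pair
    return out, (x, pool_param)

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
out, _ = max_pool_forward_naive(
    x, {'pool_height': 2, 'pool_width': 2, 'stride': 2})
print(out.reshape(-1))  # the max of each 2x2 window: [ 5.  7. 13. 15.]
```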
Max pooling: Naive backward Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file neural_network/layers.py. You don't need to worry about computational efficiency. Check your implementation with numeric gradient checking by running the following:
x = np.random.randn(3, 2, 8, 8) dout = np.random.randn(3, 2, 4, 4) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} dx_num = eval_numerical_gradient_array( lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout) out, cache = max_pool_forward_naive(x, pool_param) dx = max_pool_backward_naive(dout...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
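The key idea in the backward pass is gradient routing: each upstream gradient flows only to the position that won the max in the forward pass. A sketch (assuming a cache of (x, pool_param); ties send the gradient to every tied position):

```python
import numpy as np

def max_pool_backward_naive(dout, cache):
    """Route each upstream gradient to the argmax position of its window."""
    x, pool_param = cache
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    hs, ws = i * stride, j * stride
                    window = x[n, c, hs:hs + ph, ws:ws + pw]
                    mask = (window == window.max())  # 1 at the argmax
                    dx[n, c, hs:hs + ph, ws:ws + pw] += mask * dout[n, c, i, j]
    return dx

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
cache = (x, {'pool_height': 2, 'pool_width': 2, 'stride': 2})
dx = max_pool_backward_naive(np.ones((1, 1, 2, 2)), cache)
print(dx.sum())  # 4.0 -- one unit of gradient per window, at its max
```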
Fast layers Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file neural_network/fast_layers.py. The fast convolution implementation depends on a Cython extension; to compile it ...
from skynet.neural_network.fast_layers import conv_forward_fast, conv_backward_fast from time import time x = np.random.randn(100, 3, 31, 31) w = np.random.randn(25, 3, 3, 3) b = np.random.randn(25,) dout = np.random.randn(100, 25, 16, 16) conv_param = {'stride': 2, 'pad': 1} t0 = time() out_naive, cache_naive = conv...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Convolutional "sandwich" layers Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file neural_network/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
from skynet.neural_network.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward x = np.random.randn(2, 3, 16, 16) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} ...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Three-layer ConvNet Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network. Open the file neural_network/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug: Sanity check loss After you build a new...
model = ThreeLayerConvNet() N = 50 X = np.random.randn(N, 3, 32, 32) y = np.random.randint(10, size=N) loss, grads = model.loss(X, y) print('Initial loss (no regularization): ', loss) model.reg = 0.5 loss, grads = model.loss(X, y) print('Initial loss (with regularization): ', loss) # Initial loss (no regularization)...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
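The reason the unregularized loss is a useful sanity check: with small random weights the class scores are near zero, so softmax assigns roughly uniform probability 1/C to each class and the cross-entropy loss sits near log(C) ≈ 2.3026 for C = 10. A standalone illustration (not the model's loss function itself):

```python
import numpy as np

def softmax_loss(scores, y):
    """Average cross-entropy loss for raw class scores of shape (N, C)."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # numeric stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

# With tiny random scores, every class is near-equally likely, so the
# loss should land close to log(10) for 10 classes.
rng = np.random.default_rng(0)
loss = softmax_loss(1e-4 * rng.normal(size=(50, 10)),
                    rng.integers(10, size=50))
print(loss)
```

Adding regularization can only increase the loss, which is the second thing the sanity check above verifies.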
Gradient check After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking, you should use a small amount of artificial data and a small number of neurons at each layer.
num_inputs = 2 input_dim = (3, 16, 16) reg = 0.0 num_classes = 10 X = np.random.randn(num_inputs, *input_dim) y = np.random.randint(num_classes, size=num_inputs) model = ThreeLayerConvNet(num_filters=3, filter_size=3, input_dim=input_dim, hidden_dim=7, dtype=np.float...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Overfit small data A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
num_train = 100 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } model = ThreeLayerConvNet(weight_scale=1e-2) solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Train the net By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set: The initial setting with learning_rate equal to 1e-3 didn't work well for me; I had to tune the learning_rate down to 1e-4 to overfit the small dataset.
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001) solver = Solver(model, data, num_epochs=1, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-4, }, verbose=True, print_every=100...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Visualize Filters You can visualize the first-layer convolutional filters from the trained network by running the following:
from skynet.utils.vis_utils import visualize_grid grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1)) plt.imshow(grid.astype('uint8')) plt.axis('off') plt.gcf().set_size_inches(5, 5) plt.show()
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Spatial Batch Normalization We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization." Normally batch-normali...
# Check the training-time forward pass by checking means and variances # of features both before and after spatial batch normalization N, C, H, W = 2, 3, 4, 5 x = 4 * np.random.randn(N, C, H, W) + 10 print('Before spatial batch normalization:') print(' Shape: ', x.shape) print(' Means: ', x.mean(axis=(0, 2, 3))) pr...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
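The reshaping trick at the heart of spatial batch normalization can be sketched directly: fold the N, H and W axes into one batch axis of size N*H*W, normalize each of the C channels over that axis, and fold back. This is a forward-only sketch (train-time statistics only, no running averages):

```python
import numpy as np

def spatial_batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each of the C channels of an (N, C, H, W) input over the
    combined (N, H, W) axes by reusing the vanilla batchnorm recipe."""
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)  # (N*H*W, C)
    mu = x_flat.mean(axis=0)
    var = x_flat.var(axis=0)
    x_hat = (x_flat - mu) / np.sqrt(var + eps)
    out_flat = gamma * x_hat + beta
    return out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)

rng = np.random.default_rng(0)
x = 4 * rng.normal(size=(2, 3, 4, 5)) + 10
out = spatial_batchnorm_forward(x, np.ones(3), np.zeros(3))
# Per-channel means should be ~0 and stds ~1 after normalization.
print(out.mean(axis=(0, 2, 3)))
```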
Spatial batch normalization: backward In the file neural_network/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
N, C, H, W = 2, 3, 4, 5 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) bn_param = {'mode': 'train'} fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Experiment! Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started: Things you should try: Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient Number of filters: Above we used 32 filters. Do more o...
# Train a really good model on CIFAR-10 # FunConvNet print("sanity check") model = AntConvNet(conv_params=[ { 'num_filters': 3, 'filter_size': 3, 'stride': 1, #'pad': (filter_size - 1) / 2 } ], hidden_dims=...
notebooks/ConvolutionalNetworks.ipynb
Alexoner/skynet
mit
Those displacements look pretty large. I wonder how far the system wanders...
x_i = np.array([0.0254846374656213, -0.051270560984526176, 3.328532865089032e-06]) v_i = np.array([-1.4420901467875399e-06, 6.341746857347185e-06, -3.412200633855404e-08]) x_f = x_i + w_No.t[-1]*v_i print(x_f) print(np.linalg.norm(x_f))
GW150914/AdjustCoM.ipynb
moble/MatchedFiltering
mit
That's not very far. I guess it's not a very long simulation...
w_No.t[-1]
GW150914/AdjustCoM.ipynb
moble/MatchedFiltering
mit
Indeed it's pretty short, so the system doesn't get very far. I start to worry when we need to be careful with higher modes, or when the displacements are a few times larger than this.
scri.SpEC.metadata.read_metadata_into_object?
GW150914/AdjustCoM.ipynb
moble/MatchedFiltering
mit
When you first defined x, you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a comp...
# GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- r...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
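The placeholder/feed pattern can be mimicked in plain Python without TensorFlow; this toy sketch (all names illustrative) captures the idea that building the graph records operations while values are only supplied at "run" time via a feed dictionary:

```python
# A minimal stand-in for the placeholder/feed pattern: the "graph" is a
# closure over placeholder names, and "running" it supplies the values.
class Placeholder:
    def __init__(self, name):
        self.name = name

def multiply(a, b):
    # Returns a thunk instead of a value: the graph, not the result.
    return lambda feed: feed[a.name] * feed[b.name]

x = Placeholder("x")
y = Placeholder("y")
op = multiply(x, y)          # nothing is computed yet
print(op({"x": 3, "y": 7}))  # values fed at "run" time -> 21
```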
Expected Output : <table> <tr> <td> **sigmoid(0)** </td> <td> 0.5 </td> </tr> <tr> <td> **sigmoid(12)** </td> <td> 0.999994 </td> </tr> </table> <font color='blue'> To summarize, you now know how to: 1. Create placeholders 2. Specify the computation graph corresponding to operations you want to compute 3. Create...
# GRADED FUNCTION: cost def cost(logits, labels): """ Computes the cost using the sigmoid cross entropy Arguments: logits -- vector containing z, output of the last linear unit (before the final sigmoid activation) labels -- vector of labels y (1 or 0) Note: What we've been calling "...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
Expected Output : <table> <tr> <td> **cost** </td> <td> [ 1.00538719 1.03664088 0.41385433 0.39956614] </td> </tr> </table> 1.4 - Using One Hot encodings Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C i...
# GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. ...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
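The same encoding can be built in one line of NumPy, which makes the shape convention concrete (a sketch of the idea, not the graded tf.one_hot solution; the exercise uses a column-per-example layout of shape (C, m)):

```python
import numpy as np

def one_hot_matrix(labels, C):
    """Build a (C, m) one-hot matrix: entry (i, j) is 1 iff labels[j] == i,
    matching the column-per-example layout used in this tutorial."""
    return np.eye(C)[labels].T

oh = one_hot_matrix(np.array([1, 2, 3, 0, 2, 1]), C=4)
print(oh.shape)  # (4, 6): one column per example, one row per class
```

Each column sums to 1, since every example belongs to exactly one class.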
Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflo...
# GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns:...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
Expected Output: <table> <tr> <td> **X** </td> <td> Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) </td> </tr> <tr> <td> **Y** </td> <td> Tensor("Placeholder_2:0", ...
# GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] ...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
Expected Output: <table> <tr> <td> **W1** </td> <td> < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref > </td> </tr> <tr> <td> **b1** </td> <td> < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref > ...
# GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python d...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
Expected Output: <table> <tr> <td> **Z3** </td> <td> Tensor("Add_2:0", shape=(6, ?), dtype=float32) </td> </tr> </table> You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropaga...
# GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the c...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
Expected Output: <table> <tr> <td> **cost** </td> <td> Tensor("Mean:0", shape=(), dtype=float32) </td> </tr> </table> 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All the backpropagation and t...
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size = 1...
deep-learning/Tensorflow-Tutorial.ipynb
amirziai/learning
mit
As seen in the example, the net.Python() function returns a callable that can be used just like any other operator. In this example, we add a new Python operator to the net with input "x" and output "y". Note that you can save the output of net.Python() and call it multiple times to add multiple Python operators (with poss...
def f_reshape(inputs, outputs): outputs[0].reshape(inputs[0].shape) outputs[0].data[...] = 2 * inputs[0].data workspace.ResetWorkspace() net = core.Net("tutorial") net.Python(f_reshape)(["x"], ["z"]) workspace.FeedBlob("x", np.array([3.])) workspace.RunNetOnce(net) print(workspace.FetchBlob("z"))
caffe2/python/tutorials/Python_Op.ipynb
Yangqing/caffe2
apache-2.0
This example works correctly because the "reshape" method updates the underlying Caffe2 tensor, and a subsequent call to the ".data" property returns a NumPy array that shares memory with that tensor. The last line in "f_reshape" copies data into the shared memory location. There are several additional arguments that net....
def f_workspace(inputs, outputs, workspace): outputs[0].feed(2 * workspace.blobs["x"].fetch()) workspace.ResetWorkspace() net = core.Net("tutorial") net.Python(f_workspace, pass_workspace=True)([], ["y"]) workspace.FeedBlob("x", np.array([3.])) workspace.RunNetOnce(net) print(workspace.FetchBlob("y"))
caffe2/python/tutorials/Python_Op.ipynb
Yangqing/caffe2
apache-2.0
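The memory-sharing behavior that f_reshape relies on can be seen with NumPy alone: a reshaped view and the `[...]` assignment write through to the original buffer instead of rebinding a name. A small demonstration (no Caffe2 required):

```python
import numpy as np

# "view" shares memory with "buf", so writing through view[...] mutates
# the original buffer -- the same mechanism f_reshape uses via ".data".
buf = np.zeros(4)
view = buf.reshape(2, 2)                  # a view, not a copy
view[...] = 2 * np.arange(4).reshape(2, 2)
print(buf)  # [0. 2. 4. 6.] -- the original buffer sees the writes
```

Had f_reshape instead assigned `outputs[0].data = ...`, it would only have rebound a local name and the Caffe2 tensor would be left untouched.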
Gradient Python Operator Another important net.Python() argument is "grad_f" - a Python function for a corresponding gradient operator:
def f(inputs, outputs): outputs[0].reshape(inputs[0].shape) outputs[0].data[...] = inputs[0].data * 2 def grad_f(inputs, outputs): # Ordering of inputs is [fwd inputs, outputs, grad_outputs] grad_output = inputs[2] grad_input = outputs[0] grad_input.reshape(grad_output.shape) ...
caffe2/python/tutorials/Python_Op.ipynb
Yangqing/caffe2
apache-2.0
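The forward/gradient pair above can be checked numerically without Caffe2: for y = 2x, the gradient operator should return 2 times the upstream gradient, and a centered finite difference on f should agree with it. A standalone sketch:

```python
import numpy as np

def f(x):
    """Forward: y = 2 * x (the Python op from the example)."""
    return 2 * x

def grad_f(x, grad_output):
    """Backward: dL/dx = 2 * dL/dy, mirroring the gradient Python op."""
    return 2 * grad_output

# Centered difference (f(x+h) - f(x-h)) / 2h should match grad_f
# evaluated with an upstream gradient of ones.
x, h = np.array([3.0]), 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
analytic = grad_f(x, np.ones_like(x))
print(np.allclose(numeric, analytic))  # True
```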