| Custom C++ and CUDA Extensions |
| ============================== |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
Our C++ file, ``lltm.cpp``, begins as follows:

.. code-block:: cpp

  #include <torch/extension.h>

  #include <iostream>

  torch::Tensor d_sigmoid(torch::Tensor z) {
    auto s = torch::sigmoid(z);
    return (1 - s) * s;
  }
|
|
| ``<torch/extension.h>`` is the one-stop header to include all the necessary PyTorch |
| bits to write C++ extensions. It includes: |
|
|
| - The ATen library, which is our primary API for tensor computation, |
| - `pybind11 <https://github.com/pybind/pybind11>`_, which is how we create Python bindings for our C++ code, |
| - Headers that manage the details of interaction between ATen and pybind11. |
|
|
| The implementation of :func:`d_sigmoid` shows how to use the ATen API. |
| PyTorch's tensor and variable interface is generated automatically from the |
| ATen library, so we can more or less translate our Python implementation 1:1 |
| into C++. Our primary datatype for all computations will be |
| :class:`torch::Tensor`. Its full API can be inspected `here |
| <https://pytorch.org/cppdocs/api/classat_1_1_tensor.html>`_. Notice |
| also that we can include ``<iostream>`` or *any other C or C++ header* -- we have |
| the full power of C++11 at our disposal. |
| |
| Forward Pass |
| ************ |
| |
| Next we can port our entire forward pass to C++: |
| |
| .. code-block:: cpp |
| |
| #include <vector> |
| |
std::vector<torch::Tensor> lltm_forward(
| torch::Tensor input, |
| torch::Tensor weights, |
| torch::Tensor bias, |
| torch::Tensor old_h, |
| torch::Tensor old_cell) { |
| auto X = torch::cat({old_h, input}, /*dim=*/1); |
| |
| auto gate_weights = torch::addmm(bias, X, weights.transpose(0, 1)); |
| auto gates = gate_weights.chunk(3, /*dim=*/1); |
| |
| auto input_gate = torch::sigmoid(gates[0]); |
| auto output_gate = torch::sigmoid(gates[1]); |
| auto candidate_cell = torch::elu(gates[2], /*alpha=*/1.0); |
| |
| auto new_cell = old_cell + candidate_cell * input_gate; |
| auto new_h = torch::tanh(new_cell) * output_gate; |
| |
| return {new_h, |
| new_cell, |
| input_gate, |
| output_gate, |
| candidate_cell, |
| X, |
| gate_weights}; |
| } |
| |
| Backward Pass |
| ************* |
| |
The C++ extension API currently does not provide a way of automatically
generating a backward function for us. As such, we also have to implement the
| backward pass of our LLTM, which computes the derivative of the loss with |
| respect to each input of the forward pass. Ultimately, we will plop both the |
| forward and backward function into a :class:`torch.autograd.Function` to create |
| a nice Python binding. The backward function is slightly more involved, so |
| we'll not dig deeper into the code (if you are interested, `Alex Graves' thesis |
| <https://www.cs.toronto.edu/~graves/phd.pdf>`_ is a good read for more |
| information on this): |
| |
| .. code-block:: cpp |
| |
| // tanh'(z) = 1 - tanh^2(z) |
| torch::Tensor d_tanh(torch::Tensor z) { |
| return 1 - z.tanh().pow(2); |
| } |
|
|
  // elu'(z) = relu'(z) + { alpha * exp(z) if (alpha * (exp(z) - 1)) < 0, else 0}
  torch::Tensor d_elu(torch::Tensor z, torch::Scalar alpha = 1.0) {
    auto e = z.exp();
    auto mask = (alpha * (e - 1)) < 0;
    return (z > 0).type_as(z) + mask.type_as(z) * (alpha * e);
  }
|
|
| std::vector<torch::Tensor> lltm_backward( |
| torch::Tensor grad_h, |
| torch::Tensor grad_cell, |
| torch::Tensor new_cell, |
| torch::Tensor input_gate, |
| torch::Tensor output_gate, |
| torch::Tensor candidate_cell, |
| torch::Tensor X, |
| torch::Tensor gate_weights, |
| torch::Tensor weights) { |
    auto d_output_gate = torch::tanh(new_cell) * grad_h;
    auto d_tanh_new_cell = output_gate * grad_h;
    auto d_new_cell = d_tanh(new_cell) * d_tanh_new_cell + grad_cell;

    auto d_old_cell = d_new_cell;
    auto d_candidate_cell = input_gate * d_new_cell;
    auto d_input_gate = candidate_cell * d_new_cell;

    auto gates = gate_weights.chunk(3, /*dim=*/1);
    d_input_gate *= d_sigmoid(gates[0]);
    d_output_gate *= d_sigmoid(gates[1]);
    d_candidate_cell *= d_elu(gates[2]);
|
|
| auto d_gates = |
| torch::cat({d_input_gate, d_output_gate, d_candidate_cell}, 1); |
|
|
| auto d_weights = d_gates.t().mm(X); |
| auto d_bias = d_gates.sum(0, true); |
|
|
| auto d_X = d_gates.mm(weights); |
| const auto state_size = grad_h.size(1); |
| auto d_old_h = d_X.slice(1, 0, state_size); |
| auto d_input = d_X.slice(1, state_size); |
|
|
| return {d_old_h, d_input, d_weights, d_bias, d_old_cell}; |
| } |
|
|
| Binding to Python |
| ^^^^^^^^^^^^^^^^^ |
|
|
| Once you have your operation written in C++ and ATen, you can use pybind11 to |
| bind your C++ functions or classes into Python in a very simple manner. |
Questions or issues you have about this part of PyTorch C++ extensions will
largely be addressed by the `pybind11 documentation
<https://pybind11.readthedocs.io/en/master/>`_.
|
|
For our extension, the necessary binding code spans only four lines:
|
|
| .. code-block:: cpp |
|
|
| PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { |
| m.def("forward", &lltm_forward, "LLTM forward"); |
| m.def("backward", &lltm_backward, "LLTM backward"); |
| } |
|
|
One bit to note here is the macro ``TORCH_EXTENSION_NAME``. The torch extension
build will define it as the name you give your extension in the ``setup.py``
script (see the sketch below). In this case, the value of
``TORCH_EXTENSION_NAME`` would be ``"lltm_cpp"``. This avoids having to
maintain the name of the extension in two places (the build script and your
C++ code), as a mismatch between the two can lead to nasty and hard-to-track
issues.
|
|
| Using Your Extension |
| ^^^^^^^^^^^^^^^^^^^^ |
|
|
| We are now set to import our extension in PyTorch. At this point, your directory |
| structure could look something like this:: |
|
|
| pytorch/ |
| lltm-extension/ |
| lltm.cpp |
| setup.py |
|
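For reference, a minimal ``setup.py`` for this extension could look like the
following sketch, using the :func:`CppExtension` and :class:`BuildExtension`
helpers from :mod:`torch.utils.cpp_extension` (the module name ``lltm_cpp``
matches the build log below)::

  from setuptools import setup
  from torch.utils.cpp_extension import BuildExtension, CppExtension

  setup(
      name='lltm_cpp',
      ext_modules=[
          CppExtension('lltm_cpp', ['lltm.cpp']),
      ],
      cmdclass={
          'build_ext': BuildExtension
      })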
|
| Now, run ``python setup.py install`` to build and install your extension. This |
| should look something like this:: |
|
|
| running install |
| running bdist_egg |
| running egg_info |
| creating lltm_cpp.egg-info |
| writing lltm_cpp.egg-info/PKG-INFO |
| writing dependency_links to lltm_cpp.egg-info/dependency_links.txt |
| writing top-level names to lltm_cpp.egg-info/top_level.txt |
| writing manifest file 'lltm_cpp.egg-info/SOURCES.txt' |
| reading manifest file 'lltm_cpp.egg-info/SOURCES.txt' |
| writing manifest file 'lltm_cpp.egg-info/SOURCES.txt' |
| installing library code to build/bdist.linux-x86_64/egg |
| running install_lib |
| running build_ext |
| building 'lltm_cpp' extension |
| creating build |
| creating build/temp.linux-x86_64-3.7 |
| gcc -pthread -B ~/local/miniconda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I~/local/miniconda/lib/python3.7/site-packages/torch/include -I~/local/miniconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I~/local/miniconda/lib/python3.7/site-packages/torch/include/TH -I~/local/miniconda/lib/python3.7/site-packages/torch/include/THC -I~/local/miniconda/include/python3.7m -c lltm.cpp -o build/temp.linux-x86_64-3.7/lltm.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=lltm_cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++11 |
| cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ |
| creating build/lib.linux-x86_64-3.7 |
| g++ -pthread -shared -B ~/local/miniconda/compiler_compat -L~/local/miniconda/lib -Wl,-rpath=~/local/miniconda/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.7/lltm.o -o build/lib.linux-x86_64-3.7/lltm_cpp.cpython-37m-x86_64-linux-gnu.so |
| creating build/bdist.linux-x86_64 |
| creating build/bdist.linux-x86_64/egg |
| copying build/lib.linux-x86_64-3.7/lltm_cpp.cpython-37m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg |
| creating stub loader for lltm_cpp.cpython-37m-x86_64-linux-gnu.so |
| byte-compiling build/bdist.linux-x86_64/egg/lltm_cpp.py to lltm_cpp.cpython-37.pyc |
| creating build/bdist.linux-x86_64/egg/EGG-INFO |
| copying lltm_cpp.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO |
| copying lltm_cpp.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO |
| copying lltm_cpp.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO |
| copying lltm_cpp.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO |
| writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt |
| zip_safe flag not set; analyzing archive contents... |
| __pycache__.lltm_cpp.cpython-37: module references __file__ |
| creating 'dist/lltm_cpp-0.0.0-py3.7-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it |
| removing 'build/bdist.linux-x86_64/egg' (and everything under it) |
| Processing lltm_cpp-0.0.0-py3.7-linux-x86_64.egg |
| removing '~/local/miniconda/lib/python3.7/site-packages/lltm_cpp-0.0.0-py3.7-linux-x86_64.egg' (and everything under it) |
| creating ~/local/miniconda/lib/python3.7/site-packages/lltm_cpp-0.0.0-py3.7-linux-x86_64.egg |
| Extracting lltm_cpp-0.0.0-py3.7-linux-x86_64.egg to ~/local/miniconda/lib/python3.7/site-packages |
| lltm-cpp 0.0.0 is already the active version in easy-install.pth |
|
|
| Installed ~/local/miniconda/lib/python3.7/site-packages/lltm_cpp-0.0.0-py3.7-linux-x86_64.egg |
| Processing dependencies for lltm-cpp==0.0.0 |
| Finished processing dependencies for lltm-cpp==0.0.0 |
|
|
|
|
A small note on compilers: Due to ABI versioning issues, the compiler you use to
build your C++ extension must be *ABI-compatible* with the compiler PyTorch was
built with. In practice, this means using GCC 4.9 or newer on Linux; older
compilers use an incompatible C++ ABI and can produce extensions that
misbehave at runtime.
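Once the extension is built and installed, we can import it and, as promised
earlier, plop the forward and backward functions into a
:class:`torch.autograd.Function`. A minimal sketch, assuming the module built
above is importable as ``lltm_cpp`` and using the argument and return orders
of ``lltm_forward`` and ``lltm_backward`` shown earlier::

  import torch
  import lltm_cpp

  class LLTMFunction(torch.autograd.Function):
      @staticmethod
      def forward(ctx, input, weights, bias, old_h, old_cell):
          outputs = lltm_cpp.forward(input, weights, bias, old_h, old_cell)
          new_h, new_cell = outputs[:2]
          # Stash everything the backward pass needs (including the weights).
          ctx.save_for_backward(*(outputs[1:] + [weights]))
          return new_h, new_cell

      @staticmethod
      def backward(ctx, grad_h, grad_cell):
          outputs = lltm_cpp.backward(
              grad_h.contiguous(), grad_cell.contiguous(), *ctx.saved_tensors)
          d_old_h, d_input, d_weights, d_bias, d_old_cell = outputs
          # Gradients must be returned in the same order as forward's inputs.
          return d_input, d_weights, d_bias, d_old_h, d_old_cell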
| |
| |
| |
| |
| |
| |
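Because we wrote the backward pass by hand, it is worth checking it against
autograd's numerical gradients. A small sketch using
:func:`torch.autograd.gradcheck` (the shapes here are illustrative;
``gradcheck`` wants double precision)::

  import torch

  batch_size, input_features, state_size = 3, 17, 5
  kwargs = {'dtype': torch.float64, 'requires_grad': True}
  variables = [
      torch.randn(batch_size, input_features, **kwargs),                   # input
      torch.randn(3 * state_size, input_features + state_size, **kwargs),  # weights
      torch.randn(3 * state_size, **kwargs),                               # bias
      torch.randn(batch_size, state_size, **kwargs),                       # old_h
      torch.randn(batch_size, state_size, **kwargs),                       # old_cell
  ]

  if torch.autograd.gradcheck(LLTMFunction.apply, variables):
      print('gradcheck passed')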
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
Writing a Mixed C++/CUDA extension
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For the mixed C++/CUDA version of our extension, the C++ file
(``lltm_cuda.cpp``) declares the functions defined in the CUDA file, checks
its inputs, and forwards the calls; pybind11 binds it to Python just as
before. The file begins with the declarations of the CUDA functions:

.. code-block:: cpp

  #include <torch/extension.h>

  #include <vector>

  // CUDA forward declarations

  std::vector<torch::Tensor> lltm_cuda_forward(
      torch::Tensor input,
      torch::Tensor weights,
      torch::Tensor bias,
      torch::Tensor old_h,
      torch::Tensor old_cell);

| std::vector<torch::Tensor> lltm_cuda_backward( |
| torch::Tensor grad_h, |
| torch::Tensor grad_cell, |
| torch::Tensor new_cell, |
| torch::Tensor input_gate, |
| torch::Tensor output_gate, |
| torch::Tensor candidate_cell, |
| torch::Tensor X, |
| torch::Tensor gate_weights, |
| torch::Tensor weights); |
|
|
| // C++ interface |
|
|
| #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") |
| #define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") |
| #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) |
|
|
| std::vector<torch::Tensor> lltm_forward( |
| torch::Tensor input, |
| torch::Tensor weights, |
| torch::Tensor bias, |
| torch::Tensor old_h, |
| torch::Tensor old_cell) { |
| CHECK_INPUT(input); |
| CHECK_INPUT(weights); |
| CHECK_INPUT(bias); |
| CHECK_INPUT(old_h); |
| CHECK_INPUT(old_cell); |
|
|
| return lltm_cuda_forward(input, weights, bias, old_h, old_cell); |
| } |
|
|
| std::vector<torch::Tensor> lltm_backward( |
| torch::Tensor grad_h, |
| torch::Tensor grad_cell, |
| torch::Tensor new_cell, |
| torch::Tensor input_gate, |
| torch::Tensor output_gate, |
| torch::Tensor candidate_cell, |
| torch::Tensor X, |
| torch::Tensor gate_weights, |
| torch::Tensor weights) { |
  CHECK_INPUT(grad_h);
  CHECK_INPUT(grad_cell);
  CHECK_INPUT(new_cell);
  CHECK_INPUT(input_gate);
| CHECK_INPUT(output_gate); |
| CHECK_INPUT(candidate_cell); |
| CHECK_INPUT(X); |
| CHECK_INPUT(gate_weights); |
| CHECK_INPUT(weights); |
|
|
| return lltm_cuda_backward( |
| grad_h, |
| grad_cell, |
| new_cell, |
| input_gate, |
| output_gate, |
| candidate_cell, |
| X, |
| gate_weights, |
| weights); |
| } |
|
|
| PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { |
| m.def("forward", &lltm_forward, "LLTM forward (CUDA)"); |
| m.def("backward", &lltm_backward, "LLTM backward (CUDA)"); |
| } |
|
|
As you can see, it is largely boilerplate: checks and forwarding to functions
that we'll define in the CUDA file. We'll name this file
``lltm_cuda_kernel.cu`` (note the ``.cu`` extension!). NVCC can reasonably
compile C++11, so we still have ATen and the C++ standard library available
to us (but not ``torch.h``). Note that :mod:`setuptools` cannot handle files
| with the same name but different extensions, so if you use the ``setup.py`` |
| method instead of the JIT method, you must give your CUDA file a different name |
| than your C++ file (for the JIT method, ``lltm.cpp`` and ``lltm.cu`` would work |
| fine). Let's take a small peek at what this file will look like: |
| |
| .. code-block:: cpp |
| |
| #include <torch/extension.h> |
| |
| #include <cuda.h> |
| #include <cuda_runtime.h> |
| |
| #include <vector> |
| |
| template <typename scalar_t> |
| __device__ __forceinline__ scalar_t sigmoid(scalar_t z) { |
| return 1.0 / (1.0 + exp(-z)); |
| } |
| |
| Here we see the headers I just described, as well as the fact that we are using |
| CUDA-specific declarations like ``__device__`` and ``__forceinline__`` and |
| functions like ``exp``. Let's continue with a few more helper functions that |
| we'll need: |
| |
| .. code-block:: cpp |
| |
| template <typename scalar_t> |
| __device__ __forceinline__ scalar_t d_sigmoid(scalar_t z) { |
| const auto s = sigmoid(z); |
| return (1.0 - s) * s; |
| } |
| |
| template <typename scalar_t> |
| __device__ __forceinline__ scalar_t d_tanh(scalar_t z) { |
| const auto t = tanh(z); |
| return 1 - (t * t); |
| } |
| |
| template <typename scalar_t> |
| __device__ __forceinline__ scalar_t elu(scalar_t z, scalar_t alpha = 1.0) { |
| return fmax(0.0, z) + fmin(0.0, alpha * (exp(z) - 1.0)); |
| } |
| |
| template <typename scalar_t> |
| __device__ __forceinline__ scalar_t d_elu(scalar_t z, scalar_t alpha = 1.0) { |
| const auto e = exp(z); |
| const auto d_relu = z < 0.0 ? 0.0 : 1.0; |
| return d_relu + (((alpha * (e - 1.0)) < 0.0) ? (alpha * e) : 0.0); |
| } |
| |
| To now actually implement a function, we'll again need two things: one function |
| that performs operations we don't wish to explicitly write by hand and calls |
| into CUDA kernels, and then the actual CUDA kernel for the parts we want to |
| speed up. For the forward pass, the first function should look like this: |
| |
| .. code-block:: cpp |
| |
| std::vector<torch::Tensor> lltm_cuda_forward( |
| torch::Tensor input, |
| torch::Tensor weights, |
| torch::Tensor bias, |
| torch::Tensor old_h, |
| torch::Tensor old_cell) { |
| auto X = torch::cat({old_h, input}, /*dim=*/1); |
| auto gates = torch::addmm(bias, X, weights.transpose(0, 1)); |
| |
| const auto batch_size = old_cell.size(0); |
| const auto state_size = old_cell.size(1); |
| |
| auto new_h = torch::zeros_like(old_cell); |
| auto new_cell = torch::zeros_like(old_cell); |
| auto input_gate = torch::zeros_like(old_cell); |
| auto output_gate = torch::zeros_like(old_cell); |
| auto candidate_cell = torch::zeros_like(old_cell); |
| |
| const int threads = 1024; |
| const dim3 blocks((state_size + threads - 1) / threads, batch_size); |
| |
| AT_DISPATCH_FLOATING_TYPES(gates.type(), "lltm_forward_cuda", ([&] { |
| lltm_cuda_forward_kernel<scalar_t><<<blocks, threads>>>( |
| gates.data<scalar_t>(), |
| old_cell.data<scalar_t>(), |
| new_h.data<scalar_t>(), |
| new_cell.data<scalar_t>(), |
| input_gate.data<scalar_t>(), |
| output_gate.data<scalar_t>(), |
| candidate_cell.data<scalar_t>(), |
| state_size); |
| })); |
| |
| return {new_h, new_cell, input_gate, output_gate, candidate_cell, X, gates}; |
| } |
| |
| The main point of interest here is the ``AT_DISPATCH_FLOATING_TYPES`` macro and |
| the kernel launch (indicated by the ``<<<...>>>``). While ATen abstracts away |
| the device and datatype of the tensors we deal with, a tensor will, at runtime, |
| still be backed by memory of a concrete type on a concrete device. As such, we |
| need a way of determining at runtime what type a tensor is and then selectively |
| call functions with the corresponding correct type signature. Done manually, |
| this would (conceptually) look something like this: |
| |
| .. code-block:: cpp |
| |
| switch (tensor.type().scalarType()) { |
| case torch::ScalarType::Double: |
| return function<double>(tensor.data<double>()); |
| case torch::ScalarType::Float: |
| return function<float>(tensor.data<float>()); |
| ... |
| } |
| |
| The purpose of ``AT_DISPATCH_FLOATING_TYPES`` is to take care of this dispatch |
| for us. It takes a type (``gates.type()`` in our case), a name (for error |
| messages) and a lambda function. Inside this lambda function, the type alias |
| ``scalar_t`` is available and is defined as the type that the tensor actually |
| is at runtime in that context. As such, if we have a template function (which |
| our CUDA kernel will be), we can instantiate it with this ``scalar_t`` alias, |
| and the correct function will be called. In this case, we also want to retrieve |
the data pointers of the tensors as pointers of that ``scalar_t`` type. If you
want to dispatch over all types and not just floating point types (``Float``
and ``Double``), you can use ``AT_DISPATCH_ALL_TYPES``.
| |
| Note that we perform some operations with plain ATen. These operations will |
| still run on the GPU, but using ATen's default implementations. This makes |
| sense, because ATen will use highly optimized routines for things like matrix |
| multiplies (e.g. ``addmm``) or convolutions which would be much harder to |
| implement and improve ourselves. |
|
|
| As for the kernel launch itself, we are here specifying that each CUDA block |
| will have 1024 threads, and that the entire GPU grid is split into as many |
| blocks of ``1 x 1024`` threads as are required to fill our matrices with one |
thread per component. For example, if our state size were 2048 and our batch
size 4, we'd launch a total of ``4 x 2 = 8`` blocks with 1024 threads each. If
| you've never heard of CUDA "blocks" or "grids" before, an `introductory read |
| about CUDA <https://devblogs.nvidia.com/even-easier-introduction-cuda>`_ may |
| help. |
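To make the grid arithmetic concrete, here is the same computation spelled out
in a few lines of Python (purely illustrative, using the numbers from the
example above)::

  threads = 1024
  state_size, batch_size = 2048, 4

  # Ceiling division: enough blocks along x to cover every state column.
  blocks_x = (state_size + threads - 1) // threads   # -> 2
  total_blocks = blocks_x * batch_size               # grid of 2 x 4 -> 8 blocks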
|
|
| The actual CUDA kernel is fairly simple (if you've ever programmed GPUs before): |
| |
| .. code-block:: cpp |
| |
| template <typename scalar_t> |
| __global__ void lltm_cuda_forward_kernel( |
| const scalar_t* __restrict__ gates, |
| const scalar_t* __restrict__ old_cell, |
| scalar_t* __restrict__ new_h, |
| scalar_t* __restrict__ new_cell, |
| scalar_t* __restrict__ input_gate, |
| scalar_t* __restrict__ output_gate, |
| scalar_t* __restrict__ candidate_cell, |
| size_t state_size) { |
| const int column = blockIdx.x * blockDim.x + threadIdx.x; |
| const int index = blockIdx.y * state_size + column; |
| const int gates_row = blockIdx.y * (state_size * 3); |
| if (column < state_size) { |
| input_gate[index] = sigmoid(gates[gates_row + column]); |
| output_gate[index] = sigmoid(gates[gates_row + state_size + column]); |
| candidate_cell[index] = elu(gates[gates_row + 2 * state_size + column]); |
| new_cell[index] = |
| old_cell[index] + candidate_cell[index] * input_gate[index]; |
| new_h[index] = tanh(new_cell[index]) * output_gate[index]; |
| } |
| } |
| |
| What's primarily interesting here is that we are able to compute all of these |
| pointwise operations entirely in parallel for each individual component in our |
| gate matrices. If you imagine having to do this with a giant ``for`` loop over |
| a million elements in serial, you can see why this would be much faster. |
|
|
| Using accessors |
| ^^^^^^^^^^^^^^^ |
|
|
You can see in the CUDA kernel that we work directly on pointers of the right
type. Indeed, working directly with high-level, type-agnostic tensors inside
CUDA kernels would be very inefficient.

However, this comes at the cost of ease of use and readability, especially for
high-dimensional data. In our example, we know that the contiguous ``gates``
tensor has 3 dimensions:
|
|
1. batch, size of ``batch_size`` and stride of ``3*state_size``
2. row, size of ``3`` and stride of ``state_size``
3. index, size of ``state_size`` and stride of ``1``
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
How do we access the element ``gates[n][row][column]`` inside a kernel, then?
With raw pointers you need the strides and some arithmetic:
``gates.data<scalar_t>()[n*3*state_size + row*state_size + column]``. Besides
being verbose, this requires the strides to be passed into the kernel
explicitly. For this purpose ATen provides *accessors*, which are created with
a single dynamic check that the Tensor has the expected type and number of
dimensions. For example:

.. code-block:: cpp

  torch::Tensor foo = torch::rand({12, 12});

  // assert foo is 2-dimensional and holds floats.
  auto foo_a = foo.accessor<float,2>();
  float trace = 0;

  for(int i = 0; i < foo_a.size(0); i++) {
    // use the accessor foo_a to get tensor data.
    trace += foo_a[i][i];
  }
|
|
Accessor objects have a relatively high-level interface, with ``.size()`` and
``.stride()`` methods and multi-dimensional indexing. The ``.accessor<>``
interface is designed to access data efficiently on CPU tensors. The
equivalents for CUDA tensors are ``packed_accessor64<>`` and
``packed_accessor32<>``, which produce Packed Accessors with 64-bit or 32-bit
integer indexing.

The fundamental difference from an Accessor is that a Packed Accessor copies
its size and stride data inside its structure instead of pointing to it. This
allows us to pass it to a CUDA kernel function and use its interface inside
the kernel.
|
|
| We can design a function that takes Packed Accessors instead of pointers. |
|
|
| .. code-block:: cpp |
|
|
| __global__ void lltm_cuda_forward_kernel( |
| const torch::PackedTensorAccessor32<scalar_t,3,torch::RestrictPtrTraits> gates, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> old_cell, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> new_h, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> new_cell, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> input_gate, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> output_gate, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> candidate_cell) |
|
|
Let's decompose the template used here. The first two arguments, ``scalar_t``
and ``2``, are the same as for the regular Accessor. The argument
``torch::RestrictPtrTraits`` indicates that the ``__restrict__`` keyword must
be used. Note also that we've used the ``PackedAccessor32`` variant, which
stores the sizes and strides as ``int32_t``. This is important because using
the 64-bit variant (``PackedAccessor64``) can make the kernel slower.
|
|
| The function declaration becomes |
|
|
| .. code-block:: cpp |
|
|
| template <typename scalar_t> |
| __global__ void lltm_cuda_forward_kernel( |
| const torch::PackedTensorAccessor32<scalar_t,3,torch::RestrictPtrTraits> gates, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> old_cell, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> new_h, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> new_cell, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> input_gate, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> output_gate, |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> candidate_cell) { |
    //batch index
    const int n = blockIdx.y;
    // column index
    const int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c < gates.size(2)){
      input_gate[n][c] = sigmoid(gates[n][0][c]);
      output_gate[n][c] = sigmoid(gates[n][1][c]);
      candidate_cell[n][c] = elu(gates[n][2][c]);
      new_cell[n][c] =
          old_cell[n][c] + candidate_cell[n][c] * input_gate[n][c];
      new_h[n][c] = tanh(new_cell[n][c]) * output_gate[n][c];
    }
| } |
|
|
| The implementation is much more readable! This function is then called by |
| creating Packed Accessors with the ``.packed_accessor32<>`` method within the |
| host function. |
|
|
| .. code-block:: cpp |
|
|
| std::vector<torch::Tensor> lltm_cuda_forward( |
| torch::Tensor input, |
| torch::Tensor weights, |
| torch::Tensor bias, |
| torch::Tensor old_h, |
| torch::Tensor old_cell) { |
| auto X = torch::cat({old_h, input}, 1); |
| auto gate_weights = torch::addmm(bias, X, weights.transpose(0, 1)); |
|
|
| const auto batch_size = old_cell.size(0); |
| const auto state_size = old_cell.size(1); |
|
|
| auto gates = gate_weights.reshape({batch_size, 3, state_size}); |
| auto new_h = torch::zeros_like(old_cell); |
| auto new_cell = torch::zeros_like(old_cell); |
| auto input_gate = torch::zeros_like(old_cell); |
| auto output_gate = torch::zeros_like(old_cell); |
| auto candidate_cell = torch::zeros_like(old_cell); |
|
|
| const int threads = 1024; |
| const dim3 blocks((state_size + threads - 1) / threads, batch_size); |
|
|
| AT_DISPATCH_FLOATING_TYPES(gates.type(), "lltm_forward_cuda", ([&] { |
| lltm_cuda_forward_kernel<scalar_t><<<blocks, threads>>>( |
| gates.packed_accessor32<scalar_t,3,torch::RestrictPtrTraits>(), |
| old_cell.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| new_h.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| new_cell.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| input_gate.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| output_gate.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| candidate_cell.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>()); |
| })); |
|
|
| return {new_h, new_cell, input_gate, output_gate, candidate_cell, X, gates}; |
| } |
|
|
The backward pass follows much the same pattern, and I won't elaborate further
on it:
| |
| .. code-block:: cpp |
| |
| template <typename scalar_t> |
| __global__ void lltm_cuda_backward_kernel( |
| torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> d_old_cell, |
| torch::PackedTensorAccessor32<scalar_t,3,torch::RestrictPtrTraits> d_gates, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> grad_h, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> grad_cell, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> new_cell, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> input_gate, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> output_gate, |
| const torch::PackedTensorAccessor32<scalar_t,2,torch::RestrictPtrTraits> candidate_cell, |
| const torch::PackedTensorAccessor32<scalar_t,3,torch::RestrictPtrTraits> gate_weights) { |
| //batch index |
| const int n = blockIdx.y; |
| // column index |
| const int c = blockIdx.x * blockDim.x + threadIdx.x; |
| if (c < d_gates.size(2)){ |
| const auto d_output_gate = tanh(new_cell[n][c]) * grad_h[n][c]; |
| const auto d_tanh_new_cell = output_gate[n][c] * grad_h[n][c]; |
| const auto d_new_cell = |
| d_tanh(new_cell[n][c]) * d_tanh_new_cell + grad_cell[n][c]; |
| |
| |
| d_old_cell[n][c] = d_new_cell; |
| const auto d_candidate_cell = input_gate[n][c] * d_new_cell; |
| const auto d_input_gate = candidate_cell[n][c] * d_new_cell; |
| |
| d_gates[n][0][c] = |
| d_input_gate * d_sigmoid(gate_weights[n][0][c]); |
| d_gates[n][1][c] = |
| d_output_gate * d_sigmoid(gate_weights[n][1][c]); |
| d_gates[n][2][c] = |
| d_candidate_cell * d_elu(gate_weights[n][2][c]); |
| } |
| } |
| |
| std::vector<torch::Tensor> lltm_cuda_backward( |
| torch::Tensor grad_h, |
| torch::Tensor grad_cell, |
| torch::Tensor new_cell, |
| torch::Tensor input_gate, |
| torch::Tensor output_gate, |
| torch::Tensor candidate_cell, |
| torch::Tensor X, |
| torch::Tensor gates, |
| torch::Tensor weights) { |
| auto d_old_cell = torch::zeros_like(new_cell); |
| auto d_gates = torch::zeros_like(gates); |
| |
| const auto batch_size = new_cell.size(0); |
| const auto state_size = new_cell.size(1); |
| |
| const int threads = 1024; |
| const dim3 blocks((state_size + threads - 1) / threads, batch_size); |
| |
  AT_DISPATCH_FLOATING_TYPES(X.type(), "lltm_backward_cuda", ([&] {
| lltm_cuda_backward_kernel<scalar_t><<<blocks, threads>>>( |
| d_old_cell.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| d_gates.packed_accessor32<scalar_t,3,torch::RestrictPtrTraits>(), |
| grad_h.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| grad_cell.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| new_cell.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| input_gate.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| output_gate.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| candidate_cell.packed_accessor32<scalar_t,2,torch::RestrictPtrTraits>(), |
| gates.packed_accessor32<scalar_t,3,torch::RestrictPtrTraits>()); |
| })); |
| |
| auto d_gate_weights = d_gates.reshape({batch_size, 3*state_size}); |
| auto d_weights = d_gate_weights.t().mm(X); |
| auto d_bias = d_gate_weights.sum(/*dim=*/0, /*keepdim=*/true); |
| |
| auto d_X = d_gate_weights.mm(weights); |
| auto d_old_h = d_X.slice(/*dim=*/1, 0, state_size); |
| auto d_input = d_X.slice(/*dim=*/1, state_size); |
| |
| return {d_old_h, d_input, d_weights, d_bias, d_old_cell, d_gates}; |
| } |
| |
| |
| Integrating a C++/CUDA Operation with PyTorch |
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| |
| Integration of our CUDA-enabled op with PyTorch is again very straightforward. |
| If you want to write a ``setup.py`` script, it could look like this:: |
| |
| from setuptools import setup |
| from torch.utils.cpp_extension import BuildExtension, CUDAExtension |
| |
| setup( |
| name='lltm', |
| ext_modules=[ |
| CUDAExtension('lltm_cuda', [ |
| 'lltm_cuda.cpp', |
| 'lltm_cuda_kernel.cu', |
| ]) |
| ], |
| cmdclass={ |
| 'build_ext': BuildExtension |
| }) |
| |
| Instead of :func:`CppExtension`, we now use :func:`CUDAExtension`. We can just |
| specify the ``.cu`` file along with the ``.cpp`` files -- the library takes |
| care of all the hassle this entails for you. The JIT mechanism is even |
| simpler:: |
| |
| from torch.utils.cpp_extension import load |
| |
| lltm = load(name='lltm', sources=['lltm_cuda.cpp', 'lltm_cuda_kernel.cu']) |
| |
| Performance Comparison |
| ********************** |
| |
Our hope was that parallelizing and fusing the pointwise operations of our code
with CUDA would improve the performance of our LLTM. Let's see if that holds
true by running the benchmark listed earlier (a sketch of it appears after the
numbers below). Our fastest version so far was the CUDA-based C++ code::
|
|
| Forward: 149.802 us | Backward 393.458 us |
|
|
|
|
| And now with our custom CUDA kernel:: |
|
|
| Forward: 129.431 us | Backward 304.641 us |
|
|
| More performance increases! |
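For completeness, timings like these can be collected with a loop along the
following lines (a sketch: it assumes the Python ``LLTM`` module wrapper from
earlier in the tutorial, and the ``torch.cuda.synchronize()`` calls are what
make the GPU timings meaningful)::

  import time
  import torch

  batch_size, input_features, state_size = 16, 32, 128

  X = torch.randn(batch_size, input_features, device='cuda')
  h = torch.randn(batch_size, state_size, device='cuda')
  C = torch.randn(batch_size, state_size, device='cuda')

  rnn = LLTM(input_features, state_size).to('cuda')

  forward = backward = 0
  for _ in range(100000):
      start = time.time()
      new_h, new_C = rnn(X, (h, C))
      torch.cuda.synchronize()
      forward += time.time() - start

      start = time.time()
      (new_h.sum() + new_C.sum()).backward()
      torch.cuda.synchronize()
      backward += time.time() - start

  print('Forward: {:.3f} us | Backward {:.3f} us'.format(
      forward * 1e6 / 1e5, backward * 1e6 / 1e5))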
|
|
| Conclusion |
| ---------- |
|
|
You should now be equipped with a good overview of PyTorch's C++ extension
mechanism as well as motivation for using it. You can find the code
| examples displayed in this note `here |
| <https://github.com/pytorch/extension-cpp>`_. If you have questions, please use |
| `the forums <https://discuss.pytorch.org>`_. Also be sure to check our `FAQ |
| <https://pytorch.org/cppdocs/notes/faq.html>`_ in case you run into any issues. |
| |