Loading a TorchScript Model in C++
==================================
|
|
As its name suggests, the primary interface to PyTorch is the Python
programming language. While Python is a suitable and preferred language for
many scenarios requiring dynamism and ease of iteration, there are equally many
situations where precisely these properties of Python are unfavorable. One
environment in which the latter often applies is production, the land of low
latencies and strict deployment requirements, where C++ is very often the
language of choice.
Step 3: Loading Your Script Module in C++
-----------------------------------------

To load a serialized ``ScriptModule`` in C++, the application depends on the
PyTorch C++ API, also known as *LibTorch*. A minimal application that loads a
module from a file path given on the command line looks like this:

.. code-block:: cpp

  #include <torch/script.h> // One-stop header.

  #include <iostream>
  #include <memory>

  int main(int argc, const char* argv[]) {
    if (argc != 2) {
      std::cerr << "usage: example-app <path-to-exported-script-module>\n";
      return -1;
    }

    torch::jit::script::Module module;
    try {
      // Deserialize the ScriptModule from a file using torch::jit::load().
      module = torch::jit::load(argv[1]);
    }
    catch (const c10::Error& e) {
      std::cerr << "error loading the model\n";
      return -1;
    }

    std::cout << "ok\n";
  }
|
|
|
|
The ``<torch/script.h>`` header encompasses all relevant includes from the
LibTorch library necessary to run the example. Our application accepts the file
path to a serialized PyTorch ``ScriptModule`` as its only command line argument
and then proceeds to deserialize the module using the ``torch::jit::load()``
function, which takes this file path as input. In return we receive a
``torch::jit::script::Module`` object. We will examine how to execute it in a
moment.
|
|
Depending on LibTorch and Building the Application
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
|
Assume we stored the above code in a file called ``example-app.cpp``. A
minimal ``CMakeLists.txt`` to build it could look as simple as:
|
|
.. code-block:: cmake

  cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
  project(custom_ops)

  find_package(Torch REQUIRED)

  add_executable(example-app example-app.cpp)
  target_link_libraries(example-app "${TORCH_LIBRARIES}")
  set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
|
|
The last thing we need to build the example application is the LibTorch
distribution. You can always grab the latest stable release from the `download
page <https://pytorch.org/>`_ on the PyTorch website. If you download and unzip
the latest archive, you should receive a folder with the following directory
structure:
|
|
.. code-block:: sh

  libtorch/
    bin/
    include/
    lib/
    share/
|
|
- The ``lib/`` folder contains the shared libraries you must link against,
- The ``include/`` folder contains header files your program will need to include,
- The ``share/`` folder contains the necessary CMake configuration to enable the simple ``find_package(Torch)`` command above.
|
|
.. tip::

  On Windows, debug and release builds are not ABI-compatible. If you plan to
  build your project in debug mode, please try the debug version of LibTorch.
  Also, make sure you specify the correct configuration in the
  ``cmake --build .`` line below.
|
|
The last step is building the application. For this, assume our example
directory is laid out like this:
|
|
.. code-block:: sh

  example-app/
    CMakeLists.txt
    example-app.cpp
|
|
We can now run the following commands to build the application from within the
``example-app/`` folder:
|
|
.. code-block:: sh

  mkdir build
  cd build
  cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
  cmake --build . --config Release
|
|
where ``/path/to/libtorch`` should be the full path to the unzipped LibTorch
distribution. If all goes well, it will look something like this:
|
|
.. code-block:: sh

  root@4b5a67132e81:/example-app# mkdir build
  root@4b5a67132e81:/example-app# cd build
  root@4b5a67132e81:/example-app/build# cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
  -- The C compiler identification is GNU 5.4.0
  -- The CXX compiler identification is GNU 5.4.0
  -- Check for working C compiler: /usr/bin/cc
  -- Check for working C compiler: /usr/bin/cc -- works
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Check for working CXX compiler: /usr/bin/c++
  -- Check for working CXX compiler: /usr/bin/c++ -- works
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Looking for pthread.h
  -- Looking for pthread.h - found
  -- Looking for pthread_create
  -- Looking for pthread_create - not found
  -- Looking for pthread_create in pthreads
  -- Looking for pthread_create in pthreads - not found
  -- Looking for pthread_create in pthread
  -- Looking for pthread_create in pthread - found
  -- Found Threads: TRUE
  -- Configuring done
  -- Generating done
  -- Build files have been written to: /example-app/build
  root@4b5a67132e81:/example-app/build# make
  Scanning dependencies of target example-app
  [ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
  [100%] Linking CXX executable example-app
  [100%] Built target example-app
|
|
If we supply the path to the traced ``ResNet18`` model
``traced_resnet_model.pt`` we created earlier to the resulting ``example-app``
binary, we should be rewarded with a friendly "ok". Please note that if you
try to run this example with ``my_module_model.pt``, you will get an error
saying that your input is of an incompatible shape: ``my_module_model.pt``
expects a 1D input instead of a 4D one.
|
|
.. code-block:: sh

  root@4b5a67132e81:/example-app/build# ./example-app <path_to_model>/traced_resnet_model.pt
  ok
|
|
Step 4: Executing the Script Module in C++
------------------------------------------
|
|
Having successfully loaded our serialized ``ResNet18`` in C++, we are now just
a couple lines of code away from executing it! Let's add those lines to our
C++ application's ``main()`` function:
|
|
.. code-block:: cpp

  // Create a vector of inputs.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Execute the model and turn its output into a tensor.
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.slice(1, 0, 5) << '\n';
|
|
The first two lines set up the inputs to our model. We create a vector of
``torch::jit::IValue`` (a type-erased value type that ``script::Module``
methods accept and return) and add a single input. To create the input tensor,
we use ``torch::ones()``, the C++ equivalent of ``torch.ones`` in Python. We
then run the ``script::Module``'s ``forward`` method, passing it the input
vector we created. In return we get a new ``IValue``, which we convert to a
tensor by calling ``toTensor()``.
.. tip::

  To learn more about functions like ``torch::ones`` and the PyTorch C++ API
  in general, refer to its documentation at https://pytorch.org/cppdocs. The
  PyTorch C++ API provides near feature parity with the Python API, allowing
  you to further manipulate and process tensors just like in Python.
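As an illustration of that parity, the load-and-run sequence above can be
mirrored in Python. This sketch first traces a small stand-in module (rather
than the full ``ResNet18``, so the example is self-contained); the file name
``stand_in_model.pt`` is only illustrative:

```python
import torch

# Trace and save a small stand-in module, the same way as the real model.
stand_in = torch.jit.trace(torch.nn.Linear(4, 5), torch.rand(1, 4))
stand_in.save("stand_in_model.pt")

# The Python mirror of the C++ code: load, run on a ones tensor, slice.
module = torch.jit.load("stand_in_model.pt")
output = module(torch.ones(1, 4))
print(output.shape)    # torch.Size([1, 5])
print(output[:, 0:5])  # same slice as output.slice(1, 0, 5) in C++
```

The indexing expression ``output[:, 0:5]`` plays the role of the C++
``output.slice(1, 0, 5)``: both take entries 0 to 5 along dimension 1.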
In the last line, we print the first five entries of the output. Since we
supplied the same input to our model in Python earlier in this tutorial, we
should ideally see the same output. Let's try it out by re-compiling our
application and running it with the same serialized model:
|
|
.. code-block:: sh

  root@4b5a67132e81:/example-app/build# make
  Scanning dependencies of target example-app
  [ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
  [100%] Linking CXX executable example-app
  [100%] Built target example-app
  root@4b5a67132e81:/example-app/build# ./example-app traced_resnet_model.pt
  -0.2698 -0.0381  0.4023 -0.3010 -0.0448
  [ Variable[CPUFloatType]{1,5} ]
|
|
|
|
For reference, the output in Python previously was::

  tensor([-0.2698, -0.0381,  0.4023, -0.3010, -0.0448], grad_fn=<SliceBackward>)

Looks like a good match!
|
|
.. tip::

  To move your model to GPU memory, you can write ``module.to(at::kCUDA);``.
  Make sure the inputs to the model also live in CUDA memory by calling
  ``tensor.to(at::kCUDA)``, which returns a new tensor in CUDA memory.
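The same device discipline applies on the Python side. As a sketch (using a
small stand-in module rather than the real model, and falling back to CPU when
no GPU is present):

```python
import torch

# Pick CUDA when available, else CPU, and keep the module and its
# inputs on the same device.
device = "cuda" if torch.cuda.is_available() else "cpu"

module = torch.jit.trace(torch.nn.Linear(3, 2), torch.rand(1, 3))
module.to(device)

# Inputs must live on the same device as the module's parameters.
inputs = torch.ones(1, 3, device=device)
output = module(inputs)
print(output.device.type)  # "cuda" or "cpu", matching the module
```

Mixing devices (a CUDA module fed a CPU tensor, or vice versa) raises a
runtime error, which is the Python analogue of the C++ pitfall the tip above
warns about.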
|
|
Step 5: Getting Help and Exploring the API
------------------------------------------
|
|
This tutorial has hopefully equipped you with a general understanding of a
PyTorch model's path from Python to C++. With the concepts described in this
tutorial, you should be able to go from a vanilla, "eager" PyTorch model, to a
compiled ``ScriptModule`` in Python, to a serialized file on disk and, to
close the loop, to an executable ``script::Module`` in C++.

Of course, there are many concepts we did not cover. For example, you may find
yourself wanting to extend your ``ScriptModule`` with a custom operator
implemented in C++ or CUDA, and to execute this custom operator inside your
``ScriptModule`` loaded in your pure C++ production environment. The good news
is: this is possible, and well supported! For now, you can explore `this
<https://github.com/pytorch/pytorch/tree/master/test/custom_operator>`_ folder
for examples, and we will follow up with a tutorial shortly. In the meantime,
the following links may be generally helpful:

- The TorchScript reference: https://pytorch.org/docs/master/jit.html
- The PyTorch C++ API documentation: https://pytorch.org/cppdocs/
- The PyTorch Python API documentation: https://pytorch.org/docs/

As always, if you run into any problems or have questions, you can use our
`forum <https://discuss.pytorch.org/>`_ or `GitHub issues
<https://github.com/pytorch/pytorch/issues>`_ to get in touch.