diff --git a/.gitattributes b/.gitattributes index 5c7f8a7641956ba4e65a3605ae8b447e0a0ac322..fe428d38f70cd73870cea97de1943fd526db3e22 100644 --- a/.gitattributes +++ b/.gitattributes @@ -685,3 +685,4 @@ mplug_owl2/lib/libquadmath.so.0 filter=lfs diff=lfs merge=lfs -text mplug_owl2/lib/libitm.so.1.0.0 filter=lfs diff=lfs merge=lfs -text pllava/lib/python3.10/site-packages/decord.libs/libavcodec-bc50294c.so.58.35.100 filter=lfs diff=lfs merge=lfs -text pllava/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12 filter=lfs diff=lfs merge=lfs -text +pllava/lib/python3.10/site-packages/nvidia/cufft/lib/libcufftw.so.10 filter=lfs diff=lfs merge=lfs -text diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/ANDROID_API_MIN.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/ANDROID_API_MIN.rst new file mode 100644 index 0000000000000000000000000000000000000000..7ca2455ec2c11b31e82ef52f45fac0cbd5c893b3 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/ANDROID_API_MIN.rst @@ -0,0 +1,9 @@ +ANDROID_API_MIN +--------------- + +.. versionadded:: 3.2 + +Set the Android MIN API version (e.g. ``9``). The version number +must be a positive decimal integer. This property is initialized by +the value of the :variable:`CMAKE_ANDROID_API_MIN` variable if it is set +when a target is created. Native code builds using this API version. 
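The ``ANDROID_API_MIN`` property above can be set per target; a minimal sketch (the target name ``native_lib`` and source path are hypothetical):

```cmake
# Hypothetical native library target. Per the documentation above,
# the value must be a positive decimal integer (e.g. 9).
add_library(native_lib SHARED jni/native.c)
set_property(TARGET native_lib PROPERTY ANDROID_API_MIN 9)
```

Setting the :variable:`CMAKE_ANDROID_API_MIN` variable before targets are created initializes the property for all of them instead.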
diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/AUTOMOC_EXECUTABLE.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/AUTOMOC_EXECUTABLE.rst new file mode 100644 index 0000000000000000000000000000000000000000..a6d5aa03e6731d527f1cd77b0e09983a174c03c1 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/AUTOMOC_EXECUTABLE.rst @@ -0,0 +1,17 @@ +AUTOMOC_EXECUTABLE +------------------ + +.. versionadded:: 3.14 + +``AUTOMOC_EXECUTABLE`` is a file path pointing to the ``moc`` +executable to use for :prop_tgt:`AUTOMOC` enabled files. Setting +this property will make CMake skip the automatic detection of the +``moc`` binary as well as the sanity-tests normally run to ensure +that the binary is available and working as expected. + +Usually this property does not need to be set. Only consider this +property if auto-detection of ``moc`` cannot work -- e.g. because +you are building the ``moc`` binary as part of your project. + +See the :manual:`cmake-qt(7)` manual for more information on using CMake +with Qt. diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_FLAGS.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_FLAGS.rst new file mode 100644 index 0000000000000000000000000000000000000000..5229d467d1ef30740de012e92c6e4c7ae9e2dbfe --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_FLAGS.rst @@ -0,0 +1,14 @@ +COMPILE_FLAGS +------------- + +Additional flags to use when compiling this target's sources. + +The ``COMPILE_FLAGS`` property sets additional compiler flags used to +build sources within the target. Use :prop_tgt:`COMPILE_DEFINITIONS` +to pass additional preprocessor definitions. + +.. note:: + + This property has been superseded by the :prop_tgt:`COMPILE_OPTIONS` property. 
+ Alternatively, you can also use the :command:`target_compile_options` command + instead. diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_PDB_NAME.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_PDB_NAME.rst new file mode 100644 index 0000000000000000000000000000000000000000..b76afeb0896c05f1c3a2a23687212a0c46b9617b --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_PDB_NAME.rst @@ -0,0 +1,13 @@ +COMPILE_PDB_NAME +---------------- + +.. versionadded:: 3.1 + +Output name for the MS debug symbol ``.pdb`` file generated by the +compiler while building source files. + +This property specifies the base name for the debug symbols file. +If not set, the default is unspecified. + +.. |PDB_XXX| replace:: :prop_tgt:`PDB_NAME` +.. include:: COMPILE_PDB_NOTE.txt diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_PDB_OUTPUT_DIRECTORY_CONFIG.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_PDB_OUTPUT_DIRECTORY_CONFIG.rst new file mode 100644 index 0000000000000000000000000000000000000000..c25c2fc4e5b85a77a2dbff1d5c9fe58e61223345 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/COMPILE_PDB_OUTPUT_DIRECTORY_CONFIG.rst @@ -0,0 +1,18 @@ +COMPILE_PDB_OUTPUT_DIRECTORY_<CONFIG> +------------------------------------- + +.. versionadded:: 3.1 + +Per-configuration output directory for the MS debug symbol ``.pdb`` file +generated by the compiler while building source files. + +This is a per-configuration version of +:prop_tgt:`COMPILE_PDB_OUTPUT_DIRECTORY`, +but multi-configuration generators (Visual Studio, Xcode) do NOT append a +per-configuration subdirectory to the specified directory. 
This +property is initialized by the value of the +:variable:`CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY_<CONFIG>` variable +if it is set when a target is created. + +.. |PDB_XXX| replace:: :prop_tgt:`PDB_OUTPUT_DIRECTORY_<CONFIG>` +.. include:: COMPILE_PDB_NOTE.txt diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/DEPLOYMENT_REMOTE_DIRECTORY.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/DEPLOYMENT_REMOTE_DIRECTORY.rst new file mode 100644 index 0000000000000000000000000000000000000000..3f691b1815242c3d9e26903f5d8aa0e15f408af8 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/DEPLOYMENT_REMOTE_DIRECTORY.rst @@ -0,0 +1,20 @@ +DEPLOYMENT_REMOTE_DIRECTORY +--------------------------- + +.. versionadded:: 3.6 + +Set the WinCE project ``RemoteDirectory`` in ``DeploymentTool`` and +``RemoteExecutable`` in ``DebuggerTool`` in ``.vcproj`` files generated +by the :ref:`Visual Studio Generators`. +This is useful when you want to debug on a remote WinCE device. +For example: + +.. code-block:: cmake + + set_property(TARGET ${TARGET} PROPERTY + DEPLOYMENT_REMOTE_DIRECTORY "\\FlashStorage") + +produces:: + + + diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/DOTNET_SDK.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/DOTNET_SDK.rst new file mode 100644 index 0000000000000000000000000000000000000000..ca1dcaca9ab6de976f63e80c24c6fb7825a86a42 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/DOTNET_SDK.rst @@ -0,0 +1,25 @@ +DOTNET_SDK +---------- + +.. versionadded:: 3.23 + +Specify the .NET SDK for C# projects. For example: ``Microsoft.NET.Sdk``. + +This property tells :ref:`Visual Studio Generators` for VS 2019 and +above to generate a .NET SDK-style project using the specified SDK. 
+The property is meaningful only to these generators, and only in C# +targets. It is ignored for C++ projects, even if they are managed +(e.g. using :prop_tgt:`COMMON_LANGUAGE_RUNTIME`). + +This property must be a non-empty string to generate .NET SDK-style projects. +CMake does not perform any validations for the value of the property. + +This property may be initialized for all targets using the +:variable:`CMAKE_DOTNET_SDK` variable. + +.. note:: + + The :ref:`Visual Studio Generators` in this version of CMake have not + yet learned to support :command:`add_custom_command` in .NET SDK-style + projects. It is currently an error to attach a custom command to a + target with the ``DOTNET_SDK`` property set. diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/EXPORT_NO_SYSTEM.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/EXPORT_NO_SYSTEM.rst new file mode 100644 index 0000000000000000000000000000000000000000..f86abd3c648321a1a72fca1415b99a46b210cbdf --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/EXPORT_NO_SYSTEM.rst @@ -0,0 +1,13 @@ +EXPORT_NO_SYSTEM +---------------- + +.. versionadded:: 3.25 + +This property affects the behavior of the :command:`install(EXPORT)` and +:command:`export` commands when they install or export the target respectively. +When ``EXPORT_NO_SYSTEM`` is set to true, those commands generate an imported +target with :prop_tgt:`SYSTEM` property set to false. + +See the :prop_tgt:`NO_SYSTEM_FROM_IMPORTED` target property to set this +behavior on the target *consuming* the include directories rather than the +one *providing* them. 
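The ``EXPORT_NO_SYSTEM`` behavior described above can be sketched as follows (target and export names are hypothetical):

```cmake
# Hypothetical library whose imported form should NOT be treated as
# SYSTEM by consumers: warnings from its headers stay visible.
add_library(mylib src/mylib.c)
set_property(TARGET mylib PROPERTY EXPORT_NO_SYSTEM TRUE)

install(TARGETS mylib EXPORT mylib-targets)
# The generated imported target will have SYSTEM set to false.
install(EXPORT mylib-targets DESTINATION lib/cmake/mylib)
```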
diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/HEADER_SET.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/HEADER_SET.rst new file mode 100644 index 0000000000000000000000000000000000000000..a703fc155a3a8e2bf1846cf1b2aff817819648f2 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/HEADER_SET.rst @@ -0,0 +1,15 @@ +HEADER_SET +---------- + +.. versionadded:: 3.23 + +Semicolon-separated list of files in the target's default header set +(i.e. the file set with name and type ``HEADERS``). If any of the paths +are relative, they are computed relative to the target's source directory. +The property supports +:manual:`generator expressions <cmake-generator-expressions(7)>`. + +This property is normally only set by :command:`target_sources(FILE_SET)` +rather than being manipulated directly. + +See :prop_tgt:`HEADER_SET_<NAME>` for the list of files in other header sets. diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/HIP_STANDARD.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/HIP_STANDARD.rst new file mode 100644 index 0000000000000000000000000000000000000000..9de873038126d427eabfa3b1229aaf8ad826ec5e --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/HIP_STANDARD.rst @@ -0,0 +1,54 @@ +HIP_STANDARD +------------ + +.. versionadded:: 3.21 + +The HIP/C++ standard requested to build this target. + +Supported values are: + +``98`` + HIP C++98 + +``11`` + HIP C++11 + +``14`` + HIP C++14 + +``17`` + HIP C++17 + +``20`` + HIP C++20 + +``23`` + HIP C++23 + +``26`` + .. versionadded:: 3.25 + + HIP C++26. CMake 3.25 and later recognize ``26`` as a valid value, + but no CMake version yet has support for it with any compiler. + +If the value requested does not result in a compile flag being added for +the compiler in use, a previous standard flag will be added instead. 
This +means that using: + +.. code-block:: cmake + + set_property(TARGET tgt PROPERTY HIP_STANDARD 11) + +with a compiler which does not support ``-std=gnu++11`` or an equivalent +flag will not result in an error or warning, but will instead add the +``-std=gnu++98`` flag if supported. This "decay" behavior may be controlled +with the :prop_tgt:`HIP_STANDARD_REQUIRED` target property. +Additionally, the :prop_tgt:`HIP_EXTENSIONS` target property may be used to +control whether compiler-specific extensions are enabled on a per-target basis. + +See the :manual:`cmake-compile-features(7)` manual for information on +compile features and a list of supported compilers. + +This property is initialized by the value of +the :variable:`CMAKE_HIP_STANDARD` variable if it is set when a target +is created. diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/PDB_NOTE.txt b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/PDB_NOTE.txt new file mode 100644 index 0000000000000000000000000000000000000000..b5ada07c9d533a6f9ada2a49ee9c9de1e197e61e --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/PDB_NOTE.txt @@ -0,0 +1,9 @@ +.. note:: + This property does not apply to STATIC library targets because no linker + is invoked to produce them so they have no linker-generated ``.pdb`` file + containing debug symbols. + + The linker-generated program database files are specified by the + ``/pdb`` linker flag and are not the same as compiler-generated + program database files specified by the ``/Fd`` compiler flag. + Use the |COMPILE_PDB_XXX| property to specify the latter. 
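The note's distinction between linker-generated and compiler-generated ``.pdb`` files can be sketched as follows (the target name and base names are hypothetical):

```cmake
# PDB_NAME controls the linker-generated .pdb (/pdb linker flag);
# COMPILE_PDB_NAME controls the compiler-generated .pdb (/Fd flag).
add_library(mylib SHARED src/mylib.c)
set_target_properties(mylib PROPERTIES
  PDB_NAME         "mylib_link"     # linker-generated debug database
  COMPILE_PDB_NAME "mylib_compile"  # compiler-generated debug database
)
```

Since the note says no linker is invoked for ``STATIC`` libraries, only the ``COMPILE_PDB_*`` properties are meaningful for them.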
diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/SYSTEM.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/SYSTEM.rst new file mode 100644 index 0000000000000000000000000000000000000000..f5c11bcb8b0c508ef340854e1efb00b22ad067a4 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/SYSTEM.rst @@ -0,0 +1,26 @@ +SYSTEM +------ + +.. versionadded:: 3.25 + +Specifies that a target is a system target. This has the following +effects: + +* Entries of :prop_tgt:`INTERFACE_INCLUDE_DIRECTORIES` are treated as + system include directories when compiling consumers. + Entries of :prop_tgt:`INTERFACE_SYSTEM_INCLUDE_DIRECTORIES` are not + affected, and will always be treated as system include directories. +* On Apple platforms, if the :prop_tgt:`FRAMEWORK` target property is true, + the frameworks directory is treated as system. + +For imported targets, this property defaults to true, which means +that their :prop_tgt:`INTERFACE_INCLUDE_DIRECTORIES` and, if the +:prop_tgt:`FRAMEWORK` target property is true, frameworks directory are +treated as system directories by default. If their ``SYSTEM`` property is +false, then their :prop_tgt:`INTERFACE_INCLUDE_DIRECTORIES` as well as +frameworks will not be treated as system. Use the :prop_tgt:`EXPORT_NO_SYSTEM` +property to change how a target's ``SYSTEM`` property is set when it is +installed. + +For non-imported targets, this target property is initialized from +the :prop_dir:`SYSTEM` directory property when the target is created. 
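Since imported targets default ``SYSTEM`` to true, opting one out looks like this (the package and target names are hypothetical):

```cmake
# Hypothetical imported target: disable SYSTEM treatment so its
# interface include directories produce normal compiler warnings
# again instead of being silenced as system headers.
find_package(Foo REQUIRED)
set_property(TARGET Foo::Foo PROPERTY SYSTEM FALSE)
```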
diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/VS_CONFIGURATION_TYPE.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/VS_CONFIGURATION_TYPE.rst new file mode 100644 index 0000000000000000000000000000000000000000..4adffd4f23dc67475710b64376ed75abecdaad7e --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/VS_CONFIGURATION_TYPE.rst @@ -0,0 +1,14 @@ +VS_CONFIGURATION_TYPE +--------------------- + +.. versionadded:: 3.6 + +Visual Studio project configuration type. + +Sets the ``ConfigurationType`` attribute for a generated Visual Studio project. +The property value may use +:manual:`generator expressions <cmake-generator-expressions(7)>`. +If this property is set, it overrides the default setting that is based on the +target type (e.g. ``StaticLibrary``, ``Application``, ...). + +Supported on :ref:`Visual Studio Generators` for VS 2010 and higher. diff --git a/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/VS_WINRT_COMPONENT.rst b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/VS_WINRT_COMPONENT.rst new file mode 100644 index 0000000000000000000000000000000000000000..8b4aaf7ac0b7fcce6f8a63eeece92f9898d2bf45 --- /dev/null +++ b/mplug_owl2/lib/python3.10/site-packages/cmake/data/share/cmake-3.31/Help/prop_tgt/VS_WINRT_COMPONENT.rst @@ -0,0 +1,13 @@ +VS_WINRT_COMPONENT +------------------ + +.. versionadded:: 3.1 + +Mark a target as a Windows Runtime component for the Visual Studio generator. +Compile the target with ``C++/CX`` language extensions for Windows Runtime. +For ``SHARED`` and ``MODULE`` libraries, this also defines the +``_WINRT_DLL`` preprocessor macro. + +.. note:: + Currently this is implemented only by Visual Studio generators. + Support may be added to other generators in the future. 
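The ``VS_WINRT_COMPONENT`` usage above can be sketched as follows (the target name and source file are hypothetical):

```cmake
# Hypothetical WinRT component: built with C++/CX extensions; as a
# SHARED library it also gets the _WINRT_DLL preprocessor macro.
add_library(winrt_comp SHARED comp.cpp)
set_property(TARGET winrt_comp PROPERTY VS_WINRT_COMPONENT TRUE)
```

This has an effect only when one of the Visual Studio generators is in use.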
diff --git a/pllava/lib/python3.10/site-packages/_distutils_hack/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/_distutils_hack/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..61ab22ab8b06317d8a2ebbe0d213ed4f9156dc2e Binary files /dev/null and b/pllava/lib/python3.10/site-packages/_distutils_hack/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/mpmath-1.3.0.dist-info/LICENSE b/pllava/lib/python3.10/site-packages/mpmath-1.3.0.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..9ecdc7586d08805bc984539f6672476e86e538b6 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/mpmath-1.3.0.dist-info/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2005-2021 Fredrik Johansson and mpmath contributors + +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + a. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + b. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + c. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH +DAMAGE. diff --git a/pllava/lib/python3.10/site-packages/mpmath-1.3.0.dist-info/WHEEL b/pllava/lib/python3.10/site-packages/mpmath-1.3.0.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..57e3d840d59a650ac5bccbad5baeec47d155f0ad --- /dev/null +++ b/pllava/lib/python3.10/site-packages/mpmath-1.3.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.38.4) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/pllava/lib/python3.10/site-packages/nvidia/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..2a53a7b94863367f14ff8e530f11d7ac5d53ced5 Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cublas/include/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cublas/include/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..5bfb052335e74db3671c15d8750e01b2a89fcd5a Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cublas/include/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/__init__.py new file mode 100644 index 
0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti.h new file mode 100644 index 0000000000000000000000000000000000000000..be316531dcfd846bcea8feadf3604437ce2447a1 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti.h @@ -0,0 +1,123 @@ +/* + * Copyright 2010-2017 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. 
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(_CUPTI_H_) +#define _CUPTI_H_ + +#ifdef _WIN32 +#ifndef WIN32_LEAN_AND_MEAN +#define WIN32_LEAN_AND_MEAN +#endif +#ifdef NOMINMAX +#include +#else +#define NOMINMAX +#include +#undef NOMINMAX +#endif +#endif + +#include +#include +#include + +/* Activity, callback, event and metric APIs */ +#include +#include +#include +#include + +/* Runtime, driver, and nvtx function identifiers */ +#include +#include +#include + +/* To support function parameter structures for obsoleted API. 
See + cuda.h for the actual definition of these structures. */ +typedef unsigned int CUdeviceptr_v1; +typedef struct CUDA_MEMCPY2D_v1_st { int dummy; } CUDA_MEMCPY2D_v1; +typedef struct CUDA_MEMCPY3D_v1_st { int dummy; } CUDA_MEMCPY3D_v1; +typedef struct CUDA_ARRAY_DESCRIPTOR_v1_st { int dummy; } CUDA_ARRAY_DESCRIPTOR_v1; +typedef struct CUDA_ARRAY3D_DESCRIPTOR_v1_st { int dummy; } CUDA_ARRAY3D_DESCRIPTOR_v1; + +/* Function parameter structures */ +#include +#include + +/* The following parameter structures cannot be included unless a + header that defines GL_VERSION is included before including them. + If these are needed then make sure such a header is included + already. */ +#ifdef GL_VERSION +#include +#include +#endif + +//#include + +/* The following parameter structures cannot be included by default as + they are not guaranteed to be available on all systems. Uncomment + the includes that are available, or use the include explicitly. */ +#if defined(__linux__) +//#include +//#include +#endif + +#ifdef _WIN32 +//#include +//#include +//#include +//#include +//#include +//#include +#endif + +#endif /*_CUPTI_H_*/ + + diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_activity.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_activity.h new file mode 100644 index 0000000000000000000000000000000000000000..fb98c23e5591a45789d7e72a0a4561dce199905a --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_activity.h @@ -0,0 +1,10982 @@ +/* + * Copyright 2011-2021 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
+ * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. 
Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(_CUPTI_ACTIVITY_H_) +#define _CUPTI_ACTIVITY_H_ + +#include +#include +#include +#include +#include +#if defined(CUPTI_DIRECTIVE_SUPPORT) +#include +#include +#endif + +#ifndef CUPTIAPI +#ifdef _WIN32 +#define CUPTIAPI __stdcall +#else +#define CUPTIAPI +#endif +#endif + +#if defined(__LP64__) +#define CUPTILP64 1 +#elif defined(_WIN64) +#define CUPTILP64 1 +#else +#undef CUPTILP64 +#endif + +#define ACTIVITY_RECORD_ALIGNMENT 8 +#if defined(_WIN32) // Windows 32- and 64-bit +#define START_PACKED_ALIGNMENT __pragma(pack(push,1)) // exact fit - no padding +#define PACKED_ALIGNMENT __declspec(align(ACTIVITY_RECORD_ALIGNMENT)) +#define END_PACKED_ALIGNMENT __pragma(pack(pop)) +#elif defined(__GNUC__) // GCC +#define START_PACKED_ALIGNMENT +#define PACKED_ALIGNMENT __attribute__ ((__packed__)) __attribute__ ((aligned (ACTIVITY_RECORD_ALIGNMENT))) +#define END_PACKED_ALIGNMENT +#else // all other compilers +#define START_PACKED_ALIGNMENT +#define PACKED_ALIGNMENT +#define END_PACKED_ALIGNMENT +#endif + +#define CUPTI_UNIFIED_MEMORY_CPU_DEVICE_ID ((uint32_t) 0xFFFFFFFFU) +#define CUPTI_INVALID_CONTEXT_ID ((uint32_t) 0xFFFFFFFFU) +#define CUPTI_INVALID_STREAM_ID ((uint32_t) 0xFFFFFFFFU) +#define CUPTI_INVALID_CHANNEL_ID ((uint32_t) 0xFFFFFFFFU) +#if defined(__cplusplus) +extern "C" { +#endif + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility push(default) +#endif + +/** + * \defgroup CUPTI_ACTIVITY_API CUPTI Activity API + * Functions, types, and enums that implement the CUPTI Activity API. + * @{ + */ + +/** + * \brief The kinds of activity records. 
+ * + * Each activity record kind represents information about a GPU or an + * activity occurring on a CPU or GPU. Each kind is associated with a + * activity record structure that holds the information associated + * with the kind. + * \see CUpti_Activity + * \see CUpti_ActivityAPI + * \see CUpti_ActivityContext + * \see CUpti_ActivityDevice + * \see CUpti_ActivityDevice2 + * \see CUpti_ActivityDevice3 + * \see CUpti_ActivityDevice4 + * \see CUpti_ActivityDeviceAttribute + * \see CUpti_ActivityEvent + * \see CUpti_ActivityEventInstance + * \see CUpti_ActivityKernel + * \see CUpti_ActivityKernel2 + * \see CUpti_ActivityKernel3 + * \see CUpti_ActivityKernel4 + * \see CUpti_ActivityKernel5 + * \see CUpti_ActivityKernel6 + * \see CUpti_ActivityKernel7 + * \see CUpti_ActivityKernel8 + * \see CUpti_ActivityCdpKernel + * \see CUpti_ActivityPreemption + * \see CUpti_ActivityMemcpy + * \see CUpti_ActivityMemcpy3 + * \see CUpti_ActivityMemcpy4 + * \see CUpti_ActivityMemcpy5 + * \see CUpti_ActivityMemcpyPtoP + * \see CUpti_ActivityMemcpyPtoP2 + * \see CUpti_ActivityMemcpyPtoP3 + * \see CUpti_ActivityMemcpyPtoP4 + * \see CUpti_ActivityMemset + * \see CUpti_ActivityMemset2 + * \see CUpti_ActivityMemset3 + * \see CUpti_ActivityMemset4 + * \see CUpti_ActivityMetric + * \see CUpti_ActivityMetricInstance + * \see CUpti_ActivityName + * \see CUpti_ActivityMarker + * \see CUpti_ActivityMarker2 + * \see CUpti_ActivityMarkerData + * \see CUpti_ActivitySourceLocator + * \see CUpti_ActivityGlobalAccess + * \see CUpti_ActivityGlobalAccess2 + * \see CUpti_ActivityGlobalAccess3 + * \see CUpti_ActivityBranch + * \see CUpti_ActivityBranch2 + * \see CUpti_ActivityOverhead + * \see CUpti_ActivityEnvironment + * \see CUpti_ActivityInstructionExecution + * \see CUpti_ActivityUnifiedMemoryCounter + * \see CUpti_ActivityFunction + * \see CUpti_ActivityModule + * \see CUpti_ActivitySharedAccess + * \see CUpti_ActivityPCSampling + * \see CUpti_ActivityPCSampling2 + * \see CUpti_ActivityPCSampling3 + 
* \see CUpti_ActivityPCSamplingRecordInfo + * \see CUpti_ActivityCudaEvent + * \see CUpti_ActivityStream + * \see CUpti_ActivitySynchronization + * \see CUpti_ActivityInstructionCorrelation + * \see CUpti_ActivityExternalCorrelation + * \see CUpti_ActivityUnifiedMemoryCounter2 + * \see CUpti_ActivityOpenAccData + * \see CUpti_ActivityOpenAccLaunch + * \see CUpti_ActivityOpenAccOther + * \see CUpti_ActivityOpenMp + * \see CUpti_ActivityNvLink + * \see CUpti_ActivityNvLink2 + * \see CUpti_ActivityNvLink3 + * \see CUpti_ActivityNvLink4 + * \see CUpti_ActivityMemory + * \see CUpti_ActivityPcie + */ +typedef enum { + /** + * The activity record is invalid. + */ + CUPTI_ACTIVITY_KIND_INVALID = 0, + /** + * A host<->host, host<->device, or device<->device memory copy. The + * corresponding activity record structure is \ref + * CUpti_ActivityMemcpy5. + */ + CUPTI_ACTIVITY_KIND_MEMCPY = 1, + /** + * A memory set executing on the GPU. The corresponding activity + * record structure is \ref CUpti_ActivityMemset4. + */ + CUPTI_ACTIVITY_KIND_MEMSET = 2, + /** + * A kernel executing on the GPU. This activity kind may significantly change + * the overall performance characteristics of the application because all + * kernel executions are serialized on the GPU. Other activity kind for kernel + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL doesn't break kernel concurrency. + * The corresponding activity record structure is \ref CUpti_ActivityKernel8. + */ + CUPTI_ACTIVITY_KIND_KERNEL = 3, + /** + * A CUDA driver API function execution. The corresponding activity + * record structure is \ref CUpti_ActivityAPI. + */ + CUPTI_ACTIVITY_KIND_DRIVER = 4, + /** + * A CUDA runtime API function execution. The corresponding activity + * record structure is \ref CUpti_ActivityAPI. + */ + CUPTI_ACTIVITY_KIND_RUNTIME = 5, + /** + * An event value. The corresponding activity record structure is + * \ref CUpti_ActivityEvent. + */ + CUPTI_ACTIVITY_KIND_EVENT = 6, + /** + * A metric value. 
The corresponding activity record structure is + * \ref CUpti_ActivityMetric. + */ + CUPTI_ACTIVITY_KIND_METRIC = 7, + /** + * Information about a device. The corresponding activity record + * structure is \ref CUpti_ActivityDevice4. + */ + CUPTI_ACTIVITY_KIND_DEVICE = 8, + /** + * Information about a context. The corresponding activity record + * structure is \ref CUpti_ActivityContext. + */ + CUPTI_ACTIVITY_KIND_CONTEXT = 9, + /** + * A kernel executing on the GPU. This activity kind doesn't break + * kernel concurrency. The corresponding activity record structure + * is \ref CUpti_ActivityKernel8. + */ + CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL = 10, + /** + * Resource naming done via NVTX APIs for thread, device, context, etc. + * The corresponding activity record structure is \ref CUpti_ActivityName. + */ + CUPTI_ACTIVITY_KIND_NAME = 11, + /** + * Instantaneous, start, or end NVTX marker. The corresponding activity + * record structure is \ref CUpti_ActivityMarker2. + */ + CUPTI_ACTIVITY_KIND_MARKER = 12, + /** + * Extended, optional data about a marker. The corresponding + * activity record structure is \ref CUpti_ActivityMarkerData. + */ + CUPTI_ACTIVITY_KIND_MARKER_DATA = 13, + /** + * Source information about a source-level result. The corresponding + * activity record structure is \ref CUpti_ActivitySourceLocator. + */ + CUPTI_ACTIVITY_KIND_SOURCE_LOCATOR = 14, + /** + * Results for source-level global access. The + * corresponding activity record structure is \ref + * CUpti_ActivityGlobalAccess3. + */ + CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS = 15, + /** + * Results for source-level branch. The corresponding + * activity record structure is \ref CUpti_ActivityBranch2. + */ + CUPTI_ACTIVITY_KIND_BRANCH = 16, + /** + * Overhead activity records. The + * corresponding activity record structure is + * \ref CUpti_ActivityOverhead. + */ + CUPTI_ACTIVITY_KIND_OVERHEAD = 17, + /** + * A CDP (CUDA Dynamic Parallel) kernel executing on the GPU. 
The + * corresponding activity record structure is \ref + * CUpti_ActivityCdpKernel. This activity cannot be directly + * enabled or disabled. It is enabled and disabled through the + * concurrent kernel activity, i.e. _CONCURRENT_KERNEL. + */ + CUPTI_ACTIVITY_KIND_CDP_KERNEL = 18, + /** + * Preemption activity record indicating a preemption of a CDP (CUDA + * Dynamic Parallel) kernel executing on the GPU. The corresponding + * activity record structure is \ref CUpti_ActivityPreemption. + */ + CUPTI_ACTIVITY_KIND_PREEMPTION = 19, + /** + * Environment activity records indicating power, clock, thermal, + * etc. levels of the GPU. The corresponding activity record + * structure is \ref CUpti_ActivityEnvironment. + */ + CUPTI_ACTIVITY_KIND_ENVIRONMENT = 20, + /** + * An event value associated with a specific event domain + * instance. The corresponding activity record structure is \ref + * CUpti_ActivityEventInstance. + */ + CUPTI_ACTIVITY_KIND_EVENT_INSTANCE = 21, + /** + * A peer to peer memory copy. The corresponding activity record + * structure is \ref CUpti_ActivityMemcpyPtoP4. + */ + CUPTI_ACTIVITY_KIND_MEMCPY2 = 22, + /** + * A metric value associated with a specific metric domain + * instance. The corresponding activity record structure is \ref + * CUpti_ActivityMetricInstance. + */ + CUPTI_ACTIVITY_KIND_METRIC_INSTANCE = 23, + /** + * Results for source-level instruction execution. + * The corresponding activity record structure is \ref + * CUpti_ActivityInstructionExecution. + */ + CUPTI_ACTIVITY_KIND_INSTRUCTION_EXECUTION = 24, + /** + * Unified Memory counter record. The corresponding activity + * record structure is \ref CUpti_ActivityUnifiedMemoryCounter2. + */ + CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER = 25, + /** + * Device global/function record. The corresponding activity + * record structure is \ref CUpti_ActivityFunction. + */ + CUPTI_ACTIVITY_KIND_FUNCTION = 26, + /** + * CUDA Module record. 
The corresponding activity + * record structure is \ref CUpti_ActivityModule. + */ + CUPTI_ACTIVITY_KIND_MODULE = 27, + /** + * A device attribute value. The corresponding activity record + * structure is \ref CUpti_ActivityDeviceAttribute. + */ + CUPTI_ACTIVITY_KIND_DEVICE_ATTRIBUTE = 28, + /** + * Results for source-level shared access. The + * corresponding activity record structure is \ref + * CUpti_ActivitySharedAccess. + */ + CUPTI_ACTIVITY_KIND_SHARED_ACCESS = 29, + /** + * Enable PC sampling for kernels. This will serialize + * kernels. The corresponding activity record structure + * is \ref CUpti_ActivityPCSampling3. + */ + CUPTI_ACTIVITY_KIND_PC_SAMPLING = 30, + /** + * Summary information about PC sampling records. The + * corresponding activity record structure is \ref + * CUpti_ActivityPCSamplingRecordInfo. + */ + CUPTI_ACTIVITY_KIND_PC_SAMPLING_RECORD_INFO = 31, + /** + * SASS/Source line-by-line correlation record. + * This will generate SASS/source correlation for functions that have source + * level analysis or PC sampling results. The records will be generated only + * when either source level analysis or PC sampling activity is enabled. + * The corresponding activity record structure is \ref + * CUpti_ActivityInstructionCorrelation. + */ + CUPTI_ACTIVITY_KIND_INSTRUCTION_CORRELATION = 32, + /** + * OpenACC data events. + * The corresponding activity record structure is \ref + * CUpti_ActivityOpenAccData. + */ + CUPTI_ACTIVITY_KIND_OPENACC_DATA = 33, + /** + * OpenACC launch events. + * The corresponding activity record structure is \ref + * CUpti_ActivityOpenAccLaunch. + */ + CUPTI_ACTIVITY_KIND_OPENACC_LAUNCH = 34, + /** + * OpenACC other events. + * The corresponding activity record structure is \ref + * CUpti_ActivityOpenAccOther. + */ + CUPTI_ACTIVITY_KIND_OPENACC_OTHER = 35, + /** + * Information about a CUDA event. The + * corresponding activity record structure is \ref + * CUpti_ActivityCudaEvent. 
+ */ + CUPTI_ACTIVITY_KIND_CUDA_EVENT = 36, + /** + * Information about a CUDA stream. The + * corresponding activity record structure is \ref + * CUpti_ActivityStream. + */ + CUPTI_ACTIVITY_KIND_STREAM = 37, + /** + * Records for synchronization management. The + * corresponding activity record structure is \ref + * CUpti_ActivitySynchronization. + */ + CUPTI_ACTIVITY_KIND_SYNCHRONIZATION = 38, + /** + * Records for correlation of different programming APIs. The + * corresponding activity record structure is \ref + * CUpti_ActivityExternalCorrelation. + */ + CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION = 39, + /** + * NVLink information. + * The corresponding activity record structure is \ref + * CUpti_ActivityNvLink4. + */ + CUPTI_ACTIVITY_KIND_NVLINK = 40, + /** + * Instantaneous Event information. + * The corresponding activity record structure is \ref + * CUpti_ActivityInstantaneousEvent. + */ + CUPTI_ACTIVITY_KIND_INSTANTANEOUS_EVENT = 41, + /** + * Instantaneous Event information for a specific event + * domain instance. + * The corresponding activity record structure is \ref + * CUpti_ActivityInstantaneousEventInstance. + */ + CUPTI_ACTIVITY_KIND_INSTANTANEOUS_EVENT_INSTANCE = 42, + /** + * Instantaneous Metric information. + * The corresponding activity record structure is \ref + * CUpti_ActivityInstantaneousMetric. + */ + CUPTI_ACTIVITY_KIND_INSTANTANEOUS_METRIC = 43, + /** + * Instantaneous Metric information for a specific metric + * domain instance. + * The corresponding activity record structure is \ref + * CUpti_ActivityInstantaneousMetricInstance. + */ + CUPTI_ACTIVITY_KIND_INSTANTANEOUS_METRIC_INSTANCE = 44, + /** + * Memory activity tracking allocation and freeing of the memory. + * The corresponding activity record structure is \ref + * CUpti_ActivityMemory. + */ + CUPTI_ACTIVITY_KIND_MEMORY = 45, + /** + * PCI devices information used for PCI topology. + * The corresponding activity record structure is \ref + * CUpti_ActivityPcie. 
+ */ + CUPTI_ACTIVITY_KIND_PCIE = 46, + /** + * OpenMP parallel events. + * The corresponding activity record structure is \ref + * CUpti_ActivityOpenMp. + */ + CUPTI_ACTIVITY_KIND_OPENMP = 47, + /** + * A CUDA driver kernel launch occurring outside of any + * public API function execution. Tools can handle these + * like records for driver API launch functions, although + * the cbid field is not used here. + * The corresponding activity record structure is \ref + * CUpti_ActivityAPI. + */ + CUPTI_ACTIVITY_KIND_INTERNAL_LAUNCH_API = 48, + /** + * Memory activity tracking allocation and freeing of the memory. + * The corresponding activity record structure is \ref + * CUpti_ActivityMemory3. + */ + CUPTI_ACTIVITY_KIND_MEMORY2 = 49, + + /** + * Memory pool activity tracking creation, destruction, and + * trimming of the memory pool. + * The corresponding activity record structure is \ref + * CUpti_ActivityMemoryPool2. + */ + CUPTI_ACTIVITY_KIND_MEMORY_POOL = 50, + + /** + * The corresponding activity record structure is \ref CUpti_ActivityGraphTrace. + */ + CUPTI_ACTIVITY_KIND_GRAPH_TRACE = 51, + + /** + * JIT operation tracking. + * The corresponding activity record structure is \ref + * CUpti_ActivityJit. + */ + CUPTI_ACTIVITY_KIND_JIT = 52, + + + CUPTI_ACTIVITY_KIND_COUNT, + + CUPTI_ACTIVITY_KIND_FORCE_INT = 0x7fffffff +} CUpti_ActivityKind; + +/** + * \brief The kinds of activity objects. + * \see CUpti_ActivityObjectKindId + */ +typedef enum { + /** + * The object kind is not known. + */ + CUPTI_ACTIVITY_OBJECT_UNKNOWN = 0, + /** + * A process. + */ + CUPTI_ACTIVITY_OBJECT_PROCESS = 1, + /** + * A thread. + */ + CUPTI_ACTIVITY_OBJECT_THREAD = 2, + /** + * A device. + */ + CUPTI_ACTIVITY_OBJECT_DEVICE = 3, + /** + * A context. + */ + CUPTI_ACTIVITY_OBJECT_CONTEXT = 4, + /** + * A stream. 
+ */ + CUPTI_ACTIVITY_OBJECT_STREAM = 5, + + CUPTI_ACTIVITY_OBJECT_FORCE_INT = 0x7fffffff +} CUpti_ActivityObjectKind; + +/** + * \brief Identifiers for object kinds as specified by + * CUpti_ActivityObjectKind. + * \see CUpti_ActivityObjectKind + */ +typedef union { + /** + * A process object requires that we identify the process ID. A + * thread object requires that we identify both the process and + * thread ID. + */ + struct { + uint32_t processId; + uint32_t threadId; + } pt; + /** + * A device object requires that we identify the device ID. A + * context object requires that we identify both the device and + * context ID. A stream object requires that we identify device, + * context, and stream ID. + */ + struct { + uint32_t deviceId; + uint32_t contextId; + uint32_t streamId; + } dcs; +} CUpti_ActivityObjectKindId; + +/** + * \brief The kinds of activity overhead. + */ +typedef enum { + /** + * The overhead kind is not known. + */ + CUPTI_ACTIVITY_OVERHEAD_UNKNOWN = 0, + /** + * Compiler(JIT) overhead. + */ + CUPTI_ACTIVITY_OVERHEAD_DRIVER_COMPILER = 1, + /** + * Activity buffer flush overhead. + */ + CUPTI_ACTIVITY_OVERHEAD_CUPTI_BUFFER_FLUSH = 1<<16, + /** + * CUPTI instrumentation overhead. + */ + CUPTI_ACTIVITY_OVERHEAD_CUPTI_INSTRUMENTATION = 2<<16, + /** + * CUPTI resource creation and destruction overhead. + */ + CUPTI_ACTIVITY_OVERHEAD_CUPTI_RESOURCE = 3<<16, + CUPTI_ACTIVITY_OVERHEAD_FORCE_INT = 0x7fffffff +} CUpti_ActivityOverheadKind; + +/** + * \brief The kind of a compute API. + */ +typedef enum { + /** + * The compute API is not known. + */ + CUPTI_ACTIVITY_COMPUTE_API_UNKNOWN = 0, + /** + * The compute APIs are for CUDA. + */ + CUPTI_ACTIVITY_COMPUTE_API_CUDA = 1, + /** + * The compute APIs are for CUDA running + * in MPS (Multi-Process Service) environment. 
+ */ + CUPTI_ACTIVITY_COMPUTE_API_CUDA_MPS = 2, + + CUPTI_ACTIVITY_COMPUTE_API_FORCE_INT = 0x7fffffff +} CUpti_ActivityComputeApiKind; + +/** + * \brief Flags associated with activity records. + * + * Activity record flags. Flags can be combined by bitwise OR to + * associate multiple flags with an activity record. Each flag is + * specific to a certain activity kind, as noted below. + */ +typedef enum { + /** + * Indicates the activity record has no flags. + */ + CUPTI_ACTIVITY_FLAG_NONE = 0, + + /** + * Indicates the activity represents a device that supports + * concurrent kernel execution. Valid for + * CUPTI_ACTIVITY_KIND_DEVICE. + */ + CUPTI_ACTIVITY_FLAG_DEVICE_CONCURRENT_KERNELS = 1 << 0, + + /** + * Indicates if the activity represents a CUdevice_attribute value + * or a CUpti_DeviceAttribute value. Valid for + * CUPTI_ACTIVITY_KIND_DEVICE_ATTRIBUTE. + */ + CUPTI_ACTIVITY_FLAG_DEVICE_ATTRIBUTE_CUDEVICE = 1 << 0, + + /** + * Indicates the activity represents an asynchronous memcpy + * operation. Valid for CUPTI_ACTIVITY_KIND_MEMCPY. + */ + CUPTI_ACTIVITY_FLAG_MEMCPY_ASYNC = 1 << 0, + + /** + * Indicates the activity represents an instantaneous marker. Valid + * for CUPTI_ACTIVITY_KIND_MARKER. + */ + CUPTI_ACTIVITY_FLAG_MARKER_INSTANTANEOUS = 1 << 0, + + /** + * Indicates the activity represents a region start marker. Valid + * for CUPTI_ACTIVITY_KIND_MARKER. + */ + CUPTI_ACTIVITY_FLAG_MARKER_START = 1 << 1, + + /** + * Indicates the activity represents a region end marker. Valid for + * CUPTI_ACTIVITY_KIND_MARKER. + */ + CUPTI_ACTIVITY_FLAG_MARKER_END = 1 << 2, + + /** + * Indicates the activity represents an attempt to acquire a user + * defined synchronization object. + * Valid for CUPTI_ACTIVITY_KIND_MARKER. + */ + CUPTI_ACTIVITY_FLAG_MARKER_SYNC_ACQUIRE = 1 << 3, + + /** + * Indicates the activity represents success in acquiring the + * user defined synchronization object. + * Valid for CUPTI_ACTIVITY_KIND_MARKER. 
+ */ + CUPTI_ACTIVITY_FLAG_MARKER_SYNC_ACQUIRE_SUCCESS = 1 << 4, + + /** + * Indicates the activity represents failure in acquiring the + * user defined synchronization object. + * Valid for CUPTI_ACTIVITY_KIND_MARKER. + */ + CUPTI_ACTIVITY_FLAG_MARKER_SYNC_ACQUIRE_FAILED = 1 << 5, + + /** + * Indicates the activity represents releasing a reservation on + * user defined synchronization object. + * Valid for CUPTI_ACTIVITY_KIND_MARKER. + */ + CUPTI_ACTIVITY_FLAG_MARKER_SYNC_RELEASE = 1 << 6, + + /** + * Indicates the activity represents a marker that does not specify + * a color. Valid for CUPTI_ACTIVITY_KIND_MARKER_DATA. + */ + CUPTI_ACTIVITY_FLAG_MARKER_COLOR_NONE = 1 << 0, + + /** + * Indicates the activity represents a marker that specifies a color + * in alpha-red-green-blue format. Valid for + * CUPTI_ACTIVITY_KIND_MARKER_DATA. + */ + CUPTI_ACTIVITY_FLAG_MARKER_COLOR_ARGB = 1 << 1, + + /** + * The number of bytes requested by each thread. + * Valid for CUpti_ActivityGlobalAccess3. + */ + CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_SIZE_MASK = 0xFF << 0, + /** + * If this bit is set, the access was a load; otherwise it was a + * store. Valid for CUpti_ActivityGlobalAccess3. + */ + CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_LOAD = 1 << 8, + /** + * If this bit is set, the load access was cached; otherwise it was + * uncached. Valid for CUpti_ActivityGlobalAccess3. + */ + CUPTI_ACTIVITY_FLAG_GLOBAL_ACCESS_KIND_CACHED = 1 << 9, + /** + * If this bit is set, the metric value overflowed. Valid + * for CUpti_ActivityMetric and CUpti_ActivityMetricInstance. + */ + CUPTI_ACTIVITY_FLAG_METRIC_OVERFLOWED = 1 << 0, + /** + * If this bit is set, the metric value couldn't be + * calculated. This occurs when one or more values required to calculate the + * metric are missing. Valid for CUpti_ActivityMetric and + * CUpti_ActivityMetricInstance. 
+ */ + CUPTI_ACTIVITY_FLAG_METRIC_VALUE_INVALID = 1 << 1, + /** + * If this bit is set, the source-level metric value couldn't be + * calculated. This occurs when one or more values required to calculate the + * source-level metric cannot be evaluated. + * Valid for CUpti_ActivityInstructionExecution. + */ + CUPTI_ACTIVITY_FLAG_INSTRUCTION_VALUE_INVALID = 1 << 0, + /** + * The mask for the instruction class, \ref CUpti_ActivityInstructionClass. + * Valid for CUpti_ActivityInstructionExecution and + * CUpti_ActivityInstructionCorrelation. + */ + CUPTI_ACTIVITY_FLAG_INSTRUCTION_CLASS_MASK = 0xFF << 1, + /** + * When calling cuptiActivityFlushAll, this flag + * can be set to force CUPTI to flush all records in the buffer, whether + * finished or not. + */ + CUPTI_ACTIVITY_FLAG_FLUSH_FORCED = 1 << 0, + + /** + * The number of bytes requested by each thread. + * Valid for CUpti_ActivitySharedAccess. + */ + CUPTI_ACTIVITY_FLAG_SHARED_ACCESS_KIND_SIZE_MASK = 0xFF << 0, + /** + * If this bit is set, the access was a load; otherwise it was a + * store. Valid for CUpti_ActivitySharedAccess. + */ + CUPTI_ACTIVITY_FLAG_SHARED_ACCESS_KIND_LOAD = 1 << 8, + + /** + * Indicates the activity represents an asynchronous memset + * operation. Valid for CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUPTI_ACTIVITY_FLAG_MEMSET_ASYNC = 1 << 0, + + /** + * Indicates the activity represents thrashing in the CPU. + * Valid for counter of kind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING in + * CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER. + */ + CUPTI_ACTIVITY_FLAG_THRASHING_IN_CPU = 1 << 0, + + /** + * Indicates the activity represents page throttling in the CPU. + * Valid for counter of kind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING in + * CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER. + */ + CUPTI_ACTIVITY_FLAG_THROTTLING_IN_CPU = 1 << 0, + + CUPTI_ACTIVITY_FLAG_FORCE_INT = 0x7fffffff +} CUpti_ActivityFlag; + +/** + * \brief The stall reason for PC sampling activity. 
+ */ +typedef enum { + /** + * Invalid reason + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_INVALID = 0, + /** + * No stall, instruction is selected for issue + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_NONE = 1, + /** + * Warp is blocked because the next instruction is not yet available, + * because of an instruction cache miss, or because of branching effects + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_INST_FETCH = 2, + /** + * Instruction is waiting on an arithmetic dependency + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_EXEC_DEPENDENCY = 3, + /** + * Warp is blocked because it is waiting for a memory access to complete. + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_MEMORY_DEPENDENCY = 4, + /** + * Texture sub-system is fully utilized or has too many outstanding requests. + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_TEXTURE = 5, + /** + * Warp is blocked as it is waiting at __syncthreads() or at a memory barrier. + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_SYNC = 6, + /** + * Warp is blocked waiting for __constant__ memory and immediate memory access to complete. + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_CONSTANT_MEMORY_DEPENDENCY = 7, + /** + * Compute operation cannot be performed due to the required resources not + * being available. + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_PIPE_BUSY = 8, + /** + * Warp is blocked because there are too many pending memory operations. + * On the Kepler architecture this often indicates a high number of memory replays. + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_MEMORY_THROTTLE = 9, + /** + * Warp was ready to issue, but some other warp issued instead. + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_NOT_SELECTED = 10, + /** + * Miscellaneous reasons + */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_OTHER = 11, + /** + * Sleeping. 
+ */ + CUPTI_ACTIVITY_PC_SAMPLING_STALL_SLEEPING = 12, + CUPTI_ACTIVITY_PC_SAMPLING_STALL_FORCE_INT = 0x7fffffff +} CUpti_ActivityPCSamplingStallReason; + +/** + * \brief Sampling period for PC sampling method + * + * Sampling period can be set using \ref cuptiActivityConfigurePCSampling + */ +typedef enum { + /** + * The PC sampling period is not set. + */ + CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_INVALID = 0, + /** + * Minimum sampling period available on the device. + */ + CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_MIN = 1, + /** + * Sampling period in lower range. + */ + CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_LOW = 2, + /** + * Medium sampling period. + */ + CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_MID = 3, + /** + * Sampling period in higher range. + */ + CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_HIGH = 4, + /** + * Maximum sampling period available on the device. + */ + CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_MAX = 5, + CUPTI_ACTIVITY_PC_SAMPLING_PERIOD_FORCE_INT = 0x7fffffff +} CUpti_ActivityPCSamplingPeriod; + +/** + * \brief The kind of a memory copy, indicating the source and + * destination targets of the copy. + * + * Each kind represents the source and destination targets of a memory + * copy. Targets are host, device, and array. + */ +typedef enum { + /** + * The memory copy kind is not known. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_UNKNOWN = 0, + /** + * A host to device memory copy. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_HTOD = 1, + /** + * A device to host memory copy. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_DTOH = 2, + /** + * A host to device array memory copy. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_HTOA = 3, + /** + * A device array to host memory copy. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_ATOH = 4, + /** + * A device array to device array memory copy. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_ATOA = 5, + /** + * A device array to device memory copy. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_ATOD = 6, + /** + * A device to device array memory copy. 
+ */ + CUPTI_ACTIVITY_MEMCPY_KIND_DTOA = 7, + /** + * A device to device memory copy on the same device. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_DTOD = 8, + /** + * A host to host memory copy. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_HTOH = 9, + /** + * A peer to peer memory copy across different devices. + */ + CUPTI_ACTIVITY_MEMCPY_KIND_PTOP = 10, + + CUPTI_ACTIVITY_MEMCPY_KIND_FORCE_INT = 0x7fffffff +} CUpti_ActivityMemcpyKind; + +/** + * \brief The kinds of memory accessed by a memory operation/copy. + * + * Each kind represents the type of the memory + * accessed by a memory operation/copy. + */ +typedef enum { + /** + * The memory kind is unknown. + */ + CUPTI_ACTIVITY_MEMORY_KIND_UNKNOWN = 0, + /** + * The memory is pageable. + */ + CUPTI_ACTIVITY_MEMORY_KIND_PAGEABLE = 1, + /** + * The memory is pinned. + */ + CUPTI_ACTIVITY_MEMORY_KIND_PINNED = 2, + /** + * The memory is on the device. + */ + CUPTI_ACTIVITY_MEMORY_KIND_DEVICE = 3, + /** + * The memory is an array. + */ + CUPTI_ACTIVITY_MEMORY_KIND_ARRAY = 4, + /** + * The memory is managed + */ + CUPTI_ACTIVITY_MEMORY_KIND_MANAGED = 5, + /** + * The memory is device static + */ + CUPTI_ACTIVITY_MEMORY_KIND_DEVICE_STATIC = 6, + /** + * The memory is managed static + */ + CUPTI_ACTIVITY_MEMORY_KIND_MANAGED_STATIC = 7, + CUPTI_ACTIVITY_MEMORY_KIND_FORCE_INT = 0x7fffffff +} CUpti_ActivityMemoryKind; + +/** + * \brief The kind of a preemption activity. + */ +typedef enum { + /** + * The preemption kind is not known. + */ + CUPTI_ACTIVITY_PREEMPTION_KIND_UNKNOWN = 0, + /** + * Preemption to save CDP block. + */ + CUPTI_ACTIVITY_PREEMPTION_KIND_SAVE = 1, + /** + * Preemption to restore CDP block. + */ + CUPTI_ACTIVITY_PREEMPTION_KIND_RESTORE = 2, + CUPTI_ACTIVITY_PREEMPTION_KIND_FORCE_INT = 0x7fffffff +} CUpti_ActivityPreemptionKind; + +/** + * \brief The kind of environment data. Used to indicate what type of + * data is being reported by an environment activity record. + */ +typedef enum { + /** + * Unknown data. 
+ */ + CUPTI_ACTIVITY_ENVIRONMENT_UNKNOWN = 0, + /** + * The environment data is related to speed. + */ + CUPTI_ACTIVITY_ENVIRONMENT_SPEED = 1, + /** + * The environment data is related to temperature. + */ + CUPTI_ACTIVITY_ENVIRONMENT_TEMPERATURE = 2, + /** + * The environment data is related to power. + */ + CUPTI_ACTIVITY_ENVIRONMENT_POWER = 3, + /** + * The environment data is related to cooling. + */ + CUPTI_ACTIVITY_ENVIRONMENT_COOLING = 4, + + CUPTI_ACTIVITY_ENVIRONMENT_COUNT, + CUPTI_ACTIVITY_ENVIRONMENT_KIND_FORCE_INT = 0x7fffffff +} CUpti_ActivityEnvironmentKind; + +/** + * \brief Reasons for clock throttling. + * + * The possible reasons that a clock can be throttled. There can be + * more than one reason that a clock is being throttled so these types + * can be combined by bitwise OR. These are used in the + * clocksThrottleReason field in the Environment Activity Record. + */ +typedef enum { + /** + * Nothing is running on the GPU and the clocks are dropping to idle + * state. + */ + CUPTI_CLOCKS_THROTTLE_REASON_GPU_IDLE = 0x00000001, + /** + * The GPU clocks are limited by a user specified limit. + */ + CUPTI_CLOCKS_THROTTLE_REASON_USER_DEFINED_CLOCKS = 0x00000002, + /** + * A software power scaling algorithm is reducing the clocks below + * requested clocks. + */ + CUPTI_CLOCKS_THROTTLE_REASON_SW_POWER_CAP = 0x00000004, + /** + * Hardware slowdown to reduce the clock by a factor of two or more + * is engaged. This is an indicator of one of the following: 1) + * Temperature is too high, 2) External power brake assertion is + * being triggered (e.g. by the system power supply), 3) Change in + * power state. + */ + CUPTI_CLOCKS_THROTTLE_REASON_HW_SLOWDOWN = 0x00000008, + /** + * Some unspecified factor is reducing the clocks. + */ + CUPTI_CLOCKS_THROTTLE_REASON_UNKNOWN = 0x80000000, + /** + * Throttle reason is not supported for this GPU. + */ + CUPTI_CLOCKS_THROTTLE_REASON_UNSUPPORTED = 0x40000000, + /** + * No clock throttling. 
+ */ + CUPTI_CLOCKS_THROTTLE_REASON_NONE = 0x00000000, + + CUPTI_CLOCKS_THROTTLE_REASON_FORCE_INT = 0x7fffffff +} CUpti_EnvironmentClocksThrottleReason; + +/** + * \brief Scope of the unified memory counter (deprecated in CUDA 7.0) + */ +typedef enum { + /** + * The unified memory counter scope is not known. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_UNKNOWN = 0, + /** + * Collect unified memory counter for single process on one device + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_PROCESS_SINGLE_DEVICE = 1, + /** + * Collect unified memory counter for single process across all devices + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_PROCESS_ALL_DEVICES = 2, + + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_COUNT, + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_SCOPE_FORCE_INT = 0x7fffffff +} CUpti_ActivityUnifiedMemoryCounterScope; + +/** + * \brief Kind of the Unified Memory counter + * + * Many activities are associated with the Unified Memory mechanism; among them + * are transfers from host to device, transfers from device to host, and page + * faults on the host side. + */ +typedef enum { + /** + * The unified memory counter kind is not known. 
+ */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_UNKNOWN = 0, + /** + * Number of bytes transferred from host to device + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD = 1, + /** + * Number of bytes transferred from device to host + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOH = 2, + /** + * Number of CPU page faults; this is only supported on 64 bit + * Linux and Mac platforms + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT = 3, + /** + * Number of GPU page faults; this is only supported on devices with + * compute capability 6.0 and higher and 64 bit Linux platforms + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT = 4, + /** + * Thrashing occurs when data is frequently accessed by + * multiple processors and has to be constantly migrated around + * to achieve data locality. In this case the overhead of migration + * may exceed the benefits of locality. + * This is only supported on 64 bit Linux platforms. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING = 5, + /** + * Throttling is a prevention technique used by the driver to avoid + * further thrashing. Here, the driver doesn't service the fault for + * one of the contending processors for a specific period of time, + * so that the other processor can run at full speed. + * This is only supported on 64 bit Linux platforms. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING = 6, + /** + * In case throttling does not help, the driver tries to pin the memory + * to a processor for a specific period of time. One of the contending + * processors will have slow access to the memory, while the other will + * have fast access. + * This is only supported on 64 bit Linux platforms. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_REMOTE_MAP = 7, + + /** + * Number of bytes transferred from one device to another device. + * This is only supported on 64 bit Linux platforms. 
+ */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOD = 8, + + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_COUNT, + CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_FORCE_INT = 0x7fffffff +} CUpti_ActivityUnifiedMemoryCounterKind; + +/** + * \brief Memory access type for unified memory page faults + * + * This is valid for \ref CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT + * and \ref CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT + */ +typedef enum { + /** + * The unified memory access type is not known + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_UNKNOWN = 0, + /** + * The page fault was triggered by read memory instruction + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_READ = 1, + /** + * The page fault was triggered by write memory instruction + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_WRITE = 2, + /** + * The page fault was triggered by atomic memory instruction + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_ATOMIC = 3, + /** + * The page fault was triggered by memory prefetch operation + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_ACCESS_TYPE_PREFETCH = 4 +} CUpti_ActivityUnifiedMemoryAccessType; + +/** + * \brief Migration cause of the Unified Memory counter + * + * This is valid for \ref CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD and + * \ref CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOH + */ +typedef enum { + /** + * The unified memory migration cause is not known + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_UNKNOWN = 0, + /** + * The unified memory migrated due to an explicit call from + * the user e.g. cudaMemPrefetchAsync + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_USER = 1, + /** + * The unified memory migrated to guarantee data coherence + * e.g. 
CPU/GPU faults on Pascal+ and kernel launch on pre-Pascal GPUs + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_COHERENCE = 2, + /** + * The unified memory was speculatively migrated by the UVM driver + * before being accessed by the destination processor to improve + * performance + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_PREFETCH = 3, + /** + * The unified memory migrated to the CPU because it was evicted to make + * room for another block of memory on the GPU + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_EVICTION = 4, + /** + * The unified memory migrated to another processor because of access counter + * notifications. Only frequently accessed pages are migrated between CPU and GPU, or + * between peer GPUs. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_MIGRATION_CAUSE_ACCESS_COUNTERS = 5, +} CUpti_ActivityUnifiedMemoryMigrationCause; + +/** + * \brief Remote memory map cause of the Unified Memory counter + * + * This is valid for \ref CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_REMOTE_MAP + */ +typedef enum { + /** + * The cause of mapping to remote memory was unknown + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_REMOTE_MAP_CAUSE_UNKNOWN = 0, + /** + * Mapping to remote memory was added to maintain data coherence. 
+ */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_REMOTE_MAP_CAUSE_COHERENCE = 1, + /** + * Mapping to remote memory was added to prevent further thrashing. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_REMOTE_MAP_CAUSE_THRASHING = 2, + /** + * Mapping to remote memory was added to enforce the hints + * specified by the programmer or by performance heuristics of the + * UVM driver. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_REMOTE_MAP_CAUSE_POLICY = 3, + /** + * Mapping to remote memory was added because there is no more + * memory available on the processor and eviction was not + * possible. + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_REMOTE_MAP_CAUSE_OUT_OF_MEMORY = 4, + /** + * Mapping to remote memory was added after the memory was + * evicted to make room for another block of memory on the GPU + */ + CUPTI_ACTIVITY_UNIFIED_MEMORY_REMOTE_MAP_CAUSE_EVICTION = 5, +} CUpti_ActivityUnifiedMemoryRemoteMapCause; + +/** + * \brief SASS instruction classification. + * + * SASS instructions are broadly divided into different classes. Each enum value represents a classification. + */ +typedef enum { + /** + * The instruction class is not known. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_UNKNOWN = 0, + /** + * Represents a 32 bit floating point operation. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_FP_32 = 1, + /** + * Represents a 64 bit floating point operation. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_FP_64 = 2, + /** + * Represents an integer operation. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_INTEGER = 3, + /** + * Represents a bit conversion operation. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_BIT_CONVERSION = 4, + /** + * Represents a control flow instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_CONTROL_FLOW = 5, + /** + * Represents a global load-store instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_GLOBAL = 6, + /** + * Represents a shared load-store instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_SHARED = 7, + /** + * Represents a local load-store instruction. 
+ */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_LOCAL = 8, + /** + * Represents a generic load-store instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_GENERIC = 9, + /** + * Represents a surface load-store instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_SURFACE = 10, + /** + * Represents a constant load instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_CONSTANT = 11, + /** + * Represents a texture load-store instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_TEXTURE = 12, + /** + * Represents a global atomic instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_GLOBAL_ATOMIC = 13, + /** + * Represents a shared atomic instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_SHARED_ATOMIC = 14, + /** + * Represents a surface atomic instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_SURFACE_ATOMIC = 15, + /** + * Represents an inter-thread communication instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_INTER_THREAD_COMMUNICATION = 16, + /** + * Represents a barrier instruction. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_BARRIER = 17, + /** + * Represents some miscellaneous instructions which do not fit in the above classification. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_MISCELLANEOUS = 18, + /** + * Represents a 16 bit floating point operation. + */ + CUPTI_ACTIVITY_INSTRUCTION_CLASS_FP_16 = 19, + + /** + * Represents a uniform instruction. + */ + + CUPTI_ACTIVITY_INSTRUCTION_CLASS_UNIFORM = 20, + + CUPTI_ACTIVITY_INSTRUCTION_CLASS_KIND_FORCE_INT = 0x7fffffff +} CUpti_ActivityInstructionClass; + +/** + * \brief Partitioned global caching option + */ +typedef enum { + /** + * Partitioned global cache config unknown. + */ + CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_UNKNOWN = 0, + /** + * Partitioned global cache not supported. + */ + CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_NOT_SUPPORTED = 1, + /** + * Partitioned global cache config off. + */ + CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_OFF = 2, + /** + * Partitioned global cache config on. 
+ */ + CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_ON = 3, + CUPTI_ACTIVITY_PARTITIONED_GLOBAL_CACHE_CONFIG_FORCE_INT = 0x7fffffff +} CUpti_ActivityPartitionedGlobalCacheConfig; + +/** + * \brief Synchronization type. + * + * The types of synchronization to be used with CUpti_ActivitySynchronization. + */ + +typedef enum { + /** + * Unknown data. + */ + CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_UNKNOWN = 0, + /** + * Event synchronize API. + */ + CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_EVENT_SYNCHRONIZE = 1, + /** + * Stream wait event API. + */ + CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_STREAM_WAIT_EVENT = 2, + /** + * Stream synchronize API. + */ + CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_STREAM_SYNCHRONIZE = 3, + /** + * Context synchronize API. + */ + CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_CONTEXT_SYNCHRONIZE = 4, + + CUPTI_ACTIVITY_SYNCHRONIZATION_TYPE_FORCE_INT = 0x7fffffff +} CUpti_ActivitySynchronizationType; + +/** + * \brief Stream type. + * + * The types of stream to be used with CUpti_ActivityStream. + */ + +typedef enum { + /** + * Unknown data. + */ + CUPTI_ACTIVITY_STREAM_CREATE_FLAG_UNKNOWN = 0, + /** + * Default stream. + */ + CUPTI_ACTIVITY_STREAM_CREATE_FLAG_DEFAULT = 1, + /** + * Non-blocking stream. + */ + CUPTI_ACTIVITY_STREAM_CREATE_FLAG_NON_BLOCKING = 2, + /** + * Null stream. + */ + CUPTI_ACTIVITY_STREAM_CREATE_FLAG_NULL = 3, + /** + * Stream create mask. + */ + CUPTI_ACTIVITY_STREAM_CREATE_MASK = 0xFFFF, + + CUPTI_ACTIVITY_STREAM_CREATE_FLAG_FORCE_INT = 0x7fffffff +} CUpti_ActivityStreamFlag; + +/** +* \brief Link flags. +* +* Describes link properties, to be used with CUpti_ActivityNvLink. +*/ + +typedef enum { + CUPTI_LINK_FLAG_INVALID = 0, + /** + * Is peer to peer access supported by this link. + */ + CUPTI_LINK_FLAG_PEER_ACCESS = (1 << 1), + /** + * Is system memory access supported by this link. + */ + CUPTI_LINK_FLAG_SYSMEM_ACCESS = (1 << 2), + /** + * Is peer atomic access supported by this link. 
+ */ + CUPTI_LINK_FLAG_PEER_ATOMICS = (1 << 3), + /** + * Is system memory atomic access supported by this link. + */ + CUPTI_LINK_FLAG_SYSMEM_ATOMICS = (1 << 4), + + CUPTI_LINK_FLAG_FORCE_INT = 0x7fffffff +} CUpti_LinkFlag; + +/** +* \brief Memory operation types. +* +* Describes the type of memory operation, to be used with CUpti_ActivityMemory3. +*/ + +typedef enum { + CUPTI_ACTIVITY_MEMORY_OPERATION_TYPE_INVALID = 0, + /** + * Memory is allocated. + */ + CUPTI_ACTIVITY_MEMORY_OPERATION_TYPE_ALLOCATION = 1, + /** + * Memory is released. + */ + CUPTI_ACTIVITY_MEMORY_OPERATION_TYPE_RELEASE = 2, + + CUPTI_ACTIVITY_MEMORY_OPERATION_TYPE_FORCE_INT = 0x7fffffff +} CUpti_ActivityMemoryOperationType; + +/** +* \brief Memory pool types. +* +* Describes the type of memory pool, to be used with CUpti_ActivityMemory3. +*/ + +typedef enum { + CUPTI_ACTIVITY_MEMORY_POOL_TYPE_INVALID = 0, + /** + * Memory pool is local to the process. + */ + CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL = 1, + /** + * Memory pool is imported by the process. + */ + CUPTI_ACTIVITY_MEMORY_POOL_TYPE_IMPORTED = 2, + + CUPTI_ACTIVITY_MEMORY_POOL_TYPE_FORCE_INT = 0x7fffffff +} CUpti_ActivityMemoryPoolType; + +/** +* \brief Memory pool operation types. +* +* Describes the type of memory pool operation, to be used with CUpti_ActivityMemoryPool2. +*/ + +typedef enum { + CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_INVALID = 0, + /** + * Memory pool is created. + */ + CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_CREATED = 1, + /** + * Memory pool is destroyed. + */ + CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_DESTROYED = 2, + /** + * Memory pool is trimmed. 
+ */ + CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_TRIMMED = 3, + + CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_FORCE_INT = 0x7fffffff +} CUpti_ActivityMemoryPoolOperationType; + +typedef enum { + CUPTI_CHANNEL_TYPE_INVALID = 0, + CUPTI_CHANNEL_TYPE_COMPUTE = 1, + CUPTI_CHANNEL_TYPE_ASYNC_MEMCPY = 2 +} CUpti_ChannelType; + +/** + * The source-locator ID that indicates an unknown source + * location. There is not an actual CUpti_ActivitySourceLocator object + * corresponding to this value. + */ +#define CUPTI_SOURCE_LOCATOR_ID_UNKNOWN 0 + +/** + * An invalid function index ID. + */ +#define CUPTI_FUNCTION_INDEX_ID_INVALID 0 + +/** + * An invalid/unknown correlation ID. A correlation ID of this value + * indicates that there is no correlation for the activity record. + */ +#define CUPTI_CORRELATION_ID_UNKNOWN 0 + +/** + * An invalid/unknown grid ID. + */ +#define CUPTI_GRID_ID_UNKNOWN 0LL + +/** + * An invalid/unknown timestamp for a start, end, queued, submitted, + * or completed time. + */ +#define CUPTI_TIMESTAMP_UNKNOWN 0LL + +/** + * An invalid/unknown value. + */ +#define CUPTI_SYNCHRONIZATION_INVALID_VALUE -1 + +/** + * An invalid/unknown process id. + */ +#define CUPTI_AUTO_BOOST_INVALID_CLIENT_PID 0 + +/** + * Invalid/unknown NVLink port number. +*/ +#define CUPTI_NVLINK_INVALID_PORT -1 + +/** + * Maximum NVLink port numbers. +*/ +#define CUPTI_MAX_NVLINK_PORTS 32 + +START_PACKED_ALIGNMENT +/** + * \brief Unified Memory counters configuration structure + * + * This structure controls the enable/disable of the various + * Unified Memory counters consisting of scope, kind and other parameters. + * See function \ref cuptiActivityConfigureUnifiedMemoryCounter + */ +typedef struct PACKED_ALIGNMENT { + /** + * Unified Memory counter scope. (deprecated in CUDA 7.0) + */ + CUpti_ActivityUnifiedMemoryCounterScope scope; + + /** + * Unified Memory counter kind + */ + CUpti_ActivityUnifiedMemoryCounterKind kind; + + /** + * Device id of the target device. 
This is relevant only + * for single device scopes. (deprecated in CUDA 7.0) + */ + uint32_t deviceId; + + /** + * Control to enable/disable the counter. To enable the counter, + * set it to a non-zero value; zero disables it. + */ + uint32_t enable; +} CUpti_ActivityUnifiedMemoryCounterConfig; + +/** + * \brief Device auto boost state structure + * + * This structure defines the auto boost state for a device. + * See function \ref cuptiGetAutoBoostState + */ +typedef struct PACKED_ALIGNMENT { + /** + * Returned auto boost state. 1 is returned in case auto boost is enabled, 0 + * otherwise + */ + uint32_t enabled; + + /** + * Id of the process that has set the current boost state. The value will be + * CUPTI_AUTO_BOOST_INVALID_CLIENT_PID if the user does not have the + * permission to query process ids or there is an error in querying the + * process id. + */ + uint32_t pid; + +} CUpti_ActivityAutoBoostState; + +/** + * \brief PC sampling configuration structure + * + * This structure defines the PC sampling configuration. + * + * See function \ref cuptiActivityConfigurePCSampling + */ +typedef struct PACKED_ALIGNMENT { + /** + * Size of the configuration structure. + * The CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + uint32_t size; + /** + * There are 5 levels provided for the sampling period. Each level + * internally maps to a period in terms of cycles. The same level can + * map to a different number of cycles on different GPUs. The number of + * cycles will be chosen to minimize information loss. The period + * chosen will be given by samplingPeriodInCycles in + * \ref CUpti_ActivityPCSamplingRecordInfo for each kernel instance. + */ + CUpti_ActivityPCSamplingPeriod samplingPeriod; + + /** + * This will override the period set by samplingPeriod. 
A value of 0 in samplingPeriod2 indicates that + * samplingPeriod2 should not be used and samplingPeriod should be used instead. + * Valid values for samplingPeriod2 are between 5 and 31, both inclusive. + * This will set the sampling period to (2^samplingPeriod2) cycles. + */ + uint32_t samplingPeriod2; +} CUpti_ActivityPCSamplingConfig; + +/** + * \brief The base activity record. + * + * The activity API uses a CUpti_Activity as a generic representation + * for any activity. The 'kind' field is used to determine the + * specific activity kind, and from that the CUpti_Activity object can + * be cast to the specific activity record type appropriate for that kind. + * + * Note that all activity record types are padded and aligned to + * ensure that each member of the record is naturally aligned. + * + * \see CUpti_ActivityKind + */ +typedef struct PACKED_ALIGNMENT { + /** + * The kind of this activity. + */ + CUpti_ActivityKind kind; +} CUpti_Activity; + +/** + * \brief The activity record for memory copies. (deprecated) + * + * This activity record represents a memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. 
A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the memory copy. + */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the memory copy. Each memory copy + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the memory copy. + */ + uint32_t runtimeCorrelationId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityMemcpy; + +/** + * \brief The activity record for memory copies. (deprecated in CUDA 11.1) + * + * This activity record represents a memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. 
\see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the memory copy. + */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the memory copy. Each memory copy + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the memory copy. + */ + uint32_t runtimeCorrelationId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. 
+ */ + uint64_t graphNodeId; +} CUpti_ActivityMemcpy3; + +/** + * \brief The activity record for memory copies. (deprecated in CUDA 11.6) + * + * This activity record represents a memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the memory copy. 
+ */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the memory copy. Each memory copy + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the memory copy. + */ + uint32_t runtimeCorrelationId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint32_t graphId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t padding; +} CUpti_ActivityMemcpy4; + +/** + * \brief The activity record for memory copies. + * + * This activity record represents a memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. 
A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the memory copy. + */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the memory copy. Each memory copy + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the memory copy. + */ + uint32_t runtimeCorrelationId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint32_t graphId; + + /** + * The ID of the HW channel on which the memory copy is occurring. + */ + uint32_t channelID; + + /** + * The type of the channel + */ + CUpti_ChannelType channelType; + + /** + * Reserved for internal use. 
+ */ + uint32_t pad2; + +} CUpti_ActivityMemcpy5; + +/** + * \brief The activity record for peer-to-peer memory copies. + * + * This activity record represents a peer-to-peer memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY2) but is no longer generated + * by CUPTI. Peer-to-peer memory copy activities are now reported using the + * CUpti_ActivityMemcpyPtoP2 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY2. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see + * CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The ID of the device where memory is being copied from. 
+ */ + uint32_t srcDeviceId; + + /** + * The ID of the context owning the memory being copied from. + */ + uint32_t srcContextId; + + /** + * The ID of the device where memory is being copied to. + */ + uint32_t dstDeviceId; + + /** + * The ID of the context owning the memory being copied to. + */ + uint32_t dstContextId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory copy. + */ + uint32_t correlationId; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityMemcpyPtoP; + +typedef CUpti_ActivityMemcpyPtoP CUpti_ActivityMemcpy2; + +/** + * \brief The activity record for peer-to-peer memory copies. + * (deprecated in CUDA 11.1) + * + * This activity record represents a peer-to-peer memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY2). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY2. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see + * CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. 
A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The ID of the device where memory is being copied from. + */ + uint32_t srcDeviceId; + + /** + * The ID of the context owning the memory being copied from. + */ + uint32_t srcContextId; + + /** + * The ID of the device where memory is being copied to. + */ + uint32_t dstDeviceId; + + /** + * The ID of the context owning the memory being copied to. + */ + uint32_t dstContextId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory copy. + */ + uint32_t correlationId; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed the memcpy through graph launch. + * This field will be 0 if memcpy is not done using graph launch. + */ + uint64_t graphNodeId; +} CUpti_ActivityMemcpyPtoP2; + +/** + * \brief The activity record for peer-to-peer memory copies. + * (deprecated in CUDA 11.6) + * + * This activity record represents a peer-to-peer memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY2). 
+ */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY2. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see + * CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The ID of the device where memory is being copied from. + */ + uint32_t srcDeviceId; + + /** + * The ID of the context owning the memory being copied from. + */ + uint32_t srcContextId; + + /** + * The ID of the device where memory is being copied to. + */ + uint32_t dstDeviceId; + + /** + * The ID of the context owning the memory being copied to. + */ + uint32_t dstContextId; + + /** + * The correlation ID of the memory copy. 
Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory copy. + */ + uint32_t correlationId; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed the memcpy through graph launch. + * This field will be 0 if memcpy is not done using graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint32_t graphId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t padding; +} CUpti_ActivityMemcpyPtoP3; + +/** + * \brief The activity record for peer-to-peer memory copies. + * + * This activity record represents a peer-to-peer memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY2). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY2. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see + * CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. 
A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The ID of the device where memory is being copied from. + */ + uint32_t srcDeviceId; + + /** + * The ID of the context owning the memory being copied from. + */ + uint32_t srcContextId; + + /** + * The ID of the device where memory is being copied to. + */ + uint32_t dstDeviceId; + + /** + * The ID of the context owning the memory being copied to. + */ + uint32_t dstContextId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory copy. + */ + uint32_t correlationId; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed the memcpy through graph launch. + * This field will be 0 if memcpy is not done using graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint32_t graphId; + + /** + * The ID of the HW channel on which the memory copy is occurring. 
+ */ + uint32_t channelID; + + /** + * The type of the channel + */ + CUpti_ChannelType channelType; +} CUpti_ActivityMemcpyPtoP4; + +/** + * \brief The activity record for memset. (deprecated) + * + * This activity record represents a memory set operation + * (CUPTI_ACTIVITY_KIND_MEMSET). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUpti_ActivityKind kind; + + /** + * The value being assigned to memory by the memory set. + */ + uint32_t value; + + /** + * The number of bytes being set by the memory set. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t start; + + /** + * The end timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t end; + + /** + * The ID of the device where the memory set is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory set is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory set is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory set. Each memory set is assigned + * a unique correlation ID that is identical to the correlation ID + * in the driver API activity record that launched the memory set. + */ + uint32_t correlationId; + + /** + * The flags associated with the memset. \see CUpti_ActivityFlag + */ + uint16_t flags; + + /** + * The memory kind of the memory set \see CUpti_ActivityMemoryKind + */ + uint16_t memoryKind; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. 
+ */ + void *reserved0; +} CUpti_ActivityMemset; + +/** + * \brief The activity record for memset. (deprecated in CUDA 11.1) + * + * This activity record represents a memory set operation + * (CUPTI_ACTIVITY_KIND_MEMSET). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUpti_ActivityKind kind; + + /** + * The value being assigned to memory by the memory set. + */ + uint32_t value; + + /** + * The number of bytes being set by the memory set. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t start; + + /** + * The end timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t end; + + /** + * The ID of the device where the memory set is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory set is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory set is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory set. Each memory set is assigned + * a unique correlation ID that is identical to the correlation ID + * in the driver API activity record that launched the memory set. + */ + uint32_t correlationId; + + /** + * The flags associated with the memset. \see CUpti_ActivityFlag + */ + uint16_t flags; + + /** + * The memory kind of the memory set \see CUpti_ActivityMemoryKind + */ + uint16_t memoryKind; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memset through graph launch. 
+ * This field will be 0 if the memset is not executed through graph launch. + */ + uint64_t graphNodeId; +} CUpti_ActivityMemset2; + +/** + * \brief The activity record for memset. (deprecated in CUDA 11.6) + * + * This activity record represents a memory set operation + * (CUPTI_ACTIVITY_KIND_MEMSET). + */ + +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUpti_ActivityKind kind; + + /** + * The value being assigned to memory by the memory set. + */ + uint32_t value; + + /** + * The number of bytes being set by the memory set. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t start; + + /** + * The end timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t end; + + /** + * The ID of the device where the memory set is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory set is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory set is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory set. Each memory set is assigned + * a unique correlation ID that is identical to the correlation ID + * in the driver API activity record that launched the memory set. + */ + uint32_t correlationId; + + /** + * The flags associated with the memset. \see CUpti_ActivityFlag + */ + uint16_t flags; + + /** + * The memory kind of the memory set \see CUpti_ActivityMemoryKind + */ + uint16_t memoryKind; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. 
+ */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memset through graph launch. + * This field will be 0 if the memset is not executed through graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memset through graph launch. + * This field will be 0 if the memset is not executed through graph launch. + */ + uint32_t graphId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t padding; +} CUpti_ActivityMemset3; + +/** + * \brief The activity record for memset. + * + * This activity record represents a memory set operation + * (CUPTI_ACTIVITY_KIND_MEMSET). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUpti_ActivityKind kind; + + /** + * The value being assigned to memory by the memory set. + */ + uint32_t value; + + /** + * The number of bytes being set by the memory set. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t start; + + /** + * The end timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t end; + + /** + * The ID of the device where the memory set is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory set is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory set is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory set. Each memory set is assigned + * a unique correlation ID that is identical to the correlation ID + * in the driver API activity record that launched the memory set. + */ + uint32_t correlationId; + + /** + * The flags associated with the memset. 
\see CUpti_ActivityFlag + */ + uint16_t flags; + + /** + * The memory kind of the memory set. \see CUpti_ActivityMemoryKind + */ + uint16_t memoryKind; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memset through graph launch. + * This field will be 0 if the memset is not executed through graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memset through graph launch. + * This field will be 0 if the memset is not executed through graph launch. + */ + uint32_t graphId; + + /** + * The ID of the HW channel on which the memory set is occurring. + */ + uint32_t channelID; + + /** + * The type of the channel. + */ + CUpti_ChannelType channelType; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad2; + +} CUpti_ActivityMemset4; + +/** + * \brief The activity record for memory. + * + * This activity record represents a memory allocation and free operation + * (CUPTI_ACTIVITY_KIND_MEMORY). + * This activity record provides a single record for the memory + * allocation and memory release operations. + * + * Note: It is recommended to move to the new activity record \ref CUpti_ActivityMemory3 + * enabled using the kind \ref CUPTI_ACTIVITY_KIND_MEMORY2. + * \ref CUpti_ActivityMemory3 provides separate records for memory + * allocation and memory release operations. This allows correlating the + * corresponding driver and runtime API activity record with the memory operation.
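The memcpy and memset records above all share the same timestamp convention: a start and an end of 0 together mean timing was not collected. A minimal sketch of a consumer-side helper, using a simplified stand-in struct (not the real CUPTI record layout):

```c
#include <stdint.h>

/* Simplified stand-in for the timing fields shared by the memcpy/memset
 * records above; the real CUPTI structs carry many more fields. */
typedef struct {
    uint64_t start;  /* ns; 0 together with end == 0 means "not collected" */
    uint64_t end;    /* ns */
} ActivityTiming;

/* Returns the duration in ns, or -1 when timestamps were not collected
 * (both start and end are zero, per the record documentation). */
static int64_t activity_duration_ns(const ActivityTiming *t)
{
    if (t->start == 0 && t->end == 0)
        return -1;
    return (int64_t)(t->end - t->start);
}
```

The sentinel check must look at both fields: a record with a zero start but nonzero end is still a valid measurement.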
+ */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMORY + */ + CUpti_ActivityKind kind; + + /** + * The memory kind requested by the user + */ + CUpti_ActivityMemoryKind memoryKind; + + /** + * The virtual address of the allocation + */ + uint64_t address; + + /** + * The number of bytes of memory allocated. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory operation, i.e. + * the time when memory was allocated, in ns. + */ + uint64_t start; + + /** + * The end timestamp for the memory operation, i.e. + * the time when memory was freed, in ns. + * This will be 0 if memory is not freed in the application + */ + uint64_t end; + + /** + * The program counter of the allocation of memory + */ + uint64_t allocPC; + + /** + * The program counter of the freeing of memory. This will + * be 0 if memory is not freed in the application + */ + uint64_t freePC; + + /** + * The ID of the process to which this record belongs. + */ + uint32_t processId; + + /** + * The ID of the device where the memory allocation is taking place. + */ + uint32_t deviceId; + + /** + * The ID of the context. If context is NULL, \p contextId is set to CUPTI_INVALID_CONTEXT_ID. + */ + uint32_t contextId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Variable name. This name is shared across all activity + * records representing the same symbol, and so should not be + * modified. + */ + const char* name; +} CUpti_ActivityMemory; + +/** + * \brief The activity record for memory. + * + * This activity record represents a memory allocation and free operation + * (CUPTI_ACTIVITY_KIND_MEMORY2). + * This activity record provides separate records for memory allocation and + * memory release operations. + * This allows correlating the corresponding driver and runtime API + * activity record with the memory operation.
+ * + * Note: This activity record is an upgrade over \ref CUpti_ActivityMemory + * enabled using the kind \ref CUPTI_ACTIVITY_KIND_MEMORY. + * \ref CUpti_ActivityMemory provides a single record for the memory + * allocation and memory release operations. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMORY2 + */ + CUpti_ActivityKind kind; + + /** + * The memory operation requested by the user, \ref CUpti_ActivityMemoryOperationType. + */ + CUpti_ActivityMemoryOperationType memoryOperationType; + + /** + * The memory kind requested by the user, \ref CUpti_ActivityMemoryKind. + */ + CUpti_ActivityMemoryKind memoryKind; + + /** + * The correlation ID of the memory operation. Each memory operation is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory operation. + */ + uint32_t correlationId; + + /** + * The virtual address of the allocation. + */ + uint64_t address; + + /** + * The number of bytes of memory allocated. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory operation, in ns. + */ + uint64_t timestamp; + + /** + * The program counter of the memory operation. + */ + uint64_t PC; + + /** + * The ID of the process to which this record belongs. + */ + uint32_t processId; + + /** + * The ID of the device where the memory operation is taking place. + */ + uint32_t deviceId; + + /** + * The ID of the context. If context is NULL, \p contextId is set to CUPTI_INVALID_CONTEXT_ID. + */ + uint32_t contextId; + + /** + * The ID of the stream. If memory operation is not async, \p streamId is set to CUPTI_INVALID_STREAM_ID. + */ + uint32_t streamId; + + /** + * Variable name. This name is shared across all activity + * records representing the same symbol, and so should not be + * modified.
+ */ + const char* name; + + /** + * \p isAsync is set if the memory operation happens through async memory APIs. + */ + uint32_t isAsync; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad1; +#endif + + /** + * The memory pool configuration used for the memory operations. + */ + struct { + /** + * The type of the memory pool, \ref CUpti_ActivityMemoryPoolType + */ + CUpti_ActivityMemoryPoolType memoryPoolType; +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad2; +#endif + /** + * The base address of the memory pool. + */ + uint64_t address; + /** + * The release threshold of the memory pool in bytes. \p releaseThreshold is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t releaseThreshold; + + union { + /** + * The size of the memory pool in bytes. + * \p size is valid if \p memoryPoolType is + * CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t size; + /** + * The processId of the memory pool. + * \p processId is valid if \p memoryPoolType is + * CUPTI_ACTIVITY_MEMORY_POOL_TYPE_IMPORTED, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t processId; + } pool; + } memoryPoolConfig; + +} CUpti_ActivityMemory2; + +/** + * \brief The activity record for memory. + * + * This activity record represents a memory allocation and free operation + * (CUPTI_ACTIVITY_KIND_MEMORY2). + * This activity record provides separate records for memory allocation and + * memory release operations. + * This allows correlating the corresponding driver and runtime API + * activity record with the memory operation. + * + * Note: This activity record is an upgrade over \ref CUpti_ActivityMemory + * enabled using the kind \ref CUPTI_ACTIVITY_KIND_MEMORY. + * \ref CUpti_ActivityMemory provides a single record for the memory + * allocation and memory release operations.
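The `pool` union in the memory-pool configuration above is discriminated by `memoryPoolType`: for a local pool the payload is a size, for an imported pool it is a process ID. A sketch of reading the union safely, using hypothetical stand-in types that mirror the documented fields rather than the real CUPTI definitions:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins mirroring the pool fields documented above. */
typedef enum {
    POOL_TYPE_INVALID  = 0,
    POOL_TYPE_LOCAL    = 1,  /* pool.size is the valid union member      */
    POOL_TYPE_IMPORTED = 2   /* pool.processId is the valid union member */
} PoolType;

typedef struct {
    PoolType memoryPoolType;
    uint64_t releaseThreshold;                    /* valid only for LOCAL */
    union { uint64_t size; uint64_t processId; } pool;
} PoolConfig;

/* Names which union member is meaningful for this record. */
static const char *pool_payload_kind(const PoolConfig *c)
{
    switch (c->memoryPoolType) {
    case POOL_TYPE_LOCAL:    return "size";
    case POOL_TYPE_IMPORTED: return "processId";
    default:                 return "none";
    }
}
```

Reading the wrong union member is not a crash, but it silently misinterprets the value, so consumers should always branch on the type field first.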
+ */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMORY2 + */ + CUpti_ActivityKind kind; + + /** + * The memory operation requested by the user, \ref CUpti_ActivityMemoryOperationType. + */ + CUpti_ActivityMemoryOperationType memoryOperationType; + + /** + * The memory kind requested by the user, \ref CUpti_ActivityMemoryKind. + */ + CUpti_ActivityMemoryKind memoryKind; + + /** + * The correlation ID of the memory operation. Each memory operation is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory operation. + */ + uint32_t correlationId; + + /** + * The virtual address of the allocation. + */ + uint64_t address; + + /** + * The number of bytes of memory allocated. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory operation, in ns. + */ + uint64_t timestamp; + + /** + * The program counter of the memory operation. + */ + uint64_t PC; + + /** + * The ID of the process to which this record belongs. + */ + uint32_t processId; + + /** + * The ID of the device where the memory operation is taking place. + */ + uint32_t deviceId; + + /** + * The ID of the context. If context is NULL, \p contextId is set to CUPTI_INVALID_CONTEXT_ID. + */ + uint32_t contextId; + + /** + * The ID of the stream. If memory operation is not async, \p streamId is set to CUPTI_INVALID_STREAM_ID. + */ + uint32_t streamId; + + /** + * Variable name. This name is shared across all activity + * records representing the same symbol, and so should not be + * modified. + */ + const char* name; + + /** + * \p isAsync is set if the memory operation happens through async memory APIs. + */ + uint32_t isAsync; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad1; +#endif + + /** + * The memory pool configuration used for the memory operations.
+ */ + struct PACKED_ALIGNMENT { + /** + * The type of the memory pool, \ref CUpti_ActivityMemoryPoolType + */ + CUpti_ActivityMemoryPoolType memoryPoolType; +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad2; +#endif + /** + * The base address of the memory pool. + */ + uint64_t address; + /** + * The release threshold of the memory pool in bytes. \p releaseThreshold is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t releaseThreshold; + + union { + /** + * The size of the memory pool in bytes. + * \p size is valid if \p memoryPoolType is + * CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t size; + /** + * The processId of the memory pool. + * \p processId is valid if \p memoryPoolType is + * CUPTI_ACTIVITY_MEMORY_POOL_TYPE_IMPORTED, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t processId; + } pool; + + /** + * The utilized size of the memory pool. \p utilizedSize is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t utilizedSize; + } memoryPoolConfig; + +} CUpti_ActivityMemory3; + +/** + * \brief The activity record for memory pool. + * + * This activity record represents a memory pool creation, destruction and + * trimming (CUPTI_ACTIVITY_KIND_MEMORY_POOL). + * This activity record provides separate records for memory pool creation, + * destruction and trimming operations. + * This allows correlating the corresponding driver and runtime API + * activity record with the memory pool operation. + * + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMORY_POOL + */ + CUpti_ActivityKind kind; + + /** + * The memory operation requested by the user, \ref CUpti_ActivityMemoryPoolOperationType.
+ */ + CUpti_ActivityMemoryPoolOperationType memoryPoolOperationType; + + /** + * The type of the memory pool, \ref CUpti_ActivityMemoryPoolType + */ + CUpti_ActivityMemoryPoolType memoryPoolType; + + /** + * The correlation ID of the memory pool operation. Each memory pool + * operation is assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory operation. + */ + uint32_t correlationId; + + /** + * The ID of the process to which this record belongs. + */ + uint32_t processId; + + /** + * The ID of the device where the memory pool is created. + */ + uint32_t deviceId; + + /** + * The minimum bytes to keep of the memory pool. \p minBytesToKeep is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_TRIMMED, + * \ref CUpti_ActivityMemoryPoolOperationType + */ + size_t minBytesToKeep; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The virtual address of the allocation. + */ + uint64_t address; + + /** + * The size of the memory pool operation in bytes. \p size is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t size; + + /** + * The release threshold of the memory pool. \p releaseThreshold is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t releaseThreshold; + + /** + * The start timestamp for the memory operation, in ns. + */ + uint64_t timestamp; +} CUpti_ActivityMemoryPool; + +/** + * \brief The activity record for memory pool. + * + * This activity record represents a memory pool creation, destruction and + * trimming (CUPTI_ACTIVITY_KIND_MEMORY_POOL). + * This activity record provides separate records for memory pool creation, + * destruction and trimming operations. + * This allows correlating the corresponding driver and runtime API + * activity record with the memory pool operation.
+ * + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMORY_POOL + */ + CUpti_ActivityKind kind; + + /** + * The memory operation requested by the user, \ref CUpti_ActivityMemoryPoolOperationType. + */ + CUpti_ActivityMemoryPoolOperationType memoryPoolOperationType; + + /** + * The type of the memory pool, \ref CUpti_ActivityMemoryPoolType + */ + CUpti_ActivityMemoryPoolType memoryPoolType; + + /** + * The correlation ID of the memory pool operation. Each memory pool + * operation is assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory operation. + */ + uint32_t correlationId; + + /** + * The ID of the process to which this record belongs. + */ + uint32_t processId; + + /** + * The ID of the device where the memory pool is created. + */ + uint32_t deviceId; + + /** + * The minimum bytes to keep of the memory pool. \p minBytesToKeep is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_TRIMMED, + * \ref CUpti_ActivityMemoryPoolOperationType + */ + size_t minBytesToKeep; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The virtual address of the allocation. + */ + uint64_t address; + + /** + * The size of the memory pool operation in bytes. \p size is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t size; + + /** + * The release threshold of the memory pool. \p releaseThreshold is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t releaseThreshold; + + /** + * The start timestamp for the memory operation, in ns. + */ + uint64_t timestamp; + + /** + * The utilized size of the memory pool. \p utilizedSize is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType.
+ */ + uint64_t utilizedSize; +} CUpti_ActivityMemoryPool2; + +/** + * \brief The activity record for kernel. (deprecated) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel8 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL + * or CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t cacheConfigRequested; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t cacheConfigExecuted; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. 
+ */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the kernel. + */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the kernel. Each kernel execution + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the kernel. + */ + uint32_t runtimeCorrelationId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityKernel; + +/** + * \brief The activity record for kernel. (deprecated) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel8 activity record.
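The kernel records carry the launch geometry as six independent fields (`gridX..gridZ`, `blockX..blockZ`). A common consumer-side derivation is the total thread count; a minimal sketch with a hypothetical stand-in struct for just those fields:

```c
#include <stdint.h>

/* Simplified stand-in for the launch-geometry fields of the kernel
 * records above; field names follow the record documentation. */
typedef struct {
    int32_t gridX, gridY, gridZ;
    int32_t blockX, blockY, blockZ;
} LaunchGeometry;

/* Total threads launched: (blocks per grid) * (threads per block).
 * Widen to 64 bits before multiplying to avoid int32 overflow on
 * large grids. */
static uint64_t total_threads(const LaunchGeometry *g)
{
    uint64_t blocks  = (uint64_t)g->gridX * (uint64_t)g->gridY * (uint64_t)g->gridZ;
    uint64_t threads = (uint64_t)g->blockX * (uint64_t)g->blockY * (uint64_t)g->blockZ;
    return blocks * threads;
}
```

For example, a 2x2x1 grid of 32x8x1 blocks is 4 blocks of 256 threads, i.e. 1024 threads total.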
+ */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel.
+ */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityKernel2; + +/** + * \brief The activity record for a kernel (CUDA 6.5 (with sm_52 support) onwards). + * (deprecated in CUDA 9.0) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL). + * Kernel activities are now reported using the CUpti_ActivityKernel8 activity + * record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL.
+ */ + CUpti_ActivityKind kind; + + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself.
A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use.
+ */ + void *reserved0; +} CUpti_ActivityKernel3; + +/** + * \brief The type of the CUDA kernel launch. + */ +typedef enum { + /** + * The kernel was launched via a regular kernel call + */ + CUPTI_ACTIVITY_LAUNCH_TYPE_REGULAR = 0, + /** + * The kernel was launched via API \ref cudaLaunchCooperativeKernel() or + * \ref cuLaunchCooperativeKernel() + */ + CUPTI_ACTIVITY_LAUNCH_TYPE_COOPERATIVE_SINGLE_DEVICE = 1, + /** + * The kernel was launched via API \ref cudaLaunchCooperativeKernelMultiDevice() or + * \ref cuLaunchCooperativeKernelMultiDevice() + */ + CUPTI_ACTIVITY_LAUNCH_TYPE_COOPERATIVE_MULTI_DEVICE = 2 +} CUpti_ActivityLaunchType; + +/** + * \brief The activity record for a kernel (CUDA 9.0(with sm_70 support) onwards). + * (deprecated in CUDA 11.0) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL). + * Kernel activities are now reported using the CUpti_ActivityKernel8 activity + * record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. 
+ */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel.
+ */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * Command buffer is a buffer written by the CUDA driver to send commands + * like kernel launch, memory copy, etc. to the GPU. All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress.
+ */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch. \see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function in percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver. + */ + uint32_t sharedMemoryExecuted; +} CUpti_ActivityKernel4; + +/** + * \brief The shared memory limit per block config for a kernel. + * This should be used to set 'cudaOccFuncShmemConfig' field in occupancy calculator API + */ +typedef enum { + /* The shared memory limit config is default */ + CUPTI_FUNC_SHMEM_LIMIT_DEFAULT = 0x00, + /* User has opted for a higher dynamic shared memory limit using function attribute + 'cudaFuncAttributeMaxDynamicSharedMemorySize' for runtime API or + CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES for driver API */ + CUPTI_FUNC_SHMEM_LIMIT_OPTIN = 0x01, + CUPTI_FUNC_SHMEM_LIMIT_FORCE_INT = 0x7fffffff +} CUpti_FuncShmemLimitConfig; + +/** + * \brief The activity record for a kernel (CUDA 11.0 (with sm_80 support) onwards).
+ * (deprecated in CUDA 11.2) + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel8 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. 
+ */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes.
+ */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * Command buffer is a buffer written by the CUDA driver to send commands + * like kernel launch, memory copy, etc. to the GPU. All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch.
\see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function in percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver. + */ + uint32_t sharedMemoryExecuted; + + /** + * The unique ID of the graph node that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint64_t graphNodeId; + + /** + * The shared memory limit config for the kernel. This field shows whether the user has opted for a + * higher per block limit of dynamic shared memory. + */ + CUpti_FuncShmemLimitConfig shmemLimitConfig; + + /** + * The unique ID of the graph that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint32_t graphId; +} CUpti_ActivityKernel5; + +/** + * \brief The activity record for a kernel. (deprecated in CUDA 11.6) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel8 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL.
+ */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. 
+ */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified.
+ */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * Command buffer is a buffer written by the CUDA driver to send commands + * like kernel launch, memory copy, etc. to the GPU. All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch. \see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function in percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver.
+ */ + uint32_t sharedMemoryExecuted; + + /** + * The unique ID of the graph node that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint64_t graphNodeId; + + /** + * The shared memory limit config for the kernel. This field shows whether the user has opted for a + * higher per block limit of dynamic shared memory. + */ + CUpti_FuncShmemLimitConfig shmemLimitConfig; + + /** + * The unique ID of the graph that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint32_t graphId; + + /** + * The pointer to the access policy window. The structure CUaccessPolicyWindow is + * defined in cuda.h. + */ + CUaccessPolicyWindow *pAccessPolicyWindow; +} CUpti_ActivityKernel6; + +/** + * \brief The activity record for a kernel. (deprecated in CUDA 11.8) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel8 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel.
The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel.
+ */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * Command buffer is a buffer written by the CUDA driver to send commands + * like kernel launch, memory copy, etc. to the GPU.
All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch. \see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function in percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver. + */ + uint32_t sharedMemoryExecuted; + + /** + * The unique ID of the graph node that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint64_t graphNodeId; + + /** + * The shared memory limit config for the kernel. This field shows whether the user has opted for a + * higher per block limit of dynamic shared memory. + */ + CUpti_FuncShmemLimitConfig shmemLimitConfig; + + /** + * The unique ID of the graph that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs.
+ */ + uint32_t graphId; + + /** + * The pointer to the access policy window. The structure CUaccessPolicyWindow is + * defined in cuda.h. + */ + CUaccessPolicyWindow *pAccessPolicyWindow; + + /** + * The ID of the HW channel on which the kernel is launched. + */ + uint32_t channelID; + + /** + * The type of the channel. + */ + CUpti_ChannelType channelType; + +} CUpti_ActivityKernel7; + +/** + * \brief The activity record for a kernel. + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel.
Partitioned
+ * global caching is required to enable caching on certain chips, such as
+ * devices with compute capability 5.2. Partitioned global caching can be
+ * automatically disabled if the occupancy requirement of the launch cannot
+ * support caching.
+ */
+ CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted;
+
+ /**
+ * The start timestamp for the kernel execution, in ns. A value of 0
+ * for both the start and end timestamps indicates that timestamp
+ * information could not be collected for the kernel.
+ */
+ uint64_t start;
+
+ /**
+ * The end timestamp for the kernel execution, in ns. A value of 0
+ * for both the start and end timestamps indicates that timestamp
+ * information could not be collected for the kernel.
+ */
+ uint64_t end;
+
+ /**
+ * The completed timestamp for the kernel execution, in ns. It
+ * represents the completion of all its child kernels and the
+ * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that
+ * the completion time is unknown.
+ */
+ uint64_t completed;
+
+ /**
+ * The ID of the device where the kernel is executing.
+ */
+ uint32_t deviceId;
+
+ /**
+ * The ID of the context where the kernel is executing.
+ */
+ uint32_t contextId;
+
+ /**
+ * The ID of the stream where the kernel is executing.
+ */
+ uint32_t streamId;
+
+ /**
+ * The X-dimension grid size for the kernel.
+ */
+ int32_t gridX;
+
+ /**
+ * The Y-dimension grid size for the kernel.
+ */
+ int32_t gridY;
+
+ /**
+ * The Z-dimension grid size for the kernel.
+ */
+ int32_t gridZ;
+
+ /**
+ * The X-dimension block size for the kernel.
+ */
+ int32_t blockX;
+
+ /**
+ * The Y-dimension block size for the kernel.
+ */
+ int32_t blockY;
+
+ /**
+ * The Z-dimension block size for the kernel.
+ */
+ int32_t blockZ;
+
+ /**
+ * The static shared memory allocated for the kernel, in bytes.
+ */
+ int32_t staticSharedMemory;
+
+ /**
+ * The dynamic shared memory reserved for the kernel, in bytes.
+ */
+ int32_t dynamicSharedMemory;
+
+ /**
+ * The amount of local memory reserved for each thread, in bytes.
+ */
+ uint32_t localMemoryPerThread;
+
+ /**
+ * The total amount of local memory reserved for the kernel, in
+ * bytes (deprecated in CUDA 11.8).
+ * Refer to the field localMemoryTotal_v2.
+ */
+ uint32_t localMemoryTotal;
+
+ /**
+ * The correlation ID of the kernel. Each kernel execution is
+ * assigned a unique correlation ID that is identical to the
+ * correlation ID in the driver or runtime API activity record that
+ * launched the kernel.
+ */
+ uint32_t correlationId;
+
+ /**
+ * The grid ID of the kernel. Each kernel is assigned a unique
+ * grid ID at runtime.
+ */
+ int64_t gridId;
+
+ /**
+ * The name of the kernel. This name is shared across all activity
+ * records representing the same kernel, and so should not be
+ * modified.
+ */
+ const char *name;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ void *reserved0;
+
+ /**
+ * The timestamp when the kernel is queued up in the command buffer, in ns.
+ * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time
+ * could not be collected for the kernel. This timestamp is not collected
+ * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to
+ * enable collection.
+ *
+ * The command buffer is a buffer written by the CUDA driver to send
+ * commands like kernel launch, memory copy, etc. to the GPU. All launches
+ * of CUDA kernels are asynchronous with respect to the host: the host
+ * requests the launch by writing commands into the command buffer and then
+ * returns without checking the GPU's progress.
+ */
+ uint64_t queued;
+
+ /**
+ * The timestamp when the command buffer containing the kernel launch
+ * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN
+ * indicates that the submitted time could not be collected for the kernel.
+ * This timestamp is not collected by default. Use API \ref
+ * cuptiActivityEnableLatencyTimestamps() to enable collection.
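Once latency timestamps are enabled, the gap between `queued` and `submitted` measures how long a launch sat in the command buffer. A minimal sketch of that computation; `TIMESTAMP_UNKNOWN` is a local placeholder standing in for the constant defined in cupti_activity.h:

```c
#include <stdint.h>

/* Local placeholder for CUPTI_TIMESTAMP_UNKNOWN so the sketch is
 * self-contained; the real constant comes from cupti_activity.h. */
#define TIMESTAMP_UNKNOWN 0

/* Time in ns a launch spent in the command buffer between being queued
 * and being submitted to the GPU. Returns 0 when either timestamp was
 * not collected (collection requires enabling latency timestamps). */
uint64_t queue_latency_ns(uint64_t queued, uint64_t submitted)
{
    if (queued == TIMESTAMP_UNKNOWN || submitted == TIMESTAMP_UNKNOWN)
        return 0;
    return submitted - queued;
}
```
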
+ */
+ uint64_t submitted;
+
+ /**
+ * This field indicates whether the kernel was executed via a regular launch or via a
+ * single/multi device cooperative launch. \see CUpti_ActivityLaunchType
+ */
+ uint8_t launchType;
+
+ /**
+ * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was
+ * updated for the kernel launch.
+ */
+ uint8_t isSharedMemoryCarveoutRequested;
+
+ /**
+ * Shared memory carveout value requested for the function, as a percentage of
+ * the total resource. The value will be updated only if the field
+ * isSharedMemoryCarveoutRequested is set.
+ */
+ uint8_t sharedMemoryCarveoutRequested;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint8_t padding;
+
+ /**
+ * Shared memory size set by the driver.
+ */
+ uint32_t sharedMemoryExecuted;
+
+ /**
+ * The unique ID of the graph node that launched this kernel through graph launch APIs.
+ * This field will be 0 if the kernel is not launched through graph launch APIs.
+ */
+ uint64_t graphNodeId;
+
+ /**
+ * The shared memory limit config for the kernel. This field shows whether the user has opted for a
+ * higher per-block limit of dynamic shared memory.
+ */
+ CUpti_FuncShmemLimitConfig shmemLimitConfig;
+
+ /**
+ * The unique ID of the graph that launched this kernel through graph launch APIs.
+ * This field will be 0 if the kernel is not launched through graph launch APIs.
+ */
+ uint32_t graphId;
+
+ /**
+ * The pointer to the access policy window. The structure CUaccessPolicyWindow is
+ * defined in cuda.h.
+ */
+ CUaccessPolicyWindow *pAccessPolicyWindow;
+
+ /**
+ * The ID of the HW channel on which the kernel is launched.
+ */
+ uint32_t channelID;
+
+ /**
+ * The type of the channel.
+ */
+ CUpti_ChannelType channelType;
+
+
+ /**
+ * The X-dimension cluster size for the kernel.
+ * Field is valid for devices with compute capability 9.0 and higher
+ */
+ uint32_t clusterX;
+
+ /**
+ * The Y-dimension cluster size for the kernel.
+ * Field is valid for devices with compute capability 9.0 and higher + */ + uint32_t clusterY; + + /** + * The Z-dimension cluster size for the kernel. + * Field is valid for devices with compute capability 9.0 and higher + */ + uint32_t clusterZ; + + /** + * The cluster scheduling policy for the kernel. Refer CUclusterSchedulingPolicy + * Field is valid for devices with compute capability 9.0 and higher + */ + uint32_t clusterSchedulingPolicy; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint64_t localMemoryTotal_v2; +} CUpti_ActivityKernel8; + +/** + * \brief The activity record for CDP (CUDA Dynamic Parallelism) + * kernel. + * + * This activity record represents a CDP kernel execution. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_CDP_KERNEL + */ + CUpti_ActivityKind kind; + + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. 
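The `start`/`end` pair in these kernel records uses a documented sentinel: both fields are 0 when timestamps could not be collected. A duration helper should honor that convention rather than blindly subtracting; a minimal self-contained sketch:

```c
#include <stdint.h>

/* Kernel duration in ns, following the documented convention that a
 * value of 0 for BOTH the start and end timestamps means timestamp
 * information could not be collected. Returns 0 in that case. */
uint64_t kernel_duration_ns(uint64_t start, uint64_t end)
{
    if (start == 0 && end == 0)
        return 0;
    return end - start;
}
```
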
+ */
+ uint64_t end;
+
+ /**
+ * The ID of the device where the kernel is executing.
+ */
+ uint32_t deviceId;
+
+ /**
+ * The ID of the context where the kernel is executing.
+ */
+ uint32_t contextId;
+
+ /**
+ * The ID of the stream where the kernel is executing.
+ */
+ uint32_t streamId;
+
+ /**
+ * The X-dimension grid size for the kernel.
+ */
+ int32_t gridX;
+
+ /**
+ * The Y-dimension grid size for the kernel.
+ */
+ int32_t gridY;
+
+ /**
+ * The Z-dimension grid size for the kernel.
+ */
+ int32_t gridZ;
+
+ /**
+ * The X-dimension block size for the kernel.
+ */
+ int32_t blockX;
+
+ /**
+ * The Y-dimension block size for the kernel.
+ */
+ int32_t blockY;
+
+ /**
+ * The Z-dimension block size for the kernel.
+ */
+ int32_t blockZ;
+
+ /**
+ * The static shared memory allocated for the kernel, in bytes.
+ */
+ int32_t staticSharedMemory;
+
+ /**
+ * The dynamic shared memory reserved for the kernel, in bytes.
+ */
+ int32_t dynamicSharedMemory;
+
+ /**
+ * The amount of local memory reserved for each thread, in bytes.
+ */
+ uint32_t localMemoryPerThread;
+
+ /**
+ * The total amount of local memory reserved for the kernel, in
+ * bytes.
+ */
+ uint32_t localMemoryTotal;
+
+ /**
+ * The correlation ID of the kernel. Each kernel execution is
+ * assigned a unique correlation ID that is identical to the
+ * correlation ID in the driver API activity record that launched
+ * the kernel.
+ */
+ uint32_t correlationId;
+
+ /**
+ * The grid ID of the kernel. Each kernel execution
+ * is assigned a unique grid ID.
+ */
+ int64_t gridId;
+
+ /**
+ * The grid ID of the parent kernel.
+ */
+ int64_t parentGridId;
+
+ /**
+ * The timestamp when the kernel is queued up, in ns. A value of
+ * CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time is
+ * unknown.
+ */
+ uint64_t queued;
+
+ /**
+ * The timestamp when the kernel is submitted to the GPU, in ns. A value
+ * of CUPTI_TIMESTAMP_UNKNOWN indicates that the submission time is
+ * unknown.
+ */ + uint64_t submitted; + + /** + * The timestamp when kernel is marked as completed, in ns. A value + * of CUPTI_TIMESTAMP_UNKNOWN indicates that the completion time is + * unknown. + */ + uint64_t completed; + + /** + * The X-dimension of the parent block. + */ + uint32_t parentBlockX; + + /** + * The Y-dimension of the parent block. + */ + uint32_t parentBlockY; + + /** + * The Z-dimension of the parent block. + */ + uint32_t parentBlockZ; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; +} CUpti_ActivityCdpKernel; + +/** + * \brief The activity record for a preemption of a CDP kernel. + * + * This activity record represents a preemption of a CDP kernel. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_PREEMPTION + */ + CUpti_ActivityKind kind; + + /** + * kind of the preemption + */ + CUpti_ActivityPreemptionKind preemptionKind; + + /** + * The timestamp of the preemption, in ns. A value of 0 indicates + * that timestamp information could not be collected for the + * preemption. + */ + uint64_t timestamp; + + /** + * The grid-id of the block that is preempted + */ + int64_t gridId; + + /** + * The X-dimension of the block that is preempted + */ + uint32_t blockX; + + /** + * The Y-dimension of the block that is preempted + */ + uint32_t blockY; + + /** + * The Z-dimension of the block that is preempted + */ + uint32_t blockZ; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +} CUpti_ActivityPreemption; + +/** + * \brief The activity record for a driver or runtime API invocation. + * + * This activity record represents an invocation of a driver or + * runtime API (CUPTI_ACTIVITY_KIND_DRIVER and + * CUPTI_ACTIVITY_KIND_RUNTIME). 
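Driver/runtime API records and the kernel (or memcpy/memset) records they trigger share a correlation ID, which is how a profiler stitches a host-side call to its device-side work. A minimal sketch of that matching, using hypothetical trimmed-down record types in place of the real CUpti_ActivityAPI and kernel records:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical simplified records carrying only the fields needed to
 * demonstrate correlation; the real types live in cupti_activity.h. */
typedef struct { uint32_t correlationId; uint64_t start, end; } ApiRecord;
typedef struct { uint32_t correlationId; uint64_t start, end; } KernelRecord;

/* An API record and the kernel it launched carry the same correlation
 * ID; a linear scan is enough for a sketch. Returns NULL when no
 * kernel record matches. */
const KernelRecord *
kernel_for_api(const ApiRecord *api, const KernelRecord *kernels, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        if (kernels[i].correlationId == api->correlationId)
            return &kernels[i];
    return NULL;
}
```
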
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_DRIVER,
+ * CUPTI_ACTIVITY_KIND_RUNTIME, or CUPTI_ACTIVITY_KIND_INTERNAL_LAUNCH_API.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The ID of the driver or runtime function.
+ */
+ CUpti_CallbackId cbid;
+
+ /**
+ * The start timestamp for the function, in ns. A value of 0 for
+ * both the start and end timestamps indicates that timestamp
+ * information could not be collected for the function.
+ */
+ uint64_t start;
+
+ /**
+ * The end timestamp for the function, in ns. A value of 0 for both
+ * the start and end timestamps indicates that timestamp information
+ * could not be collected for the function.
+ */
+ uint64_t end;
+
+ /**
+ * The ID of the process where the driver or runtime CUDA function
+ * is executing.
+ */
+ uint32_t processId;
+
+ /**
+ * The ID of the thread where the driver or runtime CUDA function is
+ * executing.
+ */
+ uint32_t threadId;
+
+ /**
+ * The correlation ID of the driver or runtime CUDA function. Each
+ * function invocation is assigned a unique correlation ID that is
+ * identical to the correlation ID in the memcpy, memset, or kernel
+ * activity record that is associated with this function.
+ */
+ uint32_t correlationId;
+
+ /**
+ * The return value for the function. For a CUDA driver function
+ * this will be a CUresult value, and for a CUDA runtime function
+ * this will be a cudaError_t value.
+ */
+ uint32_t returnValue;
+} CUpti_ActivityAPI;
+
+/**
+ * \brief The activity record for a CUPTI event.
+ *
+ * This activity record represents a CUPTI event value
+ * (CUPTI_ACTIVITY_KIND_EVENT). This activity record kind is not
+ * produced by the activity API but is included for completeness and
+ * ease-of-use. Profile frameworks built on top of CUPTI that collect
+ * event data may choose to use this type to store the collected event
+ * data.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_EVENT.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The event ID.
+ */
+ CUpti_EventID id;
+
+ /**
+ * The event value.
+ */
+ uint64_t value;
+
+ /**
+ * The event domain ID.
+ */
+ CUpti_EventDomainID domain;
+
+ /**
+ * The correlation ID of the event. Use of this ID is user-defined,
+ * but typically this ID value will equal the correlation ID of the
+ * kernel for which the event was gathered.
+ */
+ uint32_t correlationId;
+} CUpti_ActivityEvent;
+
+/**
+ * \brief The activity record for a CUPTI event with instance
+ * information.
+ *
+ * This activity record represents a CUPTI event value for a
+ * specific event domain instance
+ * (CUPTI_ACTIVITY_KIND_EVENT_INSTANCE). This activity record kind is
+ * not produced by the activity API but is included for completeness
+ * and ease-of-use. Profile frameworks built on top of CUPTI that
+ * collect event data may choose to use this type to store the
+ * collected event data. This activity record should be used when
+ * event domain instance information needs to be associated with the
+ * event.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be
+ * CUPTI_ACTIVITY_KIND_EVENT_INSTANCE.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The event ID.
+ */
+ CUpti_EventID id;
+
+ /**
+ * The event domain ID.
+ */
+ CUpti_EventDomainID domain;
+
+ /**
+ * The event domain instance.
+ */
+ uint32_t instance;
+
+ /**
+ * The event value.
+ */
+ uint64_t value;
+
+ /**
+ * The correlation ID of the event. Use of this ID is user-defined,
+ * but typically this ID value will equal the correlation ID of the
+ * kernel for which the event was gathered.
+ */
+ uint32_t correlationId;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+} CUpti_ActivityEventInstance;
+
+/**
+ * \brief The activity record for a CUPTI metric.
+ * + * This activity record represents the collection of a CUPTI metric + * value (CUPTI_ACTIVITY_KIND_METRIC). This activity record kind is not + * produced by the activity API but is included for completeness and + * ease-of-use. Profile frameworks built on top of CUPTI that collect + * metric data may choose to use this type to store the collected metric + * data. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_METRIC. + */ + CUpti_ActivityKind kind; + + /** + * The metric ID. + */ + CUpti_MetricID id; + + /** + * The metric value. + */ + CUpti_MetricValue value; + + /** + * The correlation ID of the metric. Use of this ID is user-defined, + * but typically this ID value will equal the correlation ID of the + * kernel for which the metric was gathered. + */ + uint32_t correlationId; + + /** + * The properties of this metric. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t pad[3]; +} CUpti_ActivityMetric; + +/** + * \brief The activity record for a CUPTI metric with instance + * information. + * + * This activity record represents a CUPTI metric value + * for a specific metric domain instance + * (CUPTI_ACTIVITY_KIND_METRIC_INSTANCE). This activity record kind + * is not produced by the activity API but is included for + * completeness and ease-of-use. Profile frameworks built on top of + * CUPTI that collect metric data may choose to use this type to store + * the collected metric data. This activity record should be used when + * metric domain instance information needs to be associated with the + * metric. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be + * CUPTI_ACTIVITY_KIND_METRIC_INSTANCE. + */ + CUpti_ActivityKind kind; + + /** + * The metric ID. + */ + CUpti_MetricID id; + + /** + * The metric value. + */ + CUpti_MetricValue value; + + /** + * The metric domain instance. 
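Per-instance records like these are typically aggregated across all domain instances to recover one device-wide value. A minimal sketch with a hypothetical simplified record; note the real CUpti_ActivityMetricInstance stores a CUpti_MetricValue union, not a plain integer as used here:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical simplified metric-instance record; the real type stores
 * a CUpti_MetricValue union rather than a plain uint64_t. */
typedef struct {
    uint32_t id;        /* metric ID */
    uint32_t instance;  /* metric domain instance */
    uint64_t value;     /* per-instance value */
} MetricInstanceRecord;

/* Sum one metric across all of its domain instances, e.g. to recover
 * a device-wide count from per-instance records. */
uint64_t sum_metric(const MetricInstanceRecord *recs, size_t n, uint32_t id)
{
    uint64_t total = 0;
    for (size_t i = 0; i < n; ++i)
        if (recs[i].id == id)
            total += recs[i].value;
    return total;
}
```
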
+ */
+ uint32_t instance;
+
+ /**
+ * The correlation ID of the metric. Use of this ID is user-defined,
+ * but typically this ID value will equal the correlation ID of the
+ * kernel for which the metric was gathered.
+ */
+ uint32_t correlationId;
+
+ /**
+ * The properties of this metric. \see CUpti_ActivityFlag
+ */
+ uint8_t flags;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint8_t pad[7];
+} CUpti_ActivityMetricInstance;
+
+/**
+ * \brief The activity record for source locator.
+ *
+ * This activity record represents a source locator
+ * (CUPTI_ACTIVITY_KIND_SOURCE_LOCATOR).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_SOURCE_LOCATOR.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The ID for the source path, will be used in all the source level
+ * results.
+ */
+ uint32_t id;
+
+ /**
+ * The line number in the source.
+ */
+ uint32_t lineNumber;
+
+#ifdef CUPTILP64
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+#endif
+
+ /**
+ * The path for the file.
+ */
+ const char *fileName;
+} CUpti_ActivitySourceLocator;
+
+/**
+ * \brief The activity record for source-level global
+ * access. (deprecated)
+ *
+ * This activity records the locations of the global
+ * accesses in the source (CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS).
+ * Global access activities are now reported using the
+ * CUpti_ActivityGlobalAccess3 activity record.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The properties of this global access.
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * The pc offset for the access.
+ */
+ uint32_t pcOffset;
+
+ /**
+ * The number of times this instruction was executed per warp. It will be incremented
+ * when at least one thread in the warp is active with predicate and condition code
+ * evaluating to true.
+ */
+ uint32_t executed;
+
+ /**
+ * This increments by the number of threads that executed this instruction
+ * with predicate and condition code evaluating to true.
+ */
+ uint64_t threadsExecuted;
+
+ /**
+ * The total number of 32-byte transactions to the L2 cache generated by this access.
+ */
+ uint64_t l2_transactions;
+} CUpti_ActivityGlobalAccess;
+
+/**
+ * \brief The activity record for source-level global
+ * access. (deprecated in CUDA 9.0)
+ *
+ * This activity records the locations of the global
+ * accesses in the source (CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS).
+ * Global access activities are now reported using the
+ * CUpti_ActivityGlobalAccess3 activity record.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The properties of this global access.
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * Correlation ID with global/device function name
+ */
+ uint32_t functionId;
+
+ /**
+ * The pc offset for the access.
+ */
+ uint32_t pcOffset;
+
+ /**
+ * This increments by the number of threads that executed this instruction
+ * with predicate and condition code evaluating to true.
+ */
+ uint64_t threadsExecuted;
+
+ /**
+ * The total number of 32-byte transactions to the L2 cache generated by this access.
+ */
+ uint64_t l2_transactions;
+
+ /**
+ * The minimum number of L2 transactions possible based on the access pattern.
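The pair of counters above supports a simple coalescing diagnostic: the ratio of the theoretical minimum number of L2 transactions to the number actually generated. A minimal sketch of that derived metric (the function name and percentage formulation are illustrative, not part of CUPTI):

```c
#include <stdint.h>

/* Global-access efficiency in percent: the documented minimum
 * (theoretical) number of 32-byte L2 transactions for the access
 * pattern divided by the number actually generated. 100 means the
 * access was perfectly coalesced; lower values indicate wasted
 * transactions. Returns 0 when no transactions were recorded. */
double l2_access_efficiency_pct(uint64_t theoretical, uint64_t actual)
{
    if (actual == 0)
        return 0.0;
    return 100.0 * (double)theoretical / (double)actual;
}
```
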
+ */
+ uint64_t theoreticalL2Transactions;
+
+ /**
+ * The number of times this instruction was executed per warp. It will be incremented
+ * when at least one thread in the warp is active with predicate and condition code
+ * evaluating to true.
+ */
+ uint32_t executed;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+} CUpti_ActivityGlobalAccess2;
+
+/**
+ * \brief The activity record for source-level global
+ * access.
+ *
+ * This activity records the locations of the global
+ * accesses in the source (CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The properties of this global access.
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * Correlation ID with global/device function name
+ */
+ uint32_t functionId;
+
+ /**
+ * The number of times this instruction was executed per warp. It will be incremented
+ * when at least one thread in the warp is active with predicate and condition code
+ * evaluating to true.
+ */
+ uint32_t executed;
+
+ /**
+ * The pc offset for the access.
+ */
+ uint64_t pcOffset;
+
+ /**
+ * This increments by the number of threads that executed this instruction
+ * with predicate and condition code evaluating to true.
+ */
+ uint64_t threadsExecuted;
+
+ /**
+ * The total number of 32-byte transactions to the L2 cache generated by this access.
+ */
+ uint64_t l2_transactions;
+
+ /**
+ * The minimum number of L2 transactions possible based on the access pattern.
+ */
+ uint64_t theoreticalL2Transactions;
+} CUpti_ActivityGlobalAccess3;
+
+/**
+ * \brief The activity record for source level result
+ * branch.
(deprecated)
+ *
+ * This activity records the locations of the branches in the
+ * source (CUPTI_ACTIVITY_KIND_BRANCH).
+ * Branch activities are now reported using the
+ * CUpti_ActivityBranch2 activity record.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_BRANCH.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * The pc offset for the branch.
+ */
+ uint32_t pcOffset;
+
+ /**
+ * The number of times this instruction was executed per warp. It will be incremented
+ * regardless of predicate or condition code.
+ */
+ uint32_t executed;
+
+ /**
+ * The number of times this branch diverged.
+ */
+ uint32_t diverged;
+
+ /**
+ * This increments by the number of threads that executed this instruction.
+ */
+ uint64_t threadsExecuted;
+} CUpti_ActivityBranch;
+
+/**
+ * \brief The activity record for source level result
+ * branch.
+ *
+ * This activity records the locations of the branches in the
+ * source (CUPTI_ACTIVITY_KIND_BRANCH).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_BRANCH.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * Correlation ID with global/device function name
+ */
+ uint32_t functionId;
+
+ /**
+ * The pc offset for the branch.
+ */
+ uint32_t pcOffset;
+
+ /**
+ * The number of times this branch diverged.
+ */
+ uint32_t diverged;
+
+ /**
+ * This increments by the number of threads that executed this instruction.
+ */
+ uint64_t threadsExecuted;
+
+ /**
+ * The number of times this instruction was executed per warp. It will be incremented
+ * regardless of predicate or condition code.
+ */
+ uint32_t executed;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+} CUpti_ActivityBranch2;
+
+
+/**
+ * \brief The activity record for a device. (deprecated)
+ *
+ * This activity record represents information about a GPU device
+ * (CUPTI_ACTIVITY_KIND_DEVICE).
+ * Device activity is now reported using the
+ * CUpti_ActivityDevice4 activity record.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The flags associated with the device. \see CUpti_ActivityFlag
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The global memory bandwidth available on the device, in
+ * kBytes/sec.
+ */
+ uint64_t globalMemoryBandwidth;
+
+ /**
+ * The amount of global memory on the device, in bytes.
+ */
+ uint64_t globalMemorySize;
+
+ /**
+ * The amount of constant memory on the device, in bytes.
+ */
+ uint32_t constantMemorySize;
+
+ /**
+ * The size of the L2 cache on the device, in bytes.
+ */
+ uint32_t l2CacheSize;
+
+ /**
+ * The number of threads per warp on the device.
+ */
+ uint32_t numThreadsPerWarp;
+
+ /**
+ * The core clock rate of the device, in kHz.
+ */
+ uint32_t coreClockRate;
+
+ /**
+ * Number of memory copy engines on the device.
+ */
+ uint32_t numMemcpyEngines;
+
+ /**
+ * Number of multiprocessors on the device.
+ */
+ uint32_t numMultiprocessors;
+
+ /**
+ * The maximum "instructions per cycle" possible on each device
+ * multiprocessor.
+ */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. + */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. + */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. + */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. + */ + uint32_t maxGridDimY; + + /** + * Maximum allowed Z dimension for a grid. + */ + uint32_t maxGridDimZ; + + /** + * Compute capability for the device, major number. + */ + uint32_t computeCapabilityMajor; + + /** + * Compute capability for the device, minor number. + */ + uint32_t computeCapabilityMinor; + + /** + * The device ID. + */ + uint32_t id; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The device name. This name is shared across all activity records + * representing instances of the device, and so should not be + * modified. + */ + const char *name; +} CUpti_ActivityDevice; + +/** + * \brief The activity record for a device. (deprecated) + * + * This activity record represents information about a GPU device + * (CUPTI_ACTIVITY_KIND_DEVICE). + * Device activity is now reported using the + * CUpti_ActivityDevice4 activity record. 
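Several of the per-multiprocessor limits in these device records compose into simple derived quantities; for example, the maximum number of resident threads on one multiprocessor is the warp limit times the warp width. A minimal self-contained sketch (the helper name is illustrative, not a CUPTI API):

```c
#include <stdint.h>

/* Maximum number of threads that can be resident on one
 * multiprocessor, derived from two fields of the device activity
 * record: maxWarpsPerMultiprocessor and numThreadsPerWarp. */
uint32_t max_threads_per_sm(uint32_t maxWarpsPerMultiprocessor,
                            uint32_t numThreadsPerWarp)
{
    return maxWarpsPerMultiprocessor * numThreadsPerWarp;
}
```
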
+ */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the device. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The global memory bandwidth available on the device, in + * kBytes/sec. + */ + uint64_t globalMemoryBandwidth; + + /** + * The amount of global memory on the device, in bytes. + */ + uint64_t globalMemorySize; + + /** + * The amount of constant memory on the device, in bytes. + */ + uint32_t constantMemorySize; + + /** + * The size of the L2 cache on the device, in bytes. + */ + uint32_t l2CacheSize; + + /** + * The number of threads per warp on the device. + */ + uint32_t numThreadsPerWarp; + + /** + * The core clock rate of the device, in kHz. + */ + uint32_t coreClockRate; + + /** + * Number of memory copy engines on the device. + */ + uint32_t numMemcpyEngines; + + /** + * Number of multiprocessors on the device. + */ + uint32_t numMultiprocessors; + + /** + * The maximum "instructions per cycle" possible on each device + * multiprocessor. + */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum amount of shared memory available per multiprocessor, in bytes. + */ + uint32_t maxSharedMemoryPerMultiprocessor; + + /** + * Maximum number of 32-bit registers available per multiprocessor. + */ + uint32_t maxRegistersPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. + */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. 
+ */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. + */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. + */ + uint32_t maxGridDimY; + + /** + * Maximum allowed Z dimension for a grid. + */ + uint32_t maxGridDimZ; + + /** + * Compute capability for the device, major number. + */ + uint32_t computeCapabilityMajor; + + /** + * Compute capability for the device, minor number. + */ + uint32_t computeCapabilityMinor; + + /** + * The device ID. + */ + uint32_t id; + + /** + * ECC enabled flag for device + */ + uint32_t eccEnabled; + + /** + * The device UUID. This value is the globally unique immutable + * alphanumeric identifier of the device. + */ + CUuuid uuid; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The device name. This name is shared across all activity records + * representing instances of the device, and so should not be + * modified. + */ + const char *name; +} CUpti_ActivityDevice2; + +/** + * \brief The activity record for a device. (CUDA 7.0 onwards) + * + * This activity record represents information about a GPU device + * (CUPTI_ACTIVITY_KIND_DEVICE). + * Device activity is now reported using the + * CUpti_ActivityDevice4 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the device. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The global memory bandwidth available on the device, in + * kBytes/sec. + */ + uint64_t globalMemoryBandwidth; + + /** + * The amount of global memory on the device, in bytes. 
+ */ + uint64_t globalMemorySize; + + /** + * The amount of constant memory on the device, in bytes. + */ + uint32_t constantMemorySize; + + /** + * The size of the L2 cache on the device, in bytes. + */ + uint32_t l2CacheSize; + + /** + * The number of threads per warp on the device. + */ + uint32_t numThreadsPerWarp; + + /** + * The core clock rate of the device, in kHz. + */ + uint32_t coreClockRate; + + /** + * Number of memory copy engines on the device. + */ + uint32_t numMemcpyEngines; + + /** + * Number of multiprocessors on the device. + */ + uint32_t numMultiprocessors; + + /** + * The maximum "instructions per cycle" possible on each device + * multiprocessor. + */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum amount of shared memory available per multiprocessor, in bytes. + */ + uint32_t maxSharedMemoryPerMultiprocessor; + + /** + * Maximum number of 32-bit registers available per multiprocessor. + */ + uint32_t maxRegistersPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. + */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. + */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. + */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. 
+ */
+ uint32_t maxGridDimY;
+
+ /**
+ * Maximum allowed Z dimension for a grid.
+ */
+ uint32_t maxGridDimZ;
+
+ /**
+ * Compute capability for the device, major number.
+ */
+ uint32_t computeCapabilityMajor;
+
+ /**
+ * Compute capability for the device, minor number.
+ */
+ uint32_t computeCapabilityMinor;
+
+ /**
+ * The device ID.
+ */
+ uint32_t id;
+
+ /**
+ * ECC enabled flag for device
+ */
+ uint32_t eccEnabled;
+
+ /**
+ * The device UUID. This value is the globally unique immutable
+ * alphanumeric identifier of the device.
+ */
+ CUuuid uuid;
+
+#ifndef CUPTILP64
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+#endif
+
+ /**
+ * The device name. This name is shared across all activity records
+ * representing instances of the device, and so should not be
+ * modified.
+ */
+ const char *name;
+
+ /**
+ * Flag to indicate whether the device is visible to CUDA. Users can
+ * set the device visibility using the CUDA_VISIBLE_DEVICES environment
+ * variable.
+ */
+ uint8_t isCudaVisible;
+
+ uint8_t reserved[7];
+} CUpti_ActivityDevice3;
+
+
+/**
+ * \brief The activity record for a device. (CUDA 11.6 onwards)
+ *
+ * This activity record represents information about a GPU device
+ * (CUPTI_ACTIVITY_KIND_DEVICE).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The flags associated with the device. \see CUpti_ActivityFlag
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The global memory bandwidth available on the device, in
+ * kBytes/sec.
+ */
+ uint64_t globalMemoryBandwidth;
+
+ /**
+ * The amount of global memory on the device, in bytes.
+ */
+ uint64_t globalMemorySize;
+
+ /**
+ * The amount of constant memory on the device, in bytes.
+ */
+ uint32_t constantMemorySize;
+
+ /**
+ * The size of the L2 cache on the device, in bytes.
+ */
+ uint32_t l2CacheSize;
+
+ /**
+ * The number of threads per warp on the device.
+ */ + uint32_t numThreadsPerWarp; + + /** + * The core clock rate of the device, in kHz. + */ + uint32_t coreClockRate; + + /** + * Number of memory copy engines on the device. + */ + uint32_t numMemcpyEngines; + + /** + * Number of multiprocessors on the device. + */ + uint32_t numMultiprocessors; + + /** + * The maximum "instructions per cycle" possible on each device + * multiprocessor. + */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum amount of shared memory available per multiprocessor, in bytes. + */ + uint32_t maxSharedMemoryPerMultiprocessor; + + /** + * Maximum number of 32-bit registers available per multiprocessor. + */ + uint32_t maxRegistersPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. + */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. + */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. + */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. + */ + uint32_t maxGridDimY; + + /** + * Maximum allowed Z dimension for a grid. + */ + uint32_t maxGridDimZ; + + /** + * Compute capability for the device, major number. + */ + uint32_t computeCapabilityMajor; + + /** + * Compute capability for the device, minor number. 
+ */
+ uint32_t computeCapabilityMinor;
+
+ /**
+ * The device ID.
+ */
+ uint32_t id;
+
+ /**
+ * ECC enabled flag for device
+ */
+ uint32_t eccEnabled;
+
+ /**
+ * The device UUID. This value is the globally unique immutable
+ * alphanumeric identifier of the device.
+ */
+ CUuuid uuid;
+
+#ifndef CUPTILP64
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+#endif
+
+ /**
+ * The device name. This name is shared across all activity records
+ * representing instances of the device, and so should not be
+ * modified.
+ */
+ const char *name;
+
+ /**
+ * Flag to indicate whether the device is visible to CUDA. Users can
+ * set the device visibility using the CUDA_VISIBLE_DEVICES environment
+ * variable.
+ */
+ uint8_t isCudaVisible;
+
+ /**
+ * MIG enabled flag for device
+ */
+ uint8_t isMigEnabled;
+
+ uint8_t reserved[6];
+
+ /**
+ * GPU Instance id for MIG enabled devices.
+ * If MIG mode is disabled, the value is set to UINT32_MAX.
+ */
+ uint32_t gpuInstanceId;
+
+ /**
+ * Compute Instance id for MIG enabled devices.
+ * If MIG mode is disabled, the value is set to UINT32_MAX.
+ */
+ uint32_t computeInstanceId;
+
+ /**
+ * The MIG UUID. This value is the globally unique immutable
+ * alphanumeric identifier of the device.
+ */
+ CUuuid migUuid;
+
+} CUpti_ActivityDevice4;
+
+
+/**
+ * \brief The activity record for a device attribute.
+ *
+ * This activity record represents information about a GPU device:
+ * either a CUpti_DeviceAttribute or CUdevice_attribute value
+ * (CUPTI_ACTIVITY_KIND_DEVICE_ATTRIBUTE).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be
+ * CUPTI_ACTIVITY_KIND_DEVICE_ATTRIBUTE.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The flags associated with the device. \see CUpti_ActivityFlag
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The ID of the device that this attribute applies to.
+ */
+ uint32_t deviceId;
+
+ /**
+ * The attribute, either a CUpti_DeviceAttribute or
+ * CUdevice_attribute.
Flag
+ * CUPTI_ACTIVITY_FLAG_DEVICE_ATTRIBUTE_CUDEVICE is used to indicate
+ * what kind of attribute this is. If
+ * CUPTI_ACTIVITY_FLAG_DEVICE_ATTRIBUTE_CUDEVICE is 1 then the
+ * CUdevice_attribute field is valid, otherwise the
+ * CUpti_DeviceAttribute field is valid.
+ */
+ union {
+ CUdevice_attribute cu;
+ CUpti_DeviceAttribute cupti;
+ } attribute;
+
+ /**
+ * The value for the attribute. See CUpti_DeviceAttribute and
+ * CUdevice_attribute for the type of the value for a given
+ * attribute.
+ */
+ union {
+ double vDouble;
+ uint32_t vUint32;
+ uint64_t vUint64;
+ int32_t vInt32;
+ int64_t vInt64;
+ } value;
+} CUpti_ActivityDeviceAttribute;
+
+/**
+ * \brief The activity record for a context.
+ *
+ * This activity record represents information about a context
+ * (CUPTI_ACTIVITY_KIND_CONTEXT).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_CONTEXT.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The context ID.
+ */
+ uint32_t contextId;
+
+ /**
+ * The device ID.
+ */
+ uint32_t deviceId;
+
+ /**
+ * The compute API kind. \see CUpti_ActivityComputeApiKind
+ */
+ uint16_t computeApiKind;
+
+ /**
+ * The ID for the NULL stream in this context
+ */
+ uint16_t nullStreamId;
+
+} CUpti_ActivityContext;
+
+/**
+ * \brief The activity record providing a name.
+ *
+ * This activity record provides a name for a device, context, thread,
+ * etc. and other resource naming done via NVTX APIs
+ * (CUPTI_ACTIVITY_KIND_NAME).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_NAME.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The kind of activity object being named.
+ */
+ CUpti_ActivityObjectKind objectKind;
+
+ /**
+ * The identifier for the activity object. 'objectKind' indicates
+ * which ID is valid for this record.
+ */
+ CUpti_ActivityObjectKindId objectId;
+
+#ifdef CUPTILP64
+ /**
+ * Undefined. Reserved for internal use.
+ */ + uint32_t pad; +#endif + + /** + * The name. + */ + const char *name; + +} CUpti_ActivityName; + +/** + * \brief The activity record providing a marker which is an + * instantaneous point in time. (deprecated in CUDA 8.0) + * + * The marker is specified with a descriptive name and unique id + * (CUPTI_ACTIVITY_KIND_MARKER). + * Marker activity is now reported using the + * CUpti_ActivityMarker2 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MARKER. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the marker. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The timestamp for the marker, in ns. A value of 0 indicates that + * timestamp information could not be collected for the marker. + */ + uint64_t timestamp; + + /** + * The marker ID. + */ + uint32_t id; + + /** + * The kind of activity object associated with this marker. + */ + CUpti_ActivityObjectKind objectKind; + + /** + * The identifier for the activity object associated with this + * marker. 'objectKind' indicates which ID is valid for this record. + */ + CUpti_ActivityObjectKindId objectId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The marker name for an instantaneous or start marker. This will + * be NULL for an end marker. + */ + const char *name; + +} CUpti_ActivityMarker; + +/** + * \brief The activity record providing a marker which is an + * instantaneous point in time. + * + * The marker is specified with a descriptive name and unique id + * (CUPTI_ACTIVITY_KIND_MARKER). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MARKER. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the marker. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The timestamp for the marker, in ns. 
A value of 0 indicates that
+ * timestamp information could not be collected for the marker.
+ */
+ uint64_t timestamp;
+
+ /**
+ * The marker ID.
+ */
+ uint32_t id;
+
+ /**
+ * The kind of activity object associated with this marker.
+ */
+ CUpti_ActivityObjectKind objectKind;
+
+ /**
+ * The identifier for the activity object associated with this
+ * marker. 'objectKind' indicates which ID is valid for this record.
+ */
+ CUpti_ActivityObjectKindId objectId;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+
+
+ /**
+ * The marker name for an instantaneous or start marker. This will
+ * be NULL for an end marker.
+ */
+ const char *name;
+
+ /**
+ * The name of the domain to which this marker belongs.
+ * This will be NULL for the default domain.
+ */
+ const char *domain;
+
+} CUpti_ActivityMarker2;
+
+/**
+ * \brief The activity record providing detailed information for a marker.
+ *
+ * The marker data contains color, payload, and category.
+ * (CUPTI_ACTIVITY_KIND_MARKER_DATA).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be
+ * CUPTI_ACTIVITY_KIND_MARKER_DATA.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The flags associated with the marker. \see CUpti_ActivityFlag
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The marker ID.
+ */
+ uint32_t id;
+
+ /**
+ * Defines the payload format for the value associated with the marker.
+ */
+ CUpti_MetricValueKind payloadKind;
+
+ /**
+ * The payload value.
+ */
+ CUpti_MetricValue payload;
+
+ /**
+ * The color for the marker.
+ */
+ uint32_t color;
+
+ /**
+ * The category for the marker.
+ */
+ uint32_t category;
+
+} CUpti_ActivityMarkerData;
+
+/**
+ * \brief The activity record for CUPTI and driver overheads.
+ *
+ * This activity record provides CUPTI and driver overhead information
+ * (CUPTI_ACTIVITY_OVERHEAD).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_OVERHEAD.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The kind of overhead, CUPTI, DRIVER, COMPILER etc.
+ */
+ CUpti_ActivityOverheadKind overheadKind;
+
+ /**
+ * The kind of activity object that the overhead is associated with.
+ */
+ CUpti_ActivityObjectKind objectKind;
+
+ /**
+ * The identifier for the activity object. 'objectKind' indicates
+ * which ID is valid for this record.
+ */
+ CUpti_ActivityObjectKindId objectId;
+
+ /**
+ * The start timestamp for the overhead, in ns. A value of 0 for
+ * both the start and end timestamps indicates that timestamp
+ * information could not be collected for the overhead.
+ */
+ uint64_t start;
+
+ /**
+ * The end timestamp for the overhead, in ns. A value of 0 for both
+ * the start and end timestamps indicates that timestamp information
+ * could not be collected for the overhead.
+ */
+ uint64_t end;
+} CUpti_ActivityOverhead;
+
+/**
+ * \brief The activity record for CUPTI environmental data.
+ *
+ * This activity record provides CUPTI environmental data, including
+ * power, clocks, and thermals. This information is sampled at
+ * various rates and returned in this activity record. The consumer
+ * of the record needs to check the environmentKind field to figure
+ * out what kind of environmental record this is.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_ENVIRONMENT.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The ID of the device
+ */
+ uint32_t deviceId;
+
+ /**
+ * The timestamp when this sample was retrieved, in ns. A value of 0
+ * indicates that timestamp information could not be collected for
+ * this sample.
+ */
+ uint64_t timestamp;
+
+ /**
+ * The kind of data reported in this record.
+ */
+ CUpti_ActivityEnvironmentKind environmentKind;
+
+ union {
+ /**
+ * Data returned for CUPTI_ACTIVITY_ENVIRONMENT_SPEED environment
+ * kind.
+ */
+ struct {
+ /**
+ * The SM frequency in MHz
+ */
+ uint32_t smClock;
+
+ /**
+ * The memory frequency in MHz
+ */
+ uint32_t memoryClock;
+
+ /**
+ * The PCIe link generation.
+ */
+ uint32_t pcieLinkGen;
+
+ /**
+ * The PCIe link width.
+ */
+ uint32_t pcieLinkWidth;
+
+ /**
+ * The clocks throttle reasons.
+ */
+ CUpti_EnvironmentClocksThrottleReason clocksThrottleReasons;
+ } speed;
+ /**
+ * Data returned for CUPTI_ACTIVITY_ENVIRONMENT_TEMPERATURE
+ * environment kind.
+ */
+ struct {
+ /**
+ * The GPU temperature in degrees C.
+ */
+ uint32_t gpuTemperature;
+ } temperature;
+ /**
+ * Data returned for CUPTI_ACTIVITY_ENVIRONMENT_POWER environment
+ * kind.
+ */
+ struct {
+ /**
+ * The power in milliwatts consumed by GPU and associated
+ * circuitry.
+ */
+ uint32_t power;
+
+ /**
+ * The power in milliwatts that will trigger power management
+ * algorithm.
+ */
+ uint32_t powerLimit;
+ } power;
+ /**
+ * Data returned for CUPTI_ACTIVITY_ENVIRONMENT_COOLING
+ * environment kind.
+ */
+ struct {
+ /**
+ * The fan speed as percentage of maximum.
+ */
+ uint32_t fanSpeed;
+ } cooling;
+ } data;
+} CUpti_ActivityEnvironment;
+
+/**
+ * \brief The activity record for source-level instruction execution.
+ *
+ * This activity records results for source-level instruction execution.
+ * (CUPTI_ACTIVITY_KIND_INSTRUCTION_EXECUTION).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_INSTRUCTION_EXECUTION.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The properties of this instruction execution.
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * Correlation ID with global/device function name
+ */
+ uint32_t functionId;
+
+ /**
+ * The pc offset for the instruction.
+ */
+ uint32_t pcOffset;
+
+ /**
+ * Incremented by the number of threads that executed this
+ * instruction, regardless of predicate or condition code.
+ */
+ uint64_t threadsExecuted;
+
+ /**
+ * Incremented by the number of threads that executed this
+ * instruction with predicate and condition code evaluating to true.
+ */
+ uint64_t notPredOffThreadsExecuted;
+
+ /**
+ * The number of times this instruction was executed per warp. It will be incremented
+ * regardless of predicate or condition code.
+ */
+ uint32_t executed;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+} CUpti_ActivityInstructionExecution;
+
+/**
+ * \brief The activity record for PC sampling. (deprecated in CUDA 8.0)
+ *
+ * This activity records information obtained by sampling PC
+ * (CUPTI_ACTIVITY_KIND_PC_SAMPLING).
+ * PC sampling activities are now reported using the
+ * CUpti_ActivityPCSampling2 activity record.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_PC_SAMPLING.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The properties of this instruction.
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * Correlation ID with global/device function name
+ */
+ uint32_t functionId;
+
+ /**
+ * The pc offset for the instruction.
+ */
+ uint32_t pcOffset;
+
+ /**
+ * Number of times the PC was sampled with the stallReason in the record.
+ * The same PC can be sampled with different stall reasons.
+ */
+ uint32_t samples;
+
+ /**
+ * Current stall reason.
Includes one of the reasons from + * \ref CUpti_ActivityPCSamplingStallReason + */ + CUpti_ActivityPCSamplingStallReason stallReason; +} CUpti_ActivityPCSampling; + +/** + * \brief The activity record for PC sampling. (deprecated in CUDA 9.0) + * + * This activity records information obtained by sampling PC + * (CUPTI_ACTIVITY_KIND_PC_SAMPLING). + * PC sampling activities are now reported using the + * CUpti_ActivityPCSampling3 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_PC_SAMPLING. + */ + CUpti_ActivityKind kind; + + /** + * The properties of this instruction. + */ + CUpti_ActivityFlag flags; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * The correlation ID of the kernel to which this result is associated. + */ + uint32_t correlationId; + + /** + * Correlation ID with global/device function name + */ + uint32_t functionId; + + /** + * The pc offset for the instruction. + */ + uint32_t pcOffset; + + /** + * Number of times the PC was sampled with the stallReason in the record. + * These samples indicate that no instruction was issued in that cycle from + * the warp scheduler from where the warp was sampled. + * Field is valid for devices with compute capability 6.0 and higher + */ + uint32_t latencySamples; + + /** + * Number of times the PC was sampled with the stallReason in the record. + * The same PC can be sampled with different stall reasons. The count includes + * latencySamples. + */ + uint32_t samples; + + /** + * Current stall reason. Includes one of the reasons from + * \ref CUpti_ActivityPCSamplingStallReason + */ + CUpti_ActivityPCSamplingStallReason stallReason; + + uint32_t pad; +} CUpti_ActivityPCSampling2; + +/** + * \brief The activity record for PC sampling. + * + * This activity records information obtained by sampling PC + * (CUPTI_ACTIVITY_KIND_PC_SAMPLING). 
+ */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_PC_SAMPLING. + */ + CUpti_ActivityKind kind; + + /** + * The properties of this instruction. + */ + CUpti_ActivityFlag flags; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * The correlation ID of the kernel to which this result is associated. + */ + uint32_t correlationId; + + /** + * Correlation ID with global/device function name + */ + uint32_t functionId; + + /** + * Number of times the PC was sampled with the stallReason in the record. + * These samples indicate that no instruction was issued in that cycle from + * the warp scheduler from where the warp was sampled. + * Field is valid for devices with compute capability 6.0 and higher + */ + uint32_t latencySamples; + + /** + * Number of times the PC was sampled with the stallReason in the record. + * The same PC can be sampled with different stall reasons. The count includes + * latencySamples. + */ + uint32_t samples; + + /** + * Current stall reason. Includes one of the reasons from + * \ref CUpti_ActivityPCSamplingStallReason + */ + CUpti_ActivityPCSamplingStallReason stallReason; + + /** + * The pc offset for the instruction. + */ + uint64_t pcOffset; +} CUpti_ActivityPCSampling3; + +/** + * \brief The activity record for record status for PC sampling. + * + * This activity records information obtained by sampling PC + * (CUPTI_ACTIVITY_KIND_PC_SAMPLING_RECORD_INFO). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_PC_SAMPLING_RECORD_INFO. + */ + CUpti_ActivityKind kind; + + /** + * The correlation ID of the kernel to which this result is associated. + */ + uint32_t correlationId; + + /** + * Number of times the PC was sampled for this kernel instance including all + * dropped samples. + */ + uint64_t totalSamples; + + /** + * Number of samples that were dropped by hardware due to backpressure/overflow. 
+ */
+ uint64_t droppedSamples;
+ /**
+ * Sampling period, in number of cycles.
+ */
+ uint64_t samplingPeriodInCycles;
+} CUpti_ActivityPCSamplingRecordInfo;
+
+/**
+ * \brief The activity record for Unified Memory counters (deprecated in CUDA 7.0)
+ *
+ * This activity record represents a Unified Memory counter
+ * (CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The Unified Memory counter kind. See \ref CUpti_ActivityUnifiedMemoryCounterKind
+ */
+ CUpti_ActivityUnifiedMemoryCounterKind counterKind;
+
+ /**
+ * Scope of the Unified Memory counter. See \ref CUpti_ActivityUnifiedMemoryCounterScope
+ */
+ CUpti_ActivityUnifiedMemoryCounterScope scope;
+
+ /**
+ * The ID of the device involved in the memory transfer operation.
+ * It is not relevant if the scope of the counter is global (all devices).
+ */
+ uint32_t deviceId;
+
+ /**
+ * Value of the counter.
+ */
+ uint64_t value;
+
+ /**
+ * The timestamp when this sample was retrieved, in ns. A value of 0
+ * indicates that timestamp information could not be collected.
+ */
+ uint64_t timestamp;
+
+ /**
+ * The ID of the process to which this record belongs. In case of
+ * global scope, processId is undefined.
+ */
+ uint32_t processId;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+} CUpti_ActivityUnifiedMemoryCounter;
+
+/**
+ * \brief The activity record for Unified Memory counters (CUDA 7.0 and beyond)
+ *
+ * This activity record represents a Unified Memory counter
+ * (CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The Unified Memory counter kind
+ */
+ CUpti_ActivityUnifiedMemoryCounterKind counterKind;
+
+ /**
+ * Value of the counter
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD,
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOH,
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING and
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_REMOTE_MAP, it is the size of the
+ * memory region in bytes.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT, it
+ * is the number of page fault groups for the same page.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT,
+ * it is the program counter for the instruction that caused the fault.
+ */
+ uint64_t value;
+
+ /**
+ * The start timestamp of the counter, in ns.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD and
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOH, timestamp is
+ * captured when activity starts on GPU.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT and
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT, timestamp is
+ * captured when CUDA driver started processing the fault.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING, timestamp
+ * is captured when CUDA driver detected thrashing of memory region.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING,
+ * timestamp is captured when throttling operation was started by CUDA driver.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_REMOTE_MAP,
+ * timestamp is captured when CUDA driver has pushed all required operations
+ * to the processor specified by dstId.
+ */
+ uint64_t start;
+
+ /**
+ * The end timestamp of the counter, in ns.
+ * Ignore this field if counterKind is
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT or
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING or
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_REMOTE_MAP.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD and
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOH, timestamp is
+ * captured when activity finishes on GPU.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT, timestamp is
+ * captured when CUDA driver queues the replay of faulting memory accesses on the GPU.
+ * For counterKind CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING, timestamp
+ * is captured when throttling operation was finished by CUDA driver.
+ */
+ uint64_t end;
+
+ /**
+ * The virtual base address of the page(s) being transferred. For CPU and
+ * GPU faults, the virtual address of the page that faulted.
+ */
+ uint64_t address;
+
+ /**
+ * The ID of the source CPU/device involved in the memory transfer, page fault, thrashing,
+ * throttling or remote map operation. For counterKind
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING, it is a bitwise ORing of the
+ * device IDs contending for the memory region. Ignore this field if counterKind is
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT.
+ */
+ uint32_t srcId;
+
+ /**
+ * The ID of the destination CPU/device involved in the memory transfer or remote map
+ * operation. Ignore this field if counterKind is
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT or
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_CPU_PAGE_FAULT_COUNT or
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING or
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING
+ */
+ uint32_t dstId;
+
+ /**
+ * The ID of the stream causing the transfer.
+ * The value of this field is invalid and should be ignored.
+ */
+ uint32_t streamId;
+
+ /**
+ * The ID of the process to which this record belongs.
+ */
+ uint32_t processId;
+
+ /**
+ * The flags associated with this record. See enums \ref CUpti_ActivityUnifiedMemoryAccessType
+ * if counterKind is CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_GPU_PAGE_FAULT
+ * and \ref CUpti_ActivityUnifiedMemoryMigrationCause if counterKind is
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_HTOD or
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_BYTES_TRANSFER_DTOH
+ * and \ref CUpti_ActivityUnifiedMemoryRemoteMapCause if counterKind is
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_REMOTE_MAP and \ref CUpti_ActivityFlag
+ * if counterKind is CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THRASHING or
+ * CUPTI_ACTIVITY_UNIFIED_MEMORY_COUNTER_KIND_THROTTLING
+ */
+ uint32_t flags;
+
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+} CUpti_ActivityUnifiedMemoryCounter2;
+
+/**
+ * \brief The activity record for global/device functions.
+ *
+ * This activity records the function name and corresponding module
+ * information.
+ * (CUPTI_ACTIVITY_KIND_FUNCTION).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_FUNCTION.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * ID to uniquely identify the record
+ */
+ uint32_t id;
+
+ /**
+ * The ID of the context where the function is launched.
+ */
+ uint32_t contextId;
+
+ /**
+ * The module ID in which this global/device function is present.
+ */
+ uint32_t moduleId;
+
+ /**
+ * The function's unique symbol index in the module.
+ */
+ uint32_t functionIndex;
+
+#ifdef CUPTILP64
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+#endif
+
+ /**
+ * The name of the function. This name is shared across all activity
+ * records representing the same kernel, and so should not be
+ * modified.
+ */
+ const char *name;
+} CUpti_ActivityFunction;
+
+/**
+ * \brief The activity record for a CUDA module.
+ *
+ * This activity record represents a CUDA module
+ * (CUPTI_ACTIVITY_KIND_MODULE). This activity record kind is not
+ * produced by the activity API but is included for completeness and
+ * ease-of-use. Profile frameworks built on top of CUPTI that collect
+ * module data from the module callback may choose to use this type to
+ * store the collected module data.
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_MODULE.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The ID of the context where the module is loaded.
+ */
+ uint32_t contextId;
+
+ /**
+ * The module ID.
+ */
+ uint32_t id;
+
+ /**
+ * The cubin size.
+ */
+ uint32_t cubinSize;
+
+#ifndef CUPTILP64
+ /**
+ * Undefined. Reserved for internal use.
+ */
+ uint32_t pad;
+#endif
+
+ /**
+ * The pointer to cubin.
+ */
+ const void *cubin;
+} CUpti_ActivityModule;
+
+/**
+ * \brief The activity record for source-level shared
+ * access.
+ *
+ * This activity records the locations of the shared
+ * accesses in the source
+ * (CUPTI_ACTIVITY_KIND_SHARED_ACCESS).
+ */
+typedef struct PACKED_ALIGNMENT {
+ /**
+ * The activity record kind, must be CUPTI_ACTIVITY_KIND_SHARED_ACCESS.
+ */
+ CUpti_ActivityKind kind;
+
+ /**
+ * The properties of this shared access.
+ */
+ CUpti_ActivityFlag flags;
+
+ /**
+ * The ID for source locator.
+ */
+ uint32_t sourceLocatorId;
+
+ /**
+ * The correlation ID of the kernel to which this result is associated.
+ */
+ uint32_t correlationId;
+
+ /**
+ * Correlation ID with global/device function name
+ */
+ uint32_t functionId;
+
+ /**
+ * The pc offset for the access.
+ */
+ uint32_t pcOffset;
+
+ /**
+ * Incremented by the number of threads that executed this
+ * instruction with predicate and condition code evaluating to true.
+ */ + uint64_t threadsExecuted; + + /** + * The total number of shared memory transactions generated by this access + */ + uint64_t sharedTransactions; + + /** + * The minimum number of shared memory transactions possible based on the access pattern. + */ + uint64_t theoreticalSharedTransactions; + + /** + * The number of times this instruction was executed per warp. It is incremented + * when at least one thread in the warp is active, with predicate and condition code + * evaluating to true. + */ + uint32_t executed; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +} CUpti_ActivitySharedAccess; + +/** + * \brief The activity record for CUDA event. + * + * This activity is used to track recorded events. + * (CUPTI_ACTIVITY_KIND_CUDA_EVENT). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_CUDA_EVENT. + */ + CUpti_ActivityKind kind; + + /** + * The correlation ID of the API to which this result is associated. + */ + uint32_t correlationId; + + /** + * The ID of the context where the event was recorded. + */ + uint32_t contextId; + + /** + * The compute stream where the event was recorded. + */ + uint32_t streamId; + + /** + * A unique event ID to identify the event record. + */ + uint32_t eventId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +} CUpti_ActivityCudaEvent; + +/** + * \brief The activity record for CUDA stream. + * + * This activity is used to track created streams. + * (CUPTI_ACTIVITY_KIND_STREAM). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_STREAM. + */ + CUpti_ActivityKind kind; + /** + * The ID of the context where the stream was created. + */ + uint32_t contextId; + + /** + * A unique stream ID to identify the stream. + */ + uint32_t streamId; + + /** + * The clamped priority for the stream. + */ + uint32_t priority; + + /** + * Flags associated with the stream.
+ */ + CUpti_ActivityStreamFlag flag; + + /** + * The correlation ID of the API to which this result is associated. + */ + uint32_t correlationId; +} CUpti_ActivityStream; + +/** + * \brief The activity record for synchronization management. + * + * This activity is used to track various CUDA synchronization APIs. + * (CUPTI_ACTIVITY_KIND_SYNCHRONIZATION). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_SYNCHRONIZATION. + */ + CUpti_ActivityKind kind; + + /** + * The type of record. + */ + CUpti_ActivitySynchronizationType type; + + /** + * The start timestamp for the function, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the function. + */ + uint64_t start; + + /** + * The end timestamp for the function, in ns. A value of 0 for both + * the start and end timestamps indicates that timestamp information + * could not be collected for the function. + */ + uint64_t end; + + /** + * The correlation ID of the API to which this result is associated. + */ + uint32_t correlationId; + + /** + * The ID of the context for which the synchronization API is called. + * For context synchronization APIs, it is the ID of the context for which the API is called. + * For stream/event synchronization, it is the ID of the context where the stream/event was created. + */ + uint32_t contextId; + + /** + * The compute stream for which the synchronization API is called. + * A CUPTI_SYNCHRONIZATION_INVALID_VALUE value indicates that the field is not applicable for this record. + * Not valid for cuCtxSynchronize, cuEventSynchronize. + */ + uint32_t streamId; + + /** + * The event ID for which the synchronization API is called. + * A CUPTI_SYNCHRONIZATION_INVALID_VALUE value indicates that the field is not applicable for this record. + * Not valid for cuCtxSynchronize, cuStreamSynchronize.
+ */ + uint32_t cudaEventId; +} CUpti_ActivitySynchronization; + + +/** + * \brief The activity record for source-level sass/source + * line-by-line correlation. + * + * This activity records source level sass/source correlation + * information. + * (CUPTI_ACTIVITY_KIND_INSTRUCTION_CORRELATION). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_INSTRUCTION_CORRELATION. + */ + CUpti_ActivityKind kind; + + /** + * The properties of this instruction. + */ + CUpti_ActivityFlag flags; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * Correlation ID with global/device function name + */ + uint32_t functionId; + + /** + * The pc offset for the instruction. + */ + uint32_t pcOffset; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +} CUpti_ActivityInstructionCorrelation; + +/** + * \brief The OpenAcc event kind for OpenAcc activity records. + * + * \see CUpti_ActivityKindOpenAcc + */ +typedef enum { + CUPTI_OPENACC_EVENT_KIND_INVALID = 0, + CUPTI_OPENACC_EVENT_KIND_DEVICE_INIT = 1, + CUPTI_OPENACC_EVENT_KIND_DEVICE_SHUTDOWN = 2, + CUPTI_OPENACC_EVENT_KIND_RUNTIME_SHUTDOWN = 3, + CUPTI_OPENACC_EVENT_KIND_ENQUEUE_LAUNCH = 4, + CUPTI_OPENACC_EVENT_KIND_ENQUEUE_UPLOAD = 5, + CUPTI_OPENACC_EVENT_KIND_ENQUEUE_DOWNLOAD = 6, + CUPTI_OPENACC_EVENT_KIND_WAIT = 7, + CUPTI_OPENACC_EVENT_KIND_IMPLICIT_WAIT = 8, + CUPTI_OPENACC_EVENT_KIND_COMPUTE_CONSTRUCT = 9, + CUPTI_OPENACC_EVENT_KIND_UPDATE = 10, + CUPTI_OPENACC_EVENT_KIND_ENTER_DATA = 11, + CUPTI_OPENACC_EVENT_KIND_EXIT_DATA = 12, + CUPTI_OPENACC_EVENT_KIND_CREATE = 13, + CUPTI_OPENACC_EVENT_KIND_DELETE = 14, + CUPTI_OPENACC_EVENT_KIND_ALLOC = 15, + CUPTI_OPENACC_EVENT_KIND_FREE = 16, + CUPTI_OPENACC_EVENT_KIND_FORCE_INT = 0x7fffffff +} CUpti_OpenAccEventKind; + +/** + * \brief The OpenAcc parent construct kind for OpenAcc activity records. 
+ */ +typedef enum { + CUPTI_OPENACC_CONSTRUCT_KIND_UNKNOWN = 0, + CUPTI_OPENACC_CONSTRUCT_KIND_PARALLEL = 1, + CUPTI_OPENACC_CONSTRUCT_KIND_KERNELS = 2, + CUPTI_OPENACC_CONSTRUCT_KIND_LOOP = 3, + CUPTI_OPENACC_CONSTRUCT_KIND_DATA = 4, + CUPTI_OPENACC_CONSTRUCT_KIND_ENTER_DATA = 5, + CUPTI_OPENACC_CONSTRUCT_KIND_EXIT_DATA = 6, + CUPTI_OPENACC_CONSTRUCT_KIND_HOST_DATA = 7, + CUPTI_OPENACC_CONSTRUCT_KIND_ATOMIC = 8, + CUPTI_OPENACC_CONSTRUCT_KIND_DECLARE = 9, + CUPTI_OPENACC_CONSTRUCT_KIND_INIT = 10, + CUPTI_OPENACC_CONSTRUCT_KIND_SHUTDOWN = 11, + CUPTI_OPENACC_CONSTRUCT_KIND_SET = 12, + CUPTI_OPENACC_CONSTRUCT_KIND_UPDATE = 13, + CUPTI_OPENACC_CONSTRUCT_KIND_ROUTINE = 14, + CUPTI_OPENACC_CONSTRUCT_KIND_WAIT = 15, + CUPTI_OPENACC_CONSTRUCT_KIND_RUNTIME_API = 16, + CUPTI_OPENACC_CONSTRUCT_KIND_FORCE_INT = 0x7fffffff + +} CUpti_OpenAccConstructKind; + +typedef enum { + CUPTI_OPENMP_EVENT_KIND_INVALID = 0, + CUPTI_OPENMP_EVENT_KIND_PARALLEL = 1, + CUPTI_OPENMP_EVENT_KIND_TASK = 2, + CUPTI_OPENMP_EVENT_KIND_THREAD = 3, + CUPTI_OPENMP_EVENT_KIND_IDLE = 4, + CUPTI_OPENMP_EVENT_KIND_WAIT_BARRIER = 5, + CUPTI_OPENMP_EVENT_KIND_WAIT_TASKWAIT = 6, + CUPTI_OPENMP_EVENT_KIND_FORCE_INT = 0x7fffffff +} CUpti_OpenMpEventKind; + +/** + * \brief The base activity record for OpenAcc records. + * + * The OpenACC activity API part uses a CUpti_ActivityOpenAcc as a generic + * representation for any OpenACC activity. The 'kind' field is used to determine the + * specific activity kind, and from that the CUpti_ActivityOpenAcc object can + * be cast to the specific OpenACC activity record type appropriate for that kind. + * + * Note that all OpenACC activity record types are padded and aligned to + * ensure that each member of the record is naturally aligned. + * + * \see CUpti_ActivityKind + */ +typedef struct PACKED_ALIGNMENT { + /** + * The kind of this activity. 
+ */ + CUpti_ActivityKind kind; + + /** + * CUPTI OpenACC event kind (\see CUpti_OpenAccEventKind) + */ + CUpti_OpenAccEventKind eventKind; + + /** + * CUPTI OpenACC parent construct kind (\see CUpti_OpenAccConstructKind) + * + * Note that for applications using PGI OpenACC runtime < 16.1, this + * will always be CUPTI_OPENACC_CONSTRUCT_KIND_UNKNOWN. + */ + CUpti_OpenAccConstructKind parentConstruct; + + /* + * Version number + */ + uint32_t version; + + /* + * 1 for any implicit event, such as an implicit wait at a synchronous data construct + * 0 otherwise + */ + uint32_t implicit; + + /* + * Device type + */ + uint32_t deviceType; + + /* + * Device number + */ + uint32_t deviceNumber; + + /** + * ThreadId + */ + uint32_t threadId; + + /* + * Value of async() clause of the corresponding directive + */ + uint64_t async; + + /* + * Internal asynchronous queue number used + */ + uint64_t asyncMap; + + /* + * The line number of the directive or program construct or the starting line + * number of the OpenACC construct corresponding to the event. + * A zero value means the line number is not known. + */ + uint32_t lineNo; + + /* + * For an OpenACC construct, this contains the line number of the end + * of the construct. A zero value means the line number is not known. + */ + uint32_t endLineNo; + + /* + * The line number of the first line of the function named in funcName. + * A zero value means the line number is not known. + */ + uint32_t funcLineNo; + + /* + * The last line number of the function named in funcName. + * A zero value means the line number is not known. + */ + uint32_t funcEndLineNo; + + /** + * CUPTI start timestamp + */ + uint64_t start; + + /** + * CUPTI end timestamp + */ + uint64_t end; + + /** + * CUDA device id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuDeviceId; + + /** + * CUDA context id + * Valid only if deviceType is acc_device_nvidia. 
+ */ + uint32_t cuContextId; + + /** + * CUDA stream id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuStreamId; + + /** + * The ID of the process where the OpenACC activity is executing. + */ + uint32_t cuProcessId; + + /** + * The ID of the thread where the OpenACC activity is executing. + */ + uint32_t cuThreadId; + + /** + * The OpenACC correlation ID. + * Valid only if deviceType is acc_device_nvidia. + * If not 0, it uniquely identifies this record. It is identical to the + * externalId in the preceding external correlation record of type + * CUPTI_EXTERNAL_CORRELATION_KIND_OPENACC. + */ + uint32_t externalId; + + /* + * A pointer to a null-terminated string containing the name of or path to + * the source file, if known, or a null pointer if not. + */ + const char *srcFile; + + /* + * A pointer to a null-terminated string containing the name of the + * function in which the event occurred. + */ + const char *funcName; +} CUpti_ActivityOpenAcc; + +/** + * \brief The activity record for OpenACC data. + * + * (CUPTI_ACTIVITY_KIND_OPENACC_DATA). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_OPENACC_DATA. + */ + CUpti_ActivityKind kind; + + /** + * CUPTI OpenACC event kind (\see CUpti_OpenAccEventKind) + */ + CUpti_OpenAccEventKind eventKind; + + /* + * CUPTI OpenACC parent construct kind (\see CUpti_OpenAccConstructKind) + * + * Note that for applications using PGI OpenACC runtime < 16.1, this + * will always be CUPTI_OPENACC_CONSTRUCT_KIND_UNKNOWN.
+ */ + CUpti_OpenAccConstructKind parentConstruct; + + /* + * Version number + */ + uint32_t version; + + /* + * 1 for any implicit event, such as an implicit wait at a synchronous data construct + * 0 otherwise + */ + uint32_t implicit; + + /* + * Device type + */ + uint32_t deviceType; + + /* + * Device number + */ + uint32_t deviceNumber; + + /** + * ThreadId + */ + uint32_t threadId; + + /* + * Value of async() clause of the corresponding directive + */ + uint64_t async; + + /* + * Internal asynchronous queue number used + */ + uint64_t asyncMap; + + /* + * The line number of the directive or program construct or the starting line + * number of the OpenACC construct corresponding to the event. + * A zero value means the line number is not known. + */ + uint32_t lineNo; + + /* + * For an OpenACC construct, this contains the line number of the end + * of the construct. A zero value means the line number is not known. + */ + uint32_t endLineNo; + + /* + * The line number of the first line of the function named in func_name. + * A zero value means the line number is not known. + */ + uint32_t funcLineNo; + + /* + * The last line number of the function named in func_name. + * A zero value means the line number is not known. + */ + uint32_t funcEndLineNo; + + /** + * CUPTI start timestamp + */ + uint64_t start; + + /** + * CUPTI end timestamp + */ + uint64_t end; + + /** + * CUDA device id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuDeviceId; + + /** + * CUDA context id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuContextId; + + /** + * CUDA stream id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuStreamId; + + /** + * The ID of the process where the OpenACC activity is executing. + */ + uint32_t cuProcessId; + + /** + * The ID of the thread where the OpenACC activity is executing. + */ + uint32_t cuThreadId; + + /** + * The OpenACC correlation ID.
+ * Valid only if deviceType is acc_device_nvidia. + * If not 0, it uniquely identifies this record. It is identical to the + * externalId in the preceding external correlation record of type + * CUPTI_EXTERNAL_CORRELATION_KIND_OPENACC. + */ + uint32_t externalId; + + /* + * A pointer to a null-terminated string containing the name of or path to + * the source file, if known, or a null pointer if not. + */ + const char *srcFile; + + /* + * A pointer to a null-terminated string containing the name of the + * function in which the event occurred. + */ + const char *funcName; + + /* --- end of common CUpti_ActivityOpenAcc part --- */ + + /** + * Number of bytes + */ + uint64_t bytes; + + /** + * Host pointer if available + */ + uint64_t hostPtr; + + /** + * Device pointer if available + */ + uint64_t devicePtr; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad1; +#endif + + /* + * A pointer to a null-terminated string containing the name of the variable + * for which this event is triggered, if known, or a null pointer if not. + */ + const char *varName; + +} CUpti_ActivityOpenAccData; + +/** + * \brief The activity record for OpenACC launch. + * + * (CUPTI_ACTIVITY_KIND_OPENACC_LAUNCH). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_OPENACC_LAUNCH. + */ + CUpti_ActivityKind kind; + + /** + * CUPTI OpenACC event kind (\see CUpti_OpenAccEventKind) + */ + CUpti_OpenAccEventKind eventKind; + + /* + * CUPTI OpenACC parent construct kind (\see CUpti_OpenAccConstructKind) + * + * Note that for applications using PGI OpenACC runtime < 16.1, this + * will always be CUPTI_OPENACC_CONSTRUCT_KIND_UNKNOWN.
+ */ + CUpti_OpenAccConstructKind parentConstruct; + + /* + * Version number + */ + uint32_t version; + + /* + * 1 for any implicit event, such as an implicit wait at a synchronous data construct + * 0 otherwise + */ + uint32_t implicit; + + /* + * Device type + */ + uint32_t deviceType; + + /* + * Device number + */ + uint32_t deviceNumber; + + /** + * ThreadId + */ + uint32_t threadId; + + /* + * Value of async() clause of the corresponding directive + */ + uint64_t async; + + /* + * Internal asynchronous queue number used + */ + uint64_t asyncMap; + + /* + * The line number of the directive or program construct or the starting line + * number of the OpenACC construct corresponding to the event. + * A zero value means the line number is not known. + */ + uint32_t lineNo; + + /* + * For an OpenACC construct, this contains the line number of the end + * of the construct. A zero value means the line number is not known. + */ + uint32_t endLineNo; + + /* + * The line number of the first line of the function named in func_name. + * A zero value means the line number is not known. + */ + uint32_t funcLineNo; + + /* + * The last line number of the function named in func_name. + * A zero value means the line number is not known. + */ + uint32_t funcEndLineNo; + + /** + * CUPTI start timestamp + */ + uint64_t start; + + /** + * CUPTI end timestamp + */ + uint64_t end; + + /** + * CUDA device id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuDeviceId; + + /** + * CUDA context id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuContextId; + + /** + * CUDA stream id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuStreamId; + + /** + * The ID of the process where the OpenACC activity is executing. + */ + uint32_t cuProcessId; + + /** + * The ID of the thread where the OpenACC activity is executing. + */ + uint32_t cuThreadId; + + /** + * The OpenACC correlation ID.
+ * Valid only if deviceType is acc_device_nvidia. + * If not 0, it uniquely identifies this record. It is identical to the + * externalId in the preceding external correlation record of type + * CUPTI_EXTERNAL_CORRELATION_KIND_OPENACC. + */ + uint32_t externalId; + + /* + * A pointer to a null-terminated string containing the name of or path to + * the source file, if known, or a null pointer if not. + */ + const char *srcFile; + + /* + * A pointer to a null-terminated string containing the name of the + * function in which the event occurred. + */ + const char *funcName; + + /* --- end of common CUpti_ActivityOpenAcc part --- */ + + /** + * The number of gangs created for this kernel launch + */ + uint64_t numGangs; + + /** + * The number of workers created for this kernel launch + */ + uint64_t numWorkers; + + /** + * The number of vector lanes created for this kernel launch + */ + uint64_t vectorLength; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad1; +#endif + + /* + * A pointer to a null-terminated string containing the name of the + * kernel being launched, if known, or a null pointer if not. + */ + const char *kernelName; + +} CUpti_ActivityOpenAccLaunch; + +/** + * \brief The activity record for OpenACC other. + * + * (CUPTI_ACTIVITY_KIND_OPENACC_OTHER). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_OPENACC_OTHER. + */ + CUpti_ActivityKind kind; + + /** + * CUPTI OpenACC event kind (\see CUpti_OpenAccEventKind) + */ + CUpti_OpenAccEventKind eventKind; + + /* + * CUPTI OpenACC parent construct kind (\see CUpti_OpenAccConstructKind) + * + * Note that for applications using PGI OpenACC runtime < 16.1, this + * will always be CUPTI_OPENACC_CONSTRUCT_KIND_UNKNOWN.
+ */ + CUpti_OpenAccConstructKind parentConstruct; + + /* + * Version number + */ + uint32_t version; + + /* + * 1 for any implicit event, such as an implicit wait at a synchronous data construct + * 0 otherwise + */ + uint32_t implicit; + + /* + * Device type + */ + uint32_t deviceType; + + /* + * Device number + */ + uint32_t deviceNumber; + + /** + * ThreadId + */ + uint32_t threadId; + + /* + * Value of async() clause of the corresponding directive + */ + uint64_t async; + + /* + * Internal asynchronous queue number used + */ + uint64_t asyncMap; + + /* + * The line number of the directive or program construct or the starting line + * number of the OpenACC construct corresponding to the event. + * A zero value means the line number is not known. + */ + uint32_t lineNo; + + /* + * For an OpenACC construct, this contains the line number of the end + * of the construct. A zero value means the line number is not known. + */ + uint32_t endLineNo; + + /* + * The line number of the first line of the function named in func_name. + * A zero value means the line number is not known. + */ + uint32_t funcLineNo; + + /* + * The last line number of the function named in func_name. + * A zero value means the line number is not known. + */ + uint32_t funcEndLineNo; + + /** + * CUPTI start timestamp + */ + uint64_t start; + + /** + * CUPTI end timestamp + */ + uint64_t end; + + /** + * CUDA device id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuDeviceId; + + /** + * CUDA context id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuContextId; + + /** + * CUDA stream id + * Valid only if deviceType is acc_device_nvidia. + */ + uint32_t cuStreamId; + + /** + * The ID of the process where the OpenACC activity is executing. + */ + uint32_t cuProcessId; + + /** + * The ID of the thread where the OpenACC activity is executing. + */ + uint32_t cuThreadId; + + /** + * The OpenACC correlation ID.
+ * Valid only if deviceType is acc_device_nvidia. + * If not 0, it uniquely identifies this record. It is identical to the + * externalId in the preceding external correlation record of type + * CUPTI_EXTERNAL_CORRELATION_KIND_OPENACC. + */ + uint32_t externalId; + + /* + * A pointer to a null-terminated string containing the name of or path to + * the source file, if known, or a null pointer if not. + */ + const char *srcFile; + + /* + * A pointer to a null-terminated string containing the name of the + * function in which the event occurred. + */ + const char *funcName; + + /* --- end of common CUpti_ActivityOpenAcc part --- */ +} CUpti_ActivityOpenAccOther; + + +/** + * \brief The base activity record for OpenMp records. + * + * \see CUpti_ActivityKind + */ +typedef struct PACKED_ALIGNMENT { + + /** + * The kind of this activity. + */ + CUpti_ActivityKind kind; + + /** + * CUPTI OpenMP event kind (\see CUpti_OpenMpEventKind) + */ + CUpti_OpenMpEventKind eventKind; + + /* + * Version number + */ + uint32_t version; + + /** + * ThreadId + */ + uint32_t threadId; + + /** + * CUPTI start timestamp + */ + uint64_t start; + + /** + * CUPTI end timestamp + */ + uint64_t end; + + /** + * The ID of the process where the OpenMP activity is executing. + */ + uint32_t cuProcessId; + + /** + * The ID of the thread where the OpenMP activity is executing. + */ + uint32_t cuThreadId; + +} CUpti_ActivityOpenMp; + +/** + * \brief The kind of external APIs supported for correlation. + * + * Custom correlation kinds are reserved for usage in external tools.
+ * + * \see CUpti_ActivityExternalCorrelation + */ +typedef enum { + CUPTI_EXTERNAL_CORRELATION_KIND_INVALID = 0, + + /** + * The external API is unknown to CUPTI + */ + CUPTI_EXTERNAL_CORRELATION_KIND_UNKNOWN = 1, + + /** + * The external API is OpenACC + */ + CUPTI_EXTERNAL_CORRELATION_KIND_OPENACC = 2, + + /** + * The external API is custom0 + */ + CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0 = 3, + + /** + * The external API is custom1 + */ + CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM1 = 4, + + /** + * The external API is custom2 + */ + CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM2 = 5, + + /** + * Add new kinds before this line + */ + CUPTI_EXTERNAL_CORRELATION_KIND_SIZE, + + CUPTI_EXTERNAL_CORRELATION_KIND_FORCE_INT = 0x7fffffff +} CUpti_ExternalCorrelationKind; + +/** + * \brief The activity record for correlation with external records + * + * This activity record correlates native CUDA records (e.g. CUDA Driver API, + * kernels, memcpys, ...) with records from external APIs such as OpenACC. + * (CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION). + * + * \see CUpti_ActivityKind + */ +typedef struct PACKED_ALIGNMENT { + /** + * The kind of this activity. + */ + CUpti_ActivityKind kind; + + /** + * The kind of external API this record correlated to. + */ + CUpti_ExternalCorrelationKind externalKind; + + /** + * The correlation ID of the associated non-CUDA API record. + * The exact field in the associated external record depends + * on that record's activity kind (\see externalKind). + */ + uint64_t externalId; + + /** + * The correlation ID of the associated CUDA driver or runtime API record. + */ + uint32_t correlationId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t reserved; +} CUpti_ActivityExternalCorrelation; + +/** +* \brief The device type for device connected to NVLink. +*/ +typedef enum { + CUPTI_DEV_TYPE_INVALID = 0, + /** + * The device type is GPU. + */ + CUPTI_DEV_TYPE_GPU = 1, + /** + * The device type is NVLink processing unit in CPU. 
+ */ + CUPTI_DEV_TYPE_NPU = 2, + CUPTI_DEV_TYPE_FORCE_INT = 0x7fffffff +} CUpti_DevType; + +/** +* \brief NVLink information. (deprecated in CUDA 9.0) +* +* This structure gives capabilities of each logical NVLink connection between two devices, +* gpu<->gpu or gpu<->CPU, which can be used to understand the topology. +* NVLink information is now reported using the +* CUpti_ActivityNvLink2 activity record. +*/ + +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_NVLINK. + */ + CUpti_ActivityKind kind; + /** + * NVLink version. + */ + uint32_t nvlinkVersion; + /** + * Type of device 0 \ref CUpti_DevType + */ + CUpti_DevType typeDev0; + /** + * Type of device 1 \ref CUpti_DevType + */ + CUpti_DevType typeDev1; + /** + * If typeDev0 is CUPTI_DEV_TYPE_GPU, UUID for device 0. \ref CUpti_ActivityDevice4. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev0; + /** + * If typeDev1 is CUPTI_DEV_TYPE_GPU, UUID for device 1. \ref CUpti_ActivityDevice4. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev1; + /** + * Flag gives capabilities of the link \see CUpti_LinkFlag + */ + uint32_t flag; + /** + * Number of physical NVLinks present between two devices. + */ + uint32_t physicalNvLinkCount; + /** + * Port numbers for maximum 4 NVLinks connected to device 0. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT.
+ * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev0[4]; + /** + * Port numbers for maximum 4 NVLinks connected to device 1. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev1[4]; + /** + * Bandwidth of NVLink in kbytes/sec + */ + uint64_t bandwidth; +} CUpti_ActivityNvLink; + +/** +* \brief NVLink information. (deprecated in CUDA 10.0) +* +* This structure gives capabilities of each logical NVLink connection between two devices, +* gpu<->gpu or gpu<->CPU, which can be used to understand the topology. +* NvLink information is now reported using the +* CUpti_ActivityNvLink4 activity record. +*/ + +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_NVLINK. + */ + CUpti_ActivityKind kind; + /** + * NvLink version. + */ + uint32_t nvlinkVersion; + /** + * Type of device 0 \ref CUpti_DevType + */ + CUpti_DevType typeDev0; + /** + * Type of device 1 \ref CUpti_DevType + */ + CUpti_DevType typeDev1; + /** + * If typeDev0 is CUPTI_DEV_TYPE_GPU, UUID for device 0. \ref CUpti_ActivityDevice4. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev0; + /** + * If typeDev1 is CUPTI_DEV_TYPE_GPU, UUID for device 1. \ref CUpti_ActivityDevice4. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU.
First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev1; + /** + * Flag gives capabilities of the link \see CUpti_LinkFlag + */ + uint32_t flag; + /** + * Number of physical NVLinks present between two devices. + */ + uint32_t physicalNvLinkCount; + /** + * Port numbers for maximum 16 NVLinks connected to device 0. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev0[CUPTI_MAX_NVLINK_PORTS]; + /** + * Port numbers for maximum 16 NVLinks connected to device 1. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev1[CUPTI_MAX_NVLINK_PORTS]; + /** + * Bandwidth of NVLink in kbytes/sec + */ + uint64_t bandwidth; +} CUpti_ActivityNvLink2; + +/** +* \brief NVLink information. +* +* This structure gives capabilities of each logical NVLink connection between two devices, +* gpu<->gpu or gpu<->CPU, which can be used to understand the topology. +* NvLink information is now reported using the +* CUpti_ActivityNvLink4 activity record. +*/ + +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_NVLINK. + */ + CUpti_ActivityKind kind; + /** + * NvLink version.
+ */ + uint32_t nvlinkVersion; + /** + * Type of device 0 \ref CUpti_DevType + */ + CUpti_DevType typeDev0; + /** + * Type of device 1 \ref CUpti_DevType + */ + CUpti_DevType typeDev1; + /** + * If typeDev0 is CUPTI_DEV_TYPE_GPU, UUID for device 0. \ref CUpti_ActivityDevice4. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev0; + /** + * If typeDev1 is CUPTI_DEV_TYPE_GPU, UUID for device 1. \ref CUpti_ActivityDevice4. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev1; + /** + * Flag gives capabilities of the link \see CUpti_LinkFlag + */ + uint32_t flag; + /** + * Number of physical NVLinks present between two devices. + */ + uint32_t physicalNvLinkCount; + /** + * Port numbers for maximum 16 NVLinks connected to device 0. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev0[CUPTI_MAX_NVLINK_PORTS]; + /** + * Port numbers for maximum 16 NVLinks connected to device 1. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. 
+ */ + int8_t portDev1[CUPTI_MAX_NVLINK_PORTS]; + /** + * Bandwidth of NVLink in kbytes/sec + */ + uint64_t bandwidth; + /** + * NVSwitch is connected as an intermediate node. + */ + uint8_t nvswitchConnected; + /** + * Undefined. Reserved for internal use. + */ + uint8_t pad[7]; +} CUpti_ActivityNvLink3; + +/** +* \brief NVLink information. +* +* This structure gives capabilities of each logical NVLink connection between two devices, +* gpu<->gpu or gpu<->CPU, which can be used to understand the topology. +*/ + +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_NVLINK. + */ + CUpti_ActivityKind kind; + /** + * NvLink version. + */ + uint32_t nvlinkVersion; + /** + * Type of device 0 \ref CUpti_DevType + */ + CUpti_DevType typeDev0; + /** + * Type of device 1 \ref CUpti_DevType + */ + CUpti_DevType typeDev1; + /** + * If typeDev0 is CUPTI_DEV_TYPE_GPU, UUID for device 0. \ref CUpti_ActivityDevice4. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev0; + /** + * If typeDev1 is CUPTI_DEV_TYPE_GPU, UUID for device 1. \ref CUpti_ActivityDevice4. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev1; + /** + * Flag gives capabilities of the link \see CUpti_LinkFlag + */ + uint32_t flag; + /** + * Number of physical NVLinks present between two devices. + */ + uint32_t physicalNvLinkCount; + /** + * Port numbers for maximum 32 NVLinks connected to device 0. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, ignore this field.
+   * In case of an invalid/unknown port number, this field will be set
+   * to the value CUPTI_NVLINK_INVALID_PORT.
+   * This will be used to correlate the metric values to an individual
+   * physical link and attribute traffic to the logical NVLink in
+   * the topology.
+   */
+  int8_t portDev0[CUPTI_MAX_NVLINK_PORTS];
+  /**
+   * Port numbers for a maximum of 32 NVLinks connected to device 1.
+   * If typeDev1 is CUPTI_DEV_TYPE_NPU, ignore this field.
+   * In case of an invalid/unknown port number, this field will be set
+   * to the value CUPTI_NVLINK_INVALID_PORT.
+   * This will be used to correlate the metric values to an individual
+   * physical link and attribute traffic to the logical NVLink in
+   * the topology.
+   */
+  int8_t portDev1[CUPTI_MAX_NVLINK_PORTS];
+  /**
+   * Bandwidth of NVLink in kbytes/sec
+   */
+  uint64_t bandwidth;
+  /**
+   * NVSwitch is connected as an intermediate node.
+   */
+  uint8_t nvswitchConnected;
+  /**
+   * Undefined. Reserved for internal use.
+   */
+  uint8_t pad[7];
+} CUpti_ActivityNvLink4;
+
+#define CUPTI_MAX_GPUS 32
+/**
+ * Field to differentiate whether a PCIE activity record
+ * is for a GPU or a PCI bridge.
+ */
+typedef enum {
+  /**
+   * PCIE GPU record
+   */
+  CUPTI_PCIE_DEVICE_TYPE_GPU = 0,
+
+  /**
+   * PCIE Bridge record
+   */
+  CUPTI_PCIE_DEVICE_TYPE_BRIDGE = 1,
+
+  CUPTI_PCIE_DEVICE_TYPE_FORCE_INT = 0x7fffffff
+} CUpti_PcieDeviceType;
+
+/**
+ * \brief PCI device information required to construct the topology.
+ *
+ * This structure gives the capabilities of the GPUs and PCI bridges connected to the PCIE bus,
+ * which can be used to understand the topology.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_KIND_PCIE.
+   */
+  CUpti_ActivityKind kind;
+  /**
+   * Type of device in the topology, \ref CUpti_PcieDeviceType. If type is
+   * CUPTI_PCIE_DEVICE_TYPE_GPU, use devId for id and gpuAttr; if type is
+   * CUPTI_PCIE_DEVICE_TYPE_BRIDGE, use bridgeId for id and bridgeAttr.
+ */ + CUpti_PcieDeviceType type; + /** + * A unique identifier for GPU or Bridge in Topology + */ + union { + /** + * GPU device ID + */ + CUdevice devId; + /** + * A unique identifier for Bridge in the Topology + */ + uint32_t bridgeId; + } id; + + /** + * Domain for the GPU or Bridge, required to identify which PCIE bus it belongs to in + * multiple NUMA systems. + */ + uint32_t domain; + /** + * PCIE Generation of GPU or Bridge. + */ + uint16_t pcieGeneration; + /** + * Link rate of the GPU or bridge in gigatransfers per second (GT/s) + */ + uint16_t linkRate; + /** + * Link width of the GPU or bridge + */ + uint16_t linkWidth; + + /** + * Upstream bus ID for the GPU or PCI bridge. Required to identify which bus it is + * connected to in the topology. + */ + uint16_t upstreamBus; + + /** + * Attributes for more information about GPU (gpuAttr) or PCI Bridge (bridgeAttr) + */ + union { + struct { + /** + * UUID for the device. \ref CUpti_ActivityDevice4. + */ + CUuuid uuidDev; + /** + * CUdevice with which this device has P2P capability. + * This can also be obtained by querying cuDeviceCanAccessPeer or + * cudaDeviceCanAccessPeer APIs + */ + CUdevice peerDev[CUPTI_MAX_GPUS]; + } gpuAttr; + + struct { + /** + * The downstream bus number, used to search downstream devices/bridges connected + * to this bridge. + */ + uint16_t secondaryBus; + /** + * Device ID of the bridge + */ + uint16_t deviceId; + /** + * Vendor ID of the bridge + */ + uint16_t vendorId; + /** + * Padding for alignment + */ + uint16_t pad0; + } bridgeAttr; + } attr; +} CUpti_ActivityPcie; + +/** + * \brief PCIE Generation. 
+ *
+ * Enumeration of PCIE generations for
+ * the PCIE activity attribute pcieGeneration.
+ */
+typedef enum {
+  /**
+   * PCIE Generation 1
+   */
+  CUPTI_PCIE_GEN_GEN1 = 1,
+  /**
+   * PCIE Generation 2
+   */
+  CUPTI_PCIE_GEN_GEN2 = 2,
+  /**
+   * PCIE Generation 3
+   */
+  CUPTI_PCIE_GEN_GEN3 = 3,
+  /**
+   * PCIE Generation 4
+   */
+  CUPTI_PCIE_GEN_GEN4 = 4,
+  /**
+   * PCIE Generation 5
+   */
+  CUPTI_PCIE_GEN_GEN5 = 5,
+
+  CUPTI_PCIE_GEN_FORCE_INT = 0x7fffffff
+} CUpti_PcieGen;
+
+/**
+ * \brief The activity record for an instantaneous CUPTI event.
+ *
+ * This activity record represents a CUPTI event value
+ * (CUPTI_ACTIVITY_KIND_EVENT) sampled at a particular instant.
+ * This activity record kind is not produced by the activity API but is
+ * included for completeness and ease-of-use. Profiler frameworks built on
+ * top of CUPTI that collect event data at a particular time may choose to
+ * use this type to store the collected event data.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_KIND_INSTANTANEOUS_EVENT.
+   */
+  CUpti_ActivityKind kind;
+
+  /**
+   * The event ID.
+   */
+  CUpti_EventID id;
+
+  /**
+   * The event value.
+   */
+  uint64_t value;
+
+  /**
+   * The timestamp at which the event is sampled.
+   */
+  uint64_t timestamp;
+
+  /**
+   * The device ID.
+   */
+  uint32_t deviceId;
+  /**
+   * Undefined. Reserved for internal use.
+   */
+  uint32_t reserved;
+} CUpti_ActivityInstantaneousEvent;
+
+/**
+ * \brief The activity record for an instantaneous CUPTI event
+ * with event domain instance information.
+ *
+ * This activity record represents a CUPTI event value for a
+ * specific event domain instance
+ * (CUPTI_ACTIVITY_KIND_EVENT_INSTANCE) sampled at a particular instant.
+ * This activity record kind is not produced by the activity API but is
+ * included for completeness and ease-of-use. Profiler frameworks built on
+ * top of CUPTI that collect event data may choose to use this type to store the
+ * collected event data. This activity record should be used when
+ * event domain instance information needs to be associated with the
+ * event.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_KIND_INSTANTANEOUS_EVENT_INSTANCE.
+   */
+  CUpti_ActivityKind kind;
+
+  /**
+   * The event ID.
+   */
+  CUpti_EventID id;
+
+  /**
+   * The event value.
+   */
+  uint64_t value;
+
+  /**
+   * The timestamp at which the event is sampled.
+   */
+  uint64_t timestamp;
+
+  /**
+   * The device ID.
+   */
+  uint32_t deviceId;
+  /**
+   * The event domain instance.
+   */
+  uint8_t instance;
+  /**
+   * Undefined. Reserved for internal use.
+   */
+  uint8_t pad[3];
+} CUpti_ActivityInstantaneousEventInstance;
+
+/**
+ * \brief The activity record for an instantaneous CUPTI metric.
+ *
+ * This activity record represents the collection of a CUPTI metric
+ * value (CUPTI_ACTIVITY_KIND_METRIC) at a particular instant.
+ * This activity record kind is not produced by the activity API but
+ * is included for completeness and ease-of-use. Profiler frameworks built
+ * on top of CUPTI that collect metric data may choose to use this type to
+ * store the collected metric data.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_KIND_INSTANTANEOUS_METRIC.
+   */
+  CUpti_ActivityKind kind;
+
+  /**
+   * The metric ID.
+   */
+  CUpti_MetricID id;
+
+  /**
+   * The metric value.
+   */
+  CUpti_MetricValue value;
+
+  /**
+   * The timestamp at which the metric is sampled.
+   */
+  uint64_t timestamp;
+
+  /**
+   * The device ID.
+   */
+  uint32_t deviceId;
+
+  /**
+   * The properties of this metric. \see CUpti_ActivityFlag
+   */
+  uint8_t flags;
+
+  /**
+   * Undefined. Reserved for internal use.
+   */
+  uint8_t pad[3];
+} CUpti_ActivityInstantaneousMetric;
+
+/**
+ * \brief The instantaneous activity record for a CUPTI metric with instance
+ * information.
+ + * This activity record represents a CUPTI metric value + * for a specific metric domain instance + * (CUPTI_ACTIVITY_KIND_METRIC_INSTANCE) sampled at a particular time. This + * activity record kind is not produced by the activity API but is included for + * completeness and ease-of-use. Profiler frameworks built on top of + * CUPTI that collect metric data may choose to use this type to store + * the collected metric data. This activity record should be used when + * metric domain instance information needs to be associated with the + * metric. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_INSTANTANEOUS_METRIC_INSTANCE. + */ + CUpti_ActivityKind kind; + + /** + * The metric ID. + */ + CUpti_MetricID id; + + /** + * The metric value. + */ + CUpti_MetricValue value; + + /** + * The timestamp at which metric is sampled + */ + uint64_t timestamp; + + /** + * The device id + */ + uint32_t deviceId; + + /** + * The properties of this metric. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The metric domain instance + */ + uint8_t instance; + /** + * Undefined. reserved for internal use + */ + uint8_t pad[2]; +} CUpti_ActivityInstantaneousMetricInstance; + +/** + * \brief The types of JIT entry. + * + * To be used in CUpti_ActivityJit. + */ +typedef enum { + CUPTI_ACTIVITY_JIT_ENTRY_INVALID= 0, + /** + * PTX to CUBIN. + */ + CUPTI_ACTIVITY_JIT_ENTRY_PTX_TO_CUBIN = 1, + /** + * NVVM-IR to PTX + */ + CUPTI_ACTIVITY_JIT_ENTRY_NVVM_IR_TO_PTX = 2, + + CUPTI_ACTIVITY_JIT_ENTRY_TYPE_FORCE_INT = 0x7fffffff +} CUpti_ActivityJitEntryType; + +/** + * \brief The types of JIT compilation operations. + * + * To be used in CUpti_ActivityJit. + */ + +typedef enum { + CUPTI_ACTIVITY_JIT_OPERATION_INVALID = 0, + /** + * Loaded from the compute cache. + */ + CUPTI_ACTIVITY_JIT_OPERATION_CACHE_LOAD = 1, + /** + * Stored in the compute cache. + */ + CUPTI_ACTIVITY_JIT_OPERATION_CACHE_STORE = 2, + /** + * JIT compilation. 
+ */
+  CUPTI_ACTIVITY_JIT_OPERATION_COMPILE = 3,
+
+  CUPTI_ACTIVITY_JIT_OPERATION_TYPE_FORCE_INT = 0x7fffffff
+} CUpti_ActivityJitOperationType;
+
+/**
+ * \brief The activity record for JIT operations.
+ *
+ * This activity represents the JIT operations (compile, load, store) of a CUmodule
+ * from the compute cache.
+ * It gives the exact hashed path from which the cached module is loaded,
+ * or where the module will be stored after Just-In-Time (JIT) compilation.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_KIND_JIT.
+   */
+  CUpti_ActivityKind kind;
+  /**
+   * The JIT entry type.
+   */
+  CUpti_ActivityJitEntryType jitEntryType;
+  /**
+   * The JIT operation type.
+   */
+  CUpti_ActivityJitOperationType jitOperationType;
+  /**
+   * The device ID.
+   */
+  uint32_t deviceId;
+  /**
+   * The start timestamp for the JIT operation, in ns. A value of 0 for
+   * both the start and end timestamps indicates that timestamp
+   * information could not be collected for the JIT operation.
+   */
+  uint64_t start;
+  /**
+   * The end timestamp for the JIT operation, in ns. A value of 0 for both
+   * the start and end timestamps indicates that timestamp information
+   * could not be collected for the JIT operation.
+   */
+  uint64_t end;
+  /**
+   * The correlation ID of the JIT operation to which
+   * the records belong. Each JIT operation is
+   * assigned a unique correlation ID that is identical to the
+   * correlation ID in the driver or runtime API activity record that
+   * launched the JIT operation.
+   */
+  uint32_t correlationId;
+  /**
+   * Internal use.
+   */
+  uint32_t padding;
+  /**
+   * The correlation ID used to correlate JIT compilation, load and store operations.
+   * Each JIT compilation unit is assigned a unique correlation ID
+   * at the time of the JIT compilation. This correlation ID can be used
+   * to find the matching JIT cache load/store records.
+   */
+  uint64_t jitOperationCorrelationId;
+  /**
+   * The size of the compute cache.
+ */
+  uint64_t cacheSize;
+  /**
+   * The path where the fat binary is cached.
+   */
+  const char* cachePath;
+} CUpti_ActivityJit;
+
+
+/**
+ * \brief The activity record for the trace of a graph execution.
+ *
+ * This activity record represents the execution of a graph without giving visibility
+ * into the execution of its nodes. This is intended to reduce the overhead of tracing
+ * each node. The activity kind is CUPTI_ACTIVITY_KIND_GRAPH_TRACE.
+ */
+typedef struct {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_KIND_GRAPH_TRACE.
+   */
+  CUpti_ActivityKind kind;
+
+  /**
+   * The correlation ID of the graph launch. Each graph launch is
+   * assigned a unique correlation ID that is identical to the
+   * correlation ID in the driver API activity record that launched
+   * the graph.
+   */
+  uint32_t correlationId;
+
+  /**
+   * The start timestamp for the graph execution, in ns. A value of 0
+   * for both the start and end timestamps indicates that timestamp
+   * information could not be collected for the graph.
+   */
+  uint64_t start;
+
+  /**
+   * The end timestamp for the graph execution, in ns. A value of 0
+   * for both the start and end timestamps indicates that timestamp
+   * information could not be collected for the graph.
+   */
+  uint64_t end;
+
+  /**
+   * The ID of the device where the graph execution is occurring.
+   */
+  uint32_t deviceId;
+
+  /**
+   * The unique ID of the graph that is launched.
+   */
+  uint32_t graphId;
+
+  /**
+   * The ID of the context where the graph is being launched.
+   */
+  uint32_t contextId;
+
+  /**
+   * The ID of the stream where the graph is being launched.
+   */
+  uint32_t streamId;
+
+  /**
+   * This field is reserved for internal use.
+   */
+  void *reserved;
+} CUpti_ActivityGraphTrace;
+
+END_PACKED_ALIGNMENT
+
+/**
+ * \brief Activity attributes.
+ *
+ * These attributes are used to control the behavior of the activity
+ * API.
+ */
+typedef enum {
+  /**
+   * The device memory size (in bytes) reserved for storing profiling data for concurrent
+   * kernels (activity kind \ref CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL), memcopies and memsets
+   * for each buffer on a context. The value is a size_t.
+   *
+   * There is a limit on how many device buffers can be allocated per context. The user
+   * can query and set this limit using the attribute
+   * \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_POOL_LIMIT.
+   * CUPTI does not pre-allocate all the buffers; it pre-allocates only as many
+   * buffers as set by the attribute \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_PRE_ALLOCATE_VALUE.
+   * When all of the data in a buffer is consumed, the buffer is added to the reuse pool, and
+   * CUPTI picks a buffer from this pool when a new buffer is needed. Thus the memory
+   * footprint does not scale with the kernel count. Applications with a high density
+   * of kernels, memcopies and memsets might cause CUPTI to allocate more device buffers.
+   * CUPTI allocates another buffer only when it runs out of buffers in the
+   * reuse pool.
+   *
+   * Since buffer allocation happens in the main application thread, this might result
+   * in stalls in the critical path. CUPTI pre-allocates 3 buffers of the same size to
+   * mitigate this issue. The user can query and set the pre-allocation limit using the
+   * attribute \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_PRE_ALLOCATE_VALUE.
+   *
+   * A larger buffer size leaves less device memory for the application.
+   * A smaller buffer size increases the risk of dropping timestamps for
+   * records if too many kernels, memcopies or memsets are launched at one time.
+   *
+   * This value only applies to new buffer allocations. Set this value before initializing
+   * CUDA or before creating a context to ensure it is considered for the following allocations.
+   *
+   * The default value is 3200000 (~3MB), which can accommodate profiling data
+   * for up to 100,000 kernels, memcopies and memsets combined.
+   *
+   * Note: Starting with the CUDA 11.2 release, CUPTI allocates the profiling buffer in
+   * pinned host memory by default, as this might help improve the performance of the
+   * tracing run. Refer to the description of the attribute
+   * \ref CUPTI_ACTIVITY_ATTR_MEM_ALLOCATION_TYPE_HOST_PINNED for more details.
+   * The size of the memory and the maximum number of pools are still controlled by the attributes
+   * \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE and \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_POOL_LIMIT.
+   *
+   * Note: The actual amount of device memory per buffer reserved by CUPTI might be larger.
+   */
+  CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE = 0,
+  /**
+   * The device memory size (in bytes) reserved for storing profiling
+   * data for CDP operations for each buffer on a context. The
+   * value is a size_t.
+   *
+   * A larger buffer size means fewer flush operations but
+   * consumes more device memory. This value only applies to new
+   * allocations.
+   *
+   * Set this value before initializing CUDA or before creating a
+   * context to ensure it is considered for the following allocations.
+   *
+   * The default value is 8388608 (8MB).
+   *
+   * Note: The actual amount of device memory per context reserved by
+   * CUPTI might be larger.
+   */
+  CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE_CDP = 1,
+  /**
+   * The maximum number of device memory buffers per context. The value is a size_t.
+   *
+   * For an application with a high rate of kernel launches, memcopies and memsets,
+   * a bigger pool limit helps in timestamp collection for all these activities at
+   * the expense of a larger memory footprint.
+   * Refer to the description of the attribute \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE
+   * for more details.
+   *
+   * Setting this value will not modify the number of memory buffers
+   * currently stored.
+   *
+   * Set this value before initializing CUDA to ensure the limit is
+   * not exceeded.
+   *
+   * The default value is 250.
+   */
+  CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_POOL_LIMIT = 2,
+
+  /**
+   * The profiling semaphore pool size reserved for storing profiling data for
+   * serialized kernel tracing (activity kind \ref CUPTI_ACTIVITY_KIND_KERNEL)
+   * for each context. The value is a size_t.
+   *
+   * There is a limit on how many semaphore pools can be allocated per context. The user
+   * can query and set this limit using the attribute
+   * \ref CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_LIMIT.
+   * CUPTI does not pre-allocate all the semaphore pools; it pre-allocates only as many
+   * semaphore pools as set by the attribute \ref CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_PRE_ALLOCATE_VALUE.
+   * When all of the data in a semaphore pool is consumed, the pool is added to the reuse pool, and
+   * CUPTI picks a semaphore pool from the reuse pool when a new semaphore pool is needed. Thus the memory
+   * footprint does not scale with the kernel count. Applications with a high density
+   * of kernels might cause CUPTI to allocate more semaphore pools.
+   * CUPTI allocates another semaphore pool only when it runs out of semaphore pools in the
+   * reuse pool.
+   *
+   * Since semaphore pool allocation happens in the main application thread, this might result
+   * in stalls in the critical path. CUPTI pre-allocates 3 semaphore pools of the same size to
+   * mitigate this issue. The user can query and set the pre-allocation limit using the
+   * attribute \ref CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_PRE_ALLOCATE_VALUE.
+   *
+   * A larger semaphore pool size leaves less device memory for the application.
+   * A smaller semaphore pool size increases the risk of dropping timestamps for
+   * kernel records if too many kernels are issued/launched at one time.
+   *
+   * This value only applies to new semaphore pool allocations. Set this value before initializing
+   * CUDA or before creating a context to ensure it is considered for the following allocations.
+   *
+   * The default value is 25000, which can accommodate profiling data for up to 25,000 kernels.
+   *
+   */
+  CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_SIZE = 3,
+  /**
+   * The maximum number of profiling semaphore pools per context. The value is a size_t.
+   *
+   * For an application with a high rate of kernel launches, having a bigger
+   * pool limit helps in timestamp collection for all the kernels, at the
+   * expense of a larger device memory footprint.
+   * Refer to the description of the attribute \ref CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_SIZE
+   * for more details.
+   *
+   * Set this value before initializing CUDA to ensure the limit is not exceeded.
+   *
+   * The default value is 250.
+   */
+  CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_LIMIT = 4,
+
+  /**
+   * The flag to indicate whether the user should provide a zeroed activity buffer.
+   * The value is a uint8_t.
+   *
+   * If the value of this attribute is non-zero, the user should provide
+   * a zeroed buffer in the \ref CUpti_BuffersCallbackRequestFunc.
+   * If the user does not provide a zeroed buffer after setting this to non-zero,
+   * the activity buffer may contain some uninitialized values when CUPTI returns it in
+   * \ref CUpti_BuffersCallbackCompleteFunc.
+   *
+   * If the value of this attribute is zero, CUPTI will initialize the user buffer
+   * received in the \ref CUpti_BuffersCallbackRequestFunc to zero before filling it.
+   * If the user sets this to zero, a few stalls may appear in the critical path because CUPTI
+   * will zero out the buffer in the main thread.
+   * Set this value before returning from \ref CUpti_BuffersCallbackRequestFunc to
+   * ensure it is considered for all the subsequent user buffers.
+   *
+   * The default value is 0.
+   */
+  CUPTI_ACTIVITY_ATTR_ZEROED_OUT_ACTIVITY_BUFFER = 5,
+
+  /**
+   * Number of device buffers to pre-allocate for a context during the initialization phase.
+   * The value is a size_t.
+   *
+   * Refer to the description of the attribute \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE
+   * for details.
+   *
+   * This value must be less than the maximum number of device buffers set using
+   * the attribute \ref CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_POOL_LIMIT.
+   *
+   * Set this value before initializing CUDA or before creating a context to ensure it
+   * is considered by CUPTI.
+   *
+   * The default value is set to 3 to ping-pong between these buffers (if possible).
+   */
+  CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_PRE_ALLOCATE_VALUE = 6,
+
+  /**
+   * Number of profiling semaphore pools to pre-allocate for a context during the
+   * initialization phase. The value is a size_t.
+   *
+   * Refer to the description of the attribute \ref CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_SIZE
+   * for details.
+   *
+   * This value must be less than the maximum number of profiling semaphore pools set
+   * using the attribute \ref CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_POOL_LIMIT.
+   *
+   * Set this value before initializing CUDA or before creating a context to ensure it
+   * is considered by CUPTI.
+   *
+   * The default value is set to 3 to ping-pong between these pools (if possible).
+   */
+  CUPTI_ACTIVITY_ATTR_PROFILING_SEMAPHORE_PRE_ALLOCATE_VALUE = 7,
+
+  /**
+   * Allocate page-locked (pinned) host memory for storing profiling data for concurrent
+   * kernels, memcopies and memsets for each buffer on a context. The value is a uint8_t.
+   *
+   * Starting with the CUDA 11.2 release, CUPTI allocates the profiling buffer in pinned host
+   * memory by default, as this might help improve the performance of the tracing run.
+   * Allocating excessive amounts of pinned memory may degrade system performance, since it
+   * reduces the amount of memory available to the system for paging. For this reason the user
+   * might want to change the location from pinned host memory to device memory by setting the
+   * value of this attribute to 0.
+   *
+   * The default value is 1.
+ */ + CUPTI_ACTIVITY_ATTR_MEM_ALLOCATION_TYPE_HOST_PINNED = 8, + + + CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_FORCE_INT = 0x7fffffff +} CUpti_ActivityAttribute; + +/** + * \brief Thread-Id types. + * + * CUPTI uses different methods to obtain the thread-id depending on the + * support and the underlying platform. This enum documents these methods + * for each type. APIs \ref cuptiSetThreadIdType and \ref cuptiGetThreadIdType + * can be used to set and get the thread-id type. + */ +typedef enum { + /** + * Default type + * Windows uses API GetCurrentThreadId() + * Linux/Mac/Android/QNX use POSIX pthread API pthread_self() + */ + CUPTI_ACTIVITY_THREAD_ID_TYPE_DEFAULT = 0, + + /** + * This type is based on the system API available on the underlying platform + * and thread-id obtained is supposed to be unique for the process lifetime. + * Windows uses API GetCurrentThreadId() + * Linux uses syscall SYS_gettid + * Mac uses syscall SYS_thread_selfid + * Android/QNX use gettid() + */ + CUPTI_ACTIVITY_THREAD_ID_TYPE_SYSTEM = 1, + + CUPTI_ACTIVITY_THREAD_ID_TYPE_FORCE_INT = 0x7fffffff +} CUpti_ActivityThreadIdType; + +/** + * \brief Get the CUPTI timestamp. + * + * Returns a timestamp normalized to correspond with the start and end + * timestamps reported in the CUPTI activity records. The timestamp is + * reported in nanoseconds. + * + * \param timestamp Returns the CUPTI timestamp + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p timestamp is NULL + */ +CUptiResult CUPTIAPI cuptiGetTimestamp(uint64_t *timestamp); + +/** + * \brief Get the ID of a context. + * + * Get the ID of a context. + * + * \param context The context + * \param contextId Returns a process-unique ID for the context + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_NOT_INITIALIZED + * \retval CUPTI_ERROR_INVALID_CONTEXT The context is NULL or not valid. 
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p contextId is NULL + */ +CUptiResult CUPTIAPI cuptiGetContextId(CUcontext context, uint32_t *contextId); + +/** + * \brief Get the ID of a stream. + * + * Get the ID of a stream. The stream ID is unique within a context + * (i.e. all streams within a context will have unique stream + * IDs). + * + * \param context If non-NULL then the stream is checked to ensure + * that it belongs to this context. Typically this parameter should be + * null. + * \param stream The stream + * \param streamId Returns a context-unique ID for the stream + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_NOT_INITIALIZED + * \retval CUPTI_ERROR_INVALID_STREAM if unable to get stream ID, or + * if \p context is non-NULL and \p stream does not belong to the + * context + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p streamId is NULL + * + * **DEPRECATED** This method is deprecated as of CUDA 8.0. + * Use method cuptiGetStreamIdEx instead. + */ +CUptiResult CUPTIAPI cuptiGetStreamId(CUcontext context, CUstream stream, uint32_t *streamId); + +/** +* \brief Get the ID of a stream. +* +* Get the ID of a stream. The stream ID is unique within a context +* (i.e. all streams within a context will have unique stream +* IDs). +* +* \param context If non-NULL then the stream is checked to ensure +* that it belongs to this context. Typically this parameter should be +* null. 
+* \param stream The stream +* \param perThreadStream Flag to indicate if program is compiled for per-thread streams +* \param streamId Returns a context-unique ID for the stream +* +* \retval CUPTI_SUCCESS +* \retval CUPTI_ERROR_NOT_INITIALIZED +* \retval CUPTI_ERROR_INVALID_STREAM if unable to get stream ID, or +* if \p context is non-NULL and \p stream does not belong to the +* context +* \retval CUPTI_ERROR_INVALID_PARAMETER if \p streamId is NULL +*/ +CUptiResult CUPTIAPI cuptiGetStreamIdEx(CUcontext context, CUstream stream, uint8_t perThreadStream, uint32_t *streamId); + +/** + * \brief Get the ID of a device + * + * If \p context is NULL, returns the ID of the device that contains + * the currently active context. If \p context is non-NULL, returns + * the ID of the device which contains that context. Operates in a + * similar manner to cudaGetDevice() or cuCtxGetDevice() but may be + * called from within callback functions. + * + * \param context The context, or NULL to indicate the current context. + * \param deviceId Returns the ID of the device that is current for + * the calling thread. + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_NOT_INITIALIZED + * \retval CUPTI_ERROR_INVALID_DEVICE if unable to get device ID + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p deviceId is NULL + */ +CUptiResult CUPTIAPI cuptiGetDeviceId(CUcontext context, uint32_t *deviceId); + +/** + * \brief Get the unique ID of a graph node + * + * Returns the unique ID of the CUDA graph node. + * + * \param node The graph node. + * \param nodeId Returns the unique ID of the node + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_NOT_INITIALIZED + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p node is NULL + */ +CUptiResult CUPTIAPI cuptiGetGraphNodeId(CUgraphNode node, uint64_t *nodeId); + +/** + * \brief Get the unique ID of graph + * + * Returns the unique ID of CUDA graph. + * + * \param graph The graph. 
+ * \param pId Returns the unique ID of the graph
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p graph is NULL
+ */
+CUptiResult CUPTIAPI cuptiGetGraphId(CUgraph graph, uint32_t *pId);
+
+/**
+ * \brief Enable collection of a specific kind of activity record.
+ *
+ * Enable collection of a specific kind of activity record. Multiple
+ * kinds can be enabled by calling this function multiple times. By
+ * default all activity kinds are disabled for collection.
+ *
+ * \param kind The kind of activity record to collect
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_NOT_COMPATIBLE if the activity kind cannot be enabled
+ * \retval CUPTI_ERROR_INVALID_KIND if the activity kind is not supported
+ */
+CUptiResult CUPTIAPI cuptiActivityEnable(CUpti_ActivityKind kind);
+
+/**
+ * \brief Enable collection of a specific kind of activity record. For certain activity kinds
+ * it dumps existing records.
+ *
+ * In general, the behavior of this API is similar to the API \ref cuptiActivityEnable, i.e. it
+ * enables the collection of a specific kind of activity record.
+ * Additionally, this API can help in dumping the records for activities that happened in
+ * the past, before the corresponding activity kind was enabled.
+ * The API makes it possible to get records for the current resource allocations done in CUDA:
+ * For CUPTI_ACTIVITY_KIND_DEVICE, existing device records are dumped.
+ * For CUPTI_ACTIVITY_KIND_CONTEXT, existing context records are dumped.
+ * For CUPTI_ACTIVITY_KIND_STREAM, existing stream records are dumped.
+ * For CUPTI_ACTIVITY_KIND_NVLINK, existing NVLINK records are dumped.
+ * For CUPTI_ACTIVITY_KIND_PCIE, existing PCIE records are dumped.
+ * For other activities, the behavior is similar to the API \ref cuptiActivityEnable.
+ *
+ * Device records are emitted in CUPTI on CUDA driver initialization. Those records
+ * can only be retrieved by the user if CUPTI is attached before CUDA initialization.
+ * Context and stream records are emitted on context and stream creation.
+ * The use case of the API is to provide the records for CUDA resources
+ * (contexts/streams/devices) that are currently active if the user attaches CUPTI late.
+ *
+ * Before calling this function, the user must register buffer callbacks
+ * to get the activity records by calling \ref cuptiActivityRegisterCallbacks.
+ * If the user does not register the buffers and calls the API \ref cuptiActivityEnableAndDump,
+ * then CUPTI will enable the activity kind but not provide any records for that
+ * activity kind.
+ *
+ * \param kind The kind of activity record to collect
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_UNKNOWN if the buffer is not initialized
+ * \retval CUPTI_ERROR_NOT_COMPATIBLE if the activity kind cannot be enabled
+ * \retval CUPTI_ERROR_INVALID_KIND if the activity kind is not supported
+ */
+CUptiResult CUPTIAPI cuptiActivityEnableAndDump(CUpti_ActivityKind kind);
+
+/**
+ * \brief Disable collection of a specific kind of activity record.
+ *
+ * Disable collection of a specific kind of activity record. Multiple
+ * kinds can be disabled by calling this function multiple times. By
+ * default all activity kinds are disabled for collection.
+ *
+ * \param kind The kind of activity record to stop collecting
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_KIND if the activity kind is not supported
+ */
+CUptiResult CUPTIAPI cuptiActivityDisable(CUpti_ActivityKind kind);
+
+/**
+ * \brief Enable collection of a specific kind of activity record for
+ * a context.
+ *
+ * Enable collection of a specific kind of activity record for a
+ * context. The setting done by this API supersedes the global
+ * settings for activity records enabled by \ref cuptiActivityEnable.
+ * Multiple kinds can be enabled by calling this function multiple
+ * times.
+ *
+ * \param context The context for which activity is to be enabled
+ * \param kind The kind of activity record to collect
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_NOT_COMPATIBLE if the activity kind cannot be enabled
+ * \retval CUPTI_ERROR_INVALID_KIND if the activity kind is not supported
+ */
+CUptiResult CUPTIAPI cuptiActivityEnableContext(CUcontext context, CUpti_ActivityKind kind);
+
+/**
+ * \brief Disable collection of a specific kind of activity record for
+ * a context.
+ *
+ * Disable collection of a specific kind of activity record for a context.
+ * The setting made by this API supersedes the global settings
+ * for activity records.
+ * Multiple kinds can be disabled by calling this function multiple times.
+ *
+ * \param context The context for which activity is to be disabled
+ * \param kind The kind of activity record to stop collecting
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_KIND if the activity kind is not supported
+ */
+CUptiResult CUPTIAPI cuptiActivityDisableContext(CUcontext context, CUpti_ActivityKind kind);
+
+/**
+ * \brief Get the number of activity records that were dropped because of
+ * insufficient buffer space.
+ *
+ * Get the number of records that were dropped because of insufficient
+ * buffer space. The dropped count includes records that could not be
+ * recorded because CUPTI did not have activity buffer space available
+ * for the record (because the CUpti_BuffersCallbackRequestFunc
+ * callback did not return an empty buffer of sufficient size) and
+ * also CDP records that could not be recorded because the device-side
+ * buffer was full (size is controlled by the
+ * CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE_CDP attribute). The dropped
+ * count maintained for the queue is reset to zero when this function
+ * is called.
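+ *
+ * For example, a client will typically check the dropped count whenever an
+ * activity buffer is returned (a minimal sketch; error checking omitted):
+ * \code
+ * size_t dropped = 0;
+ * cuptiActivityGetNumDroppedRecords(NULL, 0, &dropped);
+ * if (dropped != 0) {
+ *   printf("Dropped %u activity records\n", (unsigned int)dropped);
+ * }
+ * \endcode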
+ * + * \param context The context, or NULL to get dropped count from global queue + * \param streamId The stream ID + * \param dropped The number of records that were dropped since the last call + * to this function. + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_NOT_INITIALIZED + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p dropped is NULL + */ +CUptiResult CUPTIAPI cuptiActivityGetNumDroppedRecords(CUcontext context, uint32_t streamId, + size_t *dropped); + +/** + * \brief Iterate over the activity records in a buffer. + * + * This is a helper function to iterate over the activity records in a + * buffer. A buffer of activity records is typically obtained by + * receiving a CUpti_BuffersCallbackCompleteFunc callback. + * + * An example of typical usage: + * \code + * CUpti_Activity *record = NULL; + * CUptiResult status = CUPTI_SUCCESS; + * do { + * status = cuptiActivityGetNextRecord(buffer, validSize, &record); + * if(status == CUPTI_SUCCESS) { + * // Use record here... + * } + * else if (status == CUPTI_ERROR_MAX_LIMIT_REACHED) + * break; + * else { + * goto Error; + * } + * } while (1); + * \endcode + * + * \param buffer The buffer containing activity records + * \param record Inputs the previous record returned by + * cuptiActivityGetNextRecord and returns the next activity record + * from the buffer. If input value is NULL, returns the first activity + * record in the buffer. Records of kind CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL + * may contain invalid (0) timestamps, indicating that no timing information could + * be collected for lack of device memory. + * \param validBufferSizeBytes The number of valid bytes in the buffer. + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_NOT_INITIALIZED + * \retval CUPTI_ERROR_MAX_LIMIT_REACHED if no more records in the buffer + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p buffer is NULL. 
+ */
+CUptiResult CUPTIAPI cuptiActivityGetNextRecord(uint8_t* buffer, size_t validBufferSizeBytes,
+                                                CUpti_Activity **record);
+
+/**
+ * \brief Function type for callback used by CUPTI to request an empty
+ * buffer for storing activity records.
+ *
+ * This callback function signals the CUPTI client that an activity
+ * buffer is needed by CUPTI. The activity buffer is used by CUPTI to
+ * store activity records. The callback function can decline the
+ * request by setting \p *buffer to NULL. In this case CUPTI may drop
+ * activity records.
+ *
+ * \param buffer Returns the new buffer. If set to NULL then no buffer
+ * is returned.
+ * \param size Returns the size of the returned buffer.
+ * \param maxNumRecords Returns the maximum number of records that
+ * should be placed in the buffer. If 0 then the buffer is filled with
+ * as many records as possible. If > 0 the buffer is filled with at
+ * most that many records before it is returned.
+ */
+typedef void (CUPTIAPI *CUpti_BuffersCallbackRequestFunc)(
+    uint8_t **buffer,
+    size_t *size,
+    size_t *maxNumRecords);
+
+/**
+ * \brief Function type for callback used by CUPTI to return a buffer
+ * of activity records.
+ *
+ * This callback function returns to the CUPTI client a buffer
+ * containing activity records. The buffer contains \p validSize
+ * bytes of activity records which should be read using
+ * cuptiActivityGetNextRecord. The number of dropped records can be
+ * read using cuptiActivityGetNumDroppedRecords. After this call CUPTI
+ * relinquishes ownership of the buffer and will not use it
+ * anymore. The client may return the buffer to CUPTI using the
+ * CUpti_BuffersCallbackRequestFunc callback.
+ * Note: From CUDA 6.0 onwards, all buffers returned by this callback are
+ * global buffers, i.e. there is no context/stream specific buffer.
+ * The user needs to parse the global buffer to extract the context/stream
+ * specific activity records.
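+ *
+ * A minimal sketch of a request/complete callback pair (the 32 KB buffer
+ * size is an arbitrary choice, and error checking is omitted):
+ * \code
+ * static void CUPTIAPI bufferRequested(uint8_t **buffer, size_t *size,
+ *                                      size_t *maxNumRecords)
+ * {
+ *   *size = 32 * 1024;
+ *   *buffer = (uint8_t *)malloc(*size);
+ *   *maxNumRecords = 0;  // fill the buffer with as many records as possible
+ * }
+ *
+ * static void CUPTIAPI bufferCompleted(CUcontext ctx, uint32_t streamId,
+ *                                      uint8_t *buffer, size_t size,
+ *                                      size_t validSize)
+ * {
+ *   CUpti_Activity *record = NULL;
+ *   while (cuptiActivityGetNextRecord(buffer, validSize, &record) == CUPTI_SUCCESS) {
+ *     // process the record...
+ *   }
+ *   free(buffer);
+ * }
+ * \endcode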
+ *
+ * \param context The context this buffer is associated with. If NULL, the
+ * buffer is associated with the global activities. This field is deprecated
+ * as of CUDA 6.0 and will always be NULL.
+ * \param streamId The stream id this buffer is associated with.
+ * This field is deprecated as of CUDA 6.0 and will always be 0.
+ * \param buffer The activity record buffer.
+ * \param size The total size of the buffer in bytes as set in
+ * CUpti_BuffersCallbackRequestFunc.
+ * \param validSize The number of valid bytes in the buffer.
+ */
+typedef void (CUPTIAPI *CUpti_BuffersCallbackCompleteFunc)(
+    CUcontext context,
+    uint32_t streamId,
+    uint8_t *buffer,
+    size_t size,
+    size_t validSize);
+
+/**
+ * \brief Registers callback functions with CUPTI for activity buffer
+ * handling.
+ *
+ * This function registers two callback functions to be used in asynchronous
+ * buffer handling. If registered, activity record buffers are handled using
+ * asynchronous requested/completed callbacks from CUPTI.
+ *
+ * Registering these callbacks prevents the client from using CUPTI's
+ * blocking enqueue/dequeue functions.
+ *
+ * \param funcBufferRequested callback which is invoked when an empty
+ * buffer is requested by CUPTI
+ * \param funcBufferCompleted callback which is invoked when a buffer
+ * containing activity records is available from CUPTI
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if either \p
+ * funcBufferRequested or \p funcBufferCompleted is NULL
+ */
+CUptiResult CUPTIAPI cuptiActivityRegisterCallbacks(CUpti_BuffersCallbackRequestFunc funcBufferRequested,
+                                                    CUpti_BuffersCallbackCompleteFunc funcBufferCompleted);
+
+/**
+ * \brief Wait for all activity records to be delivered via the
+ * completion callback.
+ *
+ * This function does not return until all activity records associated
+ * with the specified context/stream are returned to the CUPTI client
+ * using the callback registered in cuptiActivityRegisterCallbacks.
To
+ * ensure that all activity records are complete, the requested
+ * stream(s), if any, are synchronized.
+ *
+ * If \p context is NULL, the global activity records (i.e. those not
+ * associated with a particular stream) are flushed (in this case no
+ * streams are synchronized). If \p context is a valid CUcontext and
+ * \p streamId is 0, the buffers of all streams of this context are
+ * flushed. Otherwise, the buffers of the specified stream in this
+ * context are flushed.
+ *
+ * Before calling this function, the buffer handling callback API
+ * must be activated by calling cuptiActivityRegisterCallbacks.
+ *
+ * \param context A valid CUcontext or NULL.
+ * \param streamId The stream ID.
+ * \param flag The flag can be set to indicate a forced flush. See CUpti_ActivityFlag
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_OPERATION if not preceded
+ * by a successful call to cuptiActivityRegisterCallbacks
+ * \retval CUPTI_ERROR_UNKNOWN an internal error occurred
+ *
+ * **DEPRECATED** This method is deprecated.
+ * The \p context and \p streamId parameters are ignored. Use cuptiActivityFlushAll
+ * to flush all data.
+ */
+CUptiResult CUPTIAPI cuptiActivityFlush(CUcontext context, uint32_t streamId, uint32_t flag);
+
+/**
+ * \brief Request to deliver activity records via the buffer completion callback.
+ *
+ * This function returns the activity records associated with all contexts/streams
+ * (and the global buffers not associated with any stream) to the CUPTI client
+ * using the callback registered in cuptiActivityRegisterCallbacks.
+ *
+ * This is a blocking call, but it does not issue any CUDA synchronization calls
+ * implicitly; thus, it is not guaranteed that all activities are completed on the
+ * underlying devices. An activity record is considered complete if all of its
+ * information, including any timestamps, has been filled in.
It is the client's
+ * responsibility to issue necessary CUDA synchronization calls before calling
+ * this function if all activity records with complete information are expected
+ * to be delivered.
+ *
+ * Behavior of the function based on the input flag:
+ * - For the default flush, i.e. when \p flag is set to 0, it returns all the
+ * activity buffers whose activity records are all complete; the buffers need
+ * not be full, though. It does not return buffers which have one or more incomplete
+ * records. A default flush can be done at a regular interval in a separate thread.
+ * - For a forced flush, i.e. when the flag CUPTI_ACTIVITY_FLAG_FLUSH_FORCED is passed
+ * to the function, it returns all the activity buffers, including the ones which have
+ * one or more incomplete activity records. Clients are advised to do a
+ * forced flush before the termination of the profiling session to allow the remaining
+ * buffers to be delivered. In general, this can be done in an at-exit handler.
+ *
+ * Before calling this function, the buffer handling callback API must be activated
+ * by calling cuptiActivityRegisterCallbacks.
+ *
+ * \param flag The flag can be set to indicate a forced flush. See CUpti_ActivityFlag
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_OPERATION if not preceded by a
+ * successful call to cuptiActivityRegisterCallbacks
+ * \retval CUPTI_ERROR_UNKNOWN an internal error occurred
+ *
+ * \see cuptiActivityFlushPeriod
+ */
+CUptiResult CUPTIAPI cuptiActivityFlushAll(uint32_t flag);
+
+/**
+ * \brief Read an activity API attribute.
+ *
+ * Read an activity API attribute and return it in \p *value.
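+ *
+ * For example, the device buffer size can be read and then adjusted as
+ * follows (a minimal sketch; the 8 MB value is an arbitrary choice and
+ * error checking is omitted):
+ * \code
+ * size_t attrValue = 0;
+ * size_t attrSize = sizeof(size_t);
+ * cuptiActivityGetAttribute(CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE,
+ *                           &attrSize, &attrValue);
+ * attrValue = 8 * 1024 * 1024;
+ * cuptiActivitySetAttribute(CUPTI_ACTIVITY_ATTR_DEVICE_BUFFER_SIZE,
+ *                           &attrSize, &attrValue);
+ * \endcode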
+ *
+ * \param attr The attribute to read
+ * \param valueSize Size of the buffer pointed to by \p value, and
+ * returns the number of bytes written to \p value
+ * \param value Returns the value of the attribute
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p valueSize or \p value is NULL, or
+ * if \p attr is not an activity attribute
+ * \retval CUPTI_ERROR_PARAMETER_SIZE_NOT_SUFFICIENT Indicates that
+ * the \p value buffer is too small to hold the attribute value.
+ */
+CUptiResult CUPTIAPI cuptiActivityGetAttribute(CUpti_ActivityAttribute attr,
+                                               size_t *valueSize, void* value);
+
+/**
+ * \brief Write an activity API attribute.
+ *
+ * Write an activity API attribute.
+ *
+ * \param attr The attribute to write
+ * \param valueSize The size, in bytes, of the value
+ * \param value The attribute value to write
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p valueSize or \p value is NULL, or
+ * if \p attr is not an activity attribute
+ * \retval CUPTI_ERROR_PARAMETER_SIZE_NOT_SUFFICIENT Indicates that
+ * the \p value buffer is too small to hold the attribute value.
+ */
+CUptiResult CUPTIAPI cuptiActivitySetAttribute(CUpti_ActivityAttribute attr,
+                                               size_t *valueSize, void* value);
+
+
+/**
+ * \brief Set Unified Memory Counter configuration.
+ *
+ * \param config A pointer to \ref CUpti_ActivityUnifiedMemoryCounterConfig structures
+ * containing Unified Memory counter configuration.
+ * \param count Number of Unified Memory counter configuration structures
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p config is NULL or
+ * any parameter in the \p config structures is not a valid value
+ * \retval CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED One potential reason is that
+ * the platform (OS/arch) does not support the unified memory counters
+ * \retval CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED_ON_DEVICE Indicates that the device
+ * does not support the unified memory counters
+ * \retval CUPTI_ERROR_UM_PROFILING_NOT_SUPPORTED_ON_NON_P2P_DEVICES Indicates that
+ * a multi-GPU configuration without P2P support between any pair of devices
+ * does not support the unified memory counters
+ */
+CUptiResult CUPTIAPI cuptiActivityConfigureUnifiedMemoryCounter(CUpti_ActivityUnifiedMemoryCounterConfig *config, uint32_t count);
+
+/**
+ * \brief Get auto boost state
+ *
+ * Profiling results can be inconsistent when auto boost is enabled.
+ * CUPTI tries to disable auto boost while profiling. It can fail to do so
+ * if the user does not have the required permissions or if the CUDA_AUTO_BOOST
+ * environment variable is set. This function can be used to query whether auto
+ * boost is enabled.
+ *
+ * \param context A valid CUcontext.
+ * \param state A pointer to \ref CUpti_ActivityAutoBoostState structure which
+ * contains the current state and the id of the process that has requested the
+ * current state
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p context or \p state is NULL
+ * \retval CUPTI_ERROR_NOT_SUPPORTED Indicates that the device does not support auto boost
+ * \retval CUPTI_ERROR_UNKNOWN an internal error occurred
+ */
+CUptiResult CUPTIAPI cuptiGetAutoBoostState(CUcontext context, CUpti_ActivityAutoBoostState *state);
+
+/**
+ * \brief Set PC sampling configuration.
+ *
+ * For Pascal and older GPU architectures this API must be called before enabling
+ * activity kind CUPTI_ACTIVITY_KIND_PC_SAMPLING. There is no such requirement
+ * for Volta and newer GPU architectures.
+ *
+ * For Volta and newer GPU architectures, if this API is called in the middle of
+ * execution, the PC sampling configuration will be updated for subsequent kernel launches.
+ *
+ * \param ctx The context
+ * \param config A pointer to \ref CUpti_ActivityPCSamplingConfig structure
+ * containing PC sampling configuration.
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called while
+ * some valid event collection method is set.
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p config is NULL or
+ * any parameter in the \p config structures is not a valid value
+ * \retval CUPTI_ERROR_NOT_SUPPORTED Indicates that the system/device
+ * does not support PC sampling
+ */
+CUptiResult CUPTIAPI cuptiActivityConfigurePCSampling(CUcontext ctx, CUpti_ActivityPCSamplingConfig *config);
+
+/**
+ * \brief Returns the last error from a CUPTI call or callback
+ *
+ * Returns the last error that has been produced by any of the CUPTI API calls
+ * or the callbacks in the same host thread and resets it to CUPTI_SUCCESS.
+ */
+CUptiResult CUPTIAPI cuptiGetLastError(void);
+
+/**
+ * \brief Set the thread-id type
+ *
+ * CUPTI uses the method corresponding to the set type to generate the thread-id.
+ * See enum \ref CUpti_ActivityThreadIdType for the list of methods.
+ * Activity records that have a thread-id field will contain values generated by
+ * this method.
+ * The thread-id type must not be changed during the profiling session, to
+ * avoid thread-id value mismatches across activity records.
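+ *
+ * For example, to report system-wide thread ids (a minimal sketch):
+ * \code
+ * cuptiSetThreadIdType(CUPTI_ACTIVITY_THREAD_ID_TYPE_SYSTEM);
+ * \endcode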
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_SUPPORTED if \p type is not supported on the platform
+ */
+CUptiResult CUPTIAPI cuptiSetThreadIdType(CUpti_ActivityThreadIdType type);
+
+/**
+ * \brief Get the thread-id type
+ *
+ * Returns the thread-id type used in CUPTI
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p type is NULL
+ */
+CUptiResult CUPTIAPI cuptiGetThreadIdType(CUpti_ActivityThreadIdType *type);
+
+/**
+* \brief Check support for a compute capability
+*
+* This function is used to check the support for a device based on
+* its compute capability. It sets \p support when the compute
+* capability is supported by the current version of CUPTI, and clears
+* it otherwise. This version of CUPTI might not support all GPUs sharing
+* the same compute capability. It is suggested to use the API \ref
+* cuptiDeviceSupported, which provides accurate information.
+*
+* \param major The major revision number of the compute capability
+* \param minor The minor revision number of the compute capability
+* \param support Pointer to an integer to return the support status
+*
+* \retval CUPTI_SUCCESS
+* \retval CUPTI_ERROR_INVALID_PARAMETER if \p support is NULL
+*
+* \sa ::cuptiDeviceSupported
+*/
+CUptiResult CUPTIAPI cuptiComputeCapabilitySupported(int major, int minor, int *support);
+
+/**
+* \brief Check support for a compute device
+*
+* This function is used to check the support for a compute device.
+* It sets \p support when the device is supported by the current
+* version of CUPTI, and clears it otherwise.
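+*
+* A minimal sketch of the check (error checking omitted):
+* \code
+* CUdevice dev;
+* int supported = 0;
+* cuDeviceGet(&dev, 0);
+* cuptiDeviceSupported(dev, &supported);
+* \endcode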
+*
+* \param dev The device handle returned by CUDA Driver API cuDeviceGet
+* \param support Pointer to an integer to return the support status
+*
+* \retval CUPTI_SUCCESS
+* \retval CUPTI_ERROR_INVALID_PARAMETER if \p support is NULL
+* \retval CUPTI_ERROR_INVALID_DEVICE if \p dev is not a valid device
+*
+* \sa ::cuptiComputeCapabilitySupported
+*/
+CUptiResult CUPTIAPI cuptiDeviceSupported(CUdevice dev, int *support);
+
+/**
+ * This indicates the virtualization mode in which the CUDA device is running
+ */
+typedef enum {
+  /**
+   * No virtualization mode is associated with the device,
+   * i.e. it is a bare-metal GPU
+   */
+  CUPTI_DEVICE_VIRTUALIZATION_MODE_NONE = 0,
+  /**
+   * The device is associated with the pass-through GPU.
+   * In this mode, an entire physical GPU is directly assigned
+   * to one virtual machine (VM).
+   */
+  CUPTI_DEVICE_VIRTUALIZATION_MODE_PASS_THROUGH = 1,
+  /**
+   * The device is associated with the virtual GPU (vGPU).
+   * In this mode, multiple virtual machines (VMs) have simultaneous,
+   * direct access to a single physical GPU.
+   */
+  CUPTI_DEVICE_VIRTUALIZATION_MODE_VIRTUAL_GPU = 2,
+
+  CUPTI_DEVICE_VIRTUALIZATION_MODE_FORCE_INT = 0x7fffffff
+} CUpti_DeviceVirtualizationMode;
+
+/**
+ * \brief Query the virtualization mode of the device
+ *
+ * This function is used to query the virtualization mode of the CUDA device.
+ *
+ * \param dev The device handle returned by CUDA Driver API cuDeviceGet
+ * \param mode Pointer to a CUpti_DeviceVirtualizationMode to return the virtualization mode
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_INVALID_DEVICE if \p dev is not a valid device
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p mode is NULL
+ *
+ */
+CUptiResult CUPTIAPI cuptiDeviceVirtualizationMode(CUdevice dev, CUpti_DeviceVirtualizationMode *mode);
+
+/**
+ * \brief Detach CUPTI from the running process
+ *
+ * This API detaches CUPTI from the running process.
It destroys and cleans up all the
+ * resources associated with CUPTI in the current process. After CUPTI detaches from the process,
+ * the process will continue to run with no CUPTI attached to it.
+ * For safe operation of the API, it is recommended that this API be invoked from the exit callsite
+ * of any CUDA Driver or Runtime API. Otherwise, the CUPTI client needs to make sure that
+ * the required CUDA synchronization and CUPTI activity buffer flush are done before calling the API.
+ * Sample code showing the usage of the API in the CUPTI callback handler code:
+ * \code
+   void CUPTIAPI
+   cuptiCallbackHandler(void *userdata, CUpti_CallbackDomain domain,
+                        CUpti_CallbackId cbid, void *cbdata)
+   {
+     const CUpti_CallbackData *cbInfo = (CUpti_CallbackData *)cbdata;
+
+     // Take this code path when CUPTI detach is requested
+     if (detachCupti) {
+       switch(domain)
+       {
+         case CUPTI_CB_DOMAIN_RUNTIME_API:
+         case CUPTI_CB_DOMAIN_DRIVER_API:
+           if (cbInfo->callbackSite == CUPTI_API_EXIT) {
+             // call the CUPTI detach API
+             cuptiFinalize();
+           }
+           break;
+         default:
+           break;
+       }
+     }
+   }
+ \endcode
+ */
+CUptiResult CUPTIAPI cuptiFinalize(void);
+
+/**
+ * \brief Push an external correlation id for the calling thread
+ *
+ * This function notifies CUPTI that the calling thread is entering an external API region.
+ * When a CUPTI activity API record is created while within an external API region and
+ * CUPTI_ACTIVITY_KIND_EXTERNAL_CORRELATION is enabled, the activity API record will
+ * be preceded by a CUpti_ActivityExternalCorrelation record for each \ref CUpti_ExternalCorrelationKind.
+ *
+ * \param kind The kind of external API that activities should be correlated with.
+ * \param id External correlation id.
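+ *
+ * For example, an external tracing region can be bracketed with a push/pop
+ * pair (a minimal sketch; \c myRegionId is a hypothetical client-side id):
+ * \code
+ * uint64_t lastId = 0;
+ * cuptiActivityPushExternalCorrelationId(CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0, myRegionId);
+ * // ... launch CUDA work belonging to the external region ...
+ * cuptiActivityPopExternalCorrelationId(CUPTI_EXTERNAL_CORRELATION_KIND_CUSTOM0, &lastId);
+ * \endcode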
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_INVALID_PARAMETER The external API kind is invalid
+ */
+CUptiResult CUPTIAPI cuptiActivityPushExternalCorrelationId(CUpti_ExternalCorrelationKind kind, uint64_t id);
+
+/**
+ * \brief Pop an external correlation id for the calling thread
+ *
+ * This function notifies CUPTI that the calling thread is leaving an external API region.
+ *
+ * \param kind The kind of external API that activities should be correlated with.
+ * \param lastId If the function returns successfully, contains the last external correlation id for this \p kind; can be NULL.
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_INVALID_PARAMETER The external API kind is invalid.
+ * \retval CUPTI_ERROR_QUEUE_EMPTY No external id is currently associated with \p kind.
+ */
+CUptiResult CUPTIAPI cuptiActivityPopExternalCorrelationId(CUpti_ExternalCorrelationKind kind, uint64_t *lastId);
+
+/**
+ * \brief Controls the collection of queued and submitted timestamps for kernels.
+ *
+ * This API is used to control the collection of queued and submitted timestamps
+ * for kernels whose records are provided through the struct \ref CUpti_ActivityKernel8.
+ * The default value is 0, i.e. these timestamps are not collected. This API needs
+ * to be called before CUDA initialization, and this setting should not be
+ * changed during the profiling session.
+ *
+ * \param enable is a boolean, denoting whether these timestamps should be
+ * collected
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ */
+CUptiResult CUPTIAPI cuptiActivityEnableLatencyTimestamps(uint8_t enable);
+
+/**
+ * \brief Sets the flush period for the worker thread
+ *
+ * CUPTI creates a worker thread to minimize perturbation of the application-created
+ * threads.
CUPTI offloads certain operations from the application threads to the worker
+ * thread; these include synchronization of profiling resources between host and device,
+ * and delivery of the activity buffers to the client using the callback registered in
+ * cuptiActivityRegisterCallbacks. For performance reasons, CUPTI wakes up the worker
+ * thread based on certain heuristics.
+ *
+ * This API is used to control the flush period of the worker thread. This setting
+ * overrides the CUPTI heuristics. Setting the time to zero disables the periodic flush and
+ * restores the default behavior.
+ *
+ * A periodic flush can return only those activity buffers which are full and have all of their
+ * activity records completed.
+ *
+ * The API \ref cuptiActivityFlushAll can still be used to flush the data on demand, even
+ * when the client has set a periodic flush.
+ *
+ * \param time The flush period in msec
+ *
+ * \retval CUPTI_SUCCESS
+ * \retval CUPTI_ERROR_NOT_INITIALIZED
+ *
+ * \see cuptiActivityFlushAll
+ */
+CUptiResult CUPTIAPI cuptiActivityFlushPeriod(uint32_t time);
+
+/**
+ * \brief Controls the collection of launch attributes for kernels.
+ *
+ * This API is used to control the collection of launch attributes for kernels whose
+ * records are provided through the struct \ref CUpti_ActivityKernel8.
+ * The default value is 0, i.e. these attributes are not collected.
+ *
+ * \param enable is a boolean denoting whether these launch attributes should be collected
+ */
+CUptiResult CUPTIAPI cuptiActivityEnableLaunchAttributes(uint8_t enable);
+
+/**
+ * \brief Function type for callback used by CUPTI to request a timestamp
+ * to be used in activity records.
+ *
+ * This callback function signals the CUPTI client that a timestamp needs
+ * to be returned. This timestamp is treated as a normalized timestamp
+ * to be used for various purposes in CUPTI, for example, to store the start and
+ * end timestamps reported in the CUPTI activity records.
+ * The returned timestamp must be in nanoseconds.
+ *
+ * \sa ::cuptiActivityRegisterTimestampCallback
+ */
+typedef uint64_t (CUPTIAPI *CUpti_TimestampCallbackFunc)(void);
+
+/**
+ * \brief Registers callback function with CUPTI for providing timestamp.
+ *
+ * This function registers a callback function to obtain a timestamp of the user's
+ * choice instead of using the CUPTI-provided timestamp.
+ * By default CUPTI uses different methods, based on the underlying platform,
+ * to retrieve the timestamp:
+ * Linux and Android use clock_gettime(CLOCK_REALTIME, ..).
+ * Windows uses QueryPerformanceCounter().
+ * Mac uses mach_absolute_time().
+ * QNX uses ClockCycles().
+ * Timestamps retrieved using these methods are converted to nanoseconds if needed
+ * before usage.
+ *
+ * The registration of the timestamp callback should be done before any of the CUPTI
+ * activity kinds are enabled, to make sure that all the records report the timestamp using
+ * the callback function registered through the cuptiActivityRegisterTimestampCallback API.
+ *
+ * Changing the timestamp callback function in CUPTI through the
+ * cuptiActivityRegisterTimestampCallback API in the middle of the profiling
+ * session can cause records generated prior to the change to report
+ * timestamps through the previous timestamp method.
+ * + * \param funcTimestamp callback which is invoked when a timestamp is + * needed by CUPTI + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p funcTimestamp is NULL + * \retval CUPTI_ERROR_NOT_INITIALIZED + */ +CUptiResult CUPTIAPI cuptiActivityRegisterTimestampCallback(CUpti_TimestampCallbackFunc funcTimestamp); + +/** @} */ /* END CUPTI_ACTIVITY_API */ + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility pop +#endif + +#if defined(__cplusplus) +} +#endif + +#endif /*_CUPTI_ACTIVITY_H_*/ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_activity_deprecated.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_activity_deprecated.h new file mode 100644 index 0000000000000000000000000000000000000000..084ea84ed7be17af6d1634d772fd270fb5a0351f --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_activity_deprecated.h @@ -0,0 +1,4784 @@ +/* + * Copyright 2011-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. 
IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(_CUPTI_ACTIVITY_DEPRECATED_H_) +#define _CUPTI_ACTIVITY_DEPRECATED_H_ + +#if defined(__cplusplus) +extern "C" { +#endif + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility push(default) +#endif + +/** + * \brief The kinds of activity records. + * + * Each activity record kind represents information about a GPU or an + * activity occurring on a CPU or GPU. 
Each kind is associated with an
+ * activity record structure that holds the information associated
+ * with the kind.
+ * \see CUpti_ActivityOverhead
+ * \see CUpti_ActivityOverhead2
+ * \see CUpti_ActivityDevice
+ * \see CUpti_ActivityDevice2
+ * \see CUpti_ActivityDevice3
+ * \see CUpti_ActivityDevice4
+ * \see CUpti_ActivityKernel
+ * \see CUpti_ActivityKernel2
+ * \see CUpti_ActivityKernel3
+ * \see CUpti_ActivityKernel4
+ * \see CUpti_ActivityKernel5
+ * \see CUpti_ActivityKernel6
+ * \see CUpti_ActivityKernel7
+ * \see CUpti_ActivityKernel8
+ * \see CUpti_ActivityMemcpy
+ * \see CUpti_ActivityMemcpy3
+ * \see CUpti_ActivityMemcpy4
+ * \see CUpti_ActivityMemcpyPtoP
+ * \see CUpti_ActivityMemcpyPtoP2
+ * \see CUpti_ActivityMemcpyPtoP3
+ * \see CUpti_ActivityMemset
+ * \see CUpti_ActivityMemset2
+ * \see CUpti_ActivityMemset3
+ * \see CUpti_ActivityMemory2
+ * \see CUpti_ActivityMemoryPool
+ * \see CUpti_ActivityMarker
+ * \see CUpti_ActivityGlobalAccess
+ * \see CUpti_ActivityGlobalAccess2
+ * \see CUpti_ActivityBranch
+ * \see CUpti_ActivityPCSampling
+ * \see CUpti_ActivityPCSampling2
+ * \see CUpti_ActivityUnifiedMemoryCounter
+ * \see CUpti_ActivityNvLink
+ * \see CUpti_ActivityNvLink2
+ * \see CUpti_ActivityNvLink3
+ */
+
+/**
+ * \brief The activity record for CUPTI and driver overheads.
+ * (Deprecated in CUDA 12.2)
+ *
+ * This activity record provides CUPTI and driver overhead information
+ * (CUPTI_ACTIVITY_OVERHEAD). These records are now reported using
+ * the CUpti_ActivityOverhead3 record.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_OVERHEAD.
+   */
+  CUpti_ActivityKind kind;
+
+  /**
+   * The kind of overhead, CUPTI, DRIVER, COMPILER etc.
+   */
+  CUpti_ActivityOverheadKind overheadKind;
+
+  /**
+   * The kind of activity object that the overhead is associated with.
+   */
+  CUpti_ActivityObjectKind objectKind;
+
+  /**
+   * The identifier for the activity object.
'objectKind' indicates + * which ID is valid for this record. + */ + CUpti_ActivityObjectKindId objectId; + + /** + * The start timestamp for the overhead, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the overhead. + */ + uint64_t start; + + /** + * The end timestamp for the overhead, in ns. A value of 0 for both + * the start and end timestamps indicates that timestamp information + * could not be collected for the overhead. + */ + uint64_t end; +} CUpti_ActivityOverhead; + +/** + * \brief The activity record for CUPTI and driver overheads. + * + * This activity record provides CUPTI and driver overhead information + * (CUPTI_ACTIVITY_OVERHEAD). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_OVERHEAD. + */ + CUpti_ActivityKind kind; + + /** + * The kind of overhead, CUPTI, DRIVER, COMPILER etc. + */ + CUpti_ActivityOverheadKind overheadKind; + + /** + * The kind of activity object that the overhead is associated with. + */ + CUpti_ActivityObjectKind objectKind; + + /** + * The identifier for the activity object. 'objectKind' indicates + * which ID is valid for this record. + */ + CUpti_ActivityObjectKindId objectId; + + /** + * The start timestamp for the overhead, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the overhead. + */ + uint64_t start; + + /** + * The end timestamp for the overhead, in ns. A value of 0 for both + * the start and end timestamps indicates that timestamp information + * could not be collected for the overhead. + */ + uint64_t end; + + /** + * The correlation ID of the overhead operation to which the + * records belong. This ID is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the overhead operation.
+ * In some cases, it can be zero, such as for CUPTI_ACTIVITY_OVERHEAD_CUPTI_BUFFER_FLUSH records. + */ + uint32_t correlationId; + + /** + * Reserved for internal use. + */ + uint32_t reserved0; +} CUpti_ActivityOverhead2; + +/** + * \brief The activity record for a device. (deprecated) + * + * This activity record represents information about a GPU device + * (CUPTI_ACTIVITY_KIND_DEVICE). + * Device activity is now reported using the + * CUpti_ActivityDevice5 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the device. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The global memory bandwidth available on the device, in + * kBytes/sec. + */ + uint64_t globalMemoryBandwidth; + + /** + * The amount of global memory on the device, in bytes. + */ + uint64_t globalMemorySize; + + /** + * The amount of constant memory on the device, in bytes. + */ + uint32_t constantMemorySize; + + /** + * The size of the L2 cache on the device, in bytes. + */ + uint32_t l2CacheSize; + + /** + * The number of threads per warp on the device. + */ + uint32_t numThreadsPerWarp; + + /** + * The core clock rate of the device, in kHz. + */ + uint32_t coreClockRate; + + /** + * Number of memory copy engines on the device. + */ + uint32_t numMemcpyEngines; + + /** + * Number of multiprocessors on the device. + */ + uint32_t numMultiprocessors; + + /** + * The maximum "instructions per cycle" possible on each device + * multiprocessor. + */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. 
+ */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. + */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. + */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. + */ + uint32_t maxGridDimY; + + /** + * Maximum allowed Z dimension for a grid. + */ + uint32_t maxGridDimZ; + + /** + * Compute capability for the device, major number. + */ + uint32_t computeCapabilityMajor; + + /** + * Compute capability for the device, minor number. + */ + uint32_t computeCapabilityMinor; + + /** + * The device ID. + */ + uint32_t id; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The device name. This name is shared across all activity records + * representing instances of the device, and so should not be + * modified. + */ + const char *name; +} CUpti_ActivityDevice; + +/** + * \brief The activity record for a device. (deprecated) + * + * This activity record represents information about a GPU device + * (CUPTI_ACTIVITY_KIND_DEVICE). + * Device activity is now reported using the + * CUpti_ActivityDevice5 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the device. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The global memory bandwidth available on the device, in + * kBytes/sec. 
+ */ + uint64_t globalMemoryBandwidth; + + /** + * The amount of global memory on the device, in bytes. + */ + uint64_t globalMemorySize; + + /** + * The amount of constant memory on the device, in bytes. + */ + uint32_t constantMemorySize; + + /** + * The size of the L2 cache on the device, in bytes. + */ + uint32_t l2CacheSize; + + /** + * The number of threads per warp on the device. + */ + uint32_t numThreadsPerWarp; + + /** + * The core clock rate of the device, in kHz. + */ + uint32_t coreClockRate; + + /** + * Number of memory copy engines on the device. + */ + uint32_t numMemcpyEngines; + + /** + * Number of multiprocessors on the device. + */ + uint32_t numMultiprocessors; + + /** + * The maximum "instructions per cycle" possible on each device + * multiprocessor. + */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum amount of shared memory available per multiprocessor, in bytes. + */ + uint32_t maxSharedMemoryPerMultiprocessor; + + /** + * Maximum number of 32-bit registers available per multiprocessor. + */ + uint32_t maxRegistersPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. + */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. + */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. 
+ */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. + */ + uint32_t maxGridDimY; + + /** + * Maximum allowed Z dimension for a grid. + */ + uint32_t maxGridDimZ; + + /** + * Compute capability for the device, major number. + */ + uint32_t computeCapabilityMajor; + + /** + * Compute capability for the device, minor number. + */ + uint32_t computeCapabilityMinor; + + /** + * The device ID. + */ + uint32_t id; + + /** + * ECC enabled flag for device + */ + uint32_t eccEnabled; + + /** + * The device UUID. This value is the globally unique immutable + * alphanumeric identifier of the device. + */ + CUuuid uuid; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The device name. This name is shared across all activity records + * representing instances of the device, and so should not be + * modified. + */ + const char *name; +} CUpti_ActivityDevice2; + +/** + * \brief The activity record for a device. (CUDA 7.0 onwards) + * + * This activity record represents information about a GPU device + * (CUPTI_ACTIVITY_KIND_DEVICE). + * Device activity is now reported using the + * CUpti_ActivityDevice5 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the device. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The global memory bandwidth available on the device, in + * kBytes/sec. + */ + uint64_t globalMemoryBandwidth; + + /** + * The amount of global memory on the device, in bytes. + */ + uint64_t globalMemorySize; + + /** + * The amount of constant memory on the device, in bytes. + */ + uint32_t constantMemorySize; + + /** + * The size of the L2 cache on the device, in bytes. + */ + uint32_t l2CacheSize; + + /** + * The number of threads per warp on the device. 
+ */ + uint32_t numThreadsPerWarp; + + /** + * The core clock rate of the device, in kHz. + */ + uint32_t coreClockRate; + + /** + * Number of memory copy engines on the device. + */ + uint32_t numMemcpyEngines; + + /** + * Number of multiprocessors on the device. + */ + uint32_t numMultiprocessors; + + /** + * The maximum "instructions per cycle" possible on each device + * multiprocessor. + */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum amount of shared memory available per multiprocessor, in bytes. + */ + uint32_t maxSharedMemoryPerMultiprocessor; + + /** + * Maximum number of 32-bit registers available per multiprocessor. + */ + uint32_t maxRegistersPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. + */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. + */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. + */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. + */ + uint32_t maxGridDimY; + + /** + * Maximum allowed Z dimension for a grid. + */ + uint32_t maxGridDimZ; + + /** + * Compute capability for the device, major number. + */ + uint32_t computeCapabilityMajor; + + /** + * Compute capability for the device, minor number. 
+ */ + uint32_t computeCapabilityMinor; + + /** + * The device ID. + */ + uint32_t id; + + /** + * ECC enabled flag for device + */ + uint32_t eccEnabled; + + /** + * The device UUID. This value is the globally unique immutable + * alphanumeric identifier of the device. + */ + CUuuid uuid; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The device name. This name is shared across all activity records + * representing instances of the device, and so should not be + * modified. + */ + const char *name; + + /** + * Flag to indicate whether the device is visible to CUDA. Users can + * set the device visibility using the CUDA_VISIBLE_DEVICES environment variable. + */ + uint8_t isCudaVisible; + + uint8_t reserved[7]; +} CUpti_ActivityDevice3; + +/** + * \brief The activity record for a device. (CUDA 11.6 onwards) + * + * This activity record represents information about a GPU device + * (CUPTI_ACTIVITY_KIND_DEVICE). + * Device activity is now reported using the + * CUpti_ActivityDevice5 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_DEVICE. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the device. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The global memory bandwidth available on the device, in + * kBytes/sec. + */ + uint64_t globalMemoryBandwidth; + + /** + * The amount of global memory on the device, in bytes. + */ + uint64_t globalMemorySize; + + /** + * The amount of constant memory on the device, in bytes. + */ + uint32_t constantMemorySize; + + /** + * The size of the L2 cache on the device, in bytes. + */ + uint32_t l2CacheSize; + + /** + * The number of threads per warp on the device. + */ + uint32_t numThreadsPerWarp; + + /** + * The core clock rate of the device, in kHz. + */ + uint32_t coreClockRate; + + /** + * Number of memory copy engines on the device.
+ */ + uint32_t numMemcpyEngines; + + /** + * Number of multiprocessors on the device. + */ + uint32_t numMultiprocessors; + + /** + * The maximum "instructions per cycle" possible on each device + * multiprocessor. + */ + uint32_t maxIPC; + + /** + * Maximum number of warps that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxWarpsPerMultiprocessor; + + /** + * Maximum number of blocks that can be present on a multiprocessor + * at any given time. + */ + uint32_t maxBlocksPerMultiprocessor; + + /** + * Maximum amount of shared memory available per multiprocessor, in bytes. + */ + uint32_t maxSharedMemoryPerMultiprocessor; + + /** + * Maximum number of 32-bit registers available per multiprocessor. + */ + uint32_t maxRegistersPerMultiprocessor; + + /** + * Maximum number of registers that can be allocated to a block. + */ + uint32_t maxRegistersPerBlock; + + /** + * Maximum amount of shared memory that can be assigned to a block, + * in bytes. + */ + uint32_t maxSharedMemoryPerBlock; + + /** + * Maximum number of threads allowed in a block. + */ + uint32_t maxThreadsPerBlock; + + /** + * Maximum allowed X dimension for a block. + */ + uint32_t maxBlockDimX; + + /** + * Maximum allowed Y dimension for a block. + */ + uint32_t maxBlockDimY; + + /** + * Maximum allowed Z dimension for a block. + */ + uint32_t maxBlockDimZ; + + /** + * Maximum allowed X dimension for a grid. + */ + uint32_t maxGridDimX; + + /** + * Maximum allowed Y dimension for a grid. + */ + uint32_t maxGridDimY; + + /** + * Maximum allowed Z dimension for a grid. + */ + uint32_t maxGridDimZ; + + /** + * Compute capability for the device, major number. + */ + uint32_t computeCapabilityMajor; + + /** + * Compute capability for the device, minor number. + */ + uint32_t computeCapabilityMinor; + + /** + * The device ID. + */ + uint32_t id; + + /** + * ECC enabled flag for device + */ + uint32_t eccEnabled; + + /** + * The device UUID. 
This value is the globally unique immutable + * alphanumeric identifier of the device. + */ + CUuuid uuid; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The device name. This name is shared across all activity records + * representing instances of the device, and so should not be + * modified. + */ + const char *name; + + /** + * Flag to indicate whether the device is visible to CUDA. Users can + * set the device visibility using the CUDA_VISIBLE_DEVICES environment variable. + */ + uint8_t isCudaVisible; + + /** + * MIG enabled flag for device + */ + uint8_t isMigEnabled; + + uint8_t reserved[6]; + + /** + * GPU Instance id for MIG enabled devices. + * If MIG mode is disabled, the value is set to UINT32_MAX. + */ + uint32_t gpuInstanceId; + + /** + * Compute Instance id for MIG enabled devices. + * If MIG mode is disabled, the value is set to UINT32_MAX. + */ + uint32_t computeInstanceId; + + /** + * The MIG UUID. This value is the globally unique immutable + * alphanumeric identifier of the device. + */ + CUuuid migUuid; + +} CUpti_ActivityDevice4; + +/** + * \brief The activity record for a kernel. (deprecated) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel9 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL + * or CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t cacheConfigRequested; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h.
+ */ + uint8_t cacheConfigExecuted; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the kernel.
+ */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the kernel. Each kernel execution + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the kernel. + */ + uint32_t runtimeCorrelationId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityKernel; + +/** + * \brief The activity record for a kernel. (deprecated) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel9 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The start timestamp for the kernel execution, in ns.
A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel.
Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityKernel2; + +/** + * \brief The activity record for a kernel (CUDA 6.5(with sm_52 support) onwards). + * (deprecated in CUDA 9.0) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL). + * Kernel activities are now reported using the CUpti_ActivityKernel9 activity + * record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. 
Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel.
+ */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityKernel3; + +/** + * \brief The activity record for a kernel (CUDA 9.0(with sm_70 support) onwards). + * (deprecated in CUDA 11.0) + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL). + * Kernel activities are now reported using the CUpti_ActivityKernel9 activity + * record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel.
The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown.
+ */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns.
+ * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * The command buffer is a buffer written by the CUDA driver to send commands + * such as kernel launches and memory copies to the GPU. All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch. \see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch. + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function, as a percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver. + */ + uint32_t sharedMemoryExecuted; +} CUpti_ActivityKernel4; + +/** + * \brief The activity record for a kernel (CUDA 11.0, with sm_80 support, onwards).
+ * (deprecated in CUDA 11.2) + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel9 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. 
+ */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes.
+ */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * The command buffer is a buffer written by the CUDA driver to send commands + * such as kernel launches and memory copies to the GPU. All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch.
\see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch. + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function, as a percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver. + */ + uint32_t sharedMemoryExecuted; + + /** + * The unique ID of the graph node that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint64_t graphNodeId; + + /** + * The shared memory limit config for the kernel. This field shows whether the user has opted for a + * higher per-block limit of dynamic shared memory. + */ + CUpti_FuncShmemLimitConfig shmemLimitConfig; + + /** + * The unique ID of the graph that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint32_t graphId; +} CUpti_ActivityKernel5; + +/** + * \brief The activity record for a kernel (deprecated in CUDA 11.6). + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel9 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL.
+ */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. 
+ */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified.
+ */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * The command buffer is a buffer written by the CUDA driver to send commands + * such as kernel launches and memory copies to the GPU. All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch. \see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch. + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function, as a percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver.
+ */ + uint32_t sharedMemoryExecuted; + + /** + * The unique ID of the graph node that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint64_t graphNodeId; + + /** + * The shared memory limit config for the kernel. This field shows whether the user has opted for a + * higher per-block limit of dynamic shared memory. + */ + CUpti_FuncShmemLimitConfig shmemLimitConfig; + + /** + * The unique ID of the graph that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint32_t graphId; + + /** + * The pointer to the access policy window. The structure CUaccessPolicyWindow is + * defined in cuda.h. + */ + CUaccessPolicyWindow *pAccessPolicyWindow; +} CUpti_ActivityKernel6; + +/** + * \brief The activity record for a kernel (deprecated in CUDA 11.8). + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) but is no longer generated + * by CUPTI. Kernel activities are now reported using the + * CUpti_ActivityKernel9 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel.
The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel.
+ */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes. + */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * The command buffer is a buffer written by the CUDA driver to send commands + * such as kernel launches and memory copies to the GPU.
All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection. + */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch. \see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch. + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function, as a percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver. + */ + uint32_t sharedMemoryExecuted; + + /** + * The unique ID of the graph node that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint64_t graphNodeId; + + /** + * The shared memory limit config for the kernel. This field shows whether the user has opted for a + * higher per-block limit of dynamic shared memory. + */ + CUpti_FuncShmemLimitConfig shmemLimitConfig; + + /** + * The unique ID of the graph that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs.
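 + * As noted above, graphNodeId and graphId are 0 unless the kernel came through the graph launch APIs; a minimal sketch of that check, over a hypothetical struct mirroring only these two fields (not the real CUPTI record):

```c
#include <stdint.h>
#include <stdbool.h>

// Hypothetical mirror of the graph-launch fields of a CUPTI kernel record.
struct kernel_graph_ids {
    uint64_t graphNodeId;  // 0 when not launched via graph APIs
    uint32_t graphId;      // 0 when not launched via graph APIs
};

// True when the record describes a kernel launched through graph launch APIs.
static bool launched_via_graph(const struct kernel_graph_ids *k) {
    return k->graphNodeId != 0 || k->graphId != 0;
}
```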
+ */ + uint32_t graphId; + + /** + * The pointer to the access policy window. The structure CUaccessPolicyWindow is + * defined in cuda.h. + */ + CUaccessPolicyWindow *pAccessPolicyWindow; + + /** + * The ID of the HW channel on which the kernel is launched. + */ + uint32_t channelID; + + /** + * The type of the channel + */ + CUpti_ChannelType channelType; +} CUpti_ActivityKernel7; + +/** + * \brief The activity record for kernel. + * + * This activity record represents a kernel execution + * (CUPTI_ACTIVITY_KIND_KERNEL and + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_KERNEL or + * CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL. + */ + CUpti_ActivityKind kind; + + /** + * For devices with compute capability 7.0+ cacheConfig values are not updated + * in case field isSharedMemoryCarveoutRequested is set + */ + union { + uint8_t both; + struct { + /** + * The cache configuration requested by the kernel. The value is one + * of the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t requested:4; + + /** + * The cache configuration used for the kernel. The value is one of + * the CUfunc_cache enumeration values from cuda.h. + */ + uint8_t executed:4; + } config; + } cacheConfig; + + /** + * The shared memory configuration used for the kernel. The value is one of + * the CUsharedconfig enumeration values from cuda.h. + */ + uint8_t sharedMemoryConfig; + + /** + * The number of registers required for each thread executing the + * kernel. + */ + uint16_t registersPerThread; + + /** + * The partitioned global caching requested for the kernel. Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheRequested; + + /** + * The partitioned global caching executed for the kernel. 
Partitioned + * global caching is required to enable caching on certain chips, such as + * devices with compute capability 5.2. Partitioned global caching can be + * automatically disabled if the occupancy requirement of the launch cannot + * support caching. + */ + CUpti_ActivityPartitionedGlobalCacheConfig partitionedGlobalCacheExecuted; + + /** + * The start timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t start; + + /** + * The end timestamp for the kernel execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the kernel. + */ + uint64_t end; + + /** + * The completed timestamp for the kernel execution, in ns. It + * represents the completion of all its child kernels and the + * kernel itself. A value of CUPTI_TIMESTAMP_UNKNOWN indicates that + * the completion time is unknown. + */ + uint64_t completed; + + /** + * The ID of the device where the kernel is executing. + */ + uint32_t deviceId; + + /** + * The ID of the context where the kernel is executing. + */ + uint32_t contextId; + + /** + * The ID of the stream where the kernel is executing. + */ + uint32_t streamId; + + /** + * The X-dimension grid size for the kernel. + */ + int32_t gridX; + + /** + * The Y-dimension grid size for the kernel. + */ + int32_t gridY; + + /** + * The Z-dimension grid size for the kernel. + */ + int32_t gridZ; + + /** + * The X-dimension block size for the kernel. + */ + int32_t blockX; + + /** + * The Y-dimension block size for the kernel. + */ + int32_t blockY; + + /** + * The Z-dimension block size for the kernel. + */ + int32_t blockZ; + + /** + * The static shared memory allocated for the kernel, in bytes. + */ + int32_t staticSharedMemory; + + /** + * The dynamic shared memory reserved for the kernel, in bytes.
+ */ + int32_t dynamicSharedMemory; + + /** + * The amount of local memory reserved for each thread, in bytes. + */ + uint32_t localMemoryPerThread; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes (deprecated in CUDA 11.8). + * Refer to field localMemoryTotal_v2. + */ + uint32_t localMemoryTotal; + + /** + * The correlation ID of the kernel. Each kernel execution is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the kernel. + */ + uint32_t correlationId; + + /** + * The grid ID of the kernel. Each kernel is assigned a unique + * grid ID at runtime. + */ + int64_t gridId; + + /** + * The name of the kernel. This name is shared across all activity + * records representing the same kernel, and so should not be + * modified. + */ + const char *name; + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The timestamp when the kernel is queued up in the command buffer, in ns. + * A value of CUPTI_TIMESTAMP_UNKNOWN indicates that the queued time + * could not be collected for the kernel. This timestamp is not collected + * by default. Use API \ref cuptiActivityEnableLatencyTimestamps() to + * enable collection. + * + * The command buffer is a buffer written by the CUDA driver to send commands + * such as kernel launches and memory copies to the GPU. All launches of CUDA + * kernels are asynchronous with respect to the host; the host requests + * the launch by writing commands into the command buffer, then returns + * without checking the GPU's progress. + */ + uint64_t queued; + + /** + * The timestamp when the command buffer containing the kernel launch + * is submitted to the GPU, in ns. A value of CUPTI_TIMESTAMP_UNKNOWN + * indicates that the submitted time could not be collected for the kernel. + * This timestamp is not collected by default. Use API \ref + * cuptiActivityEnableLatencyTimestamps() to enable collection.
+ */ + uint64_t submitted; + + /** + * This indicates if the kernel was executed via a regular launch or via a + * single/multi device cooperative launch. \see CUpti_ActivityLaunchType + */ + uint8_t launchType; + + /** + * This indicates if CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT was + * updated for the kernel launch. + */ + uint8_t isSharedMemoryCarveoutRequested; + + /** + * Shared memory carveout value requested for the function, as a percentage of + * the total resource. The value will be updated only if field + * isSharedMemoryCarveoutRequested is set. + */ + uint8_t sharedMemoryCarveoutRequested; + + /** + * Undefined. Reserved for internal use. + */ + uint8_t padding; + + /** + * Shared memory size set by the driver. + */ + uint32_t sharedMemoryExecuted; + + /** + * The unique ID of the graph node that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint64_t graphNodeId; + + /** + * The shared memory limit config for the kernel. This field shows whether the user has opted for a + * higher per-block limit of dynamic shared memory. + */ + CUpti_FuncShmemLimitConfig shmemLimitConfig; + + /** + * The unique ID of the graph that launched this kernel through graph launch APIs. + * This field will be 0 if the kernel is not launched through graph launch APIs. + */ + uint32_t graphId; + + /** + * The pointer to the access policy window. The structure CUaccessPolicyWindow is + * defined in cuda.h. + */ + CUaccessPolicyWindow *pAccessPolicyWindow; + + /** + * The ID of the HW channel on which the kernel is launched. + */ + uint32_t channelID; + + /** + * The type of the channel + */ + CUpti_ChannelType channelType; + + /** + * The X-dimension cluster size for the kernel. + * Field is valid for devices with compute capability 9.0 and higher + */ + uint32_t clusterX; + + /** + * The Y-dimension cluster size for the kernel.
+ * Field is valid for devices with compute capability 9.0 and higher + */ + uint32_t clusterY; + + /** + * The Z-dimension cluster size for the kernel. + * Field is valid for devices with compute capability 9.0 and higher + */ + uint32_t clusterZ; + + /** + * The cluster scheduling policy for the kernel. Refer to CUclusterSchedulingPolicy. + * Field is valid for devices with compute capability 9.0 and higher + */ + uint32_t clusterSchedulingPolicy; + + /** + * The total amount of local memory reserved for the kernel, in + * bytes. + */ + uint64_t localMemoryTotal_v2; +} CUpti_ActivityKernel8; + +/** + * \brief The activity record for memory copies. (deprecated) + * + * This activity record represents a memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind written by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy.
+ */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the memory copy. + */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the memory copy. Each memory copy + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the memory copy. + */ + uint32_t runtimeCorrelationId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityMemcpy; + +/** + * \brief The activity record for memory copies. (deprecated in CUDA 11.1) + * + * This activity record represents a memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. 
+ */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the memory copy. + */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the memory copy. Each memory copy + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the memory copy. + */ + uint32_t runtimeCorrelationId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint64_t graphNodeId; +} CUpti_ActivityMemcpy3; + +/** + * \brief The activity record for memory copies. (deprecated in CUDA 11.6) + * + * This activity record represents a memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY. 
+ */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the memory copy. + */ + uint32_t correlationId; + + /** + * The runtime correlation ID of the memory copy. Each memory copy + * is assigned a unique runtime correlation ID that is identical to + * the correlation ID in the runtime API activity record that + * launched the memory copy. + */ + uint32_t runtimeCorrelationId; + +#ifdef CUPTILP64 + /** + * Undefined. 
Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint32_t graphId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t padding; +} CUpti_ActivityMemcpy4; + +/** + * \brief The activity record for peer-to-peer memory copies. + * + * This activity record represents a peer-to-peer memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY2) but is no longer generated + * by CUPTI. Peer-to-peer memory copy activities are now reported using the + * CUpti_ActivityMemcpyPtoP2 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY2. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see + * CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy.
+ */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The ID of the device where memory is being copied from. + */ + uint32_t srcDeviceId; + + /** + * The ID of the context owning the memory being copied from. + */ + uint32_t srcContextId; + + /** + * The ID of the device where memory is being copied to. + */ + uint32_t dstDeviceId; + + /** + * The ID of the context owning the memory being copied to. + */ + uint32_t dstContextId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory copy. + */ + uint32_t correlationId; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityMemcpyPtoP; + +typedef CUpti_ActivityMemcpyPtoP CUpti_ActivityMemcpy2; + +/** + * \brief The activity record for peer-to-peer memory copies. + * (deprecated in CUDA 11.1) + * + * This activity record represents a peer-to-peer memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY2). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY2. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. 
\see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see + * CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The ID of the device where memory is being copied from. + */ + uint32_t srcDeviceId; + + /** + * The ID of the context owning the memory being copied from. + */ + uint32_t srcContextId; + + /** + * The ID of the device where memory is being copied to. + */ + uint32_t dstDeviceId; + + /** + * The ID of the context owning the memory being copied to. + */ + uint32_t dstContextId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory copy. + */ + uint32_t correlationId; + +#ifndef CUPTILP64 + /** + * Undefined. 
Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed the memcpy through graph launch. + * This field will be 0 if memcpy is not done using graph launch. + */ + uint64_t graphNodeId; +} CUpti_ActivityMemcpyPtoP2; + +/** + * \brief The activity record for peer-to-peer memory copies. + * (deprecated in CUDA 11.6) + * + * This activity record represents a peer-to-peer memory copy + * (CUPTI_ACTIVITY_KIND_MEMCPY2). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMCPY2. + */ + CUpti_ActivityKind kind; + + /** + * The kind of the memory copy, stored as a byte to reduce record + * size. \see CUpti_ActivityMemcpyKind + */ + uint8_t copyKind; + + /** + * The source memory kind read by the memory copy, stored as a byte + * to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t srcKind; + + /** + * The destination memory kind read by the memory copy, stored as a + * byte to reduce record size. \see CUpti_ActivityMemoryKind + */ + uint8_t dstKind; + + /** + * The flags associated with the memory copy. \see + * CUpti_ActivityFlag + */ + uint8_t flags; + + /** + * The number of bytes transferred by the memory copy. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t start; + + /** + * The end timestamp for the memory copy, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory copy. + */ + uint64_t end; + + /** + * The ID of the device where the memory copy is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory copy is occurring. 
+ */ + uint32_t contextId; + + /** + * The ID of the stream where the memory copy is occurring. + */ + uint32_t streamId; + + /** + * The ID of the device where memory is being copied from. + */ + uint32_t srcDeviceId; + + /** + * The ID of the context owning the memory being copied from. + */ + uint32_t srcContextId; + + /** + * The ID of the device where memory is being copied to. + */ + uint32_t dstDeviceId; + + /** + * The ID of the context owning the memory being copied to. + */ + uint32_t dstContextId; + + /** + * The correlation ID of the memory copy. Each memory copy is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory copy. + */ + uint32_t correlationId; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed the memcpy through graph launch. + * This field will be 0 if memcpy is not done using graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memcpy through graph launch. + * This field will be 0 if the memcpy is not done through graph launch. + */ + uint32_t graphId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t padding; +} CUpti_ActivityMemcpyPtoP3; + +/** + * \brief The activity record for memset. (deprecated) + * + * This activity record represents a memory set operation + * (CUPTI_ACTIVITY_KIND_MEMSET). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUpti_ActivityKind kind; + + /** + * The value being assigned to memory by the memory set. + */ + uint32_t value; + + /** + * The number of bytes being set by the memory set. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory set, in ns. 
A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t start; + + /** + * The end timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t end; + + /** + * The ID of the device where the memory set is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory set is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory set is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory set. Each memory set is assigned + * a unique correlation ID that is identical to the correlation ID + * in the driver API activity record that launched the memory set. + */ + uint32_t correlationId; + + /** + * The flags associated with the memset. \see CUpti_ActivityFlag + */ + uint16_t flags; + + /** + * The memory kind of the memory set \see CUpti_ActivityMemoryKind + */ + uint16_t memoryKind; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; +} CUpti_ActivityMemset; + +/** + * \brief The activity record for memset. (deprecated in CUDA 11.1) + * + * This activity record represents a memory set operation + * (CUPTI_ACTIVITY_KIND_MEMSET). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUpti_ActivityKind kind; + + /** + * The value being assigned to memory by the memory set. + */ + uint32_t value; + + /** + * The number of bytes being set by the memory set. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. 
+ */ + uint64_t start; + + /** + * The end timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t end; + + /** + * The ID of the device where the memory set is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory set is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory set is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory set. Each memory set is assigned + * a unique correlation ID that is identical to the correlation ID + * in the driver API activity record that launched the memory set. + */ + uint32_t correlationId; + + /** + * The flags associated with the memset. \see CUpti_ActivityFlag + */ + uint16_t flags; + + /** + * The memory kind of the memory set \see CUpti_ActivityMemoryKind + */ + uint16_t memoryKind; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memset through graph launch. + * This field will be 0 if the memset is not executed through graph launch. + */ + uint64_t graphNodeId; +} CUpti_ActivityMemset2; + +/** + * \brief The activity record for memset. (deprecated in CUDA 11.6) + * + * This activity record represents a memory set operation + * (CUPTI_ACTIVITY_KIND_MEMSET). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMSET. + */ + CUpti_ActivityKind kind; + + /** + * The value being assigned to memory by the memory set. + */ + uint32_t value; + + /** + * The number of bytes being set by the memory set. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory set, in ns. 
A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t start; + + /** + * The end timestamp for the memory set, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the memory set. + */ + uint64_t end; + + /** + * The ID of the device where the memory set is occurring. + */ + uint32_t deviceId; + + /** + * The ID of the context where the memory set is occurring. + */ + uint32_t contextId; + + /** + * The ID of the stream where the memory set is occurring. + */ + uint32_t streamId; + + /** + * The correlation ID of the memory set. Each memory set is assigned + * a unique correlation ID that is identical to the correlation ID + * in the driver API activity record that launched the memory set. + */ + uint32_t correlationId; + + /** + * The flags associated with the memset. \see CUpti_ActivityFlag + */ + uint16_t flags; + + /** + * The memory kind of the memory set \see CUpti_ActivityMemoryKind + */ + uint16_t memoryKind; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * Undefined. Reserved for internal use. + */ + void *reserved0; + + /** + * The unique ID of the graph node that executed this memset through graph launch. + * This field will be 0 if the memset is not executed through graph launch. + */ + uint64_t graphNodeId; + + /** + * The unique ID of the graph that executed this memset through graph launch. + * This field will be 0 if the memset is not executed through graph launch. + */ + uint32_t graphId; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t padding; +} CUpti_ActivityMemset3; + +/** + * \brief The activity record for memory. + * + * This activity record represents a memory allocation and free operation + * (CUPTI_ACTIVITY_KIND_MEMORY2). 
+ * This activity record provides separate records for memory allocation and + * memory release operations. + * This allows correlating the corresponding driver and runtime API + * activity record with the memory operation. + * + * Note: This activity record is an upgrade over \ref CUpti_ActivityMemory + * enabled using the kind \ref CUPTI_ACTIVITY_KIND_MEMORY. + * \ref CUpti_ActivityMemory provides a single record for the memory + * allocation and memory release operations. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMORY2 + */ + CUpti_ActivityKind kind; + + /** + * The memory operation requested by the user, \ref CUpti_ActivityMemoryOperationType. + */ + CUpti_ActivityMemoryOperationType memoryOperationType; + + /** + * The memory kind requested by the user, \ref CUpti_ActivityMemoryKind. + */ + CUpti_ActivityMemoryKind memoryKind; + + /** + * The correlation ID of the memory operation. Each memory operation is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory operation. + */ + uint32_t correlationId; + + /** + * The virtual address of the allocation. + */ + uint64_t address; + + /** + * The number of bytes of memory allocated. + */ + uint64_t bytes; + + /** + * The start timestamp for the memory operation, in ns. + */ + uint64_t timestamp; + + /** + * The program counter of the memory operation. + */ + uint64_t PC; + + /** + * The ID of the process to which this record belongs. + */ + uint32_t processId; + + /** + * The ID of the device where the memory operation is taking place. + */ + uint32_t deviceId; + + /** + * The ID of the context. If context is NULL, \p contextId is set to CUPTI_INVALID_CONTEXT_ID. + */ + uint32_t contextId; + + /** + * The ID of the stream. If the memory operation is not async, \p streamId is set to CUPTI_INVALID_STREAM_ID.
+ */ + uint32_t streamId; + + /** + * Variable name. This name is shared across all activity + * records representing the same symbol, and so should not be + * modified. + */ + const char* name; + + /** + * \p isAsync is set if the memory operation happens through async memory APIs. + */ + uint32_t isAsync; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad1; +#endif + + /** + * The memory pool configuration used for the memory operations. + */ + struct { + /** + * The type of the memory pool, \ref CUpti_ActivityMemoryPoolType + */ + CUpti_ActivityMemoryPoolType memoryPoolType; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad2; +#endif + + /** + * The base address of the memory pool. + */ + uint64_t address; + + /** + * The release threshold of the memory pool in bytes. \p releaseThreshold is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t releaseThreshold; + + /** + * The size of the memory pool in bytes and the processID of the memory pool. + * \p size is valid if \p memoryPoolType is + * CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + * \p processId is valid if \p memoryPoolType is + * CUPTI_ACTIVITY_MEMORY_POOL_TYPE_IMPORTED, \ref CUpti_ActivityMemoryPoolType. + */ + union { + uint64_t size; + uint64_t processId; + } pool; + } memoryPoolConfig; + +} CUpti_ActivityMemory2; + +/** + * \brief The activity record for memory pool. + * + * This activity record represents a memory pool creation, destruction and + * trimming (CUPTI_ACTIVITY_KIND_MEMORY_POOL). + * This activity record provides separate records for memory pool creation, + * destruction and trimming operations. + * This allows correlating the corresponding driver and runtime API + * activity record with the memory pool operation.
+ * + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MEMORY_POOL + */ + CUpti_ActivityKind kind; + + /** + * The memory operation requested by the user, \ref CUpti_ActivityMemoryPoolOperationType. + */ + CUpti_ActivityMemoryPoolOperationType memoryPoolOperationType; + + /** + * The type of the memory pool, \ref CUpti_ActivityMemoryPoolType + */ + CUpti_ActivityMemoryPoolType memoryPoolType; + + /** + * The correlation ID of the memory pool operation. Each memory pool + * operation is assigned a unique correlation ID that is identical to the + * correlation ID in the driver and runtime API activity record that + * launched the memory operation. + */ + uint32_t correlationId; + + /** + * The ID of the process to which this record belongs. + */ + uint32_t processId; + + /** + * The ID of the device where the memory pool is created. + */ + uint32_t deviceId; + + /** + * The minimum number of bytes to keep in the memory pool. \p minBytesToKeep is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_OPERATION_TYPE_TRIMMED, + * \ref CUpti_ActivityMemoryPoolOperationType + */ + size_t minBytesToKeep; + +#ifndef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The virtual address of the allocation. + */ + uint64_t address; + + /** + * The size of the memory pool operation in bytes. \p size is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t size; + + /** + * The release threshold of the memory pool. \p releaseThreshold is + * valid for CUPTI_ACTIVITY_MEMORY_POOL_TYPE_LOCAL, \ref CUpti_ActivityMemoryPoolType. + */ + uint64_t releaseThreshold; + + /** + * The start timestamp for the memory operation, in ns. + */ + uint64_t timestamp; +} CUpti_ActivityMemoryPool; + +/** + * \brief The activity record providing a marker which is an + * instantaneous point in time.
(deprecated in CUDA 8.0) + * + * The marker is specified with a descriptive name and unique id + * (CUPTI_ACTIVITY_KIND_MARKER). + * Marker activity is now reported using the + * CUpti_ActivityMarker2 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_MARKER. + */ + CUpti_ActivityKind kind; + + /** + * The flags associated with the marker. \see CUpti_ActivityFlag + */ + CUpti_ActivityFlag flags; + + /** + * The timestamp for the marker, in ns. A value of 0 indicates that + * timestamp information could not be collected for the marker. + */ + uint64_t timestamp; + + /** + * The marker ID. + */ + uint32_t id; + + /** + * The kind of activity object associated with this marker. + */ + CUpti_ActivityObjectKind objectKind; + + /** + * The identifier for the activity object associated with this + * marker. 'objectKind' indicates which ID is valid for this record. + */ + CUpti_ActivityObjectKindId objectId; + +#ifdef CUPTILP64 + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +#endif + + /** + * The marker name for an instantaneous or start marker. This will + * be NULL for an end marker. + */ + const char *name; + +} CUpti_ActivityMarker; + +/** + * \brief The activity record for source-level global + * access. (deprecated) + * + * This activity records the locations of the global + * accesses in the source (CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS). + * Global access activities are now reported using the + * CUpti_ActivityGlobalAccess3 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS. + */ + CUpti_ActivityKind kind; + + /** + * The properties of this global access. + */ + CUpti_ActivityFlag flags; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * The correlation ID of the kernel to which this result is associated. 
+ */ + uint32_t correlationId; + + /** + * The pc offset for the access. + */ + uint32_t pcOffset; + + /** + * The number of times this instruction was executed per warp. It will be incremented + * when at least one thread in the warp is active with predicate and condition code + * evaluating to true. + */ + uint32_t executed; + + /** + * This increments each time this instruction is executed, by the number + * of threads that executed this instruction with predicate and condition code evaluating to true. + */ + uint64_t threadsExecuted; + + /** + * The total number of 32-byte transactions to L2 cache generated by this access + */ + uint64_t l2_transactions; +} CUpti_ActivityGlobalAccess; + +/** + * \brief The activity record for source-level global + * access. (deprecated in CUDA 9.0) + * + * This activity records the locations of the global + * accesses in the source (CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS). + * Global access activities are now reported using the + * CUpti_ActivityGlobalAccess3 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_GLOBAL_ACCESS. + */ + CUpti_ActivityKind kind; + + /** + * The properties of this global access. + */ + CUpti_ActivityFlag flags; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * The correlation ID of the kernel to which this result is associated. + */ + uint32_t correlationId; + + /** + * Correlation ID with global/device function name + */ + uint32_t functionId; + + /** + * The pc offset for the access. + */ + uint32_t pcOffset; + + /** + * This increments each time this instruction is executed, by the number + * of threads that executed this instruction with predicate and condition code evaluating to true.
+ */ + uint64_t threadsExecuted; + + /** + * The total number of 32-byte transactions to L2 cache generated by this access + */ + uint64_t l2_transactions; + + /** + * The minimum number of L2 transactions possible based on the access pattern. + */ + uint64_t theoreticalL2Transactions; + + /** + * The number of times this instruction was executed per warp. It will be incremented + * when at least one thread in the warp is active with predicate and condition code + * evaluating to true. + */ + uint32_t executed; + + /** + * Undefined. Reserved for internal use. + */ + uint32_t pad; +} CUpti_ActivityGlobalAccess2; + +/** + * \brief The activity record for source-level branch + * results. (deprecated) + * + * This activity records the locations of the branches in the + * source (CUPTI_ACTIVITY_KIND_BRANCH). + * Branch activities are now reported using the + * CUpti_ActivityBranch2 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_BRANCH. + */ + CUpti_ActivityKind kind; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * The correlation ID of the kernel to which this result is associated. + */ + uint32_t correlationId; + + /** + * The pc offset for the branch. + */ + uint32_t pcOffset; + + /** + * The number of times this instruction was executed per warp. It will be incremented + * regardless of predicate or condition code. + */ + uint32_t executed; + + /** + * Number of times this branch diverged. + */ + uint32_t diverged; + + /** + * This increments each time this instruction is executed, by the number + * of threads that executed this instruction + */ + uint64_t threadsExecuted; +} CUpti_ActivityBranch; + +/** + * \brief The activity record for PC sampling. (deprecated in CUDA 8.0) + * + * This activity records information obtained by sampling PC + * (CUPTI_ACTIVITY_KIND_PC_SAMPLING).
+ * PC sampling activities are now reported using the + * CUpti_ActivityPCSampling2 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_PC_SAMPLING. + */ + CUpti_ActivityKind kind; + + /** + * The properties of this instruction. + */ + CUpti_ActivityFlag flags; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * The correlation ID of the kernel to which this result is associated. + */ + uint32_t correlationId; + + /** + * Correlation ID with global/device function name + */ + uint32_t functionId; + + /** + * The pc offset for the instruction. + */ + uint32_t pcOffset; + + /** + * Number of times the PC was sampled with the stallReason in the record. + * The same PC can be sampled with different stall reasons. + */ + uint32_t samples; + + /** + * Current stall reason. Includes one of the reasons from + * \ref CUpti_ActivityPCSamplingStallReason + */ + CUpti_ActivityPCSamplingStallReason stallReason; +} CUpti_ActivityPCSampling; + +/** + * \brief The activity record for PC sampling. (deprecated in CUDA 9.0) + * + * This activity records information obtained by sampling PC + * (CUPTI_ACTIVITY_KIND_PC_SAMPLING). + * PC sampling activities are now reported using the + * CUpti_ActivityPCSampling3 activity record. + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_PC_SAMPLING. + */ + CUpti_ActivityKind kind; + + /** + * The properties of this instruction. + */ + CUpti_ActivityFlag flags; + + /** + * The ID for source locator. + */ + uint32_t sourceLocatorId; + + /** + * The correlation ID of the kernel to which this result is associated. + */ + uint32_t correlationId; + + /** + * Correlation ID with global/device function name + */ + uint32_t functionId; + + /** + * The pc offset for the instruction. + */ + uint32_t pcOffset; + + /** + * Number of times the PC was sampled with the stallReason in the record. 
+ * These samples indicate that no instruction was issued in that cycle from + * the warp scheduler from where the warp was sampled. + * Field is valid for devices with compute capability 6.0 and higher + */ + uint32_t latencySamples; + + /** + * Number of times the PC was sampled with the stallReason in the record. + * The same PC can be sampled with different stall reasons. The count includes + * latencySamples. + */ + uint32_t samples; + + /** + * Current stall reason. Includes one of the reasons from + * \ref CUpti_ActivityPCSamplingStallReason + */ + CUpti_ActivityPCSamplingStallReason stallReason; + + uint32_t pad; +} CUpti_ActivityPCSampling2; + +/** + * \brief The activity record for Unified Memory counters (deprecated in CUDA 7.0) + * + * This activity record represents a Unified Memory counter + * (CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER). + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_UNIFIED_MEMORY_COUNTER + */ + CUpti_ActivityKind kind; + + /** + * The Unified Memory counter kind. See \ref CUpti_ActivityUnifiedMemoryCounterKind + */ + CUpti_ActivityUnifiedMemoryCounterKind counterKind; + + /** + * Scope of the Unified Memory counter. See \ref CUpti_ActivityUnifiedMemoryCounterScope + */ + CUpti_ActivityUnifiedMemoryCounterScope scope; + + /** + * The ID of the device involved in the memory transfer operation. + * It is not relevant if the scope of the counter is global (all devices). + */ + uint32_t deviceId; + + /** + * Value of the counter + * + */ + uint64_t value; + + /** + * The timestamp when this sample was retrieved, in ns. A value of 0 + * indicates that timestamp information could not be collected + */ + uint64_t timestamp; + + /** + * The ID of the process to which this record belongs to. In case of + * global scope, processId is undefined. + */ + uint32_t processId; + + /** + * Undefined. Reserved for internal use. 
+ */
+  uint32_t pad;
+} CUpti_ActivityUnifiedMemoryCounter;
+
+/**
+* \brief NVLink information. (deprecated in CUDA 9.0)
+*
+* This structure gives the capabilities of each logical NVLink connection between two devices,
+* GPU<->GPU or GPU<->CPU, which can be used to understand the topology.
+* NVLink information is now reported using the
+* CUpti_ActivityNvLink2 activity record.
+*/
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * The activity record kind, must be CUPTI_ACTIVITY_KIND_NVLINK.
+   */
+  CUpti_ActivityKind kind;
+
+  /**
+   * NVLink version.
+   */
+  uint32_t nvlinkVersion;
+
+  /**
+   * Type of device 0 \ref CUpti_DevType
+   */
+  CUpti_DevType typeDev0;
+
+  /**
+   * Type of device 1 \ref CUpti_DevType
+   */
+  CUpti_DevType typeDev1;
+
+  /**
+   * If typeDev0 is CUPTI_DEV_TYPE_GPU, UUID for device 0. \ref CUpti_ActivityDevice5.
+   * If typeDev0 is CUPTI_DEV_TYPE_NPU, struct npu for NPU.
+   */
+  union {
+    CUuuid uuidDev;
+    struct {
+      /**
+       * Index of the NPU. First index will always be zero.
+       */
+      uint32_t index;
+
+      /**
+       * Domain ID of NPU. On Linux, this can be queried using lspci.
+       */
+      uint32_t domainId;
+    } npu;
+  } idDev0;
+
+  /**
+   * If typeDev1 is CUPTI_DEV_TYPE_GPU, UUID for device 1. \ref CUpti_ActivityDevice5.
+   * If typeDev1 is CUPTI_DEV_TYPE_NPU, struct npu for NPU.
+   */
+  union {
+    CUuuid uuidDev;
+    struct {
+      /**
+       * Index of the NPU. First index will always be zero.
+       */
+      uint32_t index;
+
+      /**
+       * Domain ID of NPU. On Linux, this can be queried using lspci.
+       */
+      uint32_t domainId;
+    } npu;
+  } idDev1;
+
+  /**
+   * Flag giving the capabilities of the link \see CUpti_LinkFlag
+   */
+  uint32_t flag;
+
+  /**
+   * Number of physical NVLinks present between the two devices.
+   */
+  uint32_t physicalNvLinkCount;
+
+  /**
+   * Port numbers for a maximum of 4 NVLinks connected to device 0.
+   * If typeDev0 is CUPTI_DEV_TYPE_NPU, ignore this field.
+   * In case of an invalid/unknown port number, this field will be set
+   * to the value CUPTI_NVLINK_INVALID_PORT.
+ * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev0[4]; + + /** + * Port numbers for maximum 4 NVLinks connected to device 1. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev1[4]; + + /** + * Bandwidth of NVLink in kbytes/sec + */ + uint64_t bandwidth; +} CUpti_ActivityNvLink; + +/** +* \brief NVLink information. (deprecated in CUDA 10.0) +* +* This structure gives capabilities of each logical NVLink connection between two devices, +* gpu<->gpu or gpu<->CPU which can be used to understand the topology. +* NvLink information are now reported using the +* CUpti_ActivityNvLink4 activity record. +*/ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_NVLINK. + */ + CUpti_ActivityKind kind; + + /** + * NvLink version. + */ + uint32_t nvlinkVersion; + + /** + * Type of device 0 \ref CUpti_DevType + */ + CUpti_DevType typeDev0; + + /** + * Type of device 1 \ref CUpti_DevType + */ + CUpti_DevType typeDev1; + + /** + * If typeDev0 is CUPTI_DEV_TYPE_GPU, UUID for device 0. \ref CUpti_ActivityDevice5. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev0; + + /** + * If typeDev1 is CUPTI_DEV_TYPE_GPU, UUID for device 1. \ref CUpti_ActivityDevice5. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. 
First index will always be zero. + */ + uint32_t index; + + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev1; + + /** + * Flag gives capabilities of the link \see CUpti_LinkFlag + */ + uint32_t flag; + + /** + * Number of physical NVLinks present between two devices. + */ + uint32_t physicalNvLinkCount; + + /** + * Port numbers for maximum 16 NVLinks connected to device 0. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev0[CUPTI_MAX_NVLINK_PORTS]; + + /** + * Port numbers for maximum 16 NVLinks connected to device 1. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev1[CUPTI_MAX_NVLINK_PORTS]; + + /** + * Bandwidth of NVLink in kbytes/sec + */ + uint64_t bandwidth; +} CUpti_ActivityNvLink2; + +/** +* \brief NVLink information. +* +* This structure gives capabilities of each logical NVLink connection between two devices, +* gpu<->gpu or gpu<->CPU which can be used to understand the topology. +* NvLink information are now reported using the +* CUpti_ActivityNvLink4 activity record. +*/ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_NVLINK. + */ + CUpti_ActivityKind kind; + /** + * NvLink version. 
+ */ + uint32_t nvlinkVersion; + + /** + * Type of device 0 \ref CUpti_DevType + */ + CUpti_DevType typeDev0; + + /** + * Type of device 1 \ref CUpti_DevType + */ + CUpti_DevType typeDev1; + + /** + * If typeDev0 is CUPTI_DEV_TYPE_GPU, UUID for device 0. \ref CUpti_ActivityDevice5. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev0; + + /** + * If typeDev1 is CUPTI_DEV_TYPE_GPU, UUID for device 1. \ref CUpti_ActivityDevice5. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, struct npu for NPU. + */ + union { + CUuuid uuidDev; + struct { + /** + * Index of the NPU. First index will always be zero. + */ + uint32_t index; + + /** + * Domain ID of NPU. On Linux, this can be queried using lspci. + */ + uint32_t domainId; + } npu; + } idDev1; + + /** + * Flag gives capabilities of the link \see CUpti_LinkFlag + */ + uint32_t flag; + + /** + * Number of physical NVLinks present between two devices. + */ + uint32_t physicalNvLinkCount; + + /** + * Port numbers for maximum 16 NVLinks connected to device 0. + * If typeDev0 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. + * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev0[CUPTI_MAX_NVLINK_PORTS]; + + /** + * Port numbers for maximum 16 NVLinks connected to device 1. + * If typeDev1 is CUPTI_DEV_TYPE_NPU, ignore this field. + * In case of invalid/unknown port number, this field will be set + * to value CUPTI_NVLINK_INVALID_PORT. 
+ * This will be used to correlate the metric values to individual + * physical link and attribute traffic to the logical NVLink in + * the topology. + */ + int8_t portDev1[CUPTI_MAX_NVLINK_PORTS]; + + /** + * Bandwidth of NVLink in kbytes/sec + */ + uint64_t bandwidth; + + /** + * NVSwitch is connected as an intermediate node. + */ + uint8_t nvswitchConnected; + + /** + * Undefined. reserved for internal use + */ + uint8_t pad[7]; +} CUpti_ActivityNvLink3; + +/** + * \brief The activity record for trace of graph execution. + * + * This activity record represents execution for a graph without giving visibility + * about the execution of its nodes. This is intended to reduce overheads in tracing + * each node. The activity kind is CUPTI_ACTIVITY_KIND_GRAPH_TRACE + * Graph trace activity is now reported using CUpti_ActivityGraphTrace2 record. + */ +typedef struct { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_GRAPH_TRACE + */ + CUpti_ActivityKind kind; + + /** + * The correlation ID of the graph launch. Each graph launch is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver API activity record that launched + * the graph. + */ + uint32_t correlationId; + + /** + * The start timestamp for the graph execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the graph. + */ + uint64_t start; + + /** + * The end timestamp for the graph execution, in ns. A value of 0 + * for both the start and end timestamps indicates that timestamp + * information could not be collected for the graph. + */ + uint64_t end; + + /** + * The ID of the device where the graph execution is occurring. + */ + uint32_t deviceId; + + /** + * The unique ID of the graph that is launched. + */ + uint32_t graphId; + + /** + * The ID of the context where the graph is being launched. 
+ */ + uint32_t contextId; + + /** + * The ID of the stream where the graph is being launched. + */ + uint32_t streamId; + + /** + * This field is reserved for internal use + */ + void *reserved; +} CUpti_ActivityGraphTrace; + +/** + * \brief The activity record for a context. + * + * This activity record represents information about a context + * (CUPTI_ACTIVITY_KIND_CONTEXT). + * Context activity is now reported using CUpti_ActivityContext2 record + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind, must be CUPTI_ACTIVITY_KIND_CONTEXT. + */ + CUpti_ActivityKind kind; + + /** + * The context ID. + */ + uint32_t contextId; + + /** + * The device ID. + */ + uint32_t deviceId; + + /** + * The compute API kind. \see CUpti_ActivityComputeApiKind + */ + uint16_t computeApiKind; + + /** + * The ID for the NULL stream in this context + */ + uint16_t nullStreamId; +} CUpti_ActivityContext; + +/** + * \brief The activity record for JIT operations. + * This activity represents the JIT operations (compile, load, store) of a CUmodule + * from the Compute Cache. + * Gives the exact hashed path of where the cached module is loaded from, + * or where the module will be stored after Just-In-Time (JIT) compilation. + * + * JIT activity is now reported using CUpti_ActivityJit2 record + */ +typedef struct PACKED_ALIGNMENT { + /** + * The activity record kind must be CUPTI_ACTIVITY_KIND_JIT. + */ + CUpti_ActivityKind kind; + + /** + * The JIT entry type. + */ + CUpti_ActivityJitEntryType jitEntryType; + + /** + * The JIT operation type. + */ + CUpti_ActivityJitOperationType jitOperationType; + + /** + * The device ID. + */ + uint32_t deviceId; + + /** + * The start timestamp for the JIT operation, in ns. A value of 0 for + * both the start and end timestamps indicates that timestamp + * information could not be collected for the JIT operation. + */ + uint64_t start; + + /** + * The end timestamp for the JIT operation, in ns. 
A value of 0 for both + * the start and end timestamps indicates that timestamp information + * could not be collected for the JIT operation. + */ + uint64_t end; + + /** + * The correlation ID of the JIT operation to which + * records belong to. Each JIT operation is + * assigned a unique correlation ID that is identical to the + * correlation ID in the driver or runtime API activity record that + * launched the JIT operation. + */ + uint32_t correlationId; + + /** + * Internal use. + */ + uint32_t padding; + + /** + * The correlation ID to correlate JIT compilation, load and store operations. + * Each JIT compilation unit is assigned a unique correlation ID + * at the time of the JIT compilation. This correlation id can be used + * to find the matching JIT cache load/store records. + */ + uint64_t jitOperationCorrelationId; + + /** + * The size of compute cache. + */ + uint64_t cacheSize; + + /** + * The path where the fat binary is cached. + */ + const char* cachePath; +} CUpti_ActivityJit; + + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility pop +#endif + +#if defined(__cplusplus) +} +#endif + +#endif /*_CUPTI_ACTIVITY_DEPRECATED_H_*/ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_driver_cbid.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_driver_cbid.h new file mode 100644 index 0000000000000000000000000000000000000000..9abd20a7adf6135987468498bafdb3654ad09df3 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_driver_cbid.h @@ -0,0 +1,690 @@ + +// ************************************************************************* +// Definitions of indices for API functions, unique across entire API +// ************************************************************************* + +// This file is generated. Any changes you make will be lost during the next clean build. 
+// CUDA public interface, for type definitions and cu* function prototypes + +typedef enum CUpti_driver_api_trace_cbid_enum { + CUPTI_DRIVER_TRACE_CBID_INVALID = 0, + CUPTI_DRIVER_TRACE_CBID_cuInit = 1, + CUPTI_DRIVER_TRACE_CBID_cuDriverGetVersion = 2, + CUPTI_DRIVER_TRACE_CBID_cuDeviceGet = 3, + CUPTI_DRIVER_TRACE_CBID_cuDeviceGetCount = 4, + CUPTI_DRIVER_TRACE_CBID_cuDeviceGetName = 5, + CUPTI_DRIVER_TRACE_CBID_cuDeviceComputeCapability = 6, + CUPTI_DRIVER_TRACE_CBID_cuDeviceTotalMem = 7, + CUPTI_DRIVER_TRACE_CBID_cuDeviceGetProperties = 8, + CUPTI_DRIVER_TRACE_CBID_cuDeviceGetAttribute = 9, + CUPTI_DRIVER_TRACE_CBID_cuCtxCreate = 10, + CUPTI_DRIVER_TRACE_CBID_cuCtxDestroy = 11, + CUPTI_DRIVER_TRACE_CBID_cuCtxAttach = 12, + CUPTI_DRIVER_TRACE_CBID_cuCtxDetach = 13, + CUPTI_DRIVER_TRACE_CBID_cuCtxPushCurrent = 14, + CUPTI_DRIVER_TRACE_CBID_cuCtxPopCurrent = 15, + CUPTI_DRIVER_TRACE_CBID_cuCtxGetDevice = 16, + CUPTI_DRIVER_TRACE_CBID_cuCtxSynchronize = 17, + CUPTI_DRIVER_TRACE_CBID_cuModuleLoad = 18, + CUPTI_DRIVER_TRACE_CBID_cuModuleLoadData = 19, + CUPTI_DRIVER_TRACE_CBID_cuModuleLoadDataEx = 20, + CUPTI_DRIVER_TRACE_CBID_cuModuleLoadFatBinary = 21, + CUPTI_DRIVER_TRACE_CBID_cuModuleUnload = 22, + CUPTI_DRIVER_TRACE_CBID_cuModuleGetFunction = 23, + CUPTI_DRIVER_TRACE_CBID_cuModuleGetGlobal = 24, + CUPTI_DRIVER_TRACE_CBID_cu64ModuleGetGlobal = 25, + CUPTI_DRIVER_TRACE_CBID_cuModuleGetTexRef = 26, + CUPTI_DRIVER_TRACE_CBID_cuMemGetInfo = 27, + CUPTI_DRIVER_TRACE_CBID_cu64MemGetInfo = 28, + CUPTI_DRIVER_TRACE_CBID_cuMemAlloc = 29, + CUPTI_DRIVER_TRACE_CBID_cu64MemAlloc = 30, + CUPTI_DRIVER_TRACE_CBID_cuMemAllocPitch = 31, + CUPTI_DRIVER_TRACE_CBID_cu64MemAllocPitch = 32, + CUPTI_DRIVER_TRACE_CBID_cuMemFree = 33, + CUPTI_DRIVER_TRACE_CBID_cu64MemFree = 34, + CUPTI_DRIVER_TRACE_CBID_cuMemGetAddressRange = 35, + CUPTI_DRIVER_TRACE_CBID_cu64MemGetAddressRange = 36, + CUPTI_DRIVER_TRACE_CBID_cuMemAllocHost = 37, + CUPTI_DRIVER_TRACE_CBID_cuMemFreeHost = 38, + 
CUPTI_DRIVER_TRACE_CBID_cuMemHostAlloc = 39, + CUPTI_DRIVER_TRACE_CBID_cuMemHostGetDevicePointer = 40, + CUPTI_DRIVER_TRACE_CBID_cu64MemHostGetDevicePointer = 41, + CUPTI_DRIVER_TRACE_CBID_cuMemHostGetFlags = 42, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoD = 43, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyHtoD = 44, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoH = 45, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyDtoH = 46, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoD = 47, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyDtoD = 48, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoA = 49, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyDtoA = 50, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoD = 51, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyAtoD = 52, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoA = 53, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoH = 54, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoA = 55, + CUPTI_DRIVER_TRACE_CBID_cuMemcpy2D = 56, + CUPTI_DRIVER_TRACE_CBID_cuMemcpy2DUnaligned = 57, + CUPTI_DRIVER_TRACE_CBID_cuMemcpy3D = 58, + CUPTI_DRIVER_TRACE_CBID_cu64Memcpy3D = 59, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoDAsync = 60, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyHtoDAsync = 61, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoHAsync = 62, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyDtoHAsync = 63, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoDAsync = 64, + CUPTI_DRIVER_TRACE_CBID_cu64MemcpyDtoDAsync = 65, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoAAsync = 66, + CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoHAsync = 67, + CUPTI_DRIVER_TRACE_CBID_cuMemcpy2DAsync = 68, + CUPTI_DRIVER_TRACE_CBID_cuMemcpy3DAsync = 69, + CUPTI_DRIVER_TRACE_CBID_cu64Memcpy3DAsync = 70, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD8 = 71, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD8 = 72, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD16 = 73, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD16 = 74, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD32 = 75, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD32 = 76, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D8 = 77, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD2D8 = 78, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D16 = 79, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD2D16 = 80, + 
CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D32 = 81, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD2D32 = 82, + CUPTI_DRIVER_TRACE_CBID_cuFuncSetBlockShape = 83, + CUPTI_DRIVER_TRACE_CBID_cuFuncSetSharedSize = 84, + CUPTI_DRIVER_TRACE_CBID_cuFuncGetAttribute = 85, + CUPTI_DRIVER_TRACE_CBID_cuFuncSetCacheConfig = 86, + CUPTI_DRIVER_TRACE_CBID_cuArrayCreate = 87, + CUPTI_DRIVER_TRACE_CBID_cuArrayGetDescriptor = 88, + CUPTI_DRIVER_TRACE_CBID_cuArrayDestroy = 89, + CUPTI_DRIVER_TRACE_CBID_cuArray3DCreate = 90, + CUPTI_DRIVER_TRACE_CBID_cuArray3DGetDescriptor = 91, + CUPTI_DRIVER_TRACE_CBID_cuTexRefCreate = 92, + CUPTI_DRIVER_TRACE_CBID_cuTexRefDestroy = 93, + CUPTI_DRIVER_TRACE_CBID_cuTexRefSetArray = 94, + CUPTI_DRIVER_TRACE_CBID_cuTexRefSetAddress = 95, + CUPTI_DRIVER_TRACE_CBID_cu64TexRefSetAddress = 96, + CUPTI_DRIVER_TRACE_CBID_cuTexRefSetAddress2D = 97, + CUPTI_DRIVER_TRACE_CBID_cu64TexRefSetAddress2D = 98, + CUPTI_DRIVER_TRACE_CBID_cuTexRefSetFormat = 99, + CUPTI_DRIVER_TRACE_CBID_cuTexRefSetAddressMode = 100, + CUPTI_DRIVER_TRACE_CBID_cuTexRefSetFilterMode = 101, + CUPTI_DRIVER_TRACE_CBID_cuTexRefSetFlags = 102, + CUPTI_DRIVER_TRACE_CBID_cuTexRefGetAddress = 103, + CUPTI_DRIVER_TRACE_CBID_cu64TexRefGetAddress = 104, + CUPTI_DRIVER_TRACE_CBID_cuTexRefGetArray = 105, + CUPTI_DRIVER_TRACE_CBID_cuTexRefGetAddressMode = 106, + CUPTI_DRIVER_TRACE_CBID_cuTexRefGetFilterMode = 107, + CUPTI_DRIVER_TRACE_CBID_cuTexRefGetFormat = 108, + CUPTI_DRIVER_TRACE_CBID_cuTexRefGetFlags = 109, + CUPTI_DRIVER_TRACE_CBID_cuParamSetSize = 110, + CUPTI_DRIVER_TRACE_CBID_cuParamSeti = 111, + CUPTI_DRIVER_TRACE_CBID_cuParamSetf = 112, + CUPTI_DRIVER_TRACE_CBID_cuParamSetv = 113, + CUPTI_DRIVER_TRACE_CBID_cuParamSetTexRef = 114, + CUPTI_DRIVER_TRACE_CBID_cuLaunch = 115, + CUPTI_DRIVER_TRACE_CBID_cuLaunchGrid = 116, + CUPTI_DRIVER_TRACE_CBID_cuLaunchGridAsync = 117, + CUPTI_DRIVER_TRACE_CBID_cuEventCreate = 118, + CUPTI_DRIVER_TRACE_CBID_cuEventRecord = 119, + CUPTI_DRIVER_TRACE_CBID_cuEventQuery = 120, + 
CUPTI_DRIVER_TRACE_CBID_cuEventSynchronize = 121, + CUPTI_DRIVER_TRACE_CBID_cuEventDestroy = 122, + CUPTI_DRIVER_TRACE_CBID_cuEventElapsedTime = 123, + CUPTI_DRIVER_TRACE_CBID_cuStreamCreate = 124, + CUPTI_DRIVER_TRACE_CBID_cuStreamQuery = 125, + CUPTI_DRIVER_TRACE_CBID_cuStreamSynchronize = 126, + CUPTI_DRIVER_TRACE_CBID_cuStreamDestroy = 127, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsUnregisterResource = 128, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsSubResourceGetMappedArray = 129, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsResourceGetMappedPointer = 130, + CUPTI_DRIVER_TRACE_CBID_cu64GraphicsResourceGetMappedPointer = 131, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsResourceSetMapFlags = 132, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsMapResources = 133, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsUnmapResources = 134, + CUPTI_DRIVER_TRACE_CBID_cuGetExportTable = 135, + CUPTI_DRIVER_TRACE_CBID_cuCtxSetLimit = 136, + CUPTI_DRIVER_TRACE_CBID_cuCtxGetLimit = 137, + CUPTI_DRIVER_TRACE_CBID_cuD3D10GetDevice = 138, + CUPTI_DRIVER_TRACE_CBID_cuD3D10CtxCreate = 139, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsD3D10RegisterResource = 140, + CUPTI_DRIVER_TRACE_CBID_cuD3D10RegisterResource = 141, + CUPTI_DRIVER_TRACE_CBID_cuD3D10UnregisterResource = 142, + CUPTI_DRIVER_TRACE_CBID_cuD3D10MapResources = 143, + CUPTI_DRIVER_TRACE_CBID_cuD3D10UnmapResources = 144, + CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceSetMapFlags = 145, + CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetMappedArray = 146, + CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetMappedPointer = 147, + CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetMappedSize = 148, + CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetMappedPitch = 149, + CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetSurfaceDimensions = 150, + CUPTI_DRIVER_TRACE_CBID_cuD3D11GetDevice = 151, + CUPTI_DRIVER_TRACE_CBID_cuD3D11CtxCreate = 152, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsD3D11RegisterResource = 153, + CUPTI_DRIVER_TRACE_CBID_cuD3D9GetDevice = 154, + CUPTI_DRIVER_TRACE_CBID_cuD3D9CtxCreate = 155, + 
CUPTI_DRIVER_TRACE_CBID_cuGraphicsD3D9RegisterResource = 156, + CUPTI_DRIVER_TRACE_CBID_cuD3D9GetDirect3DDevice = 157, + CUPTI_DRIVER_TRACE_CBID_cuD3D9RegisterResource = 158, + CUPTI_DRIVER_TRACE_CBID_cuD3D9UnregisterResource = 159, + CUPTI_DRIVER_TRACE_CBID_cuD3D9MapResources = 160, + CUPTI_DRIVER_TRACE_CBID_cuD3D9UnmapResources = 161, + CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceSetMapFlags = 162, + CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetSurfaceDimensions = 163, + CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetMappedArray = 164, + CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetMappedPointer = 165, + CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetMappedSize = 166, + CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetMappedPitch = 167, + CUPTI_DRIVER_TRACE_CBID_cuD3D9Begin = 168, + CUPTI_DRIVER_TRACE_CBID_cuD3D9End = 169, + CUPTI_DRIVER_TRACE_CBID_cuD3D9RegisterVertexBuffer = 170, + CUPTI_DRIVER_TRACE_CBID_cuD3D9MapVertexBuffer = 171, + CUPTI_DRIVER_TRACE_CBID_cuD3D9UnmapVertexBuffer = 172, + CUPTI_DRIVER_TRACE_CBID_cuD3D9UnregisterVertexBuffer = 173, + CUPTI_DRIVER_TRACE_CBID_cuGLCtxCreate = 174, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsGLRegisterBuffer = 175, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsGLRegisterImage = 176, + CUPTI_DRIVER_TRACE_CBID_cuWGLGetDevice = 177, + CUPTI_DRIVER_TRACE_CBID_cuGLInit = 178, + CUPTI_DRIVER_TRACE_CBID_cuGLRegisterBufferObject = 179, + CUPTI_DRIVER_TRACE_CBID_cuGLMapBufferObject = 180, + CUPTI_DRIVER_TRACE_CBID_cuGLUnmapBufferObject = 181, + CUPTI_DRIVER_TRACE_CBID_cuGLUnregisterBufferObject = 182, + CUPTI_DRIVER_TRACE_CBID_cuGLSetBufferObjectMapFlags = 183, + CUPTI_DRIVER_TRACE_CBID_cuGLMapBufferObjectAsync = 184, + CUPTI_DRIVER_TRACE_CBID_cuGLUnmapBufferObjectAsync = 185, + CUPTI_DRIVER_TRACE_CBID_cuVDPAUGetDevice = 186, + CUPTI_DRIVER_TRACE_CBID_cuVDPAUCtxCreate = 187, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsVDPAURegisterVideoSurface = 188, + CUPTI_DRIVER_TRACE_CBID_cuGraphicsVDPAURegisterOutputSurface = 189, + CUPTI_DRIVER_TRACE_CBID_cuModuleGetSurfRef = 190, + 
CUPTI_DRIVER_TRACE_CBID_cuSurfRefCreate = 191, + CUPTI_DRIVER_TRACE_CBID_cuSurfRefDestroy = 192, + CUPTI_DRIVER_TRACE_CBID_cuSurfRefSetFormat = 193, + CUPTI_DRIVER_TRACE_CBID_cuSurfRefSetArray = 194, + CUPTI_DRIVER_TRACE_CBID_cuSurfRefGetFormat = 195, + CUPTI_DRIVER_TRACE_CBID_cuSurfRefGetArray = 196, + CUPTI_DRIVER_TRACE_CBID_cu64DeviceTotalMem = 197, + CUPTI_DRIVER_TRACE_CBID_cu64D3D10ResourceGetMappedPointer = 198, + CUPTI_DRIVER_TRACE_CBID_cu64D3D10ResourceGetMappedSize = 199, + CUPTI_DRIVER_TRACE_CBID_cu64D3D10ResourceGetMappedPitch = 200, + CUPTI_DRIVER_TRACE_CBID_cu64D3D10ResourceGetSurfaceDimensions = 201, + CUPTI_DRIVER_TRACE_CBID_cu64D3D9ResourceGetSurfaceDimensions = 202, + CUPTI_DRIVER_TRACE_CBID_cu64D3D9ResourceGetMappedPointer = 203, + CUPTI_DRIVER_TRACE_CBID_cu64D3D9ResourceGetMappedSize = 204, + CUPTI_DRIVER_TRACE_CBID_cu64D3D9ResourceGetMappedPitch = 205, + CUPTI_DRIVER_TRACE_CBID_cu64D3D9MapVertexBuffer = 206, + CUPTI_DRIVER_TRACE_CBID_cu64GLMapBufferObject = 207, + CUPTI_DRIVER_TRACE_CBID_cu64GLMapBufferObjectAsync = 208, + CUPTI_DRIVER_TRACE_CBID_cuD3D11GetDevices = 209, + CUPTI_DRIVER_TRACE_CBID_cuD3D11CtxCreateOnDevice = 210, + CUPTI_DRIVER_TRACE_CBID_cuD3D10GetDevices = 211, + CUPTI_DRIVER_TRACE_CBID_cuD3D10CtxCreateOnDevice = 212, + CUPTI_DRIVER_TRACE_CBID_cuD3D9GetDevices = 213, + CUPTI_DRIVER_TRACE_CBID_cuD3D9CtxCreateOnDevice = 214, + CUPTI_DRIVER_TRACE_CBID_cu64MemHostAlloc = 215, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD8Async = 216, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD8Async = 217, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD16Async = 218, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD16Async = 219, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD32Async = 220, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD32Async = 221, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D8Async = 222, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD2D8Async = 223, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D16Async = 224, + CUPTI_DRIVER_TRACE_CBID_cu64MemsetD2D16Async = 225, + CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D32Async = 226, + 
+    CUPTI_DRIVER_TRACE_CBID_cu64MemsetD2D32Async = 227,
+    CUPTI_DRIVER_TRACE_CBID_cu64ArrayCreate = 228,
+    CUPTI_DRIVER_TRACE_CBID_cu64ArrayGetDescriptor = 229,
+    CUPTI_DRIVER_TRACE_CBID_cu64Array3DCreate = 230,
+    CUPTI_DRIVER_TRACE_CBID_cu64Array3DGetDescriptor = 231,
+    CUPTI_DRIVER_TRACE_CBID_cu64Memcpy2D = 232,
+    CUPTI_DRIVER_TRACE_CBID_cu64Memcpy2DUnaligned = 233,
+    CUPTI_DRIVER_TRACE_CBID_cu64Memcpy2DAsync = 234,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxCreate_v2 = 235,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D10CtxCreate_v2 = 236,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D11CtxCreate_v2 = 237,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D9CtxCreate_v2 = 238,
+    CUPTI_DRIVER_TRACE_CBID_cuGLCtxCreate_v2 = 239,
+    CUPTI_DRIVER_TRACE_CBID_cuVDPAUCtxCreate_v2 = 240,
+    CUPTI_DRIVER_TRACE_CBID_cuModuleGetGlobal_v2 = 241,
+    CUPTI_DRIVER_TRACE_CBID_cuMemGetInfo_v2 = 242,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAlloc_v2 = 243,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAllocPitch_v2 = 244,
+    CUPTI_DRIVER_TRACE_CBID_cuMemFree_v2 = 245,
+    CUPTI_DRIVER_TRACE_CBID_cuMemGetAddressRange_v2 = 246,
+    CUPTI_DRIVER_TRACE_CBID_cuMemHostGetDevicePointer_v2 = 247,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy_v2 = 248,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD8_v2 = 249,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD16_v2 = 250,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD32_v2 = 251,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D8_v2 = 252,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D16_v2 = 253,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D32_v2 = 254,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetAddress_v2 = 255,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetAddress2D_v2 = 256,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefGetAddress_v2 = 257,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphicsResourceGetMappedPointer_v2 = 258,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceTotalMem_v2 = 259,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetMappedPointer_v2 = 260,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetMappedSize_v2 = 261,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetMappedPitch_v2 = 262,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D10ResourceGetSurfaceDimensions_v2 = 263,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetSurfaceDimensions_v2 = 264,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetMappedPointer_v2 = 265,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetMappedSize_v2 = 266,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D9ResourceGetMappedPitch_v2 = 267,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D9MapVertexBuffer_v2 = 268,
+    CUPTI_DRIVER_TRACE_CBID_cuGLMapBufferObject_v2 = 269,
+    CUPTI_DRIVER_TRACE_CBID_cuGLMapBufferObjectAsync_v2 = 270,
+    CUPTI_DRIVER_TRACE_CBID_cuMemHostAlloc_v2 = 271,
+    CUPTI_DRIVER_TRACE_CBID_cuArrayCreate_v2 = 272,
+    CUPTI_DRIVER_TRACE_CBID_cuArrayGetDescriptor_v2 = 273,
+    CUPTI_DRIVER_TRACE_CBID_cuArray3DCreate_v2 = 274,
+    CUPTI_DRIVER_TRACE_CBID_cuArray3DGetDescriptor_v2 = 275,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoD_v2 = 276,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoDAsync_v2 = 277,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoH_v2 = 278,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoHAsync_v2 = 279,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoD_v2 = 280,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoDAsync_v2 = 281,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoH_v2 = 282,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoHAsync_v2 = 283,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoD_v2 = 284,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoA_v2 = 285,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoA_v2 = 286,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy2D_v2 = 287,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy2DUnaligned_v2 = 288,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy2DAsync_v2 = 289,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3D_v2 = 290,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3DAsync_v2 = 291,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoA_v2 = 292,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoAAsync_v2 = 293,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAllocHost_v2 = 294,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitEvent = 295,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxGetApiVersion = 296,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D10GetDirect3DDevice = 297,
+    CUPTI_DRIVER_TRACE_CBID_cuD3D11GetDirect3DDevice = 298,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxGetCacheConfig = 299,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxSetCacheConfig = 300,
+    CUPTI_DRIVER_TRACE_CBID_cuMemHostRegister = 301,
+    CUPTI_DRIVER_TRACE_CBID_cuMemHostUnregister = 302,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxSetCurrent = 303,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxGetCurrent = 304,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy = 305,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAsync = 306,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchKernel = 307,
+    CUPTI_DRIVER_TRACE_CBID_cuProfilerStart = 308,
+    CUPTI_DRIVER_TRACE_CBID_cuProfilerStop = 309,
+    CUPTI_DRIVER_TRACE_CBID_cuPointerGetAttribute = 310,
+    CUPTI_DRIVER_TRACE_CBID_cuProfilerInitialize = 311,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceCanAccessPeer = 312,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxEnablePeerAccess = 313,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxDisablePeerAccess = 314,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPeerRegister = 315,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPeerUnregister = 316,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPeerGetDevicePointer = 317,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyPeer = 318,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyPeerAsync = 319,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3DPeer = 320,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3DPeerAsync = 321,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxDestroy_v2 = 322,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxPushCurrent_v2 = 323,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxPopCurrent_v2 = 324,
+    CUPTI_DRIVER_TRACE_CBID_cuEventDestroy_v2 = 325,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamDestroy_v2 = 326,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetAddress2D_v3 = 327,
+    CUPTI_DRIVER_TRACE_CBID_cuIpcGetMemHandle = 328,
+    CUPTI_DRIVER_TRACE_CBID_cuIpcOpenMemHandle = 329,
+    CUPTI_DRIVER_TRACE_CBID_cuIpcCloseMemHandle = 330,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetByPCIBusId = 331,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetPCIBusId = 332,
+    CUPTI_DRIVER_TRACE_CBID_cuGLGetDevices = 333,
+    CUPTI_DRIVER_TRACE_CBID_cuIpcGetEventHandle = 334,
+    CUPTI_DRIVER_TRACE_CBID_cuIpcOpenEventHandle = 335,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxSetSharedMemConfig = 336,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxGetSharedMemConfig = 337,
+    CUPTI_DRIVER_TRACE_CBID_cuFuncSetSharedMemConfig = 338,
+    CUPTI_DRIVER_TRACE_CBID_cuTexObjectCreate = 339,
+    CUPTI_DRIVER_TRACE_CBID_cuTexObjectDestroy = 340,
+    CUPTI_DRIVER_TRACE_CBID_cuTexObjectGetResourceDesc = 341,
+    CUPTI_DRIVER_TRACE_CBID_cuTexObjectGetTextureDesc = 342,
+    CUPTI_DRIVER_TRACE_CBID_cuSurfObjectCreate = 343,
+    CUPTI_DRIVER_TRACE_CBID_cuSurfObjectDestroy = 344,
+    CUPTI_DRIVER_TRACE_CBID_cuSurfObjectGetResourceDesc = 345,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamAddCallback = 346,
+    CUPTI_DRIVER_TRACE_CBID_cuMipmappedArrayCreate = 347,
+    CUPTI_DRIVER_TRACE_CBID_cuMipmappedArrayGetLevel = 348,
+    CUPTI_DRIVER_TRACE_CBID_cuMipmappedArrayDestroy = 349,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetMipmappedArray = 350,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetMipmapFilterMode = 351,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetMipmapLevelBias = 352,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetMipmapLevelClamp = 353,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetMaxAnisotropy = 354,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefGetMipmappedArray = 355,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefGetMipmapFilterMode = 356,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefGetMipmapLevelBias = 357,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefGetMipmapLevelClamp = 358,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefGetMaxAnisotropy = 359,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphicsResourceGetMappedMipmappedArray = 360,
+    CUPTI_DRIVER_TRACE_CBID_cuTexObjectGetResourceViewDesc = 361,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkCreate = 362,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkAddData = 363,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkAddFile = 364,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkComplete = 365,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkDestroy = 366,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamCreateWithPriority = 367,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetPriority = 368,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetFlags = 369,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxGetStreamPriorityRange = 370,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAllocManaged = 371,
+    CUPTI_DRIVER_TRACE_CBID_cuGetErrorString = 372,
+    CUPTI_DRIVER_TRACE_CBID_cuGetErrorName = 373,
+    CUPTI_DRIVER_TRACE_CBID_cuOccupancyMaxActiveBlocksPerMultiprocessor = 374,
+    CUPTI_DRIVER_TRACE_CBID_cuCompilePtx = 375,
+    CUPTI_DRIVER_TRACE_CBID_cuBinaryFree = 376,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamAttachMemAsync = 377,
+    CUPTI_DRIVER_TRACE_CBID_cuPointerSetAttribute = 378,
+    CUPTI_DRIVER_TRACE_CBID_cuMemHostRegister_v2 = 379,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphicsResourceSetMapFlags_v2 = 380,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkCreate_v2 = 381,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkAddData_v2 = 382,
+    CUPTI_DRIVER_TRACE_CBID_cuLinkAddFile_v2 = 383,
+    CUPTI_DRIVER_TRACE_CBID_cuOccupancyMaxPotentialBlockSize = 384,
+    CUPTI_DRIVER_TRACE_CBID_cuGLGetDevices_v2 = 385,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxRetain = 386,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxRelease = 387,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxSetFlags = 388,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxReset = 389,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphicsEGLRegisterImage = 390,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxGetFlags = 391,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxGetState = 392,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamConsumerConnect = 393,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamConsumerDisconnect = 394,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamConsumerAcquireFrame = 395,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamConsumerReleaseFrame = 396,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoD_v2_ptds = 397,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoH_v2_ptds = 398,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoD_v2_ptds = 399,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoA_v2_ptds = 400,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoD_v2_ptds = 401,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoA_v2_ptds = 402,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoH_v2_ptds = 403,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoA_v2_ptds = 404,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy2D_v2_ptds = 405,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy2DUnaligned_v2_ptds = 406,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3D_v2_ptds = 407,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy_ptds = 408,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyPeer_ptds = 409,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3DPeer_ptds = 410,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD8_v2_ptds = 411,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD16_v2_ptds = 412,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD32_v2_ptds = 413,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D8_v2_ptds = 414,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D16_v2_ptds = 415,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D32_v2_ptds = 416,
+    CUPTI_DRIVER_TRACE_CBID_cuGLMapBufferObject_v2_ptds = 417,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAsync_ptsz = 418,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoAAsync_v2_ptsz = 419,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyAtoHAsync_v2_ptsz = 420,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyHtoDAsync_v2_ptsz = 421,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoHAsync_v2_ptsz = 422,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyDtoDAsync_v2_ptsz = 423,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy2DAsync_v2_ptsz = 424,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3DAsync_v2_ptsz = 425,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpyPeerAsync_ptsz = 426,
+    CUPTI_DRIVER_TRACE_CBID_cuMemcpy3DPeerAsync_ptsz = 427,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD8Async_ptsz = 428,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD16Async_ptsz = 429,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD32Async_ptsz = 430,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D8Async_ptsz = 431,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D16Async_ptsz = 432,
+    CUPTI_DRIVER_TRACE_CBID_cuMemsetD2D32Async_ptsz = 433,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetPriority_ptsz = 434,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetFlags_ptsz = 435,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitEvent_ptsz = 436,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamAddCallback_ptsz = 437,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamAttachMemAsync_ptsz = 438,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamQuery_ptsz = 439,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamSynchronize_ptsz = 440,
+    CUPTI_DRIVER_TRACE_CBID_cuEventRecord_ptsz = 441,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchKernel_ptsz = 442,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphicsMapResources_ptsz = 443,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphicsUnmapResources_ptsz = 444,
+    CUPTI_DRIVER_TRACE_CBID_cuGLMapBufferObjectAsync_v2_ptsz = 445,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamProducerConnect = 446,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamProducerDisconnect = 447,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamProducerPresentFrame = 448,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphicsResourceGetMappedEglFrame = 449,
+    CUPTI_DRIVER_TRACE_CBID_cuPointerGetAttributes = 450,
+    CUPTI_DRIVER_TRACE_CBID_cuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags = 451,
+    CUPTI_DRIVER_TRACE_CBID_cuOccupancyMaxPotentialBlockSizeWithFlags = 452,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamProducerReturnFrame = 453,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetP2PAttribute = 454,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefSetBorderColor = 455,
+    CUPTI_DRIVER_TRACE_CBID_cuTexRefGetBorderColor = 456,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAdvise = 457,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue32 = 458,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue32_ptsz = 459,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue32 = 460,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue32_ptsz = 461,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBatchMemOp = 462,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBatchMemOp_ptsz = 463,
+    CUPTI_DRIVER_TRACE_CBID_cuNVNbufferGetPointer = 464,
+    CUPTI_DRIVER_TRACE_CBID_cuNVNtextureGetArray = 465,
+    CUPTI_DRIVER_TRACE_CBID_cuNNSetAllocator = 466,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPrefetchAsync = 467,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPrefetchAsync_ptsz = 468,
+    CUPTI_DRIVER_TRACE_CBID_cuEventCreateFromNVNSync = 469,
+    CUPTI_DRIVER_TRACE_CBID_cuEGLStreamConsumerConnectWithFlags = 470,
+    CUPTI_DRIVER_TRACE_CBID_cuMemRangeGetAttribute = 471,
+    CUPTI_DRIVER_TRACE_CBID_cuMemRangeGetAttributes = 472,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue64 = 473,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue64_ptsz = 474,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue64 = 475,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue64_ptsz = 476,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchCooperativeKernel = 477,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchCooperativeKernel_ptsz = 478,
+    CUPTI_DRIVER_TRACE_CBID_cuEventCreateFromEGLSync = 479,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchCooperativeKernelMultiDevice = 480,
+    CUPTI_DRIVER_TRACE_CBID_cuFuncSetAttribute = 481,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetUuid = 482,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetCtx = 483,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetCtx_ptsz = 484,
+    CUPTI_DRIVER_TRACE_CBID_cuImportExternalMemory = 485,
+    CUPTI_DRIVER_TRACE_CBID_cuExternalMemoryGetMappedBuffer = 486,
+    CUPTI_DRIVER_TRACE_CBID_cuExternalMemoryGetMappedMipmappedArray = 487,
+    CUPTI_DRIVER_TRACE_CBID_cuDestroyExternalMemory = 488,
+    CUPTI_DRIVER_TRACE_CBID_cuImportExternalSemaphore = 489,
+    CUPTI_DRIVER_TRACE_CBID_cuSignalExternalSemaphoresAsync = 490,
+    CUPTI_DRIVER_TRACE_CBID_cuSignalExternalSemaphoresAsync_ptsz = 491,
+    CUPTI_DRIVER_TRACE_CBID_cuWaitExternalSemaphoresAsync = 492,
+    CUPTI_DRIVER_TRACE_CBID_cuWaitExternalSemaphoresAsync_ptsz = 493,
+    CUPTI_DRIVER_TRACE_CBID_cuDestroyExternalSemaphore = 494,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBeginCapture = 495,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBeginCapture_ptsz = 496,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamEndCapture = 497,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamEndCapture_ptsz = 498,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamIsCapturing = 499,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamIsCapturing_ptsz = 500,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphCreate = 501,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddKernelNode = 502,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphKernelNodeGetParams = 503,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddMemcpyNode = 504,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphMemcpyNodeGetParams = 505,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddMemsetNode = 506,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphMemsetNodeGetParams = 507,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphMemsetNodeSetParams = 508,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphNodeGetType = 509,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphGetRootNodes = 510,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphNodeGetDependencies = 511,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphNodeGetDependentNodes = 512,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphInstantiate = 513,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphLaunch = 514,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphLaunch_ptsz = 515,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecDestroy = 516,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphDestroy = 517,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddDependencies = 518,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphRemoveDependencies = 519,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphMemcpyNodeSetParams = 520,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphKernelNodeSetParams = 521,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphDestroyNode = 522,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphClone = 523,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphNodeFindInClone = 524,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddChildGraphNode = 525,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddEmptyNode = 526,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchHostFunc = 527,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchHostFunc_ptsz = 528,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphChildGraphNodeGetGraph = 529,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddHostNode = 530,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphHostNodeGetParams = 531,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetLuid = 532,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphHostNodeSetParams = 533,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphGetNodes = 534,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphGetEdges = 535,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetCaptureInfo = 536,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetCaptureInfo_ptsz = 537,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecKernelNodeSetParams = 538,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBeginCapture_v2 = 539,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBeginCapture_v2_ptsz = 540,
+    CUPTI_DRIVER_TRACE_CBID_cuThreadExchangeStreamCaptureMode = 541,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetNvSciSyncAttributes = 542,
+    CUPTI_DRIVER_TRACE_CBID_cuOccupancyAvailableDynamicSMemPerBlock = 543,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxRelease_v2 = 544,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxReset_v2 = 545,
+    CUPTI_DRIVER_TRACE_CBID_cuDevicePrimaryCtxSetFlags_v2 = 546,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAddressReserve = 547,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAddressFree = 548,
+    CUPTI_DRIVER_TRACE_CBID_cuMemCreate = 549,
+    CUPTI_DRIVER_TRACE_CBID_cuMemRelease = 550,
+    CUPTI_DRIVER_TRACE_CBID_cuMemMap = 551,
+    CUPTI_DRIVER_TRACE_CBID_cuMemUnmap = 552,
+    CUPTI_DRIVER_TRACE_CBID_cuMemSetAccess = 553,
+    CUPTI_DRIVER_TRACE_CBID_cuMemExportToShareableHandle = 554,
+    CUPTI_DRIVER_TRACE_CBID_cuMemImportFromShareableHandle = 555,
+    CUPTI_DRIVER_TRACE_CBID_cuMemGetAllocationGranularity = 556,
+    CUPTI_DRIVER_TRACE_CBID_cuMemGetAllocationPropertiesFromHandle = 557,
+    CUPTI_DRIVER_TRACE_CBID_cuMemGetAccess = 558,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamSetFlags = 559,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamSetFlags_ptsz = 560,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecUpdate = 561,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecMemcpyNodeSetParams = 562,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecMemsetNodeSetParams = 563,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecHostNodeSetParams = 564,
+    CUPTI_DRIVER_TRACE_CBID_cuMemRetainAllocationHandle = 565,
+    CUPTI_DRIVER_TRACE_CBID_cuFuncGetModule = 566,
+    CUPTI_DRIVER_TRACE_CBID_cuIpcOpenMemHandle_v2 = 567,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxResetPersistingL2Cache = 568,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphKernelNodeCopyAttributes = 569,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphKernelNodeGetAttribute = 570,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphKernelNodeSetAttribute = 571,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamCopyAttributes = 572,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamCopyAttributes_ptsz = 573,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetAttribute = 574,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetAttribute_ptsz = 575,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamSetAttribute = 576,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamSetAttribute_ptsz = 577,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphInstantiate_v2 = 578,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetTexture1DLinearMaxWidth = 579,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphUpload = 580,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphUpload_ptsz = 581,
+    CUPTI_DRIVER_TRACE_CBID_cuArrayGetSparseProperties = 582,
+    CUPTI_DRIVER_TRACE_CBID_cuMipmappedArrayGetSparseProperties = 583,
+    CUPTI_DRIVER_TRACE_CBID_cuMemMapArrayAsync = 584,
+    CUPTI_DRIVER_TRACE_CBID_cuMemMapArrayAsync_ptsz = 585,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecChildGraphNodeSetParams = 586,
+    CUPTI_DRIVER_TRACE_CBID_cuEventRecordWithFlags = 587,
+    CUPTI_DRIVER_TRACE_CBID_cuEventRecordWithFlags_ptsz = 588,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddEventRecordNode = 589,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddEventWaitNode = 590,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphEventRecordNodeGetEvent = 591,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphEventWaitNodeGetEvent = 592,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphEventRecordNodeSetEvent = 593,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphEventWaitNodeSetEvent = 594,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecEventRecordNodeSetEvent = 595,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecEventWaitNodeSetEvent = 596,
+    CUPTI_DRIVER_TRACE_CBID_cuArrayGetPlane = 597,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAllocAsync = 598,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAllocAsync_ptsz = 599,
+    CUPTI_DRIVER_TRACE_CBID_cuMemFreeAsync = 600,
+    CUPTI_DRIVER_TRACE_CBID_cuMemFreeAsync_ptsz = 601,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolTrimTo = 602,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolSetAttribute = 603,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolGetAttribute = 604,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolSetAccess = 605,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetDefaultMemPool = 606,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolCreate = 607,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolDestroy = 608,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceSetMemPool = 609,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetMemPool = 610,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAllocFromPoolAsync = 611,
+    CUPTI_DRIVER_TRACE_CBID_cuMemAllocFromPoolAsync_ptsz = 612,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolExportToShareableHandle = 613,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolImportFromShareableHandle = 614,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolExportPointer = 615,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolImportPointer = 616,
+    CUPTI_DRIVER_TRACE_CBID_cuMemPoolGetAccess = 617,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddExternalSemaphoresSignalNode = 618,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExternalSemaphoresSignalNodeGetParams = 619,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExternalSemaphoresSignalNodeSetParams = 620,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddExternalSemaphoresWaitNode = 621,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExternalSemaphoresWaitNodeGetParams = 622,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExternalSemaphoresWaitNodeSetParams = 623,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecExternalSemaphoresSignalNodeSetParams = 624,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecExternalSemaphoresWaitNodeSetParams = 625,
+    CUPTI_DRIVER_TRACE_CBID_cuGetProcAddress = 626,
+    CUPTI_DRIVER_TRACE_CBID_cuFlushGPUDirectRDMAWrites = 627,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphDebugDotPrint = 628,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetCaptureInfo_v2 = 629,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamGetCaptureInfo_v2_ptsz = 630,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamUpdateCaptureDependencies = 631,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamUpdateCaptureDependencies_ptsz = 632,
+    CUPTI_DRIVER_TRACE_CBID_cuUserObjectCreate = 633,
+    CUPTI_DRIVER_TRACE_CBID_cuUserObjectRetain = 634,
+    CUPTI_DRIVER_TRACE_CBID_cuUserObjectRelease = 635,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphRetainUserObject = 636,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphReleaseUserObject = 637,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddMemAllocNode = 638,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddMemFreeNode = 639,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGraphMemTrim = 640,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetGraphMemAttribute = 641,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceSetGraphMemAttribute = 642,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphInstantiateWithFlags = 643,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetExecAffinitySupport = 644,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxCreate_v3 = 645,
+    CUPTI_DRIVER_TRACE_CBID_cuCtxGetExecAffinity = 646,
+    CUPTI_DRIVER_TRACE_CBID_cuDeviceGetUuid_v2 = 647,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphMemAllocNodeGetParams = 648,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphMemFreeNodeGetParams = 649,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphNodeSetEnabled = 650,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphNodeGetEnabled = 651,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchKernelEx = 652,
+    CUPTI_DRIVER_TRACE_CBID_cuLaunchKernelEx_ptsz = 653,
+    CUPTI_DRIVER_TRACE_CBID_cuArrayGetMemoryRequirements = 654,
+    CUPTI_DRIVER_TRACE_CBID_cuMipmappedArrayGetMemoryRequirements = 655,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphInstantiateWithParams = 656,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphInstantiateWithParams_ptsz = 657,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecGetFlags = 658,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue32_v2 = 659,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue32_v2_ptsz = 660,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue64_v2 = 661,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWaitValue64_v2_ptsz = 662,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue32_v2 = 663,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue32_v2_ptsz = 664,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue64_v2 = 665,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamWriteValue64_v2_ptsz = 666,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBatchMemOp_v2 = 667,
+    CUPTI_DRIVER_TRACE_CBID_cuStreamBatchMemOp_v2_ptsz = 668,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphAddBatchMemOpNode = 669,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphBatchMemOpNodeGetParams = 670,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphBatchMemOpNodeSetParams = 671,
+    CUPTI_DRIVER_TRACE_CBID_cuGraphExecBatchMemOpNodeSetParams = 672,
+    CUPTI_DRIVER_TRACE_CBID_cuModuleGetLoadingMode = 673,
+    CUPTI_DRIVER_TRACE_CBID_cuMemGetHandleForAddressRange = 674,
+    CUPTI_DRIVER_TRACE_CBID_cuOccupancyMaxPotentialClusterSize = 675,
+    CUPTI_DRIVER_TRACE_CBID_cuOccupancyMaxActiveClusters = 676,
+    CUPTI_DRIVER_TRACE_CBID_SIZE = 677,
+    CUPTI_DRIVER_TRACE_CBID_FORCE_INT = 0x7fffffff
+} CUpti_driver_api_trace_cbid;
diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_pcsampling.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_pcsampling.h
new file mode 100644
index 0000000000000000000000000000000000000000..ed965bb5d663bea7ef20593fb4de5cff86136cee
--- /dev/null
+++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_pcsampling.h
@@ -0,0 +1,923 @@
+/*
+ * Copyright 2020-2022 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#if !defined(_CUPTI_PCSAMPLING_H_)
+#define _CUPTI_PCSAMPLING_H_
+
+#include <cuda.h>
+#include <stddef.h>
+#include <stdint.h>
+#include "cupti_result.h"
+
+#ifndef CUPTIAPI
+#ifdef _WIN32
+#define CUPTIAPI __stdcall
+#else
+#define CUPTIAPI
+#endif
+#endif
+
+#define ACTIVITY_RECORD_ALIGNMENT 8
+#if defined(_WIN32) // Windows 32- and 64-bit
+#define START_PACKED_ALIGNMENT __pragma(pack(push,1)) // exact fit - no padding
+#define PACKED_ALIGNMENT __declspec(align(ACTIVITY_RECORD_ALIGNMENT))
+#define END_PACKED_ALIGNMENT __pragma(pack(pop))
+#elif defined(__GNUC__) // GCC
+#define START_PACKED_ALIGNMENT
+#define PACKED_ALIGNMENT __attribute__ ((__packed__)) __attribute__ ((aligned (ACTIVITY_RECORD_ALIGNMENT)))
+#define END_PACKED_ALIGNMENT
+#else // all other compilers
+#define START_PACKED_ALIGNMENT
+#define PACKED_ALIGNMENT
+#define END_PACKED_ALIGNMENT
+#endif
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#if defined(__GNUC__) && defined(CUPTI_LIB)
+    #pragma GCC visibility push(default)
+#endif
+
+/**
+ * \defgroup CUPTI_PCSAMPLING_API CUPTI PC Sampling API
+ * Functions, types, and enums that implement the CUPTI PC Sampling API.
+ * @{
+ */
+
+#ifndef CUPTI_PCSAMPLING_STRUCT_SIZE
+#define CUPTI_PCSAMPLING_STRUCT_SIZE(type_, lastfield_) (offsetof(type_, lastfield_) + sizeof(((type_*)0)->lastfield_))
+#endif
+
+#ifndef CUPTI_STALL_REASON_STRING_SIZE
+#define CUPTI_STALL_REASON_STRING_SIZE 128
+#endif
+
+/**
+ * \brief PC Sampling collection mode
+ */
+typedef enum
+{
+  /**
+   * INVALID Value
+   */
+  CUPTI_PC_SAMPLING_COLLECTION_MODE_INVALID = 0,
+  /**
+   * Continuous mode. Kernels are not serialized in this mode.
+   */
+  CUPTI_PC_SAMPLING_COLLECTION_MODE_CONTINUOUS = 1,
+  /**
+   * Serialized mode. Kernels are serialized in this mode.
+   */
+  CUPTI_PC_SAMPLING_COLLECTION_MODE_KERNEL_SERIALIZED = 2,
+} CUpti_PCSamplingCollectionMode;
+
+/**
+ * \brief PC Sampling stall reasons
+ */
+typedef struct PACKED_ALIGNMENT
+{
+  /**
+   * [r] Collected stall reason index
+   */
+  uint32_t pcSamplingStallReasonIndex;
+  /**
+   * [r] Number of times the PC was sampled with the stallReason.
+   */
+  uint32_t samples;
+} CUpti_PCSamplingStallReason;
+
+/**
+ * \brief PC Sampling data
+ */
+typedef struct PACKED_ALIGNMENT
+{
+  /**
+   * [w] Size of the data structure.
+   * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are
+   * available in the structure. Used to preserve backward compatibility.
+   */
+  size_t size;
+  /**
+   * [r] Unique cubin id
+   */
+  uint64_t cubinCrc;
+  /**
+   * [r] PC offset
+   */
+  uint64_t pcOffset;
+  /**
+   * The function's unique symbol index in the module.
+   */
+  uint32_t functionIndex;
+  /**
+   * Padding
+   */
+  uint32_t pad;
+  /**
+   * [r] The function name. This name string might be shared across all the records
+   * including records from activity APIs representing the same function, and so it should not be
+   * modified or freed until post processing of all the records is done. Once done, it is user’s responsibility to
+   * free the memory using free() function.
+   */
+  char* functionName;
+  /**
+   * [r] Collected stall reason count
+   */
+  size_t stallReasonCount;
+  /**
+   * [r] Stall reason id
+   * Total samples
+   */
+  CUpti_PCSamplingStallReason *stallReason;
+} CUpti_PCSamplingPCData;
+
+/**
+ * \brief PC Sampling output data format
+ */
+typedef enum
+{
+  CUPTI_PC_SAMPLING_OUTPUT_DATA_FORMAT_INVALID = 0,
+  /**
+   * HW buffer data will be parsed during collection of data
+   */
+  CUPTI_PC_SAMPLING_OUTPUT_DATA_FORMAT_PARSED = 1,
+} CUpti_PCSamplingOutputDataFormat;
+
+/**
+ * \brief Collected PC Sampling data
+ *
+ */
+typedef struct PACKED_ALIGNMENT
+{
+  /**
+   * [w] Size of the data structure.
+   * CUPTI client should set the size of the structure.
+   * It will be used in CUPTI to check what fields are
+   * available in the structure. Used to preserve backward compatibility.
+   */
+  size_t size;
+  /**
+   * [w] Number of PCs to be collected
+   */
+  size_t collectNumPcs;
+  /**
+   * [r] Number of samples collected across all PCs.
+   * It includes samples for user modules, samples for non-user kernels and dropped samples.
+   * It includes counts for all non selected stall reasons.
+   * CUPTI does not provide PC records for non-user kernels.
+   * CUPTI does not provide PC records for instructions for which all selected stall reason metrics counts are zero.
+   */
+  uint64_t totalSamples;
+  /**
+   * [r] Number of samples that were dropped by hardware due to backpressure/overflow.
+   */
+  uint64_t droppedSamples;
+  /**
+   * [r] Number of PCs collected
+   */
+  size_t totalNumPcs;
+  /**
+   * [r] Number of PCs available for collection
+   */
+  size_t remainingNumPcs;
+  /**
+   * [r] Unique identifier for each range.
+   * Data collected across multiple ranges in multiple buffers can be identified using range id.
+   */
+  uint64_t rangeId;
+  /**
+   * [r] Profiled PC data
+   * This data struct should have enough memory to collect number of PCs mentioned in \brief collectNumPcs
+   */
+  CUpti_PCSamplingPCData *pPcData;
+  /**
+   * [r] Number of samples collected across all non user kernels PCs.
+   * It includes samples for non-user kernels.
+   * It includes counts for all non selected stall reasons as well.
+   * CUPTI does not provide PC records for non-user kernels.
+   */
+  uint64_t nonUsrKernelsTotalSamples;
+} CUpti_PCSamplingData;
+
+/**
+ * \brief PC Sampling configuration attributes
+ *
+ * PC Sampling configuration attribute types. These attributes can be read
+ * using \ref cuptiPCSamplingGetConfigurationAttribute and can be written
+ * using \ref cuptiPCSamplingSetConfigurationAttribute.
+ * Attributes marked
+ * [r] can only be read using \ref cuptiPCSamplingGetConfigurationAttribute
+ * [w] can only be written using \ref cuptiPCSamplingSetConfigurationAttribute
+ * [rw] can be read using \ref cuptiPCSamplingGetConfigurationAttribute and
+ * written using \ref cuptiPCSamplingSetConfigurationAttribute
+ */
+typedef enum
+{
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_INVALID = 0,
+  /**
+   * [rw] Sampling period for PC Sampling.
+   * DEFAULT - CUPTI defined value based on number of SMs
+   * Valid values for the sampling
+   * periods are between 5 to 31 both inclusive. This will set the
+   * sampling period to (2^samplingPeriod) cycles.
+   * For e.g. for sampling period = 5 to 31, cycles = 32, 64, 128,..., 2^31
+   * Value is a uint32_t
+   */
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_SAMPLING_PERIOD = 1,
+  /**
+   * [w] Number of stall reasons to collect.
+   * DEFAULT - All stall reasons will be collected
+   * Value is a size_t
+   * [w] Stall reasons to collect
+   * DEFAULT - All stall reasons will be collected
+   * Input value should be a pointer pointing to array of stall reason indexes
+   * containing all the stall reason indexes to collect.
+   */
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_STALL_REASON = 2,
+  /**
+   * [rw] Size of SW buffer for raw PC counter data downloaded from HW buffer
+   * DEFAULT - 1 MB, which can accommodate approximately 5500 PCs
+   * with all stall reasons
+   * Approximately it takes 16 Bytes (and some fixed size memory)
+   * to accommodate one PC with one stall reason
+   * For e.g.
+   * 1 PC with 1 stall reason = 32 Bytes
+   * 1 PC with 2 stall reason = 48 Bytes
+   * 1 PC with 4 stall reason = 96 Bytes
+   * Value is a size_t
+   */
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_SCRATCH_BUFFER_SIZE = 3,
+  /**
+   * [rw] Size of HW buffer in bytes
+   * DEFAULT - 512 MB
+   * If sampling period is too less, HW buffer can overflow
+   * and drop PC data
+   * Value is a size_t
+   */
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_HARDWARE_BUFFER_SIZE = 4,
+  /**
+   * [rw] PC Sampling collection mode
+   * DEFAULT - CUPTI_PC_SAMPLING_COLLECTION_MODE_CONTINUOUS
+   * Input value should be of type \ref CUpti_PCSamplingCollectionMode.
+   */
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_COLLECTION_MODE = 5,
+  /**
+   * [rw] Control over PC Sampling data collection range
+   * Default - 0
+   * 1 - Allows user to start and stop PC Sampling using APIs -
+   * \ref cuptiPCSamplingStart() - Start PC Sampling
+   * \ref cuptiPCSamplingStop() - Stop PC Sampling
+   * Value is a uint32_t
+   */
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_ENABLE_START_STOP_CONTROL = 6,
+  /**
+   * [w] Value for output data format
+   * Default - CUPTI_PC_SAMPLING_OUTPUT_DATA_FORMAT_PARSED
+   * Input value should be of type \ref CUpti_PCSamplingOutputDataFormat.
+   */
+  CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_OUTPUT_DATA_FORMAT = 7,
+  /**
+   * [w] Data buffer to hold collected PC Sampling data PARSED_DATA
+   * Default - none.
+ * Buffer type is void * which can point to PARSED_DATA + * Refer \ref CUpti_PCSamplingData for buffer format for PARSED_DATA + */ + CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_SAMPLING_DATA_BUFFER = 8, + CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_FORCE_INT = 0x7fffffff, +} CUpti_PCSamplingConfigurationAttributeType; + +/** + * \brief PC sampling configuration information structure + * + * This structure provides \ref CUpti_PCSamplingConfigurationAttributeType which can be configured + * or queried for PC sampling configuration + */ +typedef struct +{ + /** + * Refer \ref CUpti_PCSamplingConfigurationAttributeType for all supported attribute types + */ + CUpti_PCSamplingConfigurationAttributeType attributeType; + /* + * Configure or query status for \p attributeType + * CUPTI_SUCCESS for valid \p attributeType and \p attributeData + * CUPTI_ERROR_INVALID_OPERATION if \p attributeData is not valid + * CUPTI_ERROR_INVALID_PARAMETER if \p attributeType is not valid + */ + CUptiResult attributeStatus; + union + { + /** + * Invalid Value + */ + struct + { + uint64_t data[3]; + } invalidData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_SAMPLING_PERIOD + */ + struct + { + uint32_t samplingPeriod; + } samplingPeriodData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_STALL_REASON + */ + struct + { + size_t stallReasonCount; + uint32_t *pStallReasonIndex; + } stallReasonData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_SCRATCH_BUFFER_SIZE + */ + struct + { + size_t scratchBufferSize; + } scratchBufferSizeData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_HARDWARE_BUFFER_SIZE + */ + struct + { + size_t hardwareBufferSize; + } hardwareBufferSizeData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_COLLECTION_MODE + */ + struct + { + CUpti_PCSamplingCollectionMode collectionMode; + } collectionModeData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_ENABLE_START_STOP_CONTROL + 
*/ + struct + { + uint32_t enableStartStopControl; + } enableStartStopControlData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_OUTPUT_DATA_FORMAT + */ + struct + { + CUpti_PCSamplingOutputDataFormat outputDataFormat; + } outputDataFormatData; + /** + * Refer \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_SAMPLING_DATA_BUFFER + */ + struct + { + void *samplingDataBuffer; + } samplingDataBufferData; + } attributeData; +} CUpti_PCSamplingConfigurationInfo; + +/** + * \brief PC sampling configuration structure + * + * This structure configures PC sampling using \ref cuptiPCSamplingSetConfigurationAttribute + * and queries PC sampling default configuration using \ref cuptiPCSamplingGetConfigurationAttribute + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e. CUpti_PCSamplingConfigurationInfoParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; + /** + * [w] Number of attributes to configure using \ref cuptiPCSamplingSetConfigurationAttribute or query + * using \ref cuptiPCSamplingGetConfigurationAttribute + */ + size_t numAttributes; + /** + * Refer \ref CUpti_PCSamplingConfigurationInfo + */ + CUpti_PCSamplingConfigurationInfo *pPCSamplingConfigurationInfo; +} CUpti_PCSamplingConfigurationInfoParams; +#define CUpti_PCSamplingConfigurationInfoParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingConfigurationInfoParams,pPCSamplingConfigurationInfo) + +/** + * \brief Write PC Sampling configuration attribute. + * + * \param pParams A pointer to \ref CUpti_PCSamplingConfigurationInfoParams + * containing PC sampling configuration. + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called with + * some invalid \p attrib. 
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if attribute \p value is not valid + * or any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingSetConfigurationAttribute(CUpti_PCSamplingConfigurationInfoParams *pParams); + +/** + * \brief Read PC Sampling configuration attribute. + * + * \param pParams A pointer to \ref CUpti_PCSamplingConfigurationInfoParams + * containing PC sampling configuration. + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called with + * some invalid attribute. + * \retval CUPTI_ERROR_INVALID_PARAMETER if the attribute is not valid + * or any \p pParams is not valid + * \retval CUPTI_ERROR_PARAMETER_SIZE_NOT_SUFFICIENT indicates that + * the \p value buffer is too small to hold the attribute value + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingGetConfigurationAttribute(CUpti_PCSamplingConfigurationInfoParams *pParams); + +/** + * \brief Params for cuptiPCSamplingGetData + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e. CUpti_PCSamplingGetDataParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility.
+ */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; + /** + * [w] Data buffer to hold collected PC Sampling data PARSED_DATA + * Buffer type is void * which can point to PARSED_DATA + * Refer \ref CUpti_PCSamplingData for buffer format for PARSED_DATA + */ + void *pcSamplingData; +} CUpti_PCSamplingGetDataParams; +#define CUpti_PCSamplingGetDataParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingGetDataParams, pcSamplingData) +/** + * \brief Flush GPU PC sampling data periodically. + * + * Flushing of GPU PC Sampling data is required at the following points to maintain uniqueness of PCs: + * For \ref CUPTI_PC_SAMPLING_COLLECTION_MODE_CONTINUOUS, after every module load-unload-load + * For \ref CUPTI_PC_SAMPLING_COLLECTION_MODE_KERNEL_SERIALIZED, after every kernel ends + * If the configuration option \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_ENABLE_START_STOP_CONTROL + * is enabled, then after every range end i.e. \ref cuptiPCSamplingStop() + + * + * If the application is profiled in \ref CUPTI_PC_SAMPLING_COLLECTION_MODE_CONTINUOUS, with disabled + * \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_ENABLE_START_STOP_CONTROL, and there is no module unload, + * the user can collect data in two ways: + * Use the \ref cuptiPCSamplingGetData() API periodically + * Use \ref cuptiPCSamplingDisable() on application exit and read GPU PC sampling data from the sampling + * data buffer passed during configuration. + * Note: If the \ref cuptiPCSamplingGetData() API is not called periodically, then the sampling data buffer + * passed during configuration should be large enough to hold all PC data. + * The \ref cuptiPCSamplingGetData() API never does device synchronization. + * It is possible that when the API is called there is some unconsumed data from the HW buffer. In this case + * CUPTI provides only the data available with it at that moment.
+ * + * \param Refer \ref CUpti_PCSamplingGetDataParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called without + * enabling PC sampling. + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingGetData(CUpti_PCSamplingGetDataParams *pParams); + +/** + * \brief Params for cuptiPCSamplingEnable + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e. CUpti_PCSamplingEnableParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; +} CUpti_PCSamplingEnableParams; +#define CUpti_PCSamplingEnableParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingEnableParams, ctx) + +/** + * \brief Enable PC sampling. + * + * \param Refer \ref CUpti_PCSamplingEnableParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingEnable(CUpti_PCSamplingEnableParams *pParams); + +/** + * \brief Params for cuptiPCSamplingDisable + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e. CUpti_PCSamplingDisableParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. 
+ */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; +} CUpti_PCSamplingDisableParams; +#define CUpti_PCSamplingDisableParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingDisableParams, ctx) + +/** + * \brief Disable PC sampling. + * + * For applications which don't destroy the CUDA context explicitly, + * this API does the PC Sampling tear-down, joins threads and copies PC records into the buffer provided + * during the PC sampling configuration. PC records which can't be accommodated in the buffer are discarded. + * + * \param Refer \ref CUpti_PCSamplingDisableParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingDisable(CUpti_PCSamplingDisableParams *pParams); + +/** + * \brief Params for cuptiPCSamplingStart + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e. CUpti_PCSamplingStartParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; +} CUpti_PCSamplingStartParams; +#define CUpti_PCSamplingStartParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingStartParams, ctx) + +/** + * \brief Start PC sampling. + * + * User can collect PC Sampling data for a user-defined range specified by the Start/Stop APIs. + * This API can be used to mark the start of the range. Set the configuration option + * \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_ENABLE_START_STOP_CONTROL to use this API.
+ * + * \param Refer \ref CUpti_PCSamplingStartParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called with + * an incorrect PC Sampling configuration. + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingStart(CUpti_PCSamplingStartParams *pParams); + +/** + * \brief Params for cuptiPCSamplingStop + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e. CUpti_PCSamplingStopParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; +} CUpti_PCSamplingStopParams; +#define CUpti_PCSamplingStopParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingStopParams, ctx) + +/** + * \brief Stop PC sampling. + * + * User can collect PC Sampling data for a user-defined range specified by the Start/Stop APIs. + * This API can be used to mark the end of the range. Set the configuration option + * \ref CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_ENABLE_START_STOP_CONTROL to use this API. + * + * \param Refer \ref CUpti_PCSamplingStopParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called with + * an incorrect PC Sampling configuration. + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingStop(CUpti_PCSamplingStopParams *pParams); + +/** + * \brief Params for cuptiPCSamplingGetNumStallReasons + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e.
CUpti_PCSamplingGetNumStallReasonsParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; + /** + * [r] Number of stall reasons + */ + size_t *numStallReasons; +} CUpti_PCSamplingGetNumStallReasonsParams; +#define CUpti_PCSamplingGetNumStallReasonsParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingGetNumStallReasonsParams, numStallReasons) + +/** + * \brief Get PC sampling stall reason count. + * + * \param Refer \ref CUpti_PCSamplingGetNumStallReasonsParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingGetNumStallReasons(CUpti_PCSamplingGetNumStallReasonsParams *pParams); + +/** + * \brief Params for cuptiPCSamplingGetStallReasons + */ +typedef struct +{ + /** + * [w] Size of the data structure i.e. CUpti_PCSamplingGetStallReasonsParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Assign to NULL + */ + void* pPriv; + /** + * [w] CUcontext + */ + CUcontext ctx; + /** + * [w] Number of stall reasons + */ + size_t numStallReasons; + /** + * [r] Stall reason index + */ + uint32_t *stallReasonIndex; + /** + * [r] Stall reasons name + */ + char **stallReasons; +} CUpti_PCSamplingGetStallReasonsParams; +#define CUpti_PCSamplingGetStallReasonsParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_PCSamplingGetStallReasonsParams, stallReasons) + +/** + * \brief Get PC sampling stall reasons. 
+ * + * \param Refer \ref CUpti_PCSamplingGetStallReasonsParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device + * does not support the API + */ +CUptiResult CUPTIAPI cuptiPCSamplingGetStallReasons(CUpti_PCSamplingGetStallReasonsParams *pParams); + +/** + * \brief Params for cuptiGetSassToSourceCorrelation + */ +typedef struct { + /** + * [w] Size of the data structure i.e. CUpti_GetSassToSourceCorrelationParamsSize + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Pointer to cubin binary where function belongs. + */ + const void* cubin; + /** + * [w] Function name to which PC belongs. + */ + const char *functionName; + /** + * [w] Size of cubin binary. + */ + size_t cubinSize; + /** + * [r] Line number in the source code. + */ + uint32_t lineNumber; + /** + * [w] PC offset + */ + uint64_t pcOffset; + /** + * [r] Path for the source file. + */ + char *fileName; + /** + * [r] Path for the directory of source file. + */ + char *dirName; +} CUpti_GetSassToSourceCorrelationParams; +#define CUpti_GetSassToSourceCorrelationParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_GetSassToSourceCorrelationParams, dirName) + +/** + * \brief SASS to Source correlation. + * + * \param Refer \ref CUpti_GetSassToSourceCorrelationParams + * + * The user is expected to free the allocated memory for fileName and dirName after use. + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if either of the parameters cubin or functionName + * is NULL, or cubinSize is zero, or the size field is not set correctly. + * \retval CUPTI_ERROR_INVALID_MODULE if the provided cubin is invalid. + * \retval CUPTI_ERROR_UNKNOWN an internal error occurred.
+ * This error code is also used for cases when the function is not present in the module. + * A better error code will be returned in a future release. + */ +CUptiResult CUPTIAPI cuptiGetSassToSourceCorrelation(CUpti_GetSassToSourceCorrelationParams *pParams); + +/** + * \brief Params for cuptiGetCubinCrc + */ +typedef struct { + /** + * [w] Size of configuration structure. + * CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are + * available in the structure. Used to preserve backward compatibility. + */ + size_t size; + /** + * [w] Size of cubin binary. + */ + size_t cubinSize; + /** + * [w] Pointer to cubin binary + */ + const void* cubin; + /** + * [r] Computed CRC will be stored in it. + */ + uint64_t cubinCrc; +} CUpti_GetCubinCrcParams; +#define CUpti_GetCubinCrcParamsSize CUPTI_PCSAMPLING_STRUCT_SIZE(CUpti_GetCubinCrcParams, cubinCrc) + +/** + * \brief Get the CRC of cubin. + * + * This function returns the CRC of the provided cubin binary. + * + * \param Refer \ref CUpti_GetCubinCrcParams + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if the parameter cubin is NULL, or the + * provided cubinSize is zero, or the size field is not set. + */ +CUptiResult CUPTIAPI cuptiGetCubinCrc(CUpti_GetCubinCrcParams *pParams); + +/** + * \brief Function type for callback used by CUPTI to request the CRC of a + * loaded module. + * + * This callback function asks for the CRC of the provided module. + * The provided CRC will be stored in PC sampling records i.e. in the field 'cubinCrc' of the PC sampling + * struct CUpti_PCSamplingPCData. The CRC is used during the offline source correlation to uniquely identify the module. + * + * \param cubin The pointer to the cubin binary + * \param cubinSize The size of the cubin binary. + * \param cubinCrc Returns the computed CRC of the cubin.
+ */ +typedef void (CUPTIAPI *CUpti_ComputeCrcCallbackFunc)( + const void* cubin, + size_t cubinSize, + uint64_t *cubinCrc); + +/** + * \brief Register callback function with CUPTI to use + * your own algorithm to compute cubin crc. + * + * This function registers a callback function and it gets called + * from CUPTI when a CUDA module is loaded. + * + * \param funcComputeCubinCrc callback is invoked when a CUDA module + * is loaded. + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if \p funcComputeCubinCrc is NULL. + */ +CUptiResult CUPTIAPI cuptiRegisterComputeCrcCallback(CUpti_ComputeCrcCallbackFunc funcComputeCubinCrc); + +/** @} */ /* END CUPTI_PCSAMPLING_API */ + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility pop +#endif + +#if defined(__cplusplus) +} +#endif + +#endif /*_CUPTI_PCSAMPLING_H_*/ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_sass_metrics.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_sass_metrics.h new file mode 100644 index 0000000000000000000000000000000000000000..acb59cf8e5882a5ff13b4a1b0fdc6bc7b0ec47f7 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_sass_metrics.h @@ -0,0 +1,436 @@ +/* + * Copyright 2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. 
Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(_CUPTI_SASS_METRICS_H_) +#define _CUPTI_SASS_METRICS_H_ + +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility push(default) +#endif + +/** + * \defgroup CUPTI_SASS_METRICS_API CUPTI SASS Metrics API + * Functions, types, and enums that implement the CUPTI SASS Metrics API. + * @{ + */ + +typedef enum +{ + /// SASS metric data will be collected at GPU level. + /// In CUpti_SassMetricsGetDataProperties_Params struct the numOfInstances will be equal to 1 + CUPTI_SASS_METRICS_OUTPUT_GRANULARITY_GPU = 0, + + /// SASS metric data will be collected at SM level + /// In CUpti_SassMetricsGetDataProperties_Params struct the numOfInstances will be equal to number of SMs in the GPU + CUPTI_SASS_METRICS_OUTPUT_GRANULARITY_SM = 1, + + /// SASS metric data will be collected at SM sub-partition level + /// In CUpti_SassMetricsGetDataProperties_Params struct the numOfInstances will be equal to number of SM sub-partitions in the GPU + CUPTI_SASS_METRICS_OUTPUT_GRANULARITY_SMSP = 2, + + CUPTI_SASS_METRICS_OUTPUT_GRANULARITY_INVALID +} CUpti_SassMetrics_OutputGranularity; + +typedef struct CUpti_SassMetrics_MetricDetails +{ + /// unique ID for the SASS metric + uint64_t metricId; + /// metric name + const char* pMetricName; + /// metric description + const char* pMetricDescription; +} CUpti_SassMetrics_MetricDetails; + +/** + * \brief Params for cuptiSassMetricsGetNumOfMetrics + */ +typedef struct CUpti_SassMetrics_GetNumOfMetrics_Params +{ + /// [in] should be equal to CUpti_SassMetrics_GetNumOfMetrics_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] chip name for which metrics will be queried + const char* pChipName; + /// [out] number of metrics supported for the queried chip + size_t numOfMetrics; +} CUpti_SassMetrics_GetNumOfMetrics_Params; + +#define CUpti_SassMetrics_GetNumOfMetrics_Params_STRUCT_SIZE 
CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetrics_GetNumOfMetrics_Params, numOfMetrics) + +/** + * \brief Get the number of supported SASS metrics for the chip. + * + * \param pParams A pointer to \ref CUpti_SassMetrics_GetNumOfMetrics_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric collection + */ +CUptiResult CUPTIAPI cuptiSassMetricsGetNumOfMetrics(CUpti_SassMetrics_GetNumOfMetrics_Params* pParams); + +/** + * \brief Params for cuptiSassMetricsGetMetrics + */ +typedef struct CUpti_SassMetrics_GetMetrics_Params +{ + /// [in] should be equal to CUpti_SassMetrics_GetMetrics_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] chip name for which metrics will be queried + const char* pChipName; + /// [in] number of metrics supported for the queried chip (can be queried using cuptiSassMetricsGetNumOfMetrics()) + size_t numOfMetrics; + /// [out] list of metrics supported for queried chip + CUpti_SassMetrics_MetricDetails* pMetricsList; +} CUpti_SassMetrics_GetMetrics_Params; +#define CUpti_SassMetrics_GetMetrics_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetrics_GetMetrics_Params, pMetricsList) + +/** + * \brief Get the list of all supported SASS metrics for the chip. 
+ * + * \param pParams A pointer to \ref CUpti_SassMetrics_GetMetrics_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric collection + */ +CUptiResult CUPTIAPI cuptiSassMetricsGetMetrics(CUpti_SassMetrics_GetMetrics_Params* pParams); + +/** + * \brief Params for cuptiSassMetricsGetProperties + */ +typedef struct CUpti_SassMetrics_GetProperties_Params +{ + /// [in] should be equal to CUpti_SassMetrics_GetProperties_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] chip name for which metric will be queried + const char* pChipName; + /// [in] metric name + const char* pMetricName; + /// [out] returns the metric ID and the metric description + CUpti_SassMetrics_MetricDetails metric; +} CUpti_SassMetrics_GetProperties_Params; +#define CUpti_SassMetrics_GetProperties_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetrics_GetProperties_Params, metric) + +/** + * \brief Get metric properties for the queried metric. + * For a given metric the results will be put in CUpti_SassMetrics_MetricDetails which + * stores metric ID, description of the metric. 
+ * + * \param pParams A pointer to \ref CUpti_SassMetrics_GetProperties_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric data collection + */ +CUptiResult CUPTIAPI cuptiSassMetricsGetProperties(CUpti_SassMetrics_GetProperties_Params *pParams); + +typedef struct CUpti_SassMetrics_Config +{ + /// [in] unique id for the SASS metric, can be queried using cuptiSassMetricsGetProperties() + uint64_t metricId; + /// [in] CUpti_SassMetrics_OutputGranularity + uint8_t outputGranularity; +} CUpti_SassMetrics_Config; + +/** + * \brief Params for cuptiSassMetricsSetConfig + */ +typedef struct CUpti_SassMetricsSetConfig_Params +{ + /// [in] equal to CUpti_SassMetricsSetConfig_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] num of metric configs, will be equal to number of metrics queried + size_t numOfMetricConfig; + /// [in] list of metric configs generated for the given SASS metrics + CUpti_SassMetrics_Config* pConfigs; + /// [in] device index for which config will be set, user can call this once for + /// the device on which the SASS metric data will be collected + uint32_t deviceIndex; +} CUpti_SassMetricsSetConfig_Params; +#define CUpti_SassMetricsSetConfig_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetricsSetConfig_Params, deviceIndex) + +/** + * \brief Set config for the SASS metric data collection for a device. + * The user needs to call this API before calling any of the SASS metric data collection APIs. + * Each set config API call needs to be followed by a cuptiSassMetricsUnsetConfig() API call + * before calling the cuptiSassMetricsSetConfig() API again for the same device.
+ * + * \param pParams A pointer to \ref CUpti_SassMetricsSetConfig_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_INVALID_CONTEXT if no CUDA context has been created prior to this API call + * \retval CUPTI_ERROR_INVALID_OPERATION if this is called multiple times for the device without calling the unset config API + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric data collection + */ +CUptiResult CUPTIAPI cuptiSassMetricsSetConfig(CUpti_SassMetricsSetConfig_Params *pParams); + +/** + * \brief Params for cuptiSassMetricsUnsetConfig + */ +typedef struct CUpti_SassMetricsUnsetConfig_Params +{ + /// [in] equal to CUpti_SassMetricsUnsetConfig_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] device index for which the SASS metric data collection config will get reset, the user needs to call this API for + /// all the devices on which SASS metric data collection has been configured. + uint32_t deviceIndex; +} CUpti_SassMetricsUnsetConfig_Params; +#define CUpti_SassMetricsUnsetConfig_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetricsUnsetConfig_Params, deviceIndex) + +/** + * \brief Unset config API will reset the SASS metric data collection configuration for the device. + * Once this API is called, CUPTI will deallocate all the memory allocated and remove all + * the configuration for SASS metric data collection. The user can only call this API for a device on which + * the cuptiSassMetricsSetConfig() API has been called earlier.
+ * + * \param pParams A pointer to \ref CUpti_SassMetricsUnsetConfig_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_INVALID_CONTEXT if no CUDA context has been created prior to this API call + * \retval CUPTI_ERROR_INVALID_OPERATION if this is called multiple times for the device without calling the set config API + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric data collection + */ +CUptiResult CUPTIAPI cuptiSassMetricsUnsetConfig(CUpti_SassMetricsUnsetConfig_Params *pParams); + +/** + * \brief Params for cuptiSassMetricsEnable + */ +typedef struct CUpti_SassMetricsEnable_Params +{ + /// [in] equal to CUpti_SassMetricsEnable_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] CUDA context on which SASS metric data collection will be enabled. + /// If set NULL, the default context will be considered for SASS metric data collection. + CUcontext ctx; + /// [in] if false, all the functions will be patched with the cuptiSassMetricsEnable() API call regardless of whether they execute. + /// When this parameter is set to true, metric data collection for a function will be done at its very first execution in the enable/disable + /// range. + uint8_t enableLazyPatching; +} CUpti_SassMetricsEnable_Params; +#define CUpti_SassMetricsEnable_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetricsEnable_Params, enableLazyPatching) + +/** + * \brief SASS metric data collection enable API will mark the start of a range, between which kernels + * will be profiled for SASS metrics.
+ * + * \param pParams A pointer to \ref CUpti_SassMetricsEnable_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric data collection + * \retval CUPTI_ERROR_INVALID_CONTEXT if no CUDA context has been created prior to this API call + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called multiple times for a CUDA context without calling + * the cuptiSassMetricsDisable() API, or called before the cuptiSassMetricsSetConfig() API call. + */ +CUptiResult CUPTIAPI cuptiSassMetricsEnable(CUpti_SassMetricsEnable_Params* pParams); + +/** + * \brief Params for cuptiSassMetricsDisable + */ +typedef struct CUpti_SassMetricsDisable_Params +{ + /// [in] equal to CUpti_SassMetricsDisable_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] CUDA context on which SASS metric data collection will be disabled. + /// If set to NULL, the default context will be considered for SASS metric data collection. + CUcontext ctx; + /// [out] The number of dropped SASS records, equal to numOfPatchedInstructions * numOfInstances. + /// The number of dropped records will be zero when data is flushed prior to calling the disable API. + size_t numOfDroppedRecords; +} CUpti_SassMetricsDisable_Params; +#define CUpti_SassMetricsDisable_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetricsDisable_Params, numOfDroppedRecords) + +/** + * \brief The SASS metric data collection disable API marks the end of a range; any kernel launched after this + * API call will not be profiled for SASS metrics.
+ * + * \param pParams A pointer to \ref CUpti_SassMetricsDisable_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric data collection + * \retval CUPTI_ERROR_INVALID_CONTEXT if no CUDA context has been created prior to this API call + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called multiple times for a CUDA context without calling + * the cuptiSassMetricsEnable() API, or called before the cuptiSassMetricsSetConfig() API call. + */ +CUptiResult CUPTIAPI cuptiSassMetricsDisable(CUpti_SassMetricsDisable_Params* pParams); + +/** + * \brief Params for cuptiSassMetricsGetDataProperties + */ +typedef struct CUpti_SassMetricsGetDataProperties_Params +{ + /// [in] equal to CUpti_SassMetricsGetDataProperties_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] CUDA context on which SASS metric data collection was enabled. + /// If set to NULL, the default context will be considered for SASS metric data collection. + CUcontext ctx; + /// [out] total number of SASS records that have been collected + size_t numOfPatchedInstructionRecords; + /// [out] number of instances for each metric value per instruction. + /// This will depend on the CUpti_SassMetrics_OutputGranularity level set in the metric config. + size_t numOfInstances; +} CUpti_SassMetricsGetDataProperties_Params; + +#define CUpti_SassMetricsGetDataProperties_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetricsGetDataProperties_Params, numOfInstances) +/** + * \brief The SASS metric data properties API reports the number of instances of a metric + * value and the number of SASS instruction records that have been collected. The number of instances of a metric + * will vary according to the output granularity level the user set with the CUpti_SassMetrics_OutputGranularity value.
+ * The user needs to allocate memory for retrieving the SASS data using the cuptiSassMetricsFlushData() API. + * + * \param pParams A pointer to \ref CUpti_SassMetricsGetDataProperties_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric data collection + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called outside the enable/disable range. + */ +CUptiResult CUPTIAPI cuptiSassMetricsGetDataProperties(CUpti_SassMetricsGetDataProperties_Params* pParams); + +typedef struct CUpti_SassMetrics_InstanceValue +{ + // unique id of the metric + uint64_t metricId; + // metric value + uint64_t value; +} CUpti_SassMetrics_InstanceValue; +#define CUpti_SassMetrics_InstanceValue_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetrics_InstanceValue, value) + +typedef struct CUpti_SassMetrics_Data +{ + /// [in] equal to CUpti_SassMetricsFlushData_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [out] Unique cubin id + uint32_t cubinCrc; + /// [out] function's unique symbol index in the module. + uint32_t functionIndex; + /// [out] The function name + const char* functionName; + /// [out] pc offset for the function in a module + uint32_t pcOffset; + /// [out] array of size equal to the number of instances per metric, containing the metric ID and metric value. + CUpti_SassMetrics_InstanceValue* pInstanceValues; +} CUpti_SassMetrics_Data; + +/** + * \brief Params for cuptiSassMetricsFlushData + */ +typedef struct CUpti_SassMetricsFlushData_Params +{ + /// [in] equal to CUpti_SassMetricsFlushData_Params_STRUCT_SIZE + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] CUDA context on which SASS metric data collection was enabled. + /// If set to NULL, the default context will be considered for SASS metric data collection.
+ CUcontext ctx; + /// [in] number of patched instruction records to be retrieved; the user can call cuptiSassMetricsGetDataProperties() + /// to get the total number of records available. + size_t numOfPatchedInstructionRecords; + /// [in] number of patched instruction record instances per metric; the user can call cuptiSassMetricsGetDataProperties() + /// to get the total number of instances available for each record per metric. + size_t numOfInstances; + /// [out] caller-allocated array of CUpti_SassMetrics_Data records that CUPTI fills with the collected data. + CUpti_SassMetrics_Data* pMetricsData; +} CUpti_SassMetricsFlushData_Params; +#define CUpti_SassMetricsFlushData_Params_STRUCT_SIZE CUPTI_PROFILER_STRUCT_SIZE(CUpti_SassMetricsFlushData_Params, numOfInstances) + +/** + * \brief Flush SASS metrics data from the CUPTI internal buffer to the user buffer. + * The user needs to allocate the buffer for retrieving the data. The number of records collected + * can be queried using the API cuptiSassMetricsGetDataProperties(). + * + * \param pParams A pointer to \ref CUpti_SassMetricsFlushData_Params + * + * \retval CUPTI_SUCCESS + * \retval CUPTI_ERROR_INVALID_PARAMETER if any \p pParams is not valid + * \retval CUPTI_ERROR_NOT_SUPPORTED indicates that the system/device doesn't support SASS metric data collection. + * \retval CUPTI_ERROR_INVALID_OPERATION if this API is called outside the enable/disable range.
+ */ +CUptiResult CUPTIAPI cuptiSassMetricsFlushData(CUpti_SassMetricsFlushData_Params* pParams); + +/** @} */ /* END CUPTI_SASS_METRICS_API */ + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility pop +#endif + +#ifdef __cplusplus +} /* extern "C" */ +#endif + +#endif // _CUPTI_SASS_METRICS_H_ \ No newline at end of file diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cudaGL_meta.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cudaGL_meta.h new file mode 100644 index 0000000000000000000000000000000000000000..7a52e194b265d32f61d47bd3081f4958755bff46 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cudaGL_meta.h @@ -0,0 +1,116 @@ +// This file is generated. Any changes you make will be lost during the next clean build. + +// Dependent includes +#ifdef __APPLE__ +#include <OpenGL/gl.h> +#else +#include <GL/gl.h> +#endif + +// CUDA public interface, for type definitions and cu* function prototypes +#include "cudaGL.h" + + +// ************************************************************************* +// Definitions of structs to hold parameters for each function +// ************************************************************************* + +typedef struct cuGraphicsGLRegisterBuffer_params_st { + CUgraphicsResource *pCudaResource; + GLuint buffer; + unsigned int Flags; +} cuGraphicsGLRegisterBuffer_params; + +typedef struct cuGraphicsGLRegisterImage_params_st { + CUgraphicsResource *pCudaResource; + GLuint image; + GLenum target; + unsigned int Flags; +} cuGraphicsGLRegisterImage_params; + +typedef struct cuGLGetDevices_v2_params_st { + unsigned int *pCudaDeviceCount; + CUdevice *pCudaDevices; + unsigned int cudaDeviceCount; + CUGLDeviceList deviceList; +} cuGLGetDevices_v2_params; + +typedef struct cuGLCtxCreate_v2_params_st { + CUcontext *pCtx; + unsigned int Flags; + CUdevice device; +} cuGLCtxCreate_v2_params; + +typedef struct cuGLRegisterBufferObject_params_st {
+ GLuint buffer; +} cuGLRegisterBufferObject_params; + +typedef struct cuGLMapBufferObject_v2_ptds_params_st { + CUdeviceptr *dptr; + size_t *size; + GLuint buffer; +} cuGLMapBufferObject_v2_ptds_params; + +typedef struct cuGLUnmapBufferObject_params_st { + GLuint buffer; +} cuGLUnmapBufferObject_params; + +typedef struct cuGLUnregisterBufferObject_params_st { + GLuint buffer; +} cuGLUnregisterBufferObject_params; + +typedef struct cuGLSetBufferObjectMapFlags_params_st { + GLuint buffer; + unsigned int Flags; +} cuGLSetBufferObjectMapFlags_params; + +typedef struct cuGLMapBufferObjectAsync_v2_ptsz_params_st { + CUdeviceptr *dptr; + size_t *size; + GLuint buffer; + CUstream hStream; +} cuGLMapBufferObjectAsync_v2_ptsz_params; + +typedef struct cuGLUnmapBufferObjectAsync_params_st { + GLuint buffer; + CUstream hStream; +} cuGLUnmapBufferObjectAsync_params; + +typedef struct cuGLGetDevices_params_st { + unsigned int *pCudaDeviceCount; + CUdevice *pCudaDevices; + unsigned int cudaDeviceCount; + CUGLDeviceList deviceList; +} cuGLGetDevices_params; + +typedef struct cuGLMapBufferObject_v2_params_st { + CUdeviceptr *dptr; + size_t *size; + GLuint buffer; +} cuGLMapBufferObject_v2_params; + +typedef struct cuGLMapBufferObjectAsync_v2_params_st { + CUdeviceptr *dptr; + size_t *size; + GLuint buffer; + CUstream hStream; +} cuGLMapBufferObjectAsync_v2_params; + +typedef struct cuGLCtxCreate_params_st { + CUcontext *pCtx; + unsigned int Flags; + CUdevice device; +} cuGLCtxCreate_params; + +typedef struct cuGLMapBufferObject_params_st { + CUdeviceptr_v1 *dptr; + unsigned int *size; + GLuint buffer; +} cuGLMapBufferObject_params; + +typedef struct cuGLMapBufferObjectAsync_params_st { + CUdeviceptr_v1 *dptr; + unsigned int *size; + GLuint buffer; + CUstream hStream; +} cuGLMapBufferObjectAsync_params; diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cudaVDPAU_meta.h 
b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cudaVDPAU_meta.h new file mode 100644 index 0000000000000000000000000000000000000000..abc603c8d9be21e012a9b1641330c2e203d623b2 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cudaVDPAU_meta.h @@ -0,0 +1,46 @@ +// This file is generated. Any changes you make will be lost during the next clean build. + +// Dependent includes +#include <vdpau/vdpau.h> + +// CUDA public interface, for type definitions and cu* function prototypes +#include "cudaVDPAU.h" + + +// ************************************************************************* +// Definitions of structs to hold parameters for each function +// ************************************************************************* + +typedef struct cuVDPAUGetDevice_params_st { + CUdevice *pDevice; + VdpDevice vdpDevice; + VdpGetProcAddress *vdpGetProcAddress; +} cuVDPAUGetDevice_params; + +typedef struct cuVDPAUCtxCreate_v2_params_st { + CUcontext *pCtx; + unsigned int flags; + CUdevice device; + VdpDevice vdpDevice; + VdpGetProcAddress *vdpGetProcAddress; +} cuVDPAUCtxCreate_v2_params; + +typedef struct cuGraphicsVDPAURegisterVideoSurface_params_st { + CUgraphicsResource *pCudaResource; + VdpVideoSurface vdpSurface; + unsigned int flags; +} cuGraphicsVDPAURegisterVideoSurface_params; + +typedef struct cuGraphicsVDPAURegisterOutputSurface_params_st { + CUgraphicsResource *pCudaResource; + VdpOutputSurface vdpSurface; + unsigned int flags; +} cuGraphicsVDPAURegisterOutputSurface_params; + +typedef struct cuVDPAUCtxCreate_params_st { + CUcontext *pCtx; + unsigned int flags; + CUdevice device; + VdpDevice vdpDevice; + VdpGetProcAddress *vdpGetProcAddress; +} cuVDPAUCtxCreate_params; diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cuda_vdpau_interop_meta.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cuda_vdpau_interop_meta.h new file mode 100644 index
0000000000000000000000000000000000000000..88e79d1957925c4bbacd381e9461d5072de88f24 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cuda_vdpau_interop_meta.h @@ -0,0 +1,38 @@ +// This file is generated. Any changes you make will be lost during the next clean build. + +// CUDA public interface, for type definitions and api function prototypes +#include "cuda_vdpau_interop.h" + +// ************************************************************************* +// Definitions of structs to hold parameters for each function +// ************************************************************************* + +// Currently used parameter trace structures +typedef struct cudaVDPAUGetDevice_v3020_params_st { + int *device; + VdpDevice vdpDevice; + VdpGetProcAddress *vdpGetProcAddress; +} cudaVDPAUGetDevice_v3020_params; + +typedef struct cudaVDPAUSetVDPAUDevice_v3020_params_st { + int device; + VdpDevice vdpDevice; + VdpGetProcAddress *vdpGetProcAddress; +} cudaVDPAUSetVDPAUDevice_v3020_params; + +typedef struct cudaGraphicsVDPAURegisterVideoSurface_v3020_params_st { + struct cudaGraphicsResource **resource; + VdpVideoSurface vdpSurface; + unsigned int flags; +} cudaGraphicsVDPAURegisterVideoSurface_v3020_params; + +typedef struct cudaGraphicsVDPAURegisterOutputSurface_v3020_params_st { + struct cudaGraphicsResource **resource; + VdpOutputSurface vdpSurface; + unsigned int flags; +} cudaGraphicsVDPAURegisterOutputSurface_v3020_params; + +// Parameter trace structures for removed functions + + +// End of parameter trace structures diff --git a/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/nvperf_target.h b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/nvperf_target.h new file mode 100644 index 0000000000000000000000000000000000000000..4145af5bf3a0604ef4fa46e9295c82110b7b7002 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cuda_cupti/include/nvperf_target.h @@ -0,0 +1,570 @@ +#ifndef 
NVPERF_TARGET_H +#define NVPERF_TARGET_H + +/* + * Copyright 2014-2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO USER: + * + * This source code is subject to NVIDIA ownership rights under U.S. and + * international Copyright laws. + * + * This software and the information contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and conditions + * of a form of NVIDIA software license agreement. + * + * NVIDIA MAKES NO REPRESENTATION ABOUT THE SUITABILITY OF THIS SOURCE + * CODE FOR ANY PURPOSE. IT IS PROVIDED "AS IS" WITHOUT EXPRESS OR + * IMPLIED WARRANTY OF ANY KIND. NVIDIA DISCLAIMS ALL WARRANTIES WITH + * REGARD TO THIS SOURCE CODE, INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY, NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, + * OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS + * OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE + * OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE + * OR PERFORMANCE OF THIS SOURCE CODE. + * + * U.S. Government End Users. This source code is a "commercial item" as + * that term is defined at 48 C.F.R. 2.101 (OCT 1995), consisting of + * "commercial computer software" and "commercial computer software + * documentation" as such terms are used in 48 C.F.R. 12.212 (SEPT 1995) + * and is provided to the U.S. Government only as a commercial end item. + * Consistent with 48 C.F.R.12.212 and 48 C.F.R. 227.7202-1 through + * 227.7202-4 (JUNE 1995), all U.S. Government End Users acquire the + * source code with only those rights set forth herein. + * + * Any use of this source code in individual and commercial software must + * include, in the user documentation and internal comments to the code, + * the above Disclaimer and U.S. Government End Users Notice. 
+ */ + +#include <stddef.h> +#include <stdint.h> +#include "nvperf_common.h" + +#if defined(__GNUC__) && defined(NVPA_SHARED_LIB) + #pragma GCC visibility push(default) + #if !defined(NVPW_LOCAL) + #define NVPW_LOCAL __attribute__ ((visibility ("hidden"))) + #endif +#else + #if !defined(NVPW_LOCAL) + #define NVPW_LOCAL + #endif +#endif + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @file nvperf_target.h + */ + +#ifndef NVPW_GPU_ARCHITECTURE_SUPPORT_LEVEL_DEFINED +#define NVPW_GPU_ARCHITECTURE_SUPPORT_LEVEL_DEFINED + /// GPU architecture support level + typedef enum NVPW_GpuArchitectureSupportLevel + { + NVPW_GPU_ARCHITECTURE_SUPPORT_LEVEL_UNKNOWN = 0, + NVPW_GPU_ARCHITECTURE_SUPPORT_LEVEL_UNSUPPORTED, + NVPW_GPU_ARCHITECTURE_SUPPORT_LEVEL_SUPPORTED + } NVPW_GpuArchitectureSupportLevel; +#endif //NVPW_GPU_ARCHITECTURE_SUPPORT_LEVEL_DEFINED + +#ifndef NVPW_SLI_SUPPORT_LEVEL_DEFINED +#define NVPW_SLI_SUPPORT_LEVEL_DEFINED + /// SLI configuration support level + typedef enum NVPW_SliSupportLevel + { + NVPW_SLI_SUPPORT_LEVEL_UNKNOWN = 0, + NVPW_SLI_SUPPORT_LEVEL_UNSUPPORTED, + /// Only Non-SLI configurations are supported. + NVPW_SLI_SUPPORT_LEVEL_SUPPORTED_NON_SLI_CONFIGURATION + } NVPW_SliSupportLevel; +#endif //NVPW_SLI_SUPPORT_LEVEL_DEFINED + +#ifndef NVPW_VGPU_SUPPORT_LEVEL_DEFINED +#define NVPW_VGPU_SUPPORT_LEVEL_DEFINED + /// Virtualized GPU configuration support level + typedef enum NVPW_VGpuSupportLevel + { + NVPW_VGPU_SUPPORT_LEVEL_UNKNOWN = 0, + NVPW_VGPU_SUPPORT_LEVEL_UNSUPPORTED, + /// Supported but not allowed by system admin.
+ NVPW_VGPU_SUPPORT_LEVEL_SUPPORTED_DISALLOWED, + NVPW_VGPU_SUPPORT_LEVEL_SUPPORTED_ALLOWED, + NVPW_VGPU_SUPPORT_LEVEL_SUPPORTED_NON_VGPU_CONFIGURATION + } NVPW_VGpuSupportLevel; +#endif //NVPW_VGPU_SUPPORT_LEVEL_DEFINED + +#ifndef NVPW_CONF_COMPUTE_SUPPORT_LEVEL_DEFINED +#define NVPW_CONF_COMPUTE_SUPPORT_LEVEL_DEFINED + /// Confidential Compute mode support level + typedef enum NVPW_ConfidentialComputeSupportLevel + { + NVPW_CONF_COMPUTE_SUPPORT_LEVEL_UNKNOWN = 0, + NVPW_CONF_COMPUTE_SUPPORT_LEVEL_UNSUPPORTED, + NVPW_CONF_COMPUTE_SUPPORT_LEVEL_SUPPORTED_NON_CONF_COMPUTE_CONFIGURATION + } NVPW_ConfidentialComputeSupportLevel; +#endif //NVPW_CONF_COMPUTE_SUPPORT_LEVEL_DEFINED + +#ifndef NVPW_CMP_SUPPORT_LEVEL_DEFINED +#define NVPW_CMP_SUPPORT_LEVEL_DEFINED + /// CMP support level + typedef enum NVPW_CmpSupportLevel + { + NVPW_CMP_SUPPORT_LEVEL_UNKNOWN = 0, + NVPW_CMP_SUPPORT_LEVEL_UNSUPPORTED, + NVPW_CMP_SUPPORT_LEVEL_SUPPORTED_NON_CMP_CONFIGURATON + } NVPW_CmpSupportLevel; +#endif //NVPW_CMP_SUPPORT_LEVEL_DEFINED + +#ifndef NVPW_WSL_SUPPORT_LEVEL_DEFINED +#define NVPW_WSL_SUPPORT_LEVEL_DEFINED + /// WSL support level + typedef enum NVPW_WslSupportLevel + { + NVPW_WSL_SUPPORT_LEVEL_UNKNOWN = 0, + NVPW_WSL_SUPPORT_LEVEL_UNSUPPORTED_INSUFFICIENT_DRIVER_VERSION, + NVPW_WSL_SUPPORT_LEVEL_SUPPORTED, + NVPW_WSL_SUPPORT_LEVEL_SUPPORTED_NON_WSL_CONFIGURATION + } NVPW_WslSupportLevel; +#endif //NVPW_WSL_SUPPORT_LEVEL_DEFINED + + typedef struct NVPW_InitializeTarget_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + } NVPW_InitializeTarget_Params; +#define NVPW_InitializeTarget_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_InitializeTarget_Params, pPriv) + + /// Load the target library. 
+ NVPA_Status NVPW_InitializeTarget(NVPW_InitializeTarget_Params* pParams); + + typedef struct NVPW_GetDeviceCount_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + size_t numDevices; + } NVPW_GetDeviceCount_Params; +#define NVPW_GetDeviceCount_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_GetDeviceCount_Params, numDevices) + + NVPA_Status NVPW_GetDeviceCount(NVPW_GetDeviceCount_Params* pParams); + + typedef struct NVPW_Device_GetNames_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + size_t deviceIndex; + const char* pDeviceName; + const char* pChipName; + } NVPW_Device_GetNames_Params; +#define NVPW_Device_GetNames_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Device_GetNames_Params, pChipName) + + NVPA_Status NVPW_Device_GetNames(NVPW_Device_GetNames_Params* pParams); + + typedef struct NVPW_PciBusId + { + /// The PCI domain on which the device bus resides. + uint32_t domain; + /// The bus on which the device resides. + uint16_t bus; + /// device ID. 
+ uint16_t device; + } NVPW_PciBusId; +#define NVPW_PciBusId_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_PciBusId, device) + + typedef struct NVPW_Device_GetPciBusIds_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] caller-allocated array of NVPW_PciBusId, indexed by NVPW deviceIndex + NVPW_PciBusId* pBusIds; + /// [in] size of the pBusIDs array; use result from NVPW_GetDeviceCount + size_t numDevices; + } NVPW_Device_GetPciBusIds_Params; +#define NVPW_Device_GetPciBusIds_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Device_GetPciBusIds_Params, numDevices) + + NVPA_Status NVPW_Device_GetPciBusIds(NVPW_Device_GetPciBusIds_Params* pParams); + + +#define NVPW_DEVICE_MIG_GPU_INSTANCE_ID_INVALID 0xFFFFFFFFu +#define NVPW_DEVICE_MIG_GPU_INSTANCE_ID_FULLCHIP 0xFFFFFFFEu + + + typedef struct NVPW_Device_GetMigAttributes_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + size_t deviceIndex; + /// [out] + NVPA_Bool isMigPartition; + /// [out] + uint32_t gpuInstanceId; + /// [out] + uint32_t computeInstanceId; + } NVPW_Device_GetMigAttributes_Params; +#define NVPW_Device_GetMigAttributes_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Device_GetMigAttributes_Params, computeInstanceId) + + NVPA_Status NVPW_Device_GetMigAttributes(NVPW_Device_GetMigAttributes_Params* pParams); + + typedef struct NVPW_Adapter_GetDeviceIndex_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + struct IDXGIAdapter* pAdapter; + /// [in] + size_t sliIndex; + /// [out] + size_t deviceIndex; + } NVPW_Adapter_GetDeviceIndex_Params; +#define NVPW_Adapter_GetDeviceIndex_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Adapter_GetDeviceIndex_Params, deviceIndex) + + NVPA_Status NVPW_Adapter_GetDeviceIndex(NVPW_Adapter_GetDeviceIndex_Params* pParams); + + typedef struct NVPW_CounterData_GetNumRanges_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + const uint8_t* 
pCounterDataImage; + size_t numRanges; + } NVPW_CounterData_GetNumRanges_Params; +#define NVPW_CounterData_GetNumRanges_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_CounterData_GetNumRanges_Params, numRanges) + + NVPA_Status NVPW_CounterData_GetNumRanges(NVPW_CounterData_GetNumRanges_Params* pParams); + + typedef struct NVPW_CounterData_GetChipName_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + const uint8_t* pCounterDataImage; + /// [in] + size_t counterDataImageSize; + /// [out] + const char* pChipName; + } NVPW_CounterData_GetChipName_Params; +#define NVPW_CounterData_GetChipName_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_CounterData_GetChipName_Params, pChipName) + + NVPA_Status NVPW_CounterData_GetChipName(NVPW_CounterData_GetChipName_Params* pParams); + + typedef struct NVPW_Config_GetNumPasses_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + const uint8_t* pConfig; + /// [out] + size_t numPipelinedPasses; + /// [out] + size_t numIsolatedPasses; + } NVPW_Config_GetNumPasses_Params; +#define NVPW_Config_GetNumPasses_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Config_GetNumPasses_Params, numIsolatedPasses) + + /// Total num passes = numPipelinedPasses + numIsolatedPasses * numNestingLevels + NVPA_Status NVPW_Config_GetNumPasses(NVPW_Config_GetNumPasses_Params* pParams); + + typedef struct NVPW_Config_GetNumPasses_V2_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + const uint8_t* pConfig; + /// [out] + size_t numPasses; + } NVPW_Config_GetNumPasses_V2_Params; +#define NVPW_Config_GetNumPasses_V2_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Config_GetNumPasses_V2_Params, numPasses) + + /// Total num passes = numPasses * numNestingLevels + NVPA_Status NVPW_Config_GetNumPasses_V2(NVPW_Config_GetNumPasses_V2_Params* pParams); + +#define NVPW_API_SET_CUDA_PROFILER 0x18209d0775b2f89dULL + +#define NVPW_API_SET_D3D11_PROFILER 
0xca55c6738445db2bULL + +#define NVPW_API_SET_D3D12_PROFILER 0xc0c2d46dd7c7ad78ULL + +#define NVPW_API_SET_EGL_PROFILER 0x3c3747dae1f9565cULL + +#define NVPW_API_SET_GPU_PERIODICSAMPLER 0x9f4c2571fc0b2e8aULL + +#define NVPW_API_SET_METRICSCONTEXT 0x7c8579f6f2144beaULL + +#define NVPW_API_SET_METRICSEVALUATOR 0x0368a8768d811af9ULL + +#define NVPW_API_SET_METRICS_GA100_COMP 0x16b7d8c20d8b4915ULL + +#define NVPW_API_SET_METRICS_GA100_GRFX 0xc94eaabec04a94faULL + +#define NVPW_API_SET_METRICS_GA10X_COMP 0xb5d6391c2e299ab5ULL + +#define NVPW_API_SET_METRICS_GA10X_GRFX 0x6ebc121178b5ce0bULL + +#define NVPW_API_SET_METRICS_GV100_COMP 0x863705cc57919f72ULL + +#define NVPW_API_SET_METRICS_GV100_GRFX 0x9900da75d164fecfULL + +#define NVPW_API_SET_METRICS_GV11B_COMP 0xd3f79a859235848fULL + +#define NVPW_API_SET_METRICS_GV11B_GRFX 0xeb8e26220106e227ULL + +#define NVPW_API_SET_METRICS_TU10X_COMP 0x70f40be0afd35da8ULL + +#define NVPW_API_SET_METRICS_TU10X_GRFX 0xdf219cb838db6968ULL + +#define NVPW_API_SET_METRICS_TU11X_COMP 0xeb0069d7d0956678ULL + +#define NVPW_API_SET_METRICS_TU11X_GRFX 0x0977d9342bd62743ULL + +#define NVPW_API_SET_OPENGL_PROFILER 0xe4cd9ea40f2ee777ULL + +#define NVPW_API_SET_VULKAN_PROFILER 0x8c56b6a03d779689ULL + + typedef struct NVPW_QueryVersionNumber_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + uint64_t apiSet; + /// [out] + uint32_t major; + /// [out] + uint32_t minor; + /// [out] + uint32_t patch; + /// [out] + uint32_t relMajor; + /// [out] + uint32_t relMinor; + /// [out] + uint32_t relPatch; + } NVPW_QueryVersionNumber_Params; +#define NVPW_QueryVersionNumber_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_QueryVersionNumber_Params, relPatch) + + /// Query version number of an API set + NVPA_Status NVPW_QueryVersionNumber(NVPW_QueryVersionNumber_Params* pParams); + + typedef enum NVPW_Device_ClockStatus + { + /// clock status is unknown + NVPW_DEVICE_CLOCK_STATUS_UNKNOWN, + /// clocks are locked to rated tdp 
values + NVPW_DEVICE_CLOCK_STATUS_LOCKED_TO_RATED_TDP, + /// clocks are not locked and can boost above rated tdp + NVPW_DEVICE_CLOCK_STATUS_BOOST_ENABLED, + /// clocks are not locked and will not go above rated tdp + NVPW_DEVICE_CLOCK_STATUS_BOOST_DISABLED, + NVPW_DEVICE_CLOCK_STATUS__COUNT + } NVPW_Device_ClockStatus; + + typedef struct NVPW_Device_GetClockStatus_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + size_t deviceIndex; + /// [out] + NVPW_Device_ClockStatus clockStatus; + } NVPW_Device_GetClockStatus_Params; +#define NVPW_Device_GetClockStatus_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Device_GetClockStatus_Params, clockStatus) + + NVPA_Status NVPW_Device_GetClockStatus(NVPW_Device_GetClockStatus_Params* pParams); + + typedef enum NVPW_Device_ClockSetting + { + /// invalid op, specify valid clocks operation during profiling + NVPW_DEVICE_CLOCK_SETTING_INVALID, + /// default to driver/application config (normally unlocked and not boosted, but could be unlocked boosted, or + /// locked to rated TDP) + NVPW_DEVICE_CLOCK_SETTING_DEFAULT, + /// lock clocks at rated tdp base values + NVPW_DEVICE_CLOCK_SETTING_LOCK_TO_RATED_TDP, + NVPW_DEVICE_CLOCK_SETTING__COUNT + } NVPW_Device_ClockSetting; + + typedef struct NVPW_Device_SetClockSetting_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + size_t deviceIndex; + /// [in] + NVPW_Device_ClockSetting clockSetting; + } NVPW_Device_SetClockSetting_Params; +#define NVPW_Device_SetClockSetting_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Device_SetClockSetting_Params, clockSetting) + + NVPA_Status NVPW_Device_SetClockSetting(NVPW_Device_SetClockSetting_Params* pParams); + + typedef struct NVPW_CounterData_GetRangeDescriptions_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + const uint8_t* pCounterDataImage; + size_t rangeIndex; + /// [inout] Number of descriptions allocated in ppDescriptions + size_t numDescriptions; +
const char** ppDescriptions; + } NVPW_CounterData_GetRangeDescriptions_Params; +#define NVPW_CounterData_GetRangeDescriptions_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_CounterData_GetRangeDescriptions_Params, ppDescriptions) + + NVPA_Status NVPW_CounterData_GetRangeDescriptions(NVPW_CounterData_GetRangeDescriptions_Params* pParams); + + typedef struct NVPW_Profiler_CounterData_GetRangeDescriptions_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + const uint8_t* pCounterDataImage; + size_t rangeIndex; + /// [inout] Number of descriptions allocated in ppDescriptions + size_t numDescriptions; + const char** ppDescriptions; + } NVPW_Profiler_CounterData_GetRangeDescriptions_Params; +#define NVPW_Profiler_CounterData_GetRangeDescriptions_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_Profiler_CounterData_GetRangeDescriptions_Params, ppDescriptions) + + NVPA_Status NVPW_Profiler_CounterData_GetRangeDescriptions(NVPW_Profiler_CounterData_GetRangeDescriptions_Params* pParams); + +#ifndef NVPW_PERIODIC_SAMPLER_COUNTER_DATA_APPEND_MODE_DEFINED +#define NVPW_PERIODIC_SAMPLER_COUNTER_DATA_APPEND_MODE_DEFINED + typedef enum NVPW_PeriodicSampler_CounterData_AppendMode + { + NVPW_PERIODIC_SAMPLER_COUNTER_DATA_APPEND_MODE_LINEAR = 0, + NVPW_PERIODIC_SAMPLER_COUNTER_DATA_APPEND_MODE_CIRCULAR = 1, + NVPW_PERIODIC_SAMPLER_COUNTER_DATA_APPEND_MODE__COUNT + } NVPW_PeriodicSampler_CounterData_AppendMode; +#endif //NVPW_PERIODIC_SAMPLER_COUNTER_DATA_APPEND_MODE_DEFINED + + typedef struct NVPW_PeriodicSampler_CounterData_GetSampleTime_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + const uint8_t* pCounterDataImage; + /// [in] + size_t rangeIndex; + /// [out] + uint64_t timestampStart; + /// [out] + uint64_t timestampEnd; + } NVPW_PeriodicSampler_CounterData_GetSampleTime_Params; +#define NVPW_PeriodicSampler_CounterData_GetSampleTime_Params_STRUCT_SIZE 
NVPA_STRUCT_SIZE(NVPW_PeriodicSampler_CounterData_GetSampleTime_Params, timestampEnd) + + NVPA_Status NVPW_PeriodicSampler_CounterData_GetSampleTime(NVPW_PeriodicSampler_CounterData_GetSampleTime_Params* pParams); + + typedef struct NVPW_PeriodicSampler_CounterData_TrimInPlace_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + uint8_t* pCounterDataImage; + /// [in] + size_t counterDataImageSize; + /// [out] + size_t counterDataImageTrimmedSize; + } NVPW_PeriodicSampler_CounterData_TrimInPlace_Params; +#define NVPW_PeriodicSampler_CounterData_TrimInPlace_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_PeriodicSampler_CounterData_TrimInPlace_Params, counterDataImageTrimmedSize) + + NVPA_Status NVPW_PeriodicSampler_CounterData_TrimInPlace(NVPW_PeriodicSampler_CounterData_TrimInPlace_Params* pParams); + + typedef struct NVPW_PeriodicSampler_CounterData_GetInfo_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + const uint8_t* pCounterDataImage; + /// [in] + size_t counterDataImageSize; + /// [out] total number of ranges in the counter data + size_t numTotalRanges; + /// [out] if in "linear" mode, this API returns the number of "populated" ranges; if it's in "circular" mode, + /// then it returns the last "populated" range index + 1, when there is no such range, it returns 0. + size_t numPopulatedRanges; + /// [out] if in "linear" mode, this API returns the number of "completed" ranges; if it's in "circular" mode, + /// then it returns the last "completed" range index + 1, when there is no such range, it returns 0. + size_t numCompletedRanges; + } NVPW_PeriodicSampler_CounterData_GetInfo_Params; +#define NVPW_PeriodicSampler_CounterData_GetInfo_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_PeriodicSampler_CounterData_GetInfo_Params, numCompletedRanges) + + /// In periodic sampler, a range in counter data stores exactly one sample's data. 
For better performance, periodic + /// sampler may operate in an out-of-order fashion when populating sample data, i.e. it may not fully populate all + /// counters of a sample/range before starting to populate the next sample/range. As a result, we have two concepts + /// here, "populated" & "completed": a range is considered "populated" even if only partial counters have been + /// written; on the other hand, a range is only considered "completed" if all the collecting counters have been + /// written. + NVPA_Status NVPW_PeriodicSampler_CounterData_GetInfo(NVPW_PeriodicSampler_CounterData_GetInfo_Params* pParams); + + typedef struct NVPW_PeriodicSampler_CounterData_GetTriggerCount_Params + { + /// [in] + size_t structSize; + /// [in] assign to NULL + void* pPriv; + /// [in] + const uint8_t* pCounterDataImage; + /// [in] + size_t counterDataImageSize; + /// [in] + size_t rangeIndex; + /// [out] + uint32_t triggerCount; + } NVPW_PeriodicSampler_CounterData_GetTriggerCount_Params; +#define NVPW_PeriodicSampler_CounterData_GetTriggerCount_Params_STRUCT_SIZE NVPA_STRUCT_SIZE(NVPW_PeriodicSampler_CounterData_GetTriggerCount_Params, triggerCount) + + NVPA_Status NVPW_PeriodicSampler_CounterData_GetTriggerCount(NVPW_PeriodicSampler_CounterData_GetTriggerCount_Params* pParams); + + + typedef struct NVPW_TimestampReport + { + uint32_t payload; + uint8_t reserved0004[4]; + uint64_t timestamp; + } NVPW_TimestampReport; + + + + +#ifdef __cplusplus +} // extern "C" +#endif + +#if defined(__GNUC__) && defined(NVPA_SHARED_LIB) + #pragma GCC visibility pop +#endif + +#endif // NVPERF_TARGET_H diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv.h new file mode 100644 index 0000000000000000000000000000000000000000..b67d6529aa4e6f9a3605ce7b34499714fe4057aa --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv.h @@ -0,0 +1,671 @@ +/* + * Copyright 
2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 
12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* cudnn_adv : cuDNN's advanced and experimental features. + +*/ + +#if !defined(CUDNN_ADV_H_) +#define CUDNN_ADV_H_ + +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_ADV_MAJOR 9 +#define CUDNN_ADV_MINOR 1 +#define CUDNN_ADV_PATCH 0 + +#if (CUDNN_ADV_MAJOR != CUDNN_MAJOR) || (CUDNN_ADV_MINOR != CUDNN_MINOR) || (CUDNN_ADV_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN ADV INFER!!!
+#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* BASIC RNN API */ + +typedef enum { + CUDNN_RNN_ALGO_STANDARD = 0, + CUDNN_RNN_ALGO_PERSIST_STATIC = 1, + CUDNN_RNN_ALGO_PERSIST_DYNAMIC = 2, + CUDNN_RNN_ALGO_PERSIST_STATIC_SMALL_H = 3, + CUDNN_RNN_ALGO_COUNT = 4, +} cudnnRNNAlgo_t; + +typedef enum { + CUDNN_FWD_MODE_INFERENCE = 0, + CUDNN_FWD_MODE_TRAINING = 1, +} cudnnForwardMode_t; + +typedef enum { + CUDNN_RNN_RELU = 0, /* basic RNN cell type with ReLu activation */ + CUDNN_RNN_TANH = 1, /* basic RNN cell type with tanh activation */ + CUDNN_LSTM = 2, /* LSTM with optional recurrent projection and clipping */ + CUDNN_GRU = 3, /* Using h' = tanh(r * Uh(t-1) + Wx) and h = (1 - z) * h' + z * h(t-1); */ +} cudnnRNNMode_t; + +typedef enum { + CUDNN_RNN_NO_BIAS = 0, /* rnn cell formulas do not use biases */ + CUDNN_RNN_SINGLE_INP_BIAS = 1, /* rnn cell formulas use one input bias in input GEMM */ + CUDNN_RNN_DOUBLE_BIAS = 2, /* default, rnn cell formulas use two bias vectors */ + CUDNN_RNN_SINGLE_REC_BIAS = 3 /* rnn cell formulas use one recurrent bias in recurrent GEMM */ +} cudnnRNNBiasMode_t; + +typedef enum { + CUDNN_UNIDIRECTIONAL = 0, /* single direction network */ + CUDNN_BIDIRECTIONAL = 1, /* output concatenation at each layer */ +} cudnnDirectionMode_t; + +typedef enum { + CUDNN_LINEAR_INPUT = 0, /* adjustable weight matrix in first layer input GEMM */ + CUDNN_SKIP_INPUT = 1, /* fixed identity matrix in the first layer input GEMM */ +} cudnnRNNInputMode_t; + +typedef enum { + CUDNN_RNN_CLIP_NONE = 0, /* disables LSTM cell clipping */ + CUDNN_RNN_CLIP_MINMAX = 1, /* enables LSTM cell clipping */ +} cudnnRNNClipMode_t; + +typedef enum { + CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_UNPACKED = 0, /* padded, outer stride from one time-step to the next */ + CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_PACKED = 1, /* sequence length sorted and packed as in basic RNN api */ + CUDNN_RNN_DATA_LAYOUT_BATCH_MAJOR_UNPACKED = 2, /* padded, outer stride from one batch to the next */
+} cudnnRNNDataLayout_t; + +/* For auxFlags in cudnnSetRNNDescriptor_v8() */ +#define CUDNN_RNN_PADDED_IO_DISABLED 0 +#define CUDNN_RNN_PADDED_IO_ENABLED (1U << 0) + +struct cudnnRNNStruct; +typedef struct cudnnRNNStruct *cudnnRNNDescriptor_t; + +struct cudnnRNNDataStruct; +typedef struct cudnnRNNDataStruct *cudnnRNNDataDescriptor_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateRNNDescriptor(cudnnRNNDescriptor_t *rnnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyRNNDescriptor(cudnnRNNDescriptor_t rnnDesc); + +/* + * mathPrec in cudnnSetRNNDescriptor_v8() specifies compute precision. + * Compute precision is further modified by mathType that sets the + * preferred option for using NVIDIA Tensor Cores. dataType specify + * input/output data type and weight/bias type. + */ + +cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNAlgo_t algo, + cudnnRNNMode_t cellMode, + cudnnRNNBiasMode_t biasMode, + cudnnDirectionMode_t dirMode, + cudnnRNNInputMode_t inputMode, + cudnnDataType_t dataType, + cudnnDataType_t mathPrec, + cudnnMathType_t mathType, + int32_t inputSize, + int32_t hiddenSize, + int32_t projSize, + int32_t numLayers, + cudnnDropoutDescriptor_t dropoutDesc, + uint32_t auxFlags); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNAlgo_t *algo, + cudnnRNNMode_t *cellMode, + cudnnRNNBiasMode_t *biasMode, + cudnnDirectionMode_t *dirMode, + cudnnRNNInputMode_t *inputMode, + cudnnDataType_t *dataType, + cudnnDataType_t *mathPrec, + cudnnMathType_t *mathType, + int32_t *inputSize, + int32_t *hiddenSize, + int32_t *projSize, + int32_t *numLayers, + cudnnDropoutDescriptor_t *dropoutDesc, + uint32_t *auxFlags); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t clipMode, + cudnnNanPropagation_t clipNanOpt, + double lclip, + double rclip); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip_v9(cudnnRNNDescriptor_t rnnDesc, 
cudnnRNNClipMode_t clipMode, double lclip, double rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t *clipMode, + cudnnNanPropagation_t *clipNanOpt, + double *lclip, + double *rclip); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip_v9(cudnnRNNDescriptor_t rnnDesc, cudnnRNNClipMode_t *clipMode, double *lclip, double *rclip); + +cudnnStatus_t CUDNNWINAPI +cudnnBuildRNNDynamic(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, int miniBatch); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNTempSpaceSizes(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fwdMode, + cudnnRNNDataDescriptor_t xDesc, + size_t *workSpaceSize, + size_t *reserveSpaceSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightSpaceSize(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, size_t *weightSpaceSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightParams(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + int32_t pseudoLayer, + size_t weightSpaceSize, + const void *weightSpace, + int32_t linLayerID, + cudnnTensorDescriptor_t mDesc, + void **mAddr, + cudnnTensorDescriptor_t bDesc, + void **bAddr); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateRNNDataDescriptor(cudnnRNNDataDescriptor_t *rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t dataType, + cudnnRNNDataLayout_t layout, + int maxSeqLength, + int batchSize, + int vectorSize, + const int seqLengthArray[], /* length of each sequence in the batch */ + void *paddingFill); /* symbol for filling padding position in output */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t *dataType, + cudnnRNNDataLayout_t *layout, + int *maxSeqLength, + int *batchSize, + int *vectorSize, + int arrayLengthRequested, + int 
seqLengthArray[], + void *paddingFill); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNForward(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fwdMode, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t xDesc, + const void *x, + cudnnRNNDataDescriptor_t yDesc, + void *y, + cudnnTensorDescriptor_t hDesc, + const void *hx, + void *hy, + cudnnTensorDescriptor_t cDesc, + const void *cx, + void *cy, + size_t weightSpaceSize, + const void *weightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +/* Sequence data descriptor */ + +typedef enum { + CUDNN_SEQDATA_TIME_DIM = 0, /* index in time */ + CUDNN_SEQDATA_BATCH_DIM = 1, /* index in batch */ + CUDNN_SEQDATA_BEAM_DIM = 2, /* index in beam */ + CUDNN_SEQDATA_VECT_DIM = 3 /* index in vector */ +} cudnnSeqDataAxis_t; + +struct cudnnSeqDataStruct; +typedef struct cudnnSeqDataStruct *cudnnSeqDataDescriptor_t CUDNN_DEPRECATED; + +#define CUDNN_SEQDATA_DIM_COUNT 4 /* dimension count */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCreateSeqDataDescriptor(cudnnSeqDataDescriptor_t *seqDataDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnDestroySeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetSeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t dataType, + int nbDims, + const int dimA[], + const cudnnSeqDataAxis_t axes[], + size_t seqLengthArraySize, + const int seqLengthArray[], + void *paddingFill); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetSeqDataDescriptor(const cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t *dataType, + int *nbDims, + int nbDimsRequested, + int dimA[], + cudnnSeqDataAxis_t axes[], + size_t *seqLengthArraySize, + size_t seqLengthSizeRequested, + int seqLengthArray[], + void *paddingFill); + +/* Multihead Attention */ + +/* + * Multi-head attention options passed via 'attnMode' in cudnnSetAttnDescriptor(). 
+ * Use the bitwise OR operator to combine several settings listed below. Additional + * minor options can be added here w/o changing or introducing new API functions. + */ +#define CUDNN_ATTN_QUERYMAP_ALL_TO_ONE 0 /* multiple Q-s map to a single (K,V) set when beam size > 1 */ +#define CUDNN_ATTN_QUERYMAP_ONE_TO_ONE (1U << 0) /* multiple Q-s map to multiple (K,V) sets when beam size > 1 */ +#define CUDNN_ATTN_DISABLE_PROJ_BIASES 0 /* no biases in attention input and output projections */ +#define CUDNN_ATTN_ENABLE_PROJ_BIASES (1U << 1) /* use biases in attention input and output projections */ + +struct cudnnAttnStruct; +typedef struct cudnnAttnStruct *cudnnAttnDescriptor_t CUDNN_DEPRECATED; + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCreateAttnDescriptor(cudnnAttnDescriptor_t *attnDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnDestroyAttnDescriptor(cudnnAttnDescriptor_t attnDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned attnMode, + int nHeads, + double smScaler, + cudnnDataType_t dataType, + cudnnDataType_t computePrec, + cudnnMathType_t mathType, + cudnnDropoutDescriptor_t attnDropoutDesc, + cudnnDropoutDescriptor_t postDropoutDesc, + int qSize, + int kSize, + int vSize, + int qProjSize, + int kProjSize, + int vProjSize, + int oProjSize, + int qoMaxSeqLength, + int kvMaxSeqLength, + int maxBatchSize, + int maxBeamSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned *attnMode, + int *nHeads, + double *smScaler, + cudnnDataType_t *dataType, + cudnnDataType_t *computePrec, + cudnnMathType_t *mathType, + cudnnDropoutDescriptor_t *attnDropoutDesc, + cudnnDropoutDescriptor_t *postDropoutDesc, + int *qSize, + int *kSize, + int *vSize, + int *qProjSize, + int *kProjSize, + int *vProjSize, + int *oProjSize, + int *qoMaxSeqLength, + int *kvMaxSeqLength, + int *maxBatchSize, + int *maxBeamSize); + +CUDNN_DEPRECATED 
cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnBuffers(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + size_t *weightSizeInBytes, + size_t *workSpaceSizeInBytes, + size_t *reserveSpaceSizeInBytes); + +typedef enum { + CUDNN_MH_ATTN_Q_WEIGHTS = 0, /* input projection weights for 'queries' */ + CUDNN_MH_ATTN_K_WEIGHTS = 1, /* input projection weights for 'keys' */ + CUDNN_MH_ATTN_V_WEIGHTS = 2, /* input projection weights for 'values' */ + CUDNN_MH_ATTN_O_WEIGHTS = 3, /* output projection weights */ + CUDNN_MH_ATTN_Q_BIASES = 4, /* input projection bias tensor for 'queries' */ + CUDNN_MH_ATTN_K_BIASES = 5, /* input projection bias for 'keys' */ + CUDNN_MH_ATTN_V_BIASES = 6, /* input projection bias for 'values' */ + CUDNN_MH_ATTN_O_BIASES = 7, /* output projection biases */ +} cudnnMultiHeadAttnWeightKind_t; + +#define CUDNN_ATTN_WKIND_COUNT 8 /* Number of attention weight/bias tensors */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnWeights(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + cudnnMultiHeadAttnWeightKind_t wKind, + size_t weightSizeInBytes, + const void *weights, + cudnnTensorDescriptor_t wDesc, + void **wAddr); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnForward(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + int currIdx, + const int loWinIdx[], + const int hiWinIdx[], + const int devSeqLengthsQO[], + const int devSeqLengthsKV[], + const cudnnSeqDataDescriptor_t qDesc, + const void *queries, + const void *residuals, + const cudnnSeqDataDescriptor_t kDesc, + const void *keys, + const cudnnSeqDataDescriptor_t vDesc, + const void *values, + const cudnnSeqDataDescriptor_t oDesc, + void *out, + size_t weightSizeInBytes, + const void *weights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. 
Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_SUBLIBRARY_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnAdvVersionCheck(void); + +typedef enum { + CUDNN_WGRAD_MODE_ADD = 0, /* add partial gradients to wgrad output buffers */ + CUDNN_WGRAD_MODE_SET = 1, /* write partial gradients to wgrad output buffers */ +} cudnnWgradMode_t; + +cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardData_v8(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t yDesc, + const void *y, + const void *dy, + cudnnRNNDataDescriptor_t xDesc, + void *dx, + cudnnTensorDescriptor_t hDesc, + const void *hx, + const void *dhy, + void *dhx, + cudnnTensorDescriptor_t cDesc, + const void *cx, + const void *dcy, + void *dcx, + size_t weightSpaceSize, + const void *weightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardWeights_v8(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnWgradMode_t addGrad, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t xDesc, + const void *x, + cudnnTensorDescriptor_t hDesc, + const void *hx, + cudnnRNNDataDescriptor_t yDesc, + const void *y, + size_t weightSpaceSize, + void *dweightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnBackwardData(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + const int loWinIdx[], + const int hiWinIdx[], + const int devSeqLengthsDQDO[], + const int devSeqLengthsDKDV[], + const cudnnSeqDataDescriptor_t doDesc, + const void *dout, + const cudnnSeqDataDescriptor_t dqDesc, + void *dqueries, + const void *queries, + const cudnnSeqDataDescriptor_t dkDesc, + void *dkeys, + const void *keys, + const 
cudnnSeqDataDescriptor_t dvDesc, + void *dvalues, + const void *values, + size_t weightSizeInBytes, + const void *weights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnBackwardWeights(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + cudnnWgradMode_t addGrad, + const cudnnSeqDataDescriptor_t qDesc, + const void *queries, + const cudnnSeqDataDescriptor_t kDesc, + const void *keys, + const cudnnSeqDataDescriptor_t vDesc, + const void *values, + const cudnnSeqDataDescriptor_t doDesc, + const void *dout, + size_t weightSizeInBytes, + const void *weights, + void *dweights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +/* +* CTC (Connectionist Temporal Classification) loss descriptor create/destroy/set/get functions +*/ +/* Input normalization mode for loss function */ +typedef enum { + CUDNN_LOSS_NORMALIZATION_NONE = 0, + CUDNN_LOSS_NORMALIZATION_SOFTMAX = 1, +} cudnnLossNormalizationMode_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateCTCLossDescriptor(cudnnCTCLossDescriptor_t *ctcLossDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetCTCLossDescriptor(cudnnCTCLossDescriptor_t ctcLossDesc, cudnnDataType_t compType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetCTCLossDescriptorEx(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t compType, + cudnnLossNormalizationMode_t normMode, + cudnnNanPropagation_t gradMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetCTCLossDescriptor_v8(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t compType, + cudnnLossNormalizationMode_t normMode, + cudnnNanPropagation_t gradMode, + int maxLabelLength); + +cudnnStatus_t CUDNNWINAPI +cudnnSetCTCLossDescriptor_v9(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t compType, + cudnnLossNormalizationMode_t normMode, + cudnnCTCGradMode_t ctcGradMode,
+ int maxLabelLength); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossDescriptor(cudnnCTCLossDescriptor_t ctcLossDesc, cudnnDataType_t *compType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossDescriptorEx(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t *compType, + cudnnLossNormalizationMode_t *normMode, + cudnnNanPropagation_t *gradMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossDescriptor_v8(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t *compType, + cudnnLossNormalizationMode_t *normMode, + cudnnNanPropagation_t *gradMode, + int *maxLabelLength); + +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossDescriptor_v9(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t *compType, + cudnnLossNormalizationMode_t *normMode, + cudnnCTCGradMode_t *ctcGradMode, + int *maxLabelLength); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyCTCLossDescriptor(cudnnCTCLossDescriptor_t ctcLossDesc); + +/* return the ctc costs and gradients, given the probabilities and labels */ +cudnnStatus_t CUDNNWINAPI +cudnnCTCLoss( + cudnnHandle_t handle, + const cudnnTensorDescriptor_t + probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the timing steps, N is the + mini batch size, A is the alphabet size) */ + const void *probs, /* probabilities after softmax, in GPU memory */ + const int hostLabels[], /* labels, in CPU memory */ + const int hostLabelLengths[], /* the length of each label, in CPU memory */ + const int hostInputLengths[], /* the lengths of timing steps in each batch, in CPU memory */ + void *costs, /* the returned costs of CTC, in GPU memory */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the dimensions are T,N,A */ + void *gradients, /* the returned CTC gradients, in GPU memory, to compute costs only, set it to NULL */ + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + void *workspace, /* 
pointer to the workspace, in GPU memory */ + size_t workSpaceSizeInBytes); /* size of the workspace */ + +/* return the ctc costs and gradients, given the probabilities and labels */ +cudnnStatus_t CUDNNWINAPI +cudnnCTCLoss_v8( + cudnnHandle_t handle, + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + const cudnnTensorDescriptor_t + probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the timing steps, N is the + mini batch size, A is the alphabet size) */ + const void *probs, /* probabilities after softmax, in GPU memory */ + const int labels[], /* labels, in GPU memory */ + const int labelLengths[], /* the length of each label, in GPU memory */ + const int inputLengths[], /* the lengths of timing steps in each batch, in GPU memory */ + void *costs, /* the returned costs of CTC, in GPU memory */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the dimensions are T,N,A */ + void *gradients, /* the returned CTC gradients, in GPU memory, to compute costs only, set it to NULL */ + size_t workSpaceSizeInBytes, /* size of the workspace */ + void *workspace); /* pointer to the workspace, in GPU memory */ + +/* return the workspace size needed for ctc */ +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossWorkspaceSize( + cudnnHandle_t handle, + const cudnnTensorDescriptor_t probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the + timing steps, N is the mini batch size, A is the alphabet size) */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the + dimensions are T,N,A. 
To compute costs + only, set it to NULL */ + const int *labels, /* labels, in CPU memory */ + const int *labelLengths, /* the length of each label, in CPU memory */ + const int *inputLengths, /* the lengths of timing steps in each batch, in CPU memory */ + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + size_t *sizeInBytes); /* pointer to the returned workspace size */ + +/* return the workspace size needed for ctc */ +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossWorkspaceSize_v8( + cudnnHandle_t handle, + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + const cudnnTensorDescriptor_t probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the + timing steps, N is the mini batch size, A is the alphabet size) */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the + dimensions are T,N,A. To compute costs + only, set it to NULL */ + size_t *sizeInBytes); /* pointer to the returned workspace size */ + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_ADV_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer.h new file mode 100644 index 0000000000000000000000000000000000000000..1aa47bbc71d664de3af742f1c5223b149ee5d3f3 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer.h @@ -0,0 +1,658 @@ +/* + * Copyright 2017-2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
+ * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. 
Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* cudnn_adv_infer : cuDNN's advanced and experimental features. + +*/ + +#if !defined(CUDNN_ADV_INFER_H_) +#define CUDNN_ADV_INFER_H_ + +#include <cuda_runtime.h> +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops_infer.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_ADV_INFER_MAJOR 8 +#define CUDNN_ADV_INFER_MINOR 7 +#define CUDNN_ADV_INFER_PATCH 0 + +#if (CUDNN_ADV_INFER_MAJOR != CUDNN_MAJOR) || (CUDNN_ADV_INFER_MINOR != CUDNN_MINOR) || \ + (CUDNN_ADV_INFER_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN ADV INFER!!! +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* BASIC RNN API */ + +typedef enum { + CUDNN_FWD_MODE_INFERENCE = 0, + CUDNN_FWD_MODE_TRAINING = 1, +} cudnnForwardMode_t; + +typedef enum { + CUDNN_RNN_RELU = 0, /* basic RNN cell type with ReLu activation */ + CUDNN_RNN_TANH = 1, /* basic RNN cell type with tanh activation */ + CUDNN_LSTM = 2, /* LSTM with optional recurrent projection and clipping */ + CUDNN_GRU = 3, /* Using h' = tanh(r * Uh(t-1) + Wx) and h = (1 - z) * h' + z * h(t-1); */ +} cudnnRNNMode_t; + +typedef enum { + CUDNN_RNN_NO_BIAS = 0, /* rnn cell formulas do not use biases */ + CUDNN_RNN_SINGLE_INP_BIAS = 1, /* rnn cell formulas use one input bias in input GEMM */ + CUDNN_RNN_DOUBLE_BIAS = 2, /* default, rnn cell formulas use two bias vectors */ + CUDNN_RNN_SINGLE_REC_BIAS = 3 /* rnn cell formulas use one recurrent bias in recurrent GEMM */ +} cudnnRNNBiasMode_t; + +typedef enum { + CUDNN_UNIDIRECTIONAL = 0, /* single direction network */ + CUDNN_BIDIRECTIONAL = 1, /* output concatenation at each layer */ +} cudnnDirectionMode_t; + +typedef
enum { + CUDNN_LINEAR_INPUT = 0, /* adjustable weight matrix in first layer input GEMM */ + CUDNN_SKIP_INPUT = 1, /* fixed identity matrix in the first layer input GEMM */ +} cudnnRNNInputMode_t; + +typedef enum { + CUDNN_RNN_CLIP_NONE = 0, /* disables LSTM cell clipping */ + CUDNN_RNN_CLIP_MINMAX = 1, /* enables LSTM cell clipping */ +} cudnnRNNClipMode_t; + +typedef enum { + CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_UNPACKED = 0, /* padded, outer stride from one time-step to the next */ + CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_PACKED = 1, /* sequence length sorted and packed as in basic RNN api */ + CUDNN_RNN_DATA_LAYOUT_BATCH_MAJOR_UNPACKED = 2, /* padded, outer stride from one batch to the next */ +} cudnnRNNDataLayout_t; + +/* Legacy type for backward compatibility */ +typedef unsigned cudnnRNNPaddingMode_t; + +/* For auxFlags in cudnnSetRNNDescriptor_v8() and cudnnSetRNNPaddingMode() */ +#define CUDNN_RNN_PADDED_IO_DISABLED 0 +#define CUDNN_RNN_PADDED_IO_ENABLED (1U << 0) + +struct cudnnRNNStruct; +typedef struct cudnnRNNStruct *cudnnRNNDescriptor_t; + +struct cudnnPersistentRNNPlan; +typedef struct cudnnPersistentRNNPlan *cudnnPersistentRNNPlan_t; + +struct cudnnRNNDataStruct; +typedef struct cudnnRNNDataStruct *cudnnRNNDataDescriptor_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateRNNDescriptor(cudnnRNNDescriptor_t *rnnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyRNNDescriptor(cudnnRNNDescriptor_t rnnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNAlgo_t algo, + cudnnRNNMode_t cellMode, + cudnnRNNBiasMode_t biasMode, + cudnnDirectionMode_t dirMode, + cudnnRNNInputMode_t inputMode, + cudnnDataType_t dataType, + cudnnDataType_t mathPrec, + cudnnMathType_t mathType, + int32_t inputSize, + int32_t hiddenSize, + int32_t projSize, + int32_t numLayers, + cudnnDropoutDescriptor_t dropoutDesc, + uint32_t auxFlags); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNAlgo_t *algo, + 
cudnnRNNMode_t *cellMode, + cudnnRNNBiasMode_t *biasMode, + cudnnDirectionMode_t *dirMode, + cudnnRNNInputMode_t *inputMode, + cudnnDataType_t *dataType, + cudnnDataType_t *mathPrec, + cudnnMathType_t *mathType, + int32_t *inputSize, + int32_t *hiddenSize, + int32_t *projSize, + int32_t *numLayers, + cudnnDropoutDescriptor_t *dropoutDesc, + uint32_t *auxFlags); + +/* + * mathPrec in cudnnSetRNNDescriptor_v6() specifies compute precision + * compute precision is further modified by cudnnSetRNNMatrixMathType() + * dataType in cudnnGetRNNParamsSize() and wDesc specify weight storage + * dropout is between RNN layers, not between recurrent steps + */ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDescriptor_v6(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int hiddenSize, + const int numLayers, + cudnnDropoutDescriptor_t dropoutDesc, + cudnnRNNInputMode_t inputMode, + cudnnDirectionMode_t direction, + cudnnRNNMode_t cellMode, + cudnnRNNAlgo_t algo, + cudnnDataType_t mathPrec); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDescriptor_v6(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + int *hiddenSize, + int *numLayers, + cudnnDropoutDescriptor_t *dropoutDesc, + cudnnRNNInputMode_t *inputMode, + cudnnDirectionMode_t *direction, + cudnnRNNMode_t *cellMode, + cudnnRNNAlgo_t *algo, + cudnnDataType_t *mathPrec); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNMatrixMathType(cudnnRNNDescriptor_t rnnDesc, cudnnMathType_t mType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNMatrixMathType(cudnnRNNDescriptor_t rnnDesc, cudnnMathType_t *mType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNBiasMode(cudnnRNNDescriptor_t rnnDesc, cudnnRNNBiasMode_t biasMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNBiasMode(cudnnRNNDescriptor_t rnnDesc, cudnnRNNBiasMode_t *biasMode); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t clipMode, + 
cudnnNanPropagation_t clipNanOpt, + double lclip, + double rclip); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t *clipMode, + cudnnNanPropagation_t *clipNanOpt, + double *lclip, + double *rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t clipMode, + cudnnNanPropagation_t clipNanOpt, + double lclip, + double rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t *clipMode, + cudnnNanPropagation_t *clipNanOpt, + double *lclip, + double *rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNProjectionLayers(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int recProjSize, + const int outProjSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNProjectionLayers(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + int *recProjSize, + int *outProjSize); + +/* Expensive. Creates the plan for the specific settings. 
*/ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCreatePersistentRNNPlan(cudnnRNNDescriptor_t rnnDesc, + const int minibatch, + const cudnnDataType_t dataType, + cudnnPersistentRNNPlan_t *plan); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnDestroyPersistentRNNPlan(cudnnPersistentRNNPlan_t plan); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetPersistentRNNPlan(cudnnRNNDescriptor_t rnnDesc, cudnnPersistentRNNPlan_t plan); + +cudnnStatus_t CUDNNWINAPI +cudnnBuildRNNDynamic(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, int miniBatch); + +/* dataType in weight descriptors and input descriptors is used to describe storage */ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWorkspaceSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + size_t *sizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNTrainingReserveSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNTempSpaceSizes(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fMode, + cudnnRNNDataDescriptor_t xDesc, + size_t *workSpaceSize, + size_t *reserveSpaceSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNParamsSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes, + cudnnDataType_t dataType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightSpaceSize(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, size_t *weightSpaceSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNLinLayerMatrixParams(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int pseudoLayer, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const int linLayerID, + cudnnFilterDescriptor_t 
linLayerMatDesc, + void **linLayerMat); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNLinLayerBiasParams(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int pseudoLayer, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const int linLayerID, + cudnnFilterDescriptor_t linLayerBiasDesc, + void **linLayerBias); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightParams(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + int32_t pseudoLayer, + size_t weightSpaceSize, + const void *weightSpace, + int32_t linLayerID, + cudnnTensorDescriptor_t mDesc, + void **mAddr, + cudnnTensorDescriptor_t bDesc, + void **bAddr); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardInference(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + void *workSpace, + size_t workSpaceSizeInBytes); + +/* RNN EX API */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNPaddingMode(cudnnRNNDescriptor_t rnnDesc, unsigned paddingMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNPaddingMode(cudnnRNNDescriptor_t rnnDesc, unsigned *paddingMode); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateRNNDataDescriptor(cudnnRNNDataDescriptor_t *rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t dataType, + cudnnRNNDataLayout_t layout, + int maxSeqLength, + int batchSize, + int vectorSize, + const int seqLengthArray[], /* length of 
each sequence in the batch */ + void *paddingFill); /* symbol for filling padding position in output */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t *dataType, + cudnnRNNDataLayout_t *layout, + int *maxSeqLength, + int *batchSize, + int *vectorSize, + int arrayLengthRequested, + int seqLengthArray[], + void *paddingFill); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardInferenceEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnRNNDataDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnRNNDataDescriptor_t yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const cudnnRNNDataDescriptor_t kDesc, /* reserved, should pass NULL */ + const void *keys, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t cDesc, /* reserved, should pass NULL */ + void *cAttn, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t iDesc, /* reserved, should pass NULL */ + void *iAttn, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t qDesc, /* reserved, should pass NULL */ + void *queries, /* reserved, should pass NULL */ + void *workSpace, + size_t workSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNForward(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fwdMode, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t xDesc, + const void *x, + cudnnRNNDataDescriptor_t yDesc, + void *y, + cudnnTensorDescriptor_t hDesc, + const void *hx, + void *hy, + cudnnTensorDescriptor_t cDesc, + const void *cx, + void *cy, + size_t weightSpaceSize, + const void *weightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +/* RNN FIND 
API */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNAlgorithmDescriptor(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, cudnnAlgorithmDescriptor_t algoDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNForwardInferenceAlgorithmMaxCount(cudnnHandle_t handle, const cudnnRNNDescriptor_t rnnDesc, int *count); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnFindRNNForwardInferenceAlgorithmEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const float findIntensity, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnAlgorithmPerformance_t *perfResults, + void *workspace, + size_t workSpaceSizeInBytes); + +/* Sequence data descriptor */ + +typedef enum { + CUDNN_SEQDATA_TIME_DIM = 0, /* index in time */ + CUDNN_SEQDATA_BATCH_DIM = 1, /* index in batch */ + CUDNN_SEQDATA_BEAM_DIM = 2, /* index in beam */ + CUDNN_SEQDATA_VECT_DIM = 3 /* index in vector */ +} cudnnSeqDataAxis_t; + +struct cudnnSeqDataStruct; +typedef struct cudnnSeqDataStruct *cudnnSeqDataDescriptor_t; + +#define CUDNN_SEQDATA_DIM_COUNT 4 /* dimension count */ + +cudnnStatus_t CUDNNWINAPI +cudnnCreateSeqDataDescriptor(cudnnSeqDataDescriptor_t *seqDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroySeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetSeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t dataType, + int nbDims, + const int dimA[], + const cudnnSeqDataAxis_t axes[], + size_t seqLengthArraySize, + const int seqLengthArray[], + void *paddingFill); + +cudnnStatus_t 
CUDNNWINAPI +cudnnGetSeqDataDescriptor(const cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t *dataType, + int *nbDims, + int nbDimsRequested, + int dimA[], + cudnnSeqDataAxis_t axes[], + size_t *seqLengthArraySize, + size_t seqLengthSizeRequested, + int seqLengthArray[], + void *paddingFill); + +/* Multihead Attention */ + +/* Legacy type for backward compatibility */ +typedef unsigned cudnnAttnQueryMap_t; + +/* + * Multi-head attention options passed via 'attnMode' in cudnnSetAttnDescriptor(). + * Use the bitwise OR operator to combine several settings listed below. Additional + * minor options can be added here w/o changing or introducing new API functions. + */ +#define CUDNN_ATTN_QUERYMAP_ALL_TO_ONE 0 /* multiple Q-s map to a single (K,V) set when beam size > 1 */ +#define CUDNN_ATTN_QUERYMAP_ONE_TO_ONE (1U << 0) /* multiple Q-s map to multiple (K,V) sets when beam size > 1 */ +#define CUDNN_ATTN_DISABLE_PROJ_BIASES 0 /* no biases in attention input and output projections */ +#define CUDNN_ATTN_ENABLE_PROJ_BIASES (1U << 1) /* use biases in attention input and output projections */ + +struct cudnnAttnStruct; +typedef struct cudnnAttnStruct *cudnnAttnDescriptor_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateAttnDescriptor(cudnnAttnDescriptor_t *attnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyAttnDescriptor(cudnnAttnDescriptor_t attnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned attnMode, + int nHeads, + double smScaler, + cudnnDataType_t dataType, + cudnnDataType_t computePrec, + cudnnMathType_t mathType, + cudnnDropoutDescriptor_t attnDropoutDesc, + cudnnDropoutDescriptor_t postDropoutDesc, + int qSize, + int kSize, + int vSize, + int qProjSize, + int kProjSize, + int vProjSize, + int oProjSize, + int qoMaxSeqLength, + int kvMaxSeqLength, + int maxBatchSize, + int maxBeamSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned *attnMode, + int *nHeads, 
+ double *smScaler, + cudnnDataType_t *dataType, + cudnnDataType_t *computePrec, + cudnnMathType_t *mathType, + cudnnDropoutDescriptor_t *attnDropoutDesc, + cudnnDropoutDescriptor_t *postDropoutDesc, + int *qSize, + int *kSize, + int *vSize, + int *qProjSize, + int *kProjSize, + int *vProjSize, + int *oProjSize, + int *qoMaxSeqLength, + int *kvMaxSeqLength, + int *maxBatchSize, + int *maxBeamSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnBuffers(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + size_t *weightSizeInBytes, + size_t *workSpaceSizeInBytes, + size_t *reserveSpaceSizeInBytes); + +typedef enum { + CUDNN_MH_ATTN_Q_WEIGHTS = 0, /* input projection weights for 'queries' */ + CUDNN_MH_ATTN_K_WEIGHTS = 1, /* input projection weights for 'keys' */ + CUDNN_MH_ATTN_V_WEIGHTS = 2, /* input projection weights for 'values' */ + CUDNN_MH_ATTN_O_WEIGHTS = 3, /* output projection weights */ + CUDNN_MH_ATTN_Q_BIASES = 4, /* input projection bias tensor for 'queries' */ + CUDNN_MH_ATTN_K_BIASES = 5, /* input projection bias for 'keys' */ + CUDNN_MH_ATTN_V_BIASES = 6, /* input projection bias for 'values' */ + CUDNN_MH_ATTN_O_BIASES = 7, /* output projection biases */ +} cudnnMultiHeadAttnWeightKind_t; + +#define CUDNN_ATTN_WKIND_COUNT 8 /* Number of attention weight/bias tensors */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnWeights(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + cudnnMultiHeadAttnWeightKind_t wKind, + size_t weightSizeInBytes, + const void *weights, + cudnnTensorDescriptor_t wDesc, + void **wAddr); + +cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnForward(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + int currIdx, + const int loWinIdx[], + const int hiWinIdx[], + const int devSeqLengthsQO[], + const int devSeqLengthsKV[], + const cudnnSeqDataDescriptor_t qDesc, + const void *queries, + const void *residuals, + const cudnnSeqDataDescriptor_t kDesc, + const void *keys, + const 
cudnnSeqDataDescriptor_t vDesc, + const void *values, + const cudnnSeqDataDescriptor_t oDesc, + void *out, + size_t weightSizeInBytes, + const void *weights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnAdvInferVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_ADV_INFER_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer_v8.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer_v8.h new file mode 100644 index 0000000000000000000000000000000000000000..1aa47bbc71d664de3af742f1c5223b149ee5d3f3 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer_v8.h @@ -0,0 +1,658 @@ +/* + * Copyright 2017-2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. 
+ * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* cudnn_adv_infer : cuDNN's advanced and experimental features. 
+ +*/ + +#if !defined(CUDNN_ADV_INFER_H_) +#define CUDNN_ADV_INFER_H_ + +#include <cuda_runtime.h> +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops_infer.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_ADV_INFER_MAJOR 8 +#define CUDNN_ADV_INFER_MINOR 7 +#define CUDNN_ADV_INFER_PATCH 0 + +#if (CUDNN_ADV_INFER_MAJOR != CUDNN_MAJOR) || (CUDNN_ADV_INFER_MINOR != CUDNN_MINOR) || \ + (CUDNN_ADV_INFER_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN ADV INFER!!! +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* BASIC RNN API */ + +typedef enum { + CUDNN_FWD_MODE_INFERENCE = 0, + CUDNN_FWD_MODE_TRAINING = 1, +} cudnnForwardMode_t; + +typedef enum { + CUDNN_RNN_RELU = 0, /* basic RNN cell type with ReLu activation */ + CUDNN_RNN_TANH = 1, /* basic RNN cell type with tanh activation */ + CUDNN_LSTM = 2, /* LSTM with optional recurrent projection and clipping */ + CUDNN_GRU = 3, /* Using h' = tanh(r * Uh(t-1) + Wx) and h = (1 - z) * h' + z * h(t-1); */ +} cudnnRNNMode_t; + +typedef enum { + CUDNN_RNN_NO_BIAS = 0, /* rnn cell formulas do not use biases */ + CUDNN_RNN_SINGLE_INP_BIAS = 1, /* rnn cell formulas use one input bias in input GEMM */ + CUDNN_RNN_DOUBLE_BIAS = 2, /* default, rnn cell formulas use two bias vectors */ + CUDNN_RNN_SINGLE_REC_BIAS = 3 /* rnn cell formulas use one recurrent bias in recurrent GEMM */ +} cudnnRNNBiasMode_t; + +typedef enum { + CUDNN_UNIDIRECTIONAL = 0, /* single direction network */ + CUDNN_BIDIRECTIONAL = 1, /* output concatination at each layer */ +} cudnnDirectionMode_t; + +typedef enum { + CUDNN_LINEAR_INPUT = 0, /* adjustable weight matrix in first layer input GEMM */ + CUDNN_SKIP_INPUT = 1, /* fixed identity matrix in the first layer input GEMM */ +} cudnnRNNInputMode_t; + +typedef enum { + CUDNN_RNN_CLIP_NONE = 0, /* disables LSTM cell clipping */ + CUDNN_RNN_CLIP_MINMAX = 1, /* enables LSTM cell clipping */ +} cudnnRNNClipMode_t; + +typedef enum { + 
CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_UNPACKED = 0, /* padded, outer stride from one time-step to the next */ + CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_PACKED = 1, /* sequence length sorted and packed as in basic RNN api */ + CUDNN_RNN_DATA_LAYOUT_BATCH_MAJOR_UNPACKED = 2, /* padded, outer stride from one batch to the next */ +} cudnnRNNDataLayout_t; + +/* Legacy type for backward compatibility */ +typedef unsigned cudnnRNNPaddingMode_t; + +/* For auxFlags in cudnnSetRNNDescriptor_v8() and cudnnSetRNNPaddingMode() */ +#define CUDNN_RNN_PADDED_IO_DISABLED 0 +#define CUDNN_RNN_PADDED_IO_ENABLED (1U << 0) + +struct cudnnRNNStruct; +typedef struct cudnnRNNStruct *cudnnRNNDescriptor_t; + +struct cudnnPersistentRNNPlan; +typedef struct cudnnPersistentRNNPlan *cudnnPersistentRNNPlan_t; + +struct cudnnRNNDataStruct; +typedef struct cudnnRNNDataStruct *cudnnRNNDataDescriptor_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateRNNDescriptor(cudnnRNNDescriptor_t *rnnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyRNNDescriptor(cudnnRNNDescriptor_t rnnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNAlgo_t algo, + cudnnRNNMode_t cellMode, + cudnnRNNBiasMode_t biasMode, + cudnnDirectionMode_t dirMode, + cudnnRNNInputMode_t inputMode, + cudnnDataType_t dataType, + cudnnDataType_t mathPrec, + cudnnMathType_t mathType, + int32_t inputSize, + int32_t hiddenSize, + int32_t projSize, + int32_t numLayers, + cudnnDropoutDescriptor_t dropoutDesc, + uint32_t auxFlags); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNAlgo_t *algo, + cudnnRNNMode_t *cellMode, + cudnnRNNBiasMode_t *biasMode, + cudnnDirectionMode_t *dirMode, + cudnnRNNInputMode_t *inputMode, + cudnnDataType_t *dataType, + cudnnDataType_t *mathPrec, + cudnnMathType_t *mathType, + int32_t *inputSize, + int32_t *hiddenSize, + int32_t *projSize, + int32_t *numLayers, + cudnnDropoutDescriptor_t *dropoutDesc, + uint32_t *auxFlags); + +/* + * mathPrec in 
cudnnSetRNNDescriptor_v6() specifies compute precision + * compute precision is further modified by cudnnSetRNNMatrixMathType() + * dataType in cudnnGetRNNParamsSize() and wDesc specify weight storage + * dropout is between RNN layers, not between recurrent steps + */ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDescriptor_v6(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int hiddenSize, + const int numLayers, + cudnnDropoutDescriptor_t dropoutDesc, + cudnnRNNInputMode_t inputMode, + cudnnDirectionMode_t direction, + cudnnRNNMode_t cellMode, + cudnnRNNAlgo_t algo, + cudnnDataType_t mathPrec); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDescriptor_v6(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + int *hiddenSize, + int *numLayers, + cudnnDropoutDescriptor_t *dropoutDesc, + cudnnRNNInputMode_t *inputMode, + cudnnDirectionMode_t *direction, + cudnnRNNMode_t *cellMode, + cudnnRNNAlgo_t *algo, + cudnnDataType_t *mathPrec); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNMatrixMathType(cudnnRNNDescriptor_t rnnDesc, cudnnMathType_t mType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNMatrixMathType(cudnnRNNDescriptor_t rnnDesc, cudnnMathType_t *mType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNBiasMode(cudnnRNNDescriptor_t rnnDesc, cudnnRNNBiasMode_t biasMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNBiasMode(cudnnRNNDescriptor_t rnnDesc, cudnnRNNBiasMode_t *biasMode); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t clipMode, + cudnnNanPropagation_t clipNanOpt, + double lclip, + double rclip); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t *clipMode, + cudnnNanPropagation_t *clipNanOpt, + double *lclip, + double *rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t 
clipMode, + cudnnNanPropagation_t clipNanOpt, + double lclip, + double rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t *clipMode, + cudnnNanPropagation_t *clipNanOpt, + double *lclip, + double *rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNProjectionLayers(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int recProjSize, + const int outProjSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNProjectionLayers(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + int *recProjSize, + int *outProjSize); + +/* Expensive. Creates the plan for the specific settings. */ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCreatePersistentRNNPlan(cudnnRNNDescriptor_t rnnDesc, + const int minibatch, + const cudnnDataType_t dataType, + cudnnPersistentRNNPlan_t *plan); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnDestroyPersistentRNNPlan(cudnnPersistentRNNPlan_t plan); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetPersistentRNNPlan(cudnnRNNDescriptor_t rnnDesc, cudnnPersistentRNNPlan_t plan); + +cudnnStatus_t CUDNNWINAPI +cudnnBuildRNNDynamic(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, int miniBatch); + +/* dataType in weight descriptors and input descriptors is used to describe storage */ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWorkspaceSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + size_t *sizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNTrainingReserveSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNTempSpaceSizes(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fMode, + cudnnRNNDataDescriptor_t xDesc, + size_t 
*workSpaceSize, + size_t *reserveSpaceSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNParamsSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes, + cudnnDataType_t dataType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightSpaceSize(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, size_t *weightSpaceSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNLinLayerMatrixParams(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int pseudoLayer, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const int linLayerID, + cudnnFilterDescriptor_t linLayerMatDesc, + void **linLayerMat); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNLinLayerBiasParams(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int pseudoLayer, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const int linLayerID, + cudnnFilterDescriptor_t linLayerBiasDesc, + void **linLayerBias); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightParams(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + int32_t pseudoLayer, + size_t weightSpaceSize, + const void *weightSpace, + int32_t linLayerID, + cudnnTensorDescriptor_t mDesc, + void **mAddr, + cudnnTensorDescriptor_t bDesc, + void **bAddr); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardInference(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + void *workSpace, + size_t workSpaceSizeInBytes); + +/* 
RNN EX API */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNPaddingMode(cudnnRNNDescriptor_t rnnDesc, unsigned paddingMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNPaddingMode(cudnnRNNDescriptor_t rnnDesc, unsigned *paddingMode); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateRNNDataDescriptor(cudnnRNNDataDescriptor_t *rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t dataType, + cudnnRNNDataLayout_t layout, + int maxSeqLength, + int batchSize, + int vectorSize, + const int seqLengthArray[], /* length of each sequence in the batch */ + void *paddingFill); /* symbol for filling padding position in output */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t *dataType, + cudnnRNNDataLayout_t *layout, + int *maxSeqLength, + int *batchSize, + int *vectorSize, + int arrayLengthRequested, + int seqLengthArray[], + void *paddingFill); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardInferenceEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnRNNDataDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnRNNDataDescriptor_t yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const cudnnRNNDataDescriptor_t kDesc, /* reserved, should pass NULL */ + const void *keys, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t cDesc, /* reserved, should pass NULL */ + void *cAttn, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t iDesc, /* reserved, should pass NULL */ + void *iAttn, /* reserved, should pass NULL */ + const 
cudnnRNNDataDescriptor_t qDesc, /* reserved, should pass NULL */ + void *queries, /* reserved, should pass NULL */ + void *workSpace, + size_t workSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNForward(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fwdMode, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t xDesc, + const void *x, + cudnnRNNDataDescriptor_t yDesc, + void *y, + cudnnTensorDescriptor_t hDesc, + const void *hx, + void *hy, + cudnnTensorDescriptor_t cDesc, + const void *cx, + void *cy, + size_t weightSpaceSize, + const void *weightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +/* RNN FIND API */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNAlgorithmDescriptor(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, cudnnAlgorithmDescriptor_t algoDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNForwardInferenceAlgorithmMaxCount(cudnnHandle_t handle, const cudnnRNNDescriptor_t rnnDesc, int *count); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnFindRNNForwardInferenceAlgorithmEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const float findIntensity, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnAlgorithmPerformance_t *perfResults, + void *workspace, + size_t workSpaceSizeInBytes); + +/* Sequence data descriptor */ + +typedef enum { + CUDNN_SEQDATA_TIME_DIM = 0, /* index in time */ + CUDNN_SEQDATA_BATCH_DIM = 1, /* index in batch */ + CUDNN_SEQDATA_BEAM_DIM = 2, /* index in beam */ + 
CUDNN_SEQDATA_VECT_DIM = 3 /* index in vector */ +} cudnnSeqDataAxis_t; + +struct cudnnSeqDataStruct; +typedef struct cudnnSeqDataStruct *cudnnSeqDataDescriptor_t; + +#define CUDNN_SEQDATA_DIM_COUNT 4 /* dimension count */ + +cudnnStatus_t CUDNNWINAPI +cudnnCreateSeqDataDescriptor(cudnnSeqDataDescriptor_t *seqDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroySeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetSeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t dataType, + int nbDims, + const int dimA[], + const cudnnSeqDataAxis_t axes[], + size_t seqLengthArraySize, + const int seqLengthArray[], + void *paddingFill); + +cudnnStatus_t CUDNNWINAPI +cudnnGetSeqDataDescriptor(const cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t *dataType, + int *nbDims, + int nbDimsRequested, + int dimA[], + cudnnSeqDataAxis_t axes[], + size_t *seqLengthArraySize, + size_t seqLengthSizeRequested, + int seqLengthArray[], + void *paddingFill); + +/* Multihead Attention */ + +/* Legacy type for backward compatibility */ +typedef unsigned cudnnAttnQueryMap_t; + +/* + * Multi-head attention options passed via 'attnMode' in cudnnSetAttnDescriptor(). + * Use the bitwise OR operator to combine several settings listed below. Additional + * minor options can be added here w/o changing or introducing new API functions. 
+ */ +#define CUDNN_ATTN_QUERYMAP_ALL_TO_ONE 0 /* multiple Q-s map to a single (K,V) set when beam size > 1 */ +#define CUDNN_ATTN_QUERYMAP_ONE_TO_ONE (1U << 0) /* multiple Q-s map to multiple (K,V) sets when beam size > 1 */ +#define CUDNN_ATTN_DISABLE_PROJ_BIASES 0 /* no biases in attention input and output projections */ +#define CUDNN_ATTN_ENABLE_PROJ_BIASES (1U << 1) /* use biases in attention input and output projections */ + +struct cudnnAttnStruct; +typedef struct cudnnAttnStruct *cudnnAttnDescriptor_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateAttnDescriptor(cudnnAttnDescriptor_t *attnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyAttnDescriptor(cudnnAttnDescriptor_t attnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned attnMode, + int nHeads, + double smScaler, + cudnnDataType_t dataType, + cudnnDataType_t computePrec, + cudnnMathType_t mathType, + cudnnDropoutDescriptor_t attnDropoutDesc, + cudnnDropoutDescriptor_t postDropoutDesc, + int qSize, + int kSize, + int vSize, + int qProjSize, + int kProjSize, + int vProjSize, + int oProjSize, + int qoMaxSeqLength, + int kvMaxSeqLength, + int maxBatchSize, + int maxBeamSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned *attnMode, + int *nHeads, + double *smScaler, + cudnnDataType_t *dataType, + cudnnDataType_t *computePrec, + cudnnMathType_t *mathType, + cudnnDropoutDescriptor_t *attnDropoutDesc, + cudnnDropoutDescriptor_t *postDropoutDesc, + int *qSize, + int *kSize, + int *vSize, + int *qProjSize, + int *kProjSize, + int *vProjSize, + int *oProjSize, + int *qoMaxSeqLength, + int *kvMaxSeqLength, + int *maxBatchSize, + int *maxBeamSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnBuffers(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + size_t *weightSizeInBytes, + size_t *workSpaceSizeInBytes, + size_t *reserveSpaceSizeInBytes); + +typedef enum { + CUDNN_MH_ATTN_Q_WEIGHTS = 0, /* 
input projection weights for 'queries' */ + CUDNN_MH_ATTN_K_WEIGHTS = 1, /* input projection weights for 'keys' */ + CUDNN_MH_ATTN_V_WEIGHTS = 2, /* input projection weights for 'values' */ + CUDNN_MH_ATTN_O_WEIGHTS = 3, /* output projection weights */ + CUDNN_MH_ATTN_Q_BIASES = 4, /* input projection bias tensor for 'queries' */ + CUDNN_MH_ATTN_K_BIASES = 5, /* input projection bias for 'keys' */ + CUDNN_MH_ATTN_V_BIASES = 6, /* input projection bias for 'values' */ + CUDNN_MH_ATTN_O_BIASES = 7, /* output projection biases */ +} cudnnMultiHeadAttnWeightKind_t; + +#define CUDNN_ATTN_WKIND_COUNT 8 /* Number of attention weight/bias tensors */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnWeights(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + cudnnMultiHeadAttnWeightKind_t wKind, + size_t weightSizeInBytes, + const void *weights, + cudnnTensorDescriptor_t wDesc, + void **wAddr); + +cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnForward(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + int currIdx, + const int loWinIdx[], + const int hiWinIdx[], + const int devSeqLengthsQO[], + const int devSeqLengthsKV[], + const cudnnSeqDataDescriptor_t qDesc, + const void *queries, + const void *residuals, + const cudnnSeqDataDescriptor_t kDesc, + const void *keys, + const cudnnSeqDataDescriptor_t vDesc, + const void *values, + const cudnnSeqDataDescriptor_t oDesc, + void *out, + size_t weightSizeInBytes, + const void *weights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. 
+ */ +cudnnStatus_t CUDNNWINAPI +cudnnAdvInferVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_ADV_INFER_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_train.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_train.h new file mode 100644 index 0000000000000000000000000000000000000000..2f1d6c07ffbce6289c4dba773ee73a52bcc99059 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_train.h @@ -0,0 +1,540 @@ +/* + * Copyright 2017-2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* cudnn_adv_train : cuDNN's advanced and experimental features. + +*/ + +#if !defined(CUDNN_ADV_TRAIN_H_) +#define CUDNN_ADV_TRAIN_H_ + +#include <cuda_runtime.h> +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops_infer.h" +#include "cudnn_ops_train.h" +#include "cudnn_adv_infer.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_ADV_TRAIN_MAJOR 8 +#define CUDNN_ADV_TRAIN_MINOR 7 +#define CUDNN_ADV_TRAIN_PATCH 0 + +#if (CUDNN_ADV_TRAIN_MAJOR != CUDNN_MAJOR) || (CUDNN_ADV_TRAIN_MINOR != CUDNN_MINOR) || \ + (CUDNN_ADV_TRAIN_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN ADV TRAIN!!! 
+#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +typedef enum { + CUDNN_WGRAD_MODE_ADD = 0, /* add partial gradients to wgrad output buffers */ + CUDNN_WGRAD_MODE_SET = 1, /* write partial gradients to wgrad output buffers */ +} cudnnWgradMode_t; + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardTraining(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardData(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *yDesc, + const void *y, + const cudnnTensorDescriptor_t *dyDesc, + const void *dy, + const cudnnTensorDescriptor_t dhyDesc, + const void *dhy, + const cudnnTensorDescriptor_t dcyDesc, + const void *dcy, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnTensorDescriptor_t *dxDesc, + void *dx, + const cudnnTensorDescriptor_t dhxDesc, + void *dhx, + const cudnnTensorDescriptor_t dcxDesc, + void *dcx, + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardData_v8(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t yDesc, + const void *y, + const void *dy, + cudnnRNNDataDescriptor_t xDesc, + void *dx, + 
cudnnTensorDescriptor_t hDesc, + const void *hx, + const void *dhy, + void *dhx, + cudnnTensorDescriptor_t cDesc, + const void *cx, + const void *dcy, + void *dcx, + size_t weightSpaceSize, + const void *weightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardWeights(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t *yDesc, + const void *y, + const void *workSpace, + size_t workSpaceSizeInBytes, + const cudnnFilterDescriptor_t dwDesc, + void *dw, + const void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardWeights_v8(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnWgradMode_t addGrad, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t xDesc, + const void *x, + cudnnTensorDescriptor_t hDesc, + const void *hx, + cudnnRNNDataDescriptor_t yDesc, + const void *y, + size_t weightSpaceSize, + void *dweightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +/* RNN EX API */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardTrainingEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnRNNDataDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnRNNDataDescriptor_t yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const cudnnRNNDataDescriptor_t kDesc, /* reserved, should pass NULL */ + const void *keys, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t cDesc, /* reserved, should 
pass NULL */ + void *cAttn, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t iDesc, /* reserved, should pass NULL */ + void *iAttn, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t qDesc, /* reserved, should pass NULL */ + void *queries, /* reserved, should pass NULL */ + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardDataEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnRNNDataDescriptor_t yDesc, + const void *y, + const cudnnRNNDataDescriptor_t dyDesc, + const void *dy, + const cudnnRNNDataDescriptor_t dcDesc, /* reserved, should pass NULL */ + const void *dcAttn, /* reserved, should pass NULL */ + const cudnnTensorDescriptor_t dhyDesc, + const void *dhy, + const cudnnTensorDescriptor_t dcyDesc, + const void *dcy, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnRNNDataDescriptor_t dxDesc, + void *dx, + const cudnnTensorDescriptor_t dhxDesc, + void *dhx, + const cudnnTensorDescriptor_t dcxDesc, + void *dcx, + const cudnnRNNDataDescriptor_t dkDesc, /* reserved, should pass NULL */ + void *dkeys, /* reserved, should pass NULL */ + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNBackwardWeightsEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnRNNDataDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnRNNDataDescriptor_t yDesc, + const void *y, + void *workSpace, + size_t workSpaceSizeInBytes, + const cudnnFilterDescriptor_t dwDesc, + void *dw, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* RNN FIND API */ + +CUDNN_DEPRECATED cudnnStatus_t 
CUDNNWINAPI +cudnnGetRNNForwardTrainingAlgorithmMaxCount(cudnnHandle_t handle, const cudnnRNNDescriptor_t rnnDesc, int *count); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnFindRNNForwardTrainingAlgorithmEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const float findIntensity, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnAlgorithmPerformance_t *perfResults, + void *workspace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNBackwardDataAlgorithmMaxCount(cudnnHandle_t handle, const cudnnRNNDescriptor_t rnnDesc, int *count); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnFindRNNBackwardDataAlgorithmEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *yDesc, + const void *y, + const cudnnTensorDescriptor_t *dyDesc, + const void *dy, + const cudnnTensorDescriptor_t dhyDesc, + const void *dhy, + const cudnnTensorDescriptor_t dcyDesc, + const void *dcy, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnTensorDescriptor_t *dxDesc, + void *dx, + const cudnnTensorDescriptor_t dhxDesc, + void *dhx, + const cudnnTensorDescriptor_t dcxDesc, + void *dcx, + const float findIntensity, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnAlgorithmPerformance_t *perfResults, + void *workspace, + size_t workSpaceSizeInBytes, + 
void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNBackwardWeightsAlgorithmMaxCount(cudnnHandle_t handle, const cudnnRNNDescriptor_t rnnDesc, int *count); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnFindRNNBackwardWeightsAlgorithmEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t *yDesc, + const void *y, + const float findIntensity, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnAlgorithmPerformance_t *perfResults, + const void *workspace, + size_t workSpaceSizeInBytes, + const cudnnFilterDescriptor_t dwDesc, + void *dw, + const void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnBackwardData(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + const int loWinIdx[], + const int hiWinIdx[], + const int devSeqLengthsDQDO[], + const int devSeqLengthsDKDV[], + const cudnnSeqDataDescriptor_t doDesc, + const void *dout, + const cudnnSeqDataDescriptor_t dqDesc, + void *dqueries, + const void *queries, + const cudnnSeqDataDescriptor_t dkDesc, + void *dkeys, + const void *keys, + const cudnnSeqDataDescriptor_t dvDesc, + void *dvalues, + const void *values, + size_t weightSizeInBytes, + const void *weights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnBackwardWeights(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + cudnnWgradMode_t addGrad, + const cudnnSeqDataDescriptor_t qDesc, + const void *queries, + const cudnnSeqDataDescriptor_t kDesc, + const void *keys, + const cudnnSeqDataDescriptor_t vDesc, + const void *values, + const cudnnSeqDataDescriptor_t doDesc, + const void *dout, + size_t weightSizeInBytes, + const void 
*weights, + void *dweights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +/* +* CTC (Connectionist Temporal Classification) loss descriptor create/destroy/set/get functions +*/ +/* Input normalization mode for loss function */ +typedef enum { + CUDNN_LOSS_NORMALIZATION_NONE = 0, + CUDNN_LOSS_NORMALIZATION_SOFTMAX = 1, +} cudnnLossNormalizationMode_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateCTCLossDescriptor(cudnnCTCLossDescriptor_t *ctcLossDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetCTCLossDescriptor(cudnnCTCLossDescriptor_t ctcLossDesc, cudnnDataType_t compType); + +cudnnStatus_t CUDNNWINAPI +cudnnSetCTCLossDescriptorEx(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t compType, + cudnnLossNormalizationMode_t normMode, + cudnnNanPropagation_t gradMode); + +cudnnStatus_t CUDNNWINAPI +cudnnSetCTCLossDescriptor_v8(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t compType, + cudnnLossNormalizationMode_t normMode, + cudnnNanPropagation_t gradMode, + int maxLabelLength); + +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossDescriptor(cudnnCTCLossDescriptor_t ctcLossDesc, cudnnDataType_t *compType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossDescriptorEx(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t *compType, + cudnnLossNormalizationMode_t *normMode, + cudnnNanPropagation_t *gradMode); + +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossDescriptor_v8(cudnnCTCLossDescriptor_t ctcLossDesc, + cudnnDataType_t *compType, + cudnnLossNormalizationMode_t *normMode, + cudnnNanPropagation_t *gradMode, + int *maxLabelLength); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyCTCLossDescriptor(cudnnCTCLossDescriptor_t ctcLossDesc); + +/* return the ctc costs and gradients, given the probabilities and labels */ +cudnnStatus_t CUDNNWINAPI +cudnnCTCLoss( + cudnnHandle_t handle, + const cudnnTensorDescriptor_t + probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the timing steps, N is the 
+ mini batch size, A is the alphabet size) */ + const void *probs, /* probabilities after softmax, in GPU memory */ + const int hostLabels[], /* labels, in CPU memory */ + const int hostLabelLengths[], /* the length of each label, in CPU memory */ + const int hostInputLengths[], /* the lengths of timing steps in each batch, in CPU memory */ + void *costs, /* the returned costs of CTC, in GPU memory */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the dimensions are T,N,A */ + void *gradients, /* the returned CTC gradients, in GPU memory, to compute costs only, set it to NULL */ + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + void *workspace, /* pointer to the workspace, in GPU memory */ + size_t workSpaceSizeInBytes); /* size of the workspace */ + +/* return the ctc costs and gradients, given the probabilities and labels */ +cudnnStatus_t CUDNNWINAPI +cudnnCTCLoss_v8( + cudnnHandle_t handle, + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + const cudnnTensorDescriptor_t + probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the timing steps, N is the + mini batch size, A is the alphabet size) */ + const void *probs, /* probabilities after softmax, in GPU memory */ + const int labels[], /* labels, in GPU memory */ + const int labelLengths[], /* the length of each label, in GPU memory */ + const int inputLengths[], /* the lengths of timing steps in each batch, in GPU memory */ + void *costs, /* the returned costs of CTC, in GPU memory */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the dimensions are T,N,A */ + void *gradients, /* the returned CTC gradients, in GPU memory, to compute costs only, set it to NULL */ + size_t workSpaceSizeInBytes, /* size of the workspace */ + void *workspace); /* pointer to the workspace, in GPU 
memory */ + +/* return the workspace size needed for ctc */ +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossWorkspaceSize( + cudnnHandle_t handle, + const cudnnTensorDescriptor_t probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the + timing steps, N is the mini batch size, A is the alphabet size) */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the + dimensions are T,N,A. To compute costs + only, set it to NULL */ + const int *labels, /* labels, in CPU memory */ + const int *labelLengths, /* the length of each label, in CPU memory */ + const int *inputLengths, /* the lengths of timing steps in each batch, in CPU memory */ + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + size_t *sizeInBytes); /* pointer to the returned workspace size */ + +/* return the workspace size needed for ctc */ +cudnnStatus_t CUDNNWINAPI +cudnnGetCTCLossWorkspaceSize_v8( + cudnnHandle_t handle, + cudnnCTCLossAlgo_t algo, /* algorithm selected, supported now 0 and 1 */ + cudnnCTCLossDescriptor_t ctcLossDesc, + const cudnnTensorDescriptor_t probsDesc, /* Tensor descriptor for probabilities, the dimensions are T,N,A (T is the + timing steps, N is the mini batch size, A is the alphabet size) */ + const cudnnTensorDescriptor_t gradientsDesc, /* Tensor descriptor for gradients, the + dimensions are T,N,A. To compute costs + only, set it to NULL */ + size_t *sizeInBytes); /* pointer to the returned workspace size */ + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. 
+ */ +cudnnStatus_t CUDNNWINAPI +cudnnAdvTrainVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_ADV_TRAIN_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_backend_v9.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_backend_v9.h new file mode 100644 index 0000000000000000000000000000000000000000..5a378e2087f7a45c423f65d213d98c4fa20f3a52 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_backend_v9.h @@ -0,0 +1,60 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#ifndef _CUDNN_BACKEND_H_ +#define _CUDNN_BACKEND_H_ + +/* + * The content of this header has been moved into cudnn_graph.h. + * This header is kept for the backward compatibility purpose. + */ + +#include "cudnn_graph.h" + +#endif /* _CUDNN_BACKEND_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_train.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_train.h new file mode 100644 index 0000000000000000000000000000000000000000..20d706f5448ffdc177b1a6f457a2f788162d80c2 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_train.h @@ -0,0 +1,219 @@ +/* + * Copyright 2017-2022 NVIDIA Corporation. All rights reserved. 
+ * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. 
Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* + * cudnn_cnn_train : cuDNN's basic definitions and training CNN functions. + */ + +#pragma once +#include <cuda_runtime.h> +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops_infer.h" +#include "cudnn_ops_train.h" +#include "cudnn_cnn_infer.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_CNN_TRAIN_MAJOR 8 +#define CUDNN_CNN_TRAIN_MINOR 7 +#define CUDNN_CNN_TRAIN_PATCH 0 + +#if (CUDNN_CNN_TRAIN_MAJOR != CUDNN_MAJOR) || (CUDNN_CNN_TRAIN_MINOR != CUDNN_MINOR) || \ + (CUDNN_CNN_TRAIN_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN CNN TRAIN!!! 
+#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* helper function to provide the convolution backward filter algo that best fits the requirements */ + +typedef struct cudnnConvolutionBwdFilterAlgoPerfStruct { + cudnnConvolutionBwdFilterAlgo_t algo; + cudnnStatus_t status; + float time; + size_t memory; + cudnnDeterminism_t determinism; + cudnnMathType_t mathType; + int reserved[3]; +} cudnnConvolutionBwdFilterAlgoPerf_t; + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardFilterAlgorithmMaxCount(cudnnHandle_t handle, int *count); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionBackwardFilterAlgorithm(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t dwDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdFilterAlgoPerf_t *perfResults); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionBackwardFilterAlgorithmEx(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t dyDesc, + const void *y, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t dwDesc, + void *dw, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdFilterAlgoPerf_t *perfResults, + void *workSpace, + size_t workSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardFilterAlgorithm_v7(cudnnHandle_t handle, + const cudnnTensorDescriptor_t srcDesc, + const cudnnTensorDescriptor_t diffDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t gradDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdFilterAlgoPerf_t *perfResults); + +/* + * convolution algorithm (which requires potentially some workspace) + */ + +/* Helper function to return the minimum size of the workspace to be passed to the convolution given an algo */ +cudnnStatus_t CUDNNWINAPI 
+cudnnGetConvolutionBackwardFilterWorkspaceSize(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t gradDesc, + cudnnConvolutionBwdFilterAlgo_t algo, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnConvolutionBackwardFilter(cudnnHandle_t handle, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnConvolutionDescriptor_t convDesc, + cudnnConvolutionBwdFilterAlgo_t algo, + void *workSpace, + size_t workSpaceSizeInBytes, + const void *beta, + const cudnnFilterDescriptor_t dwDesc, + void *dw); + +/* Function to compute the bias gradient for batch convolution */ +cudnnStatus_t CUDNNWINAPI +cudnnConvolutionBackwardBias(cudnnHandle_t handle, + const void *alpha, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *beta, + const cudnnTensorDescriptor_t dbDesc, + void *db); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateFusedOpsConstParamPack(cudnnFusedOpsConstParamPack_t *constPack, cudnnFusedOps_t ops); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyFusedOpsConstParamPack(cudnnFusedOpsConstParamPack_t constPack); + +cudnnStatus_t CUDNNWINAPI +cudnnSetFusedOpsConstParamPackAttribute(cudnnFusedOpsConstParamPack_t constPack, + cudnnFusedOpsConstParamLabel_t paramLabel, + const void *param); + +cudnnStatus_t CUDNNWINAPI +cudnnGetFusedOpsConstParamPackAttribute(const cudnnFusedOpsConstParamPack_t constPack, + cudnnFusedOpsConstParamLabel_t paramLabel, + void *param, + int *isNULL); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateFusedOpsVariantParamPack(cudnnFusedOpsVariantParamPack_t *varPack, cudnnFusedOps_t ops); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyFusedOpsVariantParamPack(cudnnFusedOpsVariantParamPack_t varPack); + +cudnnStatus_t CUDNNWINAPI +cudnnSetFusedOpsVariantParamPackAttribute(cudnnFusedOpsVariantParamPack_t varPack, + 
cudnnFusedOpsVariantParamLabel_t paramLabel, + void *ptr); + +cudnnStatus_t CUDNNWINAPI +cudnnGetFusedOpsVariantParamPackAttribute(const cudnnFusedOpsVariantParamPack_t varPack, + cudnnFusedOpsVariantParamLabel_t paramLabel, + void *ptr); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateFusedOpsPlan(cudnnFusedOpsPlan_t *plan, cudnnFusedOps_t ops); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyFusedOpsPlan(cudnnFusedOpsPlan_t plan); + +cudnnStatus_t CUDNNWINAPI +cudnnMakeFusedOpsPlan(cudnnHandle_t handle, + cudnnFusedOpsPlan_t plan, + const cudnnFusedOpsConstParamPack_t constPack, + size_t *workspaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnFusedOpsExecute(cudnnHandle_t handle, const cudnnFusedOpsPlan_t plan, cudnnFusedOpsVariantParamPack_t varPack); + +cudnnStatus_t CUDNNWINAPI +cudnnCnnTrainVersionCheck(void); + +#if defined(__cplusplus) +} +#endif diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_graph.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_graph.h new file mode 100644 index 0000000000000000000000000000000000000000..c5394671423f9e950b47a61d59f9842f59a247d1 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_graph.h @@ -0,0 +1,909 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. 
Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +/* + * cudnn_graph : cuDNN's basic definitions and operations. + */ + +#if !defined(CUDNN_GRAPH_H_) +#define CUDNN_GRAPH_H_ + +#include <cuda_runtime.h> +#include <stdint.h> + +#include <library_types.h> + +#include "cudnn_version.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_GRAPH_MAJOR 9 +#define CUDNN_GRAPH_MINOR 1 +#define CUDNN_GRAPH_PATCH 0 + +#if (CUDNN_GRAPH_MAJOR != CUDNN_MAJOR) || (CUDNN_GRAPH_MINOR != CUDNN_MINOR) || (CUDNN_GRAPH_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN GRAPH!!! +#endif + +#ifndef CUDNNWINAPI +#ifdef _WIN32 +#define CUDNNWINAPI __stdcall +#else +#define CUDNNWINAPI +#endif +#endif + +/* Warnings for deprecated APIs are enabled using the CUDNN_WARN_DEPRECATED macro */ +#if defined(CUDNN_WARN_DEPRECATED) && (defined(__GNUC__) || defined(__clang__)) +/* GCC, Intel C/C++, Cray C/C++, CLANG, IBM XL C/C++ little endian */ +#define CUDNN_DEPRECATED __attribute__((deprecated)) +#define CUDNN_DEPRECATED_ENUM __attribute__((deprecated)) +#elif defined(CUDNN_WARN_DEPRECATED) && defined(_MSC_VER) +/* Microsoft Visual C++ */ +#define CUDNN_DEPRECATED __declspec(deprecated) +#define CUDNN_DEPRECATED_ENUM __declspec(deprecated) +#elif defined(CUDNN_WARN_DEPRECATED) && (__cplusplus >= 201402L) +/* C++14 compilers */ +#define CUDNN_DEPRECATED [[deprecated]] +#define CUDNN_DEPRECATED_ENUM [[deprecated]] +#else +/* No support for the deprecated attribute */ +#define CUDNN_DEPRECATED +#define CUDNN_DEPRECATED_ENUM +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +struct cudnnContext; +typedef struct cudnnContext *cudnnHandle_t; + +size_t CUDNNWINAPI +cudnnGetVersion(void); + +size_t CUDNNWINAPI +cudnnGetMaxDeviceVersion(void); + +/* Returns CUDA Runtime version statically linked against cudnn */ +size_t CUDNNWINAPI +cudnnGetCudartVersion(void); + +/* + * CUDNN return codes + */ +typedef enum { + CUDNN_STATUS_SUCCESS = 0, + + /* Uncategorized errors */ + CUDNN_STATUS_NOT_INITIALIZED = 1001, + 
CUDNN_STATUS_SUBLIBRARY_VERSION_MISMATCH = 1002, + CUDNN_STATUS_SERIALIZATION_VERSION_MISMATCH = 1003, + CUDNN_STATUS_DEPRECATED = 1004, + CUDNN_STATUS_LICENSE_ERROR = 1005, + CUDNN_STATUS_RUNTIME_IN_PROGRESS = 1006, + CUDNN_STATUS_RUNTIME_FP_OVERFLOW = 1007, + + CUDNN_STATUS_BAD_PARAM = 2000, + CUDNN_STATUS_BAD_PARAM_NULL_POINTER = 2002, + CUDNN_STATUS_BAD_PARAM_MISALIGNED_POINTER = 2003, + CUDNN_STATUS_BAD_PARAM_NOT_FINALIZED = 2004, + CUDNN_STATUS_BAD_PARAM_OUT_OF_BOUND = 2005, + CUDNN_STATUS_BAD_PARAM_SIZE_INSUFFICIENT = 2006, + CUDNN_STATUS_BAD_PARAM_STREAM_MISMATCH = 2007, + CUDNN_STATUS_BAD_PARAM_SHAPE_MISMATCH = 2008, + CUDNN_STATUS_BAD_PARAM_DUPLICATED_ENTRIES = 2009, + CUDNN_STATUS_BAD_PARAM_ATTRIBUTE_TYPE = 2010, + + CUDNN_STATUS_NOT_SUPPORTED = 3000, + CUDNN_STATUS_NOT_SUPPORTED_GRAPH_PATTERN = 3001, + CUDNN_STATUS_NOT_SUPPORTED_SHAPE = 3002, + CUDNN_STATUS_NOT_SUPPORTED_DATA_TYPE = 3003, + CUDNN_STATUS_NOT_SUPPORTED_LAYOUT = 3004, + CUDNN_STATUS_NOT_SUPPORTED_INCOMPATIBLE_CUDA_DRIVER = 3005, + CUDNN_STATUS_NOT_SUPPORTED_INCOMPATIBLE_CUDART = 3006, + CUDNN_STATUS_NOT_SUPPORTED_ARCH_MISMATCH = 3007, + CUDNN_STATUS_NOT_SUPPORTED_RUNTIME_PREREQUISITE_MISSING = 3008, + CUDNN_STATUS_NOT_SUPPORTED_SUBLIBRARY_UNAVAILABLE = 3009, + CUDNN_STATUS_NOT_SUPPORTED_SHARED_MEMORY_INSUFFICIENT = 3010, + CUDNN_STATUS_NOT_SUPPORTED_PADDING = 3011, + CUDNN_STATUS_NOT_SUPPORTED_BAD_LAUNCH_PARAM = 3012, + + CUDNN_STATUS_INTERNAL_ERROR = 4000, + CUDNN_STATUS_INTERNAL_ERROR_COMPILATION_FAILED = 4001, + CUDNN_STATUS_INTERNAL_ERROR_UNEXPECTED_VALUE = 4002, + CUDNN_STATUS_INTERNAL_ERROR_HOST_ALLOCATION_FAILED = 4003, + CUDNN_STATUS_INTERNAL_ERROR_DEVICE_ALLOCATION_FAILED = 4004, + CUDNN_STATUS_INTERNAL_ERROR_BAD_LAUNCH_PARAM = 4005, + CUDNN_STATUS_INTERNAL_ERROR_TEXTURE_CREATION_FAILED = 4006, + + CUDNN_STATUS_EXECUTION_FAILED = 5000, + CUDNN_STATUS_EXECUTION_FAILED_CUDA_DRIVER = 5001, + CUDNN_STATUS_EXECUTION_FAILED_CUBLAS = 5002, + CUDNN_STATUS_EXECUTION_FAILED_CUDART = 5003, + 
CUDNN_STATUS_EXECUTION_FAILED_CURAND = 5004, + + CUDNN_STATUS_ALLOC_FAILED CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_INTERNAL_ERROR_HOST_ALLOCATION_FAILED, + CUDNN_STATUS_INVALID_VALUE CUDNN_DEPRECATED_ENUM = 2001 /* please transition to CUDNN_STATUS_BAD_PARAM instead */, + CUDNN_STATUS_ARCH_MISMATCH CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_NOT_SUPPORTED_ARCH_MISMATCH, + CUDNN_STATUS_MAPPING_ERROR CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_INTERNAL_ERROR_TEXTURE_CREATION_FAILED, + CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING CUDNN_DEPRECATED_ENUM = + CUDNN_STATUS_NOT_SUPPORTED_RUNTIME_PREREQUISITE_MISSING, + CUDNN_STATUS_VERSION_MISMATCH CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_SUBLIBRARY_VERSION_MISMATCH, +} cudnnStatus_t; + +#define CUDNN_STATUS_FULL_ERROR_CODE(category, specific_err) ((cudnnStatus_t)(0 + (category) + (specific_err))) +#define CUDNN_STATUS_CATEGORY(full_error_code) ((full_error_code) / 1000 * 1000) +#define CUDNN_STATUS_SPECIFIC_ERROR(full_error_code) ((full_error_code) % 1000) + +/* human-readable error messages */ +const char *CUDNNWINAPI +cudnnGetErrorString(cudnnStatus_t status); + +void CUDNNWINAPI +cudnnGetLastErrorString(char *message, size_t max_size); + +/* Forward definition in this version only */ +typedef struct cudnnRuntimeTag_t cudnnRuntimeTag_t CUDNN_DEPRECATED; + +typedef enum { + CUDNN_ERRQUERY_RAWCODE = 0, + CUDNN_ERRQUERY_NONBLOCKING = 1, + CUDNN_ERRQUERY_BLOCKING = 2, +} cudnnErrQueryMode_t; + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnQueryRuntimeError(cudnnHandle_t handle, cudnnStatus_t *rstatus, cudnnErrQueryMode_t mode, cudnnRuntimeTag_t *tag); + +cudnnStatus_t CUDNNWINAPI +cudnnGetProperty(libraryPropertyType type, int *value); + +cudnnStatus_t CUDNNWINAPI +cudnnCreate(cudnnHandle_t *handle); +cudnnStatus_t CUDNNWINAPI +cudnnDestroy(cudnnHandle_t handle); +cudnnStatus_t CUDNNWINAPI +cudnnSetStream(cudnnHandle_t handle, cudaStream_t streamId); +cudnnStatus_t CUDNNWINAPI +cudnnGetStream(cudnnHandle_t handle, cudaStream_t *streamId); +/* + 
* CUDNN data type + */ +typedef enum { + CUDNN_DATA_FLOAT = 0, + CUDNN_DATA_DOUBLE = 1, + CUDNN_DATA_HALF = 2, + CUDNN_DATA_INT8 = 3, + CUDNN_DATA_INT32 = 4, + CUDNN_DATA_INT8x4 CUDNN_DEPRECATED_ENUM = 5, + CUDNN_DATA_UINT8 = 6, + CUDNN_DATA_UINT8x4 CUDNN_DEPRECATED_ENUM = 7, + CUDNN_DATA_INT8x32 CUDNN_DEPRECATED_ENUM = 8, + CUDNN_DATA_BFLOAT16 = 9, + CUDNN_DATA_INT64 = 10, + CUDNN_DATA_BOOLEAN = 11, + CUDNN_DATA_FP8_E4M3 = 12, + CUDNN_DATA_FP8_E5M2 = 13, + CUDNN_DATA_FAST_FLOAT_FOR_FP8 = 14, +} cudnnDataType_t; + +/* + * CUDNN math type + */ +typedef enum { + CUDNN_DEFAULT_MATH = 0, + CUDNN_TENSOR_OP_MATH = 1, + CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION = 2, + CUDNN_FMA_MATH = 3, +} cudnnMathType_t; + +/* + * CUDNN propagate Nan + */ +typedef enum { + CUDNN_NOT_PROPAGATE_NAN CUDNN_DEPRECATED_ENUM = 0, + CUDNN_PROPAGATE_NAN CUDNN_DEPRECATED_ENUM = 1, +} cudnnNanPropagation_t; + +/* + * Behavior for OOB samples. OOB samples are samples where L+R > T is encountered during the gradient calculation. If + * gradMode is set to CUDNN_CTC_SKIP_OOB_GRADIENTS, then the CTC loss function does not write to the gradient buffer for + * that sample. Instead, the current values, even not finite, are retained. If gradMode is set to + * CUDNN_CTC_ZERO_OOB_GRADIENTS, then the gradient for that sample is set to zero. This guarantees a finite gradient. 
+*/ +typedef enum { + CUDNN_CTC_ZERO_OOB_GRADIENTS = 0, + CUDNN_CTC_SKIP_OOB_GRADIENTS = 1, +} cudnnCTCGradMode_t; + +typedef enum { + CUDNN_TENSOR_NCHW = 0, /* row major (wStride = 1, hStride = w) */ + CUDNN_TENSOR_NHWC = 1, /* feature maps interleaved ( cStride = 1 )*/ + CUDNN_TENSOR_NCHW_VECT_C = 2, /* each image point is vector of element of C, vector length in data type */ +} cudnnTensorFormat_t; + +/* + * CUDNN ReduceTensor op type + */ +typedef enum { + CUDNN_REDUCE_TENSOR_ADD = 0, + CUDNN_REDUCE_TENSOR_MUL = 1, + CUDNN_REDUCE_TENSOR_MIN = 2, + CUDNN_REDUCE_TENSOR_MAX = 3, + CUDNN_REDUCE_TENSOR_AMAX = 4, + CUDNN_REDUCE_TENSOR_AVG = 5, + CUDNN_REDUCE_TENSOR_NORM1 = 6, + CUDNN_REDUCE_TENSOR_NORM2 = 7, + CUDNN_REDUCE_TENSOR_MUL_NO_ZEROS = 8, +} cudnnReduceTensorOp_t; + +/* + * activation mode + */ +typedef enum { + CUDNN_ACTIVATION_SIGMOID = 0, + CUDNN_ACTIVATION_RELU = 1, + CUDNN_ACTIVATION_TANH = 2, + CUDNN_ACTIVATION_CLIPPED_RELU = 3, + CUDNN_ACTIVATION_ELU = 4, + CUDNN_ACTIVATION_IDENTITY = 5, + CUDNN_ACTIVATION_SWISH = 6 +} cudnnActivationMode_t CUDNN_DEPRECATED; + +typedef enum { + CUDNN_SEV_FATAL = 0, + CUDNN_SEV_ERROR = 1, + CUDNN_SEV_WARNING = 2, + CUDNN_SEV_INFO = 3, +} cudnnSeverity_t; + +/* Message masks to be used with cudnnSetCallback() */ +#define CUDNN_SEV_ERROR_EN (1U << CUDNN_SEV_ERROR) +#define CUDNN_SEV_WARNING_EN (1U << CUDNN_SEV_WARNING) +#define CUDNN_SEV_INFO_EN (1U << CUDNN_SEV_INFO) + +/* struct containing useful information for each API call */ +typedef struct cudnnDebugStruct { + unsigned cudnn_version; + cudnnStatus_t cudnnStatus; + unsigned time_sec; /* epoch time in seconds */ + unsigned time_usec; /* microseconds part of epoch time */ + unsigned time_delta; /* time since start in seconds */ + cudnnHandle_t handle; /* cudnn handle */ + cudaStream_t stream; /* cuda stream ID */ + unsigned long long pid; /* process ID */ + unsigned long long tid; /* thread ID */ + int cudaDeviceId; /* CUDA device ID */ + int reserved[15]; /* 
reserved for future use */ +} cudnnDebug_t; + +typedef void (*cudnnCallback_t)(cudnnSeverity_t sev, void *udata, const cudnnDebug_t *dbg, const char *msg); + +cudnnStatus_t CUDNNWINAPI +cudnnSetCallback(unsigned mask, void *udata, cudnnCallback_t fptr); + +cudnnStatus_t CUDNNWINAPI +cudnnGetCallback(unsigned *mask, void **udata, cudnnCallback_t *fptr); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_SUBLIBRARY_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnGraphVersionCheck(void); + +/* Maximum supported number of tensor dimensions */ +#define CUDNN_DIM_MAX 8 + +/* + * convolution mode + */ +typedef enum { CUDNN_CONVOLUTION = 0, CUDNN_CROSS_CORRELATION = 1 } cudnnConvolutionMode_t; + +/* + * CUDNN Reorder + */ +typedef enum { + CUDNN_DEFAULT_REORDER = 0, + CUDNN_NO_REORDER = 1, +} cudnnReorderType_t CUDNN_DEPRECATED; + +typedef void *cudnnBackendDescriptor_t; + +typedef struct cudnnFractionStruct { + int64_t numerator; + int64_t denominator; +} cudnnFraction_t; + +typedef enum { + CUDNN_POINTWISE_ADD = 0, + CUDNN_POINTWISE_ADD_SQUARE = 5, + CUDNN_POINTWISE_DIV = 6, + CUDNN_POINTWISE_MAX = 3, + CUDNN_POINTWISE_MIN = 2, + CUDNN_POINTWISE_MOD = 7, + CUDNN_POINTWISE_MUL = 1, + CUDNN_POINTWISE_POW = 8, + CUDNN_POINTWISE_SUB = 9, + + CUDNN_POINTWISE_ABS = 10, + CUDNN_POINTWISE_CEIL = 11, + CUDNN_POINTWISE_COS = 12, + CUDNN_POINTWISE_EXP = 13, + CUDNN_POINTWISE_FLOOR = 14, + CUDNN_POINTWISE_LOG = 15, + CUDNN_POINTWISE_NEG = 16, + CUDNN_POINTWISE_RSQRT = 17, + CUDNN_POINTWISE_SIN = 18, + CUDNN_POINTWISE_SQRT = 4, + CUDNN_POINTWISE_TAN = 19, + CUDNN_POINTWISE_ERF = 20, + CUDNN_POINTWISE_IDENTITY = 21, + CUDNN_POINTWISE_RECIPROCAL = 22, + CUDNN_POINTWISE_ATAN2 = 23, + + CUDNN_POINTWISE_RELU_FWD = 100, + 
CUDNN_POINTWISE_TANH_FWD = 101, + CUDNN_POINTWISE_SIGMOID_FWD = 102, + CUDNN_POINTWISE_ELU_FWD = 103, + CUDNN_POINTWISE_GELU_FWD = 104, + CUDNN_POINTWISE_SOFTPLUS_FWD = 105, + CUDNN_POINTWISE_SWISH_FWD = 106, + CUDNN_POINTWISE_GELU_APPROX_TANH_FWD = 107, + + CUDNN_POINTWISE_RELU_BWD = 200, + CUDNN_POINTWISE_TANH_BWD = 201, + CUDNN_POINTWISE_SIGMOID_BWD = 202, + CUDNN_POINTWISE_ELU_BWD = 203, + CUDNN_POINTWISE_GELU_BWD = 204, + CUDNN_POINTWISE_SOFTPLUS_BWD = 205, + CUDNN_POINTWISE_SWISH_BWD = 206, + CUDNN_POINTWISE_GELU_APPROX_TANH_BWD = 207, + + CUDNN_POINTWISE_CMP_EQ = 300, + CUDNN_POINTWISE_CMP_NEQ = 301, + CUDNN_POINTWISE_CMP_GT = 302, + CUDNN_POINTWISE_CMP_GE = 303, + CUDNN_POINTWISE_CMP_LT = 304, + CUDNN_POINTWISE_CMP_LE = 305, + + CUDNN_POINTWISE_LOGICAL_AND = 400, + CUDNN_POINTWISE_LOGICAL_OR = 401, + CUDNN_POINTWISE_LOGICAL_NOT = 402, + + CUDNN_POINTWISE_GEN_INDEX = 501, + + CUDNN_POINTWISE_BINARY_SELECT = 601, +} cudnnPointwiseMode_t; + +typedef enum { + CUDNN_RESAMPLE_NEAREST = 0, + CUDNN_RESAMPLE_BILINEAR = 1, + CUDNN_RESAMPLE_AVGPOOL = 2, + CUDNN_RESAMPLE_AVGPOOL_INCLUDE_PADDING = 2, + CUDNN_RESAMPLE_AVGPOOL_EXCLUDE_PADDING = 4, + CUDNN_RESAMPLE_MAXPOOL = 3, +} cudnnResampleMode_t; + +typedef enum { + CUDNN_SIGNAL_SET = 0, + CUDNN_SIGNAL_WAIT = 1, +} cudnnSignalMode_t; + +typedef enum { + CUDNN_GENSTATS_SUM_SQSUM = 0, +} cudnnGenStatsMode_t; + +typedef enum { + CUDNN_BN_FINALIZE_STATISTICS_TRAINING = 0, + CUDNN_BN_FINALIZE_STATISTICS_INFERENCE = 1, +} cudnnBnFinalizeStatsMode_t; + +typedef enum { + CUDNN_RNG_DISTRIBUTION_BERNOULLI, + CUDNN_RNG_DISTRIBUTION_UNIFORM, + CUDNN_RNG_DISTRIBUTION_NORMAL, +} cudnnRngDistribution_t; + +typedef enum { + CUDNN_ATTR_POINTWISE_MODE = 0, + CUDNN_ATTR_POINTWISE_MATH_PREC = 1, + CUDNN_ATTR_POINTWISE_NAN_PROPAGATION CUDNN_DEPRECATED_ENUM = 2, + CUDNN_ATTR_POINTWISE_RELU_LOWER_CLIP = 3, + CUDNN_ATTR_POINTWISE_RELU_UPPER_CLIP = 4, + CUDNN_ATTR_POINTWISE_RELU_LOWER_CLIP_SLOPE = 5, + CUDNN_ATTR_POINTWISE_ELU_ALPHA = 6, + 
CUDNN_ATTR_POINTWISE_SOFTPLUS_BETA = 7, + CUDNN_ATTR_POINTWISE_SWISH_BETA = 8, + CUDNN_ATTR_POINTWISE_AXIS = 9, + + CUDNN_ATTR_CONVOLUTION_COMP_TYPE = 100, + CUDNN_ATTR_CONVOLUTION_CONV_MODE = 101, + CUDNN_ATTR_CONVOLUTION_DILATIONS = 102, + CUDNN_ATTR_CONVOLUTION_FILTER_STRIDES = 103, + CUDNN_ATTR_CONVOLUTION_POST_PADDINGS = 104, + CUDNN_ATTR_CONVOLUTION_PRE_PADDINGS = 105, + CUDNN_ATTR_CONVOLUTION_SPATIAL_DIMS = 106, + + CUDNN_ATTR_ENGINEHEUR_MODE = 200, + CUDNN_ATTR_ENGINEHEUR_OPERATION_GRAPH = 201, + CUDNN_ATTR_ENGINEHEUR_RESULTS = 202, + CUDNN_ATTR_ENGINEHEUR_SM_COUNT_TARGET = 203, + + CUDNN_ATTR_ENGINECFG_ENGINE = 300, + CUDNN_ATTR_ENGINECFG_INTERMEDIATE_INFO = 301, + CUDNN_ATTR_ENGINECFG_KNOB_CHOICES = 302, + + CUDNN_ATTR_EXECUTION_PLAN_HANDLE = 400, + CUDNN_ATTR_EXECUTION_PLAN_ENGINE_CONFIG = 401, + CUDNN_ATTR_EXECUTION_PLAN_WORKSPACE_SIZE = 402, + CUDNN_ATTR_EXECUTION_PLAN_COMPUTED_INTERMEDIATE_UIDS = 403, + CUDNN_ATTR_EXECUTION_PLAN_RUN_ONLY_INTERMEDIATE_UIDS = 404, + CUDNN_ATTR_EXECUTION_PLAN_JSON_REPRESENTATION = 405, + + CUDNN_ATTR_INTERMEDIATE_INFO_UNIQUE_ID = 500, + CUDNN_ATTR_INTERMEDIATE_INFO_SIZE = 501, + CUDNN_ATTR_INTERMEDIATE_INFO_DEPENDENT_DATA_UIDS = 502, + CUDNN_ATTR_INTERMEDIATE_INFO_DEPENDENT_ATTRIBUTES = 503, + + CUDNN_ATTR_KNOB_CHOICE_KNOB_TYPE = 600, + CUDNN_ATTR_KNOB_CHOICE_KNOB_VALUE = 601, + + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_ALPHA = 700, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_BETA = 701, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_CONV_DESC = 702, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_W = 703, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_X = 704, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_Y = 705, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_ALPHA = 706, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_BETA = 707, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_CONV_DESC = 708, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_W = 709, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_DX = 710, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_DY 
= 711, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_ALPHA = 712, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_BETA = 713, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_CONV_DESC = 714, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_DW = 715, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_X = 716, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_DY = 717, + + CUDNN_ATTR_OPERATION_POINTWISE_PW_DESCRIPTOR = 750, + CUDNN_ATTR_OPERATION_POINTWISE_XDESC = 751, + CUDNN_ATTR_OPERATION_POINTWISE_BDESC = 752, + CUDNN_ATTR_OPERATION_POINTWISE_YDESC = 753, + CUDNN_ATTR_OPERATION_POINTWISE_ALPHA1 = 754, + CUDNN_ATTR_OPERATION_POINTWISE_ALPHA2 = 755, + CUDNN_ATTR_OPERATION_POINTWISE_DXDESC = 756, + CUDNN_ATTR_OPERATION_POINTWISE_DYDESC = 757, + CUDNN_ATTR_OPERATION_POINTWISE_TDESC = 758, + + CUDNN_ATTR_OPERATION_GENSTATS_MODE = 770, + CUDNN_ATTR_OPERATION_GENSTATS_MATH_PREC = 771, + CUDNN_ATTR_OPERATION_GENSTATS_XDESC = 772, + CUDNN_ATTR_OPERATION_GENSTATS_SUMDESC = 773, + CUDNN_ATTR_OPERATION_GENSTATS_SQSUMDESC = 774, + + CUDNN_ATTR_OPERATION_BN_FINALIZE_STATS_MODE = 780, + CUDNN_ATTR_OPERATION_BN_FINALIZE_MATH_PREC = 781, + CUDNN_ATTR_OPERATION_BN_FINALIZE_Y_SUM_DESC = 782, + CUDNN_ATTR_OPERATION_BN_FINALIZE_Y_SQ_SUM_DESC = 783, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SCALE_DESC = 784, + CUDNN_ATTR_OPERATION_BN_FINALIZE_BIAS_DESC = 785, + CUDNN_ATTR_OPERATION_BN_FINALIZE_PREV_RUNNING_MEAN_DESC = 786, + CUDNN_ATTR_OPERATION_BN_FINALIZE_PREV_RUNNING_VAR_DESC = 787, + CUDNN_ATTR_OPERATION_BN_FINALIZE_UPDATED_RUNNING_MEAN_DESC = 788, + CUDNN_ATTR_OPERATION_BN_FINALIZE_UPDATED_RUNNING_VAR_DESC = 789, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SAVED_MEAN_DESC = 790, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SAVED_INV_STD_DESC = 791, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EQ_SCALE_DESC = 792, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EQ_BIAS_DESC = 793, + CUDNN_ATTR_OPERATION_BN_FINALIZE_ACCUM_COUNT_DESC = 794, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EPSILON_DESC = 795, + 
CUDNN_ATTR_OPERATION_BN_FINALIZE_EXP_AVERATE_FACTOR_DESC = 796, + + CUDNN_ATTR_OPERATIONGRAPH_HANDLE = 800, + CUDNN_ATTR_OPERATIONGRAPH_OPS = 801, + CUDNN_ATTR_OPERATIONGRAPH_ENGINE_GLOBAL_COUNT = 802, + + CUDNN_ATTR_TENSOR_BYTE_ALIGNMENT = 900, + CUDNN_ATTR_TENSOR_DATA_TYPE = 901, + CUDNN_ATTR_TENSOR_DIMENSIONS = 902, + CUDNN_ATTR_TENSOR_STRIDES = 903, + CUDNN_ATTR_TENSOR_VECTOR_COUNT = 904, + CUDNN_ATTR_TENSOR_VECTORIZED_DIMENSION = 905, + CUDNN_ATTR_TENSOR_UNIQUE_ID = 906, + CUDNN_ATTR_TENSOR_IS_VIRTUAL = 907, + CUDNN_ATTR_TENSOR_IS_BY_VALUE = 908, + CUDNN_ATTR_TENSOR_REORDERING_MODE = 909, + CUDNN_ATTR_TENSOR_RAGGED_OFFSET_DESC = 913, + + CUDNN_ATTR_VARIANT_PACK_UNIQUE_IDS = 1000, + CUDNN_ATTR_VARIANT_PACK_DATA_POINTERS = 1001, + CUDNN_ATTR_VARIANT_PACK_INTERMEDIATES = 1002, + CUDNN_ATTR_VARIANT_PACK_WORKSPACE = 1003, + + CUDNN_ATTR_LAYOUT_INFO_TENSOR_UID = 1100, + CUDNN_ATTR_LAYOUT_INFO_TYPES = 1101, + + CUDNN_ATTR_KNOB_INFO_TYPE = 1200, + CUDNN_ATTR_KNOB_INFO_MAXIMUM_VALUE = 1201, + CUDNN_ATTR_KNOB_INFO_MINIMUM_VALUE = 1202, + CUDNN_ATTR_KNOB_INFO_STRIDE = 1203, + + CUDNN_ATTR_ENGINE_OPERATION_GRAPH = 1300, + CUDNN_ATTR_ENGINE_GLOBAL_INDEX = 1301, + CUDNN_ATTR_ENGINE_KNOB_INFO = 1302, + CUDNN_ATTR_ENGINE_NUMERICAL_NOTE = 1303, + CUDNN_ATTR_ENGINE_LAYOUT_INFO = 1304, + CUDNN_ATTR_ENGINE_BEHAVIOR_NOTE = 1305, + CUDNN_ATTR_ENGINE_SM_COUNT_TARGET = 1306, + + CUDNN_ATTR_MATMUL_COMP_TYPE = 1500, + CUDNN_ATTR_MATMUL_PADDING_VALUE = 1503, + + CUDNN_ATTR_OPERATION_MATMUL_ADESC = 1520, + CUDNN_ATTR_OPERATION_MATMUL_BDESC = 1521, + CUDNN_ATTR_OPERATION_MATMUL_CDESC = 1522, + CUDNN_ATTR_OPERATION_MATMUL_DESC = 1523, + CUDNN_ATTR_OPERATION_MATMUL_IRREGULARLY_STRIDED_BATCH_COUNT CUDNN_DEPRECATED_ENUM = 1524, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_M_OVERRIDE_DESC = 1525, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_N_OVERRIDE_DESC = 1526, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_K_OVERRIDE_DESC = 1527, + + CUDNN_ATTR_REDUCTION_OPERATOR = 1600, + CUDNN_ATTR_REDUCTION_COMP_TYPE = 1601, + + 
CUDNN_ATTR_OPERATION_REDUCTION_XDESC = 1610, + CUDNN_ATTR_OPERATION_REDUCTION_YDESC = 1611, + CUDNN_ATTR_OPERATION_REDUCTION_DESC = 1612, + + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_MATH_PREC = 1620, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_MEAN_DESC = 1621, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_INVSTD_DESC = 1622, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_BN_SCALE_DESC = 1623, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_X_DESC = 1624, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DY_DESC = 1625, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DBN_SCALE_DESC = 1626, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DBN_BIAS_DESC = 1627, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_DY_SCALE_DESC = 1628, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_X_SCALE_DESC = 1629, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_BIAS = 1630, + + CUDNN_ATTR_RESAMPLE_MODE = 1700, + CUDNN_ATTR_RESAMPLE_COMP_TYPE = 1701, + CUDNN_ATTR_RESAMPLE_SPATIAL_DIMS = 1702, + CUDNN_ATTR_RESAMPLE_POST_PADDINGS = 1703, + CUDNN_ATTR_RESAMPLE_PRE_PADDINGS = 1704, + CUDNN_ATTR_RESAMPLE_STRIDES = 1705, + CUDNN_ATTR_RESAMPLE_WINDOW_DIMS = 1706, + CUDNN_ATTR_RESAMPLE_NAN_PROPAGATION = 1707, + CUDNN_ATTR_RESAMPLE_PADDING_MODE = 1708, + + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_XDESC = 1710, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_YDESC = 1711, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_IDXDESC = 1712, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_ALPHA CUDNN_DEPRECATED_ENUM = 1713, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_BETA CUDNN_DEPRECATED_ENUM = 1714, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_DESC = 1716, + + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DXDESC = 1720, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DYDESC = 1721, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_IDXDESC = 1722, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_ALPHA CUDNN_DEPRECATED_ENUM = 1723, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_BETA CUDNN_DEPRECATED_ENUM = 1724, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DESC = 1725, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_XDESC = 1726, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_YDESC = 1727, + + CUDNN_ATTR_OPERATION_CONCAT_AXIS = 1800, + 
CUDNN_ATTR_OPERATION_CONCAT_INPUT_DESCS = 1801, + CUDNN_ATTR_OPERATION_CONCAT_INPLACE_INDEX = 1802, + CUDNN_ATTR_OPERATION_CONCAT_OUTPUT_DESC = 1803, + + CUDNN_ATTR_OPERATION_SIGNAL_MODE = 1900, + CUDNN_ATTR_OPERATION_SIGNAL_FLAGDESC = 1901, + CUDNN_ATTR_OPERATION_SIGNAL_VALUE = 1902, + CUDNN_ATTR_OPERATION_SIGNAL_XDESC = 1903, + CUDNN_ATTR_OPERATION_SIGNAL_YDESC = 1904, + + CUDNN_ATTR_OPERATION_NORM_FWD_MODE = 2000, + CUDNN_ATTR_OPERATION_NORM_FWD_PHASE = 2001, + CUDNN_ATTR_OPERATION_NORM_FWD_XDESC = 2002, + CUDNN_ATTR_OPERATION_NORM_FWD_MEAN_DESC = 2003, + CUDNN_ATTR_OPERATION_NORM_FWD_INV_VARIANCE_DESC = 2004, + CUDNN_ATTR_OPERATION_NORM_FWD_SCALE_DESC = 2005, + CUDNN_ATTR_OPERATION_NORM_FWD_BIAS_DESC = 2006, + CUDNN_ATTR_OPERATION_NORM_FWD_EPSILON_DESC = 2007, + CUDNN_ATTR_OPERATION_NORM_FWD_EXP_AVG_FACTOR_DESC = 2008, + CUDNN_ATTR_OPERATION_NORM_FWD_INPUT_RUNNING_MEAN_DESC = 2009, + CUDNN_ATTR_OPERATION_NORM_FWD_INPUT_RUNNING_VAR_DESC = 2010, + CUDNN_ATTR_OPERATION_NORM_FWD_OUTPUT_RUNNING_MEAN_DESC = 2011, + CUDNN_ATTR_OPERATION_NORM_FWD_OUTPUT_RUNNING_VAR_DESC = 2012, + CUDNN_ATTR_OPERATION_NORM_FWD_YDESC = 2013, + CUDNN_ATTR_OPERATION_NORM_FWD_PEER_STAT_DESCS = 2014, + + CUDNN_ATTR_OPERATION_NORM_BWD_MODE = 2100, + CUDNN_ATTR_OPERATION_NORM_BWD_XDESC = 2101, + CUDNN_ATTR_OPERATION_NORM_BWD_MEAN_DESC = 2102, + CUDNN_ATTR_OPERATION_NORM_BWD_INV_VARIANCE_DESC = 2103, + CUDNN_ATTR_OPERATION_NORM_BWD_DYDESC = 2104, + CUDNN_ATTR_OPERATION_NORM_BWD_SCALE_DESC = 2105, + CUDNN_ATTR_OPERATION_NORM_BWD_EPSILON_DESC = 2106, + CUDNN_ATTR_OPERATION_NORM_BWD_DSCALE_DESC = 2107, + CUDNN_ATTR_OPERATION_NORM_BWD_DBIAS_DESC = 2108, + CUDNN_ATTR_OPERATION_NORM_BWD_DXDESC = 2109, + CUDNN_ATTR_OPERATION_NORM_BWD_PEER_STAT_DESCS = 2110, + + CUDNN_ATTR_OPERATION_RESHAPE_XDESC = 2200, + CUDNN_ATTR_OPERATION_RESHAPE_YDESC = 2201, + + CUDNN_ATTR_RNG_DISTRIBUTION = 2300, + CUDNN_ATTR_RNG_NORMAL_DIST_MEAN = 2301, + CUDNN_ATTR_RNG_NORMAL_DIST_STANDARD_DEVIATION = 2302, + 
CUDNN_ATTR_RNG_UNIFORM_DIST_MAXIMUM = 2303, + CUDNN_ATTR_RNG_UNIFORM_DIST_MINIMUM = 2304, + CUDNN_ATTR_RNG_BERNOULLI_DIST_PROBABILITY = 2305, + + CUDNN_ATTR_OPERATION_RNG_YDESC = 2310, + CUDNN_ATTR_OPERATION_RNG_SEED = 2311, + CUDNN_ATTR_OPERATION_RNG_DESC = 2312, + CUDNN_ATTR_OPERATION_RNG_OFFSET_DESC = 2313, +} cudnnBackendAttributeName_t; + +typedef enum { + CUDNN_TYPE_HANDLE = 0, + CUDNN_TYPE_DATA_TYPE, + CUDNN_TYPE_BOOLEAN, + CUDNN_TYPE_INT64, + CUDNN_TYPE_FLOAT, + CUDNN_TYPE_DOUBLE, + CUDNN_TYPE_VOID_PTR, + CUDNN_TYPE_CONVOLUTION_MODE, + CUDNN_TYPE_HEUR_MODE, + CUDNN_TYPE_KNOB_TYPE, + CUDNN_TYPE_NAN_PROPOGATION CUDNN_DEPRECATED_ENUM, + CUDNN_TYPE_NUMERICAL_NOTE, + CUDNN_TYPE_LAYOUT_TYPE, + CUDNN_TYPE_ATTRIB_NAME, + CUDNN_TYPE_POINTWISE_MODE, + CUDNN_TYPE_BACKEND_DESCRIPTOR, + CUDNN_TYPE_GENSTATS_MODE, + CUDNN_TYPE_BN_FINALIZE_STATS_MODE, + CUDNN_TYPE_REDUCTION_OPERATOR_TYPE, + CUDNN_TYPE_BEHAVIOR_NOTE, + CUDNN_TYPE_TENSOR_REORDERING_MODE, + CUDNN_TYPE_RESAMPLE_MODE, + CUDNN_TYPE_PADDING_MODE, + CUDNN_TYPE_INT32, + CUDNN_TYPE_CHAR, + CUDNN_TYPE_SIGNAL_MODE, + CUDNN_TYPE_FRACTION, + CUDNN_TYPE_NORM_MODE, + CUDNN_TYPE_NORM_FWD_PHASE, + CUDNN_TYPE_RNG_DISTRIBUTION +} cudnnBackendAttributeType_t; + +typedef enum { + CUDNN_BACKEND_POINTWISE_DESCRIPTOR = 0, + CUDNN_BACKEND_CONVOLUTION_DESCRIPTOR, + CUDNN_BACKEND_ENGINE_DESCRIPTOR, + CUDNN_BACKEND_ENGINECFG_DESCRIPTOR, + CUDNN_BACKEND_ENGINEHEUR_DESCRIPTOR, + CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR, + CUDNN_BACKEND_INTERMEDIATE_INFO_DESCRIPTOR, + CUDNN_BACKEND_KNOB_CHOICE_DESCRIPTOR, + CUDNN_BACKEND_KNOB_INFO_DESCRIPTOR, + CUDNN_BACKEND_LAYOUT_INFO_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_FORWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_BACKWARD_FILTER_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_BACKWARD_DATA_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_POINTWISE_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_GEN_STATS_DESCRIPTOR, + CUDNN_BACKEND_OPERATIONGRAPH_DESCRIPTOR, + 
CUDNN_BACKEND_VARIANT_PACK_DESCRIPTOR, + CUDNN_BACKEND_TENSOR_DESCRIPTOR, + CUDNN_BACKEND_MATMUL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_MATMUL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_BN_FINALIZE_STATISTICS_DESCRIPTOR, + CUDNN_BACKEND_REDUCTION_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_REDUCTION_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_BN_BWD_WEIGHTS_DESCRIPTOR, + CUDNN_BACKEND_RESAMPLE_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESAMPLE_FWD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESAMPLE_BWD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONCAT_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_SIGNAL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_NORM_FORWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_NORM_BACKWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESHAPE_DESCRIPTOR, + CUDNN_BACKEND_RNG_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RNG_DESCRIPTOR, +} cudnnBackendDescriptorType_t; + +typedef enum { + CUDNN_NUMERICAL_NOTE_TENSOR_CORE = 0, + CUDNN_NUMERICAL_NOTE_DOWN_CONVERT_INPUTS, + CUDNN_NUMERICAL_NOTE_REDUCED_PRECISION_REDUCTION, + CUDNN_NUMERICAL_NOTE_FFT, + CUDNN_NUMERICAL_NOTE_NONDETERMINISTIC, + CUDNN_NUMERICAL_NOTE_WINOGRAD, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_4x4, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_6x6, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_13x13, + CUDNN_NUMERICAL_NOTE_STRICT_NAN_PROP, + CUDNN_NUMERICAL_NOTE_TYPE_COUNT, +} cudnnBackendNumericalNote_t; + +typedef enum { + CUDNN_BEHAVIOR_NOTE_RUNTIME_COMPILATION = 0, + CUDNN_BEHAVIOR_NOTE_REQUIRES_FILTER_INT8x32_REORDER = 1, + CUDNN_BEHAVIOR_NOTE_REQUIRES_BIAS_INT8x32_REORDER = 2, + CUDNN_BEHAVIOR_NOTE_TYPE_COUNT, +} cudnnBackendBehaviorNote_t; + +typedef enum { + CUDNN_KNOB_TYPE_SPLIT_K CUDNN_DEPRECATED_ENUM = 0, + CUDNN_KNOB_TYPE_SWIZZLE = 1, + CUDNN_KNOB_TYPE_TILE_SIZE = 2, + CUDNN_KNOB_TYPE_USE_TEX CUDNN_DEPRECATED_ENUM = 3, + CUDNN_KNOB_TYPE_EDGE = 4, + CUDNN_KNOB_TYPE_KBLOCK CUDNN_DEPRECATED_ENUM = 5, + CUDNN_KNOB_TYPE_LDGA CUDNN_DEPRECATED_ENUM = 6, + CUDNN_KNOB_TYPE_LDGB CUDNN_DEPRECATED_ENUM = 7, + CUDNN_KNOB_TYPE_CHUNK_K CUDNN_DEPRECATED_ENUM = 8, + 
CUDNN_KNOB_TYPE_SPLIT_H CUDNN_DEPRECATED_ENUM = 9, + CUDNN_KNOB_TYPE_WINO_TILE CUDNN_DEPRECATED_ENUM = 10, + CUDNN_KNOB_TYPE_MULTIPLY = 11, + CUDNN_KNOB_TYPE_SPLIT_K_BUF = 12, + CUDNN_KNOB_TYPE_TILEK = 13, + CUDNN_KNOB_TYPE_STAGES = 14, + CUDNN_KNOB_TYPE_REDUCTION_MODE = 15, + CUDNN_KNOB_TYPE_CTA_SPLIT_K_MODE CUDNN_DEPRECATED_ENUM = 16, + CUDNN_KNOB_TYPE_SPLIT_K_SLC = 17, + CUDNN_KNOB_TYPE_IDX_MODE CUDNN_DEPRECATED_ENUM = 18, + CUDNN_KNOB_TYPE_SLICED CUDNN_DEPRECATED_ENUM = 19, + CUDNN_KNOB_TYPE_SPLIT_RS CUDNN_DEPRECATED_ENUM = 20, + CUDNN_KNOB_TYPE_SINGLEBUFFER CUDNN_DEPRECATED_ENUM = 21, + CUDNN_KNOB_TYPE_LDGC CUDNN_DEPRECATED_ENUM = 22, + CUDNN_KNOB_TYPE_SPECFILT = 23, + CUDNN_KNOB_TYPE_KERNEL_CFG = 24, + CUDNN_KNOB_TYPE_WORKSPACE = 25, + CUDNN_KNOB_TYPE_TILE_CGA CUDNN_DEPRECATED_ENUM = 26, + CUDNN_KNOB_TYPE_TILE_CGA_M = 27, + CUDNN_KNOB_TYPE_TILE_CGA_N = 28, + CUDNN_KNOB_TYPE_BLOCK_SIZE = 29, + CUDNN_KNOB_TYPE_OCCUPANCY = 30, + CUDNN_KNOB_TYPE_ARRAY_SIZE_PER_THREAD = 31, + CUDNN_KNOB_TYPE_NUM_C_PER_BLOCK CUDNN_DEPRECATED_ENUM = 32, + CUDNN_KNOB_TYPE_SPLIT_COLS = 33, + CUDNN_KNOB_TYPE_TILE_ROWS = 34, + CUDNN_KNOB_TYPE_TILE_COLS = 35, + CUDNN_KNOB_TYPE_LOAD_SIZE = 36, + CUDNN_KNOB_TYPE_COUNTS, +} cudnnBackendKnobType_t; + +typedef enum { + CUDNN_LAYOUT_TYPE_PREFERRED_NCHW = 0, + CUDNN_LAYOUT_TYPE_PREFERRED_NHWC = 1, + CUDNN_LAYOUT_TYPE_PREFERRED_PAD4CK = 2, + CUDNN_LAYOUT_TYPE_PREFERRED_PAD8CK = 3, + CUDNN_LAYOUT_TYPE_COUNT = 4, +} cudnnBackendLayoutType_t; + +typedef enum { + CUDNN_HEUR_MODE_INSTANT = 0, + CUDNN_HEUR_MODE_B = 1, + CUDNN_HEUR_MODE_FALLBACK = 2, + CUDNN_HEUR_MODE_A = 3, + CUDNN_HEUR_MODES_COUNT = 4, +} cudnnBackendHeurMode_t; + +typedef enum { + CUDNN_TENSOR_REORDERING_NONE = 0, + CUDNN_TENSOR_REORDERING_INT8x32 = 1, + CUDNN_TENSOR_REORDERING_F16x16 = 2, +} cudnnBackendTensorReordering_t; + +typedef enum { + CUDNN_ZERO_PAD = 0, + CUDNN_NEG_INF_PAD = 1, + CUDNN_EDGE_VAL_PAD = 2, +} cudnnPaddingMode_t; + +typedef enum { + CUDNN_LAYER_NORM = 0, + 
CUDNN_INSTANCE_NORM = 1, + CUDNN_BATCH_NORM = 2, + CUDNN_GROUP_NORM = 3, + CUDNN_RMS_NORM = 4, +} cudnnBackendNormMode_t; + +typedef enum { + CUDNN_NORM_FWD_INFERENCE = 0, + CUDNN_NORM_FWD_TRAINING = 1, +} cudnnBackendNormFwdPhase_t; + +cudnnStatus_t CUDNNWINAPI +cudnnBackendCreateDescriptor(cudnnBackendDescriptorType_t descriptorType, cudnnBackendDescriptor_t *descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendDestroyDescriptor(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendInitialize(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendFinalize(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendSetAttribute(cudnnBackendDescriptor_t descriptor, + cudnnBackendAttributeName_t attributeName, + cudnnBackendAttributeType_t attributeType, + int64_t elementCount, + const void *arrayOfElements); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendGetAttribute(cudnnBackendDescriptor_t const descriptor, + cudnnBackendAttributeName_t attributeName, + cudnnBackendAttributeType_t attributeType, + int64_t requestedElementCount, + int64_t *elementCount, + void *arrayOfElements); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendExecute(cudnnHandle_t handle, cudnnBackendDescriptor_t executionPlan, cudnnBackendDescriptor_t variantPack); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_GRAPH_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_graph_v9.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_graph_v9.h new file mode 100644 index 0000000000000000000000000000000000000000..c5394671423f9e950b47a61d59f9842f59a247d1 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_graph_v9.h @@ -0,0 +1,909 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. 
+ * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. 
Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* + * cudnn_graph : cuDNN's basic definitions operations. + */ + +#if !defined(CUDNN_GRAPH_H_) +#define CUDNN_GRAPH_H_ + +#include +#include + +#include + +#include "cudnn_version.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_GRAPH_MAJOR 9 +#define CUDNN_GRAPH_MINOR 1 +#define CUDNN_GRAPH_PATCH 0 + +#if (CUDNN_GRAPH_MAJOR != CUDNN_MAJOR) || (CUDNN_GRAPH_MINOR != CUDNN_MINOR) || (CUDNN_GRAPH_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN GRAPH!!! +#endif + +#ifndef CUDNNWINAPI +#ifdef _WIN32 +#define CUDNNWINAPI __stdcall +#else +#define CUDNNWINAPI +#endif +#endif + +/* Warnings for deprecated API-s are enabled using the CUDNN_WARN_DEPRECATED macro */ +#if defined(CUDNN_WARN_DEPRECATED) && (defined(__GNUC__) || defined(__clang__)) +/* GCC, Intel C/C++, Cray C/C++, CLANG, IBM XL C/C++ little endian */ +#define CUDNN_DEPRECATED __attribute__((deprecated)) +#define CUDNN_DEPRECATED_ENUM __attribute__((deprecated)) +#elif defined(CUDNN_WARN_DEPRECATED) && defined(_MSC_VER) +/* Microsoft Visual C++ */ +#define CUDNN_DEPRECATED __declspec(deprecated) +#define CUDNN_DEPRECATED_ENUM __declspec(deprecated) +#elif defined(CUDNN_WARN_DEPRECATED) && (__cplusplus >= 201402L) +/* C++14 compilers */ +#define CUDNN_DEPRECATED [[deprecated]] +#define CUDNN_DEPRECATED_ENUM [[deprecated]] +#else +/* No support for the deprecated attribute */ +#define CUDNN_DEPRECATED +#define CUDNN_DEPRECATED_ENUM +#endif + +#if defined(__cplusplus) +extern "C" { 
+#endif + +struct cudnnContext; +typedef struct cudnnContext *cudnnHandle_t; + +size_t CUDNNWINAPI +cudnnGetVersion(void); + +size_t CUDNNWINAPI +cudnnGetMaxDeviceVersion(void); + +/* Returns CUDA Runtime version statically linked against cudnn */ +size_t CUDNNWINAPI +cudnnGetCudartVersion(void); + +/* + * CUDNN return codes + */ +typedef enum { + CUDNN_STATUS_SUCCESS = 0, + + /* Uncategorized errors */ + CUDNN_STATUS_NOT_INITIALIZED = 1001, + CUDNN_STATUS_SUBLIBRARY_VERSION_MISMATCH = 1002, + CUDNN_STATUS_SERIALIZATION_VERSION_MISMATCH = 1003, + CUDNN_STATUS_DEPRECATED = 1004, + CUDNN_STATUS_LICENSE_ERROR = 1005, + CUDNN_STATUS_RUNTIME_IN_PROGRESS = 1006, + CUDNN_STATUS_RUNTIME_FP_OVERFLOW = 1007, + + CUDNN_STATUS_BAD_PARAM = 2000, + CUDNN_STATUS_BAD_PARAM_NULL_POINTER = 2002, + CUDNN_STATUS_BAD_PARAM_MISALIGNED_POINTER = 2003, + CUDNN_STATUS_BAD_PARAM_NOT_FINALIZED = 2004, + CUDNN_STATUS_BAD_PARAM_OUT_OF_BOUND = 2005, + CUDNN_STATUS_BAD_PARAM_SIZE_INSUFFICIENT = 2006, + CUDNN_STATUS_BAD_PARAM_STREAM_MISMATCH = 2007, + CUDNN_STATUS_BAD_PARAM_SHAPE_MISMATCH = 2008, + CUDNN_STATUS_BAD_PARAM_DUPLICATED_ENTRIES = 2009, + CUDNN_STATUS_BAD_PARAM_ATTRIBUTE_TYPE = 2010, + + CUDNN_STATUS_NOT_SUPPORTED = 3000, + CUDNN_STATUS_NOT_SUPPORTED_GRAPH_PATTERN = 3001, + CUDNN_STATUS_NOT_SUPPORTED_SHAPE = 3002, + CUDNN_STATUS_NOT_SUPPORTED_DATA_TYPE = 3003, + CUDNN_STATUS_NOT_SUPPORTED_LAYOUT = 3004, + CUDNN_STATUS_NOT_SUPPORTED_INCOMPATIBLE_CUDA_DRIVER = 3005, + CUDNN_STATUS_NOT_SUPPORTED_INCOMPATIBLE_CUDART = 3006, + CUDNN_STATUS_NOT_SUPPORTED_ARCH_MISMATCH = 3007, + CUDNN_STATUS_NOT_SUPPORTED_RUNTIME_PREREQUISITE_MISSING = 3008, + CUDNN_STATUS_NOT_SUPPORTED_SUBLIBRARY_UNAVAILABLE = 3009, + CUDNN_STATUS_NOT_SUPPORTED_SHARED_MEMORY_INSUFFICIENT = 3010, + CUDNN_STATUS_NOT_SUPPORTED_PADDING = 3011, + CUDNN_STATUS_NOT_SUPPORTED_BAD_LAUNCH_PARAM = 3012, + + CUDNN_STATUS_INTERNAL_ERROR = 4000, + CUDNN_STATUS_INTERNAL_ERROR_COMPILATION_FAILED = 4001, + 
CUDNN_STATUS_INTERNAL_ERROR_UNEXPECTED_VALUE = 4002, + CUDNN_STATUS_INTERNAL_ERROR_HOST_ALLOCATION_FAILED = 4003, + CUDNN_STATUS_INTERNAL_ERROR_DEVICE_ALLOCATION_FAILED = 4004, + CUDNN_STATUS_INTERNAL_ERROR_BAD_LAUNCH_PARAM = 4005, + CUDNN_STATUS_INTERNAL_ERROR_TEXTURE_CREATION_FAILED = 4006, + + CUDNN_STATUS_EXECUTION_FAILED = 5000, + CUDNN_STATUS_EXECUTION_FAILED_CUDA_DRIVER = 5001, + CUDNN_STATUS_EXECUTION_FAILED_CUBLAS = 5002, + CUDNN_STATUS_EXECUTION_FAILED_CUDART = 5003, + CUDNN_STATUS_EXECUTION_FAILED_CURAND = 5004, + + CUDNN_STATUS_ALLOC_FAILED CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_INTERNAL_ERROR_HOST_ALLOCATION_FAILED, + CUDNN_STATUS_INVALID_VALUE CUDNN_DEPRECATED_ENUM = 2001 /* please transition to CUDNN_STATUS_BAD_PARAM instead */, + CUDNN_STATUS_ARCH_MISMATCH CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_NOT_SUPPORTED_ARCH_MISMATCH, + CUDNN_STATUS_MAPPING_ERROR CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_INTERNAL_ERROR_TEXTURE_CREATION_FAILED, + CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING CUDNN_DEPRECATED_ENUM = + CUDNN_STATUS_NOT_SUPPORTED_RUNTIME_PREREQUISITE_MISSING, + CUDNN_STATUS_VERSION_MISMATCH CUDNN_DEPRECATED_ENUM = CUDNN_STATUS_SUBLIBRARY_VERSION_MISMATCH, +} cudnnStatus_t; + +#define CUDNN_STATUS_FULL_ERROR_CODE(category, specific_err) ((cudnnStatus_t)(0 + (category) + (specific_err))) +#define CUDNN_STATUS_CATEGORY(full_error_code) ((full_error_code) / 1000 * 1000) +#define CUDNN_STATUS_SPECIFIC_ERROR(full_error_code) ((full_error_code) % 1000) + +/* human-readable error messages */ +const char *CUDNNWINAPI +cudnnGetErrorString(cudnnStatus_t status); + +void CUDNNWINAPI +cudnnGetLastErrorString(char *message, size_t max_size); + +/* Forward definition in this version only */ +typedef struct cudnnRuntimeTag_t cudnnRuntimeTag_t CUDNN_DEPRECATED; + +typedef enum { + CUDNN_ERRQUERY_RAWCODE = 0, + CUDNN_ERRQUERY_NONBLOCKING = 1, + CUDNN_ERRQUERY_BLOCKING = 2, +} cudnnErrQueryMode_t; + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnQueryRuntimeError(cudnnHandle_t 
handle, cudnnStatus_t *rstatus, cudnnErrQueryMode_t mode, cudnnRuntimeTag_t *tag); + +cudnnStatus_t CUDNNWINAPI +cudnnGetProperty(libraryPropertyType type, int *value); + +cudnnStatus_t CUDNNWINAPI +cudnnCreate(cudnnHandle_t *handle); +cudnnStatus_t CUDNNWINAPI +cudnnDestroy(cudnnHandle_t handle); +cudnnStatus_t CUDNNWINAPI +cudnnSetStream(cudnnHandle_t handle, cudaStream_t streamId); +cudnnStatus_t CUDNNWINAPI +cudnnGetStream(cudnnHandle_t handle, cudaStream_t *streamId); +/* + * CUDNN data type + */ +typedef enum { + CUDNN_DATA_FLOAT = 0, + CUDNN_DATA_DOUBLE = 1, + CUDNN_DATA_HALF = 2, + CUDNN_DATA_INT8 = 3, + CUDNN_DATA_INT32 = 4, + CUDNN_DATA_INT8x4 CUDNN_DEPRECATED_ENUM = 5, + CUDNN_DATA_UINT8 = 6, + CUDNN_DATA_UINT8x4 CUDNN_DEPRECATED_ENUM = 7, + CUDNN_DATA_INT8x32 CUDNN_DEPRECATED_ENUM = 8, + CUDNN_DATA_BFLOAT16 = 9, + CUDNN_DATA_INT64 = 10, + CUDNN_DATA_BOOLEAN = 11, + CUDNN_DATA_FP8_E4M3 = 12, + CUDNN_DATA_FP8_E5M2 = 13, + CUDNN_DATA_FAST_FLOAT_FOR_FP8 = 14, +} cudnnDataType_t; + +/* + * CUDNN math type + */ +typedef enum { + CUDNN_DEFAULT_MATH = 0, + CUDNN_TENSOR_OP_MATH = 1, + CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION = 2, + CUDNN_FMA_MATH = 3, +} cudnnMathType_t; + +/* + * CUDNN propagate Nan + */ +typedef enum { + CUDNN_NOT_PROPAGATE_NAN CUDNN_DEPRECATED_ENUM = 0, + CUDNN_PROPAGATE_NAN CUDNN_DEPRECATED_ENUM = 1, +} cudnnNanPropagation_t; + +/* + * Behavior for OOB samples. OOB samples are samples where L+R > T is encountered during the gradient calculation. If + * gradMode is set to CUDNN_CTC_SKIP_OOB_GRADIENTS, then the CTC loss function does not write to the gradient buffer for + * that sample. Instead, the current values, even not finite, are retained. If gradMode is set to + * CUDNN_CTC_ZERO_OOB_GRADIENTS, then the gradient for that sample is set to zero. This guarantees a finite gradient. 
+*/ +typedef enum { + CUDNN_CTC_ZERO_OOB_GRADIENTS = 0, + CUDNN_CTC_SKIP_OOB_GRADIENTS = 1, +} cudnnCTCGradMode_t; + +typedef enum { + CUDNN_TENSOR_NCHW = 0, /* row major (wStride = 1, hStride = w) */ + CUDNN_TENSOR_NHWC = 1, /* feature maps interleaved ( cStride = 1 )*/ + CUDNN_TENSOR_NCHW_VECT_C = 2, /* each image point is vector of element of C, vector length in data type */ +} cudnnTensorFormat_t; + +/* + * CUDNN ReduceTensor op type + */ +typedef enum { + CUDNN_REDUCE_TENSOR_ADD = 0, + CUDNN_REDUCE_TENSOR_MUL = 1, + CUDNN_REDUCE_TENSOR_MIN = 2, + CUDNN_REDUCE_TENSOR_MAX = 3, + CUDNN_REDUCE_TENSOR_AMAX = 4, + CUDNN_REDUCE_TENSOR_AVG = 5, + CUDNN_REDUCE_TENSOR_NORM1 = 6, + CUDNN_REDUCE_TENSOR_NORM2 = 7, + CUDNN_REDUCE_TENSOR_MUL_NO_ZEROS = 8, +} cudnnReduceTensorOp_t; + +/* + * activation mode + */ +typedef enum { + CUDNN_ACTIVATION_SIGMOID = 0, + CUDNN_ACTIVATION_RELU = 1, + CUDNN_ACTIVATION_TANH = 2, + CUDNN_ACTIVATION_CLIPPED_RELU = 3, + CUDNN_ACTIVATION_ELU = 4, + CUDNN_ACTIVATION_IDENTITY = 5, + CUDNN_ACTIVATION_SWISH = 6 +} cudnnActivationMode_t CUDNN_DEPRECATED; + +typedef enum { + CUDNN_SEV_FATAL = 0, + CUDNN_SEV_ERROR = 1, + CUDNN_SEV_WARNING = 2, + CUDNN_SEV_INFO = 3, +} cudnnSeverity_t; + +/* Message masks to be used with cudnnSetCallback() */ +#define CUDNN_SEV_ERROR_EN (1U << CUDNN_SEV_ERROR) +#define CUDNN_SEV_WARNING_EN (1U << CUDNN_SEV_WARNING) +#define CUDNN_SEV_INFO_EN (1U << CUDNN_SEV_INFO) + +/* struct containing useful information for each API call */ +typedef struct cudnnDebugStruct { + unsigned cudnn_version; + cudnnStatus_t cudnnStatus; + unsigned time_sec; /* epoch time in seconds */ + unsigned time_usec; /* microseconds part of epoch time */ + unsigned time_delta; /* time since start in seconds */ + cudnnHandle_t handle; /* cudnn handle */ + cudaStream_t stream; /* cuda stream ID */ + unsigned long long pid; /* process ID */ + unsigned long long tid; /* thread ID */ + int cudaDeviceId; /* CUDA device ID */ + int reserved[15]; /* 
reserved for future use */ +} cudnnDebug_t; + +typedef void (*cudnnCallback_t)(cudnnSeverity_t sev, void *udata, const cudnnDebug_t *dbg, const char *msg); + +cudnnStatus_t CUDNNWINAPI +cudnnSetCallback(unsigned mask, void *udata, cudnnCallback_t fptr); + +cudnnStatus_t CUDNNWINAPI +cudnnGetCallback(unsigned *mask, void **udata, cudnnCallback_t *fptr); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_SUBLIBRARY_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnGraphVersionCheck(void); + +/* Maximum supported number of tensor dimensions */ +#define CUDNN_DIM_MAX 8 + +/* + * convolution mode + */ +typedef enum { CUDNN_CONVOLUTION = 0, CUDNN_CROSS_CORRELATION = 1 } cudnnConvolutionMode_t; + +/* + * CUDNN Reorder + */ +typedef enum { + CUDNN_DEFAULT_REORDER = 0, + CUDNN_NO_REORDER = 1, +} cudnnReorderType_t CUDNN_DEPRECATED; + +typedef void *cudnnBackendDescriptor_t; + +typedef struct cudnnFractionStruct { + int64_t numerator; + int64_t denominator; +} cudnnFraction_t; + +typedef enum { + CUDNN_POINTWISE_ADD = 0, + CUDNN_POINTWISE_ADD_SQUARE = 5, + CUDNN_POINTWISE_DIV = 6, + CUDNN_POINTWISE_MAX = 3, + CUDNN_POINTWISE_MIN = 2, + CUDNN_POINTWISE_MOD = 7, + CUDNN_POINTWISE_MUL = 1, + CUDNN_POINTWISE_POW = 8, + CUDNN_POINTWISE_SUB = 9, + + CUDNN_POINTWISE_ABS = 10, + CUDNN_POINTWISE_CEIL = 11, + CUDNN_POINTWISE_COS = 12, + CUDNN_POINTWISE_EXP = 13, + CUDNN_POINTWISE_FLOOR = 14, + CUDNN_POINTWISE_LOG = 15, + CUDNN_POINTWISE_NEG = 16, + CUDNN_POINTWISE_RSQRT = 17, + CUDNN_POINTWISE_SIN = 18, + CUDNN_POINTWISE_SQRT = 4, + CUDNN_POINTWISE_TAN = 19, + CUDNN_POINTWISE_ERF = 20, + CUDNN_POINTWISE_IDENTITY = 21, + CUDNN_POINTWISE_RECIPROCAL = 22, + CUDNN_POINTWISE_ATAN2 = 23, + + CUDNN_POINTWISE_RELU_FWD = 100, + 
CUDNN_POINTWISE_TANH_FWD = 101, + CUDNN_POINTWISE_SIGMOID_FWD = 102, + CUDNN_POINTWISE_ELU_FWD = 103, + CUDNN_POINTWISE_GELU_FWD = 104, + CUDNN_POINTWISE_SOFTPLUS_FWD = 105, + CUDNN_POINTWISE_SWISH_FWD = 106, + CUDNN_POINTWISE_GELU_APPROX_TANH_FWD = 107, + + CUDNN_POINTWISE_RELU_BWD = 200, + CUDNN_POINTWISE_TANH_BWD = 201, + CUDNN_POINTWISE_SIGMOID_BWD = 202, + CUDNN_POINTWISE_ELU_BWD = 203, + CUDNN_POINTWISE_GELU_BWD = 204, + CUDNN_POINTWISE_SOFTPLUS_BWD = 205, + CUDNN_POINTWISE_SWISH_BWD = 206, + CUDNN_POINTWISE_GELU_APPROX_TANH_BWD = 207, + + CUDNN_POINTWISE_CMP_EQ = 300, + CUDNN_POINTWISE_CMP_NEQ = 301, + CUDNN_POINTWISE_CMP_GT = 302, + CUDNN_POINTWISE_CMP_GE = 303, + CUDNN_POINTWISE_CMP_LT = 304, + CUDNN_POINTWISE_CMP_LE = 305, + + CUDNN_POINTWISE_LOGICAL_AND = 400, + CUDNN_POINTWISE_LOGICAL_OR = 401, + CUDNN_POINTWISE_LOGICAL_NOT = 402, + + CUDNN_POINTWISE_GEN_INDEX = 501, + + CUDNN_POINTWISE_BINARY_SELECT = 601, +} cudnnPointwiseMode_t; + +typedef enum { + CUDNN_RESAMPLE_NEAREST = 0, + CUDNN_RESAMPLE_BILINEAR = 1, + CUDNN_RESAMPLE_AVGPOOL = 2, + CUDNN_RESAMPLE_AVGPOOL_INCLUDE_PADDING = 2, + CUDNN_RESAMPLE_AVGPOOL_EXCLUDE_PADDING = 4, + CUDNN_RESAMPLE_MAXPOOL = 3, +} cudnnResampleMode_t; + +typedef enum { + CUDNN_SIGNAL_SET = 0, + CUDNN_SIGNAL_WAIT = 1, +} cudnnSignalMode_t; + +typedef enum { + CUDNN_GENSTATS_SUM_SQSUM = 0, +} cudnnGenStatsMode_t; + +typedef enum { + CUDNN_BN_FINALIZE_STATISTICS_TRAINING = 0, + CUDNN_BN_FINALIZE_STATISTICS_INFERENCE = 1, +} cudnnBnFinalizeStatsMode_t; + +typedef enum { + CUDNN_RNG_DISTRIBUTION_BERNOULLI, + CUDNN_RNG_DISTRIBUTION_UNIFORM, + CUDNN_RNG_DISTRIBUTION_NORMAL, +} cudnnRngDistribution_t; + +typedef enum { + CUDNN_ATTR_POINTWISE_MODE = 0, + CUDNN_ATTR_POINTWISE_MATH_PREC = 1, + CUDNN_ATTR_POINTWISE_NAN_PROPAGATION CUDNN_DEPRECATED_ENUM = 2, + CUDNN_ATTR_POINTWISE_RELU_LOWER_CLIP = 3, + CUDNN_ATTR_POINTWISE_RELU_UPPER_CLIP = 4, + CUDNN_ATTR_POINTWISE_RELU_LOWER_CLIP_SLOPE = 5, + CUDNN_ATTR_POINTWISE_ELU_ALPHA = 6, + 
CUDNN_ATTR_POINTWISE_SOFTPLUS_BETA = 7, + CUDNN_ATTR_POINTWISE_SWISH_BETA = 8, + CUDNN_ATTR_POINTWISE_AXIS = 9, + + CUDNN_ATTR_CONVOLUTION_COMP_TYPE = 100, + CUDNN_ATTR_CONVOLUTION_CONV_MODE = 101, + CUDNN_ATTR_CONVOLUTION_DILATIONS = 102, + CUDNN_ATTR_CONVOLUTION_FILTER_STRIDES = 103, + CUDNN_ATTR_CONVOLUTION_POST_PADDINGS = 104, + CUDNN_ATTR_CONVOLUTION_PRE_PADDINGS = 105, + CUDNN_ATTR_CONVOLUTION_SPATIAL_DIMS = 106, + + CUDNN_ATTR_ENGINEHEUR_MODE = 200, + CUDNN_ATTR_ENGINEHEUR_OPERATION_GRAPH = 201, + CUDNN_ATTR_ENGINEHEUR_RESULTS = 202, + CUDNN_ATTR_ENGINEHEUR_SM_COUNT_TARGET = 203, + + CUDNN_ATTR_ENGINECFG_ENGINE = 300, + CUDNN_ATTR_ENGINECFG_INTERMEDIATE_INFO = 301, + CUDNN_ATTR_ENGINECFG_KNOB_CHOICES = 302, + + CUDNN_ATTR_EXECUTION_PLAN_HANDLE = 400, + CUDNN_ATTR_EXECUTION_PLAN_ENGINE_CONFIG = 401, + CUDNN_ATTR_EXECUTION_PLAN_WORKSPACE_SIZE = 402, + CUDNN_ATTR_EXECUTION_PLAN_COMPUTED_INTERMEDIATE_UIDS = 403, + CUDNN_ATTR_EXECUTION_PLAN_RUN_ONLY_INTERMEDIATE_UIDS = 404, + CUDNN_ATTR_EXECUTION_PLAN_JSON_REPRESENTATION = 405, + + CUDNN_ATTR_INTERMEDIATE_INFO_UNIQUE_ID = 500, + CUDNN_ATTR_INTERMEDIATE_INFO_SIZE = 501, + CUDNN_ATTR_INTERMEDIATE_INFO_DEPENDENT_DATA_UIDS = 502, + CUDNN_ATTR_INTERMEDIATE_INFO_DEPENDENT_ATTRIBUTES = 503, + + CUDNN_ATTR_KNOB_CHOICE_KNOB_TYPE = 600, + CUDNN_ATTR_KNOB_CHOICE_KNOB_VALUE = 601, + + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_ALPHA = 700, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_BETA = 701, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_CONV_DESC = 702, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_W = 703, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_X = 704, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_Y = 705, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_ALPHA = 706, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_BETA = 707, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_CONV_DESC = 708, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_W = 709, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_DX = 710, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_DY 
= 711, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_ALPHA = 712, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_BETA = 713, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_CONV_DESC = 714, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_DW = 715, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_X = 716, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_DY = 717, + + CUDNN_ATTR_OPERATION_POINTWISE_PW_DESCRIPTOR = 750, + CUDNN_ATTR_OPERATION_POINTWISE_XDESC = 751, + CUDNN_ATTR_OPERATION_POINTWISE_BDESC = 752, + CUDNN_ATTR_OPERATION_POINTWISE_YDESC = 753, + CUDNN_ATTR_OPERATION_POINTWISE_ALPHA1 = 754, + CUDNN_ATTR_OPERATION_POINTWISE_ALPHA2 = 755, + CUDNN_ATTR_OPERATION_POINTWISE_DXDESC = 756, + CUDNN_ATTR_OPERATION_POINTWISE_DYDESC = 757, + CUDNN_ATTR_OPERATION_POINTWISE_TDESC = 758, + + CUDNN_ATTR_OPERATION_GENSTATS_MODE = 770, + CUDNN_ATTR_OPERATION_GENSTATS_MATH_PREC = 771, + CUDNN_ATTR_OPERATION_GENSTATS_XDESC = 772, + CUDNN_ATTR_OPERATION_GENSTATS_SUMDESC = 773, + CUDNN_ATTR_OPERATION_GENSTATS_SQSUMDESC = 774, + + CUDNN_ATTR_OPERATION_BN_FINALIZE_STATS_MODE = 780, + CUDNN_ATTR_OPERATION_BN_FINALIZE_MATH_PREC = 781, + CUDNN_ATTR_OPERATION_BN_FINALIZE_Y_SUM_DESC = 782, + CUDNN_ATTR_OPERATION_BN_FINALIZE_Y_SQ_SUM_DESC = 783, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SCALE_DESC = 784, + CUDNN_ATTR_OPERATION_BN_FINALIZE_BIAS_DESC = 785, + CUDNN_ATTR_OPERATION_BN_FINALIZE_PREV_RUNNING_MEAN_DESC = 786, + CUDNN_ATTR_OPERATION_BN_FINALIZE_PREV_RUNNING_VAR_DESC = 787, + CUDNN_ATTR_OPERATION_BN_FINALIZE_UPDATED_RUNNING_MEAN_DESC = 788, + CUDNN_ATTR_OPERATION_BN_FINALIZE_UPDATED_RUNNING_VAR_DESC = 789, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SAVED_MEAN_DESC = 790, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SAVED_INV_STD_DESC = 791, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EQ_SCALE_DESC = 792, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EQ_BIAS_DESC = 793, + CUDNN_ATTR_OPERATION_BN_FINALIZE_ACCUM_COUNT_DESC = 794, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EPSILON_DESC = 795, + 
CUDNN_ATTR_OPERATION_BN_FINALIZE_EXP_AVERATE_FACTOR_DESC = 796, + + CUDNN_ATTR_OPERATIONGRAPH_HANDLE = 800, + CUDNN_ATTR_OPERATIONGRAPH_OPS = 801, + CUDNN_ATTR_OPERATIONGRAPH_ENGINE_GLOBAL_COUNT = 802, + + CUDNN_ATTR_TENSOR_BYTE_ALIGNMENT = 900, + CUDNN_ATTR_TENSOR_DATA_TYPE = 901, + CUDNN_ATTR_TENSOR_DIMENSIONS = 902, + CUDNN_ATTR_TENSOR_STRIDES = 903, + CUDNN_ATTR_TENSOR_VECTOR_COUNT = 904, + CUDNN_ATTR_TENSOR_VECTORIZED_DIMENSION = 905, + CUDNN_ATTR_TENSOR_UNIQUE_ID = 906, + CUDNN_ATTR_TENSOR_IS_VIRTUAL = 907, + CUDNN_ATTR_TENSOR_IS_BY_VALUE = 908, + CUDNN_ATTR_TENSOR_REORDERING_MODE = 909, + CUDNN_ATTR_TENSOR_RAGGED_OFFSET_DESC = 913, + + CUDNN_ATTR_VARIANT_PACK_UNIQUE_IDS = 1000, + CUDNN_ATTR_VARIANT_PACK_DATA_POINTERS = 1001, + CUDNN_ATTR_VARIANT_PACK_INTERMEDIATES = 1002, + CUDNN_ATTR_VARIANT_PACK_WORKSPACE = 1003, + + CUDNN_ATTR_LAYOUT_INFO_TENSOR_UID = 1100, + CUDNN_ATTR_LAYOUT_INFO_TYPES = 1101, + + CUDNN_ATTR_KNOB_INFO_TYPE = 1200, + CUDNN_ATTR_KNOB_INFO_MAXIMUM_VALUE = 1201, + CUDNN_ATTR_KNOB_INFO_MINIMUM_VALUE = 1202, + CUDNN_ATTR_KNOB_INFO_STRIDE = 1203, + + CUDNN_ATTR_ENGINE_OPERATION_GRAPH = 1300, + CUDNN_ATTR_ENGINE_GLOBAL_INDEX = 1301, + CUDNN_ATTR_ENGINE_KNOB_INFO = 1302, + CUDNN_ATTR_ENGINE_NUMERICAL_NOTE = 1303, + CUDNN_ATTR_ENGINE_LAYOUT_INFO = 1304, + CUDNN_ATTR_ENGINE_BEHAVIOR_NOTE = 1305, + CUDNN_ATTR_ENGINE_SM_COUNT_TARGET = 1306, + + CUDNN_ATTR_MATMUL_COMP_TYPE = 1500, + CUDNN_ATTR_MATMUL_PADDING_VALUE = 1503, + + CUDNN_ATTR_OPERATION_MATMUL_ADESC = 1520, + CUDNN_ATTR_OPERATION_MATMUL_BDESC = 1521, + CUDNN_ATTR_OPERATION_MATMUL_CDESC = 1522, + CUDNN_ATTR_OPERATION_MATMUL_DESC = 1523, + CUDNN_ATTR_OPERATION_MATMUL_IRREGULARLY_STRIDED_BATCH_COUNT CUDNN_DEPRECATED_ENUM = 1524, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_M_OVERRIDE_DESC = 1525, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_N_OVERRIDE_DESC = 1526, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_K_OVERRIDE_DESC = 1527, + + CUDNN_ATTR_REDUCTION_OPERATOR = 1600, + CUDNN_ATTR_REDUCTION_COMP_TYPE = 1601, + + 
CUDNN_ATTR_OPERATION_REDUCTION_XDESC = 1610, + CUDNN_ATTR_OPERATION_REDUCTION_YDESC = 1611, + CUDNN_ATTR_OPERATION_REDUCTION_DESC = 1612, + + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_MATH_PREC = 1620, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_MEAN_DESC = 1621, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_INVSTD_DESC = 1622, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_BN_SCALE_DESC = 1623, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_X_DESC = 1624, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DY_DESC = 1625, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DBN_SCALE_DESC = 1626, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DBN_BIAS_DESC = 1627, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_DY_SCALE_DESC = 1628, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_X_SCALE_DESC = 1629, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_BIAS = 1630, + + CUDNN_ATTR_RESAMPLE_MODE = 1700, + CUDNN_ATTR_RESAMPLE_COMP_TYPE = 1701, + CUDNN_ATTR_RESAMPLE_SPATIAL_DIMS = 1702, + CUDNN_ATTR_RESAMPLE_POST_PADDINGS = 1703, + CUDNN_ATTR_RESAMPLE_PRE_PADDINGS = 1704, + CUDNN_ATTR_RESAMPLE_STRIDES = 1705, + CUDNN_ATTR_RESAMPLE_WINDOW_DIMS = 1706, + CUDNN_ATTR_RESAMPLE_NAN_PROPAGATION = 1707, + CUDNN_ATTR_RESAMPLE_PADDING_MODE = 1708, + + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_XDESC = 1710, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_YDESC = 1711, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_IDXDESC = 1712, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_ALPHA CUDNN_DEPRECATED_ENUM = 1713, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_BETA CUDNN_DEPRECATED_ENUM = 1714, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_DESC = 1716, + + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DXDESC = 1720, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DYDESC = 1721, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_IDXDESC = 1722, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_ALPHA CUDNN_DEPRECATED_ENUM = 1723, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_BETA CUDNN_DEPRECATED_ENUM = 1724, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DESC = 1725, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_XDESC = 1726, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_YDESC = 1727, + + CUDNN_ATTR_OPERATION_CONCAT_AXIS = 1800, + 
CUDNN_ATTR_OPERATION_CONCAT_INPUT_DESCS = 1801, + CUDNN_ATTR_OPERATION_CONCAT_INPLACE_INDEX = 1802, + CUDNN_ATTR_OPERATION_CONCAT_OUTPUT_DESC = 1803, + + CUDNN_ATTR_OPERATION_SIGNAL_MODE = 1900, + CUDNN_ATTR_OPERATION_SIGNAL_FLAGDESC = 1901, + CUDNN_ATTR_OPERATION_SIGNAL_VALUE = 1902, + CUDNN_ATTR_OPERATION_SIGNAL_XDESC = 1903, + CUDNN_ATTR_OPERATION_SIGNAL_YDESC = 1904, + + CUDNN_ATTR_OPERATION_NORM_FWD_MODE = 2000, + CUDNN_ATTR_OPERATION_NORM_FWD_PHASE = 2001, + CUDNN_ATTR_OPERATION_NORM_FWD_XDESC = 2002, + CUDNN_ATTR_OPERATION_NORM_FWD_MEAN_DESC = 2003, + CUDNN_ATTR_OPERATION_NORM_FWD_INV_VARIANCE_DESC = 2004, + CUDNN_ATTR_OPERATION_NORM_FWD_SCALE_DESC = 2005, + CUDNN_ATTR_OPERATION_NORM_FWD_BIAS_DESC = 2006, + CUDNN_ATTR_OPERATION_NORM_FWD_EPSILON_DESC = 2007, + CUDNN_ATTR_OPERATION_NORM_FWD_EXP_AVG_FACTOR_DESC = 2008, + CUDNN_ATTR_OPERATION_NORM_FWD_INPUT_RUNNING_MEAN_DESC = 2009, + CUDNN_ATTR_OPERATION_NORM_FWD_INPUT_RUNNING_VAR_DESC = 2010, + CUDNN_ATTR_OPERATION_NORM_FWD_OUTPUT_RUNNING_MEAN_DESC = 2011, + CUDNN_ATTR_OPERATION_NORM_FWD_OUTPUT_RUNNING_VAR_DESC = 2012, + CUDNN_ATTR_OPERATION_NORM_FWD_YDESC = 2013, + CUDNN_ATTR_OPERATION_NORM_FWD_PEER_STAT_DESCS = 2014, + + CUDNN_ATTR_OPERATION_NORM_BWD_MODE = 2100, + CUDNN_ATTR_OPERATION_NORM_BWD_XDESC = 2101, + CUDNN_ATTR_OPERATION_NORM_BWD_MEAN_DESC = 2102, + CUDNN_ATTR_OPERATION_NORM_BWD_INV_VARIANCE_DESC = 2103, + CUDNN_ATTR_OPERATION_NORM_BWD_DYDESC = 2104, + CUDNN_ATTR_OPERATION_NORM_BWD_SCALE_DESC = 2105, + CUDNN_ATTR_OPERATION_NORM_BWD_EPSILON_DESC = 2106, + CUDNN_ATTR_OPERATION_NORM_BWD_DSCALE_DESC = 2107, + CUDNN_ATTR_OPERATION_NORM_BWD_DBIAS_DESC = 2108, + CUDNN_ATTR_OPERATION_NORM_BWD_DXDESC = 2109, + CUDNN_ATTR_OPERATION_NORM_BWD_PEER_STAT_DESCS = 2110, + + CUDNN_ATTR_OPERATION_RESHAPE_XDESC = 2200, + CUDNN_ATTR_OPERATION_RESHAPE_YDESC = 2201, + + CUDNN_ATTR_RNG_DISTRIBUTION = 2300, + CUDNN_ATTR_RNG_NORMAL_DIST_MEAN = 2301, + CUDNN_ATTR_RNG_NORMAL_DIST_STANDARD_DEVIATION = 2302, + 
CUDNN_ATTR_RNG_UNIFORM_DIST_MAXIMUM = 2303, + CUDNN_ATTR_RNG_UNIFORM_DIST_MINIMUM = 2304, + CUDNN_ATTR_RNG_BERNOULLI_DIST_PROBABILITY = 2305, + + CUDNN_ATTR_OPERATION_RNG_YDESC = 2310, + CUDNN_ATTR_OPERATION_RNG_SEED = 2311, + CUDNN_ATTR_OPERATION_RNG_DESC = 2312, + CUDNN_ATTR_OPERATION_RNG_OFFSET_DESC = 2313, +} cudnnBackendAttributeName_t; + +typedef enum { + CUDNN_TYPE_HANDLE = 0, + CUDNN_TYPE_DATA_TYPE, + CUDNN_TYPE_BOOLEAN, + CUDNN_TYPE_INT64, + CUDNN_TYPE_FLOAT, + CUDNN_TYPE_DOUBLE, + CUDNN_TYPE_VOID_PTR, + CUDNN_TYPE_CONVOLUTION_MODE, + CUDNN_TYPE_HEUR_MODE, + CUDNN_TYPE_KNOB_TYPE, + CUDNN_TYPE_NAN_PROPOGATION CUDNN_DEPRECATED_ENUM, + CUDNN_TYPE_NUMERICAL_NOTE, + CUDNN_TYPE_LAYOUT_TYPE, + CUDNN_TYPE_ATTRIB_NAME, + CUDNN_TYPE_POINTWISE_MODE, + CUDNN_TYPE_BACKEND_DESCRIPTOR, + CUDNN_TYPE_GENSTATS_MODE, + CUDNN_TYPE_BN_FINALIZE_STATS_MODE, + CUDNN_TYPE_REDUCTION_OPERATOR_TYPE, + CUDNN_TYPE_BEHAVIOR_NOTE, + CUDNN_TYPE_TENSOR_REORDERING_MODE, + CUDNN_TYPE_RESAMPLE_MODE, + CUDNN_TYPE_PADDING_MODE, + CUDNN_TYPE_INT32, + CUDNN_TYPE_CHAR, + CUDNN_TYPE_SIGNAL_MODE, + CUDNN_TYPE_FRACTION, + CUDNN_TYPE_NORM_MODE, + CUDNN_TYPE_NORM_FWD_PHASE, + CUDNN_TYPE_RNG_DISTRIBUTION +} cudnnBackendAttributeType_t; + +typedef enum { + CUDNN_BACKEND_POINTWISE_DESCRIPTOR = 0, + CUDNN_BACKEND_CONVOLUTION_DESCRIPTOR, + CUDNN_BACKEND_ENGINE_DESCRIPTOR, + CUDNN_BACKEND_ENGINECFG_DESCRIPTOR, + CUDNN_BACKEND_ENGINEHEUR_DESCRIPTOR, + CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR, + CUDNN_BACKEND_INTERMEDIATE_INFO_DESCRIPTOR, + CUDNN_BACKEND_KNOB_CHOICE_DESCRIPTOR, + CUDNN_BACKEND_KNOB_INFO_DESCRIPTOR, + CUDNN_BACKEND_LAYOUT_INFO_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_FORWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_BACKWARD_FILTER_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_BACKWARD_DATA_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_POINTWISE_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_GEN_STATS_DESCRIPTOR, + CUDNN_BACKEND_OPERATIONGRAPH_DESCRIPTOR, + 
CUDNN_BACKEND_VARIANT_PACK_DESCRIPTOR, + CUDNN_BACKEND_TENSOR_DESCRIPTOR, + CUDNN_BACKEND_MATMUL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_MATMUL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_BN_FINALIZE_STATISTICS_DESCRIPTOR, + CUDNN_BACKEND_REDUCTION_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_REDUCTION_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_BN_BWD_WEIGHTS_DESCRIPTOR, + CUDNN_BACKEND_RESAMPLE_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESAMPLE_FWD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESAMPLE_BWD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONCAT_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_SIGNAL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_NORM_FORWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_NORM_BACKWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESHAPE_DESCRIPTOR, + CUDNN_BACKEND_RNG_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RNG_DESCRIPTOR, +} cudnnBackendDescriptorType_t; + +typedef enum { + CUDNN_NUMERICAL_NOTE_TENSOR_CORE = 0, + CUDNN_NUMERICAL_NOTE_DOWN_CONVERT_INPUTS, + CUDNN_NUMERICAL_NOTE_REDUCED_PRECISION_REDUCTION, + CUDNN_NUMERICAL_NOTE_FFT, + CUDNN_NUMERICAL_NOTE_NONDETERMINISTIC, + CUDNN_NUMERICAL_NOTE_WINOGRAD, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_4x4, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_6x6, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_13x13, + CUDNN_NUMERICAL_NOTE_STRICT_NAN_PROP, + CUDNN_NUMERICAL_NOTE_TYPE_COUNT, +} cudnnBackendNumericalNote_t; + +typedef enum { + CUDNN_BEHAVIOR_NOTE_RUNTIME_COMPILATION = 0, + CUDNN_BEHAVIOR_NOTE_REQUIRES_FILTER_INT8x32_REORDER = 1, + CUDNN_BEHAVIOR_NOTE_REQUIRES_BIAS_INT8x32_REORDER = 2, + CUDNN_BEHAVIOR_NOTE_TYPE_COUNT, +} cudnnBackendBehaviorNote_t; + +typedef enum { + CUDNN_KNOB_TYPE_SPLIT_K CUDNN_DEPRECATED_ENUM = 0, + CUDNN_KNOB_TYPE_SWIZZLE = 1, + CUDNN_KNOB_TYPE_TILE_SIZE = 2, + CUDNN_KNOB_TYPE_USE_TEX CUDNN_DEPRECATED_ENUM = 3, + CUDNN_KNOB_TYPE_EDGE = 4, + CUDNN_KNOB_TYPE_KBLOCK CUDNN_DEPRECATED_ENUM = 5, + CUDNN_KNOB_TYPE_LDGA CUDNN_DEPRECATED_ENUM = 6, + CUDNN_KNOB_TYPE_LDGB CUDNN_DEPRECATED_ENUM = 7, + CUDNN_KNOB_TYPE_CHUNK_K CUDNN_DEPRECATED_ENUM = 8, + 
CUDNN_KNOB_TYPE_SPLIT_H CUDNN_DEPRECATED_ENUM = 9, + CUDNN_KNOB_TYPE_WINO_TILE CUDNN_DEPRECATED_ENUM = 10, + CUDNN_KNOB_TYPE_MULTIPLY = 11, + CUDNN_KNOB_TYPE_SPLIT_K_BUF = 12, + CUDNN_KNOB_TYPE_TILEK = 13, + CUDNN_KNOB_TYPE_STAGES = 14, + CUDNN_KNOB_TYPE_REDUCTION_MODE = 15, + CUDNN_KNOB_TYPE_CTA_SPLIT_K_MODE CUDNN_DEPRECATED_ENUM = 16, + CUDNN_KNOB_TYPE_SPLIT_K_SLC = 17, + CUDNN_KNOB_TYPE_IDX_MODE CUDNN_DEPRECATED_ENUM = 18, + CUDNN_KNOB_TYPE_SLICED CUDNN_DEPRECATED_ENUM = 19, + CUDNN_KNOB_TYPE_SPLIT_RS CUDNN_DEPRECATED_ENUM = 20, + CUDNN_KNOB_TYPE_SINGLEBUFFER CUDNN_DEPRECATED_ENUM = 21, + CUDNN_KNOB_TYPE_LDGC CUDNN_DEPRECATED_ENUM = 22, + CUDNN_KNOB_TYPE_SPECFILT = 23, + CUDNN_KNOB_TYPE_KERNEL_CFG = 24, + CUDNN_KNOB_TYPE_WORKSPACE = 25, + CUDNN_KNOB_TYPE_TILE_CGA CUDNN_DEPRECATED_ENUM = 26, + CUDNN_KNOB_TYPE_TILE_CGA_M = 27, + CUDNN_KNOB_TYPE_TILE_CGA_N = 28, + CUDNN_KNOB_TYPE_BLOCK_SIZE = 29, + CUDNN_KNOB_TYPE_OCCUPANCY = 30, + CUDNN_KNOB_TYPE_ARRAY_SIZE_PER_THREAD = 31, + CUDNN_KNOB_TYPE_NUM_C_PER_BLOCK CUDNN_DEPRECATED_ENUM = 32, + CUDNN_KNOB_TYPE_SPLIT_COLS = 33, + CUDNN_KNOB_TYPE_TILE_ROWS = 34, + CUDNN_KNOB_TYPE_TILE_COLS = 35, + CUDNN_KNOB_TYPE_LOAD_SIZE = 36, + CUDNN_KNOB_TYPE_COUNTS, +} cudnnBackendKnobType_t; + +typedef enum { + CUDNN_LAYOUT_TYPE_PREFERRED_NCHW = 0, + CUDNN_LAYOUT_TYPE_PREFERRED_NHWC = 1, + CUDNN_LAYOUT_TYPE_PREFERRED_PAD4CK = 2, + CUDNN_LAYOUT_TYPE_PREFERRED_PAD8CK = 3, + CUDNN_LAYOUT_TYPE_COUNT = 4, +} cudnnBackendLayoutType_t; + +typedef enum { + CUDNN_HEUR_MODE_INSTANT = 0, + CUDNN_HEUR_MODE_B = 1, + CUDNN_HEUR_MODE_FALLBACK = 2, + CUDNN_HEUR_MODE_A = 3, + CUDNN_HEUR_MODES_COUNT = 4, +} cudnnBackendHeurMode_t; + +typedef enum { + CUDNN_TENSOR_REORDERING_NONE = 0, + CUDNN_TENSOR_REORDERING_INT8x32 = 1, + CUDNN_TENSOR_REORDERING_F16x16 = 2, +} cudnnBackendTensorReordering_t; + +typedef enum { + CUDNN_ZERO_PAD = 0, + CUDNN_NEG_INF_PAD = 1, + CUDNN_EDGE_VAL_PAD = 2, +} cudnnPaddingMode_t; + +typedef enum { + CUDNN_LAYER_NORM = 0, + 
CUDNN_INSTANCE_NORM = 1, + CUDNN_BATCH_NORM = 2, + CUDNN_GROUP_NORM = 3, + CUDNN_RMS_NORM = 4, +} cudnnBackendNormMode_t; + +typedef enum { + CUDNN_NORM_FWD_INFERENCE = 0, + CUDNN_NORM_FWD_TRAINING = 1, +} cudnnBackendNormFwdPhase_t; + +cudnnStatus_t CUDNNWINAPI +cudnnBackendCreateDescriptor(cudnnBackendDescriptorType_t descriptorType, cudnnBackendDescriptor_t *descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendDestroyDescriptor(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendInitialize(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendFinalize(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendSetAttribute(cudnnBackendDescriptor_t descriptor, + cudnnBackendAttributeName_t attributeName, + cudnnBackendAttributeType_t attributeType, + int64_t elementCount, + const void *arrayOfElements); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendGetAttribute(cudnnBackendDescriptor_t const descriptor, + cudnnBackendAttributeName_t attributeName, + cudnnBackendAttributeType_t attributeType, + int64_t requestedElementCount, + int64_t *elementCount, + void *arrayOfElements); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendExecute(cudnnHandle_t handle, cudnnBackendDescriptor_t executionPlan, cudnnBackendDescriptor_t variantPack); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_GRAPH_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train.h new file mode 100644 index 0000000000000000000000000000000000000000..b16897b7626ebc9d22fd8031932800eb023e65df --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train.h @@ -0,0 +1,501 @@ +/* + * Copyright 2017-2022 NVIDIA Corporation. All rights reserved. 
+ * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. 
Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* + * cudnn_ops_train : cuDNN's basic training operations and algorithms. + */ + +#if !defined(CUDNN_OPS_TRAIN_H_) +#define CUDNN_OPS_TRAIN_H_ + +#include <cuda_runtime.h> +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops_infer.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_OPS_TRAIN_MAJOR 8 +#define CUDNN_OPS_TRAIN_MINOR 7 +#define CUDNN_OPS_TRAIN_PATCH 0 + +#if (CUDNN_OPS_TRAIN_MAJOR != CUDNN_MAJOR) || (CUDNN_OPS_TRAIN_MINOR != CUDNN_MINOR) || \ + (CUDNN_OPS_TRAIN_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN OPS TRAIN!!!
+#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* Function to perform backward softmax */ +cudnnStatus_t CUDNNWINAPI +cudnnSoftmaxBackward(cudnnHandle_t handle, + cudnnSoftmaxAlgorithm_t algo, + cudnnSoftmaxMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* Function to perform backward pooling */ +cudnnStatus_t CUDNNWINAPI +cudnnPoolingBackward(cudnnHandle_t handle, + const cudnnPoolingDescriptor_t poolingDesc, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* Function to perform backward activation */ +cudnnStatus_t CUDNNWINAPI +cudnnActivationBackward(cudnnHandle_t handle, + cudnnActivationDescriptor_t activationDesc, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* LRN cross-channel backward computation. 
Double parameters cast to tensor data type */ +cudnnStatus_t CUDNNWINAPI +cudnnLRNCrossChannelBackward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnLRNMode_t lrnMode, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +cudnnStatus_t CUDNNWINAPI +cudnnDivisiveNormalizationBackward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnDivNormMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, /* same desc for x, means, dy, temp, temp2 */ + const void *x, + const void *means, /* if NULL, means are assumed to be zero */ + const void *dy, + void *temp, + void *temp2, + const void *beta, + const cudnnTensorDescriptor_t dXdMeansDesc, /* same desc for dx, dMeans */ + void *dx, /* output x differential */ + void *dMeans); /* output means differential, can be NULL */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationForwardTrainingExWorkspaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t zDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + const cudnnActivationDescriptor_t activationDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationBackwardExWorkspaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnTensorDescriptor_t dzDesc, + const cudnnTensorDescriptor_t dxDesc, + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI 
+cudnnGetBatchNormalizationTrainingExReserveSpaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes); + +/* Computes y = BN(x). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationForwardTraining( + cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + + const cudnnTensorDescriptor_t xDesc, + const void *x, /* NxCxHxW */ + const cudnnTensorDescriptor_t yDesc, + void *y, /* NxCxHxW */ + + /* Shared desc for the next 6 tensors in the argument list. + Data type to be set as follows: + type = (typeOf(x) == double) ? double : float + Dimensions for this descriptor depend on normalization mode + - Spatial Normalization : tensors are expected to have dims 1xCx1x1 + (normalization is performed across NxHxW) + - Per-Activation Normalization : tensors are expected to have dims of 1xCxHxW + (normalization is performed across N) */ + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + + /* 'Gamma' and 'Beta' respectively in Ioffe and Szegedy's paper's notation */ + const void *bnScale, + const void *bnBias, + + /* MUST use factor=1 in the very first call of a complete training cycle. + Use a factor=1/(1+n) at N-th call to the function to get + Cumulative Moving Average (CMA) behavior + CMA[n] = (x[1]+...+x[n])/n + Since CMA[n+1] = (n*CMA[n]+x[n+1])/(n+1) = + ((n+1)*CMA[n]-CMA[n])/(n+1) + x[n+1]/(n+1) = + CMA[n]*(1-1/(n+1)) + x[n+1]*1/(n+1) */ + double exponentialAverageFactor, + + /* Used in Training phase only. + runningMean = newMean*factor + runningMean*(1-factor) */ + void *resultRunningMean, + /* Output in training mode, input in inference. 
Is the moving average + of variance[x] (factor is applied in the same way as for runningMean) */ + void *resultRunningVariance, + + /* Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ + double epsilon, + + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance); + +/* Computes y = relu(BN(x) + z). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationForwardTrainingEx( + cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t zDesc, + const void *zData, + const cudnnTensorDescriptor_t yDesc, + void *yData, + + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + const void *bnScale, + const void *bnBias, + + double exponentialAverageFactor, + void *resultRunningMean, + void *resultRunningVariance, + + /* Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ + double epsilon, + + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance, + + cudnnActivationDescriptor_t activationDesc, + void *workspace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* Performs backward pass of Batch Normalization layer. 
Returns x gradient, +* bnScale gradient and bnBias gradient */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationBackward(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, /* same desc for x, dx, dy */ + const void *x, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t dxDesc, + void *dx, + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const void *bnScale, /* bnBias doesn't affect backpropagation */ + /* scale and bias diff are not backpropagated below this layer */ + void *dBnScaleResult, + void *dBnBiasResult, + /* Same epsilon as forward pass */ + double epsilon, + + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance); + +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationBackwardEx(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t yDesc, + const void *yData, + const cudnnTensorDescriptor_t dyDesc, + const void *dyData, + const cudnnTensorDescriptor_t dzDesc, + void *dzData, + const cudnnTensorDescriptor_t dxDesc, + void *dxData, + + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const void *bnScaleData, + const void *bnBiasData, /* needed if there is activation */ + void *dBnScaleData, + void *dBnBiasData, + double epsilon, /* Same epsilon as forward pass */ + + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance, + cudnnActivationDescriptor_t activationDesc, + void *workSpace, + size_t 
workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationForwardTrainingWorkspaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t zDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t normScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t normMeanVarDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationBackwardWorkspaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnTensorDescriptor_t dzDesc, + const cudnnTensorDescriptor_t dxDesc, + const cudnnTensorDescriptor_t dNormScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t normMeanVarDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationTrainingReserveSpaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +/* Computes y = relu(Norm(x) + z). 
Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnNormalizationForwardTraining(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t normScaleBiasDesc, + const void *normScale, + const void *normBias, + double exponentialAverageFactor, + const cudnnTensorDescriptor_t normMeanVarDesc, + void *resultRunningMean, + void *resultRunningVariance, + /* Has to be >= 0. Should be the same in forward and backward functions. */ + double epsilon, + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance, + cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t zDesc, + const void *zData, + const cudnnTensorDescriptor_t yDesc, + void *yData, + void *workspace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnNormalizationBackward(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t yDesc, + const void *yData, + const cudnnTensorDescriptor_t dyDesc, + const void *dyData, + const cudnnTensorDescriptor_t dzDesc, + void *dzData, + const cudnnTensorDescriptor_t dxDesc, + void *dxData, + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dNormScaleBiasDesc, + const void *normScaleData, + const void *normBiasData, /* needed 
if there is activation */ + void *dNormScaleData, + void *dNormBiasData, + double epsilon, /* Same epsilon as forward pass */ + const cudnnTensorDescriptor_t normMeanVarDesc, + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance, + cudnnActivationDescriptor_t activationDesc, + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfGridGeneratorBackward(cudnnHandle_t handle, + const cudnnSpatialTransformerDescriptor_t stDesc, + const void *dgrid, + void *dtheta); + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfSamplerBackward(cudnnHandle_t handle, + cudnnSpatialTransformerDescriptor_t stDesc, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx, + const void *alphaDgrid, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *grid, + const void *betaDgrid, + void *dgrid); + +cudnnStatus_t CUDNNWINAPI +cudnnDropoutBackward(cudnnHandle_t handle, + const cudnnDropoutDescriptor_t dropoutDesc, + const cudnnTensorDescriptor_t dydesc, + const void *dy, + const cudnnTensorDescriptor_t dxdesc, + void *dx, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. 
+ */ +cudnnStatus_t CUDNNWINAPI +cudnnOpsTrainVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_OPS_TRAIN_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_version_v8.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_version_v8.h new file mode 100644 index 0000000000000000000000000000000000000000..a6ff223dbf1791512913b378c42f3695cf9bb86a --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_version_v8.h @@ -0,0 +1,70 @@ +/* + * Copyright 2017-2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/** + * \file: The master cuDNN version file. 
+ */ + +#ifndef CUDNN_VERSION_H_ +#define CUDNN_VERSION_H_ + +#define CUDNN_MAJOR 8 +#define CUDNN_MINOR 7 +#define CUDNN_PATCHLEVEL 0 + +#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL) + +/* cannot use constexpr here since this is a C-only file */ +/* Below is the max SM version this cuDNN library is aware of and supports natively */ + +#define CUDNN_MAX_SM_MAJOR_NUMBER 9 +#define CUDNN_MAX_SM_MINOR_NUMBER 0 +#define CUDNN_MAX_DEVICE_VERSION (CUDNN_MAX_SM_MAJOR_NUMBER * 100) + (CUDNN_MAX_SM_MINOR_NUMBER * 10) + +#endif /* CUDNN_VERSION_H */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_version_v9.h b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_version_v9.h new file mode 100644 index 0000000000000000000000000000000000000000..51964033f41c8bd5e94886634a0425288091e383 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_version_v9.h @@ -0,0 +1,70 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. 
IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/** + * \file: The master cuDNN version file. 
+ */ + +#ifndef CUDNN_VERSION_H_ +#define CUDNN_VERSION_H_ + +#define CUDNN_MAJOR 9 +#define CUDNN_MINOR 1 +#define CUDNN_PATCHLEVEL 0 + +#define CUDNN_VERSION (CUDNN_MAJOR * 10000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL) + +/* cannot use constexpr here since this is a C-only file */ +/* Below is the max SM version this cuDNN library is aware of and supports natively */ + +#define CUDNN_MAX_SM_MAJOR_NUMBER 9 +#define CUDNN_MAX_SM_MINOR_NUMBER 0 +#define CUDNN_MAX_DEVICE_VERSION (CUDNN_MAX_SM_MAJOR_NUMBER * 100 + CUDNN_MAX_SM_MINOR_NUMBER * 10) + +#endif /* CUDNN_VERSION_H */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/include/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/include/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..1080bc276c795a354489e8248f63bfb6924ed97a Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cudalibxt.h b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cudalibxt.h new file mode 100644 index 0000000000000000000000000000000000000000..94fcf4745fafa04f57678ba5ee64103f8ebd6444 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cudalibxt.h @@ -0,0 +1,97 @@ + /* Copyright 2013,2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * The source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
+ * + * The Licensed Deliverables contained herein are PROPRIETARY and + * CONFIDENTIAL to NVIDIA and are being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. 
Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/*! +* \file cudalibxt.h +* \brief Public header file for the NVIDIA library multi-GPU support structures +*/ + +#ifndef _CUDA_LIB_XT_H_ +#define _CUDA_LIB_XT_H_ +#include <cuda_runtime.h> + +#define CUDA_XT_DESCRIPTOR_VERSION 0x01000000 // This is added to CUDART_VERSION + +enum cudaXtCopyType_t { + LIB_XT_COPY_HOST_TO_DEVICE, + LIB_XT_COPY_DEVICE_TO_HOST, + LIB_XT_COPY_DEVICE_TO_DEVICE +} ; +typedef enum cudaXtCopyType_t cudaLibXtCopyType; + +enum libFormat_t { + LIB_FORMAT_CUFFT = 0x0, + LIB_FORMAT_UNDEFINED = 0x1 +}; + +typedef enum libFormat_t libFormat; + +#define MAX_CUDA_DESCRIPTOR_GPUS 64 + +struct cudaXtDesc_t{ + int version; //descriptor version + int nGPUs; //number of GPUs + int GPUs[MAX_CUDA_DESCRIPTOR_GPUS]; //array of device IDs + void *data[MAX_CUDA_DESCRIPTOR_GPUS]; //array of pointers to data, one per GPU + size_t size[MAX_CUDA_DESCRIPTOR_GPUS]; //array of data sizes, one per GPU + void *cudaXtState; //opaque CUDA utility structure +}; +typedef struct cudaXtDesc_t cudaXtDesc; + +struct cudaLibXtDesc_t{ + int version; //descriptor version + cudaXtDesc *descriptor; //multi-GPU memory descriptor + libFormat library; //which library recognizes the format + int subFormat; //library specific enumerator of sub formats + void *libDescriptor; //library specific descriptor e.g.
FFT transform plan object +}; +typedef struct cudaLibXtDesc_t cudaLibXtDesc; + + +#endif + diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufft.h b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufft.h new file mode 100644 index 0000000000000000000000000000000000000000..3d11c6a2579a7dba4e61dda86be5a2541d7d21b7 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufft.h @@ -0,0 +1,322 @@ + /* Copyright 2005-2021 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * The source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * The Licensed Deliverables contained herein are PROPRIETARY and + * CONFIDENTIAL to NVIDIA and are being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/*! 
+* \file cufft.h +* \brief Public header file for the NVIDIA CUDA FFT library (CUFFT) +*/ + +#ifndef _CUFFT_H_ +#define _CUFFT_H_ + + +#include "cuComplex.h" +#include "driver_types.h" +#include "library_types.h" + +#ifndef CUFFTAPI +#ifdef _WIN32 +#define CUFFTAPI __stdcall +#elif __GNUC__ >= 4 +#define CUFFTAPI __attribute__ ((visibility ("default"))) +#else +#define CUFFTAPI +#endif +#endif + +#ifdef __cplusplus +extern "C" { +#endif + +#define CUFFT_VER_MAJOR 10 +#define CUFFT_VER_MINOR 9 +#define CUFFT_VER_PATCH 0 +#define CUFFT_VER_BUILD 58 + +// cuFFT library version +// +// CUFFT_VERSION / 1000 - major version +// CUFFT_VERSION / 100 % 100 - minor version +// CUFFT_VERSION % 100 - patch level +#define CUFFT_VERSION 10900 + +// CUFFT API function return values +typedef enum cufftResult_t { + CUFFT_SUCCESS = 0x0, + CUFFT_INVALID_PLAN = 0x1, + CUFFT_ALLOC_FAILED = 0x2, + CUFFT_INVALID_TYPE = 0x3, + CUFFT_INVALID_VALUE = 0x4, + CUFFT_INTERNAL_ERROR = 0x5, + CUFFT_EXEC_FAILED = 0x6, + CUFFT_SETUP_FAILED = 0x7, + CUFFT_INVALID_SIZE = 0x8, + CUFFT_UNALIGNED_DATA = 0x9, + CUFFT_INCOMPLETE_PARAMETER_LIST = 0xA, + CUFFT_INVALID_DEVICE = 0xB, + CUFFT_PARSE_ERROR = 0xC, + CUFFT_NO_WORKSPACE = 0xD, + CUFFT_NOT_IMPLEMENTED = 0xE, + CUFFT_LICENSE_ERROR = 0x0F, + CUFFT_NOT_SUPPORTED = 0x10 + +} cufftResult; + +#define MAX_CUFFT_ERROR 0x11 + + +// CUFFT defines and supports the following data types + + +// cufftReal is a single-precision, floating-point real data type. +// cufftDoubleReal is a double-precision, real data type. +typedef float cufftReal; +typedef double cufftDoubleReal; + +// cufftComplex is a single-precision, floating-point complex data type that +// consists of interleaved real and imaginary components. +// cufftDoubleComplex is the double-precision equivalent. 
+typedef cuComplex cufftComplex; +typedef cuDoubleComplex cufftDoubleComplex; + +// CUFFT transform directions +#define CUFFT_FORWARD -1 // Forward FFT +#define CUFFT_INVERSE 1 // Inverse FFT + +// CUFFT supports the following transform types +typedef enum cufftType_t { + CUFFT_R2C = 0x2a, // Real to Complex (interleaved) + CUFFT_C2R = 0x2c, // Complex (interleaved) to Real + CUFFT_C2C = 0x29, // Complex to Complex, interleaved + CUFFT_D2Z = 0x6a, // Double to Double-Complex + CUFFT_Z2D = 0x6c, // Double-Complex to Double + CUFFT_Z2Z = 0x69 // Double-Complex to Double-Complex +} cufftType; + +// CUFFT supports the following data layouts +typedef enum cufftCompatibility_t { + CUFFT_COMPATIBILITY_FFTW_PADDING = 0x01 // The default value +} cufftCompatibility; + +#define CUFFT_COMPATIBILITY_DEFAULT CUFFT_COMPATIBILITY_FFTW_PADDING + +// +// structure definition used by the shim between old and new APIs +// +#define MAX_SHIM_RANK 3 + +// cufftHandle is a handle type used to store and access CUFFT plans. 
+typedef int cufftHandle; + + +cufftResult CUFFTAPI cufftPlan1d(cufftHandle *plan, + int nx, + cufftType type, + int batch); + +cufftResult CUFFTAPI cufftPlan2d(cufftHandle *plan, + int nx, int ny, + cufftType type); + +cufftResult CUFFTAPI cufftPlan3d(cufftHandle *plan, + int nx, int ny, int nz, + cufftType type); + +cufftResult CUFFTAPI cufftPlanMany(cufftHandle *plan, + int rank, + int *n, + int *inembed, int istride, int idist, + int *onembed, int ostride, int odist, + cufftType type, + int batch); + +cufftResult CUFFTAPI cufftMakePlan1d(cufftHandle plan, + int nx, + cufftType type, + int batch, + size_t *workSize); + +cufftResult CUFFTAPI cufftMakePlan2d(cufftHandle plan, + int nx, int ny, + cufftType type, + size_t *workSize); + +cufftResult CUFFTAPI cufftMakePlan3d(cufftHandle plan, + int nx, int ny, int nz, + cufftType type, + size_t *workSize); + +cufftResult CUFFTAPI cufftMakePlanMany(cufftHandle plan, + int rank, + int *n, + int *inembed, int istride, int idist, + int *onembed, int ostride, int odist, + cufftType type, + int batch, + size_t *workSize); + +cufftResult CUFFTAPI cufftMakePlanMany64(cufftHandle plan, + int rank, + long long int *n, + long long int *inembed, + long long int istride, + long long int idist, + long long int *onembed, + long long int ostride, long long int odist, + cufftType type, + long long int batch, + size_t * workSize); + +cufftResult CUFFTAPI cufftGetSizeMany64(cufftHandle plan, + int rank, + long long int *n, + long long int *inembed, + long long int istride, long long int idist, + long long int *onembed, + long long int ostride, long long int odist, + cufftType type, + long long int batch, + size_t *workSize); + + + + +cufftResult CUFFTAPI cufftEstimate1d(int nx, + cufftType type, + int batch, + size_t *workSize); + +cufftResult CUFFTAPI cufftEstimate2d(int nx, int ny, + cufftType type, + size_t *workSize); + +cufftResult CUFFTAPI cufftEstimate3d(int nx, int ny, int nz, + cufftType type, + size_t *workSize); + 
+cufftResult CUFFTAPI cufftEstimateMany(int rank, + int *n, + int *inembed, int istride, int idist, + int *onembed, int ostride, int odist, + cufftType type, + int batch, + size_t *workSize); + +cufftResult CUFFTAPI cufftCreate(cufftHandle * handle); + +cufftResult CUFFTAPI cufftGetSize1d(cufftHandle handle, + int nx, + cufftType type, + int batch, + size_t *workSize ); + +cufftResult CUFFTAPI cufftGetSize2d(cufftHandle handle, + int nx, int ny, + cufftType type, + size_t *workSize); + +cufftResult CUFFTAPI cufftGetSize3d(cufftHandle handle, + int nx, int ny, int nz, + cufftType type, + size_t *workSize); + +cufftResult CUFFTAPI cufftGetSizeMany(cufftHandle handle, + int rank, int *n, + int *inembed, int istride, int idist, + int *onembed, int ostride, int odist, + cufftType type, int batch, size_t *workArea); + +cufftResult CUFFTAPI cufftGetSize(cufftHandle handle, size_t *workSize); + +cufftResult CUFFTAPI cufftSetWorkArea(cufftHandle plan, void *workArea); + +cufftResult CUFFTAPI cufftSetAutoAllocation(cufftHandle plan, int autoAllocate); + +cufftResult CUFFTAPI cufftExecC2C(cufftHandle plan, + cufftComplex *idata, + cufftComplex *odata, + int direction); + +cufftResult CUFFTAPI cufftExecR2C(cufftHandle plan, + cufftReal *idata, + cufftComplex *odata); + +cufftResult CUFFTAPI cufftExecC2R(cufftHandle plan, + cufftComplex *idata, + cufftReal *odata); + +cufftResult CUFFTAPI cufftExecZ2Z(cufftHandle plan, + cufftDoubleComplex *idata, + cufftDoubleComplex *odata, + int direction); + +cufftResult CUFFTAPI cufftExecD2Z(cufftHandle plan, + cufftDoubleReal *idata, + cufftDoubleComplex *odata); + +cufftResult CUFFTAPI cufftExecZ2D(cufftHandle plan, + cufftDoubleComplex *idata, + cufftDoubleReal *odata); + + +// utility functions +cufftResult CUFFTAPI cufftSetStream(cufftHandle plan, + cudaStream_t stream); + +cufftResult CUFFTAPI cufftDestroy(cufftHandle plan); + +cufftResult CUFFTAPI cufftGetVersion(int *version); + +cufftResult CUFFTAPI 
cufftGetProperty(libraryPropertyType type, + int *value); + +#ifdef __cplusplus +} +#endif + +#endif /* _CUFFT_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufftXt.h b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufftXt.h new file mode 100644 index 0000000000000000000000000000000000000000..511f5c7445d2f5f4bf9b84ebd766099b41837627 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufftXt.h @@ -0,0 +1,269 @@ + + /* Copyright 2005-2021 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * The source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * The Licensed Deliverables contained herein are PROPRIETARY and + * CONFIDENTIAL to NVIDIA and are being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/*! +* \file cufftXt.h +* \brief Public header file for the NVIDIA CUDA FFT library (CUFFT) +*/ + +#ifndef _CUFFTXT_H_ +#define _CUFFTXT_H_ +#include "cudalibxt.h" +#include "cufft.h" + + +#ifndef CUFFTAPI +#ifdef _WIN32 +#define CUFFTAPI __stdcall +#else +#define CUFFTAPI +#endif +#endif + +#ifdef __cplusplus +extern "C" { +#endif + +// +// cufftXtSubFormat identifies the data layout of +// a memory descriptor owned by cufft. 
+// note that multi GPU cufft does not yet support out-of-place transforms +// + +typedef enum cufftXtSubFormat_t { + CUFFT_XT_FORMAT_INPUT = 0x00, //by default input is in linear order across GPUs + CUFFT_XT_FORMAT_OUTPUT = 0x01, //by default output is in scrambled order depending on transform + CUFFT_XT_FORMAT_INPLACE = 0x02, //by default inplace is input order, which is linear across GPUs + CUFFT_XT_FORMAT_INPLACE_SHUFFLED = 0x03, //shuffled output order after execution of the transform + CUFFT_XT_FORMAT_1D_INPUT_SHUFFLED = 0x04, //shuffled input order prior to execution of 1D transforms + CUFFT_XT_FORMAT_DISTRIBUTED_INPUT = 0x05, + CUFFT_XT_FORMAT_DISTRIBUTED_OUTPUT = 0x06, + CUFFT_FORMAT_UNDEFINED = 0x07 +} cufftXtSubFormat; + +// +// cufftXtCopyType specifies the type of copy for cufftXtMemcpy +// +typedef enum cufftXtCopyType_t { + CUFFT_COPY_HOST_TO_DEVICE = 0x00, + CUFFT_COPY_DEVICE_TO_HOST = 0x01, + CUFFT_COPY_DEVICE_TO_DEVICE = 0x02, + CUFFT_COPY_UNDEFINED = 0x03 +} cufftXtCopyType; + +// +// cufftXtQueryType specifies the type of query for cufftXtQueryPlan +// +typedef enum cufftXtQueryType_t { + CUFFT_QUERY_1D_FACTORS = 0x00, + CUFFT_QUERY_UNDEFINED = 0x01 +} cufftXtQueryType; + +typedef struct cufftXt1dFactors_t { + long long int size; + long long int stringCount; + long long int stringLength; + long long int substringLength; + long long int factor1; + long long int factor2; + long long int stringMask; + long long int substringMask; + long long int factor1Mask; + long long int factor2Mask; + int stringShift; + int substringShift; + int factor1Shift; + int factor2Shift; +} cufftXt1dFactors; + +// +// cufftXtWorkAreaPolicy specifies policy for cufftXtSetWorkAreaPolicy +// +typedef enum cufftXtWorkAreaPolicy_t { + CUFFT_WORKAREA_MINIMAL = 0, /* maximum reduction */ + CUFFT_WORKAREA_USER = 1, /* use workSize parameter as limit */ + CUFFT_WORKAREA_PERFORMANCE = 2, /* default - 1x overhead or more, maximum performance */ +} cufftXtWorkAreaPolicy; + +// 
multi-GPU routines +cufftResult CUFFTAPI cufftXtSetGPUs(cufftHandle handle, int nGPUs, int *whichGPUs); + +cufftResult CUFFTAPI cufftXtMalloc(cufftHandle plan, + cudaLibXtDesc ** descriptor, + cufftXtSubFormat format); + +cufftResult CUFFTAPI cufftXtMemcpy(cufftHandle plan, + void *dstPointer, + void *srcPointer, + cufftXtCopyType type); + +cufftResult CUFFTAPI cufftXtFree(cudaLibXtDesc *descriptor); + +cufftResult CUFFTAPI cufftXtSetWorkArea(cufftHandle plan, void **workArea); + +cufftResult CUFFTAPI cufftXtExecDescriptorC2C(cufftHandle plan, + cudaLibXtDesc *input, + cudaLibXtDesc *output, + int direction); + +cufftResult CUFFTAPI cufftXtExecDescriptorR2C(cufftHandle plan, + cudaLibXtDesc *input, + cudaLibXtDesc *output); + +cufftResult CUFFTAPI cufftXtExecDescriptorC2R(cufftHandle plan, + cudaLibXtDesc *input, + cudaLibXtDesc *output); + +cufftResult CUFFTAPI cufftXtExecDescriptorZ2Z(cufftHandle plan, + cudaLibXtDesc *input, + cudaLibXtDesc *output, + int direction); + +cufftResult CUFFTAPI cufftXtExecDescriptorD2Z(cufftHandle plan, + cudaLibXtDesc *input, + cudaLibXtDesc *output); + +cufftResult CUFFTAPI cufftXtExecDescriptorZ2D(cufftHandle plan, + cudaLibXtDesc *input, + cudaLibXtDesc *output); + +// Utility functions + +cufftResult CUFFTAPI cufftXtQueryPlan(cufftHandle plan, void *queryStruct, cufftXtQueryType queryType); + + +// callbacks + + +typedef enum cufftXtCallbackType_t { + CUFFT_CB_LD_COMPLEX = 0x0, + CUFFT_CB_LD_COMPLEX_DOUBLE = 0x1, + CUFFT_CB_LD_REAL = 0x2, + CUFFT_CB_LD_REAL_DOUBLE = 0x3, + CUFFT_CB_ST_COMPLEX = 0x4, + CUFFT_CB_ST_COMPLEX_DOUBLE = 0x5, + CUFFT_CB_ST_REAL = 0x6, + CUFFT_CB_ST_REAL_DOUBLE = 0x7, + CUFFT_CB_UNDEFINED = 0x8 + +} cufftXtCallbackType; + +typedef cufftComplex (*cufftCallbackLoadC)(void *dataIn, size_t offset, void *callerInfo, void *sharedPointer); +typedef cufftDoubleComplex (*cufftCallbackLoadZ)(void *dataIn, size_t offset, void *callerInfo, void *sharedPointer); +typedef cufftReal (*cufftCallbackLoadR)(void *dataIn, 
size_t offset, void *callerInfo, void *sharedPointer); +typedef cufftDoubleReal(*cufftCallbackLoadD)(void *dataIn, size_t offset, void *callerInfo, void *sharedPointer); + +typedef void (*cufftCallbackStoreC)(void *dataOut, size_t offset, cufftComplex element, void *callerInfo, void *sharedPointer); +typedef void (*cufftCallbackStoreZ)(void *dataOut, size_t offset, cufftDoubleComplex element, void *callerInfo, void *sharedPointer); +typedef void (*cufftCallbackStoreR)(void *dataOut, size_t offset, cufftReal element, void *callerInfo, void *sharedPointer); +typedef void (*cufftCallbackStoreD)(void *dataOut, size_t offset, cufftDoubleReal element, void *callerInfo, void *sharedPointer); + + +cufftResult CUFFTAPI cufftXtSetCallback(cufftHandle plan, void **callback_routine, cufftXtCallbackType cbType, void **caller_info); +cufftResult CUFFTAPI cufftXtClearCallback(cufftHandle plan, cufftXtCallbackType cbType); +cufftResult CUFFTAPI cufftXtSetCallbackSharedSize(cufftHandle plan, cufftXtCallbackType cbType, size_t sharedSize); + +cufftResult CUFFTAPI cufftXtMakePlanMany(cufftHandle plan, + int rank, + long long int *n, + long long int *inembed, + long long int istride, + long long int idist, + cudaDataType inputtype, + long long int *onembed, + long long int ostride, + long long int odist, + cudaDataType outputtype, + long long int batch, + size_t *workSize, + cudaDataType executiontype); + +cufftResult CUFFTAPI cufftXtGetSizeMany(cufftHandle plan, + int rank, + long long int *n, + long long int *inembed, + long long int istride, + long long int idist, + cudaDataType inputtype, + long long int *onembed, + long long int ostride, + long long int odist, + cudaDataType outputtype, + long long int batch, + size_t *workSize, + cudaDataType executiontype); + + +cufftResult CUFFTAPI cufftXtExec(cufftHandle plan, + void *input, + void *output, + int direction); + +cufftResult CUFFTAPI cufftXtExecDescriptor(cufftHandle plan, + cudaLibXtDesc *input, + cudaLibXtDesc *output, + int 
direction); + +cufftResult CUFFTAPI cufftXtSetWorkAreaPolicy(cufftHandle plan, cufftXtWorkAreaPolicy policy, size_t *workSize); + +typedef struct cufftBox3d_t { + size_t lower[3]; + size_t upper[3]; + size_t strides[3]; +} cufftBox3d; + +cufftResult CUFFTAPI cufftXtSetDistribution(cufftHandle plan, + const cufftBox3d *box_in, + const cufftBox3d *box_out); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufftw.h b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufftw.h new file mode 100644 index 0000000000000000000000000000000000000000..6f12b4e1ea68c5a186d73b5d943d2cba0218312f --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cufft/include/cufftw.h @@ -0,0 +1,454 @@ + + /* Copyright 2005-2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * The source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * The Licensed Deliverables contained herein are PROPRIETARY and + * CONFIDENTIAL to NVIDIA and are being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. 
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/*! 
+* \file cufftw.h +* \brief Public header file for the NVIDIA CUDA FFTW library (CUFFTW) +*/ + +#ifndef _CUFFTW_H_ +#define _CUFFTW_H_ + + +#include <stdio.h> +#include "cufft.h" + +#ifdef __cplusplus +extern "C" { +#endif + +// transform direction +#define FFTW_FORWARD -1 +#define FFTW_INVERSE 1 +#define FFTW_BACKWARD 1 + +// Planner flags + +#define FFTW_ESTIMATE 0x01 +#define FFTW_MEASURE 0x02 +#define FFTW_PATIENT 0x03 +#define FFTW_EXHAUSTIVE 0x04 +#define FFTW_WISDOM_ONLY 0x05 + +//Algorithm restriction flags + +#define FFTW_DESTROY_INPUT 0x08 +#define FFTW_PRESERVE_INPUT 0x0C +#define FFTW_UNALIGNED 0x10 + +// CUFFTW defines and supports the following data types + +// note if complex.h has been included we use the C99 complex types +#if !defined(FFTW_NO_Complex) && defined(_Complex_I) && defined (complex) + typedef double _Complex fftw_complex; + typedef float _Complex fftwf_complex; +#else + typedef double fftw_complex[2]; + typedef float fftwf_complex[2]; +#endif + +typedef void *fftw_plan; + +typedef void *fftwf_plan; + +typedef struct { + int n; + int is; + int os; +} fftw_iodim; + +typedef fftw_iodim fftwf_iodim; + +typedef struct { + ptrdiff_t n; + ptrdiff_t is; + ptrdiff_t os; +} fftw_iodim64; + +typedef fftw_iodim64 fftwf_iodim64; + + +// CUFFTW defines and supports the following double precision APIs + + +fftw_plan CUFFTAPI fftw_plan_dft_1d(int n, + fftw_complex *in, + fftw_complex *out, + int sign, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_2d(int n0, + int n1, + fftw_complex *in, + fftw_complex *out, + int sign, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_3d(int n0, + int n1, + int n2, + fftw_complex *in, + fftw_complex *out, + int sign, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft(int rank, + const int *n, + fftw_complex *in, + fftw_complex *out, + int sign, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_r2c_1d(int n, + double *in, + fftw_complex *out, + unsigned flags); + +fftw_plan CUFFTAPI
fftw_plan_dft_r2c_2d(int n0, + int n1, + double *in, + fftw_complex *out, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_r2c_3d(int n0, + int n1, + int n2, + double *in, + fftw_complex *out, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_r2c(int rank, + const int *n, + double *in, + fftw_complex *out, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_c2r_1d(int n, + fftw_complex *in, + double *out, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_c2r_2d(int n0, + int n1, + fftw_complex *in, + double *out, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_c2r_3d(int n0, + int n1, + int n2, + fftw_complex *in, + double *out, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_dft_c2r(int rank, + const int *n, + fftw_complex *in, + double *out, + unsigned flags); + + +fftw_plan CUFFTAPI fftw_plan_many_dft(int rank, + const int *n, + int batch, + fftw_complex *in, + const int *inembed, int istride, int idist, + fftw_complex *out, + const int *onembed, int ostride, int odist, + int sign, unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_many_dft_r2c(int rank, + const int *n, + int batch, + double *in, + const int *inembed, int istride, int idist, + fftw_complex *out, + const int *onembed, int ostride, int odist, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_many_dft_c2r(int rank, + const int *n, + int batch, + fftw_complex *in, + const int *inembed, int istride, int idist, + double *out, + const int *onembed, int ostride, int odist, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_guru_dft(int rank, const fftw_iodim *dims, + int batch_rank, const fftw_iodim *batch_dims, + fftw_complex *in, fftw_complex *out, + int sign, unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_guru_dft_r2c(int rank, const fftw_iodim *dims, + int batch_rank, const fftw_iodim *batch_dims, + double *in, fftw_complex *out, + unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_guru_dft_c2r(int rank, const fftw_iodim *dims, + int batch_rank, const fftw_iodim 
*batch_dims, + fftw_complex *in, double *out, + unsigned flags); + +void CUFFTAPI fftw_execute(const fftw_plan plan); + +void CUFFTAPI fftw_execute_dft(const fftw_plan plan, + fftw_complex *idata, + fftw_complex *odata); + +void CUFFTAPI fftw_execute_dft_r2c(const fftw_plan plan, + double *idata, + fftw_complex *odata); + +void CUFFTAPI fftw_execute_dft_c2r(const fftw_plan plan, + fftw_complex *idata, + double *odata); + + +// CUFFTW defines and supports the following single precision APIs + +fftwf_plan CUFFTAPI fftwf_plan_dft_1d(int n, + fftwf_complex *in, + fftwf_complex *out, + int sign, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_2d(int n0, + int n1, + fftwf_complex *in, + fftwf_complex *out, + int sign, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_3d(int n0, + int n1, + int n2, + fftwf_complex *in, + fftwf_complex *out, + int sign, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft(int rank, + const int *n, + fftwf_complex *in, + fftwf_complex *out, + int sign, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_r2c_1d(int n, + float *in, + fftwf_complex *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_r2c_2d(int n0, + int n1, + float *in, + fftwf_complex *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_r2c_3d(int n0, + int n1, + int n2, + float *in, + fftwf_complex *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_r2c(int rank, + const int *n, + float *in, + fftwf_complex *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_c2r_1d(int n, + fftwf_complex *in, + float *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_c2r_2d(int n0, + int n1, + fftwf_complex *in, + float *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_c2r_3d(int n0, + int n1, + int n2, + fftwf_complex *in, + float *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_dft_c2r(int rank, + const int *n, + fftwf_complex *in, + float *out, + unsigned flags); + +fftwf_plan CUFFTAPI 
fftwf_plan_many_dft(int rank, + const int *n, + int batch, + fftwf_complex *in, + const int *inembed, int istride, int idist, + fftwf_complex *out, + const int *onembed, int ostride, int odist, + int sign, unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_many_dft_r2c(int rank, + const int *n, + int batch, + float *in, + const int *inembed, int istride, int idist, + fftwf_complex *out, + const int *onembed, int ostride, int odist, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_many_dft_c2r(int rank, + const int *n, + int batch, + fftwf_complex *in, + const int *inembed, int istride, int idist, + float *out, + const int *onembed, int ostride, int odist, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_guru_dft(int rank, const fftwf_iodim *dims, + int batch_rank, const fftwf_iodim *batch_dims, + fftwf_complex *in, fftwf_complex *out, + int sign, unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_guru_dft_r2c(int rank, const fftwf_iodim *dims, + int batch_rank, const fftwf_iodim *batch_dims, + float *in, fftwf_complex *out, + unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_guru_dft_c2r(int rank, const fftwf_iodim *dims, + int batch_rank, const fftwf_iodim *batch_dims, + fftwf_complex *in, float *out, + unsigned flags); + +void CUFFTAPI fftwf_execute(const fftw_plan plan); + +void CUFFTAPI fftwf_execute_dft(const fftwf_plan plan, + fftwf_complex *idata, + fftwf_complex *odata); + +void CUFFTAPI fftwf_execute_dft_r2c(const fftwf_plan plan, + float *idata, + fftwf_complex *odata); + +void CUFFTAPI fftwf_execute_dft_c2r(const fftwf_plan plan, + fftwf_complex *idata, + float *odata); + +/// CUFFTW 64-bit Guru Interface +/// dp +fftw_plan CUFFTAPI fftw_plan_guru64_dft(int rank, const fftw_iodim64* dims, int batch_rank, const fftw_iodim64* batch_dims, fftw_complex* in, fftw_complex* out, int sign, unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_guru64_dft_r2c(int rank, const fftw_iodim64* dims, int batch_rank, const fftw_iodim64* batch_dims, double* in, 
fftw_complex* out, unsigned flags); + +fftw_plan CUFFTAPI fftw_plan_guru64_dft_c2r(int rank, const fftw_iodim64* dims, int batch_rank, const fftw_iodim64* batch_dims, fftw_complex* in, double* out, unsigned flags); + +/// sp +fftwf_plan CUFFTAPI fftwf_plan_guru64_dft(int rank, const fftwf_iodim64* dims, int batch_rank, const fftwf_iodim64* batch_dims, fftwf_complex* in, fftwf_complex* out, int sign, unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_guru64_dft_r2c(int rank, const fftwf_iodim64* dims, int batch_rank, const fftwf_iodim64* batch_dims, float* in, fftwf_complex* out, unsigned flags); + +fftwf_plan CUFFTAPI fftwf_plan_guru64_dft_c2r(int rank, const fftwf_iodim64* dims, int batch_rank, const fftwf_iodim64* batch_dims, fftwf_complex* in, float* out, unsigned flags); + +#ifdef _WIN32 +#define _CUFFTAPI(T) T CUFFTAPI +#else +#define _CUFFTAPI(T) CUFFTAPI T +#endif + +// CUFFTW defines and supports the following support APIs +_CUFFTAPI(void *) fftw_malloc(size_t n); + +_CUFFTAPI(void *) fftwf_malloc(size_t n); + +void CUFFTAPI fftw_free(void *pointer); + +void CUFFTAPI fftwf_free(void *pointer); + +void CUFFTAPI fftw_export_wisdom_to_file(FILE * output_file); + +void CUFFTAPI fftwf_export_wisdom_to_file(FILE * output_file); + +void CUFFTAPI fftw_import_wisdom_from_file(FILE * input_file); + +void CUFFTAPI fftwf_import_wisdom_from_file(FILE * input_file); + +void CUFFTAPI fftw_print_plan(const fftw_plan plan); + +void CUFFTAPI fftwf_print_plan(const fftwf_plan plan); + +void CUFFTAPI fftw_set_timelimit(double seconds); + +void CUFFTAPI fftwf_set_timelimit(double seconds); + +double CUFFTAPI fftw_cost(const fftw_plan plan); + +double CUFFTAPI fftwf_cost(const fftw_plan plan); + +void CUFFTAPI fftw_flops(const fftw_plan plan, double *add, double *mul, double *fma); + +void CUFFTAPI fftwf_flops(const fftw_plan plan, double *add, double *mul, double *fma); + +void CUFFTAPI fftw_destroy_plan(fftw_plan plan); + +void CUFFTAPI fftwf_destroy_plan(fftwf_plan plan); + 
+void CUFFTAPI fftw_cleanup(void); + +void CUFFTAPI fftwf_cleanup(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _CUFFTW_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/lib/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cufft/lib/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..1de9822df664bea6bd0c738f76b454bc5f4a9da6 Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cufft/lib/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cufft/lib/libcufftw.so.10 b/pllava/lib/python3.10/site-packages/nvidia/cufft/lib/libcufftw.so.10 new file mode 100644 index 0000000000000000000000000000000000000000..9b25e67296d2dca6182c2ab6d6f2360fb60e663b --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cufft/lib/libcufftw.so.10 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a592a5b2f359a9077550ee1fdadd58eb2cf9cc0bfab8fe397a374fb949da143 +size 1618440 diff --git a/pllava/lib/python3.10/site-packages/nvidia/curand/include/curand_uniform.h b/pllava/lib/python3.10/site-packages/nvidia/curand/include/curand_uniform.h new file mode 100644 index 0000000000000000000000000000000000000000..7a4af8afa328c186d9ea33a8c8226e19aba4793e --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/curand/include/curand_uniform.h @@ -0,0 +1,498 @@ + + /* Copyright 2010-2018 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * The source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
+ * + * The Licensed Deliverables contained herein are PROPRIETARY and + * CONFIDENTIAL to NVIDIA and are being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. 
Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + + +#if !defined(CURAND_UNIFORM_H_) +#define CURAND_UNIFORM_H_ + +/** + * \defgroup DEVICE Device API + * + * @{ + */ + +#ifndef __CUDACC_RTC__ +#include +#endif // __CUDACC_RTC__ + +#include "curand_mrg32k3a.h" +#include "curand_mtgp32_kernel.h" +#include "curand_philox4x32_x.h" + + +QUALIFIERS float _curand_uniform(unsigned int x) +{ + return x * CURAND_2POW32_INV + (CURAND_2POW32_INV/2.0f); +} + +QUALIFIERS float4 _curand_uniform4(uint4 x) +{ + float4 y; + y.x = x.x * CURAND_2POW32_INV + (CURAND_2POW32_INV/2.0f); + y.y = x.y * CURAND_2POW32_INV + (CURAND_2POW32_INV/2.0f); + y.z = x.z * CURAND_2POW32_INV + (CURAND_2POW32_INV/2.0f); + y.w = x.w * CURAND_2POW32_INV + (CURAND_2POW32_INV/2.0f); + return y; +} + +QUALIFIERS float _curand_uniform(unsigned long long x) +{ + unsigned int t; + t = (unsigned int)(x >> 32); + return t * CURAND_2POW32_INV + (CURAND_2POW32_INV/2.0f); +} + +QUALIFIERS double _curand_uniform_double(unsigned int x) +{ + return x * CURAND_2POW32_INV_DOUBLE + CURAND_2POW32_INV_DOUBLE; +} + +QUALIFIERS double _curand_uniform_double(unsigned long long x) +{ + return (x >> 11) * CURAND_2POW53_INV_DOUBLE + (CURAND_2POW53_INV_DOUBLE/2.0); +} + +QUALIFIERS double _curand_uniform_double_hq(unsigned int x, unsigned int y) +{ + unsigned long long z = (unsigned long long)x ^ + ((unsigned long long)y << (53 - 32)); + return z * CURAND_2POW53_INV_DOUBLE + (CURAND_2POW53_INV_DOUBLE/2.0); +} + +QUALIFIERS float curand_uniform(curandStateTest_t *state) +{ + return _curand_uniform(curand(state)); +} + +QUALIFIERS double curand_uniform_double(curandStateTest_t *state) +{ + return _curand_uniform_double(curand(state)); +} + +/** + * 
\brief Return a uniformly distributed float from an XORWOW generator. + * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the XORWOW generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * The implementation may use any number of calls to \p curand() to + * get enough random bits to create the return value. The current + * implementation uses one call. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0f and \p 1.0f + */ +QUALIFIERS float curand_uniform(curandStateXORWOW_t *state) +{ + return _curand_uniform(curand(state)); +} + +/** + * \brief Return a uniformly distributed double from an XORWOW generator. + * + * Return a uniformly distributed double between \p 0.0 and \p 1.0 + * from the XORWOW generator in \p state, increment position of generator. + * Output range excludes \p 0.0 but includes \p 1.0. Denormalized floating + * point outputs are never returned. + * + * The implementation may use any number of calls to \p curand() to + * get enough random bits to create the return value. The current + * implementation uses exactly two calls. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0 and \p 1.0 + */ +QUALIFIERS double curand_uniform_double(curandStateXORWOW_t *state) +{ + unsigned int x, y; + x = curand(state); + y = curand(state); + return _curand_uniform_double_hq(x, y); +} +/** + * \brief Return a uniformly distributed float from an MRG32k3a generator. + * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the MRG32k3a generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. 
+ * + * The implementation returns up to 23 bits of mantissa, with the minimum + * return value \f$ 2^{-32} \f$ + * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0f and \p 1.0f + */ +QUALIFIERS float curand_uniform(curandStateMRG32k3a_t *state) +{ + return ((float)(curand_MRG32k3a(state)*MRG32K3A_NORM)); +} + +/** + * \brief Return a uniformly distributed double from an MRG32k3a generator. + * + * Return a uniformly distributed double between \p 0.0 and \p 1.0 + * from the MRG32k3a generator in \p state, increment position of generator. + * Output range excludes \p 0.0 but includes \p 1.0. Denormalized floating + * point outputs are never returned. + * + * Note the implementation returns at most 32 random bits of mantissa as + * outlined in the seminal paper by L'Ecuyer. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0 and \p 1.0 + */ +QUALIFIERS double curand_uniform_double(curandStateMRG32k3a_t *state) +{ + return curand_MRG32k3a(state)*MRG32K3A_NORM; +} + + + +/** + * \brief Return a uniformly distributed tuple of 2 doubles from a Philox4_32_10 generator. + * + * Return a uniformly distributed 2 doubles (double2) between \p 0.0 and \p 1.0 + * from the Philox4_32_10 generator in \p state, increment position of generator by 4. + * Output range excludes \p 0.0 but includes \p 1.0. Denormalized floating + * point outputs are never returned.
+ * + * \param state - Pointer to state to update + * + * \return 2 uniformly distributed doubles between \p 0.0 and \p 1.0 + */ + +QUALIFIERS double2 curand_uniform2_double(curandStatePhilox4_32_10_t *state) +{ + uint4 _x; + double2 result; + _x = curand4(state); + result.x = _curand_uniform_double_hq(_x.x,_x.y); + result.y = _curand_uniform_double_hq(_x.z,_x.w); + return result; +} + + +// not a part of API +QUALIFIERS double4 curand_uniform4_double(curandStatePhilox4_32_10_t *state) +{ + uint4 _x, _y; + double4 result; + _x = curand4(state); + _y = curand4(state); + result.x = _curand_uniform_double_hq(_x.x,_x.y); + result.y = _curand_uniform_double_hq(_x.z,_x.w); + result.z = _curand_uniform_double_hq(_y.x,_y.y); + result.w = _curand_uniform_double_hq(_y.z,_y.w); + return result; +} + +/** + * \brief Return a uniformly distributed float from a Philox4_32_10 generator. + * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the Philox4_32_10 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0 and \p 1.0 + * + */ +QUALIFIERS float curand_uniform(curandStatePhilox4_32_10_t *state) +{ + return _curand_uniform(curand(state)); +} + +/** + * \brief Return a uniformly distributed tuple of 4 floats from a Philox4_32_10 generator. + * + * Return a uniformly distributed 4 floats between \p 0.0f and \p 1.0f + * from the Philox4_32_10 generator in \p state, increment position of generator by 4. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. 
+ * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0 and \p 1.0 + * + */ +QUALIFIERS float4 curand_uniform4(curandStatePhilox4_32_10_t *state) +{ + return _curand_uniform4(curand4(state)); +} + +/** + * \brief Return a uniformly distributed float from a MTGP32 generator. + * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the MTGP32 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0f and \p 1.0f + */ +QUALIFIERS float curand_uniform(curandStateMtgp32_t *state) +{ + return _curand_uniform(curand(state)); +} +/** + * \brief Return a uniformly distributed double from a MTGP32 generator. + * + * Return a uniformly distributed double between \p 0.0f and \p 1.0f + * from the MTGP32 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * Note that the implementation uses only 32 random bits to generate a single double + * precision value. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0f and \p 1.0f + */ +QUALIFIERS double curand_uniform_double(curandStateMtgp32_t *state) +{ + return _curand_uniform_double(curand(state)); +} + +/** + * \brief Return a uniformly distributed double from a Philox4_32_10 generator. + * + * Return a uniformly distributed double between \p 0.0f and \p 1.0f + * from the Philox4_32_10 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. 
+ * + * Note that the implementation uses only 32 random bits to generate a single double + * precision value. + * + * \p curand_uniform2_double() is recommended for higher quality uniformly distributed + * double precision values. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0f and \p 1.0f + */ + +QUALIFIERS double curand_uniform_double(curandStatePhilox4_32_10_t *state) +{ + return _curand_uniform_double(curand(state)); +} + + +/** + * \brief Return a uniformly distributed float from a Sobol32 generator. + * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the Sobol32 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand(). + * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0f and \p 1.0f + */ +QUALIFIERS float curand_uniform(curandStateSobol32_t *state) +{ + return _curand_uniform(curand(state)); +} + +/** + * \brief Return a uniformly distributed double from a Sobol32 generator. + * + * Return a uniformly distributed double between \p 0.0 and \p 1.0 + * from the Sobol32 generator in \p state, increment position of generator. + * Output range excludes \p 0.0 but includes \p 1.0. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand() + * to preserve the quasirandom properties of the sequence. + * + * Note that the implementation uses only 32 random bits to generate a single double + * precision value. 
+ * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0 and \p 1.0 + */ +QUALIFIERS double curand_uniform_double(curandStateSobol32_t *state) +{ + return _curand_uniform_double(curand(state)); +} +/** + * \brief Return a uniformly distributed float from a scrambled Sobol32 generator. + * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the scrambled Sobol32 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand(). + * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0f and \p 1.0f + */ +QUALIFIERS float curand_uniform(curandStateScrambledSobol32_t *state) +{ + return _curand_uniform(curand(state)); +} + +/** + * \brief Return a uniformly distributed double from a scrambled Sobol32 generator. + * + * Return a uniformly distributed double between \p 0.0 and \p 1.0 + * from the scrambled Sobol32 generator in \p state, increment position of generator. + * Output range excludes \p 0.0 but includes \p 1.0. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand() + * to preserve the quasirandom properties of the sequence. + * + * Note that the implementation uses only 32 random bits to generate a single double + * precision value. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0 and \p 1.0 + */ +QUALIFIERS double curand_uniform_double(curandStateScrambledSobol32_t *state) +{ + return _curand_uniform_double(curand(state)); +} +/** + * \brief Return a uniformly distributed float from a Sobol64 generator. 
+ * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the Sobol64 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand(). + * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0f and \p 1.0f + */ +QUALIFIERS float curand_uniform(curandStateSobol64_t *state) +{ + return _curand_uniform(curand(state)); +} + +/** + * \brief Return a uniformly distributed double from a Sobol64 generator. + * + * Return a uniformly distributed double between \p 0.0 and \p 1.0 + * from the Sobol64 generator in \p state, increment position of generator. + * Output range excludes \p 0.0 but includes \p 1.0. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand() + * to preserve the quasirandom properties of the sequence. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0 and \p 1.0 + */ +QUALIFIERS double curand_uniform_double(curandStateSobol64_t *state) +{ + return _curand_uniform_double(curand(state)); +} +/** + * \brief Return a uniformly distributed float from a scrambled Sobol64 generator. + * + * Return a uniformly distributed float between \p 0.0f and \p 1.0f + * from the scrambled Sobol64 generator in \p state, increment position of generator. + * Output range excludes \p 0.0f but includes \p 1.0f. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand(). 
+ * + * \param state - Pointer to state to update + * + * \return uniformly distributed float between \p 0.0f and \p 1.0f + */ +QUALIFIERS float curand_uniform(curandStateScrambledSobol64_t *state) +{ + return _curand_uniform(curand(state)); +} + +/** + * \brief Return a uniformly distributed double from a scrambled Sobol64 generator. + * + * Return a uniformly distributed double between \p 0.0 and \p 1.0 + * from the scrambled Sobol64 generator in \p state, increment position of generator. + * Output range excludes \p 0.0 but includes \p 1.0. Denormalized floating + * point outputs are never returned. + * + * The implementation is guaranteed to use a single call to \p curand() + * to preserve the quasirandom properties of the sequence. + * + * \param state - Pointer to state to update + * + * \return uniformly distributed double between \p 0.0 and \p 1.0 + */ +QUALIFIERS double curand_uniform_double(curandStateScrambledSobol64_t *state) +{ + return _curand_uniform_double(curand(state)); +} + +#endif // !defined(CURAND_UNIFORM_H_) diff --git a/pllava/lib/python3.10/site-packages/nvidia/curand/lib/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/curand/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cusolver/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cusolver/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..d4ab54e02d20862d0dbb6aa99199f56a47b2ddaf Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cusolver/__pycache__/__init__.cpython-310.pyc differ diff --git 
a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..e9f1fc22ece3a87abfa2f11a18638263b7783bb1 Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverDn.h b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverDn.h new file mode 100644 index 0000000000000000000000000000000000000000..fbf1534a79e3fdc727bc520d4e3e898d4ac38ef2 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverDn.h @@ -0,0 +1,4927 @@ +/* + * Copyright 2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. 
+ * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +/* cuSolverDN : Dense Linear Algebra Library + +*/ + +#if !defined(CUSOLVERDN_H_) + #define CUSOLVERDN_H_ + +struct cusolverDnContext; +typedef struct cusolverDnContext *cusolverDnHandle_t; + +struct syevjInfo; +typedef struct syevjInfo *syevjInfo_t; + +struct gesvdjInfo; +typedef struct gesvdjInfo *gesvdjInfo_t; + +//------------------------------------------------------ +// opaque cusolverDnIRS structure for IRS solver +struct cusolverDnIRSParams; +typedef struct cusolverDnIRSParams *cusolverDnIRSParams_t; + +struct cusolverDnIRSInfos; +typedef struct cusolverDnIRSInfos *cusolverDnIRSInfos_t; +//------------------------------------------------------ + +struct cusolverDnParams; +typedef struct cusolverDnParams *cusolverDnParams_t; + +typedef enum { + CUSOLVERDN_GETRF = 0, + CUSOLVERDN_POTRF = 1 +} cusolverDnFunction_t; + +typedef enum { + CUSOLVER_DETERMINISTIC_RESULTS = 1, + CUSOLVER_ALLOW_NON_DETERMINISTIC_RESULTS = 2 +} cusolverDeterministicMode_t; + + #include + + #include "cuComplex.h" /* import complex data type */ + #include "cublas_v2.h" + #include "cusolver_common.h" + + /*******************************************************************************/ + #ifdef __cplusplus +extern "C" { + #endif + + cusolverStatus_t CUSOLVERAPI cusolverDnCreate(cusolverDnHandle_t *handle); + cusolverStatus_t CUSOLVERAPI cusolverDnDestroy(cusolverDnHandle_t handle); + cusolverStatus_t CUSOLVERAPI + cusolverDnSetStream(cusolverDnHandle_t handle, cudaStream_t streamId); + cusolverStatus_t CUSOLVERAPI + cusolverDnGetStream(cusolverDnHandle_t handle, cudaStream_t *streamId); + + //============================================================ + // Deterministic Mode + //============================================================ + cusolverStatus_t CUSOLVERAPI cusolverDnSetDeterministicMode(cusolverDnHandle_t + handle, cusolverDeterministicMode_t mode); + cusolverStatus_t CUSOLVERAPI cusolverDnGetDeterministicMode(cusolverDnHandle_t + handle, cusolverDeterministicMode_t* 
mode); + + //============================================================ + // IRS headers + //============================================================ + + // ============================================================================= + // IRS helper function API + // ============================================================================= + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSParamsCreate(cusolverDnIRSParams_t *params_ptr); + + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSParamsDestroy(cusolverDnIRSParams_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSParamsSetRefinementSolver( + cusolverDnIRSParams_t params, + cusolverIRSRefinement_t refinement_solver); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSParamsSetSolverMainPrecision( + cusolverDnIRSParams_t params, + cusolverPrecType_t solver_main_precision); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSParamsSetSolverLowestPrecision( + cusolverDnIRSParams_t params, + cusolverPrecType_t solver_lowest_precision); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSParamsSetSolverPrecisions( + cusolverDnIRSParams_t params, + cusolverPrecType_t solver_main_precision, + cusolverPrecType_t solver_lowest_precision); + + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSParamsSetTol(cusolverDnIRSParams_t params, double val); + + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSParamsSetTolInner(cusolverDnIRSParams_t params, double val); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSParamsSetMaxIters( + cusolverDnIRSParams_t params, + cusolver_int_t maxiters); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSParamsSetMaxItersInner( + cusolverDnIRSParams_t params, + cusolver_int_t maxiters_inner); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSParamsGetMaxIters( + cusolverDnIRSParams_t params, + cusolver_int_t * maxiters); + + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSParamsEnableFallback(cusolverDnIRSParams_t params); + + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSParamsDisableFallback(cusolverDnIRSParams_t params); 
+ + // ============================================================================= + // cusolverDnIRSInfos prototypes + // ============================================================================= + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSInfosDestroy(cusolverDnIRSInfos_t infos); + + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSInfosCreate(cusolverDnIRSInfos_t *infos_ptr); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSInfosGetNiters( + cusolverDnIRSInfos_t infos, + cusolver_int_t * niters); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSInfosGetOuterNiters( + cusolverDnIRSInfos_t infos, + cusolver_int_t * outer_niters); + + cusolverStatus_t CUSOLVERAPI + cusolverDnIRSInfosRequestResidual(cusolverDnIRSInfos_t infos); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSInfosGetResidualHistory( + cusolverDnIRSInfos_t infos, + void ** residual_history); + + cusolverStatus_t CUSOLVERAPI cusolverDnIRSInfosGetMaxIters( + cusolverDnIRSInfos_t infos, + cusolver_int_t * maxiters); + + //============================================================ + // IRS functions API + //============================================================ + + /*******************************************************************************/ /* + * [ZZ, ZC, ZK, ZE, ZY, CC, CK, CE, CY, DD, DS, DH, DB, DX, SS, SH, SB, SX]gesv + * users API Prototypes */ + /*******************************************************************************/ + cusolverStatus_t CUSOLVERAPI cusolverDnZZgesv( + cusolverDnHandle_t handle, + cusolver_int_t n, + cusolver_int_t nrhs, + cuDoubleComplex * dA, + cusolver_int_t ldda, + cusolver_int_t * dipiv, + cuDoubleComplex * dB, + cusolver_int_t lddb, + cuDoubleComplex * dX, + cusolver_int_t lddx, + void * dWorkspace, + size_t lwork_bytes, + cusolver_int_t * iter, + cusolver_int_t * d_info); + + cusolverStatus_t CUSOLVERAPI cusolverDnZCgesv( + cusolverDnHandle_t handle, + cusolver_int_t n, + cusolver_int_t nrhs, + cuDoubleComplex * dA, + cusolver_int_t ldda, + cusolver_int_t * 
dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZKgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZEgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZYgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCCgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCEgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCKgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCYgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDDgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDSgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDHgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDBgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDXgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSSgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSHgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSBgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSXgesv(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+/*******************************************************************************/
+
+/*******************************************************************************/ /*
+ * [ZZ, ZC, ZK, ZE, ZY, CC, CK, CE, CY, DD, DS, DH, DB, DX, SS, SH, SB, SX]gesv_bufferSize
+ * users API Prototypes */
+/*******************************************************************************/
+cusolverStatus_t CUSOLVERAPI cusolverDnZZgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZCgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZKgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZEgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZYgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCCgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCKgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCEgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCYgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDDgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDSgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDHgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDBgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDXgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSSgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSHgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSBgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSXgesv_bufferSize(cusolverDnHandle_t handle, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, cusolver_int_t * dipiv, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+/*******************************************************************************/
+
+/*******************************************************************************/ /*
+ * [ZZ, ZC, ZK, ZE, ZY, CC, CK, CE, CY, DD, DS, DH, DB, DX, SS, SH, SB, SX]gels
+ * users API Prototypes */
+/*******************************************************************************/
+cusolverStatus_t CUSOLVERAPI cusolverDnZZgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZCgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZKgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZEgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZYgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCCgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCKgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCEgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCYgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDDgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDSgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDHgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDBgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDXgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSSgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSHgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSBgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSXgels(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * iter, cusolver_int_t * d_info);
+/*******************************************************************************/
+
+/*******************************************************************************/ /*
+ * [ZZ, ZC, ZK, ZE, ZY, CC, CK, CE, CY, DD, DS, DH, DB, DX, SS, SH, SB, SX]gels_bufferSize
+ * API prototypes */
+/*******************************************************************************/
+cusolverStatus_t CUSOLVERAPI cusolverDnZZgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZCgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZKgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZEgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZYgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuDoubleComplex * dA, cusolver_int_t ldda, cuDoubleComplex * dB, cusolver_int_t lddb, cuDoubleComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCCgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCKgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCEgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCYgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, cuComplex * dA, cusolver_int_t ldda, cuComplex * dB, cusolver_int_t lddb, cuComplex * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDDgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDSgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDHgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDBgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDXgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, double * dA, cusolver_int_t ldda, double * dB, cusolver_int_t lddb, double * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSSgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSHgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSBgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSXgels_bufferSize(cusolverDnHandle_t handle, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, float * dA, cusolver_int_t ldda, float * dB, cusolver_int_t lddb, float * dX, cusolver_int_t lddx, void * dWorkspace, size_t * lwork_bytes);
+/*******************************************************************************/
+
+/*******************************************************************************/ /*
+ * expert users API for IRS Prototypes
+ * */
+/*******************************************************************************/
+cusolverStatus_t CUSOLVERAPI cusolverDnIRSXgesv(cusolverDnHandle_t handle, cusolverDnIRSParams_t gesv_irs_params, cusolverDnIRSInfos_t gesv_irs_infos, cusolver_int_t n, cusolver_int_t nrhs, void * dA, cusolver_int_t ldda, void * dB, cusolver_int_t lddb, void * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * niters, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnIRSXgesv_bufferSize(cusolverDnHandle_t handle, cusolverDnIRSParams_t params, cusolver_int_t n, cusolver_int_t nrhs, size_t * lwork_bytes);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnIRSXgels(cusolverDnHandle_t handle, cusolverDnIRSParams_t gels_irs_params, cusolverDnIRSInfos_t gels_irs_infos, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, void * dA, cusolver_int_t ldda, void * dB, cusolver_int_t lddb, void * dX, cusolver_int_t lddx, void * dWorkspace, size_t lwork_bytes, cusolver_int_t * niters, cusolver_int_t * d_info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnIRSXgels_bufferSize(cusolverDnHandle_t handle, cusolverDnIRSParams_t params, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, size_t * lwork_bytes);
+/*******************************************************************************/
+
+/* Cholesky factorization and its solver */
+cusolverStatus_t CUSOLVERAPI cusolverDnSpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float * A, int lda, float * Workspace, int Lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double * A, int lda, double * Workspace, int Lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex * A, int lda, cuComplex * Workspace, int Lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex * A, int lda, cuDoubleComplex * Workspace, int Lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const float * A, int lda, float * B, int ldb, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const double * A, int lda, double * B, int ldb, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const cuComplex * A, int lda, cuComplex * B, int ldb, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const cuDoubleComplex *A, int lda, cuDoubleComplex * B, int ldb, int * devInfo);
+
+/* batched Cholesky factorization and its solver */
+cusolverStatus_t CUSOLVERAPI cusolverDnSpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float * Aarray[], int lda, int * infoArray, int batchSize);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double * Aarray[], int lda, int * infoArray, int batchSize);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex * Aarray[], int lda, int * infoArray, int batchSize);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex * Aarray[], int lda, int * infoArray, int batchSize);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, /* only support rhs = 1*/ float * A[], int lda, float * B[], int ldb, int * d_info, int batchSize);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, /* only support rhs = 1*/ double * A[], int lda, double * B[], int ldb, int * d_info, int batchSize);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, /* only support rhs = 1*/ cuComplex * A[], int lda, cuComplex * B[], int ldb, int * d_info, int batchSize);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, /* only support rhs = 1*/ cuDoubleComplex * A[], int lda, cuDoubleComplex * B[], int ldb, int * d_info, int batchSize);
+
+/* s.p.d. matrix inversion (POTRI) and auxiliary routines (TRTRI and LAUUM) */
+cusolverStatus_t CUSOLVERAPI cusolverDnSpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float * A, int lda, float * work, int lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double * A, int lda, double * work, int lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex * A, int lda, cuComplex * work, int lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex * A, int lda, cuDoubleComplex * work, int lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnXtrtri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, cublasDiagType_t diag, int64_t n, cudaDataType dataTypeA, void * A, int64_t lda, size_t * workspaceInBytesOnDevice, size_t * workspaceInBytesOnHost);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnXtrtri(cusolverDnHandle_t handle, cublasFillMode_t uplo, cublasDiagType_t diag, int64_t n, cudaDataType dataTypeA, void * A, int64_t lda, void * bufferOnDevice, size_t workspaceInBytesOnDevice, void * bufferOnHost, size_t workspaceInBytesOnHost, int * devInfo);
+
+/* lauum, auxiliar routine for s.p.d matrix inversion */
+cusolverStatus_t CUSOLVERAPI cusolverDnSlauum_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDlauum_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnClauum_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZlauum_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSlauum(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float * A, int lda, float * work, int lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDlauum(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double * A, int lda, double * work, int lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnClauum(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex * A, int lda, cuComplex * work, int lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZlauum(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex * A, int lda, cuDoubleComplex * work, int lwork, int * devInfo);
+
+/* LU Factorization */
+cusolverStatus_t CUSOLVERAPI cusolverDnSgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, float * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, double * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuComplex * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex * A, int lda, int * Lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSgetrf(cusolverDnHandle_t handle, int m, int n, float * A, int lda, float * Workspace, int * devIpiv, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDgetrf(cusolverDnHandle_t handle, int m, int n, double * A, int lda, double * Workspace, int * devIpiv, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCgetrf(cusolverDnHandle_t handle, int m, int n, cuComplex * A, int lda, cuComplex * Workspace, int * devIpiv, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZgetrf(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex * A, int lda, cuDoubleComplex * Workspace, int * devIpiv, int * devInfo);
+
+/* Row pivoting */
+cusolverStatus_t CUSOLVERAPI cusolverDnSlaswp(cusolverDnHandle_t handle, int n, float * A, int lda, int k1, int k2, const int * devIpiv, int incx);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDlaswp(cusolverDnHandle_t handle, int n, double * A, int lda, int k1, int k2, const int * devIpiv, int incx);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnClaswp(cusolverDnHandle_t handle, int n, cuComplex * A, int lda, int k1, int k2, const int * devIpiv, int incx);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZlaswp(cusolverDnHandle_t handle, int n, cuDoubleComplex * A, int lda, int k1, int k2, const int * devIpiv, int incx);
+
+/* LU solve */
+cusolverStatus_t CUSOLVERAPI cusolverDnSgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const float * A, int lda, const int * devIpiv, float * B, int ldb, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const double * A, int lda, const int * devIpiv, double * B, int ldb, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const cuComplex * A, int lda, const int * devIpiv, cuComplex * B, int ldb, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const cuDoubleComplex *A, int lda, const int * devIpiv, cuDoubleComplex * B, int ldb, int * devInfo);
+
+/* QR factorization */
+cusolverStatus_t CUSOLVERAPI cusolverDnSgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, float * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, double * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuComplex * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex * A, int lda, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSgeqrf(cusolverDnHandle_t handle, int m, int n, float * A, int lda, float * TAU, float * Workspace, int Lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDgeqrf(cusolverDnHandle_t handle, int m, int n, double * A, int lda, double * TAU, double * Workspace, int Lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCgeqrf(cusolverDnHandle_t handle, int m, int n, cuComplex * A, int lda, cuComplex * TAU, cuComplex * Workspace, int Lwork, int * devInfo);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZgeqrf(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex * A, int lda, cuDoubleComplex * TAU, cuDoubleComplex * Workspace, int Lwork, int * devInfo);
+
+/* generate unitary matrix Q from QR factorization */
+cusolverStatus_t CUSOLVERAPI cusolverDnSorgqr_bufferSize(cusolverDnHandle_t handle, int m, int n, int k, const float * A, int lda, const float * tau, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDorgqr_bufferSize(cusolverDnHandle_t handle, int m, int n, int k, const double * A, int lda, const double * tau, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCungqr_bufferSize(cusolverDnHandle_t handle, int m, int n, int k, const cuComplex * A, int lda, const cuComplex * tau, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZungqr_bufferSize(cusolverDnHandle_t handle, int m, int n, int k, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, int * lwork);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnSorgqr(cusolverDnHandle_t handle, int m, int n, int k, float * A, int lda, const float * tau, float * work, int lwork, int * info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnDorgqr(cusolverDnHandle_t handle, int m, int n, int k, double * A, int lda, const double * tau, double * work, int lwork, int * info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnCungqr(cusolverDnHandle_t handle, int m, int n, int k, cuComplex * A, int lda, const cuComplex * tau, cuComplex * work, int lwork, int * info);
+
+cusolverStatus_t CUSOLVERAPI cusolverDnZungqr(cusolverDnHandle_t handle, int m, int n, int k, cuDoubleComplex * A, int lda, const cuDoubleComplex *tau, cuDoubleComplex * work, int lwork, int * info);
+
+/* compute Q**T*b in solve min||A*x = b|| */
+cusolverStatus_t CUSOLVERAPI cusolverDnSormqr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const
float * A, + int lda, + const float * tau, + const float * C, + int ldc, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnDormqr_bufferSize( + cusolverDnHandle_t handle, + cublasSideMode_t side, + cublasOperation_t trans, + int m, + int n, + int k, + const double * A, + int lda, + const double * tau, + const double * C, + int ldc, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnCunmqr_bufferSize( + cusolverDnHandle_t handle, + cublasSideMode_t side, + cublasOperation_t trans, + int m, + int n, + int k, + const cuComplex * A, + int lda, + const cuComplex * tau, + const cuComplex * C, + int ldc, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnZunmqr_bufferSize( + cusolverDnHandle_t handle, + cublasSideMode_t side, + cublasOperation_t trans, + int m, + int n, + int k, + const cuDoubleComplex *A, + int lda, + const cuDoubleComplex *tau, + const cuDoubleComplex *C, + int ldc, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnSormqr( + cusolverDnHandle_t handle, + cublasSideMode_t side, + cublasOperation_t trans, + int m, + int n, + int k, + const float * A, + int lda, + const float * tau, + float * C, + int ldc, + float * work, + int lwork, + int * devInfo); + + cusolverStatus_t CUSOLVERAPI cusolverDnDormqr( + cusolverDnHandle_t handle, + cublasSideMode_t side, + cublasOperation_t trans, + int m, + int n, + int k, + const double * A, + int lda, + const double * tau, + double * C, + int ldc, + double * work, + int lwork, + int * devInfo); + + cusolverStatus_t CUSOLVERAPI cusolverDnCunmqr( + cusolverDnHandle_t handle, + cublasSideMode_t side, + cublasOperation_t trans, + int m, + int n, + int k, + const cuComplex * A, + int lda, + const cuComplex * tau, + cuComplex * C, + int ldc, + cuComplex * work, + int lwork, + int * devInfo); + + cusolverStatus_t CUSOLVERAPI cusolverDnZunmqr( + cusolverDnHandle_t handle, + cublasSideMode_t side, + cublasOperation_t trans, + int m, + int n, + int k, + const cuDoubleComplex *A, + int lda, + 
const cuDoubleComplex *tau, + cuDoubleComplex * C, + int ldc, + cuDoubleComplex * work, + int lwork, + int * devInfo); + + /* L*D*L**T,U*D*U**T factorization */ + cusolverStatus_t CUSOLVERAPI cusolverDnSsytrf_bufferSize( + cusolverDnHandle_t handle, + int n, + float * A, + int lda, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnDsytrf_bufferSize( + cusolverDnHandle_t handle, + int n, + double * A, + int lda, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnCsytrf_bufferSize( + cusolverDnHandle_t handle, + int n, + cuComplex * A, + int lda, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnZsytrf_bufferSize( + cusolverDnHandle_t handle, + int n, + cuDoubleComplex * A, + int lda, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnSsytrf( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + float * A, + int lda, + int * ipiv, + float * work, + int lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverDnDsytrf( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + double * A, + int lda, + int * ipiv, + double * work, + int lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverDnCsytrf( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + cuComplex * A, + int lda, + int * ipiv, + cuComplex * work, + int lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverDnZsytrf( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + cuDoubleComplex * A, + int lda, + int * ipiv, + cuDoubleComplex * work, + int lwork, + int * info); + + /* Symmetric indefinite solve (SYTRS) */ + cusolverStatus_t CUSOLVERAPI cusolverDnXsytrs_bufferSize( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int64_t n, + int64_t nrhs, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + const int64_t * ipiv, + cudaDataType dataTypeB, + void * B, + int64_t ldb, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXsytrs( + 
cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int64_t n, + int64_t nrhs, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + const int64_t * ipiv, + cudaDataType dataTypeB, + void * B, + int64_t ldb, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * info); + + /* Symmetric indefinite inversion (sytri) */ + cusolverStatus_t CUSOLVERAPI cusolverDnSsytri_bufferSize( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + float * A, + int lda, + const int * ipiv, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnDsytri_bufferSize( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + double * A, + int lda, + const int * ipiv, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnCsytri_bufferSize( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + cuComplex * A, + int lda, + const int * ipiv, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnZsytri_bufferSize( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + cuDoubleComplex * A, + int lda, + const int * ipiv, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnSsytri( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + float * A, + int lda, + const int * ipiv, + float * work, + int lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverDnDsytri( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + double * A, + int lda, + const int * ipiv, + double * work, + int lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverDnCsytri( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + cuComplex * A, + int lda, + const int * ipiv, + cuComplex * work, + int lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverDnZsytri( + cusolverDnHandle_t handle, + cublasFillMode_t uplo, + int n, + cuDoubleComplex * A, + int lda, + const int * ipiv, + cuDoubleComplex * work, + int lwork, + int * info); + + /* 
bidiagonal factorization */ + cusolverStatus_t CUSOLVERAPI cusolverDnSgebrd_bufferSize( + cusolverDnHandle_t handle, + int m, + int n, + int * Lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgebrd_bufferSize( + cusolverDnHandle_t handle, + int m, + int n, + int * Lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgebrd_bufferSize( + cusolverDnHandle_t handle, + int m, + int n, + int * Lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgebrd_bufferSize( + cusolverDnHandle_t handle, + int m, + int n, + int * Lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnSgebrd( + cusolverDnHandle_t handle, + int m, + int n, + float * A, + int lda, + float * D, + float * E, + float * TAUQ, + float * TAUP, + float * Work, + int Lwork, + int * devInfo); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgebrd( + cusolverDnHandle_t handle, + int m, + int n, + double * A, + int lda, + double * D, + double * E, + double * TAUQ, + double * TAUP, + double * Work, + int Lwork, + int * devInfo); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgebrd( + cusolverDnHandle_t handle, + int m, + int n, + cuComplex * A, + int lda, + float * D, + float * E, + cuComplex * TAUQ, + cuComplex * TAUP, + cuComplex * Work, + int Lwork, + int * devInfo); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgebrd( + cusolverDnHandle_t handle, + int m, + int n, + cuDoubleComplex * A, + int lda, + double * D, + double * E, + cuDoubleComplex * TAUQ, + cuDoubleComplex * TAUP, + cuDoubleComplex * Work, + int Lwork, + int * devInfo); + + /* generates one of the unitary matrices Q or P**T determined by GEBRD*/ + cusolverStatus_t CUSOLVERAPI cusolverDnSorgbr_bufferSize( + cusolverDnHandle_t handle, + cublasSideMode_t side, + int m, + int n, + int k, + const float * A, + int lda, + const float * tau, + int * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverDnDorgbr_bufferSize( + cusolverDnHandle_t handle, + cublasSideMode_t side, + int m, + int n, + int k, + const double * A, + int lda, + const double * tau, + int * lwork); + + 
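The `getrf`/`getrs` routines above follow LAPACK's factor-then-solve split: `getrf` overwrites `A` in place with its `L` and `U` factors and records the row swaps in `devIpiv`, and `getrs` replays those swaps on `B` (as `laswp` does) before two triangular solves. A minimal CPU sketch of that workflow in NumPy (illustrative only, not the cuSOLVER API; function and array names are hypothetical):

```python
import numpy as np

def getrf(A):
    """LU factorization with partial pivoting; returns packed LU and pivot rows."""
    A = A.astype(float).copy()
    n = A.shape[0]
    ipiv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot row, as getrf records in devIpiv
        A[[k, p]] = A[[p, k]]
        ipiv[k] = p
        A[k + 1:, k] /= A[k, k]               # multipliers stored below the diagonal (L)
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A, ipiv

def getrs(LU, ipiv, b):
    """Solve A x = b from the packed factors: row swaps, then two triangular solves."""
    b = b.astype(float).copy()
    n = LU.shape[0]
    for k in range(n - 1):                    # replay recorded row swaps (laswp)
        b[[k, ipiv[k]]] = b[[ipiv[k], k]]
    for k in range(n):                        # forward solve with unit-lower L
        b[k] -= LU[k, :k] @ b[:k]
    for k in range(n - 1, -1, -1):            # back substitution with U
        b[k] = (b[k] - LU[k, k + 1:] @ b[k + 1:]) / LU[k, k]
    return b

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])
LU, ipiv = getrf(A)
x = getrs(LU, ipiv, b)
assert np.allclose(A @ x, b)
```

Factoring once and reusing the factors is the point of the split: one `getrf` can serve many `getrs` calls with different right-hand sides.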
cusolverStatus_t CUSOLVERAPI cusolverDnCungbr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, const cuComplex *A, int lda, const cuComplex *tau, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZungbr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSorgbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, float *A, int lda, const float *tau, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDorgbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, double *A, int lda, const double *tau, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnCungbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, cuComplex *A, int lda, const cuComplex *tau, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZungbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, cuDoubleComplex *work, int lwork, int *info);

/* tridiagonal factorization */
cusolverStatus_t CUSOLVERAPI cusolverDnSsytrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const float *A, int lda, const float *d, const float *e, const float *tau, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDsytrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const double *A, int lda, const double *d, const double *e, const double *tau, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnChetrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const float *d, const float *e, const cuComplex *tau, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZhetrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const double *d, const double *e, const cuDoubleComplex *tau, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSsytrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, float *d, float *e, float *tau, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDsytrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, double *d, double *e, double *tau, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnChetrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, float *d, float *e, cuComplex *tau, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZhetrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, double *d, double *e, cuDoubleComplex *tau, cuDoubleComplex *work, int lwork, int *info);

/* generate unitary Q comes from sytrd */
cusolverStatus_t CUSOLVERAPI cusolverDnSorgtr_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const float *A, int lda, const float *tau, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDorgtr_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const double *A, int lda, const double *tau, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnCungtr_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const cuComplex *tau, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZungtr_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSorgtr(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, const float *tau, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDorgtr(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, const double *tau, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnCungtr(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, const cuComplex *tau, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZungtr(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, cuDoubleComplex *work, int lwork, int *info);

/* compute op(Q)*C or C*op(Q) where Q comes from sytrd */
cusolverStatus_t CUSOLVERAPI cusolverDnSormtr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, const float *A, int lda, const float *tau, const float *C, int ldc, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDormtr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, const double *A, int lda, const double *tau, const double *C, int ldc, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnCunmtr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, const cuComplex *A, int lda, const cuComplex *tau, const cuComplex *C, int ldc, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZunmtr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, const cuDoubleComplex *C, int ldc, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSormtr(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, float *A, int lda, float *tau, float *C, int ldc, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDormtr(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, double *A, int lda, double *tau, double *C, int ldc, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnCunmtr(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, cuComplex *A, int lda, cuComplex *tau, cuComplex *C, int ldc, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZunmtr(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, cuDoubleComplex *A, int lda, cuDoubleComplex *tau, cuDoubleComplex *C, int ldc, cuDoubleComplex *work, int lwork, int *info);

/* singular value decomposition, A = U * Sigma * V^H */
cusolverStatus_t CUSOLVERAPI cusolverDnSgesvd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDgesvd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnCgesvd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZgesvd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *lwork);
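The `gesvd` family computes the decomposition A = U * Sigma * V^H, writing the singular values to `S` in descending order and the (partial or full) singular vectors to `U` and `VT` according to the LAPACK-style `jobu`/`jobvt` job codes. A CPU analogue using NumPy's LAPACK-backed SVD, shown only to illustrate the output layout (not the cuSOLVER calls themselves):

```python
import numpy as np

# A tall m x n input, like the device matrix A passed to gesvd.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))

# Full SVD: U is m x m, VT is n x n, and S holds the min(m, n) singular
# values in descending order -- the layout gesvd writes to S, U, and VT.
U, S, VT = np.linalg.svd(A, full_matrices=True)

# Rebuild the rectangular Sigma and verify A = U * Sigma * V^H.
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(S)
assert np.allclose(U @ Sigma @ VT, A)
assert np.all(S[:-1] >= S[1:])   # singular values sorted descending
```

Note that the real-typed variants ignore `rwork`, while the complex variants (`Cgesvd`, `Zgesvd`) take a real-valued `rwork` scratch array alongside the complex `work` buffer sized by `gesvd_bufferSize`.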
cusolverStatus_t CUSOLVERAPI cusolverDnSgesvd(cusolverDnHandle_t handle, signed char jobu, signed char jobvt, int m, int n, float *A, int lda, float *S, float *U, int ldu, float *VT, int ldvt, float *work, int lwork, float *rwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDgesvd(cusolverDnHandle_t handle, signed char jobu, signed char jobvt, int m, int n, double *A, int lda, double *S, double *U, int ldu, double *VT, int ldvt, double *work, int lwork, double *rwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnCgesvd(cusolverDnHandle_t handle, signed char jobu, signed char jobvt, int m, int n, cuComplex *A, int lda, float *S, cuComplex *U, int ldu, cuComplex *VT, int ldvt, cuComplex *work, int lwork, float *rwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZgesvd(cusolverDnHandle_t handle, signed char jobu, signed char jobvt, int m, int n, cuDoubleComplex *A, int lda, double *S, cuDoubleComplex *U, int ldu, cuDoubleComplex *VT, int ldvt, cuDoubleComplex *work, int lwork, double *rwork, int *info);

/* standard symmetric eigenvalue solver, A*x = lambda*x, by divide-and-conquer */
cusolverStatus_t CUSOLVERAPI cusolverDnSsyevd_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const float *A, int lda, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevd_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const double *A, int lda, const double *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevd_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevd_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const double *W, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSsyevd(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, float *A, int lda, float *W, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevd(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, double *A, int lda, double *W, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevd(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuComplex *A, int lda, float *W, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevd(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, double *W, cuDoubleComplex *work, int lwork, int *info);

/* standard selective symmetric eigenvalue solver, A*x = lambda*x, by divide-and-conquer */
cusolverStatus_t CUSOLVERAPI cusolverDnSsyevdx_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const float *A, int lda, float vl, float vu, int il, int iu, int *meig, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevdx_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const double *A, int lda, double vl, double vu, int il, int iu, int *meig, const double *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevdx_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, float vl, float vu, int il, int iu, int *meig, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevdx_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, double vl, double vu, int il, int iu, int *meig, const double *W, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSsyevdx(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, float *A, int lda, float vl, float vu, int il, int iu, int *meig, float *W, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevdx(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, double *A, int lda, double vl, double vu, int il, int iu, int *meig, double *W, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevdx(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, cuComplex *A, int lda, float vl, float vu, int il, int iu, int *meig, float *W, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevdx(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, double vl, double vu, int il, int iu, int *meig, double *W, cuDoubleComplex *work, int lwork, int *info);

/* selective generalized symmetric eigenvalue solver, A*x = lambda*B*x, by divide-and-conquer */
cusolverStatus_t CUSOLVERAPI cusolverDnSsygvdx_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const float *A, int lda, const float *B, int ldb, float vl, float vu, int il, int iu, int *meig, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDsygvdx_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const double *A, int lda, const double *B, int ldb, double vl, double vu, int il, int iu, int *meig, const double *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnChegvdx_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const cuComplex *B, int ldb, float vl, float vu, int il, int iu, int *meig, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZhegvdx_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const cuDoubleComplex *B, int ldb, double vl, double vu, int il, int iu, int *meig, const double *W, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSsygvdx(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, float *A, int lda, float *B, int ldb, float vl, float vu, int il, int iu, int *meig, float *W, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDsygvdx(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, double *A, int lda, double *B, int ldb, double vl, double vu, int il, int iu, int *meig, double *W, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnChegvdx(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, cuComplex *A, int lda, cuComplex *B, int ldb, float vl, float vu, int il, int iu, int *meig, float *W, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZhegvdx(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cusolverEigRange_t range, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, cuDoubleComplex *B, int ldb, double vl, double vu, int il, int iu, int *meig, double *W, cuDoubleComplex *work, int lwork, int *info);

/* generalized symmetric eigenvalue solver, A*x = lambda*B*x, by divide-and-conquer */
cusolverStatus_t CUSOLVERAPI cusolverDnSsygvd_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const float *A, int lda, const float *B, int ldb, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnDsygvd_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const double *A, int lda, const double *B, int ldb, const double *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnChegvd_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const cuComplex *B, int ldb, const float *W, int *lwork);
cusolverStatus_t CUSOLVERAPI cusolverDnZhegvd_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const cuDoubleComplex *B, int ldb, const double *W, int *lwork);

cusolverStatus_t CUSOLVERAPI cusolverDnSsygvd(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, float *A, int lda, float *B, int ldb, float *W, float *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDsygvd(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, double *A, int lda, double *B, int ldb, double *W, double *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnChegvd(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuComplex *A, int lda, cuComplex *B, int ldb, float *W, cuComplex *work, int lwork, int *info);
cusolverStatus_t CUSOLVERAPI cusolverDnZhegvd(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, cuDoubleComplex *B, int ldb, double *W, cuDoubleComplex *work, int lwork, int *info);

cusolverStatus_t CUSOLVERAPI cusolverDnCreateSyevjInfo(syevjInfo_t *info);
cusolverStatus_t CUSOLVERAPI cusolverDnDestroySyevjInfo(syevjInfo_t info);
cusolverStatus_t CUSOLVERAPI cusolverDnXsyevjSetTolerance(syevjInfo_t info, double tolerance);
cusolverStatus_t CUSOLVERAPI cusolverDnXsyevjSetMaxSweeps(syevjInfo_t info, int max_sweeps);
cusolverStatus_t CUSOLVERAPI cusolverDnXsyevjSetSortEig(syevjInfo_t info, int sort_eig);
cusolverStatus_t CUSOLVERAPI cusolverDnXsyevjGetResidual(cusolverDnHandle_t handle, syevjInfo_t info, double *residual);
cusolverStatus_t CUSOLVERAPI cusolverDnXsyevjGetSweeps(cusolverDnHandle_t handle, syevjInfo_t info, int *executed_sweeps);

cusolverStatus_t CUSOLVERAPI cusolverDnSsyevjBatched_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const float *A, int lda, const float *W, int *lwork, syevjInfo_t params, int batchSize);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevjBatched_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const double *A, int lda, const double *W, int *lwork, syevjInfo_t params, int batchSize);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevjBatched_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const float *W, int *lwork, syevjInfo_t params, int batchSize);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevjBatched_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const double *W, int *lwork, syevjInfo_t params, int batchSize);

cusolverStatus_t CUSOLVERAPI cusolverDnSsyevjBatched(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, float *A, int lda, float *W, float *work, int lwork, int *info, syevjInfo_t params, int batchSize);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevjBatched(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, double *A, int lda, double *W, double *work, int lwork, int *info, syevjInfo_t params, int batchSize);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevjBatched(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuComplex *A, int lda, float *W, cuComplex *work, int lwork, int *info, syevjInfo_t params, int batchSize);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevjBatched(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, double *W, cuDoubleComplex *work, int lwork, int *info, syevjInfo_t params, int batchSize);

cusolverStatus_t CUSOLVERAPI cusolverDnSsyevj_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const float *A, int lda, const float *W, int *lwork, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevj_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const double *A, int lda, const double *W, int *lwork, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevj_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const float *W, int *lwork, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevj_bufferSize(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const double *W, int *lwork, syevjInfo_t params);

cusolverStatus_t CUSOLVERAPI cusolverDnSsyevj(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, float *A, int lda, float *W, float *work, int lwork, int *info, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnDsyevj(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, double *A, int lda, double *W, double *work, int lwork, int *info, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnCheevj(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuComplex *A, int lda, float *W, cuComplex *work, int lwork, int *info, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnZheevj(cusolverDnHandle_t handle, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, double *W, cuDoubleComplex *work, int lwork, int *info, syevjInfo_t params);

cusolverStatus_t CUSOLVERAPI cusolverDnSsygvj_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const float *A, int lda, const float *B, int ldb, const float *W, int *lwork, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnDsygvj_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const double *A, int lda, const double *B, int ldb, const double *W, int *lwork, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnChegvj_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const cuComplex *B, int ldb, const float *W, int *lwork, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnZhegvj_bufferSize(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const cuDoubleComplex *B, int ldb, const double *W, int *lwork, syevjInfo_t params);

cusolverStatus_t CUSOLVERAPI cusolverDnSsygvj(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, float *A, int lda, float *B, int ldb, float *W, float *work, int lwork, int *info, syevjInfo_t params);
cusolverStatus_t CUSOLVERAPI cusolverDnDsygvj(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n, double *A, int lda, double *B, int ldb, double *W, double *work, int lwork, int *info, syevjInfo_t params);

cusolverStatus_t CUSOLVERAPI cusolverDnChegvj(cusolverDnHandle_t handle, cusolverEigType_t itype, cusolverEigMode_t jobz, cublasFillMode_t uplo, int n,
cuComplex * A, + int lda, + cuComplex * B, + int ldb, + float * W, + cuComplex * work, + int lwork, + int * info, + syevjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnZhegvj( + cusolverDnHandle_t handle, + cusolverEigType_t itype, + cusolverEigMode_t jobz, + cublasFillMode_t uplo, + int n, + cuDoubleComplex * A, + int lda, + cuDoubleComplex * B, + int ldb, + double * W, + cuDoubleComplex * work, + int lwork, + int * info, + syevjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnCreateGesvdjInfo(gesvdjInfo_t *info); + + cusolverStatus_t CUSOLVERAPI cusolverDnDestroyGesvdjInfo(gesvdjInfo_t info); + + cusolverStatus_t CUSOLVERAPI + cusolverDnXgesvdjSetTolerance(gesvdjInfo_t info, double tolerance); + + cusolverStatus_t CUSOLVERAPI + cusolverDnXgesvdjSetMaxSweeps(gesvdjInfo_t info, int max_sweeps); + + cusolverStatus_t CUSOLVERAPI + cusolverDnXgesvdjSetSortEig(gesvdjInfo_t info, int sort_svd); + + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvdjGetResidual( + cusolverDnHandle_t handle, + gesvdjInfo_t info, + double * residual); + + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvdjGetSweeps( + cusolverDnHandle_t handle, + gesvdjInfo_t info, + int * executed_sweeps); + + cusolverStatus_t CUSOLVERAPI cusolverDnSgesvdjBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + const float * A, + int lda, + const float * S, + const float * U, + int ldu, + const float * V, + int ldv, + int * lwork, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgesvdjBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + const double * A, + int lda, + const double * S, + const double * U, + int ldu, + const double * V, + int ldv, + int * lwork, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgesvdjBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + const cuComplex * A, + int lda, + 
const float * S, + const cuComplex * U, + int ldu, + const cuComplex * V, + int ldv, + int * lwork, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgesvdjBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + const cuDoubleComplex *A, + int lda, + const double * S, + const cuDoubleComplex *U, + int ldu, + const cuDoubleComplex *V, + int ldv, + int * lwork, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnSgesvdjBatched( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + float * A, + int lda, + float * S, + float * U, + int ldu, + float * V, + int ldv, + float * work, + int lwork, + int * info, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgesvdjBatched( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + double * A, + int lda, + double * S, + double * U, + int ldu, + double * V, + int ldv, + double * work, + int lwork, + int * info, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgesvdjBatched( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + cuComplex * A, + int lda, + float * S, + cuComplex * U, + int ldu, + cuComplex * V, + int ldv, + cuComplex * work, + int lwork, + int * info, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgesvdjBatched( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int m, + int n, + cuDoubleComplex * A, + int lda, + double * S, + cuDoubleComplex * U, + int ldu, + cuDoubleComplex * V, + int ldv, + cuDoubleComplex * work, + int lwork, + int * info, + gesvdjInfo_t params, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnSgesvdj_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + const float * A, + int lda, + const float * S, + const float * U, + int ldu, + const float * V, + int ldv, 
+ int * lwork, + gesvdjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgesvdj_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + const double * A, + int lda, + const double * S, + const double * U, + int ldu, + const double * V, + int ldv, + int * lwork, + gesvdjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgesvdj_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + const cuComplex * A, + int lda, + const float * S, + const cuComplex * U, + int ldu, + const cuComplex * V, + int ldv, + int * lwork, + gesvdjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgesvdj_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + const cuDoubleComplex *A, + int lda, + const double * S, + const cuDoubleComplex *U, + int ldu, + const cuDoubleComplex *V, + int ldv, + int * lwork, + gesvdjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnSgesvdj( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + float * A, + int lda, + float * S, + float * U, + int ldu, + float * V, + int ldv, + float * work, + int lwork, + int * info, + gesvdjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgesvdj( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + double * A, + int lda, + double * S, + double * U, + int ldu, + double * V, + int ldv, + double * work, + int lwork, + int * info, + gesvdjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgesvdj( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + cuComplex * A, + int lda, + float * S, + cuComplex * U, + int ldu, + cuComplex * V, + int ldv, + cuComplex * work, + int lwork, + int * info, + gesvdjInfo_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgesvdj( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int econ, + int m, + int n, + 
cuDoubleComplex * A, + int lda, + double * S, + cuDoubleComplex * U, + int ldu, + cuDoubleComplex * V, + int ldv, + cuDoubleComplex * work, + int lwork, + int * info, + gesvdjInfo_t params); + + /* batched approximate SVD */ + + cusolverStatus_t CUSOLVERAPI cusolverDnSgesvdaStridedBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const float * d_A, + int lda, + long long int strideA, + const float * d_S, + long long int strideS, + const float * d_U, + int ldu, + long long int strideU, + const float * d_V, + int ldv, + long long int strideV, + int * lwork, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgesvdaStridedBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const double * d_A, + int lda, + long long int strideA, + const double * d_S, + long long int strideS, + const double * d_U, + int ldu, + long long int strideU, + const double * d_V, + int ldv, + long long int strideV, + int * lwork, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgesvdaStridedBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const cuComplex * d_A, + int lda, + long long int strideA, + const float * d_S, + long long int strideS, + const cuComplex * d_U, + int ldu, + long long int strideU, + const cuComplex * d_V, + int ldv, + long long int strideV, + int * lwork, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgesvdaStridedBatched_bufferSize( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const cuDoubleComplex *d_A, + int lda, + long long int strideA, + const double * d_S, + long long int strideS, + const cuDoubleComplex *d_U, + int ldu, + long long int strideU, + const cuDoubleComplex *d_V, + int ldv, + long long int strideV, + int * lwork, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnSgesvdaStridedBatched( + 
cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const float * d_A, + int lda, + long long int strideA, + float * d_S, + long long int strideS, + float * d_U, + int ldu, + long long int strideU, + float * d_V, + int ldv, + long long int strideV, + float * d_work, + int lwork, + int * d_info, + double * h_R_nrmF, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnDgesvdaStridedBatched( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const double * d_A, + int lda, + long long int strideA, + double * d_S, + long long int strideS, + double * d_U, + int ldu, + long long int strideU, + double * d_V, + int ldv, + long long int strideV, + double * d_work, + int lwork, + int * d_info, + double * h_R_nrmF, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnCgesvdaStridedBatched( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const cuComplex * d_A, + int lda, + long long int strideA, + float * d_S, + long long int strideS, + cuComplex * d_U, + int ldu, + long long int strideU, + cuComplex * d_V, + int ldv, + long long int strideV, + cuComplex * d_work, + int lwork, + int * d_info, + double * h_R_nrmF, + int batchSize); + + cusolverStatus_t CUSOLVERAPI cusolverDnZgesvdaStridedBatched( + cusolverDnHandle_t handle, + cusolverEigMode_t jobz, + int rank, + int m, + int n, + const cuDoubleComplex *d_A, + int lda, + long long int strideA, + double * d_S, + long long int strideS, + cuDoubleComplex * d_U, + int ldu, + long long int strideU, + cuDoubleComplex * d_V, + int ldv, + long long int strideV, + cuDoubleComplex * d_work, + int lwork, + int * d_info, + double * h_R_nrmF, + int batchSize); + + cusolverStatus_t CUSOLVERAPI + cusolverDnCreateParams(cusolverDnParams_t *params); + + cusolverStatus_t CUSOLVERAPI + cusolverDnDestroyParams(cusolverDnParams_t params); + + cusolverStatus_t CUSOLVERAPI cusolverDnSetAdvOptions( + cusolverDnParams_t params, + 
cusolverDnFunction_t function, + cusolverAlgMode_t algo); + + /* 64-bit API for POTRF */ + CUSOLVER_DEPRECATED(cusolverDnXpotrf_bufferSize) + cusolverStatus_t CUSOLVERAPI cusolverDnPotrf_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType computeType, + size_t * workspaceInBytes); + + CUSOLVER_DEPRECATED(cusolverDnXpotrf) + cusolverStatus_t CUSOLVERAPI cusolverDnPotrf( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType computeType, + void * pBuffer, + size_t workspaceInBytes, + int * info); + + /* 64-bit API for POTRS */ + CUSOLVER_DEPRECATED(cusolverDnXpotrs) + cusolverStatus_t CUSOLVERAPI cusolverDnPotrs( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cublasFillMode_t uplo, + int64_t n, + int64_t nrhs, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeB, + void * B, + int64_t ldb, + int * info); + + /* 64-bit API for GEQRF */ + CUSOLVER_DEPRECATED(cusolverDnXgeqrf_bufferSize) + cusolverStatus_t CUSOLVERAPI cusolverDnGeqrf_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeTau, + const void * tau, + cudaDataType computeType, + size_t * workspaceInBytes); + + CUSOLVER_DEPRECATED(cusolverDnXgeqrf) + cusolverStatus_t CUSOLVERAPI cusolverDnGeqrf( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeTau, + void * tau, + cudaDataType computeType, + void * pBuffer, + size_t workspaceInBytes, + int * info); + + /* 64-bit API for GETRF */ + CUSOLVER_DEPRECATED(cusolverDnXgetrf_bufferSize) + cusolverStatus_t CUSOLVERAPI cusolverDnGetrf_bufferSize( + 
cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType computeType, + size_t * workspaceInBytes); + + CUSOLVER_DEPRECATED(cusolverDnXgetrf) + cusolverStatus_t CUSOLVERAPI cusolverDnGetrf( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + int64_t * ipiv, + cudaDataType computeType, + void * pBuffer, + size_t workspaceInBytes, + int * info); + + /* 64-bit API for GETRS */ + CUSOLVER_DEPRECATED(cusolverDnXgetrs) + cusolverStatus_t CUSOLVERAPI cusolverDnGetrs( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cublasOperation_t trans, + int64_t n, + int64_t nrhs, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + const int64_t * ipiv, + cudaDataType dataTypeB, + void * B, + int64_t ldb, + int * info); + + /* 64-bit API for SYEVD */ + CUSOLVER_DEPRECATED(cusolverDnXsyevd_bufferSize) + cusolverStatus_t CUSOLVERAPI cusolverDnSyevd_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeW, + const void * W, + cudaDataType computeType, + size_t * workspaceInBytes); + + CUSOLVER_DEPRECATED(cusolverDnXsyevd) + cusolverStatus_t CUSOLVERAPI cusolverDnSyevd( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeW, + void * W, + cudaDataType computeType, + void * pBuffer, + size_t workspaceInBytes, + int * info); + + /* 64-bit API for SYEVDX */ + CUSOLVER_DEPRECATED(cusolverDnXsyevdx_bufferSize) + cusolverStatus_t CUSOLVERAPI cusolverDnSyevdx_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cusolverEigRange_t range, + 
cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + void * vl, + void * vu, + int64_t il, + int64_t iu, + int64_t * h_meig, + cudaDataType dataTypeW, + const void * W, + cudaDataType computeType, + size_t * workspaceInBytes); + + CUSOLVER_DEPRECATED(cusolverDnXsyevdx) + cusolverStatus_t CUSOLVERAPI cusolverDnSyevdx( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cusolverEigRange_t range, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + void * vl, + void * vu, + int64_t il, + int64_t iu, + int64_t * meig64, + cudaDataType dataTypeW, + void * W, + cudaDataType computeType, + void * pBuffer, + size_t workspaceInBytes, + int * info); + + /* 64-bit API for GESVD */ + CUSOLVER_DEPRECATED(cusolverDnXgesvd_bufferSize) + cusolverStatus_t CUSOLVERAPI cusolverDnGesvd_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + signed char jobu, + signed char jobvt, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeS, + const void * S, + cudaDataType dataTypeU, + const void * U, + int64_t ldu, + cudaDataType dataTypeVT, + const void * VT, + int64_t ldvt, + cudaDataType computeType, + size_t * workspaceInBytes); + + CUSOLVER_DEPRECATED(cusolverDnXgesvd) + cusolverStatus_t CUSOLVERAPI cusolverDnGesvd( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + signed char jobu, + signed char jobvt, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeS, + void * S, + cudaDataType dataTypeU, + void * U, + int64_t ldu, + cudaDataType dataTypeVT, + void * VT, + int64_t ldvt, + cudaDataType computeType, + void * pBuffer, + size_t workspaceInBytes, + int * info); + + /* + * new 64-bit API + */ + /* 64-bit API for POTRF */ + cusolverStatus_t CUSOLVERAPI cusolverDnXpotrf_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + 
cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType computeType, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXpotrf( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * info); + + /* 64-bit API for POTRS */ + cusolverStatus_t CUSOLVERAPI cusolverDnXpotrs( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cublasFillMode_t uplo, + int64_t n, + int64_t nrhs, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeB, + void * B, + int64_t ldb, + int * info); + + /* 64-bit API for GEQRF */ + cusolverStatus_t CUSOLVERAPI cusolverDnXgeqrf_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeTau, + const void * tau, + cudaDataType computeType, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXgeqrf( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeTau, + void * tau, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * info); + + /* 64-bit API for GETRF */ + cusolverStatus_t CUSOLVERAPI cusolverDnXgetrf_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType computeType, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + 
cusolverStatus_t CUSOLVERAPI cusolverDnXgetrf( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + int64_t * ipiv, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * info); + + /* 64-bit API for GETRS */ + cusolverStatus_t CUSOLVERAPI cusolverDnXgetrs( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cublasOperation_t trans, + int64_t n, + int64_t nrhs, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + const int64_t * ipiv, + cudaDataType dataTypeB, + void * B, + int64_t ldb, + int * info); + + /* 64-bit API for SYEVD */ + cusolverStatus_t CUSOLVERAPI cusolverDnXsyevd_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeW, + const void * W, + cudaDataType computeType, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXsyevd( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeW, + void * W, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * info); + + /* 64-bit API for SYEVDX */ + cusolverStatus_t CUSOLVERAPI cusolverDnXsyevdx_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cusolverEigRange_t range, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + void * vl, + void * vu, + int64_t il, + int64_t iu, + int64_t * h_meig, + cudaDataType dataTypeW, + const void * W, + cudaDataType computeType, + 
size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXsyevdx( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + cusolverEigRange_t range, + cublasFillMode_t uplo, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + void * vl, + void * vu, + int64_t il, + int64_t iu, + int64_t * meig64, + cudaDataType dataTypeW, + void * W, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * info); + + /* 64-bit API for GESVD */ + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvd_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + signed char jobu, + signed char jobvt, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeS, + const void * S, + cudaDataType dataTypeU, + const void * U, + int64_t ldu, + cudaDataType dataTypeVT, + const void * VT, + int64_t ldvt, + cudaDataType computeType, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvd( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + signed char jobu, + signed char jobvt, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeS, + void * S, + cudaDataType dataTypeU, + void * U, + int64_t ldu, + cudaDataType dataTypeVT, + void * VT, + int64_t ldvt, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * info); + + /* 64-bit API for GESVDP */ + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvdp_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + int econ, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeS, + const void * S, + 
cudaDataType dataTypeU, + const void * U, + int64_t ldu, + cudaDataType dataTypeV, + const void * V, + int64_t ldv, + cudaDataType computeType, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvdp( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverEigMode_t jobz, + int econ, + int64_t m, + int64_t n, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeS, + void * S, + cudaDataType dataTypeU, + void * U, + int64_t ldu, + cudaDataType dataTypeV, + void * V, + int64_t ldv, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * d_info, + double * h_err_sigma); + + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvdr_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + signed char jobu, + signed char jobv, + int64_t m, + int64_t n, + int64_t k, + int64_t p, + int64_t niters, + cudaDataType dataTypeA, + const void * A, + int64_t lda, + cudaDataType dataTypeSrand, + const void * Srand, + cudaDataType dataTypeUrand, + const void * Urand, + int64_t ldUrand, + cudaDataType dataTypeVrand, + const void * Vrand, + int64_t ldVrand, + cudaDataType computeType, + size_t * workspaceInBytesOnDevice, + size_t * workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXgesvdr( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + signed char jobu, + signed char jobv, + int64_t m, + int64_t n, + int64_t k, + int64_t p, + int64_t niters, + cudaDataType dataTypeA, + void * A, + int64_t lda, + cudaDataType dataTypeSrand, + void * Srand, + cudaDataType dataTypeUrand, + void * Urand, + int64_t ldUrand, + cudaDataType dataTypeVrand, + void * Vrand, + int64_t ldVrand, + cudaDataType computeType, + void * bufferOnDevice, + size_t workspaceInBytesOnDevice, + void * bufferOnHost, + size_t workspaceInBytesOnHost, + int * d_info); + + cusolverStatus_t 
CUSOLVERAPI cusolverDnXlarft_bufferSize( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverDirectMode_t direct, + cusolverStorevMode_t storev, + int64_t N, + int64_t K, + cudaDataType dataTypeV, + const void *d_V, + int64_t ldv, + cudaDataType dataTypeTau, + const void *d_tau, + cudaDataType dataTypeT, + void *d_T, + int64_t ldt, + cudaDataType computeType, + size_t *workspaceInBytesOnDevice, + size_t *workspaceInBytesOnHost); + + cusolverStatus_t CUSOLVERAPI cusolverDnXlarft( + cusolverDnHandle_t handle, + cusolverDnParams_t params, + cusolverDirectMode_t direct, + cusolverStorevMode_t storev, + int64_t N, + int64_t K, + cudaDataType dataTypeV, + const void *d_V, + int64_t ldv, + cudaDataType dataTypeTau, + const void *d_tau, + cudaDataType dataTypeT, + void *d_T, + int64_t ldt, + cudaDataType computeType, + void *bufferOnDevice, + size_t workspaceInBytesOnDevice, + void *bufferOnHost, + size_t workspaceInBytesOnHost); + + typedef void (*cusolverDnLoggerCallback_t)( + int logLevel, + const char *functionName, + const char *message); + + cusolverStatus_t CUSOLVERAPI + cusolverDnLoggerSetCallback(cusolverDnLoggerCallback_t callback); + + cusolverStatus_t CUSOLVERAPI cusolverDnLoggerSetFile(FILE *file); + + cusolverStatus_t CUSOLVERAPI cusolverDnLoggerOpenFile(const char *logFile); + + cusolverStatus_t CUSOLVERAPI cusolverDnLoggerSetLevel(int level); + + cusolverStatus_t CUSOLVERAPI cusolverDnLoggerSetMask(int mask); + + cusolverStatus_t CUSOLVERAPI cusolverDnLoggerForceDisable(); + + #if defined(__cplusplus) +} + #endif /* __cplusplus */ + +#endif /* !defined(CUDENSE_H_) */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverMg.h b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverMg.h new file mode 100644 index 0000000000000000000000000000000000000000..7702191f7253d66cf998016f6ae9f14149fbbb0b --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverMg.h @@ -0,0 +1,318 @@ 
+/* + * Copyright 2019 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 
2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(CUSOLVERMG_H_) + #define CUSOLVERMG_H_ + + #include + #include "cusolverDn.h" + + #if defined(__cplusplus) +extern "C" { + #endif /* __cplusplus */ + + struct cusolverMgContext; + typedef struct cusolverMgContext *cusolverMgHandle_t; + + /** + * \brief This enum decides how 1D device Ids (or process ranks) get mapped to + * a 2D grid. + */ + typedef enum { + + CUDALIBMG_GRID_MAPPING_ROW_MAJOR = 1, + CUDALIBMG_GRID_MAPPING_COL_MAJOR = 0 + + } cusolverMgGridMapping_t; + + /** \brief Opaque structure of the distributed grid */ + typedef void *cudaLibMgGrid_t; + /** \brief Opaque structure of the distributed matrix descriptor */ + typedef void *cudaLibMgMatrixDesc_t; + + cusolverStatus_t CUSOLVERAPI cusolverMgCreate(cusolverMgHandle_t *handle); + + cusolverStatus_t CUSOLVERAPI cusolverMgDestroy(cusolverMgHandle_t handle); + + cusolverStatus_t CUSOLVERAPI cusolverMgDeviceSelect( + cusolverMgHandle_t handle, + int nbDevices, + int deviceId[]); + + /** + * \brief Allocates resources related to the shared memory device grid. 
+ * \param[out] grid the opaque data structure that holds the grid + * \param[in] numRowDevices number of devices in the row + * \param[in] numColDevices number of devices in the column + * \param[in] deviceId This array of size height * width stores the + * device-ids of the 2D grid; each entry must correspond to a valid + * gpu or to -1 (denoting CPU). + * \param[in] mapping whether the 2D grid is in row/column major + * \returns the status code + */ + cusolverStatus_t CUSOLVERAPI cusolverMgCreateDeviceGrid( + cudaLibMgGrid_t * grid, + int32_t numRowDevices, + int32_t numColDevices, + const int32_t deviceId[], + cusolverMgGridMapping_t mapping); + + /** + * \brief Releases the allocated resources related to the distributed grid. + * \param[in] grid the opaque data structure that holds the distributed grid + * \returns the status code + */ + cusolverStatus_t CUSOLVERAPI cusolverMgDestroyGrid(cudaLibMgGrid_t grid); + + /** + * \brief Allocates resources related to the distributed matrix descriptor. + * \param[out] desc the opaque data structure that holds the descriptor + * \param[in] numRows number of total rows + * \param[in] numCols number of total columns + * \param[in] rowBlockSize row block size + * \param[in] colBlockSize column block size + * \param[in] dataType the data type of each element in cudaDataType + * \param[in] grid the opaque data structure of the distributed grid + * \returns the status code + */ + cusolverStatus_t CUSOLVERAPI cusolverMgCreateMatrixDesc( + cudaLibMgMatrixDesc_t *desc, + int64_t numRows, + int64_t numCols, + int64_t rowBlockSize, + int64_t colBlockSize, + cudaDataType dataType, + const cudaLibMgGrid_t grid); + + /** + * \brief Releases the allocated resources related to the distributed matrix + * descriptor.
\param[in] desc the opaque data structure that holds the + * descriptor + * \returns the status code + */ + cusolverStatus_t CUSOLVERAPI + cusolverMgDestroyMatrixDesc(cudaLibMgMatrixDesc_t desc); + + cusolverStatus_t CUSOLVERAPI cusolverMgSyevd_bufferSize( + cusolverMgHandle_t handle, + cusolverEigMode_t jobz, + cublasFillMode_t uplo, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + void * W, + cudaDataType dataTypeW, + cudaDataType computeType, + int64_t * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverMgSyevd( + cusolverMgHandle_t handle, + cusolverEigMode_t jobz, + cublasFillMode_t uplo, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + void * W, + cudaDataType dataTypeW, + cudaDataType computeType, + void * array_d_work[], + int64_t lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverMgGetrf_bufferSize( + cusolverMgHandle_t handle, + int M, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + int * array_d_IPIV[], + cudaDataType computeType, + int64_t * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverMgGetrf( + cusolverMgHandle_t handle, + int M, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + int * array_d_IPIV[], + cudaDataType computeType, + void * array_d_work[], + int64_t lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverMgGetrs_bufferSize( + cusolverMgHandle_t handle, + cublasOperation_t TRANS, + int N, + int NRHS, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + int * array_d_IPIV[], + void * array_d_B[], + int IB, + int JB, + cudaLibMgMatrixDesc_t descrB, + cudaDataType computeType, + int64_t * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverMgGetrs( + cusolverMgHandle_t handle, + cublasOperation_t TRANS, + int N, + int NRHS, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + int * array_d_IPIV[], + void * array_d_B[], + int IB, + int JB, + 
cudaLibMgMatrixDesc_t descrB, + cudaDataType computeType, + void * array_d_work[], + int64_t lwork, + int * info); + + cusolverStatus_t CUSOLVERAPI cusolverMgPotrf_bufferSize( + cusolverMgHandle_t handle, + cublasFillMode_t uplo, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + cudaDataType computeType, + int64_t * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverMgPotrf( + cusolverMgHandle_t handle, + cublasFillMode_t uplo, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + cudaDataType computeType, + void * array_d_work[], + int64_t lwork, + int * h_info); + + cusolverStatus_t CUSOLVERAPI cusolverMgPotrs_bufferSize( + cusolverMgHandle_t handle, + cublasFillMode_t uplo, + int n, + int nrhs, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + void * array_d_B[], + int IB, + int JB, + cudaLibMgMatrixDesc_t descrB, + cudaDataType computeType, + int64_t * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverMgPotrs( + cusolverMgHandle_t handle, + cublasFillMode_t uplo, + int n, + int nrhs, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + void * array_d_B[], + int IB, + int JB, + cudaLibMgMatrixDesc_t descrB, + cudaDataType computeType, + void * array_d_work[], + int64_t lwork, + int * h_info); + + cusolverStatus_t CUSOLVERAPI cusolverMgPotri_bufferSize( + cusolverMgHandle_t handle, + cublasFillMode_t uplo, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + cudaDataType computeType, + int64_t * lwork); + + cusolverStatus_t CUSOLVERAPI cusolverMgPotri( + cusolverMgHandle_t handle, + cublasFillMode_t uplo, + int N, + void * array_d_A[], + int IA, + int JA, + cudaLibMgMatrixDesc_t descrA, + cudaDataType computeType, + void * array_d_work[], + int64_t lwork, + int * h_info); + + #if defined(__cplusplus) +} + #endif /* __cplusplus */ + +#endif // CUSOLVERMG_H_ diff --git 
a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverRf.h b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverRf.h new file mode 100644 index 0000000000000000000000000000000000000000..c74e9ca6bb34a8d1214c10450c45e2599417636f --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverRf.h @@ -0,0 +1,339 @@ +/* + * Copyright 1993-2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(CUSOLVERRF_H_) + #define CUSOLVERRF_H_ + + #include "driver_types.h" + #include "cuComplex.h" + #include "cusolver_common.h" + + #if defined(__cplusplus) +extern "C" { + #endif /* __cplusplus */ + + /* CUSOLVERRF mode */ + typedef enum { + CUSOLVERRF_RESET_VALUES_FAST_MODE_OFF = 0, // default + CUSOLVERRF_RESET_VALUES_FAST_MODE_ON = 1 + } cusolverRfResetValuesFastMode_t; + + /* CUSOLVERRF matrix format */ + typedef enum { + CUSOLVERRF_MATRIX_FORMAT_CSR = 0, // default + CUSOLVERRF_MATRIX_FORMAT_CSC = 1 + } cusolverRfMatrixFormat_t; + + /* CUSOLVERRF unit diagonal */ + typedef enum { + CUSOLVERRF_UNIT_DIAGONAL_STORED_L = 0, // default + CUSOLVERRF_UNIT_DIAGONAL_STORED_U = 1, + CUSOLVERRF_UNIT_DIAGONAL_ASSUMED_L = 2, + CUSOLVERRF_UNIT_DIAGONAL_ASSUMED_U = 3 + } cusolverRfUnitDiagonal_t; + + /* CUSOLVERRF factorization algorithm */ + typedef enum { + CUSOLVERRF_FACTORIZATION_ALG0 = 0, // default + CUSOLVERRF_FACTORIZATION_ALG1 = 1, + CUSOLVERRF_FACTORIZATION_ALG2 = 2, + } cusolverRfFactorization_t; + + /* CUSOLVERRF triangular solve algorithm */ + typedef enum { + CUSOLVERRF_TRIANGULAR_SOLVE_ALG1 = 1, // default + CUSOLVERRF_TRIANGULAR_SOLVE_ALG2 = 2, + CUSOLVERRF_TRIANGULAR_SOLVE_ALG3 = 3 + } cusolverRfTriangularSolve_t; + + /* CUSOLVERRF numeric boost report */ + typedef enum { + CUSOLVERRF_NUMERIC_BOOST_NOT_USED = 0, // default + CUSOLVERRF_NUMERIC_BOOST_USED = 1 + } cusolverRfNumericBoostReport_t; + + /* Opaque structure holding CUSOLVERRF library common */ + struct cusolverRfCommon; + typedef struct cusolverRfCommon* cusolverRfHandle_t; + + /* CUSOLVERRF create (allocate memory) and destroy (free memory) in the handle + */ + cusolverStatus_t CUSOLVERAPI cusolverRfCreate(cusolverRfHandle_t* handle); + cusolverStatus_t CUSOLVERAPI cusolverRfDestroy(cusolverRfHandle_t handle); + + /* CUSOLVERRF set and get input format */ + cusolverStatus_t CUSOLVERAPI cusolverRfGetMatrixFormat( + cusolverRfHandle_t handle, + cusolverRfMatrixFormat_t* format, + 
cusolverRfUnitDiagonal_t* diag); + + cusolverStatus_t CUSOLVERAPI cusolverRfSetMatrixFormat( + cusolverRfHandle_t handle, + cusolverRfMatrixFormat_t format, + cusolverRfUnitDiagonal_t diag); + + /* CUSOLVERRF set and get numeric properties */ + cusolverStatus_t CUSOLVERAPI cusolverRfSetNumericProperties( + cusolverRfHandle_t handle, + double zero, + double boost); + + cusolverStatus_t CUSOLVERAPI cusolverRfGetNumericProperties( + cusolverRfHandle_t handle, + double* zero, + double* boost); + + cusolverStatus_t CUSOLVERAPI cusolverRfGetNumericBoostReport( + cusolverRfHandle_t handle, + cusolverRfNumericBoostReport_t* report); + + /* CUSOLVERRF choose the triangular solve algorithm */ + cusolverStatus_t CUSOLVERAPI cusolverRfSetAlgs( + cusolverRfHandle_t handle, + cusolverRfFactorization_t factAlg, + cusolverRfTriangularSolve_t solveAlg); + + cusolverStatus_t CUSOLVERAPI cusolverRfGetAlgs( + cusolverRfHandle_t handle, + cusolverRfFactorization_t* factAlg, + cusolverRfTriangularSolve_t* solveAlg); + + /* CUSOLVERRF set and get fast mode */ + cusolverStatus_t CUSOLVERAPI cusolverRfGetResetValuesFastMode( + cusolverRfHandle_t handle, + cusolverRfResetValuesFastMode_t* fastMode); + + cusolverStatus_t CUSOLVERAPI cusolverRfSetResetValuesFastMode( + cusolverRfHandle_t handle, + cusolverRfResetValuesFastMode_t fastMode); + + /*** Non-Batched Routines ***/ + /* CUSOLVERRF setup of internal structures from host or device memory */ + cusolverStatus_t CUSOLVERAPI + cusolverRfSetupHost(/* Input (in the host memory) */ + int n, + int nnzA, + int* h_csrRowPtrA, + int* h_csrColIndA, + double* h_csrValA, + int nnzL, + int* h_csrRowPtrL, + int* h_csrColIndL, + double* h_csrValL, + int nnzU, + int* h_csrRowPtrU, + int* h_csrColIndU, + double* h_csrValU, + int* h_P, + int* h_Q, + /* Output */ + cusolverRfHandle_t handle); + + cusolverStatus_t CUSOLVERAPI + cusolverRfSetupDevice(/* Input (in the device memory) */ + int n, + int nnzA, + int* csrRowPtrA, + int* csrColIndA, + double* 
csrValA, + int nnzL, + int* csrRowPtrL, + int* csrColIndL, + double* csrValL, + int nnzU, + int* csrRowPtrU, + int* csrColIndU, + double* csrValU, + int* P, + int* Q, + /* Output */ + cusolverRfHandle_t handle); + + /* CUSOLVERRF update the matrix values (assuming the reordering, pivoting + and consequently the sparsity pattern of L and U did not change), + and zero out the remaining values. */ + cusolverStatus_t CUSOLVERAPI + cusolverRfResetValues(/* Input (in the device memory) */ + int n, + int nnzA, + int* csrRowPtrA, + int* csrColIndA, + double* csrValA, + int* P, + int* Q, + /* Output */ + cusolverRfHandle_t handle); + + /* CUSOLVERRF analysis (for parallelism) */ + cusolverStatus_t CUSOLVERAPI cusolverRfAnalyze(cusolverRfHandle_t handle); + + /* CUSOLVERRF re-factorization (for parallelism) */ + cusolverStatus_t CUSOLVERAPI cusolverRfRefactor(cusolverRfHandle_t handle); + + /* CUSOLVERRF extraction: Get L & U packed into a single matrix M */ + cusolverStatus_t CUSOLVERAPI + cusolverRfAccessBundledFactorsDevice(/* Input */ + cusolverRfHandle_t handle, + /* Output (in the host memory) */ + int* nnzM, + /* Output (in the device memory) */ + int** Mp, + int** Mi, + double** Mx); + + cusolverStatus_t CUSOLVERAPI + cusolverRfExtractBundledFactorsHost(/* Input */ + cusolverRfHandle_t handle, + /* Output (in the host memory) */ + int* h_nnzM, + int** h_Mp, + int** h_Mi, + double** h_Mx); + + /* CUSOLVERRF extraction: Get L & U individually */ + cusolverStatus_t CUSOLVERAPI + cusolverRfExtractSplitFactorsHost(/* Input */ + cusolverRfHandle_t handle, + /* Output (in the host memory) */ + int* h_nnzL, + int** h_csrRowPtrL, + int** h_csrColIndL, + double** h_csrValL, + int* h_nnzU, + int** h_csrRowPtrU, + int** h_csrColIndU, + double** h_csrValU); + + /* CUSOLVERRF (forward and backward triangular) solves */ + cusolverStatus_t CUSOLVERAPI + cusolverRfSolve(/* Input (in the device memory) */ + cusolverRfHandle_t handle, + int* P, + int* Q, + int nrhs, // only nrhs=1 is 
supported + double* Temp, // of size ldt*nrhs (ldt>=n) + int ldt, + /* Input/Output (in the device memory) */ + double* XF, + /* Input */ + int ldxf); + + /*** Batched Routines ***/ + /* CUSOLVERRF-batch setup of internal structures from host */ + cusolverStatus_t CUSOLVERAPI + cusolverRfBatchSetupHost(/* Input (in the host memory)*/ + int batchSize, + int n, + int nnzA, + int* h_csrRowPtrA, + int* h_csrColIndA, + double* h_csrValA_array[], + int nnzL, + int* h_csrRowPtrL, + int* h_csrColIndL, + double* h_csrValL, + int nnzU, + int* h_csrRowPtrU, + int* h_csrColIndU, + double* h_csrValU, + int* h_P, + int* h_Q, + /* Output (in the device memory) */ + cusolverRfHandle_t handle); + + /* CUSOLVERRF-batch update the matrix values (assuming the reordering, + pivoting and consequently the sparsity pattern of L and U did not change), + and zero out the remaining values. */ + cusolverStatus_t CUSOLVERAPI + cusolverRfBatchResetValues(/* Input (in the device memory) */ + int batchSize, + int n, + int nnzA, + int* csrRowPtrA, + int* csrColIndA, + double* csrValA_array[], + int* P, + int* Q, + /* Output */ + cusolverRfHandle_t handle); + + /* CUSOLVERRF-batch analysis (for parallelism) */ + cusolverStatus_t CUSOLVERAPI + cusolverRfBatchAnalyze(cusolverRfHandle_t handle); + + /* CUSOLVERRF-batch re-factorization (for parallelism) */ + cusolverStatus_t CUSOLVERAPI + cusolverRfBatchRefactor(cusolverRfHandle_t handle); + + /* CUSOLVERRF-batch (forward and backward triangular) solves */ + cusolverStatus_t CUSOLVERAPI + cusolverRfBatchSolve(/* Input (in the device memory) */ + cusolverRfHandle_t handle, + int* P, + int* Q, + int nrhs, // only nrhs=1 is supported + double* Temp, // of size 2*batchSize*(n*nrhs) + int ldt, // only ldt=n is supported + /* Input/Output (in the device memory) */ + double* XF_array[], + /* Input */ + int ldxf); + + /* CUSOLVERRF-batch obtain the position of zero pivot */ + cusolverStatus_t CUSOLVERAPI + cusolverRfBatchZeroPivot(/* Input */ + 
cusolverRfHandle_t handle, + /* Output (in the host memory) */ + int* position); + + #if defined(__cplusplus) +} + #endif /* __cplusplus */ + +#endif /* CUSOLVERRF_H_ */ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverSp.h b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverSp.h new file mode 100644 index 0000000000000000000000000000000000000000..a00a2fac14664090a116bae89fe34f97d8e41f9c --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverSp.h @@ -0,0 +1,923 @@ +/* + * Copyright 2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(CUSOLVERSP_H_) + #define CUSOLVERSP_H_ + + #include "cusparse.h" + #include "cublas_v2.h" + #include "cusolver_common.h" + + #if defined(__cplusplus) +extern "C" { + #endif /* __cplusplus */ + + struct cusolverSpContext; + typedef struct cusolverSpContext *cusolverSpHandle_t; + + struct csrqrInfo; + typedef struct csrqrInfo *csrqrInfo_t; + + cusolverStatus_t CUSOLVERAPI cusolverSpCreate(cusolverSpHandle_t *handle); + cusolverStatus_t CUSOLVERAPI cusolverSpDestroy(cusolverSpHandle_t handle); + cusolverStatus_t CUSOLVERAPI + cusolverSpSetStream(cusolverSpHandle_t handle, cudaStream_t streamId); + cusolverStatus_t CUSOLVERAPI + cusolverSpGetStream(cusolverSpHandle_t handle, cudaStream_t *streamId); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrissymHost( + cusolverSpHandle_t handle, + int m, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrEndPtrA, + const int * csrColIndA, + int * issym); + + /* -------- GPU linear solver by LU factorization + * solve A*x = b, A can be singular + * [ls] stands for linear solve + * [v] stands for vector + * [lu] stands for LU factorization + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrlsvluHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const float * b, + float tol, + int reorder, + float * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrlsvluHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const double * b, + double tol, + int reorder, + double * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrlsvluHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const 
cuComplex * b, + float tol, + int reorder, + cuComplex * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrlsvluHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const cuDoubleComplex * b, + double tol, + int reorder, + cuDoubleComplex * x, + int * singularity); + + /* -------- GPU linear solver by QR factorization + * solve A*x = b, A can be singular + * [ls] stands for linear solve + * [v] stands for vector + * [qr] stands for QR factorization + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrlsvqr( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const float * b, + float tol, + int reorder, + float * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrlsvqr( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const double * b, + double tol, + int reorder, + double * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrlsvqr( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const cuComplex * b, + float tol, + int reorder, + cuComplex * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrlsvqr( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const cuDoubleComplex * b, + double tol, + int reorder, + cuDoubleComplex * x, + int * singularity); + + /* -------- CPU linear solver by QR factorization + * solve A*x = b, A can be singular + * [ls] stands for linear solve + * [v] stands for vector + * [qr] stands for 
QR factorization + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrlsvqrHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const float * b, + float tol, + int reorder, + float * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrlsvqrHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const double * b, + double tol, + int reorder, + double * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrlsvqrHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const cuComplex * b, + float tol, + int reorder, + cuComplex * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrlsvqrHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const cuDoubleComplex * b, + double tol, + int reorder, + cuDoubleComplex * x, + int * singularity); + + /* -------- CPU linear solver by Cholesky factorization + * solve A*x = b, A can be singular + * [ls] stands for linear solve + * [v] stands for vector + * [chol] stands for Cholesky factorization + * + * Only works for symmetric positive definite matrix. + * The upper part of A is ignored. 
+ */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrlsvcholHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const float * b, + float tol, + int reorder, + float * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrlsvcholHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const double * b, + double tol, + int reorder, + double * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrlsvcholHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const cuComplex * b, + float tol, + int reorder, + cuComplex * x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrlsvcholHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const cuDoubleComplex * b, + double tol, + int reorder, + cuDoubleComplex * x, + int * singularity); + + /* -------- GPU linear solver by Cholesky factorization + * solve A*x = b, A can be singular + * [ls] stands for linear solve + * [v] stands for vector + * [chol] stands for Cholesky factorization + * + * Only works for symmetric positive definite matrix. + * The upper part of A is ignored. 
+ */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrlsvchol( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const float * b, + float tol, + int reorder, + // output + float *x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrlsvchol( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const double * b, + double tol, + int reorder, + // output + double *x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrlsvchol( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const cuComplex * b, + float tol, + int reorder, + // output + cuComplex *x, + int * singularity); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrlsvchol( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + const cuDoubleComplex * b, + double tol, + int reorder, + // output + cuDoubleComplex *x, + int * singularity); + + /* ----------- CPU least square solver by QR factorization + * solve min|b - A*x| + * [lsq] stands for least square + * [v] stands for vector + * [qr] stands for QR factorization + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrlsqvqrHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const float * b, + float tol, + int * rankA, + float * x, + int * p, + float * min_norm); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrlsqvqrHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const 
int * csrColIndA, + const double * b, + double tol, + int * rankA, + double * x, + int * p, + double * min_norm); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrlsqvqrHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const cuComplex * b, + float tol, + int * rankA, + cuComplex * x, + int * p, + float * min_norm); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrlsqvqrHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const cuDoubleComplex * b, + double tol, + int * rankA, + cuDoubleComplex * x, + int * p, + double * min_norm); + + /* --------- CPU eigenvalue solver by shift inverse + * solve A*x = lambda * x + * where lambda is the eigenvalue nearest mu0. + * [eig] stands for eigenvalue solver + * [si] stands for shift-inverse + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsreigvsiHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + float mu0, + const float * x0, + int maxite, + float tol, + float * mu, + float * x); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsreigvsiHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + double mu0, + const double * x0, + int maxite, + double tol, + double * mu, + double * x); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsreigvsiHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuComplex mu0, + const cuComplex * x0, + int maxite, + float tol, + cuComplex * mu, + cuComplex * x); + + cusolverStatus_t CUSOLVERAPI 
cusolverSpZcsreigvsiHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuDoubleComplex mu0, + const cuDoubleComplex * x0, + int maxite, + double tol, + cuDoubleComplex * mu, + cuDoubleComplex * x); + + /* --------- GPU eigenvalue solver by shift inverse + * solve A*x = lambda * x + * where lambda is the eigenvalue nearest mu0. + * [eig] stands for eigenvalue solver + * [si] stands for shift-inverse + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsreigvsi( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + float mu0, + const float * x0, + int maxite, + float eps, + float * mu, + float * x); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsreigvsi( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + double mu0, + const double * x0, + int maxite, + double eps, + double * mu, + double * x); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsreigvsi( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuComplex mu0, + const cuComplex * x0, + int maxite, + float eps, + cuComplex * mu, + cuComplex * x); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsreigvsi( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuDoubleComplex mu0, + const cuDoubleComplex * x0, + int maxite, + double eps, + cuDoubleComplex * mu, + cuDoubleComplex * x); + + // ----------- enclosed eigenvalues + + cusolverStatus_t CUSOLVERAPI cusolverSpScsreigsHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t 
descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuComplex left_bottom_corner, + cuComplex right_upper_corner, + int * num_eigs); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsreigsHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuDoubleComplex left_bottom_corner, + cuDoubleComplex right_upper_corner, + int * num_eigs); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsreigsHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuComplex left_bottom_corner, + cuComplex right_upper_corner, + int * num_eigs); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsreigsHost( + cusolverSpHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuDoubleComplex left_bottom_corner, + cuDoubleComplex right_upper_corner, + int * num_eigs); + + /* --------- CPU symrcm + * Symmetric reverse Cuthill-McKee permutation + * + */ + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrsymrcmHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + int * p); + + /* --------- CPU symmdq + * Symmetric minimum degree algorithm by quotient graph + * + */ + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrsymmdqHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + int * p); + + /* --------- CPU symamd + * Symmetric approximate minimum degree algorithm by quotient graph + * + */ + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrsymamdHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int *
csrColIndA, + int * p); + + /* --------- CPU metis + * Symmetric reordering + */ + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrmetisndHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + const int64_t * options, + int * p); + + /* --------- CPU zfd + * Zero-free diagonal reordering + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrzfdHost( + cusolverSpHandle_t handle, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + int * P, + int * numnz); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrzfdHost( + cusolverSpHandle_t handle, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + int * P, + int * numnz); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrzfdHost( + cusolverSpHandle_t handle, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + int * P, + int * numnz); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrzfdHost( + cusolverSpHandle_t handle, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + int * P, + int * numnz); + + /* --------- CPU permutation + * P*A*Q^T + * + */ + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrperm_bufferSizeHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + const int * p, + const int * q, + size_t * bufferSizeInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrpermHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + int * csrRowPtrA, + int * csrColIndA, + const int * p, + const int * q, + int * map, + void * pBuffer); + + /* + * Low-level API: Batched QR + * +
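The `cusolverSpXcsrperm*` pair above follows the library's usual two-call pattern: query the workspace size, then apply the permutation `B = P*A*Q^T` to the CSR index arrays in place, also returning a `map` from old to new nonzero positions. A dense toy sketch of the value layout the permutation produces (hypothetical helper, not the CSR in-place algorithm):

```python
def perm_dense(A, p, q):
    """Dense analogue of B = P*A*Q^T: B[i][j] = A[p[i]][q[j]]."""
    n = len(A)
    return [[A[p[i]][q[j]] for j in range(n)] for i in range(n)]

A = [[4, 1, 0],
     [1, 3, 2],
     [0, 2, 5]]
p = q = [2, 0, 1]        # symmetric reordering uses the same p and q
B = perm_dense(A, p, q)  # e.g. B[0][0] == A[2][2] == 5
```

With `p == q` (as produced by symrcm/symmdq/symamd/metisnd above), symmetry of `A` is preserved in `B`.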
*/ + + cusolverStatus_t CUSOLVERAPI cusolverSpCreateCsrqrInfo(csrqrInfo_t *info); + + cusolverStatus_t CUSOLVERAPI cusolverSpDestroyCsrqrInfo(csrqrInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrqrAnalysisBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrBufferInfoBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrVal, + const int * csrRowPtr, + const int * csrColInd, + int batchSize, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrBufferInfoBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrVal, + const int * csrRowPtr, + const int * csrColInd, + int batchSize, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrBufferInfoBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + int batchSize, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrBufferInfoBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrVal, + const int * csrRowPtr, + const int * csrColInd, + int batchSize, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrsvBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const float * b, + 
float * x, + int batchSize, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrsvBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const double * b, + double * x, + int batchSize, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrsvBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const cuComplex * b, + cuComplex * x, + int batchSize, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrsvBatched( + cusolverSpHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + const cuDoubleComplex * b, + cuDoubleComplex * x, + int batchSize, + csrqrInfo_t info, + void * pBuffer); + + #if defined(__cplusplus) +} + #endif /* __cplusplus */ + +#endif // define CUSOLVERSP_H_ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverSp_LOWLEVEL_PREVIEW.h b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverSp_LOWLEVEL_PREVIEW.h new file mode 100644 index 0000000000000000000000000000000000000000..e660bb87ea5d89cc1d430dce6c50df006d796809 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverSp_LOWLEVEL_PREVIEW.h @@ -0,0 +1,1107 @@ +/* + * Copyright 2015 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
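The batched QR routines above amortize one symbolic analysis (`cusolverSpXcsrqrAnalysisBatched`) over `batchSize` systems that share a sparsity pattern but carry different values and right-hand sides. A rough dense 2x2 sketch of the per-system solve step, via thin QR by Gram-Schmidt (hypothetical helper; the library factors sparse CSR matrices on the GPU):

```python
import math

def qr_solve_2x2(A, b):
    """Solve A x = b via thin QR (Gram-Schmidt), then R x = Q^T b."""
    a0 = [A[0][0], A[1][0]]                   # first column of A
    a1 = [A[0][1], A[1][1]]                   # second column of A
    r00 = math.hypot(a0[0], a0[1])
    q0 = [a0[0] / r00, a0[1] / r00]
    r01 = q0[0] * a1[0] + q0[1] * a1[1]
    v = [a1[0] - r01 * q0[0], a1[1] - r01 * q0[1]]
    r11 = math.hypot(v[0], v[1])
    q1 = [v[0] / r11, v[1] / r11]
    c0 = q0[0] * b[0] + q0[1] * b[1]          # Q^T b
    c1 = q1[0] * b[0] + q1[1] * b[1]
    x1 = c1 / r11                             # back substitution in R
    x0 = (c0 - r01 * x1) / r00
    return [x0, x1]

# a "batch": same structure, different values and right-hand sides
batch = [([[2.0, 0.0], [0.0, 4.0]], [2.0, 8.0]),
         ([[1.0, 1.0], [0.0, 1.0]], [3.0, 1.0])]
xs = [qr_solve_2x2(A, b) for A, b in batch]
```

The batched API performs the analogue of this loop in parallel on the device, with one analysis and one workspace query for the whole batch.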
+ * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. 
Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(CUSOLVERSP_LOWLEVEL_PREVIEW_H_) + #define CUSOLVERSP_LOWLEVEL_PREVIEW_H_ + + #include "cusolverSp.h" + + #if defined(__cplusplus) +extern "C" { + #endif /* __cplusplus */ + + struct csrluInfoHost; + typedef struct csrluInfoHost *csrluInfoHost_t; + + struct csrqrInfoHost; + typedef struct csrqrInfoHost *csrqrInfoHost_t; + + struct csrcholInfoHost; + typedef struct csrcholInfoHost *csrcholInfoHost_t; + + struct csrcholInfo; + typedef struct csrcholInfo *csrcholInfo_t; + + /* + * Low level API for CPU LU + * + */ + cusolverStatus_t CUSOLVERAPI + cusolverSpCreateCsrluInfoHost(csrluInfoHost_t *info); + + cusolverStatus_t CUSOLVERAPI + cusolverSpDestroyCsrluInfoHost(csrluInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrluAnalysisHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrluBufferInfoHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrluBufferInfoHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrluBufferInfoHost( + cusolverSpHandle_t handle, + int n, + 
int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrluBufferInfoHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrluFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + float pivot_threshold, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrluFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + double pivot_threshold, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrluFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + float pivot_threshold, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrluFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrluInfoHost_t info, + double pivot_threshold, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrluZeroPivotHost( + cusolverSpHandle_t handle, + csrluInfoHost_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrluZeroPivotHost( + cusolverSpHandle_t handle, + csrluInfoHost_t info, + double tol, + 
int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrluZeroPivotHost( + cusolverSpHandle_t handle, + csrluInfoHost_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrluZeroPivotHost( + cusolverSpHandle_t handle, + csrluInfoHost_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrluSolveHost( + cusolverSpHandle_t handle, + int n, + const float * b, + float * x, + csrluInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrluSolveHost( + cusolverSpHandle_t handle, + int n, + const double * b, + double * x, + csrluInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrluSolveHost( + cusolverSpHandle_t handle, + int n, + const cuComplex * b, + cuComplex * x, + csrluInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrluSolveHost( + cusolverSpHandle_t handle, + int n, + const cuDoubleComplex *b, + cuDoubleComplex * x, + csrluInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrluNnzHost( + cusolverSpHandle_t handle, + int * nnzLRef, + int * nnzURef, + csrluInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrluExtractHost( + cusolverSpHandle_t handle, + int * P, + int * Q, + const cusparseMatDescr_t descrL, + float * csrValL, + int * csrRowPtrL, + int * csrColIndL, + const cusparseMatDescr_t descrU, + float * csrValU, + int * csrRowPtrU, + int * csrColIndU, + csrluInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrluExtractHost( + cusolverSpHandle_t handle, + int * P, + int * Q, + const cusparseMatDescr_t descrL, + double * csrValL, + int * csrRowPtrL, + int * csrColIndL, + const cusparseMatDescr_t descrU, + double * csrValU, + int * csrRowPtrU, + int * csrColIndU, + csrluInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrluExtractHost( + cusolverSpHandle_t handle, + int * P, + int * Q, + const 
cusparseMatDescr_t descrL, + cuComplex * csrValL, + int * csrRowPtrL, + int * csrColIndL, + const cusparseMatDescr_t descrU, + cuComplex * csrValU, + int * csrRowPtrU, + int * csrColIndU, + csrluInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrluExtractHost( + cusolverSpHandle_t handle, + int * P, + int * Q, + const cusparseMatDescr_t descrL, + cuDoubleComplex * csrValL, + int * csrRowPtrL, + int * csrColIndL, + const cusparseMatDescr_t descrU, + cuDoubleComplex * csrValU, + int * csrRowPtrU, + int * csrColIndU, + csrluInfoHost_t info, + void * pBuffer); + + /* + * Low level API for CPU QR + * + */ + cusolverStatus_t CUSOLVERAPI + cusolverSpCreateCsrqrInfoHost(csrqrInfoHost_t *info); + + cusolverStatus_t CUSOLVERAPI + cusolverSpDestroyCsrqrInfoHost(csrqrInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrqrAnalysisHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrBufferInfoHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrBufferInfoHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrBufferInfoHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfoHost_t info, + size_t * internalDataInBytes, + size_t 
* workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrBufferInfoHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrSetupHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + float mu, + csrqrInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrSetupHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + double mu, + csrqrInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrSetupHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuComplex mu, + csrqrInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrSetupHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuDoubleComplex mu, + csrqrInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrFactorHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + float * b, + float * x, + csrqrInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrFactorHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + double * b, + double * x, + csrqrInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrFactorHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + cuComplex * b, + cuComplex * x, + 
csrqrInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrFactorHost( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + cuDoubleComplex * b, + cuDoubleComplex * x, + csrqrInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrZeroPivotHost( + cusolverSpHandle_t handle, + csrqrInfoHost_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrZeroPivotHost( + cusolverSpHandle_t handle, + csrqrInfoHost_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrZeroPivotHost( + cusolverSpHandle_t handle, + csrqrInfoHost_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrZeroPivotHost( + cusolverSpHandle_t handle, + csrqrInfoHost_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrSolveHost( + cusolverSpHandle_t handle, + int m, + int n, + float * b, + float * x, + csrqrInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrSolveHost( + cusolverSpHandle_t handle, + int m, + int n, + double * b, + double * x, + csrqrInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrSolveHost( + cusolverSpHandle_t handle, + int m, + int n, + cuComplex * b, + cuComplex * x, + csrqrInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrSolveHost( + cusolverSpHandle_t handle, + int m, + int n, + cuDoubleComplex * b, + cuDoubleComplex * x, + csrqrInfoHost_t info, + void * pBuffer); + + /* + * Low level API for GPU QR + * + */ + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrqrAnalysis( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrBufferInfo( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t 
descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrBufferInfo( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrBufferInfo( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrBufferInfo( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrqrInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrSetup( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + float mu, + csrqrInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrSetup( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + double mu, + csrqrInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrSetup( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuComplex mu, + csrqrInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrSetup( + cusolverSpHandle_t handle, 
+ int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + cuDoubleComplex mu, + csrqrInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrFactor( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + float * b, + float * x, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrFactor( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + double * b, + double * x, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrFactor( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + cuComplex * b, + cuComplex * x, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrFactor( + cusolverSpHandle_t handle, + int m, + int n, + int nnzA, + cuDoubleComplex * b, + cuDoubleComplex * x, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrZeroPivot( + cusolverSpHandle_t handle, + csrqrInfo_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrZeroPivot( + cusolverSpHandle_t handle, + csrqrInfo_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrZeroPivot( + cusolverSpHandle_t handle, + csrqrInfo_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrZeroPivot( + cusolverSpHandle_t handle, + csrqrInfo_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrqrSolve( + cusolverSpHandle_t handle, + int m, + int n, + float * b, + float * x, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrqrSolve( + cusolverSpHandle_t handle, + int m, + int n, + double * b, + double * x, + csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrqrSolve( + cusolverSpHandle_t handle, + int m, + int n, + cuComplex * b, + cuComplex * x, + 
csrqrInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrqrSolve( + cusolverSpHandle_t handle, + int m, + int n, + cuDoubleComplex * b, + cuDoubleComplex * x, + csrqrInfo_t info, + void * pBuffer); + + /* + * Low level API for CPU Cholesky + * + */ + cusolverStatus_t CUSOLVERAPI + cusolverSpCreateCsrcholInfoHost(csrcholInfoHost_t *info); + + cusolverStatus_t CUSOLVERAPI + cusolverSpDestroyCsrcholInfoHost(csrcholInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrcholAnalysisHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholBufferInfoHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholBufferInfoHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholBufferInfoHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholBufferInfoHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t 
CUSOLVERAPI cusolverSpScsrcholFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholFactorHost( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholZeroPivotHost( + cusolverSpHandle_t handle, + csrcholInfoHost_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholZeroPivotHost( + cusolverSpHandle_t handle, + csrcholInfoHost_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholZeroPivotHost( + cusolverSpHandle_t handle, + csrcholInfoHost_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholZeroPivotHost( + cusolverSpHandle_t handle, + csrcholInfoHost_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholSolveHost( + cusolverSpHandle_t handle, + int n, + const float * b, + float * x, + csrcholInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholSolveHost( + cusolverSpHandle_t handle, + int n, + const double * b, + double * x, 
+ csrcholInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholSolveHost( + cusolverSpHandle_t handle, + int n, + const cuComplex * b, + cuComplex * x, + csrcholInfoHost_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholSolveHost( + cusolverSpHandle_t handle, + int n, + const cuDoubleComplex *b, + cuDoubleComplex * x, + csrcholInfoHost_t info, + void * pBuffer); + + /* + * Low level API for GPU Cholesky + * + */ + cusolverStatus_t CUSOLVERAPI cusolverSpCreateCsrcholInfo(csrcholInfo_t *info); + + cusolverStatus_t CUSOLVERAPI cusolverSpDestroyCsrcholInfo(csrcholInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpXcsrcholAnalysis( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholBufferInfo( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholBufferInfo( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholBufferInfo( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholBufferInfo( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * 
csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + size_t * internalDataInBytes, + size_t * workspaceInBytes); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholFactor( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholFactor( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholFactor( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholFactor( + cusolverSpHandle_t handle, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const cuDoubleComplex * csrValA, + const int * csrRowPtrA, + const int * csrColIndA, + csrcholInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholZeroPivot( + cusolverSpHandle_t handle, + csrcholInfo_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholZeroPivot( + cusolverSpHandle_t handle, + csrcholInfo_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholZeroPivot( + cusolverSpHandle_t handle, + csrcholInfo_t info, + float tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholZeroPivot( + cusolverSpHandle_t handle, + csrcholInfo_t info, + double tol, + int * position); + + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholSolve( + cusolverSpHandle_t handle, + int n, + const float * b, + float * x, + csrcholInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholSolve( + 
cusolverSpHandle_t handle, + int n, + const double * b, + double * x, + csrcholInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholSolve( + cusolverSpHandle_t handle, + int n, + const cuComplex * b, + cuComplex * x, + csrcholInfo_t info, + void * pBuffer); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholSolve( + cusolverSpHandle_t handle, + int n, + const cuDoubleComplex *b, + cuDoubleComplex * x, + csrcholInfo_t info, + void * pBuffer); + + /* + * "diag" is a device array of size N. + * cusolverSpcsrcholDiag returns diag(L) to "diag" where A(P,P) = L*L**T + * "diag" can estimate det(A) because det(A(P,P)) = det(A) = det(L)^2 if A = + * L*L**T. + * + * cusolverSpcsrcholDiag must be called after cusolverSpcsrcholFactor. + * otherwise "diag" is wrong. + */ + cusolverStatus_t CUSOLVERAPI cusolverSpScsrcholDiag( + cusolverSpHandle_t handle, + csrcholInfo_t info, + float * diag); + + cusolverStatus_t CUSOLVERAPI cusolverSpDcsrcholDiag( + cusolverSpHandle_t handle, + csrcholInfo_t info, + double * diag); + + cusolverStatus_t CUSOLVERAPI cusolverSpCcsrcholDiag( + cusolverSpHandle_t handle, + csrcholInfo_t info, + float * diag); + + cusolverStatus_t CUSOLVERAPI cusolverSpZcsrcholDiag( + cusolverSpHandle_t handle, + csrcholInfo_t info, + double * diag); + + #if defined(__cplusplus) +} + #endif /* __cplusplus */ + +#endif // CUSOLVERSP_LOWLEVEL_PREVIEW_H_ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolver_common.h b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolver_common.h new file mode 100644 index 0000000000000000000000000000000000000000..204dffef076fbce62066e98a5a8b041695fc7aad --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusolver/include/cusolver_common.h @@ -0,0 +1,261 @@ +/* + * Copyright 2014 NVIDIA Corporation. All rights reserved. 
+ * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. 
Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(CUSOLVER_COMMON_H_) + #define CUSOLVER_COMMON_H_ + + #include "library_types.h" + + #ifndef CUSOLVERAPI + #ifdef _WIN32 + #define CUSOLVERAPI __stdcall + #else + #define CUSOLVERAPI + #endif + #endif + + #if defined(_MSC_VER) +typedef __int64 int64_t; + #else + #include + #endif + +typedef int cusolver_int_t; + + #define CUSOLVER_VER_MAJOR 11 + #define CUSOLVER_VER_MINOR 6 + #define CUSOLVER_VER_PATCH 1 + #define CUSOLVER_VER_BUILD 9 + #define CUSOLVER_VERSION \ + (CUSOLVER_VER_MAJOR * 1000 + CUSOLVER_VER_MINOR * 100 + CUSOLVER_VER_PATCH) + +//------------------------------------------------------------------------------ + + #if !defined(_MSC_VER) + #define CUSOLVER_CPP_VERSION __cplusplus + #elif _MSC_FULL_VER >= 190024210 // Visual Studio 2015 Update 3 + #define CUSOLVER_CPP_VERSION _MSVC_LANG + #else + #define CUSOLVER_CPP_VERSION 0 + #endif + +//------------------------------------------------------------------------------ + + #if !defined(DISABLE_CUSOLVER_DEPRECATED) + + #if CUSOLVER_CPP_VERSION >= 201402L + + #define CUSOLVER_DEPRECATED(new_func) \ + [[deprecated("please use " #new_func " instead")]] + + #elif defined(_MSC_VER) + + #define CUSOLVER_DEPRECATED(new_func) \ + __declspec(deprecated("please use " #new_func " instead")) + + #elif defined(__INTEL_COMPILER) || defined(__clang__) || \ + (defined(__GNUC__) && \ + (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))) + + #define CUSOLVER_DEPRECATED(new_func) \ + __attribute__((deprecated("please use " 
#new_func " instead"))) + + #elif defined(__GNUC__) || defined(__xlc__) + + #define CUSOLVER_DEPRECATED(new_func) __attribute__((deprecated)) + + #else + + #define CUSOLVER_DEPRECATED(new_func) + + #endif // defined(__cplusplus) && __cplusplus >= 201402L + //------------------------------------------------------------------------------ + + #if CUSOLVER_CPP_VERSION >= 201703L + + #define CUSOLVER_DEPRECATED_ENUM(new_enum) \ + [[deprecated("please use " #new_enum " instead")]] + + #elif defined(__clang__) || \ + (defined(__GNUC__) && __GNUC__ >= 6 && !defined(__PGI)) + + #define CUSOLVER_DEPRECATED_ENUM(new_enum) \ + __attribute__((deprecated("please use " #new_enum " instead"))) + + #else + + #define CUSOLVER_DEPRECATED_ENUM(new_enum) + + #endif // defined(__cplusplus) && __cplusplus >= 201402L + + #else // defined(DISABLE_CUSOLVER_DEPRECATED) + + #define CUSOLVER_DEPRECATED(new_func) + #define CUSOLVER_DEPRECATED_ENUM(new_enum) + + #endif // !defined(DISABLE_CUSOLVER_DEPRECATED) + + #undef CUSOLVER_CPP_VERSION + + #if defined(__cplusplus) +extern "C" { + #endif /* __cplusplus */ + + typedef enum { + CUSOLVER_STATUS_SUCCESS = 0, + CUSOLVER_STATUS_NOT_INITIALIZED = 1, + CUSOLVER_STATUS_ALLOC_FAILED = 2, + CUSOLVER_STATUS_INVALID_VALUE = 3, + CUSOLVER_STATUS_ARCH_MISMATCH = 4, + CUSOLVER_STATUS_MAPPING_ERROR = 5, + CUSOLVER_STATUS_EXECUTION_FAILED = 6, + CUSOLVER_STATUS_INTERNAL_ERROR = 7, + CUSOLVER_STATUS_MATRIX_TYPE_NOT_SUPPORTED = 8, + CUSOLVER_STATUS_NOT_SUPPORTED = 9, + CUSOLVER_STATUS_ZERO_PIVOT = 10, + CUSOLVER_STATUS_INVALID_LICENSE = 11, + CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED = 12, + CUSOLVER_STATUS_IRS_PARAMS_INVALID = 13, + CUSOLVER_STATUS_IRS_PARAMS_INVALID_PREC = 14, + CUSOLVER_STATUS_IRS_PARAMS_INVALID_REFINE = 15, + CUSOLVER_STATUS_IRS_PARAMS_INVALID_MAXITER = 16, + CUSOLVER_STATUS_IRS_INTERNAL_ERROR = 20, + CUSOLVER_STATUS_IRS_NOT_SUPPORTED = 21, + CUSOLVER_STATUS_IRS_OUT_OF_RANGE = 22, + CUSOLVER_STATUS_IRS_NRHS_NOT_SUPPORTED_FOR_REFINE_GMRES = 
23, + CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED = 25, + CUSOLVER_STATUS_IRS_INFOS_NOT_DESTROYED = 26, + CUSOLVER_STATUS_IRS_MATRIX_SINGULAR = 30, + CUSOLVER_STATUS_INVALID_WORKSPACE = 31 + } cusolverStatus_t; + + typedef enum { + CUSOLVER_EIG_TYPE_1 = 1, + CUSOLVER_EIG_TYPE_2 = 2, + CUSOLVER_EIG_TYPE_3 = 3 + } cusolverEigType_t; + + typedef enum { + CUSOLVER_EIG_MODE_NOVECTOR = 0, + CUSOLVER_EIG_MODE_VECTOR = 1 + } cusolverEigMode_t; + + typedef enum { + CUSOLVER_EIG_RANGE_ALL = 1001, + CUSOLVER_EIG_RANGE_I = 1002, + CUSOLVER_EIG_RANGE_V = 1003, + } cusolverEigRange_t; + + typedef enum { + CUSOLVER_INF_NORM = 104, + CUSOLVER_MAX_NORM = 105, + CUSOLVER_ONE_NORM = 106, + CUSOLVER_FRO_NORM = 107, + } cusolverNorm_t; + + typedef enum { + CUSOLVER_IRS_REFINE_NOT_SET = 1100, + CUSOLVER_IRS_REFINE_NONE = 1101, + CUSOLVER_IRS_REFINE_CLASSICAL = 1102, + CUSOLVER_IRS_REFINE_CLASSICAL_GMRES = 1103, + CUSOLVER_IRS_REFINE_GMRES = 1104, + CUSOLVER_IRS_REFINE_GMRES_GMRES = 1105, + CUSOLVER_IRS_REFINE_GMRES_NOPCOND = 1106, + + CUSOLVER_PREC_DD = 1150, + CUSOLVER_PREC_SS = 1151, + CUSOLVER_PREC_SHT = 1152, + + } cusolverIRSRefinement_t; + + typedef enum { + CUSOLVER_R_8I = 1201, + CUSOLVER_R_8U = 1202, + CUSOLVER_R_64F = 1203, + CUSOLVER_R_32F = 1204, + CUSOLVER_R_16F = 1205, + CUSOLVER_R_16BF = 1206, + CUSOLVER_R_TF32 = 1207, + CUSOLVER_R_AP = 1208, + CUSOLVER_C_8I = 1211, + CUSOLVER_C_8U = 1212, + CUSOLVER_C_64F = 1213, + CUSOLVER_C_32F = 1214, + CUSOLVER_C_16F = 1215, + CUSOLVER_C_16BF = 1216, + CUSOLVER_C_TF32 = 1217, + CUSOLVER_C_AP = 1218, + } cusolverPrecType_t; + + typedef enum { + CUSOLVER_ALG_0 = 0, /* default algorithm */ + CUSOLVER_ALG_1 = 1, + CUSOLVER_ALG_2 = 2 + } cusolverAlgMode_t; + + typedef enum { + CUBLAS_STOREV_COLUMNWISE = 0, + CUBLAS_STOREV_ROWWISE = 1 + } cusolverStorevMode_t; + + typedef enum { + CUBLAS_DIRECT_FORWARD = 0, + CUBLAS_DIRECT_BACKWARD = 1 + } cusolverDirectMode_t; + + cusolverStatus_t CUSOLVERAPI + cusolverGetProperty(libraryPropertyType type, 
int *value); + + cusolverStatus_t CUSOLVERAPI cusolverGetVersion(int *version); + + #if defined(__cplusplus) +} + #endif /* __cplusplus */ + +#endif // CUSOLVER_COMMON_H_ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/lib/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cusolver/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusolver/lib/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cusolver/lib/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..76a8816c8ea2c58af18f988f7a159b2b1d326e0c Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cusolver/lib/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cusparse/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cusparse/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..dfa4e8668a11944a7f66a9b1ead90f872e4f4d7c Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cusparse/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/__pycache__/__init__.cpython-310.pyc 
b/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..52959233eed5ae2d883d11d8444138fcad02cff6 Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/cusparse.h b/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/cusparse.h new file mode 100644 index 0000000000000000000000000000000000000000..8ad24a1ead7943e333919affd5f506ed70f05aea --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/cusparse.h @@ -0,0 +1,6106 @@ +/* + * Copyright 1993-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. 
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ +#if !defined(CUSPARSE_H_) +#define CUSPARSE_H_ + +#include <cuComplex.h> // cuComplex +#include <cuda_runtime_api.h> // cudaStream_t +#include <library_types.h> // CUDA_R_32F +#include <stdint.h> // int64_t +#include <stdio.h> // FILE* + +#if defined(__cplusplus) +# include <cuda_fp16.h> // __half +#endif // defined(__cplusplus) + +//############################################################################## +//# CUSPARSE VERSION INFORMATION +//############################################################################## + +#define CUSPARSE_VER_MAJOR 12 +#define CUSPARSE_VER_MINOR 3 +#define CUSPARSE_VER_PATCH 1 +#define CUSPARSE_VER_BUILD 170 +#define CUSPARSE_VERSION (CUSPARSE_VER_MAJOR * 1000 + \ + CUSPARSE_VER_MINOR * 100 + \ + CUSPARSE_VER_PATCH) + +// ############################################################################# +// # BASIC MACROS +// ############################################################################# + +#if !defined(CUSPARSEAPI) +# if defined(_WIN32) +# define CUSPARSEAPI __stdcall +# else +# define CUSPARSEAPI +# endif +#endif + +//------------------------------------------------------------------------------ + +#if !defined(_MSC_VER) +# define CUSPARSE_CPP_VERSION __cplusplus +#elif _MSC_FULL_VER >= 190024210 // Visual Studio 2015 Update 3 +# define CUSPARSE_CPP_VERSION _MSVC_LANG +#else +# define CUSPARSE_CPP_VERSION 0 +#endif + +// ############################################################################# +// # CUSPARSE_DEPRECATED MACRO +// ############################################################################# + +#if !defined(DISABLE_CUSPARSE_DEPRECATED) + +# if CUSPARSE_CPP_VERSION >= 201402L + +# define CUSPARSE_DEPRECATED_REPLACE_WITH(new_func) \ + [[deprecated("please use " #new_func " instead")]] + +# define CUSPARSE_DEPRECATED \ + [[deprecated("The routine will be removed in the next major release")]] + +# define CUSPARSE_DEPRECATED_TYPE \ + [[deprecated("The type will be removed in the next major release")]] + +# define CUSPARSE_DEPRECATED_TYPE_MSVC + +# elif defined(_MSC_VER) + +# define 
CUSPARSE_DEPRECATED_REPLACE_WITH(new_func) \ + __declspec(deprecated("please use " #new_func " instead")) + +# define CUSPARSE_DEPRECATED \ + __declspec(deprecated( \ + "The routine will be removed in the next major release")) + +# define CUSPARSE_DEPRECATED_TYPE + +# define CUSPARSE_DEPRECATED_TYPE_MSVC \ + __declspec(deprecated( \ + "The type will be removed in the next major release")) + +# elif defined(__INTEL_COMPILER) || defined(__clang__) || \ + (defined(__GNUC__) && \ + (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))) + +# define CUSPARSE_DEPRECATED_REPLACE_WITH(new_func) \ + __attribute__((deprecated("please use " #new_func " instead"))) + +# define CUSPARSE_DEPRECATED \ + __attribute__((deprecated( \ + "The routine will be removed in the next major release"))) + +# define CUSPARSE_DEPRECATED_TYPE \ + __attribute__((deprecated( \ + "The type will be removed in the next major release"))) + +# define CUSPARSE_DEPRECATED_TYPE_MSVC + +# elif defined(__GNUC__) || defined(__xlc__) + +# define CUSPARSE_DEPRECATED_REPLACE_WITH(new_func) \ + __attribute__((deprecated)) + +# define CUSPARSE_DEPRECATED __attribute__((deprecated)) +# define CUSPARSE_DEPRECATED_TYPE __attribute__((deprecated)) +# define CUSPARSE_DEPRECATED_TYPE_MSVC + +# else + +# define CUSPARSE_DEPRECATED_REPLACE_WITH(new_func) +# define CUSPARSE_DEPRECATED +# define CUSPARSE_DEPRECATED_TYPE +# define CUSPARSE_DEPRECATED_TYPE_MSVC + +# endif // defined(__cplusplus) && __cplusplus >= 201402L +//------------------------------------------------------------------------------ + +# if CUSPARSE_CPP_VERSION >= 201703L + +# define CUSPARSE_DEPRECATED_ENUM_REPLACE_WITH(new_enum) \ + [[deprecated("please use " #new_enum " instead")]] + +# define CUSPARSE_DEPRECATED_ENUM \ + [[deprecated("The enum will be removed in the next major release")]] + +# elif defined(__clang__) || \ + (defined(__GNUC__) && __GNUC__ >= 6 && !defined(__PGI)) + +# define CUSPARSE_DEPRECATED_ENUM_REPLACE_WITH(new_enum) \ + 
__attribute__((deprecated("please use " #new_enum " instead"))) + +# define CUSPARSE_DEPRECATED_ENUM \ + __attribute__((deprecated( \ + "The enum will be removed in the next major release"))) + +# else + +# define CUSPARSE_DEPRECATED_ENUM_REPLACE_WITH(new_enum) +# define CUSPARSE_DEPRECATED_ENUM + +# endif // defined(__cplusplus) && __cplusplus >= 201402L + +#else // defined(DISABLE_CUSPARSE_DEPRECATED) + +# define CUSPARSE_DEPRECATED_REPLACE_WITH(new_func) +# define CUSPARSE_DEPRECATED +# define CUSPARSE_DEPRECATED_TYPE +# define CUSPARSE_DEPRECATED_TYPE_MSVC +# define CUSPARSE_DEPRECATED_ENUM_REPLACE_WITH(new_enum) +# define CUSPARSE_DEPRECATED_ENUM + +#endif // !defined(DISABLE_CUSPARSE_DEPRECATED) + +#undef CUSPARSE_CPP_VERSION + +//------------------------------------------------------------------------------ + +#if defined(__cplusplus) +extern "C" { +#endif // defined(__cplusplus) + +//############################################################################## +//# OPAQUE DATA STRUCTURES +//############################################################################## + +struct cusparseContext; +typedef struct cusparseContext* cusparseHandle_t; + +struct cusparseMatDescr; +typedef struct cusparseMatDescr* cusparseMatDescr_t; + +struct bsrsv2Info; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct bsrsv2Info* bsrsv2Info_t CUSPARSE_DEPRECATED_TYPE; + +struct bsrsm2Info; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct bsrsm2Info* bsrsm2Info_t CUSPARSE_DEPRECATED_TYPE; + +struct csric02Info; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct csric02Info* csric02Info_t CUSPARSE_DEPRECATED_TYPE; + +struct bsric02Info; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct bsric02Info* bsric02Info_t CUSPARSE_DEPRECATED_TYPE; + +struct csrilu02Info; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct csrilu02Info* csrilu02Info_t CUSPARSE_DEPRECATED_TYPE; + +struct bsrilu02Info; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct bsrilu02Info* bsrilu02Info_t CUSPARSE_DEPRECATED_TYPE; + 
+struct csru2csrInfo; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct csru2csrInfo* csru2csrInfo_t CUSPARSE_DEPRECATED_TYPE; + +struct cusparseColorInfo; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct cusparseColorInfo* cusparseColorInfo_t CUSPARSE_DEPRECATED_TYPE; + +struct pruneInfo; +typedef CUSPARSE_DEPRECATED_TYPE_MSVC +struct pruneInfo* pruneInfo_t CUSPARSE_DEPRECATED_TYPE; + +//############################################################################## +//# ENUMERATORS +//############################################################################## + +typedef enum { + CUSPARSE_STATUS_SUCCESS = 0, + CUSPARSE_STATUS_NOT_INITIALIZED = 1, + CUSPARSE_STATUS_ALLOC_FAILED = 2, + CUSPARSE_STATUS_INVALID_VALUE = 3, + CUSPARSE_STATUS_ARCH_MISMATCH = 4, + CUSPARSE_STATUS_MAPPING_ERROR = 5, + CUSPARSE_STATUS_EXECUTION_FAILED = 6, + CUSPARSE_STATUS_INTERNAL_ERROR = 7, + CUSPARSE_STATUS_MATRIX_TYPE_NOT_SUPPORTED = 8, + CUSPARSE_STATUS_ZERO_PIVOT = 9, + CUSPARSE_STATUS_NOT_SUPPORTED = 10, + CUSPARSE_STATUS_INSUFFICIENT_RESOURCES = 11 +} cusparseStatus_t; + +typedef enum { + CUSPARSE_POINTER_MODE_HOST = 0, + CUSPARSE_POINTER_MODE_DEVICE = 1 +} cusparsePointerMode_t; + +typedef enum { + CUSPARSE_ACTION_SYMBOLIC = 0, + CUSPARSE_ACTION_NUMERIC = 1 +} cusparseAction_t; + +typedef enum { + CUSPARSE_MATRIX_TYPE_GENERAL = 0, + CUSPARSE_MATRIX_TYPE_SYMMETRIC = 1, + CUSPARSE_MATRIX_TYPE_HERMITIAN = 2, + CUSPARSE_MATRIX_TYPE_TRIANGULAR = 3 +} cusparseMatrixType_t; + +typedef enum { + CUSPARSE_FILL_MODE_LOWER = 0, + CUSPARSE_FILL_MODE_UPPER = 1 +} cusparseFillMode_t; + +typedef enum { + CUSPARSE_DIAG_TYPE_NON_UNIT = 0, + CUSPARSE_DIAG_TYPE_UNIT = 1 +} cusparseDiagType_t; + +typedef enum { + CUSPARSE_INDEX_BASE_ZERO = 0, + CUSPARSE_INDEX_BASE_ONE = 1 +} cusparseIndexBase_t; + +typedef enum { + CUSPARSE_OPERATION_NON_TRANSPOSE = 0, + CUSPARSE_OPERATION_TRANSPOSE = 1, + CUSPARSE_OPERATION_CONJUGATE_TRANSPOSE = 2 +} cusparseOperation_t; + +typedef enum { + CUSPARSE_DIRECTION_ROW = 0, + 
CUSPARSE_DIRECTION_COLUMN = 1 +} cusparseDirection_t; + +typedef enum { + CUSPARSE_SOLVE_POLICY_NO_LEVEL = 0, + CUSPARSE_SOLVE_POLICY_USE_LEVEL = 1 +} cusparseSolvePolicy_t CUSPARSE_DEPRECATED_TYPE; + +typedef enum { + CUSPARSE_COLOR_ALG0 = 0, // default + CUSPARSE_COLOR_ALG1 = 1 +} cusparseColorAlg_t CUSPARSE_DEPRECATED_TYPE; + +//############################################################################## +//# INITIALIZATION AND MANAGEMENT ROUTINES +//############################################################################## + +cusparseStatus_t CUSPARSEAPI +cusparseCreate(cusparseHandle_t* handle); + +cusparseStatus_t CUSPARSEAPI +cusparseDestroy(cusparseHandle_t handle); + +cusparseStatus_t CUSPARSEAPI +cusparseGetVersion(cusparseHandle_t handle, + int* version); + +cusparseStatus_t CUSPARSEAPI +cusparseGetProperty(libraryPropertyType type, + int* value); + +const char* CUSPARSEAPI +cusparseGetErrorName(cusparseStatus_t status); + +const char* CUSPARSEAPI +cusparseGetErrorString(cusparseStatus_t status); + +cusparseStatus_t CUSPARSEAPI +cusparseSetStream(cusparseHandle_t handle, + cudaStream_t streamId); + +cusparseStatus_t CUSPARSEAPI +cusparseGetStream(cusparseHandle_t handle, + cudaStream_t* streamId); + +cusparseStatus_t CUSPARSEAPI +cusparseGetPointerMode(cusparseHandle_t handle, + cusparsePointerMode_t* mode); + +cusparseStatus_t CUSPARSEAPI +cusparseSetPointerMode(cusparseHandle_t handle, + cusparsePointerMode_t mode); + +//############################################################################## +//# LOGGING APIs +//############################################################################## + +typedef void (*cusparseLoggerCallback_t)(int logLevel, + const char* functionName, + const char* message); + +cusparseStatus_t CUSPARSEAPI +cusparseLoggerSetCallback(cusparseLoggerCallback_t callback); + +cusparseStatus_t CUSPARSEAPI +cusparseLoggerSetFile(FILE* file); + +cusparseStatus_t CUSPARSEAPI +cusparseLoggerOpenFile(const char* logFile); + 
+cusparseStatus_t CUSPARSEAPI +cusparseLoggerSetLevel(int level); + +cusparseStatus_t CUSPARSEAPI +cusparseLoggerSetMask(int mask); + +cusparseStatus_t CUSPARSEAPI +cusparseLoggerForceDisable(void); + +//############################################################################## +//# HELPER ROUTINES +//############################################################################## + +cusparseStatus_t CUSPARSEAPI +cusparseCreateMatDescr(cusparseMatDescr_t* descrA); + +cusparseStatus_t CUSPARSEAPI +cusparseDestroyMatDescr(cusparseMatDescr_t descrA); + +cusparseStatus_t CUSPARSEAPI +cusparseSetMatType(cusparseMatDescr_t descrA, + cusparseMatrixType_t type); + +cusparseMatrixType_t CUSPARSEAPI +cusparseGetMatType(const cusparseMatDescr_t descrA); + +cusparseStatus_t CUSPARSEAPI +cusparseSetMatFillMode(cusparseMatDescr_t descrA, + cusparseFillMode_t fillMode); + +cusparseFillMode_t CUSPARSEAPI +cusparseGetMatFillMode(const cusparseMatDescr_t descrA); + +cusparseStatus_t CUSPARSEAPI +cusparseSetMatDiagType(cusparseMatDescr_t descrA, + cusparseDiagType_t diagType); + +cusparseDiagType_t CUSPARSEAPI +cusparseGetMatDiagType(const cusparseMatDescr_t descrA); + +cusparseStatus_t CUSPARSEAPI +cusparseSetMatIndexBase(cusparseMatDescr_t descrA, + cusparseIndexBase_t base); + +cusparseIndexBase_t CUSPARSEAPI +cusparseGetMatIndexBase(const cusparseMatDescr_t descrA); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateCsric02Info(csric02Info_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyCsric02Info(csric02Info_t info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateBsric02Info(bsric02Info_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyBsric02Info(bsric02Info_t info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateCsrilu02Info(csrilu02Info_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyCsrilu02Info(csrilu02Info_t info); + 
+CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateBsrilu02Info(bsrilu02Info_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyBsrilu02Info(bsrilu02Info_t info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateBsrsv2Info(bsrsv2Info_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyBsrsv2Info(bsrsv2Info_t info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateBsrsm2Info(bsrsm2Info_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyBsrsm2Info(bsrsm2Info_t info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateCsru2csrInfo(csru2csrInfo_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyCsru2csrInfo(csru2csrInfo_t info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateColorInfo(cusparseColorInfo_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyColorInfo(cusparseColorInfo_t info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreatePruneInfo(pruneInfo_t* info); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDestroyPruneInfo(pruneInfo_t info); + +//############################################################################## +//# SPARSE LEVEL 2 ROUTINES +//############################################################################## + +cusparseStatus_t CUSPARSEAPI +cusparseSgemvi(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + int n, + const float* alpha, + const float* A, + int lda, + int nnz, + const float* xVal, + const int* xInd, + const float* beta, + float* y, + cusparseIndexBase_t idxBase, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSgemvi_bufferSize(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + int n, + int nnz, + int* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseDgemvi(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + 
int n, + const double* alpha, + const double* A, + int lda, + int nnz, + const double* xVal, + const int* xInd, + const double* beta, + double* y, + cusparseIndexBase_t idxBase, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseDgemvi_bufferSize(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + int n, + int nnz, + int* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseCgemvi(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + int n, + const cuComplex* alpha, + const cuComplex* A, + int lda, + int nnz, + const cuComplex* xVal, + const int* xInd, + const cuComplex* beta, + cuComplex* y, + cusparseIndexBase_t idxBase, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseCgemvi_bufferSize(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + int n, + int nnz, + int* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseZgemvi(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + int n, + const cuDoubleComplex* alpha, + const cuDoubleComplex* A, + int lda, + int nnz, + const cuDoubleComplex* xVal, + const int* xInd, + const cuDoubleComplex* beta, + cuDoubleComplex* y, + cusparseIndexBase_t idxBase, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseZgemvi_bufferSize(cusparseHandle_t handle, + cusparseOperation_t transA, + int m, + int n, + int nnz, + int* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSbsrmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nb, + int nnzb, + const float* alpha, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + const float* x, + const float* beta, + float* y); + +cusparseStatus_t CUSPARSEAPI +cusparseDbsrmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nb, + int nnzb, + const double* alpha, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, 
+ const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + const double* x, + const double* beta, + double* y); + +cusparseStatus_t CUSPARSEAPI +cusparseCbsrmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nb, + int nnzb, + const cuComplex* alpha, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + const cuComplex* x, + const cuComplex* beta, + cuComplex* y); + +cusparseStatus_t CUSPARSEAPI +cusparseZbsrmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nb, + int nnzb, + const cuDoubleComplex* alpha, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + const cuDoubleComplex* x, + const cuDoubleComplex* beta, + cuDoubleComplex* y); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrxmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int sizeOfMask, + int mb, + int nb, + int nnzb, + const float* alpha, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedMaskPtrA, + const int* bsrSortedRowPtrA, + const int* bsrSortedEndPtrA, + const int* bsrSortedColIndA, + int blockDim, + const float* x, + const float* beta, + float* y); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrxmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int sizeOfMask, + int mb, + int nb, + int nnzb, + const double* alpha, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedMaskPtrA, + const int* bsrSortedRowPtrA, + const int* bsrSortedEndPtrA, + const int* bsrSortedColIndA, + int blockDim, + const double* x, + const double* beta, + double* y); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseCbsrxmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int sizeOfMask, + int mb, + int nb, + int nnzb, + const cuComplex* alpha, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedMaskPtrA, + const int* bsrSortedRowPtrA, + const int* bsrSortedEndPtrA, + const int* bsrSortedColIndA, + int blockDim, + const cuComplex* x, + const cuComplex* beta, + cuComplex* y); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrxmv(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int sizeOfMask, + int mb, + int nb, + int nnzb, + const cuDoubleComplex* alpha, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedMaskPtrA, + const int* bsrSortedRowPtrA, + const int* bsrSortedEndPtrA, + const int* bsrSortedColIndA, + int blockDim, + const cuDoubleComplex* x, + const cuDoubleComplex* beta, + cuDoubleComplex* y); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseXbsrsv2_zeroPivot(cusparseHandle_t handle, + bsrsv2Info_t info, + int* position); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsv2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsv2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrsv2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t 
dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsv2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsv2_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockSize, + bsrsv2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsv2_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockSize, + bsrsv2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrsv2_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockSize, + bsrsv2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsv2_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* 
bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockSize, + bsrsv2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsv2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsv2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrsv2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsv2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsv2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const float* alpha, + const cusparseMatDescr_t descrA, + const float* 
bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + const float* f, + float* x, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsv2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const double* alpha, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + const double* f, + double* x, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrsv2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cuComplex* alpha, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + const cuComplex* f, + cuComplex* x, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsv2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + int mb, + int nnzb, + const cuDoubleComplex* alpha, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int blockDim, + bsrsv2Info_t info, + const cuDoubleComplex* f, + cuDoubleComplex* x, + cusparseSolvePolicy_t policy, + void* pBuffer); + +//############################################################################## +//# SPARSE LEVEL 3 ROUTINES +//############################################################################## + +cusparseStatus_t CUSPARSEAPI +cusparseSbsrmm(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int kb, + int nnzb, + const 
float* alpha, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + const int blockSize, + const float* B, + const int ldb, + const float* beta, + float* C, + int ldc); + +cusparseStatus_t CUSPARSEAPI +cusparseDbsrmm(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int kb, + int nnzb, + const double* alpha, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + const int blockSize, + const double* B, + const int ldb, + const double* beta, + double* C, + int ldc); + +cusparseStatus_t CUSPARSEAPI +cusparseCbsrmm(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int kb, + int nnzb, + const cuComplex* alpha, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + const int blockSize, + const cuComplex* B, + const int ldb, + const cuComplex* beta, + cuComplex* C, + int ldc); + +cusparseStatus_t CUSPARSEAPI +cusparseZbsrmm(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int kb, + int nnzb, + const cuDoubleComplex* alpha, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + const int blockSize, + const cuDoubleComplex* B, + const int ldb, + const cuDoubleComplex* beta, + cuDoubleComplex* C, + int ldc); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseXbsrsm2_zeroPivot(cusparseHandle_t handle, + bsrsm2Info_t info, + int* position); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsm2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + 
cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsm2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrsm2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsm2_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsm2_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsm2_bufferSizeExt(cusparseHandle_t 
handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrsm2_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsm2_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transB, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsm2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + const float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsm2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + const double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + cusparseSolvePolicy_t policy, + 
void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrsm2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsm2_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrsm2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const float* alpha, + const cusparseMatDescr_t descrA, + const float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + const float* B, + int ldb, + float* X, + int ldx, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrsm2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const double* alpha, + const cusparseMatDescr_t descrA, + const double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + const double* B, + int ldb, + double* X, + int ldx, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseCbsrsm2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cuComplex* alpha, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + const cuComplex* B, + int ldb, + cuComplex* X, + int ldx, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrsm2_solve(cusparseHandle_t handle, + cusparseDirection_t dirA, + cusparseOperation_t transA, + cusparseOperation_t transXY, + int mb, + int n, + int nnzb, + const cuDoubleComplex* alpha, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrsm2Info_t info, + const cuDoubleComplex* B, + int ldb, + cuDoubleComplex* X, + int ldx, + cusparseSolvePolicy_t policy, + void* pBuffer); + +//############################################################################## +//# PRECONDITIONERS +//############################################################################## + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsrilu02_numericBoost(cusparseHandle_t handle, + csrilu02Info_t info, + int enable_boost, + double* tol, + float* boost_val); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsrilu02_numericBoost(cusparseHandle_t handle, + csrilu02Info_t info, + int enable_boost, + double* tol, + double* boost_val); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsrilu02_numericBoost(cusparseHandle_t handle, + csrilu02Info_t info, + int enable_boost, + double* tol, + cuComplex* boost_val); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsrilu02_numericBoost(cusparseHandle_t handle, + csrilu02Info_t info, + int enable_boost, + double* tol, + cuDoubleComplex* boost_val); + +CUSPARSE_DEPRECATED 
+cusparseStatus_t CUSPARSEAPI +cusparseXcsrilu02_zeroPivot(cusparseHandle_t handle, + csrilu02Info_t info, + int* position); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsrilu02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsrilu02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsrilu02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsrilu02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuDoubleComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsrilu02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + float* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + csrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsrilu02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + double* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + csrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseCcsrilu02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuComplex* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + csrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsrilu02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuDoubleComplex* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + csrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsrilu02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsrilu02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsrilu02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsrilu02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsrilu02(cusparseHandle_t handle, + int m, + int nnz, + const 
cusparseMatDescr_t descrA, + float* csrSortedValA_valM, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsrilu02(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + double* csrSortedValA_valM, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsrilu02(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuComplex* csrSortedValA_valM, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsrilu02(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuDoubleComplex* csrSortedValA_valM, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrilu02_numericBoost(cusparseHandle_t handle, + bsrilu02Info_t info, + int enable_boost, + double* tol, + float* boost_val); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrilu02_numericBoost(cusparseHandle_t handle, + bsrilu02Info_t info, + int enable_boost, + double* tol, + double* boost_val); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrilu02_numericBoost(cusparseHandle_t handle, + bsrilu02Info_t info, + int enable_boost, + double* tol, + cuComplex* boost_val); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrilu02_numericBoost(cusparseHandle_t handle, + bsrilu02Info_t info, + int enable_boost, + double* tol, + cuDoubleComplex* boost_val); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseXbsrilu02_zeroPivot(cusparseHandle_t handle, + bsrilu02Info_t info, + int* position); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrilu02_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrilu02_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrilu02_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrilu02_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrilu02_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrilu02_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const 
cusparseMatDescr_t descrA, + double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrilu02_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrilu02_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockSize, + bsrilu02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrilu02_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrilu02_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrilu02_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED 
+cusparseStatus_t CUSPARSEAPI +cusparseZbsrilu02_analysis(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSbsrilu02(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + float* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDbsrilu02(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + double* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCbsrilu02(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZbsrilu02(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nnzb, + const cusparseMatDescr_t descrA, + cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int blockDim, + bsrilu02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseXcsric02_zeroPivot(cusparseHandle_t handle, + csric02Info_t info, + int* position); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseScsric02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsric02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsric02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsric02_bufferSize(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuDoubleComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + int* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsric02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + float* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + csric02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsric02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + double* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + csric02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsric02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuComplex* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + 
csric02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsric02_bufferSizeExt(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + cuDoubleComplex* csrSortedVal, + const int* csrSortedRowPtr, + const int* csrSortedColInd, + csric02Info_t info, + size_t* pBufferSize); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsric02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsric02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsric02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsric02_analysis(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsric02(cusparseHandle_t handle, + int m, + int nnz, + const cusparseMatDescr_t descrA, + float* csrSortedValA_valM, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + csric02Info_t info, + cusparseSolvePolicy_t policy, + void* pBuffer); + +CUSPARSE_DEPRECATED 
+cusparseStatus_t CUSPARSEAPI
+cusparseDcsric02(cusparseHandle_t handle,
+    int m,
+    int nnz,
+    const cusparseMatDescr_t descrA,
+    double* csrSortedValA_valM,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    csric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCcsric02(cusparseHandle_t handle,
+    int m,
+    int nnz,
+    const cusparseMatDescr_t descrA,
+    cuComplex* csrSortedValA_valM,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    csric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZcsric02(cusparseHandle_t handle,
+    int m,
+    int nnz,
+    const cusparseMatDescr_t descrA,
+    cuDoubleComplex* csrSortedValA_valM,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    csric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseXbsric02_zeroPivot(cusparseHandle_t handle,
+    bsric02Info_t info,
+    int* position);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseSbsric02_bufferSize(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    float* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    int* pBufferSizeInBytes);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDbsric02_bufferSize(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    double* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    int* pBufferSizeInBytes);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCbsric02_bufferSize(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    cuComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    int* pBufferSizeInBytes);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZbsric02_bufferSize(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    cuDoubleComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    int* pBufferSizeInBytes);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseSbsric02_bufferSizeExt(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    float* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockSize,
+    bsric02Info_t info,
+    size_t* pBufferSize);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDbsric02_bufferSizeExt(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    double* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockSize,
+    bsric02Info_t info,
+    size_t* pBufferSize);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCbsric02_bufferSizeExt(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    cuComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockSize,
+    bsric02Info_t info,
+    size_t* pBufferSize);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZbsric02_bufferSizeExt(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    cuDoubleComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockSize,
+    bsric02Info_t info,
+    size_t* pBufferSize);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseSbsric02_analysis(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    const float* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pInputBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDbsric02_analysis(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    const double* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pInputBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCbsric02_analysis(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    const cuComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pInputBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZbsric02_analysis(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    const cuDoubleComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pInputBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseSbsric02(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    float* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDbsric02(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    double* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCbsric02(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    cuComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pBuffer);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZbsric02(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nnzb,
+    const cusparseMatDescr_t descrA,
+    cuDoubleComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int blockDim,
+    bsric02Info_t info,
+    cusparseSolvePolicy_t policy,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsv2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const float* dl,
+    const float* d,
+    const float* du,
+    const float* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsv2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const double* dl,
+    const double* d,
+    const double* du,
+    const double* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsv2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    const cuComplex* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsv2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    const cuDoubleComplex* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsv2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const float* dl,
+    const float* d,
+    const float* du,
+    float* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsv2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const double* dl,
+    const double* d,
+    const double* du,
+    double* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsv2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    cuComplex* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsv2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    cuDoubleComplex* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsv2_nopivot_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const float* dl,
+    const float* d,
+    const float* du,
+    const float* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsv2_nopivot_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const double* dl,
+    const double* d,
+    const double* du,
+    const double* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsv2_nopivot_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    const cuComplex* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsv2_nopivot_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    const cuDoubleComplex* B,
+    int ldb,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsv2_nopivot(cusparseHandle_t handle,
+    int m,
+    int n,
+    const float* dl,
+    const float* d,
+    const float* du,
+    float* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsv2_nopivot(cusparseHandle_t handle,
+    int m,
+    int n,
+    const double* dl,
+    const double* d,
+    const double* du,
+    double* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsv2_nopivot(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    cuComplex* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsv2_nopivot(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    cuDoubleComplex* B,
+    int ldb,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsv2StridedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    const float* dl,
+    const float* d,
+    const float* du,
+    const float* x,
+    int batchCount,
+    int batchStride,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsv2StridedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    const double* dl,
+    const double* d,
+    const double* du,
+    const double* x,
+    int batchCount,
+    int batchStride,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsv2StridedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    const cuComplex* x,
+    int batchCount,
+    int batchStride,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsv2StridedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    const cuDoubleComplex* x,
+    int batchCount,
+    int batchStride,
+    size_t* bufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsv2StridedBatch(cusparseHandle_t handle,
+    int m,
+    const float* dl,
+    const float* d,
+    const float* du,
+    float* x,
+    int batchCount,
+    int batchStride,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsv2StridedBatch(cusparseHandle_t handle,
+    int m,
+    const double* dl,
+    const double* d,
+    const double* du,
+    double* x,
+    int batchCount,
+    int batchStride,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsv2StridedBatch(cusparseHandle_t handle,
+    int m,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    cuComplex* x,
+    int batchCount,
+    int batchStride,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsv2StridedBatch(cusparseHandle_t handle,
+    int m,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    cuDoubleComplex* x,
+    int batchCount,
+    int batchStride,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const float* dl,
+    const float* d,
+    const float* du,
+    const float* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const double* dl,
+    const double* d,
+    const double* du,
+    const double* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    const cuComplex* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    const cuDoubleComplex* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgtsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    float* dl,
+    float* d,
+    float* du,
+    float* x,
+    int batchCount,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgtsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    double* dl,
+    double* d,
+    double* du,
+    double* x,
+    int batchCount,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgtsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    cuComplex* dl,
+    cuComplex* d,
+    cuComplex* du,
+    cuComplex* x,
+    int batchCount,
+    void* pBuffer);
+
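+/* Usage sketch (illustrative, not exhaustive; assumes a valid cusparse handle
+ * and device arrays dl, d, du, B with the dimensions described above): the
+ * gtsv2 tridiagonal solvers follow the usual two-step cuSPARSE pattern --
+ * query the required workspace size, allocate it on the device, then solve
+ * in place, overwriting B with the solution. Error checking is elided.
+ *
+ *     size_t bufferSize = 0;
+ *     void*  pBuffer    = NULL;
+ *     cusparseSgtsv2_bufferSizeExt(handle, m, n, dl, d, du, B, ldb,
+ *                                  &bufferSize);
+ *     cudaMalloc(&pBuffer, bufferSize);
+ *     cusparseSgtsv2(handle, m, n, dl, d, du, B, ldb, pBuffer);
+ *     cudaFree(pBuffer);
+ */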
+cusparseStatus_t CUSPARSEAPI
+cusparseZgtsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    cuDoubleComplex* dl,
+    cuDoubleComplex* d,
+    cuDoubleComplex* du,
+    cuDoubleComplex* x,
+    int batchCount,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgpsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const float* ds,
+    const float* dl,
+    const float* d,
+    const float* du,
+    const float* dw,
+    const float* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgpsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const double* ds,
+    const double* dl,
+    const double* d,
+    const double* du,
+    const double* dw,
+    const double* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgpsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const cuComplex* ds,
+    const cuComplex* dl,
+    const cuComplex* d,
+    const cuComplex* du,
+    const cuComplex* dw,
+    const cuComplex* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgpsvInterleavedBatch_bufferSizeExt(cusparseHandle_t handle,
+    int algo,
+    int m,
+    const cuDoubleComplex* ds,
+    const cuDoubleComplex* dl,
+    const cuDoubleComplex* d,
+    const cuDoubleComplex* du,
+    const cuDoubleComplex* dw,
+    const cuDoubleComplex* x,
+    int batchCount,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgpsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    float* ds,
+    float* dl,
+    float* d,
+    float* du,
+    float* dw,
+    float* x,
+    int batchCount,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgpsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    double* ds,
+    double* dl,
+    double* d,
+    double* du,
+    double* dw,
+    double* x,
+    int batchCount,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgpsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    cuComplex* ds,
+    cuComplex* dl,
+    cuComplex* d,
+    cuComplex* du,
+    cuComplex* dw,
+    cuComplex* x,
+    int batchCount,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgpsvInterleavedBatch(cusparseHandle_t handle,
+    int algo,
+    int m,
+    cuDoubleComplex* ds,
+    cuDoubleComplex* dl,
+    cuDoubleComplex* d,
+    cuDoubleComplex* du,
+    cuDoubleComplex* dw,
+    cuDoubleComplex* x,
+    int batchCount,
+    void* pBuffer);
+
+//##############################################################################
+//# EXTRA ROUTINES
+//##############################################################################
+
+cusparseStatus_t CUSPARSEAPI
+cusparseScsrgeam2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const float* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const float* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const float* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const float* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    const float* csrSortedValC,
+    const int* csrSortedRowPtrC,
+    const int* csrSortedColIndC,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDcsrgeam2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const double* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const double* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const double* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const double* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    const double* csrSortedValC,
+    const int* csrSortedRowPtrC,
+    const int* csrSortedColIndC,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCcsrgeam2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuComplex* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const cuComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const cuComplex* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const cuComplex* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    const cuComplex* csrSortedValC,
+    const int* csrSortedRowPtrC,
+    const int* csrSortedColIndC,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZcsrgeam2_bufferSizeExt(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuDoubleComplex* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const cuDoubleComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const cuDoubleComplex* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const cuDoubleComplex* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    const cuDoubleComplex* csrSortedValC,
+    const int* csrSortedRowPtrC,
+    const int* csrSortedColIndC,
+    size_t* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseXcsrgeam2Nnz(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    int* csrSortedRowPtrC,
+    int* nnzTotalDevHostPtr,
+    void* workspace);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseScsrgeam2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const float* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const float* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const float* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const float* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    float* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDcsrgeam2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const double* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const double* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const double* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const double* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    double* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCcsrgeam2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuComplex* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const cuComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const cuComplex* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const cuComplex* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    cuComplex* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZcsrgeam2(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cuDoubleComplex* alpha,
+    const cusparseMatDescr_t descrA,
+    int nnzA,
+    const cuDoubleComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const cuDoubleComplex* beta,
+    const cusparseMatDescr_t descrB,
+    int nnzB,
+    const cuDoubleComplex* csrSortedValB,
+    const int* csrSortedRowPtrB,
+    const int* csrSortedColIndB,
+    const cusparseMatDescr_t descrC,
+    cuDoubleComplex* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC,
+    void* pBuffer);
+
+//##############################################################################
+//# SPARSE MATRIX REORDERING
+//##############################################################################
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseScsrcolor(cusparseHandle_t handle,
+    int m,
+    int nnz,
+    const cusparseMatDescr_t descrA,
+    const float* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const float* fractionToColor,
+    int* ncolors,
+    int* coloring,
+    int* reordering,
+    const cusparseColorInfo_t info);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDcsrcolor(cusparseHandle_t handle,
+    int m,
+    int nnz,
+    const cusparseMatDescr_t descrA,
+    const double* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const double* fractionToColor,
+    int* ncolors,
+    int* coloring,
+    int* reordering,
+    const cusparseColorInfo_t info);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCcsrcolor(cusparseHandle_t handle,
+    int m,
+    int nnz,
+    const cusparseMatDescr_t descrA,
+    const cuComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const float* fractionToColor,
+    int* ncolors,
+    int* coloring,
+    int* reordering,
+    const cusparseColorInfo_t info);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZcsrcolor(cusparseHandle_t handle,
+    int m,
+    int nnz,
+    const cusparseMatDescr_t descrA,
+    const cuDoubleComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    const double* fractionToColor,
+    int* ncolors,
+    int* coloring,
+    int* reordering,
+    const cusparseColorInfo_t info);
+
+//##############################################################################
+//# SPARSE FORMAT CONVERSION
+//##############################################################################
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSnnz(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const float* A,
+    int lda,
+    int* nnzPerRowCol,
+    int* nnzTotalDevHostPtr);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDnnz(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const double* A,
+    int lda,
+    int* nnzPerRowCol,
+    int* nnzTotalDevHostPtr);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCnnz(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const cuComplex* A,
+    int lda,
+    int* nnzPerRowCol,
+    int* nnzTotalDevHostPtr);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZnnz(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const cuDoubleComplex* A,
+    int lda,
+    int* nnzPerRowCol,
+    int* nnzTotalDevHostPtr);
+
+//##############################################################################
+//# SPARSE FORMAT CONVERSION
+//##############################################################################
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseSnnz_compress(cusparseHandle_t handle,
+    int m,
+    const cusparseMatDescr_t descr,
+    const float* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    int* nnzPerRow,
+    int* nnzC,
+    float tol);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDnnz_compress(cusparseHandle_t handle,
+    int m,
+    const cusparseMatDescr_t descr,
+    const double* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    int* nnzPerRow,
+    int* nnzC,
+    double tol);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCnnz_compress(cusparseHandle_t handle,
+    int m,
+    const cusparseMatDescr_t descr,
+    const cuComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    int* nnzPerRow,
+    int* nnzC,
+    cuComplex tol);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZnnz_compress(cusparseHandle_t handle,
+    int m,
+    const cusparseMatDescr_t descr,
+    const cuDoubleComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    int* nnzPerRow,
+    int* nnzC,
+    cuDoubleComplex tol);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseScsr2csr_compress(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const float* csrSortedValA,
+    const int* csrSortedColIndA,
+    const int* csrSortedRowPtrA,
+    int nnzA,
+    const int* nnzPerRow,
+    float* csrSortedValC,
+    int* csrSortedColIndC,
+    int* csrSortedRowPtrC,
+    float tol);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDcsr2csr_compress(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const double* csrSortedValA,
+    const int* csrSortedColIndA,
+    const int* csrSortedRowPtrA,
+    int nnzA,
+    const int* nnzPerRow,
+    double* csrSortedValC,
+    int* csrSortedColIndC,
+    int* csrSortedRowPtrC,
+    double tol);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCcsr2csr_compress(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const cuComplex* csrSortedValA,
+    const int* csrSortedColIndA,
+    const int* csrSortedRowPtrA,
+    int nnzA,
+    const int* nnzPerRow,
+    cuComplex* csrSortedValC,
+    int* csrSortedColIndC,
+    int* csrSortedRowPtrC,
+    cuComplex tol);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZcsr2csr_compress(cusparseHandle_t handle,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const cuDoubleComplex* csrSortedValA,
+    const int* csrSortedColIndA,
+    const int* csrSortedRowPtrA,
+    int nnzA,
+    const int* nnzPerRow,
+    cuDoubleComplex* csrSortedValC,
+    int* csrSortedColIndC,
+    int* csrSortedRowPtrC,
+    cuDoubleComplex tol);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseXcoo2csr(cusparseHandle_t handle,
+    const int* cooRowInd,
+    int nnz,
+    int m,
+    int* csrSortedRowPtr,
+    cusparseIndexBase_t idxBase);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseXcsr2coo(cusparseHandle_t handle,
+    const int* csrSortedRowPtr,
+    int nnz,
+    int m,
+    int* cooRowInd,
+    cusparseIndexBase_t idxBase);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseXcsr2bsrNnz(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    int* bsrSortedRowPtrC,
+    int* nnzTotalDevHostPtr);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseScsr2bsr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const float* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    float* bsrSortedValC,
+    int* bsrSortedRowPtrC,
+    int* bsrSortedColIndC);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseDcsr2bsr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const double* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    double* bsrSortedValC,
+    int* bsrSortedRowPtrC,
+    int* bsrSortedColIndC);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseCcsr2bsr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const cuComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    cuComplex* bsrSortedValC,
+    int* bsrSortedRowPtrC,
+    int* bsrSortedColIndC);
+
+CUSPARSE_DEPRECATED
+cusparseStatus_t CUSPARSEAPI
+cusparseZcsr2bsr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int m,
+    int n,
+    const cusparseMatDescr_t descrA,
+    const cuDoubleComplex* csrSortedValA,
+    const int* csrSortedRowPtrA,
+    const int* csrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    cuDoubleComplex* bsrSortedValC,
+    int* bsrSortedRowPtrC,
+    int* bsrSortedColIndC);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSbsr2csr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nb,
+    const cusparseMatDescr_t descrA,
+    const float* bsrSortedValA,
+    const int* bsrSortedRowPtrA,
+    const int* bsrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    float* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDbsr2csr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nb,
+    const cusparseMatDescr_t descrA,
+    const double* bsrSortedValA,
+    const int* bsrSortedRowPtrA,
+    const int* bsrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    double* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCbsr2csr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nb,
+    const cusparseMatDescr_t descrA,
+    const cuComplex* bsrSortedValA,
+    const int* bsrSortedRowPtrA,
+    const int* bsrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    cuComplex* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZbsr2csr(cusparseHandle_t handle,
+    cusparseDirection_t dirA,
+    int mb,
+    int nb,
+    const cusparseMatDescr_t descrA,
+    const cuDoubleComplex* bsrSortedValA,
+    const int* bsrSortedRowPtrA,
+    const int* bsrSortedColIndA,
+    int blockDim,
+    const cusparseMatDescr_t descrC,
+    cuDoubleComplex* csrSortedValC,
+    int* csrSortedRowPtrC,
+    int* csrSortedColIndC);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgebsr2gebsc_bufferSize(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const float* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    int* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgebsr2gebsc_bufferSize(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const double* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    int* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgebsr2gebsc_bufferSize(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const cuComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    int* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgebsr2gebsc_bufferSize(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const cuDoubleComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    int* pBufferSizeInBytes);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgebsr2gebsc_bufferSizeExt(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const float* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    size_t* pBufferSize);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgebsr2gebsc_bufferSizeExt(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const double* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    size_t* pBufferSize);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseCgebsr2gebsc_bufferSizeExt(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const cuComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    size_t* pBufferSize);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseZgebsr2gebsc_bufferSizeExt(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const cuDoubleComplex* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    size_t* pBufferSize);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseSgebsr2gebsc(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const float* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+    int rowBlockDim,
+    int colBlockDim,
+    float* bscVal,
+    int* bscRowInd,
+    int* bscColPtr,
+    cusparseAction_t copyValues,
+    cusparseIndexBase_t idxBase,
+    void* pBuffer);
+
+cusparseStatus_t CUSPARSEAPI
+cusparseDgebsr2gebsc(cusparseHandle_t handle,
+    int mb,
+    int nb,
+    int nnzb,
+    const double* bsrSortedVal,
+    const int* bsrSortedRowPtr,
+    const int* bsrSortedColInd,
+ int rowBlockDim, + int colBlockDim, + double* bscVal, + int* bscRowInd, + int* bscColPtr, + cusparseAction_t copyValues, + cusparseIndexBase_t idxBase, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseCgebsr2gebsc(cusparseHandle_t handle, + int mb, + int nb, + int nnzb, + const cuComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int rowBlockDim, + int colBlockDim, + cuComplex* bscVal, + int* bscRowInd, + int* bscColPtr, + cusparseAction_t copyValues, + cusparseIndexBase_t idxBase, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseZgebsr2gebsc(cusparseHandle_t handle, + int mb, + int nb, + int nnzb, + const cuDoubleComplex* bsrSortedVal, + const int* bsrSortedRowPtr, + const int* bsrSortedColInd, + int rowBlockDim, + int colBlockDim, + cuDoubleComplex* bscVal, + int* bscRowInd, + int* bscColPtr, + cusparseAction_t copyValues, + cusparseIndexBase_t idxBase, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseXgebsr2csr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + const cusparseMatDescr_t descrA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDim, + int colBlockDim, + const cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* csrSortedColIndC); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSgebsr2csr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDim, + int colBlockDim, + const cusparseMatDescr_t descrC, + float* csrSortedValC, + int* csrSortedRowPtrC, + int* csrSortedColIndC); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDgebsr2csr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* 
bsrSortedColIndA, + int rowBlockDim, + int colBlockDim, + const cusparseMatDescr_t descrC, + double* csrSortedValC, + int* csrSortedRowPtrC, + int* csrSortedColIndC); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCgebsr2csr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDim, + int colBlockDim, + const cusparseMatDescr_t descrC, + cuComplex* csrSortedValC, + int* csrSortedRowPtrC, + int* csrSortedColIndC); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZgebsr2csr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDim, + int colBlockDim, + const cusparseMatDescr_t descrC, + cuDoubleComplex* csrSortedValC, + int* csrSortedRowPtrC, + int* csrSortedColIndC); + +cusparseStatus_t CUSPARSEAPI +cusparseScsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseDcsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseCcsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const cuComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + 
int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseZcsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseScsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseDcsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseCcsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const cuComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseZcsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + int rowBlockDim, + int colBlockDim, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseXcsr2gebsrNnz(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const cusparseMatDescr_t descrC, + int* bsrSortedRowPtrC, + int rowBlockDim, + int 
colBlockDim, + int* nnzTotalDevHostPtr, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseScsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const cusparseMatDescr_t descrC, + float* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDim, + int colBlockDim, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseDcsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const cusparseMatDescr_t descrC, + double* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDim, + int colBlockDim, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseCcsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const cuComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const cusparseMatDescr_t descrC, + cuComplex* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDim, + int colBlockDim, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseZcsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int m, + int n, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const cusparseMatDescr_t descrC, + cuDoubleComplex* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDim, + int colBlockDim, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSgebsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedRowPtrA, + 
const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseDgebsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseCgebsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseZgebsr2gebsr_bufferSize(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + int* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseSgebsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseDgebsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int 
colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseCgebsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseZgebsr2gebsr_bufferSizeExt(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + int rowBlockDimC, + int colBlockDimC, + size_t* pBufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseXgebsr2gebsrNnz(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + const cusparseMatDescr_t descrC, + int* bsrSortedRowPtrC, + int rowBlockDimC, + int colBlockDimC, + int* nnzTotalDevHostPtr, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSgebsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const float* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + const cusparseMatDescr_t descrC, + float* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDimC, + int colBlockDimC, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseDgebsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const double* bsrSortedValA, + const int* bsrSortedRowPtrA, + 
const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + const cusparseMatDescr_t descrC, + double* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDimC, + int colBlockDimC, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseCgebsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + const cusparseMatDescr_t descrC, + cuComplex* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDimC, + int colBlockDimC, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseZgebsr2gebsr(cusparseHandle_t handle, + cusparseDirection_t dirA, + int mb, + int nb, + int nnzb, + const cusparseMatDescr_t descrA, + const cuDoubleComplex* bsrSortedValA, + const int* bsrSortedRowPtrA, + const int* bsrSortedColIndA, + int rowBlockDimA, + int colBlockDimA, + const cusparseMatDescr_t descrC, + cuDoubleComplex* bsrSortedValC, + int* bsrSortedRowPtrC, + int* bsrSortedColIndC, + int rowBlockDimC, + int colBlockDimC, + void* pBuffer); + +//############################################################################## +//# SPARSE MATRIX SORTING +//############################################################################## + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCreateIdentityPermutation(cusparseHandle_t handle, + int n, + int* p); + +cusparseStatus_t CUSPARSEAPI +cusparseXcoosort_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnz, + const int* cooRowsA, + const int* cooColsA, + size_t* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseXcoosortByRow(cusparseHandle_t handle, + int m, + int n, + int nnz, + int* cooRowsA, + int* cooColsA, + int* P, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseXcoosortByColumn(cusparseHandle_t handle, + int m, + 
int n, + int nnz, + int* cooRowsA, + int* cooColsA, + int* P, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseXcsrsort_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnz, + const int* csrRowPtrA, + const int* csrColIndA, + size_t* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseXcsrsort(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const int* csrRowPtrA, + int* csrColIndA, + int* P, + void* pBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseXcscsort_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnz, + const int* cscColPtrA, + const int* cscRowIndA, + size_t* pBufferSizeInBytes); + +cusparseStatus_t CUSPARSEAPI +cusparseXcscsort(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + const int* cscColPtrA, + int* cscRowIndA, + int* P, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsru2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnz, + float* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsru2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnz, + double* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsru2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnz, + cuComplex* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsru2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnz, + cuDoubleComplex* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseScsru2csr(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + float* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsru2csr(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + double* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsru2csr(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + cuComplex* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsru2csr(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + cuDoubleComplex* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseScsr2csru(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + float* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDcsr2csru(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + double* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseCcsr2csru(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + cuComplex* csrVal, + const int* csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseZcsr2csru(cusparseHandle_t handle, + int m, + int n, + int nnz, + const cusparseMatDescr_t descrA, + cuDoubleComplex* csrVal, + const int* 
csrRowPtr, + int* csrColInd, + csru2csrInfo_t info, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneDense2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + const __half* A, + int lda, + const __half* threshold, + const cusparseMatDescr_t descrC, + const __half* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + size_t* pBufferSizeInBytes); +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneDense2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + const float* A, + int lda, + const float* threshold, + const cusparseMatDescr_t descrC, + const float* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneDense2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + const double* A, + int lda, + const double* threshold, + const cusparseMatDescr_t descrC, + const double* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + size_t* pBufferSizeInBytes); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneDense2csrNnz(cusparseHandle_t handle, + int m, + int n, + const __half* A, + int lda, + const __half* threshold, + const cusparseMatDescr_t descrC, + int* csrRowPtrC, + int* nnzTotalDevHostPtr, + void* pBuffer); +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneDense2csrNnz(cusparseHandle_t handle, + int m, + int n, + const float* A, + int lda, + const float* threshold, + const cusparseMatDescr_t descrC, + int* csrRowPtrC, + int* nnzTotalDevHostPtr, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneDense2csrNnz(cusparseHandle_t handle, + int m, + int n, + const double* A, + int lda, + const double* threshold, + const 
cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* nnzTotalDevHostPtr, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneDense2csr(cusparseHandle_t handle, + int m, + int n, + const __half* A, + int lda, + const __half* threshold, + const cusparseMatDescr_t descrC, + __half* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + void* pBuffer); +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneDense2csr(cusparseHandle_t handle, + int m, + int n, + const float* A, + int lda, + const float* threshold, + const cusparseMatDescr_t descrC, + float* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneDense2csr(cusparseHandle_t handle, + int m, + int n, + const double* A, + int lda, + const double* threshold, + const cusparseMatDescr_t descrC, + double* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneCsr2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const __half* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const __half* threshold, + const cusparseMatDescr_t descrC, + const __half* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + size_t* pBufferSizeInBytes); +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneCsr2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const float* threshold, + const cusparseMatDescr_t descrC, + const float* csrSortedValC, + const int* 
csrSortedRowPtrC, + const int* csrSortedColIndC, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneCsr2csr_bufferSizeExt(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const double* threshold, + const cusparseMatDescr_t descrC, + const double* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + size_t* pBufferSizeInBytes); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneCsr2csrNnz(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const __half* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const __half* threshold, + const cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* nnzTotalDevHostPtr, + void* pBuffer); +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneCsr2csrNnz(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const float* threshold, + const cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* nnzTotalDevHostPtr, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneCsr2csrNnz(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const double* threshold, + const cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* nnzTotalDevHostPtr, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneCsr2csr(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const __half* 
csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const __half* threshold, + const cusparseMatDescr_t descrC, + __half* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + void* pBuffer); +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneCsr2csr(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const float* threshold, + const cusparseMatDescr_t descrC, + float* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneCsr2csr(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + const double* threshold, + const cusparseMatDescr_t descrC, + double* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneDense2csrByPercentage_bufferSizeExt( + cusparseHandle_t handle, + int m, + int n, + const __half* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + const __half* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + pruneInfo_t info, + size_t* pBufferSizeInBytes); + +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneDense2csrByPercentage_bufferSizeExt( + cusparseHandle_t handle, + int m, + int n, + const float* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + const float* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + pruneInfo_t info, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseDpruneDense2csrByPercentage_bufferSizeExt( + cusparseHandle_t handle, + int m, + int n, + const double* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + const double* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + pruneInfo_t info, + size_t* pBufferSizeInBytes); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneDense2csrNnzByPercentage( + cusparseHandle_t handle, + int m, + int n, + const __half* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + int* csrRowPtrC, + int* nnzTotalDevHostPtr, + pruneInfo_t info, + void* pBuffer); +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneDense2csrNnzByPercentage( + cusparseHandle_t handle, + int m, + int n, + const float* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + int* csrRowPtrC, + int* nnzTotalDevHostPtr, + pruneInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneDense2csrNnzByPercentage( + cusparseHandle_t handle, + int m, + int n, + const double* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + int* csrRowPtrC, + int* nnzTotalDevHostPtr, + pruneInfo_t info, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneDense2csrByPercentage(cusparseHandle_t handle, + int m, + int n, + const __half* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + __half* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + pruneInfo_t info, + void* pBuffer); + +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneDense2csrByPercentage(cusparseHandle_t handle, + int m, + int n, + const float* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + float* csrSortedValC, + const int* csrSortedRowPtrC, + int* 
csrSortedColIndC, + pruneInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneDense2csrByPercentage(cusparseHandle_t handle, + int m, + int n, + const double* A, + int lda, + float percentage, + const cusparseMatDescr_t descrC, + double* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + pruneInfo_t info, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneCsr2csrByPercentage_bufferSizeExt( + cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const __half* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + const __half* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + pruneInfo_t info, + size_t* pBufferSizeInBytes); + +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneCsr2csrByPercentage_bufferSizeExt( + cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + const float* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + pruneInfo_t info, + size_t* pBufferSizeInBytes); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneCsr2csrByPercentage_bufferSizeExt( + cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + const double* csrSortedValC, + const int* csrSortedRowPtrC, + const int* csrSortedColIndC, + pruneInfo_t info, + size_t* pBufferSizeInBytes); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI 
+cusparseHpruneCsr2csrNnzByPercentage( + cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const __half* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* nnzTotalDevHostPtr, + pruneInfo_t info, + void* pBuffer); + +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneCsr2csrNnzByPercentage( + cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* nnzTotalDevHostPtr, + pruneInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneCsr2csrNnzByPercentage( + cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + int* csrSortedRowPtrC, + int* nnzTotalDevHostPtr, + pruneInfo_t info, + void* pBuffer); + +#if defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseHpruneCsr2csrByPercentage(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const __half* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, /* between 0 to 100 */ + const cusparseMatDescr_t descrC, + __half* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + pruneInfo_t info, + void* pBuffer); + +#endif // defined(__cplusplus) + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseSpruneCsr2csrByPercentage(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const float* csrSortedValA, + const int* 
csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + float* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + pruneInfo_t info, + void* pBuffer); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseDpruneCsr2csrByPercentage(cusparseHandle_t handle, + int m, + int n, + int nnzA, + const cusparseMatDescr_t descrA, + const double* csrSortedValA, + const int* csrSortedRowPtrA, + const int* csrSortedColIndA, + float percentage, + const cusparseMatDescr_t descrC, + double* csrSortedValC, + const int* csrSortedRowPtrC, + int* csrSortedColIndC, + pruneInfo_t info, + void* pBuffer); + +//############################################################################## +//# CSR2CSC +//############################################################################## + +typedef enum { + CUSPARSE_CSR2CSC_ALG_DEFAULT = 1, + CUSPARSE_CSR2CSC_ALG1 = 1 +} cusparseCsr2CscAlg_t; + +cusparseStatus_t CUSPARSEAPI +cusparseCsr2cscEx2(cusparseHandle_t handle, + int m, + int n, + int nnz, + const void* csrVal, + const int* csrRowPtr, + const int* csrColInd, + void* cscVal, + int* cscColPtr, + int* cscRowInd, + cudaDataType valType, + cusparseAction_t copyValues, + cusparseIndexBase_t idxBase, + cusparseCsr2CscAlg_t alg, + void* buffer); + +cusparseStatus_t CUSPARSEAPI +cusparseCsr2cscEx2_bufferSize(cusparseHandle_t handle, + int m, + int n, + int nnz, + const void* csrVal, + const int* csrRowPtr, + const int* csrColInd, + void* cscVal, + int* cscColPtr, + int* cscRowInd, + cudaDataType valType, + cusparseAction_t copyValues, + cusparseIndexBase_t idxBase, + cusparseCsr2CscAlg_t alg, + size_t* bufferSize); + +// ############################################################################# +// # GENERIC APIs - Enumerators and Opaque Data Structures +// ############################################################################# + +typedef enum { + CUSPARSE_FORMAT_CSR = 1, ///< Compressed Sparse Row (CSR) + 
CUSPARSE_FORMAT_CSC = 2, ///< Compressed Sparse Column (CSC) + CUSPARSE_FORMAT_COO = 3, ///< Coordinate (COO) - Structure of Arrays + CUSPARSE_FORMAT_BLOCKED_ELL = 5, ///< Blocked ELL + CUSPARSE_FORMAT_BSR = 6, ///< Blocked Compressed Sparse Row (BSR) + CUSPARSE_FORMAT_SLICED_ELLPACK = 7 ///< Sliced ELL +} cusparseFormat_t; + +typedef enum { + CUSPARSE_ORDER_COL = 1, ///< Column-Major Order - Matrix memory layout + CUSPARSE_ORDER_ROW = 2 ///< Row-Major Order - Matrix memory layout +} cusparseOrder_t; + +typedef enum { + CUSPARSE_INDEX_16U = 1, ///< 16-bit unsigned integer for matrix/vector + ///< indices + CUSPARSE_INDEX_32I = 2, ///< 32-bit signed integer for matrix/vector indices + CUSPARSE_INDEX_64I = 3 ///< 64-bit signed integer for matrix/vector indices +} cusparseIndexType_t; + +//------------------------------------------------------------------------------ + +struct cusparseSpVecDescr; +struct cusparseDnVecDescr; +struct cusparseSpMatDescr; +struct cusparseDnMatDescr; + +typedef struct cusparseSpVecDescr* cusparseSpVecDescr_t; +typedef struct cusparseDnVecDescr* cusparseDnVecDescr_t; +typedef struct cusparseSpMatDescr* cusparseSpMatDescr_t; +typedef struct cusparseDnMatDescr* cusparseDnMatDescr_t; + +typedef struct cusparseSpVecDescr const* cusparseConstSpVecDescr_t; +typedef struct cusparseDnVecDescr const* cusparseConstDnVecDescr_t; +typedef struct cusparseSpMatDescr const* cusparseConstSpMatDescr_t; +typedef struct cusparseDnMatDescr const* cusparseConstDnMatDescr_t; + +// ############################################################################# +// # SPARSE VECTOR DESCRIPTOR +// ############################################################################# + +cusparseStatus_t CUSPARSEAPI +cusparseCreateSpVec(cusparseSpVecDescr_t* spVecDescr, + int64_t size, + int64_t nnz, + void* indices, + void* values, + cusparseIndexType_t idxType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI 
+cusparseCreateConstSpVec(cusparseConstSpVecDescr_t* spVecDescr, + int64_t size, + int64_t nnz, + const void* indices, + const void* values, + cusparseIndexType_t idxType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseDestroySpVec(cusparseConstSpVecDescr_t spVecDescr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpVecGet(cusparseSpVecDescr_t spVecDescr, + int64_t* size, + int64_t* nnz, + void** indices, + void** values, + cusparseIndexType_t* idxType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseConstSpVecGet(cusparseConstSpVecDescr_t spVecDescr, + int64_t* size, + int64_t* nnz, + const void** indices, + const void** values, + cusparseIndexType_t* idxType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseSpVecGetIndexBase(cusparseConstSpVecDescr_t spVecDescr, + cusparseIndexBase_t* idxBase); + +cusparseStatus_t CUSPARSEAPI +cusparseSpVecGetValues(cusparseSpVecDescr_t spVecDescr, + void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseConstSpVecGetValues(cusparseConstSpVecDescr_t spVecDescr, + const void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseSpVecSetValues(cusparseSpVecDescr_t spVecDescr, + void* values); + +// ############################################################################# +// # DENSE VECTOR DESCRIPTOR +// ############################################################################# + +cusparseStatus_t CUSPARSEAPI +cusparseCreateDnVec(cusparseDnVecDescr_t* dnVecDescr, + int64_t size, + void* values, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstDnVec(cusparseConstDnVecDescr_t* dnVecDescr, + int64_t size, + const void* values, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseDestroyDnVec(cusparseConstDnVecDescr_t dnVecDescr); + +cusparseStatus_t CUSPARSEAPI +cusparseDnVecGet(cusparseDnVecDescr_t dnVecDescr, + int64_t* size, + 
void** values, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseConstDnVecGet(cusparseConstDnVecDescr_t dnVecDescr, + int64_t* size, + const void** values, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseDnVecGetValues(cusparseDnVecDescr_t dnVecDescr, + void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseConstDnVecGetValues(cusparseConstDnVecDescr_t dnVecDescr, + const void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseDnVecSetValues(cusparseDnVecDescr_t dnVecDescr, + void* values); + +// ############################################################################# +// # SPARSE MATRIX DESCRIPTOR +// ############################################################################# + +cusparseStatus_t CUSPARSEAPI +cusparseDestroySpMat(cusparseConstSpMatDescr_t spMatDescr); + + cusparseStatus_t CUSPARSEAPI +cusparseSpMatGetFormat(cusparseConstSpMatDescr_t spMatDescr, + cusparseFormat_t* format); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMatGetIndexBase(cusparseConstSpMatDescr_t spMatDescr, + cusparseIndexBase_t* idxBase); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMatGetValues(cusparseSpMatDescr_t spMatDescr, + void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseConstSpMatGetValues(cusparseConstSpMatDescr_t spMatDescr, + const void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMatSetValues(cusparseSpMatDescr_t spMatDescr, + void* values); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMatGetSize(cusparseConstSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* nnz); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMatGetStridedBatch(cusparseConstSpMatDescr_t spMatDescr, + int* batchCount); + +cusparseStatus_t CUSPARSEAPI +cusparseCooSetStridedBatch(cusparseSpMatDescr_t spMatDescr, + int batchCount, + int64_t batchStride); + +cusparseStatus_t CUSPARSEAPI +cusparseCsrSetStridedBatch(cusparseSpMatDescr_t spMatDescr, + int batchCount, + int64_t offsetsBatchStride, + int64_t columnsValuesBatchStride); + 
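The descriptor routines above are typically used together as a create → query-workspace → compute → destroy sequence. The following sketch (a hypothetical helper, not part of this header) shows one plausible SpMV call chain using only functions declared in this file plus `cusparseCreate`/`cusparseDestroy` and the CUDA runtime; it assumes `d_*` arguments are valid device allocations holding a zero-based 32-bit-index CSR matrix and two dense float vectors, and it elides status checking for brevity. It requires a CUDA-capable device and linking against cuSPARSE, so it is illustrative rather than runnable here.

```c
#include <cuda_runtime.h>
#include <cusparse.h>

/* Hypothetical example: y = alpha * A * x + beta * y via the generic API. */
static void spmv_sketch(int64_t rows, int64_t cols, int64_t nnz,
                        int *d_csrRowOffsets, int *d_csrColInd,
                        float *d_csrValues, float *d_x, float *d_y)
{
    cusparseHandle_t handle;
    cusparseCreate(&handle);

    /* Wrap the raw device arrays in opaque descriptors. */
    cusparseSpMatDescr_t matA;
    cusparseCreateCsr(&matA, rows, cols, nnz,
                      d_csrRowOffsets, d_csrColInd, d_csrValues,
                      CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I,
                      CUSPARSE_INDEX_BASE_ZERO, CUDA_R_32F);

    cusparseDnVecDescr_t vecX, vecY;
    cusparseCreateDnVec(&vecX, cols, d_x, CUDA_R_32F);
    cusparseCreateDnVec(&vecY, rows, d_y, CUDA_R_32F);

    float alpha = 1.0f, beta = 0.0f;
    size_t bufferSize = 0;
    void  *dBuffer    = NULL;

    /* Query the required workspace size, allocate it, then compute. */
    cusparseSpMV_bufferSize(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                            &alpha, matA, vecX, &beta, vecY,
                            CUDA_R_32F, CUSPARSE_SPMV_ALG_DEFAULT,
                            &bufferSize);
    cudaMalloc(&dBuffer, bufferSize);
    cusparseSpMV(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                 &alpha, matA, vecX, &beta, vecY,
                 CUDA_R_32F, CUSPARSE_SPMV_ALG_DEFAULT, dBuffer);

    /* Descriptors do not own the user buffers; destroying them only
       releases the descriptor objects themselves. */
    cudaFree(dBuffer);
    cusparseDestroyDnVec(vecX);
    cusparseDestroyDnVec(vecY);
    cusparseDestroySpMat(matA);
    cusparseDestroy(handle);
}
```

The same create/bufferSize/execute/destroy pattern applies to the other generic-API entry points declared below (SpMM, SpSV, SpGEMM), with per-operation descriptors (`cusparseSpSVDescr_t`, `cusparseSpGEMMDescr_t`) added where the operation keeps analysis state between phases.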
+cusparseStatus_t CUSPARSEAPI +cusparseBsrSetStridedBatch(cusparseSpMatDescr_t spMatDescr, + int batchCount, + int64_t offsetsBatchStride, + int64_t columnsBatchStride, + int64_t ValuesBatchStride); + +typedef enum { + CUSPARSE_SPMAT_FILL_MODE, + CUSPARSE_SPMAT_DIAG_TYPE +} cusparseSpMatAttribute_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSpMatGetAttribute(cusparseConstSpMatDescr_t spMatDescr, + cusparseSpMatAttribute_t attribute, + void* data, + size_t dataSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMatSetAttribute(cusparseSpMatDescr_t spMatDescr, + cusparseSpMatAttribute_t attribute, + void* data, + size_t dataSize); + +//------------------------------------------------------------------------------ +// ### CSR ### + +cusparseStatus_t CUSPARSEAPI +cusparseCreateCsr(cusparseSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + void* csrRowOffsets, + void* csrColInd, + void* csrValues, + cusparseIndexType_t csrRowOffsetsType, + cusparseIndexType_t csrColIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstCsr(cusparseConstSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + const void* csrRowOffsets, + const void* csrColInd, + const void* csrValues, + cusparseIndexType_t csrRowOffsetsType, + cusparseIndexType_t csrColIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateCsc(cusparseSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + void* cscColOffsets, + void* cscRowInd, + void* cscValues, + cusparseIndexType_t cscColOffsetsType, + cusparseIndexType_t cscRowIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstCsc(cusparseConstSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + const void* cscColOffsets, + const void* cscRowInd, + const void* cscValues, + cusparseIndexType_t 
cscColOffsetsType, + cusparseIndexType_t cscRowIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCsrGet(cusparseSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* nnz, + void** csrRowOffsets, + void** csrColInd, + void** csrValues, + cusparseIndexType_t* csrRowOffsetsType, + cusparseIndexType_t* csrColIndType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseConstCsrGet(cusparseConstSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* nnz, + const void** csrRowOffsets, + const void** csrColInd, + const void** csrValues, + cusparseIndexType_t* csrRowOffsetsType, + cusparseIndexType_t* csrColIndType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCscGet(cusparseSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* nnz, + void** cscColOffsets, + void** cscRowInd, + void** cscValues, + cusparseIndexType_t* cscColOffsetsType, + cusparseIndexType_t* cscRowIndType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseConstCscGet(cusparseConstSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* nnz, + const void** cscColOffsets, + const void** cscRowInd, + const void** cscValues, + cusparseIndexType_t* cscColOffsetsType, + cusparseIndexType_t* cscRowIndType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCsrSetPointers(cusparseSpMatDescr_t spMatDescr, + void* csrRowOffsets, + void* csrColInd, + void* csrValues); + +cusparseStatus_t CUSPARSEAPI +cusparseCscSetPointers(cusparseSpMatDescr_t spMatDescr, + void* cscColOffsets, + void* cscRowInd, + void* cscValues); + +//------------------------------------------------------------------------------ +// ### BSR ### + +cusparseStatus_t CUSPARSEAPI +cusparseCreateBsr(cusparseSpMatDescr_t* spMatDescr, + 
int64_t brows, + int64_t bcols, + int64_t bnnz, + int64_t rowBlockSize, + int64_t colBlockSize, + void* bsrRowOffsets, + void* bsrColInd, + void* bsrValues, + cusparseIndexType_t bsrRowOffsetsType, + cusparseIndexType_t bsrColIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType, + cusparseOrder_t order); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstBsr(cusparseConstSpMatDescr_t* spMatDescr, + int64_t brows, + int64_t bcols, + int64_t bnnz, + int64_t rowBlockDim, + int64_t colBlockDim, + const void* bsrRowOffsets, + const void* bsrColInd, + const void* bsrValues, + cusparseIndexType_t bsrRowOffsetsType, + cusparseIndexType_t bsrColIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType, + cusparseOrder_t order); + +//------------------------------------------------------------------------------ +// ### COO ### + +cusparseStatus_t CUSPARSEAPI +cusparseCreateCoo(cusparseSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + void* cooRowInd, + void* cooColInd, + void* cooValues, + cusparseIndexType_t cooIdxType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstCoo(cusparseConstSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + const void* cooRowInd, + const void* cooColInd, + const void* cooValues, + cusparseIndexType_t cooIdxType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCooGet(cusparseSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* nnz, + void** cooRowInd, // COO row indices + void** cooColInd, // COO column indices + void** cooValues, // COO values + cusparseIndexType_t* idxType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseConstCooGet(cusparseConstSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* nnz, + const void** cooRowInd, // COO row indices + const void** cooColInd, // COO 
column indices + const void** cooValues, // COO values + cusparseIndexType_t* idxType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCooSetPointers(cusparseSpMatDescr_t spMatDescr, + void* cooRows, + void* cooColumns, + void* cooValues); + +//------------------------------------------------------------------------------ +// ### BLOCKED ELL ### + +cusparseStatus_t CUSPARSEAPI +cusparseCreateBlockedEll(cusparseSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t ellBlockSize, + int64_t ellCols, + void* ellColInd, + void* ellValue, + cusparseIndexType_t ellIdxType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstBlockedEll(cusparseConstSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t ellBlockSize, + int64_t ellCols, + const void* ellColInd, + const void* ellValue, + cusparseIndexType_t ellIdxType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseBlockedEllGet(cusparseSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* ellBlockSize, + int64_t* ellCols, + void** ellColInd, + void** ellValue, + cusparseIndexType_t* ellIdxType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseConstBlockedEllGet(cusparseConstSpMatDescr_t spMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* ellBlockSize, + int64_t* ellCols, + const void** ellColInd, + const void** ellValue, + cusparseIndexType_t* ellIdxType, + cusparseIndexBase_t* idxBase, + cudaDataType* valueType); + +//------------------------------------------------------------------------------ +// ### Sliced ELLPACK ### + +cusparseStatus_t CUSPARSEAPI +cusparseCreateSlicedEll(cusparseSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + int64_t sellValuesSize, + int64_t sliceSize, + void* sellSliceOffsets, + void* sellColInd, + void* 
sellValues, + cusparseIndexType_t sellSliceOffsetsType, + cusparseIndexType_t sellColIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstSlicedEll(cusparseConstSpMatDescr_t* spMatDescr, + int64_t rows, + int64_t cols, + int64_t nnz, + int64_t sellValuesSize, + int64_t sliceSize, + const void* sellSliceOffsets, + const void* sellColInd, + const void* sellValues, + cusparseIndexType_t sellSliceOffsetsType, + cusparseIndexType_t sellColIndType, + cusparseIndexBase_t idxBase, + cudaDataType valueType); + +// ############################################################################# +// # DENSE MATRIX DESCRIPTOR +// ############################################################################# + +cusparseStatus_t CUSPARSEAPI +cusparseCreateDnMat(cusparseDnMatDescr_t* dnMatDescr, + int64_t rows, + int64_t cols, + int64_t ld, + void* values, + cudaDataType valueType, + cusparseOrder_t order); + +cusparseStatus_t CUSPARSEAPI +cusparseCreateConstDnMat(cusparseConstDnMatDescr_t* dnMatDescr, + int64_t rows, + int64_t cols, + int64_t ld, + const void* values, + cudaDataType valueType, + cusparseOrder_t order); + +cusparseStatus_t CUSPARSEAPI +cusparseDestroyDnMat(cusparseConstDnMatDescr_t dnMatDescr); + +cusparseStatus_t CUSPARSEAPI +cusparseDnMatGet(cusparseDnMatDescr_t dnMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* ld, + void** values, + cudaDataType* type, + cusparseOrder_t* order); + +cusparseStatus_t CUSPARSEAPI +cusparseConstDnMatGet(cusparseConstDnMatDescr_t dnMatDescr, + int64_t* rows, + int64_t* cols, + int64_t* ld, + const void** values, + cudaDataType* type, + cusparseOrder_t* order); + +cusparseStatus_t CUSPARSEAPI +cusparseDnMatGetValues(cusparseDnMatDescr_t dnMatDescr, + void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseConstDnMatGetValues(cusparseConstDnMatDescr_t dnMatDescr, + const void** values); + +cusparseStatus_t CUSPARSEAPI +cusparseDnMatSetValues(cusparseDnMatDescr_t 
dnMatDescr, + void* values); + +cusparseStatus_t CUSPARSEAPI +cusparseDnMatSetStridedBatch(cusparseDnMatDescr_t dnMatDescr, + int batchCount, + int64_t batchStride); + +cusparseStatus_t CUSPARSEAPI +cusparseDnMatGetStridedBatch(cusparseConstDnMatDescr_t dnMatDescr, + int* batchCount, + int64_t* batchStride); + +// ############################################################################# +// # VECTOR-VECTOR OPERATIONS +// ############################################################################# + +cusparseStatus_t CUSPARSEAPI +cusparseAxpby(cusparseHandle_t handle, + const void* alpha, + cusparseConstSpVecDescr_t vecX, + const void* beta, + cusparseDnVecDescr_t vecY); + +cusparseStatus_t CUSPARSEAPI +cusparseGather(cusparseHandle_t handle, + cusparseConstDnVecDescr_t vecY, + cusparseSpVecDescr_t vecX); + +cusparseStatus_t CUSPARSEAPI +cusparseScatter(cusparseHandle_t handle, + cusparseConstSpVecDescr_t vecX, + cusparseDnVecDescr_t vecY); + +CUSPARSE_DEPRECATED +cusparseStatus_t CUSPARSEAPI +cusparseRot(cusparseHandle_t handle, + const void* c_coeff, + const void* s_coeff, + cusparseSpVecDescr_t vecX, + cusparseDnVecDescr_t vecY); + +cusparseStatus_t CUSPARSEAPI +cusparseSpVV_bufferSize(cusparseHandle_t handle, + cusparseOperation_t opX, + cusparseConstSpVecDescr_t vecX, + cusparseConstDnVecDescr_t vecY, + const void* result, + cudaDataType computeType, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSpVV(cusparseHandle_t handle, + cusparseOperation_t opX, + cusparseConstSpVecDescr_t vecX, + cusparseConstDnVecDescr_t vecY, + void* result, + cudaDataType computeType, + void* externalBuffer); + +// ############################################################################# +// # SPARSE TO DENSE +// ############################################################################# + +typedef enum { + CUSPARSE_SPARSETODENSE_ALG_DEFAULT = 0 +} cusparseSparseToDenseAlg_t; + +cusparseStatus_t CUSPARSEAPI 
+cusparseSparseToDense_bufferSize(cusparseHandle_t handle, + cusparseConstSpMatDescr_t matA, + cusparseDnMatDescr_t matB, + cusparseSparseToDenseAlg_t alg, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSparseToDense(cusparseHandle_t handle, + cusparseConstSpMatDescr_t matA, + cusparseDnMatDescr_t matB, + cusparseSparseToDenseAlg_t alg, + void* externalBuffer); + +// ############################################################################# +// # DENSE TO SPARSE +// ############################################################################# + +typedef enum { + CUSPARSE_DENSETOSPARSE_ALG_DEFAULT = 0 +} cusparseDenseToSparseAlg_t; + +cusparseStatus_t CUSPARSEAPI +cusparseDenseToSparse_bufferSize(cusparseHandle_t handle, + cusparseConstDnMatDescr_t matA, + cusparseSpMatDescr_t matB, + cusparseDenseToSparseAlg_t alg, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseDenseToSparse_analysis(cusparseHandle_t handle, + cusparseConstDnMatDescr_t matA, + cusparseSpMatDescr_t matB, + cusparseDenseToSparseAlg_t alg, + void* externalBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseDenseToSparse_convert(cusparseHandle_t handle, + cusparseConstDnMatDescr_t matA, + cusparseSpMatDescr_t matB, + cusparseDenseToSparseAlg_t alg, + void* externalBuffer); + +// ############################################################################# +// # SPARSE MATRIX-VECTOR MULTIPLICATION +// ############################################################################# + +typedef enum { + CUSPARSE_SPMV_ALG_DEFAULT = 0, + CUSPARSE_SPMV_CSR_ALG1 = 2, + CUSPARSE_SPMV_CSR_ALG2 = 3, + CUSPARSE_SPMV_COO_ALG1 = 1, + CUSPARSE_SPMV_COO_ALG2 = 4, + CUSPARSE_SPMV_SELL_ALG1 = 5 +} cusparseSpMVAlg_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSpMV(cusparseHandle_t handle, + cusparseOperation_t opA, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnVecDescr_t vecX, + const void* beta, + cusparseDnVecDescr_t vecY, + cudaDataType computeType, + 
cusparseSpMVAlg_t alg, + void* externalBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMV_bufferSize(cusparseHandle_t handle, + cusparseOperation_t opA, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnVecDescr_t vecX, + const void* beta, + cusparseDnVecDescr_t vecY, + cudaDataType computeType, + cusparseSpMVAlg_t alg, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMV_preprocess(cusparseHandle_t handle, + cusparseOperation_t opA, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnVecDescr_t vecX, + const void* beta, + cusparseDnVecDescr_t vecY, + cudaDataType computeType, + cusparseSpMVAlg_t alg, + void* externalBuffer); +// ############################################################################# +// # SPARSE TRIANGULAR VECTOR SOLVE +// ############################################################################# + +typedef enum { + CUSPARSE_SPSV_ALG_DEFAULT = 0, +} cusparseSpSVAlg_t; + +typedef enum { + CUSPARSE_SPSV_UPDATE_GENERAL = 0, + CUSPARSE_SPSV_UPDATE_DIAGONAL = 1 +} cusparseSpSVUpdate_t; + +struct cusparseSpSVDescr; +typedef struct cusparseSpSVDescr* cusparseSpSVDescr_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSpSV_createDescr(cusparseSpSVDescr_t* descr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSV_destroyDescr(cusparseSpSVDescr_t descr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSV_bufferSize(cusparseHandle_t handle, + cusparseOperation_t opA, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnVecDescr_t vecX, + cusparseDnVecDescr_t vecY, + cudaDataType computeType, + cusparseSpSVAlg_t alg, + cusparseSpSVDescr_t spsvDescr, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSV_analysis(cusparseHandle_t handle, + cusparseOperation_t opA, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnVecDescr_t vecX, + cusparseDnVecDescr_t vecY, + cudaDataType computeType, + cusparseSpSVAlg_t alg, + cusparseSpSVDescr_t spsvDescr, + void* 
externalBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSV_solve(cusparseHandle_t handle, + cusparseOperation_t opA, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnVecDescr_t vecX, + cusparseDnVecDescr_t vecY, + cudaDataType computeType, + cusparseSpSVAlg_t alg, + cusparseSpSVDescr_t spsvDescr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSV_updateMatrix(cusparseHandle_t handle, + cusparseSpSVDescr_t spsvDescr, + void* newValues, + cusparseSpSVUpdate_t updatePart); + + + +// ############################################################################# +// # SPARSE TRIANGULAR MATRIX SOLVE +// ############################################################################# + +typedef enum { + CUSPARSE_SPSM_ALG_DEFAULT = 0, +} cusparseSpSMAlg_t; + +typedef enum { + CUSPARSE_SPSM_UPDATE_GENERAL = 0, + CUSPARSE_SPSM_UPDATE_DIAGONAL = 1 +} cusparseSpSMUpdate_t; + +struct cusparseSpSMDescr; +typedef struct cusparseSpSMDescr* cusparseSpSMDescr_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSpSM_createDescr(cusparseSpSMDescr_t* descr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSM_destroyDescr(cusparseSpSMDescr_t descr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSM_bufferSize(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + cusparseDnMatDescr_t matC, + cudaDataType computeType, + cusparseSpSMAlg_t alg, + cusparseSpSMDescr_t spsmDescr, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSM_analysis(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + cusparseDnMatDescr_t matC, + cudaDataType computeType, + cusparseSpSMAlg_t alg, + cusparseSpSMDescr_t spsmDescr, + void* externalBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSM_solve(cusparseHandle_t handle, + cusparseOperation_t opA, + 
cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + cusparseDnMatDescr_t matC, + cudaDataType computeType, + cusparseSpSMAlg_t alg, + cusparseSpSMDescr_t spsmDescr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpSM_updateMatrix(cusparseHandle_t handle, + cusparseSpSMDescr_t spsmDescr, + void* newValues, + cusparseSpSMUpdate_t updatePart); + +// ############################################################################# +// # SPARSE MATRIX-MATRIX MULTIPLICATION +// ############################################################################# + +typedef enum { + CUSPARSE_SPMM_ALG_DEFAULT = 0, + CUSPARSE_SPMM_COO_ALG1 = 1, + CUSPARSE_SPMM_COO_ALG2 = 2, + CUSPARSE_SPMM_COO_ALG3 = 3, + CUSPARSE_SPMM_COO_ALG4 = 5, + CUSPARSE_SPMM_CSR_ALG1 = 4, + CUSPARSE_SPMM_CSR_ALG2 = 6, + CUSPARSE_SPMM_CSR_ALG3 = 12, + CUSPARSE_SPMM_BLOCKED_ELL_ALG1 = 13 +} cusparseSpMMAlg_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSpMM_bufferSize(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + const void* beta, + cusparseDnMatDescr_t matC, + cudaDataType computeType, + cusparseSpMMAlg_t alg, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMM_preprocess(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + const void* beta, + cusparseDnMatDescr_t matC, + cudaDataType computeType, + cusparseSpMMAlg_t alg, + void* externalBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMM(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + const void* beta, + cusparseDnMatDescr_t matC, + cudaDataType computeType, + cusparseSpMMAlg_t alg, + void* externalBuffer); + +// 
############################################################################# +// # SPARSE MATRIX - SPARSE MATRIX MULTIPLICATION (SpGEMM) +// ############################################################################# + +typedef enum { + CUSPARSE_SPGEMM_DEFAULT = 0, + CUSPARSE_SPGEMM_CSR_ALG_DETERMINITIC = 1, + CUSPARSE_SPGEMM_CSR_ALG_NONDETERMINITIC = 2, + CUSPARSE_SPGEMM_ALG1 = 3, + CUSPARSE_SPGEMM_ALG2 = 4, + CUSPARSE_SPGEMM_ALG3 = 5 +} cusparseSpGEMMAlg_t; + +struct cusparseSpGEMMDescr; +typedef struct cusparseSpGEMMDescr* cusparseSpGEMMDescr_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMM_createDescr(cusparseSpGEMMDescr_t* descr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMM_destroyDescr(cusparseSpGEMMDescr_t descr); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMM_workEstimation(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + const void* beta, + cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr, + size_t* bufferSize1, + void* externalBuffer1); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMM_getNumProducts(cusparseSpGEMMDescr_t spgemmDescr, + int64_t* num_prods); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMM_estimateMemory(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + const void* beta, + cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr, + float chunk_fraction, + size_t* bufferSize3, + void* externalBuffer3, + size_t* bufferSize2); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMM_compute(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + const void* beta, + 
cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr, + size_t* bufferSize2, + void* externalBuffer2); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMM_copy(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + const void* beta, + cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr); + +// ############################################################################# +// # SPARSE MATRIX - SPARSE MATRIX MULTIPLICATION (SpGEMM) STRUCTURE REUSE +// ############################################################################# + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMMreuse_workEstimation(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + cusparseSpMatDescr_t matC, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr, + size_t* bufferSize1, + void* externalBuffer1); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMMreuse_nnz(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + cusparseSpMatDescr_t matC, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr, + size_t* bufferSize2, + void* externalBuffer2, + size_t* bufferSize3, + void* externalBuffer3, + size_t* bufferSize4, + void* externalBuffer4); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMMreuse_copy(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + cusparseSpMatDescr_t matC, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr, + size_t* bufferSize5, + void* externalBuffer5); + +cusparseStatus_t CUSPARSEAPI +cusparseSpGEMMreuse_compute(cusparseHandle_t 
handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstSpMatDescr_t matA, + cusparseConstSpMatDescr_t matB, + const void* beta, + cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSpGEMMAlg_t alg, + cusparseSpGEMMDescr_t spgemmDescr); + +// ############################################################################# +// # SAMPLED DENSE-DENSE MATRIX MULTIPLICATION +// ############################################################################# + +typedef enum { + CUSPARSE_SDDMM_ALG_DEFAULT = 0 +} cusparseSDDMMAlg_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSDDMM_bufferSize(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstDnMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + const void* beta, + cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSDDMMAlg_t alg, + size_t* bufferSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSDDMM_preprocess(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstDnMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + const void* beta, + cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSDDMMAlg_t alg, + void* externalBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSDDMM(cusparseHandle_t handle, + cusparseOperation_t opA, + cusparseOperation_t opB, + const void* alpha, + cusparseConstDnMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + const void* beta, + cusparseSpMatDescr_t matC, + cudaDataType computeType, + cusparseSDDMMAlg_t alg, + void* externalBuffer); + +// ############################################################################# +// # GENERIC APIs WITH CUSTOM OPERATORS (PREVIEW) +// ############################################################################# + +struct cusparseSpMMOpPlan; +typedef struct cusparseSpMMOpPlan* cusparseSpMMOpPlan_t; + +typedef enum { + CUSPARSE_SPMM_OP_ALG_DEFAULT +} 
cusparseSpMMOpAlg_t; + +cusparseStatus_t CUSPARSEAPI +cusparseSpMMOp_createPlan(cusparseHandle_t handle, + cusparseSpMMOpPlan_t* plan, + cusparseOperation_t opA, + cusparseOperation_t opB, + cusparseConstSpMatDescr_t matA, + cusparseConstDnMatDescr_t matB, + cusparseDnMatDescr_t matC, + cudaDataType computeType, + cusparseSpMMOpAlg_t alg, + const void* addOperationNvvmBuffer, + size_t addOperationBufferSize, + const void* mulOperationNvvmBuffer, + size_t mulOperationBufferSize, + const void* epilogueNvvmBuffer, + size_t epilogueBufferSize, + size_t* SpMMWorkspaceSize); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMMOp(cusparseSpMMOpPlan_t plan, + void* externalBuffer); + +cusparseStatus_t CUSPARSEAPI +cusparseSpMMOp_destroyPlan(cusparseSpMMOpPlan_t plan); + +//------------------------------------------------------------------------------ + +#if defined(__cplusplus) +} // extern "C" +#endif // defined(__cplusplus) + +#undef CUSPARSE_DEPRECATED_REPLACE_WITH +#undef CUSPARSE_DEPRECATED +#undef CUSPARSE_DEPRECATED_TYPE +#undef CUSPARSE_DEPRECATED_TYPE_MSVC +#undef CUSPARSE_DEPRECATED_ENUM_REPLACE_WITH +#undef CUSPARSE_DEPRECATED_ENUM + +#endif // !defined(CUSPARSE_H_) diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/cusparse_v2.h b/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/cusparse_v2.h new file mode 100644 index 0000000000000000000000000000000000000000..f889e1f569d46d1116fe6e302429b3855de43c21 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia/cusparse/include/cusparse_v2.h @@ -0,0 +1,54 @@ +/* + * Copyright 1993-2019 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
+ * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. 
Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ +#if !defined(CUSPARSE_V2_H_) +#define CUSPARSE_V2_H_ + +#include "cusparse.h" + +#endif diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/lib/__init__.py b/pllava/lib/python3.10/site-packages/nvidia/cusparse/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/pllava/lib/python3.10/site-packages/nvidia/cusparse/lib/__pycache__/__init__.cpython-310.pyc b/pllava/lib/python3.10/site-packages/nvidia/cusparse/lib/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..89837ad815aa902ca71843f84190ace720038d07 Binary files /dev/null and b/pllava/lib/python3.10/site-packages/nvidia/cusparse/lib/__pycache__/__init__.cpython-310.pyc differ diff --git a/pllava/lib/python3.10/site-packages/nvidia_cufft_cu11-10.9.0.58.dist-info/METADATA b/pllava/lib/python3.10/site-packages/nvidia_cufft_cu11-10.9.0.58.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..e68df8c7ef6299be1b3f9979ba9fe172ee70609a --- /dev/null +++ b/pllava/lib/python3.10/site-packages/nvidia_cufft_cu11-10.9.0.58.dist-info/METADATA @@ -0,0 +1,35 @@ +Metadata-Version: 2.1 +Name: nvidia-cufft-cu11 +Version: 10.9.0.58 +Summary: CUFFT native runtime libraries +Home-page: https://developer.nvidia.com/cuda-zone +Author: Nvidia CUDA Installer Team +Author-email: compute_installer@nvidia.com +License: NVIDIA Proprietary Software +Keywords: cuda,nvidia,runtime,machine learning,deep learning +Classifier: Development Status :: 4 - Beta +Classifier: Intended Audience :: Developers +Classifier: Intended Audience :: 
Education +Classifier: Intended Audience :: Science/Research +Classifier: License :: Other/Proprietary License +Classifier: Natural Language :: English +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Topic :: Scientific/Engineering +Classifier: Topic :: Scientific/Engineering :: Mathematics +Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence +Classifier: Topic :: Software Development +Classifier: Topic :: Software Development :: Libraries +Classifier: Operating System :: Microsoft :: Windows +Classifier: Operating System :: POSIX :: Linux +Requires-Python: >=3 +License-File: License.txt + +CUFFT native runtime libraries diff --git a/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/LICENSE b/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..1edcf92c3317b90fedd187e2eaad101bd1c1efc5 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/LICENSE @@ -0,0 +1,29 @@ +BSD 3-Clause License + +Copyright (c) Soumith Chintala 2016, +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. 
+ +* Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +* Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/METADATA b/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..f16c30c65238e92c4592aa4d72ca8e5412937390 --- /dev/null +++ b/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/METADATA @@ -0,0 +1,146 @@ +Metadata-Version: 2.1 +Name: torchvision +Version: 0.20.1 +Summary: image and video datasets and models for torch deep learning +Home-page: https://github.com/pytorch/vision +Author: PyTorch Core Team +Author-email: soumith@pytorch.org +License: BSD +Requires-Python: >=3.8 +Description-Content-Type: text/markdown +License-File: LICENSE +Requires-Dist: numpy +Requires-Dist: torch (==2.5.1) +Requires-Dist: pillow (!=8.3.*,>=5.3.0) +Provides-Extra: gdown +Requires-Dist: gdown (>=4.7.3) ; extra == 'gdown' +Provides-Extra: scipy +Requires-Dist: scipy ; extra == 'scipy' + +# torchvision + +[![total torchvision downloads](https://pepy.tech/badge/torchvision)](https://pepy.tech/project/torchvision) +[![documentation](https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Ftorchvision%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v)](https://pytorch.org/vision/stable/index.html) + +The torchvision package consists of popular datasets, model architectures, and common image transformations for computer +vision. + +## Installation + +Please refer to the [official +instructions](https://pytorch.org/get-started/locally/) to install the stable +versions of `torch` and `torchvision` on your system. + +To build source, refer to our [contributing +page](https://github.com/pytorch/vision/blob/main/CONTRIBUTING.md#development-installation). + +The following is the corresponding `torchvision` versions and supported Python +versions. 
+ +| `torch` | `torchvision` | Python | +| ------------------ | ------------------ | ------------------- | +| `main` / `nightly` | `main` / `nightly` | `>=3.9`, `<=3.12` | +| `2.4` | `0.19` | `>=3.8`, `<=3.12` | +| `2.3` | `0.18` | `>=3.8`, `<=3.12` | +| `2.2` | `0.17` | `>=3.8`, `<=3.11` | +| `2.1` | `0.16` | `>=3.8`, `<=3.11` | +| `2.0` | `0.15` | `>=3.8`, `<=3.11` | + +
+ older versions + +| `torch` | `torchvision` | Python | +|---------|-------------------|---------------------------| +| `1.13` | `0.14` | `>=3.7.2`, `<=3.10` | +| `1.12` | `0.13` | `>=3.7`, `<=3.10` | +| `1.11` | `0.12` | `>=3.7`, `<=3.10` | +| `1.10` | `0.11` | `>=3.6`, `<=3.9` | +| `1.9` | `0.10` | `>=3.6`, `<=3.9` | +| `1.8` | `0.9` | `>=3.6`, `<=3.9` | +| `1.7` | `0.8` | `>=3.6`, `<=3.9` | +| `1.6` | `0.7` | `>=3.6`, `<=3.8` | +| `1.5` | `0.6` | `>=3.5`, `<=3.8` | +| `1.4` | `0.5` | `==2.7`, `>=3.5`, `<=3.8` | +| `1.3` | `0.4.2` / `0.4.3` | `==2.7`, `>=3.5`, `<=3.7` | +| `1.2` | `0.4.1` | `==2.7`, `>=3.5`, `<=3.7` | +| `1.1` | `0.3` | `==2.7`, `>=3.5`, `<=3.7` | +| `<=1.0` | `0.2` | `==2.7`, `>=3.5`, `<=3.7` | + +
+ +## Image Backends + +Torchvision currently supports the following image backends: + +- torch tensors +- PIL images: + - [Pillow](https://python-pillow.org/) + - [Pillow-SIMD](https://github.com/uploadcare/pillow-simd) - a **much faster** drop-in replacement for Pillow with SIMD. + +Read more in our [docs](https://pytorch.org/vision/stable/transforms.html). + +## [UNSTABLE] Video Backend + +Torchvision currently supports the following video backends: + +- [pyav](https://github.com/PyAV-Org/PyAV) (default) - Pythonic binding for ffmpeg libraries. +- video_reader - This needs ffmpeg to be installed and torchvision to be built from source. There shouldn't be any + conflicting version of ffmpeg installed. Currently, this is only supported on Linux. + +``` +conda install -c conda-forge 'ffmpeg<4.3' +python setup.py install +``` + +# Using the models in C++ + +Refer to [example/cpp](https://github.com/pytorch/vision/tree/main/examples/cpp). + +**DISCLAIMER**: the `libtorchvision` library includes the torchvision +custom ops as well as most of the C++ torchvision APIs. Those APIs do not come +with any backward-compatibility guarantees and may change from one version to +the next. Only the Python APIs are stable and come with backward-compatibility +guarantees. So, if you need stability within a C++ environment, your best bet is +to export the Python APIs via TorchScript. + +## Documentation + +You can find the API documentation on the pytorch website: + +## Contributing + +See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out. + +## Disclaimer on Datasets + +This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, +vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to +determine whether you have permission to use the dataset under the dataset's license.
+ +If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset +to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML +community! + +## Pre-trained Model License + +The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the +dataset used for training. It is your responsibility to determine whether you have permission to use the models for your +use case. + +More specifically, SWAG models are released under the CC-BY-NC 4.0 license. See +[SWAG LICENSE](https://github.com/facebookresearch/SWAG/blob/main/LICENSE) for additional details. + +## Citing TorchVision + +If you find TorchVision useful in your work, please consider citing the following BibTeX entry: + +```bibtex +@software{torchvision2016, + title = {TorchVision: PyTorch's Computer Vision library}, + author = {TorchVision maintainers and contributors}, + year = 2016, + journal = {GitHub repository}, + publisher = {GitHub}, + howpublished = {\url{https://github.com/pytorch/vision}} +} +``` diff --git a/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/REQUESTED b/pllava/lib/python3.10/site-packages/torchvision-0.20.1.dist-info/REQUESTED new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391