diff --git a/evalkit_tf437/lib/python3.10/site-packages/fastapi-0.103.2.dist-info/METADATA b/evalkit_tf437/lib/python3.10/site-packages/fastapi-0.103.2.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..acdd71cfb59789ddeee27acbdbfee7c57e391717 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/fastapi-0.103.2.dist-info/METADATA @@ -0,0 +1,531 @@ +Metadata-Version: 2.1 +Name: fastapi +Version: 0.103.2 +Summary: FastAPI framework, high performance, easy to learn, fast to code, ready for production +Project-URL: Homepage, https://github.com/tiangolo/fastapi +Project-URL: Documentation, https://fastapi.tiangolo.com/ +Project-URL: Repository, https://github.com/tiangolo/fastapi +Author-email: Sebastián Ramírez +License-Expression: MIT +License-File: LICENSE +Classifier: Development Status :: 4 - Beta +Classifier: Environment :: Web Environment +Classifier: Framework :: AsyncIO +Classifier: Framework :: FastAPI +Classifier: Framework :: Pydantic +Classifier: Framework :: Pydantic :: 1 +Classifier: Intended Audience :: Developers +Classifier: Intended Audience :: Information Technology +Classifier: Intended Audience :: System Administrators +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Topic :: Internet +Classifier: Topic :: Internet :: WWW/HTTP +Classifier: Topic :: Internet :: WWW/HTTP :: HTTP Servers +Classifier: Topic :: Software Development +Classifier: Topic :: Software Development :: Libraries +Classifier: Topic :: Software Development :: Libraries :: Application Frameworks +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Typing :: Typed +Requires-Python: >=3.7 +Requires-Dist: anyio<4.0.0,>=3.7.1 +Requires-Dist: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 +Requires-Dist: starlette<0.28.0,>=0.27.0 +Requires-Dist: typing-extensions>=4.5.0 +Provides-Extra: all +Requires-Dist: email-validator>=2.0.0; extra == 'all' +Requires-Dist: httpx>=0.23.0; extra == 'all' +Requires-Dist: itsdangerous>=1.1.0; extra == 'all' +Requires-Dist: jinja2>=2.11.2; extra == 'all' +Requires-Dist: orjson>=3.2.1; extra == 'all' +Requires-Dist: pydantic-extra-types>=2.0.0; extra == 'all' +Requires-Dist: pydantic-settings>=2.0.0; extra == 'all' +Requires-Dist: python-multipart>=0.0.5; extra == 'all' +Requires-Dist: pyyaml>=5.3.1; extra == 'all' +Requires-Dist: ujson!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0,>=4.0.1; extra == 'all' +Requires-Dist: uvicorn[standard]>=0.12.0; extra == 'all' +Description-Content-Type: text/markdown + +

+ FastAPI +

+

+ FastAPI framework, high performance, easy to learn, fast to code, ready for production +

+

+[badges: Test, Coverage, Package version, Supported Python versions]

+
+---
+
+**Documentation**: https://fastapi.tiangolo.com
+
+**Source Code**: https://github.com/tiangolo/fastapi
+
+---
+
+FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.7+ based on standard Python type hints.
+
+The key features are:
+
+* **Fast**: Very high performance, on par with **NodeJS** and **Go** (thanks to Starlette and Pydantic). [One of the fastest Python frameworks available](#performance).
+* **Fast to code**: Increase the speed to develop features by about 200% to 300%. *
+* **Fewer bugs**: Reduce about 40% of human (developer) induced errors. *
+* **Intuitive**: Great editor support. Completion everywhere. Less time debugging.
+* **Easy**: Designed to be easy to use and learn. Less time reading docs.
+* **Short**: Minimize code duplication. Multiple features from each parameter declaration. Fewer bugs.
+* **Robust**: Get production-ready code. With automatic interactive documentation.
+* **Standards-based**: Based on (and fully compatible with) the open standards for APIs: OpenAPI (previously known as Swagger) and JSON Schema.
+
+* estimation based on tests on an internal development team, building production applications.
+
+## Sponsors
+
+[sponsor logos]
+
+Other sponsors
+
+## Opinions
+
+"_[...] I'm using **FastAPI** a ton these days. [...] I'm actually planning to use it for all of my team's **ML services at Microsoft**. Some of them are getting integrated into the core **Windows** product and some **Office** products._"
+
Kabir Khan - Microsoft (ref)
+ +--- + +"_We adopted the **FastAPI** library to spawn a **REST** server that can be queried to obtain **predictions**. [for Ludwig]_" + +
Piero Molino, Yaroslav Dudin, and Sai Sumanth Miryala - Uber (ref)
+ +--- + +"_**Netflix** is pleased to announce the open-source release of our **crisis management** orchestration framework: **Dispatch**! [built with **FastAPI**]_" + +
Kevin Glisson, Marc Vilanova, Forest Monsen - Netflix (ref)
+ +--- + +"_I’m over the moon excited about **FastAPI**. It’s so fun!_" + +
Brian Okken - Python Bytes podcast host (ref)
+ +--- + +"_Honestly, what you've built looks super solid and polished. In many ways, it's what I wanted **Hug** to be - it's really inspiring to see someone build that._" + +
Timothy Crosley - Hug creator (ref)
+ +--- + +"_If you're looking to learn one **modern framework** for building REST APIs, check out **FastAPI** [...] It's fast, easy to use and easy to learn [...]_" + +"_We've switched over to **FastAPI** for our **APIs** [...] I think you'll like it [...]_" + +
Ines Montani - Matthew Honnibal - Explosion AI founders - spaCy creators (ref) - (ref)
+ +--- + +"_If anyone is looking to build a production Python API, I would highly recommend **FastAPI**. It is **beautifully designed**, **simple to use** and **highly scalable**, it has become a **key component** in our API first development strategy and is driving many automations and services such as our Virtual TAC Engineer._" + +
Deon Pillsbury - Cisco (ref)
+
+---
+
+## **Typer**, the FastAPI of CLIs
+
+
+
+If you are building a CLI app to be used in the terminal instead of a web API, check out **Typer**.
+
+**Typer** is FastAPI's little sibling. And it's intended to be the **FastAPI of CLIs**. ⌨️ 🚀
+
+## Requirements
+
+Python 3.7+
+
+FastAPI stands on the shoulders of giants:
+
+* Starlette for the web parts.
+* Pydantic for the data parts.
+
+## Installation
+
+```console
+$ pip install fastapi
+
+---> 100%
+```
+
+You will also need an ASGI server for production, such as Uvicorn or Hypercorn.
+
+```console
+$ pip install "uvicorn[standard]"
+
+---> 100%
+```
+
+## Example
+
+### Create it
+
+* Create a file `main.py` with:
+
+```Python
+from typing import Union
+
+from fastapi import FastAPI
+
+app = FastAPI()
+
+
+@app.get("/")
+def read_root():
+    return {"Hello": "World"}
+
+
+@app.get("/items/{item_id}")
+def read_item(item_id: int, q: Union[str, None] = None):
+    return {"item_id": item_id, "q": q}
+```
+
+Or use async def...
+
+If your code uses `async` / `await`, use `async def`:
+
+```Python hl_lines="9 14"
+from typing import Union
+
+from fastapi import FastAPI
+
+app = FastAPI()
+
+
+@app.get("/")
+async def read_root():
+    return {"Hello": "World"}
+
+
+@app.get("/items/{item_id}")
+async def read_item(item_id: int, q: Union[str, None] = None):
+    return {"item_id": item_id, "q": q}
+```
+
+**Note**:
+
+If you don't know, check the _"In a hurry?"_ section about `async` and `await` in the docs.
+
+### Run it
+
+Run the server with:
+
+```console
+$ uvicorn main:app --reload
+
+INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
+INFO:     Started reloader process [28720]
+INFO:     Started server process [28722]
+INFO:     Waiting for application startup.
+INFO:     Application startup complete.
+```
+
+
+About the command uvicorn main:app --reload...
+
+The command `uvicorn main:app` refers to:
+
+* `main`: the file `main.py` (the Python "module").
+* `app`: the object created inside of `main.py` with the line `app = FastAPI()`.
+* `--reload`: make the server restart after code changes. Only do this for development.
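+
+If you prefer to start the server from Python rather than from the command line, Uvicorn also exposes a programmatic entry point. A minimal sketch, assuming the same `main.py` as above (the `run.py` file name is illustrative):
+
+```Python
+# run.py - a programmatic equivalent of `uvicorn main:app --reload`
+import uvicorn
+
+if __name__ == "__main__":
+    # Pass the app as an import string ("module:attribute") so the
+    # reloader can re-import it after code changes.
+    uvicorn.run("main:app", host="127.0.0.1", port=8000, reload=True)
+```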
+
+### Check it
+
+Open your browser at http://127.0.0.1:8000/items/5?q=somequery.
+
+You will see the JSON response as:
+
+```JSON
+{"item_id": 5, "q": "somequery"}
+```
+
+You already created an API that:
+
+* Receives HTTP requests in the _paths_ `/` and `/items/{item_id}`.
+* Both _paths_ take `GET` operations (also known as HTTP _methods_).
+* The _path_ `/items/{item_id}` has a _path parameter_ `item_id` that should be an `int`.
+* The _path_ `/items/{item_id}` has an optional `str` _query parameter_ `q`.
+
+### Interactive API docs
+
+Now go to http://127.0.0.1:8000/docs.
+
+You will see the automatic interactive API documentation (provided by Swagger UI):
+
+![Swagger UI](https://fastapi.tiangolo.com/img/index/index-01-swagger-ui-simple.png)
+
+### Alternative API docs
+
+And now, go to http://127.0.0.1:8000/redoc.
+
+You will see the alternative automatic documentation (provided by ReDoc):
+
+![ReDoc](https://fastapi.tiangolo.com/img/index/index-02-redoc-simple.png)
+
+## Example upgrade
+
+Now modify the file `main.py` to receive a body from a `PUT` request.
+
+Declare the body using standard Python types, thanks to Pydantic.
+
+```Python hl_lines="4 9-12 25-27"
+from typing import Union
+
+from fastapi import FastAPI
+from pydantic import BaseModel
+
+app = FastAPI()
+
+
+class Item(BaseModel):
+    name: str
+    price: float
+    is_offer: Union[bool, None] = None
+
+
+@app.get("/")
+def read_root():
+    return {"Hello": "World"}
+
+
+@app.get("/items/{item_id}")
+def read_item(item_id: int, q: Union[str, None] = None):
+    return {"item_id": item_id, "q": q}
+
+
+@app.put("/items/{item_id}")
+def update_item(item_id: int, item: Item):
+    return {"item_name": item.name, "item_id": item_id}
+```
+
+The server should reload automatically (because you added `--reload` to the `uvicorn` command above).
+
+### Interactive API docs upgrade
+
+Now go to http://127.0.0.1:8000/docs.
+
+* The interactive API documentation will be automatically updated, including the new body:
+
+![Swagger UI](https://fastapi.tiangolo.com/img/index/index-03-swagger-02.png)
+
+* Click on the button "Try it out", it allows you to fill the parameters and directly interact with the API:
+
+![Swagger UI interaction](https://fastapi.tiangolo.com/img/index/index-04-swagger-03.png)
+
+* Then click on the "Execute" button, the user interface will communicate with your API, send the parameters, get the results and show them on the screen:
+
+![Swagger UI interaction](https://fastapi.tiangolo.com/img/index/index-05-swagger-04.png)
+
+### Alternative API docs upgrade
+
+And now, go to http://127.0.0.1:8000/redoc.
+
+* The alternative documentation will also reflect the new query parameter and body:
+
+![ReDoc](https://fastapi.tiangolo.com/img/index/index-06-redoc-02.png)
+
+### Recap
+
+In summary, you declare **once** the types of parameters, body, etc. as function parameters.
+
+You do that with standard modern Python types.
+
+You don't have to learn a new syntax, the methods or classes of a specific library, etc.
+
+Just standard **Python 3.7+**.
+
+For example, for an `int`:
+
+```Python
+item_id: int
+```
+
+or for a more complex `Item` model:
+
+```Python
+item: Item
+```
+
+...and with that single declaration you get:
+
+* Editor support, including:
+    * Completion.
+    * Type checks.
+* Validation of data:
+    * Automatic and clear errors when the data is invalid.
+    * Validation even for deeply nested JSON objects.
+* Conversion of input data: coming from the network to Python data and types. Reading from:
+    * JSON.
+    * Path parameters.
+    * Query parameters.
+    * Cookies.
+    * Headers.
+    * Forms.
+    * Files.
+* Conversion of output data: converting from Python data and types to network data (as JSON):
+    * Convert Python types (`str`, `int`, `float`, `bool`, `list`, etc).
+    * `datetime` objects.
+    * `UUID` objects.
+    * Database models.
+    * ...and many more.
+* Automatic interactive API documentation, including 2 alternative user interfaces:
+    * Swagger UI.
+    * ReDoc.
+
+---
+
+Coming back to the previous code example, **FastAPI** will:
+
+* Validate that there is an `item_id` in the path for `GET` and `PUT` requests.
+* Validate that the `item_id` is of type `int` for `GET` and `PUT` requests.
+    * If it is not, the client will see a useful, clear error.
+* Check if there is an optional query parameter named `q` (as in `http://127.0.0.1:8000/items/foo?q=somequery`) for `GET` requests.
+    * As the `q` parameter is declared with `= None`, it is optional.
+    * Without the `None` it would be required (as is the body in the case with `PUT`).
+* For `PUT` requests to `/items/{item_id}`, read the body as JSON:
+    * Check that it has a required attribute `name` that should be a `str`.
+    * Check that it has a required attribute `price` that has to be a `float`.
+    * Check that it has an optional attribute `is_offer`, that should be a `bool`, if present.
+    * All this would also work for deeply nested JSON objects.
+* Convert from and to JSON automatically.
+* Document everything with OpenAPI, that can be used by:
+    * Interactive documentation systems.
+    * Automatic client code generation systems, for many languages.
+* Provide 2 interactive documentation web interfaces directly.
+
+---
+
+We just scratched the surface, but you already get the idea of how it all works.
+
+Try changing the line with:
+
+```Python
+    return {"item_name": item.name, "item_id": item_id}
+```
+
+...from:
+
+```Python
+        ... "item_name": item.name ...
+```
+
+...to:
+
+```Python
+        ... "item_price": item.price ...
+```
+
+...and see how your editor will auto-complete the attributes and know their types:
+
+![editor support](https://fastapi.tiangolo.com/img/vscode-completion.png)
+
+For a more complete example including more features, see the Tutorial - User Guide.
+
+**Spoiler alert**: the tutorial - user guide includes:
+
+* Declaration of **parameters** from other different places as: **headers**, **cookies**, **form fields** and **files**.
+* How to set **validation constraints** as `maximum_length` or `regex`.
+* A very powerful and easy to use **Dependency Injection** system.
+* Security and authentication, including support for **OAuth2** with **JWT tokens** and **HTTP Basic** auth.
+* More advanced (but equally easy) techniques for declaring **deeply nested JSON models** (thanks to Pydantic).
+* **GraphQL** integration with Strawberry and other libraries.
+* Many extra features (thanks to Starlette) as:
+    * **WebSockets**
+    * extremely easy tests based on HTTPX and `pytest`
+    * **CORS**
+    * **Cookie Sessions**
+    * ...and more.
+
+## Performance
+
+Independent TechEmpower benchmarks show **FastAPI** applications running under Uvicorn as one of the fastest Python frameworks available, only below Starlette and Uvicorn themselves (used internally by FastAPI). (*)
+
+To understand more about it, see the section Benchmarks.
+
+## Optional Dependencies
+
+Used by Pydantic:
+
+* email_validator - for email validation.
+* pydantic-settings - for settings management.
+* pydantic-extra-types - for extra types to be used with Pydantic.
+
+Used by Starlette:
+
+* httpx - Required if you want to use the `TestClient`.
+* jinja2 - Required if you want to use the default template configuration.
+* python-multipart - Required if you want to support form "parsing", with `request.form()`.
+* itsdangerous - Required for `SessionMiddleware` support.
+* pyyaml - Required for Starlette's `SchemaGenerator` support (you probably don't need it with FastAPI).
+* ujson - Required if you want to use `UJSONResponse`.
+
+Used by FastAPI / Starlette:
+
+* uvicorn - for the server that loads and serves your application.
+* orjson - Required if you want to use `ORJSONResponse`.
+
+You can install all of these with `pip install "fastapi[all]"`.
+
+## License
+
+This project is licensed under the terms of the MIT license.
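
The Optional Dependencies above note that `httpx` is required for the `TestClient`. A minimal sketch of an in-process test for the example app, assuming the `main.py` from this README (the `test_main.py` file name is illustrative):

```Python
# test_main.py - requires `pip install httpx pytest`
from fastapi.testclient import TestClient

from main import app  # the example app from this README

client = TestClient(app)


def test_read_item():
    response = client.get("/items/5", params={"q": "somequery"})
    assert response.status_code == 200
    assert response.json() == {"item_id": 5, "q": "somequery"}
```

Run it with `pytest`; the client calls the app in-process, so no server has to be running.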
diff --git a/evalkit_tf437/lib/python3.10/site-packages/fastapi-0.103.2.dist-info/WHEEL b/evalkit_tf437/lib/python3.10/site-packages/fastapi-0.103.2.dist-info/WHEEL
new file mode 100644
index 0000000000000000000000000000000000000000..9a7c9d3aa0dbdfb11d17aa156b7fc731dcd4a038
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/fastapi-0.103.2.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: hatchling 1.17.1
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/_checksum.py b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/_checksum.py
new file mode 100644
index 0000000000000000000000000000000000000000..517674670b181500a391ad5fa462ac2e00a85963
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/_checksum.py
@@ -0,0 +1,87 @@
+# Copyright 2020 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import struct
+
+
+class CommonChecksum(object):
+    """Hashlib-alike helper for CRC32C operations.
+
+    This class should not be used directly; it requires an ``update``
+    implementation.
+
+    Args:
+        initial_value (Optional[bytes]): the initial chunk of data from
+            which the CRC32C checksum is computed. Defaults to b''.
+    """
+    __slots__ = ()
+
+    def __init__(self, initial_value=b""):
+        self._crc = 0
+        if initial_value != b"":
+            self.update(initial_value)
+
+    def update(self, data):
+        """Update the checksum with a new chunk of data.
+
+        Args:
+            data (Optional[bytes]): a chunk of data used to extend
+                the CRC32C checksum.
+        """
+        raise NotImplementedError()
+
+    def digest(self):
+        """Big-endian order, per RFC 4960.
+
+        See: https://cloud.google.com/storage/docs/json_api/v1/objects#crc32c
+
+        Returns:
+            bytes: A four-byte digest string.
+        """
+        return struct.pack(">L", self._crc)
+
+    def hexdigest(self):
+        """Like :meth:`digest` except it returns a bytestring of double length.
+
+        Returns:
+            bytes: An eight-byte string, containing only hex digits.
+        """
+        return "{:08x}".format(self._crc).encode("ascii")
+
+    def copy(self):
+        """Create another checksum with the same CRC32C value.
+
+        Returns:
+            Checksum: the new instance.
+        """
+        clone = self.__class__()
+        clone._crc = self._crc
+        return clone
+
+    def consume(self, stream, chunksize):
+        """Consume chunks from a stream, extending our CRC32C checksum.
+
+        Args:
+            stream (BinaryIO): the stream to consume.
+            chunksize (int): the size of each read to perform.
+
+        Returns:
+            Generator[bytes, None, None]: Iterable of the chunks read from the
+            stream.
+        """
+        while True:
+            chunk = stream.read(chunksize)
+            if not chunk:
+                break
+            self.update(chunk)
+            yield chunk
diff --git a/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/_crc32c.cpython-310-x86_64-linux-gnu.so b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/_crc32c.cpython-310-x86_64-linux-gnu.so
new file mode 100644
index 0000000000000000000000000000000000000000..2a2c63236aae960fbc95805cbb941a4be16fd275
Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/_crc32c.cpython-310-x86_64-linux-gnu.so differ
diff --git a/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/cext.py b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/cext.py
new file mode 100644
index 0000000000000000000000000000000000000000..4a764e388325cbb9fef5357e2a583cef7474ae22
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/cext.py
@@ -0,0 +1,45 @@
+# Copyright 2020 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import struct
+
+# NOTE: ``__config__`` **must** be the first import because it (may)
+# modify the search path used to locate shared libraries.
+import google_crc32c.__config__  # type: ignore
+from google_crc32c._crc32c import extend  # type: ignore
+from google_crc32c._crc32c import value  # type: ignore
+from google_crc32c._checksum import CommonChecksum
+
+
+class Checksum(CommonChecksum):
+    """Hashlib-alike helper for CRC32C operations.
+
+    Args:
+        initial_value (Optional[bytes]): the initial chunk of data from
+            which the CRC32C checksum is computed. Defaults to b''.
+    """
+
+    __slots__ = ("_crc",)
+
+    def __init__(self, initial_value=b""):
+        self._crc = value(initial_value)
+
+    def update(self, chunk):
+        """Update the checksum with a new chunk of data.
+
+        Args:
+            chunk (Optional[bytes]): a chunk of data used to extend
+                the CRC32C checksum.
+        """
+        self._crc = extend(self._crc, chunk)
diff --git a/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/py.typed b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/py.typed
new file mode 100644
index 0000000000000000000000000000000000000000..076325e24604a323aed02a88bc9104f46bbc3948
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/google_crc32c/py.typed
@@ -0,0 +1,2 @@
+# Marker file for PEP 561.
+# The google_crc32c package uses inline types.
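
The two google_crc32c modules above split the CRC32C helper into a pure-Python base class (`CommonChecksum`) and a C-extension-backed subclass (`Checksum`) that delegates to the native `extend` and `value` functions. A minimal usage sketch of the hashlib-alike interface:

```Python
# Exercising the Checksum API defined above.
import io

import google_crc32c

checksum = google_crc32c.Checksum(b"hello ")
checksum.update(b"world")
print(checksum.hexdigest())  # eight ASCII hex digits of the big-endian CRC32C

# consume() reads a stream in fixed-size chunks, updating the checksum
# while re-yielding each chunk to the caller.
streamed = google_crc32c.Checksum()
for chunk in streamed.consume(io.BytesIO(b"hello world"), chunksize=4):
    pass  # each chunk could be written elsewhere while being checksummed

assert streamed.digest() == checksum.digest()  # same input, same 4-byte digest
```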
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/Openmp/omp-tools.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/Openmp/omp-tools.h
new file mode 100644
index 0000000000000000000000000000000000000000..276967d07e8f8c0f7686e5b3b15151edf2415ae7
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/Openmp/omp-tools.h
@@ -0,0 +1,1083 @@
+/*
+ * include/50/omp-tools.h.var
+ */
+
+//===----------------------------------------------------------------------===//
+//
+// The LLVM Compiler Infrastructure
+//
+// This file is dual licensed under the MIT and the University of Illinois Open
+// Source Licenses. See LICENSE.txt for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef __OMPT__
+#define __OMPT__
+
+/*****************************************************************************
+ * system include files
+ *****************************************************************************/
+
+#include <stdint.h>
+#include <stddef.h>
+
+/*****************************************************************************
+ * iteration macros
+ *****************************************************************************/
+
+#define FOREACH_OMPT_INQUIRY_FN(macro)      \
+  macro (ompt_enumerate_states)             \
+  macro (ompt_enumerate_mutex_impls)        \
+                                            \
+  macro (ompt_set_callback)                 \
+  macro (ompt_get_callback)                 \
+                                            \
+  macro (ompt_get_state)                    \
+                                            \
+  macro (ompt_get_parallel_info)            \
+  macro (ompt_get_task_info)                \
+  macro (ompt_get_task_memory)              \
+  macro (ompt_get_thread_data)              \
+  macro (ompt_get_unique_id)                \
+  macro (ompt_finalize_tool)                \
+                                            \
+  macro(ompt_get_num_procs)                 \
+  macro(ompt_get_num_places)                \
+  macro(ompt_get_place_proc_ids)            \
+  macro(ompt_get_place_num)                 \
+  macro(ompt_get_partition_place_nums)      \
+  macro(ompt_get_proc_id)                   \
+                                            \
+  macro(ompt_get_target_info)               \
+  macro(ompt_get_num_devices)
+
+#define FOREACH_OMPT_STATE(macro)                                                                \
+                                                                                                 \
+  /* first available state */                                                                    \
+  macro (ompt_state_undefined, 0x102)      /* undefined thread state */                          \
+                                                                                                 \
+  /* work states (0..15) */                                                                      \
+  macro (ompt_state_work_serial, 0x000)    /* working outside parallel */                        \
+  macro (ompt_state_work_parallel, 0x001)  /* working within parallel */                         \
+  macro (ompt_state_work_reduction, 0x002) /* performing a reduction */                          \
+                                                                                                 \
+  /* barrier wait states (16..31) */                                                             \
+  macro (ompt_state_wait_barrier, 0x010)   /* waiting at a barrier */                            \
+  macro (ompt_state_wait_barrier_implicit_parallel, 0x011)                                       \
+                                           /* implicit barrier at the end of parallel region */  \
+  macro (ompt_state_wait_barrier_implicit_workshare, 0x012)                                      \
+                                           /* implicit barrier at the end of worksharing */      \
+  macro (ompt_state_wait_barrier_implicit, 0x013) /* implicit barrier */                         \
+  macro (ompt_state_wait_barrier_explicit, 0x014) /* explicit barrier */                         \
+                                                                                                 \
+  /* task wait states (32..63) */                                                                \
+  macro (ompt_state_wait_taskwait, 0x020)  /* waiting at a taskwait */                           \
+  macro (ompt_state_wait_taskgroup, 0x021) /* waiting at a taskgroup */                          \
+                                                                                                 \
+  /* mutex wait states (64..127) */                                                              \
+  macro (ompt_state_wait_mutex, 0x040)                                                           \
+  macro (ompt_state_wait_lock, 0x041)      /* waiting for lock */                                \
+  macro (ompt_state_wait_critical, 0x042)  /* waiting for critical */                            \
+  macro (ompt_state_wait_atomic, 0x043)    /* waiting for atomic */                              \
+  macro (ompt_state_wait_ordered, 0x044)   /* waiting for ordered */                             \
+                                                                                                 \
+  /* target wait states (128..255) */                                                            \
+  macro (ompt_state_wait_target, 0x080)    /* waiting for target region */                       \
+  macro (ompt_state_wait_target_map, 0x081) /* waiting for target data mapping
operation */ \ + macro (ompt_state_wait_target_update, 0x082) /* waiting for target update operation */ \ + \ + /* misc (256..511) */ \ + macro (ompt_state_idle, 0x100) /* waiting for work */ \ + macro (ompt_state_overhead, 0x101) /* overhead excluding wait states */ \ + \ + /* implementation-specific states (512..) */ + + +#define FOREACH_KMP_MUTEX_IMPL(macro) \ + macro (kmp_mutex_impl_none, 0) /* unknown implementation */ \ + macro (kmp_mutex_impl_spin, 1) /* based on spin */ \ + macro (kmp_mutex_impl_queuing, 2) /* based on some fair policy */ \ + macro (kmp_mutex_impl_speculative, 3) /* based on HW-supported speculation */ + +#define FOREACH_OMPT_EVENT(macro) \ + \ + /*--- Mandatory Events ---*/ \ + macro (ompt_callback_thread_begin, ompt_callback_thread_begin_t, 1) /* thread begin */ \ + macro (ompt_callback_thread_end, ompt_callback_thread_end_t, 2) /* thread end */ \ + \ + macro (ompt_callback_parallel_begin, ompt_callback_parallel_begin_t, 3) /* parallel begin */ \ + macro (ompt_callback_parallel_end, ompt_callback_parallel_end_t, 4) /* parallel end */ \ + \ + macro (ompt_callback_task_create, ompt_callback_task_create_t, 5) /* task begin */ \ + macro (ompt_callback_task_schedule, ompt_callback_task_schedule_t, 6) /* task schedule */ \ + macro (ompt_callback_implicit_task, ompt_callback_implicit_task_t, 7) /* implicit task */ \ + \ + macro (ompt_callback_target, ompt_callback_target_t, 8) /* target */ \ + macro (ompt_callback_target_data_op, ompt_callback_target_data_op_t, 9) /* target data op */ \ + macro (ompt_callback_target_submit, ompt_callback_target_submit_t, 10) /* target submit */ \ + \ + macro (ompt_callback_control_tool, ompt_callback_control_tool_t, 11) /* control tool */ \ + \ + macro (ompt_callback_device_initialize, ompt_callback_device_initialize_t, 12) /* device initialize */ \ + macro (ompt_callback_device_finalize, ompt_callback_device_finalize_t, 13) /* device finalize */ \ + \ + macro (ompt_callback_device_load, ompt_callback_device_load_t, 14) /* device load */ \ + macro (ompt_callback_device_unload, ompt_callback_device_unload_t, 15) /* device unload */ \ + \ + /* Optional Events */ \ + macro (ompt_callback_sync_region_wait, ompt_callback_sync_region_t, 16) /* sync region wait begin or end */ \ + \ + macro (ompt_callback_mutex_released, ompt_callback_mutex_t, 17) /* mutex released */ \ + \ + macro (ompt_callback_dependences, ompt_callback_dependences_t, 18) /* report task dependences */ \ + macro (ompt_callback_task_dependence, ompt_callback_task_dependence_t, 19) /* report task dependence */ \ + \ + macro (ompt_callback_work, ompt_callback_work_t, 20) /* task at work begin or end */ \ + \ + macro (ompt_callback_master, ompt_callback_master_t, 21) /* task at master begin or end */ \ + \ + macro (ompt_callback_target_map, ompt_callback_target_map_t, 22) /* target map */ \ + \ + macro (ompt_callback_sync_region, ompt_callback_sync_region_t, 23) /* sync region begin or end */ \ + \ + macro (ompt_callback_lock_init, ompt_callback_mutex_acquire_t, 24) /* lock init */ \ + macro (ompt_callback_lock_destroy, ompt_callback_mutex_t, 25) /* lock destroy */ \ + \ + macro (ompt_callback_mutex_acquire, ompt_callback_mutex_acquire_t, 26) /* mutex acquire */ \ + macro (ompt_callback_mutex_acquired, ompt_callback_mutex_t, 27) /* mutex acquired */ \ + \ + macro (ompt_callback_nest_lock, ompt_callback_nest_lock_t, 28) /* nest lock */ \ + \ + macro (ompt_callback_flush, ompt_callback_flush_t, 29) /* after executing flush */ \ + \ + macro (ompt_callback_cancel, 
ompt_callback_cancel_t, 30) /* cancel innermost binding region */ \ + \ + macro (ompt_callback_reduction, ompt_callback_sync_region_t, 31) /* reduction */ \ + \ + macro (ompt_callback_dispatch, ompt_callback_dispatch_t, 32) /* dispatch of work */ + +/***************************************************************************** + * implementation specific types + *****************************************************************************/ + +typedef enum kmp_mutex_impl_t { +#define kmp_mutex_impl_macro(impl, code) impl = code, + FOREACH_KMP_MUTEX_IMPL(kmp_mutex_impl_macro) +#undef kmp_mutex_impl_macro +} kmp_mutex_impl_t; + +/***************************************************************************** + * definitions generated from spec + *****************************************************************************/ + +typedef enum ompt_callbacks_t { + ompt_callback_thread_begin = 1, + ompt_callback_thread_end = 2, + ompt_callback_parallel_begin = 3, + ompt_callback_parallel_end = 4, + ompt_callback_task_create = 5, + ompt_callback_task_schedule = 6, + ompt_callback_implicit_task = 7, + ompt_callback_target = 8, + ompt_callback_target_data_op = 9, + ompt_callback_target_submit = 10, + ompt_callback_control_tool = 11, + ompt_callback_device_initialize = 12, + ompt_callback_device_finalize = 13, + ompt_callback_device_load = 14, + ompt_callback_device_unload = 15, + ompt_callback_sync_region_wait = 16, + ompt_callback_mutex_released = 17, + ompt_callback_dependences = 18, + ompt_callback_task_dependence = 19, + ompt_callback_work = 20, + ompt_callback_master = 21, + ompt_callback_target_map = 22, + ompt_callback_sync_region = 23, + ompt_callback_lock_init = 24, + ompt_callback_lock_destroy = 25, + ompt_callback_mutex_acquire = 26, + ompt_callback_mutex_acquired = 27, + ompt_callback_nest_lock = 28, + ompt_callback_flush = 29, + ompt_callback_cancel = 30, + ompt_callback_reduction = 31, + ompt_callback_dispatch = 32 +} ompt_callbacks_t; + +typedef enum ompt_record_t { + ompt_record_ompt = 1, + ompt_record_native = 2, + ompt_record_invalid = 3 +} ompt_record_t; + +typedef enum ompt_record_native_t { + ompt_record_native_info = 1, + ompt_record_native_event = 2 +} ompt_record_native_t; + +typedef enum ompt_set_result_t { + ompt_set_error = 0, + ompt_set_never = 1, + ompt_set_impossible = 2, + ompt_set_sometimes = 3, + ompt_set_sometimes_paired = 4, + ompt_set_always = 5 +} ompt_set_result_t; + +typedef uint64_t ompt_id_t; + +typedef uint64_t ompt_device_time_t; + +typedef uint64_t ompt_buffer_cursor_t; + +typedef enum ompt_thread_t { + ompt_thread_initial = 1, + ompt_thread_worker = 2, + ompt_thread_other = 3, + ompt_thread_unknown = 4 +} ompt_thread_t; + +typedef enum ompt_scope_endpoint_t { + ompt_scope_begin = 1, + ompt_scope_end = 2 +} ompt_scope_endpoint_t; + +typedef enum ompt_dispatch_t { + ompt_dispatch_iteration = 1, + ompt_dispatch_section = 2 +} ompt_dispatch_t; + +typedef enum ompt_sync_region_t { + ompt_sync_region_barrier = 1, + ompt_sync_region_barrier_implicit = 2, + ompt_sync_region_barrier_explicit = 3, + ompt_sync_region_barrier_implementation = 4, + ompt_sync_region_taskwait = 5, + ompt_sync_region_taskgroup = 6, + ompt_sync_region_reduction = 7 +} ompt_sync_region_t; + +typedef enum ompt_target_data_op_t { + ompt_target_data_alloc = 1, + ompt_target_data_transfer_to_device = 2, + ompt_target_data_transfer_from_device = 3, + ompt_target_data_delete = 4, + ompt_target_data_associate = 5, + ompt_target_data_disassociate = 6 +} ompt_target_data_op_t; + +typedef enum 
ompt_work_t { + ompt_work_loop = 1, + ompt_work_sections = 2, + ompt_work_single_executor = 3, + ompt_work_single_other = 4, + ompt_work_workshare = 5, + ompt_work_distribute = 6, + ompt_work_taskloop = 7 +} ompt_work_t; + +typedef enum ompt_mutex_t { + ompt_mutex_lock = 1, + ompt_mutex_test_lock = 2, + ompt_mutex_nest_lock = 3, + ompt_mutex_test_nest_lock = 4, + ompt_mutex_critical = 5, + ompt_mutex_atomic = 6, + ompt_mutex_ordered = 7 +} ompt_mutex_t; + +typedef enum ompt_native_mon_flag_t { + ompt_native_data_motion_explicit = 0x01, + ompt_native_data_motion_implicit = 0x02, + ompt_native_kernel_invocation = 0x04, + ompt_native_kernel_execution = 0x08, + ompt_native_driver = 0x10, + ompt_native_runtime = 0x20, + ompt_native_overhead = 0x40, + ompt_native_idleness = 0x80 +} ompt_native_mon_flag_t; + +typedef enum ompt_task_flag_t { + ompt_task_initial = 0x00000001, + ompt_task_implicit = 0x00000002, + ompt_task_explicit = 0x00000004, + ompt_task_target = 0x00000008, + ompt_task_undeferred = 0x08000000, + ompt_task_untied = 0x10000000, + ompt_task_final = 0x20000000, + ompt_task_mergeable = 0x40000000, + ompt_task_merged = 0x80000000 +} ompt_task_flag_t; + +typedef enum ompt_task_status_t { + ompt_task_complete = 1, + ompt_task_yield = 2, + ompt_task_cancel = 3, + ompt_task_detach = 4, + ompt_task_early_fulfill = 5, + ompt_task_late_fulfill = 6, + ompt_task_switch = 7 +} ompt_task_status_t; + +typedef enum ompt_target_t { + ompt_target = 1, + ompt_target_enter_data = 2, + ompt_target_exit_data = 3, + ompt_target_update = 4 +} ompt_target_t; + +typedef enum ompt_parallel_flag_t { + ompt_parallel_invoker_program = 0x00000001, + ompt_parallel_invoker_runtime = 0x00000002, + ompt_parallel_league = 0x40000000, + ompt_parallel_team = 0x80000000 +} ompt_parallel_flag_t; + +typedef enum ompt_target_map_flag_t { + ompt_target_map_flag_to = 0x01, + ompt_target_map_flag_from = 0x02, + ompt_target_map_flag_alloc = 0x04, + ompt_target_map_flag_release = 0x08, + ompt_target_map_flag_delete = 0x10, + ompt_target_map_flag_implicit = 0x20 +} ompt_target_map_flag_t; + +typedef enum ompt_dependence_type_t { + ompt_dependence_type_in = 1, + ompt_dependence_type_out = 2, + ompt_dependence_type_inout = 3, + ompt_dependence_type_mutexinoutset = 4, + ompt_dependence_type_source = 5, + ompt_dependence_type_sink = 6 +} ompt_dependence_type_t; + +typedef enum ompt_cancel_flag_t { + ompt_cancel_parallel = 0x01, + ompt_cancel_sections = 0x02, + ompt_cancel_loop = 0x04, + ompt_cancel_taskgroup = 0x08, + ompt_cancel_activated = 0x10, + ompt_cancel_detected = 0x20, + ompt_cancel_discarded_task = 0x40 +} ompt_cancel_flag_t; + +typedef uint64_t ompt_hwid_t; + +typedef uint64_t ompt_wait_id_t; + +typedef enum ompt_frame_flag_t { + ompt_frame_runtime = 0x00, + ompt_frame_application = 0x01, + ompt_frame_cfa = 0x10, + ompt_frame_framepointer = 0x20, + ompt_frame_stackaddress = 0x30 +} ompt_frame_flag_t; + +typedef enum ompt_state_t { + ompt_state_work_serial = 0x000, + ompt_state_work_parallel = 0x001, + ompt_state_work_reduction = 0x002, + + ompt_state_wait_barrier = 0x010, + ompt_state_wait_barrier_implicit_parallel = 0x011, + ompt_state_wait_barrier_implicit_workshare = 0x012, + ompt_state_wait_barrier_implicit = 0x013, + ompt_state_wait_barrier_explicit = 0x014, + + ompt_state_wait_taskwait = 0x020, + ompt_state_wait_taskgroup = 0x021, + + ompt_state_wait_mutex = 0x040, + ompt_state_wait_lock = 0x041, + ompt_state_wait_critical = 0x042, + ompt_state_wait_atomic = 0x043, + ompt_state_wait_ordered = 0x044, + + 
ompt_state_wait_target = 0x080, + ompt_state_wait_target_map = 0x081, + ompt_state_wait_target_update = 0x082, + + ompt_state_idle = 0x100, + ompt_state_overhead = 0x101, + ompt_state_undefined = 0x102 +} ompt_state_t; + +typedef uint64_t (*ompt_get_unique_id_t) (void); + +typedef uint64_t ompd_size_t; + +typedef uint64_t ompd_wait_id_t; + +typedef uint64_t ompd_addr_t; +typedef int64_t ompd_word_t; +typedef uint64_t ompd_seg_t; + +typedef uint64_t ompd_device_t; + +typedef uint64_t ompd_thread_id_t; + +typedef enum ompd_scope_t { + ompd_scope_global = 1, + ompd_scope_address_space = 2, + ompd_scope_thread = 3, + ompd_scope_parallel = 4, + ompd_scope_implicit_task = 5, + ompd_scope_task = 6 +} ompd_scope_t; + +typedef uint64_t ompd_icv_id_t; + +typedef enum ompd_rc_t { + ompd_rc_ok = 0, + ompd_rc_unavailable = 1, + ompd_rc_stale_handle = 2, + ompd_rc_bad_input = 3, + ompd_rc_error = 4, + ompd_rc_unsupported = 5, + ompd_rc_needs_state_tracking = 6, + ompd_rc_incompatible = 7, + ompd_rc_device_read_error = 8, + ompd_rc_device_write_error = 9, + ompd_rc_nomem = 10, +} ompd_rc_t; + +typedef void (*ompt_interface_fn_t) (void); + +typedef ompt_interface_fn_t (*ompt_function_lookup_t) ( + const char *interface_function_name +); + +typedef union ompt_data_t { + uint64_t value; + void *ptr; +} ompt_data_t; + +typedef struct ompt_frame_t { + ompt_data_t exit_frame; + ompt_data_t enter_frame; + int exit_frame_flags; + int enter_frame_flags; +} ompt_frame_t; + +typedef void (*ompt_callback_t) (void); + +typedef void ompt_device_t; + +typedef void ompt_buffer_t; + +typedef void (*ompt_callback_buffer_request_t) ( + int device_num, + ompt_buffer_t **buffer, + size_t *bytes +); + +typedef void (*ompt_callback_buffer_complete_t) ( + int device_num, + ompt_buffer_t *buffer, + size_t bytes, + ompt_buffer_cursor_t begin, + int buffer_owned +); + +typedef void (*ompt_finalize_t) ( + ompt_data_t *tool_data +); + +typedef int (*ompt_initialize_t) ( + ompt_function_lookup_t lookup, + int initial_device_num, + ompt_data_t *tool_data +); + +typedef struct ompt_start_tool_result_t { + ompt_initialize_t initialize; + ompt_finalize_t finalize; + ompt_data_t tool_data; +} ompt_start_tool_result_t; + +typedef struct ompt_record_abstract_t { + ompt_record_native_t rclass; + const char *type; + ompt_device_time_t start_time; + ompt_device_time_t end_time; + ompt_hwid_t hwid; +} ompt_record_abstract_t; + +typedef struct ompt_dependence_t { + ompt_data_t variable; + ompt_dependence_type_t dependence_type; +} ompt_dependence_t; + +typedef int (*ompt_enumerate_states_t) ( + int current_state, + int *next_state, + const char **next_state_name +); + +typedef int (*ompt_enumerate_mutex_impls_t) ( + int current_impl, + int *next_impl, + const char **next_impl_name +); + +typedef ompt_set_result_t (*ompt_set_callback_t) ( + ompt_callbacks_t event, + ompt_callback_t callback +); + +typedef int (*ompt_get_callback_t) ( + ompt_callbacks_t event, + ompt_callback_t *callback +); + +typedef ompt_data_t *(*ompt_get_thread_data_t) (void); + +typedef int (*ompt_get_num_procs_t) (void); + +typedef int (*ompt_get_num_places_t) (void); + +typedef int (*ompt_get_place_proc_ids_t) ( + int place_num, + int ids_size, + int *ids +); + +typedef int (*ompt_get_place_num_t) (void); + +typedef int (*ompt_get_partition_place_nums_t) ( + int place_nums_size, + int *place_nums +); + +typedef int (*ompt_get_proc_id_t) (void); + +typedef int (*ompt_get_state_t) ( + ompt_wait_id_t *wait_id +); + +typedef int (*ompt_get_parallel_info_t) ( + int 
ancestor_level, + ompt_data_t **parallel_data, + int *team_size +); + +typedef int (*ompt_get_task_info_t) ( + int ancestor_level, + int *flags, + ompt_data_t **task_data, + ompt_frame_t **task_frame, + ompt_data_t **parallel_data, + int *thread_num +); + +typedef int (*ompt_get_task_memory_t)( + void **addr, + size_t *size, + int block +); + +typedef int (*ompt_get_target_info_t) ( + uint64_t *device_num, + ompt_id_t *target_id, + ompt_id_t *host_op_id +); + +typedef int (*ompt_get_num_devices_t) (void); + +typedef void (*ompt_finalize_tool_t) (void); + +typedef int (*ompt_get_device_num_procs_t) ( + ompt_device_t *device +); + +typedef ompt_device_time_t (*ompt_get_device_time_t) ( + ompt_device_t *device +); + +typedef double (*ompt_translate_time_t) ( + ompt_device_t *device, + ompt_device_time_t time +); + +typedef ompt_set_result_t (*ompt_set_trace_ompt_t) ( + ompt_device_t *device, + unsigned int enable, + unsigned int etype +); + +typedef ompt_set_result_t (*ompt_set_trace_native_t) ( + ompt_device_t *device, + int enable, + int flags +); + +typedef int (*ompt_start_trace_t) ( + ompt_device_t *device, + ompt_callback_buffer_request_t request, + ompt_callback_buffer_complete_t complete +); + +typedef int (*ompt_pause_trace_t) ( + ompt_device_t *device, + int begin_pause +); + +typedef int (*ompt_flush_trace_t) ( + ompt_device_t *device +); + +typedef int (*ompt_stop_trace_t) ( + ompt_device_t *device +); + +typedef int (*ompt_advance_buffer_cursor_t) ( + ompt_device_t *device, + ompt_buffer_t *buffer, + size_t size, + ompt_buffer_cursor_t current, + ompt_buffer_cursor_t *next +); + +typedef ompt_record_t (*ompt_get_record_type_t) ( + ompt_buffer_t *buffer, + ompt_buffer_cursor_t current +); + +typedef void *(*ompt_get_record_native_t) ( + ompt_buffer_t *buffer, + ompt_buffer_cursor_t current, + ompt_id_t *host_op_id +); + +typedef ompt_record_abstract_t * +(*ompt_get_record_abstract_t) ( + void *native_record +); + +typedef void (*ompt_callback_thread_begin_t) ( + ompt_thread_t thread_type, + ompt_data_t *thread_data +); + +typedef struct ompt_record_thread_begin_t { + ompt_thread_t thread_type; +} ompt_record_thread_begin_t; + +typedef void (*ompt_callback_thread_end_t) ( + ompt_data_t *thread_data +); + +typedef void (*ompt_callback_parallel_begin_t) ( + ompt_data_t *encountering_task_data, + const ompt_frame_t *encountering_task_frame, + ompt_data_t *parallel_data, + unsigned int requested_parallelism, + int flags, + const void *codeptr_ra +); + +typedef struct ompt_record_parallel_begin_t { + ompt_id_t encountering_task_id; + ompt_id_t parallel_id; + unsigned int requested_parallelism; + int flags; + const void *codeptr_ra; +} ompt_record_parallel_begin_t; + +typedef void (*ompt_callback_parallel_end_t) ( + ompt_data_t *parallel_data, + ompt_data_t *encountering_task_data, + int flags, + const void *codeptr_ra +); + +typedef struct ompt_record_parallel_end_t { + ompt_id_t parallel_id; + ompt_id_t encountering_task_id; + int flags; + const void *codeptr_ra; +} ompt_record_parallel_end_t; + +typedef void (*ompt_callback_work_t) ( + ompt_work_t wstype, + ompt_scope_endpoint_t endpoint, + ompt_data_t *parallel_data, + ompt_data_t *task_data, + uint64_t count, + const void *codeptr_ra +); + +typedef struct ompt_record_work_t { + ompt_work_t wstype; + ompt_scope_endpoint_t endpoint; + ompt_id_t parallel_id; + ompt_id_t task_id; + uint64_t count; + const void *codeptr_ra; +} ompt_record_work_t; + +typedef void (*ompt_callback_dispatch_t) ( + ompt_data_t *parallel_data, + ompt_data_t 
*task_data, + ompt_dispatch_t kind, + ompt_data_t instance +); + +typedef struct ompt_record_dispatch_t { + ompt_id_t parallel_id; + ompt_id_t task_id; + ompt_dispatch_t kind; + ompt_data_t instance; +} ompt_record_dispatch_t; + +typedef void (*ompt_callback_task_create_t) ( + ompt_data_t *encountering_task_data, + const ompt_frame_t *encountering_task_frame, + ompt_data_t *new_task_data, + int flags, + int has_dependences, + const void *codeptr_ra +); + +typedef struct ompt_record_task_create_t { + ompt_id_t encountering_task_id; + ompt_id_t new_task_id; + int flags; + int has_dependences; + const void *codeptr_ra; +} ompt_record_task_create_t; + +typedef void (*ompt_callback_dependences_t) ( + ompt_data_t *task_data, + const ompt_dependence_t *deps, + int ndeps +); + +typedef struct ompt_record_dependences_t { + ompt_id_t task_id; + ompt_dependence_t dep; + int ndeps; +} ompt_record_dependences_t; + +typedef void (*ompt_callback_task_dependence_t) ( + ompt_data_t *src_task_data, + ompt_data_t *sink_task_data +); + +typedef struct ompt_record_task_dependence_t { + ompt_id_t src_task_id; + ompt_id_t sink_task_id; +} ompt_record_task_dependence_t; + +typedef void (*ompt_callback_task_schedule_t) ( + ompt_data_t *prior_task_data, + ompt_task_status_t prior_task_status, + ompt_data_t *next_task_data +); + +typedef struct ompt_record_task_schedule_t { + ompt_id_t prior_task_id; + ompt_task_status_t prior_task_status; + ompt_id_t next_task_id; +} ompt_record_task_schedule_t; + +typedef void (*ompt_callback_implicit_task_t) ( + ompt_scope_endpoint_t endpoint, + ompt_data_t *parallel_data, + ompt_data_t *task_data, + unsigned int actual_parallelism, + unsigned int index, + int flags +); + +typedef struct ompt_record_implicit_task_t { + ompt_scope_endpoint_t endpoint; + ompt_id_t parallel_id; + ompt_id_t task_id; + unsigned int actual_parallelism; + unsigned int index; + int flags; +} ompt_record_implicit_task_t; + +typedef void (*ompt_callback_master_t) ( + ompt_scope_endpoint_t endpoint, + ompt_data_t *parallel_data, + ompt_data_t *task_data, + const void *codeptr_ra +); + +typedef struct ompt_record_master_t { + ompt_scope_endpoint_t endpoint; + ompt_id_t parallel_id; + ompt_id_t task_id; + const void *codeptr_ra; +} ompt_record_master_t; + +typedef void (*ompt_callback_sync_region_t) ( + ompt_sync_region_t kind, + ompt_scope_endpoint_t endpoint, + ompt_data_t *parallel_data, + ompt_data_t *task_data, + const void *codeptr_ra +); + +typedef struct ompt_record_sync_region_t { + ompt_sync_region_t kind; + ompt_scope_endpoint_t endpoint; + ompt_id_t parallel_id; + ompt_id_t task_id; + const void *codeptr_ra; +} ompt_record_sync_region_t; + +typedef void (*ompt_callback_mutex_acquire_t) ( + ompt_mutex_t kind, + unsigned int hint, + unsigned int impl, + ompt_wait_id_t wait_id, + const void *codeptr_ra +); + +typedef struct ompt_record_mutex_acquire_t { + ompt_mutex_t kind; + unsigned int hint; + unsigned int impl; + ompt_wait_id_t wait_id; + const void *codeptr_ra; +} ompt_record_mutex_acquire_t; + +typedef void (*ompt_callback_mutex_t) ( + ompt_mutex_t kind, + ompt_wait_id_t wait_id, + const void *codeptr_ra +); + +typedef struct ompt_record_mutex_t { + ompt_mutex_t kind; + ompt_wait_id_t wait_id; + const void *codeptr_ra; +} ompt_record_mutex_t; + +typedef void (*ompt_callback_nest_lock_t) ( + ompt_scope_endpoint_t endpoint, + ompt_wait_id_t wait_id, + const void *codeptr_ra +); + +typedef struct ompt_record_nest_lock_t { + ompt_scope_endpoint_t endpoint; + ompt_wait_id_t wait_id; + const void 
*codeptr_ra; +} ompt_record_nest_lock_t; + +typedef void (*ompt_callback_flush_t) ( + ompt_data_t *thread_data, + const void *codeptr_ra +); + +typedef struct ompt_record_flush_t { + const void *codeptr_ra; +} ompt_record_flush_t; + +typedef void (*ompt_callback_cancel_t) ( + ompt_data_t *task_data, + int flags, + const void *codeptr_ra +); + +typedef struct ompt_record_cancel_t { + ompt_id_t task_id; + int flags; + const void *codeptr_ra; +} ompt_record_cancel_t; + +typedef void (*ompt_callback_device_initialize_t) ( + int device_num, + const char *type, + ompt_device_t *device, + ompt_function_lookup_t lookup, + const char *documentation +); + +typedef void (*ompt_callback_device_finalize_t) ( + int device_num +); + +typedef void (*ompt_callback_device_load_t) ( + int device_num, + const char *filename, + int64_t offset_in_file, + void *vma_in_file, + size_t bytes, + void *host_addr, + void *device_addr, + uint64_t module_id +); + +typedef void (*ompt_callback_device_unload_t) ( + int device_num, + uint64_t module_id +); + +typedef void (*ompt_callback_target_data_op_t) ( + ompt_id_t target_id, + ompt_id_t host_op_id, + ompt_target_data_op_t optype, + void *src_addr, + int src_device_num, + void *dest_addr, + int dest_device_num, + size_t bytes, + const void *codeptr_ra +); + +typedef struct ompt_record_target_data_op_t { + ompt_id_t host_op_id; + ompt_target_data_op_t optype; + void *src_addr; + int src_device_num; + void *dest_addr; + int dest_device_num; + size_t bytes; + ompt_device_time_t end_time; + const void *codeptr_ra; +} ompt_record_target_data_op_t; + +typedef void (*ompt_callback_target_t) ( + ompt_target_t kind, + ompt_scope_endpoint_t endpoint, + int device_num, + ompt_data_t *task_data, + ompt_id_t target_id, + const void *codeptr_ra +); + +typedef struct ompt_record_target_t { + ompt_target_t kind; + ompt_scope_endpoint_t endpoint; + int device_num; + ompt_id_t task_id; + ompt_id_t target_id; + const void *codeptr_ra; +} ompt_record_target_t; + +typedef void (*ompt_callback_target_map_t) ( + ompt_id_t target_id, + unsigned int nitems, + void **host_addr, + void **device_addr, + size_t *bytes, + unsigned int *mapping_flags, + const void *codeptr_ra +); + +typedef struct ompt_record_target_map_t { + ompt_id_t target_id; + unsigned int nitems; + void **host_addr; + void **device_addr; + size_t *bytes; + unsigned int *mapping_flags; + const void *codeptr_ra; +} ompt_record_target_map_t; + +typedef void (*ompt_callback_target_submit_t) ( + ompt_id_t target_id, + ompt_id_t host_op_id, + unsigned int requested_num_teams +); + +typedef struct ompt_record_target_kernel_t { + ompt_id_t host_op_id; + unsigned int requested_num_teams; + unsigned int granted_num_teams; + ompt_device_time_t end_time; +} ompt_record_target_kernel_t; + +typedef int (*ompt_callback_control_tool_t) ( + uint64_t command, + uint64_t modifier, + void *arg, + const void *codeptr_ra +); + +typedef struct ompt_record_control_tool_t { + uint64_t command; + uint64_t modifier; + const void *codeptr_ra; +} ompt_record_control_tool_t; + +typedef struct ompd_address_t { + ompd_seg_t segment; + ompd_addr_t address; +} ompd_address_t; + +typedef struct ompd_frame_info_t { + ompd_address_t frame_address; + ompd_word_t frame_flag; +} ompd_frame_info_t; + +typedef struct _ompd_aspace_handle ompd_address_space_handle_t; +typedef struct _ompd_thread_handle ompd_thread_handle_t; +typedef struct _ompd_parallel_handle ompd_parallel_handle_t; +typedef struct _ompd_task_handle ompd_task_handle_t; + +typedef struct 
_ompd_aspace_cont ompd_address_space_context_t; +typedef struct _ompd_thread_cont ompd_thread_context_t; + +typedef struct ompd_device_type_sizes_t { + uint8_t sizeof_char; + uint8_t sizeof_short; + uint8_t sizeof_int; + uint8_t sizeof_long; + uint8_t sizeof_long_long; + uint8_t sizeof_pointer; +} ompd_device_type_sizes_t; + +typedef struct ompt_record_ompt_t { + ompt_callbacks_t type; + ompt_device_time_t time; + ompt_id_t thread_id; + ompt_id_t target_id; + union { + ompt_record_thread_begin_t thread_begin; + ompt_record_parallel_begin_t parallel_begin; + ompt_record_parallel_end_t parallel_end; + ompt_record_work_t work; + ompt_record_dispatch_t dispatch; + ompt_record_task_create_t task_create; + ompt_record_dependences_t dependences; + ompt_record_task_dependence_t task_dependence; + ompt_record_task_schedule_t task_schedule; + ompt_record_implicit_task_t implicit_task; + ompt_record_master_t master; + ompt_record_sync_region_t sync_region; + ompt_record_mutex_acquire_t mutex_acquire; + ompt_record_mutex_t mutex; + ompt_record_nest_lock_t nest_lock; + ompt_record_flush_t flush; + ompt_record_cancel_t cancel; + ompt_record_target_t target; + ompt_record_target_data_op_t target_data_op; + ompt_record_target_map_t target_map; + ompt_record_target_kernel_t target_kernel; + ompt_record_control_tool_t control_tool; + } record; +} ompt_record_ompt_t; + +typedef ompt_record_ompt_t *(*ompt_get_record_ompt_t) ( + ompt_buffer_t *buffer, + ompt_buffer_cursor_t current +); + +#define ompt_id_none 0 +#define ompt_data_none {0} +#define ompt_time_none 0 +#define ompt_hwid_none 0 +#define ompt_addr_none ~0 +#define ompt_mutex_impl_none 0 +#define ompt_wait_id_none 0 + +#define ompd_segment_none 0 + +#endif /* __OMPT__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/__pycache__/__init__.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..b4676a72f5beaa5a955188d44dabb7afe3dd916e Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/__pycache__/__init__.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_callbacks.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_callbacks.h new file mode 100644 index 0000000000000000000000000000000000000000..147f4c47b7281a154b1353065617df938d575f25 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_callbacks.h @@ -0,0 +1,762 @@ +/* + * Copyright 2010-2020 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. 
+ * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__CUPTI_CALLBACKS_H__) +#define __CUPTI_CALLBACKS_H__ + +#include +#include +#include +#include +#include + +#ifndef CUPTIAPI +#ifdef _WIN32 +#define CUPTIAPI __stdcall +#else +#define CUPTIAPI +#endif +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +#if defined(__GNUC__) && defined(CUPTI_LIB) + #pragma GCC visibility push(default) +#endif + +/** + * \defgroup CUPTI_CALLBACK_API CUPTI Callback API + * Functions, types, and enums that implement the CUPTI Callback API. + * @{ + */ + +/** + * \brief Specifies the point in an API call that a callback is issued. + * + * Specifies the point in an API call that a callback is issued. This + * value is communicated to the callback function via \ref + * CUpti_CallbackData::callbackSite. + */ +typedef enum { + /** + * The callback is at the entry of the API call. + */ + CUPTI_API_ENTER = 0, + /** + * The callback is at the exit of the API call. + */ + CUPTI_API_EXIT = 1, + CUPTI_API_CBSITE_FORCE_INT = 0x7fffffff +} CUpti_ApiCallbackSite; + +/** + * \brief Callback domains. + * + * Callback domains. Each domain represents callback points for a + * group of related API functions or CUDA driver activity. + */ +typedef enum { + /** + * Invalid domain. + */ + CUPTI_CB_DOMAIN_INVALID = 0, + /** + * Domain containing callback points for all driver API functions. + */ + CUPTI_CB_DOMAIN_DRIVER_API = 1, + /** + * Domain containing callback points for all runtime API + * functions. + */ + CUPTI_CB_DOMAIN_RUNTIME_API = 2, + /** + * Domain containing callback points for CUDA resource tracking. + */ + CUPTI_CB_DOMAIN_RESOURCE = 3, + /** + * Domain containing callback points for CUDA synchronization. + */ + CUPTI_CB_DOMAIN_SYNCHRONIZE = 4, + /** + * Domain containing callback points for NVTX API functions. 
+ */ + CUPTI_CB_DOMAIN_NVTX = 5, + CUPTI_CB_DOMAIN_SIZE, + + CUPTI_CB_DOMAIN_FORCE_INT = 0x7fffffff +} CUpti_CallbackDomain; + +/** + * \brief Callback IDs for resource domain. + * + * Callback IDs for resource domain, CUPTI_CB_DOMAIN_RESOURCE. This + * value is communicated to the callback function via the \p cbid + * parameter. + */ +typedef enum { + /** + * Invalid resource callback ID. + */ + CUPTI_CBID_RESOURCE_INVALID = 0, + /** + * A new context has been created. + */ + CUPTI_CBID_RESOURCE_CONTEXT_CREATED = 1, + /** + * A context is about to be destroyed. + */ + CUPTI_CBID_RESOURCE_CONTEXT_DESTROY_STARTING = 2, + /** + * A new stream has been created. + */ + CUPTI_CBID_RESOURCE_STREAM_CREATED = 3, + /** + * A stream is about to be destroyed. + */ + CUPTI_CBID_RESOURCE_STREAM_DESTROY_STARTING = 4, + /** + * The driver has finished initializing. + */ + CUPTI_CBID_RESOURCE_CU_INIT_FINISHED = 5, + /** + * A module has been loaded. + */ + CUPTI_CBID_RESOURCE_MODULE_LOADED = 6, + /** + * A module is about to be unloaded. + */ + CUPTI_CBID_RESOURCE_MODULE_UNLOAD_STARTING = 7, + /** + * The current module which is being profiled. + */ + CUPTI_CBID_RESOURCE_MODULE_PROFILED = 8, + /** + * CUDA graph has been created. + */ + CUPTI_CBID_RESOURCE_GRAPH_CREATED = 9, + /** + * CUDA graph is about to be destroyed. + */ + CUPTI_CBID_RESOURCE_GRAPH_DESTROY_STARTING = 10, + /** + * CUDA graph is cloned. + */ + CUPTI_CBID_RESOURCE_GRAPH_CLONED = 11, + /** + * CUDA graph node is about to be created + */ + CUPTI_CBID_RESOURCE_GRAPHNODE_CREATE_STARTING = 12, + /** + * CUDA graph node is created. + */ + CUPTI_CBID_RESOURCE_GRAPHNODE_CREATED = 13, + /** + * CUDA graph node is about to be destroyed. + */ + CUPTI_CBID_RESOURCE_GRAPHNODE_DESTROY_STARTING = 14, + /** + * Dependency on a CUDA graph node is created. + */ + CUPTI_CBID_RESOURCE_GRAPHNODE_DEPENDENCY_CREATED = 15, + /** + * Dependency on a CUDA graph node is destroyed. + */ + CUPTI_CBID_RESOURCE_GRAPHNODE_DEPENDENCY_DESTROY_STARTING = 16, + /** + * An executable CUDA graph is about to be created. + */ + CUPTI_CBID_RESOURCE_GRAPHEXEC_CREATE_STARTING = 17, + /** + * An executable CUDA graph is created. + */ + CUPTI_CBID_RESOURCE_GRAPHEXEC_CREATED = 18, + /** + * An executable CUDA graph is about to be destroyed. + */ + CUPTI_CBID_RESOURCE_GRAPHEXEC_DESTROY_STARTING = 19, + /** + * CUDA graph node is cloned. + */ + CUPTI_CBID_RESOURCE_GRAPHNODE_CLONED = 20, + + CUPTI_CBID_RESOURCE_SIZE, + CUPTI_CBID_RESOURCE_FORCE_INT = 0x7fffffff +} CUpti_CallbackIdResource; + +/** + * \brief Callback IDs for synchronization domain. + * + * Callback IDs for synchronization domain, + * CUPTI_CB_DOMAIN_SYNCHRONIZE. This value is communicated to the + * callback function via the \p cbid parameter. + */ +typedef enum { + /** + * Invalid synchronize callback ID. + */ + CUPTI_CBID_SYNCHRONIZE_INVALID = 0, + /** + * Stream synchronization has completed for the stream. + */ + CUPTI_CBID_SYNCHRONIZE_STREAM_SYNCHRONIZED = 1, + /** + * Context synchronization has completed for the context. + */ + CUPTI_CBID_SYNCHRONIZE_CONTEXT_SYNCHRONIZED = 2, + CUPTI_CBID_SYNCHRONIZE_SIZE, + CUPTI_CBID_SYNCHRONIZE_FORCE_INT = 0x7fffffff +} CUpti_CallbackIdSync; + + +/** + * \brief Data passed into a runtime or driver API callback function. + * + * Data passed into a runtime or driver API callback function as the + * \p cbdata argument to \ref CUpti_CallbackFunc. The \p cbdata will + * be this type for \p domain equal to CUPTI_CB_DOMAIN_DRIVER_API or + * CUPTI_CB_DOMAIN_RUNTIME_API. 
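+ *
+ * For example, an entry/exit callback pair can use \p correlationData
+ * to carry a timestamp from the entry callback to the matching exit
+ * callback (sketch; getTimestamp() stands in for any user-supplied
+ * clock and is not part of the CUPTI API):
+ * \code
+ *   void CUPTIAPI timingCallback(void *userdata,
+ *                                CUpti_CallbackDomain domain,
+ *                                CUpti_CallbackId cbid,
+ *                                const void *cbdata)
+ *   {
+ *     const CUpti_CallbackData *cbInfo = (const CUpti_CallbackData *)cbdata;
+ *     if (cbInfo->callbackSite == CUPTI_API_ENTER)
+ *       *cbInfo->correlationData = getTimestamp();
+ *     else if (cbInfo->callbackSite == CUPTI_API_EXIT)
+ *       printf("%s: %llu ns\n", cbInfo->functionName,
+ *              (unsigned long long)(getTimestamp() - *cbInfo->correlationData));
+ *   }
+ * \endcode
+ *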
+ * The callback data is valid only within
+ * the invocation of the callback function that is passed the data. If
+ * you need to retain some data for use outside of the callback, you
+ * must make a copy of that data. For example, if you make a shallow
+ * copy of CUpti_CallbackData within a callback, you cannot
+ * dereference \p functionParams outside of that callback to access
+ * the function parameters. \p functionName is an exception: the
+ * string pointed to by \p functionName is a global constant and so
+ * may be accessed outside of the callback.
+ */
+typedef struct {
+  /**
+   * Point in the runtime or driver function from where the callback
+   * was issued.
+   */
+  CUpti_ApiCallbackSite callbackSite;
+
+  /**
+   * Name of the runtime or driver API function which issued the
+   * callback. This string is a global constant and so may be
+   * accessed outside of the callback.
+   */
+  const char *functionName;
+
+  /**
+   * Pointer to the arguments passed to the runtime or driver API
+   * call. See generated_cuda_runtime_api_meta.h and
+   * generated_cuda_meta.h for structure definitions for the
+   * parameters for each runtime and driver API function.
+   */
+  const void *functionParams;
+
+  /**
+   * Pointer to the return value of the runtime or driver API
+   * call. This field is only valid within the exit::CUPTI_API_EXIT
+   * callback. For a runtime API \p functionReturnValue points to a
+   * \p cudaError_t. For a driver API \p functionReturnValue points
+   * to a \p CUresult.
+   */
+  void *functionReturnValue;
+
+  /**
+   * Name of the symbol operated on by the runtime or driver API
+   * function which issued the callback. This entry is valid only for
+   * driver and runtime launch callbacks, where it returns the name of
+   * the kernel.
+   */
+  const char *symbolName;
+
+  /**
+   * Driver context current to the thread, or null if no context is
+   * current. This value can change from the entry to exit callback
+   * of a runtime API function if the runtime initializes a context.
+   */
+  CUcontext context;
+
+  /**
+   * Unique ID for the CUDA context associated with the thread. The
+   * UIDs are assigned sequentially as contexts are created and are
+   * unique within a process.
+   */
+  uint32_t contextUid;
+
+  /**
+   * Pointer to data shared between the entry and exit callbacks of
+   * a given runtime or driver API function invocation. This field
+   * can be used to pass 64-bit values from the entry callback to
+   * the corresponding exit callback.
+   */
+  uint64_t *correlationData;
+
+  /**
+   * The activity record correlation ID for this callback. For a
+   * driver domain callback (i.e. \p domain
+   * CUPTI_CB_DOMAIN_DRIVER_API) this ID will equal the correlation ID
+   * in the CUpti_ActivityAPI record corresponding to the CUDA driver
+   * function call. For a runtime domain callback (i.e. \p domain
+   * CUPTI_CB_DOMAIN_RUNTIME_API) this ID will equal the correlation
+   * ID in the CUpti_ActivityAPI record corresponding to the CUDA
+   * runtime function call. Within the callback, this ID can be
+   * recorded to correlate user data with the activity record. This
+   * field is new in 4.1.
+   */
+  uint32_t correlationId;
+
+} CUpti_CallbackData;
+
+/**
+ * \brief Data passed into a resource callback function.
+ *
+ * Data passed into a resource callback function as the \p cbdata
+ * argument to \ref CUpti_CallbackFunc. The \p cbdata will be this
+ * type for \p domain equal to CUPTI_CB_DOMAIN_RESOURCE. The callback
+ * data is valid only within the invocation of the callback function
+ * that is passed the data.
+ * If you need to retain some data for use
+ * outside of the callback, you must make a copy of that data.
+ */
+typedef struct {
+  /**
+   * For CUPTI_CBID_RESOURCE_CONTEXT_CREATED and
+   * CUPTI_CBID_RESOURCE_CONTEXT_DESTROY_STARTING, the context being
+   * created or destroyed. For CUPTI_CBID_RESOURCE_STREAM_CREATED and
+   * CUPTI_CBID_RESOURCE_STREAM_DESTROY_STARTING, the context
+   * containing the stream being created or destroyed.
+   */
+  CUcontext context;
+
+  union {
+    /**
+     * For CUPTI_CBID_RESOURCE_STREAM_CREATED and
+     * CUPTI_CBID_RESOURCE_STREAM_DESTROY_STARTING, the stream being
+     * created or destroyed.
+     */
+    CUstream stream;
+  } resourceHandle;
+
+  /**
+   * Reserved for future use.
+   */
+  void *resourceDescriptor;
+} CUpti_ResourceData;
+
+
+/**
+ * \brief Module data passed into a resource callback function.
+ *
+ * CUDA module data passed into a resource callback function as the \p cbdata
+ * argument to \ref CUpti_CallbackFunc. The \p cbdata will be this
+ * type for \p domain equal to CUPTI_CB_DOMAIN_RESOURCE. The module
+ * data is valid only within the invocation of the callback function
+ * that is passed the data. If you need to retain some data for use
+ * outside of the callback, you must make a copy of that data.
+ */
+
+typedef struct {
+  /**
+   * Identifier to associate with the CUDA module.
+   */
+  uint32_t moduleId;
+
+  /**
+   * The size of the cubin.
+   */
+  size_t cubinSize;
+
+  /**
+   * Pointer to the associated cubin.
+   */
+  const char *pCubin;
+} CUpti_ModuleResourceData;
+
+/**
+ * \brief CUDA graphs data passed into a resource callback function.
+ *
+ * CUDA graphs data passed into a resource callback function as the \p cbdata
+ * argument to \ref CUpti_CallbackFunc. The \p cbdata will be this
+ * type for \p domain equal to CUPTI_CB_DOMAIN_RESOURCE. The graph
+ * data is valid only within the invocation of the callback function
+ * that is passed the data. If you need to retain some data for use
+ * outside of the callback, you must make a copy of that data.
+ */
+
+typedef struct {
+  /**
+   * CUDA graph
+   */
+  CUgraph graph;
+  /**
+   * The original CUDA graph from which \p graph is cloned
+   */
+  CUgraph originalGraph;
+  /**
+   * CUDA graph node
+   */
+  CUgraphNode node;
+  /**
+   * The original CUDA graph node from which \p node is cloned
+   */
+  CUgraphNode originalNode;
+  /**
+   * Type of the \p node
+   */
+  CUgraphNodeType nodeType;
+  /**
+   * The dependent graph node.
+   * The size of the array is \p numDependencies.
+   */
+  CUgraphNode dependency;
+  /**
+   * CUDA executable graph
+   */
+  CUgraphExec graphExec;
+} CUpti_GraphData;
+
+/**
+ * \brief Data passed into a synchronize callback function.
+ *
+ * Data passed into a synchronize callback function as the \p cbdata
+ * argument to \ref CUpti_CallbackFunc. The \p cbdata will be this
+ * type for \p domain equal to CUPTI_CB_DOMAIN_SYNCHRONIZE. The
+ * callback data is valid only within the invocation of the callback
+ * function that is passed the data. If you need to retain some data
+ * for use outside of the callback, you must make a copy of that data.
+ */
+typedef struct {
+  /**
+   * The context of the stream being synchronized.
+   */
+  CUcontext context;
+  /**
+   * The stream being synchronized.
+   */
+  CUstream stream;
+} CUpti_SynchronizeData;
+
+/**
+ * \brief Data passed into an NVTX callback function.
+ *
+ * Data passed into an NVTX callback function as the \p cbdata argument
+ * to \ref CUpti_CallbackFunc. The \p cbdata will be this type for \p
+ * domain equal to CUPTI_CB_DOMAIN_NVTX.
+ * Unless otherwise noted, the
+ * callback data is valid only within the invocation of the callback
+ * function that is passed the data. If you need to retain some data
+ * for use outside of the callback, you must make a copy of that data.
+ */
+typedef struct {
+  /**
+   * Name of the NVTX API function which issued the callback. This
+   * string is a global constant and so may be accessed outside of the
+   * callback.
+   */
+  const char *functionName;
+
+  /**
+   * Pointer to the arguments passed to the NVTX API call. See
+   * generated_nvtx_meta.h for structure definitions for the
+   * parameters for each NVTX API function.
+   */
+  const void *functionParams;
+
+  /**
+   * Pointer to the return value of the NVTX API call. See
+   * nvToolsExt.h for each NVTX API function's return value.
+   */
+  const void *functionReturnValue;
+} CUpti_NvtxData;
+
+/**
+ * \brief An ID for a driver API, runtime API, resource or
+ * synchronization callback.
+ *
+ * An ID for a driver API, runtime API, resource or synchronization
+ * callback. Within a driver API callback this should be interpreted
+ * as a CUpti_driver_api_trace_cbid value (these values are defined in
+ * cupti_driver_cbid.h). Within a runtime API callback this should be
+ * interpreted as a CUpti_runtime_api_trace_cbid value (these values
+ * are defined in cupti_runtime_cbid.h). Within a resource API
+ * callback this should be interpreted as a \ref
+ * CUpti_CallbackIdResource value. Within a synchronize API callback
+ * this should be interpreted as a \ref CUpti_CallbackIdSync value.
+ */
+typedef uint32_t CUpti_CallbackId;
+
+/**
+ * \brief Function type for a callback.
+ *
+ * Function type for a callback. The type of the data passed to the
+ * callback in \p cbdata depends on the \p domain. If \p domain is
+ * CUPTI_CB_DOMAIN_DRIVER_API or CUPTI_CB_DOMAIN_RUNTIME_API the type
+ * of \p cbdata will be CUpti_CallbackData. If \p domain is
+ * CUPTI_CB_DOMAIN_RESOURCE the type of \p cbdata will be
+ * CUpti_ResourceData. If \p domain is CUPTI_CB_DOMAIN_SYNCHRONIZE the
+ * type of \p cbdata will be CUpti_SynchronizeData. If \p domain is
+ * CUPTI_CB_DOMAIN_NVTX the type of \p cbdata will be CUpti_NvtxData.
+ *
+ * \param userdata User data supplied at subscription of the callback
+ * \param domain The domain of the callback
+ * \param cbid The ID of the callback
+ * \param cbdata Data passed to the callback.
+ */
+typedef void (CUPTIAPI *CUpti_CallbackFunc)(
+    void *userdata,
+    CUpti_CallbackDomain domain,
+    CUpti_CallbackId cbid,
+    const void *cbdata);
+
+/**
+ * \brief A callback subscriber.
+ */
+typedef struct CUpti_Subscriber_st *CUpti_SubscriberHandle;
+
+/**
+ * \brief Pointer to an array of callback domains.
+ */
+typedef CUpti_CallbackDomain *CUpti_DomainTable;
+
+/**
+ * \brief Get the available callback domains.
+ *
+ * Returns in \p *domainTable an array of size \p *domainCount of all
+ * the available callback domains.
+ * \note \b Thread-safety: this function is thread safe.
+ *
+ * \param domainCount Returns number of callback domains
+ * \param domainTable Returns pointer to array of available callback domains
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if unable to initialize CUPTI
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p domainCount or \p domainTable are NULL
+ */
+CUptiResult CUPTIAPI cuptiSupportedDomains(size_t *domainCount,
+                                           CUpti_DomainTable *domainTable);
+
+/**
+ * \brief Initialize a callback subscriber with a callback function
+ * and user data.
+ *
+ * Initializes a callback subscriber with a callback function and
+ * (optionally) a pointer to user data. The returned subscriber handle
+ * can be used to enable and disable the callback for specific domains
+ * and callback IDs.
+ * \note Only a single subscriber can be registered at a time. To ensure
+ * that no other CUPTI client interrupts the profiling session, it is the
+ * responsibility of all CUPTI clients to call this function before
+ * starting the profiling session. If a profiling session has already been
+ * started by another CUPTI client, this function returns the error code
+ * CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED.
+ * Note that this function returns the same error when the application is
+ * launched using NVIDIA tools like nvprof, Visual Profiler, Nsight Systems,
+ * Nsight Compute, cuda-gdb and cuda-memcheck.
+ * \note This function does not enable any callbacks.
+ * \note \b Thread-safety: this function is thread safe.
+ *
+ * \param subscriber Returns handle to the initialized subscriber
+ * \param callback The callback function
+ * \param userdata A pointer to user data. This data will be passed to
+ * the callback function via the \p userdata parameter.
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if unable to initialize CUPTI
+ * \retval CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED if there is already a CUPTI subscriber
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p subscriber is NULL
+ */
+CUptiResult CUPTIAPI cuptiSubscribe(CUpti_SubscriberHandle *subscriber,
+                                    CUpti_CallbackFunc callback,
+                                    void *userdata);
+
+/**
+ * \brief Unregister a callback subscriber.
+ *
+ * Removes a callback subscriber so that no future callbacks will be
+ * issued to that subscriber.
+ * \note \b Thread-safety: this function is thread safe.
+ *
+ * \param subscriber Handle to the initialized subscriber
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if unable to initialize CUPTI
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p subscriber is NULL or not initialized
+ */
+CUptiResult CUPTIAPI cuptiUnsubscribe(CUpti_SubscriberHandle subscriber);
+
+/**
+ * \brief Get the current enabled/disabled state of a callback for a specific
+ * domain and function ID.
+ *
+ * Returns non-zero in \p *enable if the callback for a domain and
+ * callback ID is enabled, and zero if not enabled.
+ *
+ * \note \b Thread-safety: a subscriber must serialize access to
+ * cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and
+ * cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub,
+ * d, c) and cuptiEnableCallback(sub, d, c) are called concurrently,
+ * the results are undefined.
+ *
+ * \param enable Returns non-zero if callback enabled, zero if not enabled
+ * \param subscriber Handle to the initialized subscriber
+ * \param domain The domain of the callback
+ * \param cbid The ID of the callback
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if unable to initialize CUPTI
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p enable is NULL, or if \p
+ * subscriber, \p domain or \p cbid is invalid.
+ */
+CUptiResult CUPTIAPI cuptiGetCallbackState(uint32_t *enable,
+                                           CUpti_SubscriberHandle subscriber,
+                                           CUpti_CallbackDomain domain,
+                                           CUpti_CallbackId cbid);
+
+/**
+ * \brief Enable or disable callbacks for a specific domain and
+ * callback ID.
+ *
+ * Enable or disable callbacks for a subscriber for a specific domain
+ * and callback ID.
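+ *
+ * For example, to enable callbacks for just cudaLaunchKernel, a client
+ * might do the following (sketch; error checking omitted, and
+ * myCallback stands in for any \ref CUpti_CallbackFunc):
+ * \code
+ *   CUpti_SubscriberHandle subscriber;
+ *   cuptiSubscribe(&subscriber, myCallback, NULL);
+ *   cuptiEnableCallback(1, subscriber, CUPTI_CB_DOMAIN_RUNTIME_API,
+ *                       CUPTI_RUNTIME_TRACE_CBID_cudaLaunchKernel_v7000);
+ *   // ... run CUDA work; callbacks arrive on entry and exit ...
+ *   cuptiUnsubscribe(subscriber);
+ * \endcode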
+ *
+ * \note \b Thread-safety: a subscriber must serialize access to
+ * cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and
+ * cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub,
+ * d, c) and cuptiEnableCallback(sub, d, c) are called concurrently,
+ * the results are undefined.
+ *
+ * \param enable New enable state for the callback. Zero disables the
+ * callback, non-zero enables the callback.
+ * \param subscriber Handle to callback subscription
+ * \param domain The domain of the callback
+ * \param cbid The ID of the callback
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if unable to initialize CUPTI
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p subscriber, \p domain or \p
+ * cbid is invalid.
+ */
+CUptiResult CUPTIAPI cuptiEnableCallback(uint32_t enable,
+                                         CUpti_SubscriberHandle subscriber,
+                                         CUpti_CallbackDomain domain,
+                                         CUpti_CallbackId cbid);
+
+/**
+ * \brief Enable or disable all callbacks for a specific domain.
+ *
+ * Enable or disable all callbacks for a specific domain.
+ *
+ * \note \b Thread-safety: a subscriber must serialize access to
+ * cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and
+ * cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub,
+ * d, *) and cuptiEnableDomain(sub, d) are called concurrently, the
+ * results are undefined.
+ *
+ * \param enable New enable state for all callbacks in the
+ * domain. Zero disables all callbacks, non-zero enables all
+ * callbacks.
+ * \param subscriber Handle to callback subscription
+ * \param domain The domain of the callback
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if unable to initialize CUPTI
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p subscriber or \p domain is invalid
+ */
+CUptiResult CUPTIAPI cuptiEnableDomain(uint32_t enable,
+                                       CUpti_SubscriberHandle subscriber,
+                                       CUpti_CallbackDomain domain);
+
+/**
+ * \brief Enable or disable all callbacks in all domains.
+ *
+ * Enable or disable all callbacks in all domains.
+ *
+ * \note \b Thread-safety: a subscriber must serialize access to
+ * cuptiGetCallbackState, cuptiEnableCallback, cuptiEnableDomain, and
+ * cuptiEnableAllDomains. For example, if cuptiGetCallbackState(sub,
+ * d, *) and cuptiEnableAllDomains(sub) are called concurrently, the
+ * results are undefined.
+ *
+ * \param enable New enable state for all callbacks in all
+ * domains. Zero disables all callbacks, non-zero enables all
+ * callbacks.
+ * \param subscriber Handle to callback subscription
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if unable to initialize CUPTI
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p subscriber is invalid
+ */
+CUptiResult CUPTIAPI cuptiEnableAllDomains(uint32_t enable,
+                                           CUpti_SubscriberHandle subscriber);
+
+/**
+ * \brief Get the name of a callback for a specific domain and callback ID.
+ *
+ * Returns a pointer to the name string in \p *name.
+ *
+ * \note \b Names are available only for the DRIVER and RUNTIME domains.
+ *
+ * \param domain The domain of the callback
+ * \param cbid The ID of the callback
+ * \param name Returns pointer to the name string on success, NULL otherwise
+ *
+ * \retval CUPTI_SUCCESS on success
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p name is NULL, or if
+ * \p domain or \p cbid is invalid.
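+ *
+ * For example, to log the name of the API function a callback fires
+ * for (sketch; assumes a driver or runtime domain \p cbid):
+ * \code
+ *   const char *name = NULL;
+ *   if (cuptiGetCallbackName(domain, cbid, &name) == CUPTI_SUCCESS)
+ *     printf("callback for %s\n", name);
+ * \endcode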
+ */
+CUptiResult CUPTIAPI cuptiGetCallbackName(CUpti_CallbackDomain domain,
+                                          uint32_t cbid,
+                                          const char **name);
+
+/** @} */ /* END CUPTI_CALLBACK_API */
+
+#if defined(__GNUC__) && defined(CUPTI_LIB)
+    #pragma GCC visibility pop
+#endif
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif // file guard
+
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_checkpoint.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_checkpoint.h
new file mode 100644
index 0000000000000000000000000000000000000000..36eeddc4e2b7bfd1902ce313d71f173db70beaef
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_checkpoint.h
@@ -0,0 +1,127 @@
+#pragma once
+
+#include <cuda.h>
+#include <cupti_result.h>
+
+#include <stddef.h>
+#include <stdint.h>
+
+namespace NV { namespace Cupti { namespace Checkpoint {
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * \defgroup CUPTI_CHECKPOINT_API CUPTI Checkpoint API
+ * Functions, types, and enums that implement the CUPTI Checkpoint API.
+ * @{
+ */
+
+/**
+ * \brief Specifies optimization options for a checkpoint. Options may be OR'd together to specify multiple options.
+ */
+typedef enum
+{
+  CUPTI_CHECKPOINT_OPT_NONE = 0,     //!< Default behavior
+  CUPTI_CHECKPOINT_OPT_TRANSFER = 1, //!< Determine which mem blocks have changed, and only restore those. This optimization is cached, which means cuptiCheckpointRestore must always be called at the same point in the application when this option is enabled, or the result may be incorrect.
+} CUpti_CheckpointOptimizations;
+
+/**
+ * \brief Configuration and handle for a CUPTI Checkpoint
+ *
+ * A CUpti_Checkpoint object should be initialized with desired options prior to passing into any
+ * CUPTI Checkpoint API function. The first call into a Checkpoint API function will initialize internal
+ * state based on these options. Subsequent changes to these options will not have any effect.
+ *
+ * Checkpoint data is saved in device, host, and filesystem space. There are options to reserve memory
+ * at each level (device, host, filesystem) which are intended to allow a guarantee that a certain amount
+ * of memory will remain free for use after the checkpoint is saved.
+ * Note, however, that falling back to slower levels of memory (host, and then filesystem) to save the checkpoint
+ * will result in performance degradation.
+ * Currently, the filesystem limitation is not implemented. Note that falling back to filesystem storage may
+ * significantly impact the performance for saving and restoring a checkpoint.
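+ *
+ * A typical save/restore cycle might look like this (sketch; error
+ * checking omitted, and runKernel() / numPasses stand in for
+ * application code):
+ * \code
+ *   CUpti_Checkpoint cp = { CUpti_Checkpoint_STRUCT_SIZE };
+ *   cp.ctx = NULL;                  // NULL selects the current context
+ *   cuptiCheckpointSave(&cp);       // first call initializes and saves
+ *   for (int pass = 0; pass < numPasses; ++pass)
+ *   {
+ *     runKernel();                  // may overwrite device memory
+ *     cuptiCheckpointRestore(&cp);  // roll device state back
+ *   }
+ *   cuptiCheckpointFree(&cp);       // release backing storage
+ * \endcode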
+ */
+typedef struct
+{
+  size_t structSize;      //!< [in] Must be set to CUpti_Checkpoint_STRUCT_SIZE
+
+  CUcontext ctx;          //!< [in] Set to context to save from, or will use current context if NULL
+
+  size_t reserveDeviceMB; //!< [in] Restrict checkpoint from using last N MB of device memory (-1 = use no device memory)
+  size_t reserveHostMB;   //!< [in] Restrict checkpoint from using last N MB of host memory (-1 = use no host memory)
+  uint8_t allowOverwrite; //!< [in] Boolean, allow checkpoint to save over existing checkpoint
+  uint8_t optimizations;  //!< [in] Mask of CUpti_CheckpointOptimizations flags for this checkpoint
+
+  void * pPriv;           //!< [in] Assign to NULL
+} CUpti_Checkpoint;
+
+#define CUpti_Checkpoint_STRUCT_SIZE \
+(offsetof(CUpti_Checkpoint, pPriv) + \
+sizeof(((CUpti_Checkpoint*)(nullptr))->pPriv))
+
+#if defined(__GNUC__) && defined(CUPTI_LIB)
+    #pragma GCC visibility push(default)
+#endif
+
+/**
+ * \brief Initialize and save a checkpoint of the device state associated with the handle context
+ *
+ * Uses the handle options to configure and save a checkpoint of the device state associated with the specified context.
+ *
+ * \param handle A pointer to a CUpti_Checkpoint object
+ *
+ * \retval CUPTI_SUCCESS if a checkpoint was successfully initialized and saved
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if \p handle does not appear to refer to a valid CUpti_Checkpoint
+ * \retval CUPTI_ERROR_INVALID_CONTEXT
+ * \retval CUPTI_ERROR_INVALID_DEVICE if device associated with context is not compatible with checkpoint API
+ * \retval CUPTI_ERROR_INVALID_OPERATION if Save is requested over an existing checkpoint, but \p allowOverwrite was not originally specified
+ * \retval CUPTI_ERROR_OUT_OF_MEMORY if, as configured, there is not enough backing storage space to save the checkpoint
+ */
+CUptiResult cuptiCheckpointSave(CUpti_Checkpoint * const handle);
+
+/**
+ * \brief Restore a checkpoint to the device associated with its context
+ *
+ * Restores device, pinned, and allocated memory to the state when the checkpoint was saved
+ *
+ * \param handle A pointer to a previously saved CUpti_Checkpoint object
+ *
+ * \retval CUPTI_SUCCESS if the checkpoint was successfully restored
+ * \retval CUPTI_ERROR_NOT_INITIALIZED if the checkpoint was not previously initialized
+ * \retval CUPTI_ERROR_INVALID_CONTEXT
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if the handle appears invalid
+ * \retval CUPTI_ERROR_UNKNOWN if the restore or optimization operation fails
+ */
+CUptiResult cuptiCheckpointRestore(CUpti_Checkpoint * const handle);
+
+/**
+ * \brief Free the backing data for a checkpoint
+ *
+ * Frees all associated device memory, host memory, and filesystem storage used for this checkpoint.
+ * After freeing a handle, it may be re-used as if it were new: options may be re-configured and will
+ * take effect on the next call to \p cuptiCheckpointSave.
+ *
+ * \param handle A pointer to a previously saved CUpti_Checkpoint object
+ *
+ * \retval CUPTI_SUCCESS if the handle was successfully freed
+ * \retval CUPTI_ERROR_INVALID_PARAMETER if the handle was already freed or appears invalid
+ * \retval CUPTI_ERROR_INVALID_CONTEXT if the context is no longer valid
+ */
+CUptiResult cuptiCheckpointFree(CUpti_Checkpoint * const handle);
+
+#if defined(__GNUC__) && defined(CUPTI_LIB)
+    #pragma GCC visibility pop
+#endif
+
+/**
+ * @}
+ */
+
+#ifdef __cplusplus
+}
+#endif
+
+// Exit namespace NV::Cupti::Checkpoint
+}}}
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_pcsampling_util.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_pcsampling_util.h
new file mode 100644
index 0000000000000000000000000000000000000000..9cb1ac2132b3d53bd67f39f1e4ebd85d3ea61465
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_pcsampling_util.h
@@ -0,0 +1,419 @@
+#if !defined(_CUPTI_PCSAMPLING_UTIL_H_)
+#define _CUPTI_PCSAMPLING_UTIL_H_
+
+#include <cupti_pcsampling.h>
+#include <fstream>
+
+#ifndef CUPTIUTILAPI
+#ifdef _WIN32
+#define CUPTIUTILAPI __stdcall
+#else
+#define CUPTIUTILAPI
+#endif
+#endif
+
+#define ACTIVITY_RECORD_ALIGNMENT 8
+#if defined(_WIN32) // Windows 32- and 64-bit
+#define START_PACKED_ALIGNMENT __pragma(pack(push,1)) // exact fit - no padding
+#define PACKED_ALIGNMENT __declspec(align(ACTIVITY_RECORD_ALIGNMENT))
+#define END_PACKED_ALIGNMENT __pragma(pack(pop))
+#elif defined(__GNUC__) // GCC
+#define START_PACKED_ALIGNMENT
+#define PACKED_ALIGNMENT __attribute__ ((__packed__)) __attribute__ ((aligned (ACTIVITY_RECORD_ALIGNMENT)))
+#define END_PACKED_ALIGNMENT
+#else // all other compilers
+#define START_PACKED_ALIGNMENT
+#define PACKED_ALIGNMENT
+#define END_PACKED_ALIGNMENT
+#endif
+
+#ifndef CUPTI_UTIL_STRUCT_SIZE
+#define CUPTI_UTIL_STRUCT_SIZE(type_, lastfield_) (offsetof(type_, lastfield_) + sizeof(((type_*)0)->lastfield_))
+#endif
+
+#ifndef CHECK_PC_SAMPLING_STRUCT_FIELD_EXISTS
+#define CHECK_PC_SAMPLING_STRUCT_FIELD_EXISTS(type, member, structSize) \
+    (offsetof(type, member) < structSize)
+#endif
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#if defined(__GNUC__)
+    #pragma GCC visibility push(default)
+#endif
+
+namespace CUPTI { namespace PcSamplingUtil {
+
+/**
+ * \defgroup CUPTI_PCSAMPLING_UTILITY CUPTI PC Sampling Utility API
+ * Functions, types, and enums that implement the CUPTI PC Sampling Utility API.
+ * @{
+ */
+
+/**
+ * \brief Header info that will be stored in the file.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * Version of file format.
+   */
+  uint32_t version;
+  /**
+   * Total number of buffers present in the file.
+   */
+  uint32_t totalBuffers;
+} Header;
+
+/**
+ * \brief BufferInfo will be stored in the file for every buffer,
+ * i.e. for every call of the CuptiUtilPutPcSampData() API.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * Total number of PC records.
+   */
+  uint64_t recordCount;
+  /**
+   * Count of all stall reasons supported on the GPU.
+   */
+  size_t numStallReasons;
+  /**
+   * Total number of stall reasons in a single record.
+   */
+  uint64_t numSelectedStallReasons;
+  /**
+   * Buffer size in bytes.
+   */
+  uint64_t bufferByteSize;
+} BufferInfo;
+
+/**
+ * \brief All available stall reason names and their respective indexes
+ * will be stored in it.
+ */
+typedef struct PACKED_ALIGNMENT {
+  /**
+   * Number of all available stall reasons.
+   */
+  size_t numStallReasons;
+  /**
+   * Stall reason names of all available stall reasons.
+   */
+  char **stallReasons;
+  /**
+   * Stall reason index of all available stall reasons.
+   */
+  uint32_t *stallReasonIndex;
+} PcSamplingStallReasons;
+
+typedef enum {
+  /**
+   * Invalid buffer type.
+   */
+  PC_SAMPLING_BUFFER_INVALID = 0,
+  /**
+   * Refers to CUpti_PCSamplingData buffer.
+   */
+  PC_SAMPLING_BUFFER_PC_TO_COUNTER_DATA = 1
+} PcSamplingBufferType;
+
+/**
+ * \brief CUPTI PC sampling utility API result codes.
+ *
+ * Error and result codes returned by the CUPTI PC sampling utility API.
+ */
+typedef enum {
+  /**
+   * No error.
+   */
+  CUPTI_UTIL_SUCCESS = 0,
+  /**
+   * One or more of the parameters are invalid.
+   */
+  CUPTI_UTIL_ERROR_INVALID_PARAMETER = 1,
+  /**
+   * Unable to create a new file.
+   */
+  CUPTI_UTIL_ERROR_UNABLE_TO_CREATE_FILE = 2,
+  /**
+   * Unable to open a file.
+   */
+  CUPTI_UTIL_ERROR_UNABLE_TO_OPEN_FILE = 3,
+  /**
+   * Read or write operation failed.
+   */
+  CUPTI_UTIL_ERROR_READ_WRITE_OPERATION_FAILED = 4,
+  /**
+   * Provided file handle is corrupted.
+   */
+  CUPTI_UTIL_ERROR_FILE_HANDLE_CORRUPTED = 5,
+  /**
+   * Seek operation failed.
+   */
+  CUPTI_UTIL_ERROR_SEEK_OPERATION_FAILED = 6,
+  /**
+   * Unable to allocate enough memory to perform the requested
+   * operation.
+   */
+  CUPTI_UTIL_ERROR_OUT_OF_MEMORY = 7,
+  /**
+   * An unknown internal error has occurred.
+   */
+  CUPTI_UTIL_ERROR_UNKNOWN = 999,
+  CUPTI_UTIL_ERROR_FORCE_INT = 0x7fffffff
+} CUptiUtilResult;
+
+/**
+ * \brief Params for \ref CuptiUtilPutPcSampData
+ */
+typedef struct {
+  /**
+   * Size of the data structure, i.e. CUptiUtil_PutPcSampDataParamsSize.
+   * The CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are
+   * available in the structure. Used to preserve backward compatibility.
+   */
+  size_t size;
+  /**
+   * Type of buffer to store in the file.
+   */
+  PcSamplingBufferType bufferType;
+  /**
+   * PC sampling buffer.
+   */
+  void *pSamplingData;
+  /**
+   * Number of configured attributes.
+   */
+  size_t numAttributes;
+  /**
+   * Refer \ref CUpti_PCSamplingConfigurationInfo.
+   * It is expected to provide configuration details of at least the
+   * CUPTI_PC_SAMPLING_CONFIGURATION_ATTR_TYPE_STALL_REASON attribute.
+   */
+  CUpti_PCSamplingConfigurationInfo *pPCSamplingConfigurationInfo;
+  /**
+   * Refer \ref PcSamplingStallReasons.
+   */
+  PcSamplingStallReasons *pPcSamplingStallReasons;
+  /**
+   * Name of the file in which to store the buffer.
+   */
+  const char* fileName;
+} CUptiUtil_PutPcSampDataParams;
+#define CUptiUtil_PutPcSampDataParamsSize CUPTI_UTIL_STRUCT_SIZE(CUptiUtil_PutPcSampDataParams, fileName)
+
+/**
+ * \brief Dump PC sampling data into the file.
+ *
+ * This API can be called multiple times; each call appends the buffer
+ * to the file. For every buffer it stores a BufferInfo record, which
+ * makes it possible to allocate a correctly sized buffer before
+ * retrieving the data later.
+ * This API creates the file if it does not exist.
+ * If the stallReasonIndex or stallReasons pointer of \ref CUptiUtil_PutPcSampDataParams is NULL,
+ * then stall reason data will not be stored in the file.
+ * It is expected that all available stall reason data is stored at least once, so that it can
+ * be referred to during offline correlation.
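+ *
+ * For example, dumping one collected PC sampling buffer (sketch;
+ * samplingData, numAttributes, configInfo and stallReasons are
+ * assumed to come from the PC sampling collection code):
+ * \code
+ *   CUptiUtil_PutPcSampDataParams params = { 0 };
+ *   params.size = CUptiUtil_PutPcSampDataParamsSize;
+ *   params.bufferType = PC_SAMPLING_BUFFER_PC_TO_COUNTER_DATA;
+ *   params.pSamplingData = samplingData;
+ *   params.numAttributes = numAttributes;
+ *   params.pPCSamplingConfigurationInfo = configInfo;
+ *   params.pPcSamplingStallReasons = stallReasons;
+ *   params.fileName = "pcsampling.dat";
+ *   CuptiUtilPutPcSampData(&params);
+ * \endcode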
+ *
+ * \retval CUPTI_UTIL_SUCCESS
+ * \retval CUPTI_UTIL_ERROR_INVALID_PARAMETER error out if the buffer type is invalid,
+ * if \p pSamplingData or \p pParams is NULL, if stall reason configuration details are
+ * not provided, or if the file name is empty.
+ * \retval CUPTI_UTIL_ERROR_UNABLE_TO_CREATE_FILE
+ * \retval CUPTI_UTIL_ERROR_UNABLE_TO_OPEN_FILE
+ * \retval CUPTI_UTIL_ERROR_READ_WRITE_OPERATION_FAILED
+ */
+CUptiUtilResult CUPTIUTILAPI CuptiUtilPutPcSampData(CUptiUtil_PutPcSampDataParams *pParams);
+
+/**
+ * \brief Params for \ref CuptiUtilGetHeaderData
+ */
+typedef struct {
+  /**
+   * Size of the data structure, i.e. CUptiUtil_GetHeaderDataParamsSize.
+   * The CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are
+   * available in the structure. Used to preserve backward compatibility.
+   */
+  size_t size;
+  /**
+   * File handle.
+   */
+  std::ifstream *fileHandler;
+  /**
+   * Header Info.
+   */
+  Header headerInfo;
+
+} CUptiUtil_GetHeaderDataParams;
+#define CUptiUtil_GetHeaderDataParamsSize CUPTI_UTIL_STRUCT_SIZE(CUptiUtil_GetHeaderDataParams, headerInfo)
+
+/**
+ * \brief Get header data of the file.
+ *
+ * This API must be called once, at the start of retrieving data from the file.
+ * The \ref Header structure gives the total number
+ * of buffers present in the file.
+ *
+ * \retval CUPTI_UTIL_SUCCESS
+ * \retval CUPTI_UTIL_ERROR_INVALID_PARAMETER error out if \p pParams or \p fileHandler is NULL, or the param struct size is incorrect.
+ * \retval CUPTI_UTIL_ERROR_FILE_HANDLE_CORRUPTED file handle is not in a good state to read data from the file.
+ * \retval CUPTI_UTIL_ERROR_READ_WRITE_OPERATION_FAILED failed to read data from the file.
+ */
+CUptiUtilResult CUPTIUTILAPI CuptiUtilGetHeaderData(CUptiUtil_GetHeaderDataParams *pParams);
+
+/**
+ * \brief Params for \ref CuptiUtilGetBufferInfo
+ */
+typedef struct {
+  /**
+   * Size of the data structure, i.e. CUptiUtil_GetBufferInfoParamsSize.
+   * The CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are
+   * available in the structure. Used to preserve backward compatibility.
+   */
+  size_t size;
+  /**
+   * File handle.
+   */
+  std::ifstream *fileHandler;
+  /**
+   * Buffer Info.
+   */
+  BufferInfo bufferInfoData;
+} CUptiUtil_GetBufferInfoParams;
+#define CUptiUtil_GetBufferInfoParamsSize CUPTI_UTIL_STRUCT_SIZE(CUptiUtil_GetBufferInfoParams, bufferInfoData)
+
+/**
+ * \brief Get buffer info data of the file.
+ *
+ * This API must be called before each call to the CuptiUtilGetPcSampData API.
+ * The \ref BufferInfo structure gives the record count and the stall reason count
+ * of every record in the buffer, which helps to allocate an exactly sized buffer to retrieve the data into.
+ *
+ * \retval CUPTI_UTIL_SUCCESS
+ * \retval CUPTI_UTIL_ERROR_INVALID_PARAMETER error out if \p pParams or \p fileHandler is NULL, or the param struct size is incorrect.
+ * \retval CUPTI_UTIL_ERROR_FILE_HANDLE_CORRUPTED file handle is not in a good state to read data from the file.
+ * \retval CUPTI_UTIL_ERROR_READ_WRITE_OPERATION_FAILED failed to read data from the file.
+ */
+CUptiUtilResult CUPTIUTILAPI CuptiUtilGetBufferInfo(CUptiUtil_GetBufferInfoParams *pParams);
+
+/**
+ * \brief Params for \ref CuptiUtilGetPcSampData
+ */
+typedef struct {
+  /**
+   * Size of the data structure, i.e. CUptiUtil_GetPcSampDataParamsSize.
+   * The CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are
+   * available in the structure. Used to preserve backward compatibility.
+   */
+  size_t size;
+  /**
+   * File handle.
+   */
+  std::ifstream *fileHandler;
+  /**
+   * Type of buffer stored in the file.
+   */
+  PcSamplingBufferType bufferType;
+  /**
+   * Pointer to buffer info collected using \ref CuptiUtilGetBufferInfo.
+   */
+  BufferInfo *pBufferInfoData;
+  /**
+   * Pointer to allocated memory in which to store the data retrieved from the file.
+   */
+  void *pSamplingData;
+  /**
+   * Number of configuration attributes.
+   */
+  size_t numAttributes;
+  /**
+   * Refer \ref CUpti_PCSamplingConfigurationInfo.
+   */
+  CUpti_PCSamplingConfigurationInfo *pPCSamplingConfigurationInfo;
+  /**
+   * Refer \ref PcSamplingStallReasons.
+   * For the stallReasons field of \ref PcSamplingStallReasons it is expected to
+   * allocate memory for each string element of the array.
+   */
+  PcSamplingStallReasons *pPcSamplingStallReasons;
+} CUptiUtil_GetPcSampDataParams;
+#define CUptiUtil_GetPcSampDataParamsSize CUPTI_UTIL_STRUCT_SIZE(CUptiUtil_GetPcSampDataParams, pPcSamplingStallReasons)
+
+/**
+ * \brief Retrieve PC sampling data from the file into an allocated buffer.
+ *
+ * This API must be called after the CuptiUtilGetBufferInfo API.
+ * It retrieves data from the file into the allocated buffer.
+ *
+ * \retval CUPTI_UTIL_SUCCESS
+ * \retval CUPTI_UTIL_ERROR_INVALID_PARAMETER error out if the buffer type is invalid,
+ * or if \p pSamplingData or \p pParams is NULL. If \p pPcSamplingStallReasons is not NULL,
+ * error out if \p stallReasonIndex, \p stallReasons, or any \p stallReasons array element pointer is NULL.
+ * \retval CUPTI_UTIL_ERROR_READ_WRITE_OPERATION_FAILED
+ * \retval CUPTI_UTIL_ERROR_FILE_HANDLE_CORRUPTED file handle is not in a good state to read data from the file.
+ */
+CUptiUtilResult CUPTIUTILAPI CuptiUtilGetPcSampData(CUptiUtil_GetPcSampDataParams *pParams);
+
+/**
+ * \brief Params for \ref CuptiUtilMergePcSampData
+ */
+typedef struct
+{
+  /**
+   * Size of the data structure, i.e. CUptiUtil_MergePcSampDataParamsSize.
+   * The CUPTI client should set the size of the structure. It will be used in CUPTI to check what fields are
+   * available in the structure. Used to preserve backward compatibility.
+   */
+  size_t size;
+  /**
+   * Number of buffers to merge.
+   */
+  size_t numberOfBuffers;
+  /**
+   * Pointer to array of buffers to merge.
+   */
+  CUpti_PCSamplingData *PcSampDataBuffer;
+  /**
+   * Pointer to array of merged buffers as per the range ID.
+   */
+  CUpti_PCSamplingData **MergedPcSampDataBuffers;
+  /**
+   * Number of merged buffers.
+   */
+  size_t *numMergedBuffer;
+} CUptiUtil_MergePcSampDataParams;
+#define CUptiUtil_MergePcSampDataParamsSize CUPTI_UTIL_STRUCT_SIZE(CUptiUtil_MergePcSampDataParams, numMergedBuffer)
+
+/**
+ * \brief Merge PC sampling data by range ID.
+ *
+ * This API merges PC sampling data by range ID.
+ * It allocates memory for the merged data, fills it in,
+ * and provides the buffer pointers in the MergedPcSampDataBuffers field.
+ * The user is expected to free the merged data buffers after use.
+ *
+ * \retval CUPTI_UTIL_SUCCESS
+ * \retval CUPTI_UTIL_ERROR_INVALID_PARAMETER error out if the param struct size is invalid,
+ * if the count of buffers to merge is invalid, i.e. less than 1,
+ * or if \p PcSampDataBuffer, \p MergedPcSampDataBuffers, or \p numMergedBuffer is NULL.
+ * \retval CUPTI_UTIL_ERROR_OUT_OF_MEMORY unable to allocate memory for the merged buffer.
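+ *
+ * For example (sketch; buffers and numBuffers are assumed to hold
+ * previously collected CUpti_PCSamplingData records):
+ * \code
+ *   CUpti_PCSamplingData *merged = NULL;
+ *   size_t numMerged = 0;
+ *   CUptiUtil_MergePcSampDataParams params = { 0 };
+ *   params.size = CUptiUtil_MergePcSampDataParamsSize;
+ *   params.numberOfBuffers = numBuffers;
+ *   params.PcSampDataBuffer = buffers;
+ *   params.MergedPcSampDataBuffers = &merged;  // receives allocated output
+ *   params.numMergedBuffer = &numMerged;
+ *   CuptiUtilMergePcSampData(&params);
+ *   // use merged[0] .. merged[numMerged - 1], then free each buffer
+ * \endcode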
+ */ +CUptiUtilResult CUPTIUTILAPI CuptiUtilMergePcSampData(CUptiUtil_MergePcSampDataParams *pParams); + +/** @} */ /* END CUPTI_PCSAMPLING_UTILITY */ + +} } + +#if defined(__GNUC__) + #pragma GCC visibility pop +#endif + +#if defined(__cplusplus) +} +#endif + +#endif diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_runtime_cbid.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_runtime_cbid.h new file mode 100644 index 0000000000000000000000000000000000000000..fa608759184021e13e25144c666cd0e1a95ea7c6 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/cupti_runtime_cbid.h @@ -0,0 +1,458 @@ + +// ************************************************************************* +// Definitions of indices for API functions, unique across entire API +// ************************************************************************* + +// This file is generated. Any changes you make will be lost during the next clean build. +// CUDA public interface, for type definitions and cu* function prototypes + +typedef enum CUpti_runtime_api_trace_cbid_enum { + CUPTI_RUNTIME_TRACE_CBID_INVALID = 0, + CUPTI_RUNTIME_TRACE_CBID_cudaDriverGetVersion_v3020 = 1, + CUPTI_RUNTIME_TRACE_CBID_cudaRuntimeGetVersion_v3020 = 2, + CUPTI_RUNTIME_TRACE_CBID_cudaGetDeviceCount_v3020 = 3, + CUPTI_RUNTIME_TRACE_CBID_cudaGetDeviceProperties_v3020 = 4, + CUPTI_RUNTIME_TRACE_CBID_cudaChooseDevice_v3020 = 5, + CUPTI_RUNTIME_TRACE_CBID_cudaGetChannelDesc_v3020 = 6, + CUPTI_RUNTIME_TRACE_CBID_cudaCreateChannelDesc_v3020 = 7, + CUPTI_RUNTIME_TRACE_CBID_cudaConfigureCall_v3020 = 8, + CUPTI_RUNTIME_TRACE_CBID_cudaSetupArgument_v3020 = 9, + CUPTI_RUNTIME_TRACE_CBID_cudaGetLastError_v3020 = 10, + CUPTI_RUNTIME_TRACE_CBID_cudaPeekAtLastError_v3020 = 11, + CUPTI_RUNTIME_TRACE_CBID_cudaGetErrorString_v3020 = 12, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunch_v3020 = 13, + CUPTI_RUNTIME_TRACE_CBID_cudaFuncSetCacheConfig_v3020 = 14, + CUPTI_RUNTIME_TRACE_CBID_cudaFuncGetAttributes_v3020 = 15, + CUPTI_RUNTIME_TRACE_CBID_cudaSetDevice_v3020 = 16, + CUPTI_RUNTIME_TRACE_CBID_cudaGetDevice_v3020 = 17, + CUPTI_RUNTIME_TRACE_CBID_cudaSetValidDevices_v3020 = 18, + CUPTI_RUNTIME_TRACE_CBID_cudaSetDeviceFlags_v3020 = 19, + CUPTI_RUNTIME_TRACE_CBID_cudaMalloc_v3020 = 20, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocPitch_v3020 = 21, + CUPTI_RUNTIME_TRACE_CBID_cudaFree_v3020 = 22, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocArray_v3020 = 23, + CUPTI_RUNTIME_TRACE_CBID_cudaFreeArray_v3020 = 24, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocHost_v3020 = 25, + CUPTI_RUNTIME_TRACE_CBID_cudaFreeHost_v3020 = 26, + CUPTI_RUNTIME_TRACE_CBID_cudaHostAlloc_v3020 = 27, + CUPTI_RUNTIME_TRACE_CBID_cudaHostGetDevicePointer_v3020 = 28, + CUPTI_RUNTIME_TRACE_CBID_cudaHostGetFlags_v3020 = 29, + CUPTI_RUNTIME_TRACE_CBID_cudaMemGetInfo_v3020 = 30, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy_v3020 = 31, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2D_v3020 = 32, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToArray_v3020 = 33, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DToArray_v3020 = 34, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromArray_v3020 = 35, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DFromArray_v3020 = 36, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyArrayToArray_v3020 = 37, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DArrayToArray_v3020 = 38, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToSymbol_v3020 = 39, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromSymbol_v3020 = 40, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyAsync_v3020 = 41, + 
CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToArrayAsync_v3020 = 42, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromArrayAsync_v3020 = 43, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DAsync_v3020 = 44, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DToArrayAsync_v3020 = 45, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DFromArrayAsync_v3020 = 46, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToSymbolAsync_v3020 = 47, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromSymbolAsync_v3020 = 48, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset_v3020 = 49, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset2D_v3020 = 50, + CUPTI_RUNTIME_TRACE_CBID_cudaMemsetAsync_v3020 = 51, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset2DAsync_v3020 = 52, + CUPTI_RUNTIME_TRACE_CBID_cudaGetSymbolAddress_v3020 = 53, + CUPTI_RUNTIME_TRACE_CBID_cudaGetSymbolSize_v3020 = 54, + CUPTI_RUNTIME_TRACE_CBID_cudaBindTexture_v3020 = 55, + CUPTI_RUNTIME_TRACE_CBID_cudaBindTexture2D_v3020 = 56, + CUPTI_RUNTIME_TRACE_CBID_cudaBindTextureToArray_v3020 = 57, + CUPTI_RUNTIME_TRACE_CBID_cudaUnbindTexture_v3020 = 58, + CUPTI_RUNTIME_TRACE_CBID_cudaGetTextureAlignmentOffset_v3020 = 59, + CUPTI_RUNTIME_TRACE_CBID_cudaGetTextureReference_v3020 = 60, + CUPTI_RUNTIME_TRACE_CBID_cudaBindSurfaceToArray_v3020 = 61, + CUPTI_RUNTIME_TRACE_CBID_cudaGetSurfaceReference_v3020 = 62, + CUPTI_RUNTIME_TRACE_CBID_cudaGLSetGLDevice_v3020 = 63, + CUPTI_RUNTIME_TRACE_CBID_cudaGLRegisterBufferObject_v3020 = 64, + CUPTI_RUNTIME_TRACE_CBID_cudaGLMapBufferObject_v3020 = 65, + CUPTI_RUNTIME_TRACE_CBID_cudaGLUnmapBufferObject_v3020 = 66, + CUPTI_RUNTIME_TRACE_CBID_cudaGLUnregisterBufferObject_v3020 = 67, + CUPTI_RUNTIME_TRACE_CBID_cudaGLSetBufferObjectMapFlags_v3020 = 68, + CUPTI_RUNTIME_TRACE_CBID_cudaGLMapBufferObjectAsync_v3020 = 69, + CUPTI_RUNTIME_TRACE_CBID_cudaGLUnmapBufferObjectAsync_v3020 = 70, + CUPTI_RUNTIME_TRACE_CBID_cudaWGLGetDevice_v3020 = 71, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsGLRegisterImage_v3020 = 72, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsGLRegisterBuffer_v3020 = 73, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsUnregisterResource_v3020 = 74, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsResourceSetMapFlags_v3020 = 75, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsMapResources_v3020 = 76, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsUnmapResources_v3020 = 77, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsResourceGetMappedPointer_v3020 = 78, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsSubResourceGetMappedArray_v3020 = 79, + CUPTI_RUNTIME_TRACE_CBID_cudaVDPAUGetDevice_v3020 = 80, + CUPTI_RUNTIME_TRACE_CBID_cudaVDPAUSetVDPAUDevice_v3020 = 81, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsVDPAURegisterVideoSurface_v3020 = 82, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsVDPAURegisterOutputSurface_v3020 = 83, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D11GetDevice_v3020 = 84, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D11GetDevices_v3020 = 85, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D11SetDirect3DDevice_v3020 = 86, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsD3D11RegisterResource_v3020 = 87, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10GetDevice_v3020 = 88, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10GetDevices_v3020 = 89, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10SetDirect3DDevice_v3020 = 90, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsD3D10RegisterResource_v3020 = 91, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10RegisterResource_v3020 = 92, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10UnregisterResource_v3020 = 93, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10MapResources_v3020 = 94, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10UnmapResources_v3020 = 95, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10ResourceSetMapFlags_v3020 = 96, + 
CUPTI_RUNTIME_TRACE_CBID_cudaD3D10ResourceGetSurfaceDimensions_v3020 = 97, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10ResourceGetMappedArray_v3020 = 98, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10ResourceGetMappedPointer_v3020 = 99, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10ResourceGetMappedSize_v3020 = 100, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10ResourceGetMappedPitch_v3020 = 101, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9GetDevice_v3020 = 102, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9GetDevices_v3020 = 103, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9SetDirect3DDevice_v3020 = 104, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9GetDirect3DDevice_v3020 = 105, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsD3D9RegisterResource_v3020 = 106, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9RegisterResource_v3020 = 107, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9UnregisterResource_v3020 = 108, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9MapResources_v3020 = 109, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9UnmapResources_v3020 = 110, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9ResourceSetMapFlags_v3020 = 111, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9ResourceGetSurfaceDimensions_v3020 = 112, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9ResourceGetMappedArray_v3020 = 113, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9ResourceGetMappedPointer_v3020 = 114, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9ResourceGetMappedSize_v3020 = 115, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9ResourceGetMappedPitch_v3020 = 116, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9Begin_v3020 = 117, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9End_v3020 = 118, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9RegisterVertexBuffer_v3020 = 119, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9UnregisterVertexBuffer_v3020 = 120, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9MapVertexBuffer_v3020 = 121, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D9UnmapVertexBuffer_v3020 = 122, + CUPTI_RUNTIME_TRACE_CBID_cudaThreadExit_v3020 = 123, + CUPTI_RUNTIME_TRACE_CBID_cudaSetDoubleForDevice_v3020 = 124, + CUPTI_RUNTIME_TRACE_CBID_cudaSetDoubleForHost_v3020 = 125, + CUPTI_RUNTIME_TRACE_CBID_cudaThreadSynchronize_v3020 = 126, + CUPTI_RUNTIME_TRACE_CBID_cudaThreadGetLimit_v3020 = 127, + CUPTI_RUNTIME_TRACE_CBID_cudaThreadSetLimit_v3020 = 128, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamCreate_v3020 = 129, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamDestroy_v3020 = 130, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamSynchronize_v3020 = 131, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamQuery_v3020 = 132, + CUPTI_RUNTIME_TRACE_CBID_cudaEventCreate_v3020 = 133, + CUPTI_RUNTIME_TRACE_CBID_cudaEventCreateWithFlags_v3020 = 134, + CUPTI_RUNTIME_TRACE_CBID_cudaEventRecord_v3020 = 135, + CUPTI_RUNTIME_TRACE_CBID_cudaEventDestroy_v3020 = 136, + CUPTI_RUNTIME_TRACE_CBID_cudaEventSynchronize_v3020 = 137, + CUPTI_RUNTIME_TRACE_CBID_cudaEventQuery_v3020 = 138, + CUPTI_RUNTIME_TRACE_CBID_cudaEventElapsedTime_v3020 = 139, + CUPTI_RUNTIME_TRACE_CBID_cudaMalloc3D_v3020 = 140, + CUPTI_RUNTIME_TRACE_CBID_cudaMalloc3DArray_v3020 = 141, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset3D_v3020 = 142, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset3DAsync_v3020 = 143, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3D_v3020 = 144, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3DAsync_v3020 = 145, + CUPTI_RUNTIME_TRACE_CBID_cudaThreadSetCacheConfig_v3020 = 146, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamWaitEvent_v3020 = 147, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D11GetDirect3DDevice_v3020 = 148, + CUPTI_RUNTIME_TRACE_CBID_cudaD3D10GetDirect3DDevice_v3020 = 149, + CUPTI_RUNTIME_TRACE_CBID_cudaThreadGetCacheConfig_v3020 = 150, + CUPTI_RUNTIME_TRACE_CBID_cudaPointerGetAttributes_v4000 = 151, + CUPTI_RUNTIME_TRACE_CBID_cudaHostRegister_v4000 = 152, + 
CUPTI_RUNTIME_TRACE_CBID_cudaHostUnregister_v4000 = 153, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceCanAccessPeer_v4000 = 154, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceEnablePeerAccess_v4000 = 155, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceDisablePeerAccess_v4000 = 156, + CUPTI_RUNTIME_TRACE_CBID_cudaPeerRegister_v4000 = 157, + CUPTI_RUNTIME_TRACE_CBID_cudaPeerUnregister_v4000 = 158, + CUPTI_RUNTIME_TRACE_CBID_cudaPeerGetDevicePointer_v4000 = 159, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyPeer_v4000 = 160, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyPeerAsync_v4000 = 161, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3DPeer_v4000 = 162, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3DPeerAsync_v4000 = 163, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceReset_v3020 = 164, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceSynchronize_v3020 = 165, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetLimit_v3020 = 166, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceSetLimit_v3020 = 167, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetCacheConfig_v3020 = 168, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceSetCacheConfig_v3020 = 169, + CUPTI_RUNTIME_TRACE_CBID_cudaProfilerInitialize_v4000 = 170, + CUPTI_RUNTIME_TRACE_CBID_cudaProfilerStart_v4000 = 171, + CUPTI_RUNTIME_TRACE_CBID_cudaProfilerStop_v4000 = 172, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetByPCIBusId_v4010 = 173, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetPCIBusId_v4010 = 174, + CUPTI_RUNTIME_TRACE_CBID_cudaGLGetDevices_v4010 = 175, + CUPTI_RUNTIME_TRACE_CBID_cudaIpcGetEventHandle_v4010 = 176, + CUPTI_RUNTIME_TRACE_CBID_cudaIpcOpenEventHandle_v4010 = 177, + CUPTI_RUNTIME_TRACE_CBID_cudaIpcGetMemHandle_v4010 = 178, + CUPTI_RUNTIME_TRACE_CBID_cudaIpcOpenMemHandle_v4010 = 179, + CUPTI_RUNTIME_TRACE_CBID_cudaIpcCloseMemHandle_v4010 = 180, + CUPTI_RUNTIME_TRACE_CBID_cudaArrayGetInfo_v4010 = 181, + CUPTI_RUNTIME_TRACE_CBID_cudaFuncSetSharedMemConfig_v4020 = 182, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetSharedMemConfig_v4020 = 183, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceSetSharedMemConfig_v4020 = 184, + CUPTI_RUNTIME_TRACE_CBID_cudaCreateTextureObject_v5000 = 185, + CUPTI_RUNTIME_TRACE_CBID_cudaDestroyTextureObject_v5000 = 186, + CUPTI_RUNTIME_TRACE_CBID_cudaGetTextureObjectResourceDesc_v5000 = 187, + CUPTI_RUNTIME_TRACE_CBID_cudaGetTextureObjectTextureDesc_v5000 = 188, + CUPTI_RUNTIME_TRACE_CBID_cudaCreateSurfaceObject_v5000 = 189, + CUPTI_RUNTIME_TRACE_CBID_cudaDestroySurfaceObject_v5000 = 190, + CUPTI_RUNTIME_TRACE_CBID_cudaGetSurfaceObjectResourceDesc_v5000 = 191, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocMipmappedArray_v5000 = 192, + CUPTI_RUNTIME_TRACE_CBID_cudaGetMipmappedArrayLevel_v5000 = 193, + CUPTI_RUNTIME_TRACE_CBID_cudaFreeMipmappedArray_v5000 = 194, + CUPTI_RUNTIME_TRACE_CBID_cudaBindTextureToMipmappedArray_v5000 = 195, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsResourceGetMappedMipmappedArray_v5000 = 196, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamAddCallback_v5000 = 197, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamCreateWithFlags_v5000 = 198, + CUPTI_RUNTIME_TRACE_CBID_cudaGetTextureObjectResourceViewDesc_v5000 = 199, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetAttribute_v5000 = 200, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamDestroy_v5050 = 201, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamCreateWithPriority_v5050 = 202, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetPriority_v5050 = 203, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetFlags_v5050 = 204, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetStreamPriorityRange_v5050 = 205, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocManaged_v6000 = 206, + CUPTI_RUNTIME_TRACE_CBID_cudaOccupancyMaxActiveBlocksPerMultiprocessor_v6000 = 207, + 
CUPTI_RUNTIME_TRACE_CBID_cudaStreamAttachMemAsync_v6000 = 208, + CUPTI_RUNTIME_TRACE_CBID_cudaGetErrorName_v6050 = 209, + CUPTI_RUNTIME_TRACE_CBID_cudaOccupancyMaxActiveBlocksPerMultiprocessor_v6050 = 210, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchKernel_v7000 = 211, + CUPTI_RUNTIME_TRACE_CBID_cudaGetDeviceFlags_v7000 = 212, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunch_ptsz_v7000 = 213, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchKernel_ptsz_v7000 = 214, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy_ptds_v7000 = 215, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2D_ptds_v7000 = 216, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToArray_ptds_v7000 = 217, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DToArray_ptds_v7000 = 218, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromArray_ptds_v7000 = 219, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DFromArray_ptds_v7000 = 220, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyArrayToArray_ptds_v7000 = 221, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DArrayToArray_ptds_v7000 = 222, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToSymbol_ptds_v7000 = 223, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromSymbol_ptds_v7000 = 224, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyAsync_ptsz_v7000 = 225, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToArrayAsync_ptsz_v7000 = 226, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromArrayAsync_ptsz_v7000 = 227, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DAsync_ptsz_v7000 = 228, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DToArrayAsync_ptsz_v7000 = 229, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy2DFromArrayAsync_ptsz_v7000 = 230, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyToSymbolAsync_ptsz_v7000 = 231, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpyFromSymbolAsync_ptsz_v7000 = 232, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset_ptds_v7000 = 233, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset2D_ptds_v7000 = 234, + CUPTI_RUNTIME_TRACE_CBID_cudaMemsetAsync_ptsz_v7000 = 235, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset2DAsync_ptsz_v7000 = 236, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetPriority_ptsz_v7000 = 237, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetFlags_ptsz_v7000 = 238, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamSynchronize_ptsz_v7000 = 239, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamQuery_ptsz_v7000 = 240, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamAttachMemAsync_ptsz_v7000 = 241, + CUPTI_RUNTIME_TRACE_CBID_cudaEventRecord_ptsz_v7000 = 242, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset3D_ptds_v7000 = 243, + CUPTI_RUNTIME_TRACE_CBID_cudaMemset3DAsync_ptsz_v7000 = 244, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3D_ptds_v7000 = 245, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3DAsync_ptsz_v7000 = 246, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamWaitEvent_ptsz_v7000 = 247, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamAddCallback_ptsz_v7000 = 248, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3DPeer_ptds_v7000 = 249, + CUPTI_RUNTIME_TRACE_CBID_cudaMemcpy3DPeerAsync_ptsz_v7000 = 250, + CUPTI_RUNTIME_TRACE_CBID_cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags_v7000 = 251, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPrefetchAsync_v8000 = 252, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPrefetchAsync_ptsz_v8000 = 253, + CUPTI_RUNTIME_TRACE_CBID_cudaMemAdvise_v8000 = 254, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetP2PAttribute_v8000 = 255, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsEGLRegisterImage_v7000 = 256, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamConsumerConnect_v7000 = 257, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamConsumerDisconnect_v7000 = 258, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamConsumerAcquireFrame_v7000 = 259, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamConsumerReleaseFrame_v7000 = 260, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamProducerConnect_v7000 = 261, + 
CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamProducerDisconnect_v7000 = 262, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamProducerPresentFrame_v7000 = 263, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamProducerReturnFrame_v7000 = 264, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphicsResourceGetMappedEglFrame_v7000 = 265, + CUPTI_RUNTIME_TRACE_CBID_cudaMemRangeGetAttribute_v8000 = 266, + CUPTI_RUNTIME_TRACE_CBID_cudaMemRangeGetAttributes_v8000 = 267, + CUPTI_RUNTIME_TRACE_CBID_cudaEGLStreamConsumerConnectWithFlags_v7000 = 268, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchCooperativeKernel_v9000 = 269, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchCooperativeKernel_ptsz_v9000 = 270, + CUPTI_RUNTIME_TRACE_CBID_cudaEventCreateFromEGLSync_v9000 = 271, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchCooperativeKernelMultiDevice_v9000 = 272, + CUPTI_RUNTIME_TRACE_CBID_cudaFuncSetAttribute_v9000 = 273, + CUPTI_RUNTIME_TRACE_CBID_cudaImportExternalMemory_v10000 = 274, + CUPTI_RUNTIME_TRACE_CBID_cudaExternalMemoryGetMappedBuffer_v10000 = 275, + CUPTI_RUNTIME_TRACE_CBID_cudaExternalMemoryGetMappedMipmappedArray_v10000 = 276, + CUPTI_RUNTIME_TRACE_CBID_cudaDestroyExternalMemory_v10000 = 277, + CUPTI_RUNTIME_TRACE_CBID_cudaImportExternalSemaphore_v10000 = 278, + CUPTI_RUNTIME_TRACE_CBID_cudaSignalExternalSemaphoresAsync_v10000 = 279, + CUPTI_RUNTIME_TRACE_CBID_cudaSignalExternalSemaphoresAsync_ptsz_v10000 = 280, + CUPTI_RUNTIME_TRACE_CBID_cudaWaitExternalSemaphoresAsync_v10000 = 281, + CUPTI_RUNTIME_TRACE_CBID_cudaWaitExternalSemaphoresAsync_ptsz_v10000 = 282, + CUPTI_RUNTIME_TRACE_CBID_cudaDestroyExternalSemaphore_v10000 = 283, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchHostFunc_v10000 = 284, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchHostFunc_ptsz_v10000 = 285, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphCreate_v10000 = 286, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphKernelNodeGetParams_v10000 = 287, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphKernelNodeSetParams_v10000 = 288, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddKernelNode_v10000 = 289, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddMemcpyNode_v10000 = 290, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemcpyNodeGetParams_v10000 = 291, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemcpyNodeSetParams_v10000 = 292, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddMemsetNode_v10000 = 293, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemsetNodeGetParams_v10000 = 294, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemsetNodeSetParams_v10000 = 295, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddHostNode_v10000 = 296, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphHostNodeGetParams_v10000 = 297, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddChildGraphNode_v10000 = 298, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphChildGraphNodeGetGraph_v10000 = 299, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddEmptyNode_v10000 = 300, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphClone_v10000 = 301, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphNodeFindInClone_v10000 = 302, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphNodeGetType_v10000 = 303, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphGetRootNodes_v10000 = 304, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphNodeGetDependencies_v10000 = 305, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphNodeGetDependentNodes_v10000 = 306, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddDependencies_v10000 = 307, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphRemoveDependencies_v10000 = 308, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphDestroyNode_v10000 = 309, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphInstantiate_v10000 = 310, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphLaunch_v10000 = 311, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphLaunch_ptsz_v10000 = 312, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecDestroy_v10000 = 313, + 
CUPTI_RUNTIME_TRACE_CBID_cudaGraphDestroy_v10000 = 314, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamBeginCapture_v10000 = 315, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamBeginCapture_ptsz_v10000 = 316, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamIsCapturing_v10000 = 317, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamIsCapturing_ptsz_v10000 = 318, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamEndCapture_v10000 = 319, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamEndCapture_ptsz_v10000 = 320, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphHostNodeSetParams_v10000 = 321, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphGetNodes_v10000 = 322, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphGetEdges_v10000 = 323, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetCaptureInfo_v10010 = 324, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetCaptureInfo_ptsz_v10010 = 325, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecKernelNodeSetParams_v10010 = 326, + CUPTI_RUNTIME_TRACE_CBID_cudaThreadExchangeStreamCaptureMode_v10010 = 327, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetNvSciSyncAttributes_v10020 = 328, + CUPTI_RUNTIME_TRACE_CBID_cudaOccupancyAvailableDynamicSMemPerBlock_v10200 = 329, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamSetFlags_v10200 = 330, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamSetFlags_ptsz_v10200 = 331, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecMemcpyNodeSetParams_v10020 = 332, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecMemsetNodeSetParams_v10020 = 333, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecHostNodeSetParams_v10020 = 334, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecUpdate_v10020 = 335, + CUPTI_RUNTIME_TRACE_CBID_cudaGetFuncBySymbol_v11000 = 336, + CUPTI_RUNTIME_TRACE_CBID_cudaCtxResetPersistingL2Cache_v11000 = 337, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphKernelNodeCopyAttributes_v11000 = 338, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphKernelNodeGetAttribute_v11000 = 339, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphKernelNodeSetAttribute_v11000 = 340, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamCopyAttributes_v11000 = 341, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamCopyAttributes_ptsz_v11000 = 342, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetAttribute_v11000 = 343, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetAttribute_ptsz_v11000 = 344, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamSetAttribute_v11000 = 345, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamSetAttribute_ptsz_v11000 = 346, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetTexture1DLinearMaxWidth_v11010 = 347, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphUpload_v10000 = 348, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphUpload_ptsz_v10000 = 349, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddMemcpyNodeToSymbol_v11010 = 350, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddMemcpyNodeFromSymbol_v11010 = 351, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddMemcpyNode1D_v11010 = 352, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemcpyNodeSetParamsToSymbol_v11010 = 353, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemcpyNodeSetParamsFromSymbol_v11010 = 354, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemcpyNodeSetParams1D_v11010 = 355, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecMemcpyNodeSetParamsToSymbol_v11010 = 356, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecMemcpyNodeSetParamsFromSymbol_v11010 = 357, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecMemcpyNodeSetParams1D_v11010 = 358, + CUPTI_RUNTIME_TRACE_CBID_cudaArrayGetSparseProperties_v11010 = 359, + CUPTI_RUNTIME_TRACE_CBID_cudaMipmappedArrayGetSparseProperties_v11010 = 360, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecChildGraphNodeSetParams_v11010 = 361, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddEventRecordNode_v11010 = 362, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphEventRecordNodeGetEvent_v11010 = 363, + 
CUPTI_RUNTIME_TRACE_CBID_cudaGraphEventRecordNodeSetEvent_v11010 = 364, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddEventWaitNode_v11010 = 365, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphEventWaitNodeGetEvent_v11010 = 366, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphEventWaitNodeSetEvent_v11010 = 367, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecEventRecordNodeSetEvent_v11010 = 368, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecEventWaitNodeSetEvent_v11010 = 369, + CUPTI_RUNTIME_TRACE_CBID_cudaEventRecordWithFlags_v11010 = 370, + CUPTI_RUNTIME_TRACE_CBID_cudaEventRecordWithFlags_ptsz_v11010 = 371, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetDefaultMemPool_v11020 = 372, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocAsync_v11020 = 373, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocAsync_ptsz_v11020 = 374, + CUPTI_RUNTIME_TRACE_CBID_cudaFreeAsync_v11020 = 375, + CUPTI_RUNTIME_TRACE_CBID_cudaFreeAsync_ptsz_v11020 = 376, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolTrimTo_v11020 = 377, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolSetAttribute_v11020 = 378, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolGetAttribute_v11020 = 379, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolSetAccess_v11020 = 380, + CUPTI_RUNTIME_TRACE_CBID_cudaArrayGetPlane_v11020 = 381, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolGetAccess_v11020 = 382, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolCreate_v11020 = 383, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolDestroy_v11020 = 384, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceSetMemPool_v11020 = 385, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetMemPool_v11020 = 386, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolExportToShareableHandle_v11020 = 387, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolImportFromShareableHandle_v11020 = 388, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolExportPointer_v11020 = 389, + CUPTI_RUNTIME_TRACE_CBID_cudaMemPoolImportPointer_v11020 = 390, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocFromPoolAsync_v11020 = 391, + CUPTI_RUNTIME_TRACE_CBID_cudaMallocFromPoolAsync_ptsz_v11020 = 392, + CUPTI_RUNTIME_TRACE_CBID_cudaSignalExternalSemaphoresAsync_v2_v11020 = 393, + CUPTI_RUNTIME_TRACE_CBID_cudaSignalExternalSemaphoresAsync_v2_ptsz_v11020 = 394, + CUPTI_RUNTIME_TRACE_CBID_cudaWaitExternalSemaphoresAsync_v2_v11020 = 395, + CUPTI_RUNTIME_TRACE_CBID_cudaWaitExternalSemaphoresAsync_v2_ptsz_v11020 = 396, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddExternalSemaphoresSignalNode_v11020 = 397, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExternalSemaphoresSignalNodeGetParams_v11020 = 398, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExternalSemaphoresSignalNodeSetParams_v11020 = 399, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddExternalSemaphoresWaitNode_v11020 = 400, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExternalSemaphoresWaitNodeGetParams_v11020 = 401, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExternalSemaphoresWaitNodeSetParams_v11020 = 402, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecExternalSemaphoresSignalNodeSetParams_v11020 = 403, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecExternalSemaphoresWaitNodeSetParams_v11020 = 404, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceFlushGPUDirectRDMAWrites_v11030 = 405, + CUPTI_RUNTIME_TRACE_CBID_cudaGetDriverEntryPoint_v11030 = 406, + CUPTI_RUNTIME_TRACE_CBID_cudaGetDriverEntryPoint_ptsz_v11030 = 407, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphDebugDotPrint_v11030 = 408, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetCaptureInfo_v2_v11030 = 409, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetCaptureInfo_v2_ptsz_v11030 = 410, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamUpdateCaptureDependencies_v11030 = 411, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamUpdateCaptureDependencies_ptsz_v11030 = 412, + CUPTI_RUNTIME_TRACE_CBID_cudaUserObjectCreate_v11030 
= 413, + CUPTI_RUNTIME_TRACE_CBID_cudaUserObjectRetain_v11030 = 414, + CUPTI_RUNTIME_TRACE_CBID_cudaUserObjectRelease_v11030 = 415, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphRetainUserObject_v11030 = 416, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphReleaseUserObject_v11030 = 417, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphInstantiateWithFlags_v11040 = 418, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddMemAllocNode_v11040 = 419, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemAllocNodeGetParams_v11040 = 420, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphAddMemFreeNode_v11040 = 421, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphMemFreeNodeGetParams_v11040 = 422, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGraphMemTrim_v11040 = 423, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceGetGraphMemAttribute_v11040 = 424, + CUPTI_RUNTIME_TRACE_CBID_cudaDeviceSetGraphMemAttribute_v11040 = 425, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphNodeSetEnabled_v11060 = 426, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphNodeGetEnabled_v11060 = 427, + CUPTI_RUNTIME_TRACE_CBID_cudaArrayGetMemoryRequirements_v11060 = 428, + CUPTI_RUNTIME_TRACE_CBID_cudaMipmappedArrayGetMemoryRequirements_v11060 = 429, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchKernelExC_v11060 = 430, + CUPTI_RUNTIME_TRACE_CBID_cudaLaunchKernelExC_ptsz_v11060 = 431, + CUPTI_RUNTIME_TRACE_CBID_cudaOccupancyMaxPotentialClusterSize_v11070 = 432, + CUPTI_RUNTIME_TRACE_CBID_cudaOccupancyMaxActiveClusters_v11070 = 433, + CUPTI_RUNTIME_TRACE_CBID_cudaCreateTextureObject_v2_v11080 = 434, + CUPTI_RUNTIME_TRACE_CBID_cudaGetTextureObjectTextureDesc_v2_v11080 = 435, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphInstantiateWithParams_v12000 = 436, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphInstantiateWithParams_ptsz_v12000 = 437, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphExecGetFlags_v12000 = 438, + CUPTI_RUNTIME_TRACE_CBID_cudaGetKernel_v12000 = 439, + CUPTI_RUNTIME_TRACE_CBID_cudaGetDeviceProperties_v2_v12000 = 440, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetId_v12000 = 441, + CUPTI_RUNTIME_TRACE_CBID_cudaStreamGetId_ptsz_v12000 = 442, + CUPTI_RUNTIME_TRACE_CBID_cudaGraphInstantiate_v12000 = 443, + CUPTI_RUNTIME_TRACE_CBID_cudaInitDevice_v12000 = 444, + CUPTI_RUNTIME_TRACE_CBID_SIZE = 445, + CUPTI_RUNTIME_TRACE_CBID_FORCE_INT = 0x7fffffff +} CUpti_runtime_api_trace_cbid; + diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cuda_vdpau_interop_meta.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cuda_vdpau_interop_meta.h new file mode 100644 index 0000000000000000000000000000000000000000..88e79d1957925c4bbacd381e9461d5072de88f24 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_cupti/include/generated_cuda_vdpau_interop_meta.h @@ -0,0 +1,38 @@ +// This file is generated. Any changes you make will be lost during the next clean build. 
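(Annotation, not part of the vendored headers: a minimal sketch of how the CUpti_runtime_api_trace_cbid values above are consumed. It assumes the standard cuptiSubscribe/cuptiEnableCallback entry points and the CUpti_CallbackData layout from this package's cupti.h, and trims all error handling.)

// Sketch: subscribe to a single runtime-API callback ID (cudaLaunchKernel, cbid 211 above).
#include <cupti.h>
#include <cstdio>

static void CUPTIAPI onRuntimeApi(void *userdata, CUpti_CallbackDomain domain,
                                  CUpti_CallbackId cbid, const void *cbdata) {
    const CUpti_CallbackData *data = static_cast<const CUpti_CallbackData *>(cbdata);
    if (cbid == CUPTI_RUNTIME_TRACE_CBID_cudaLaunchKernel_v7000 &&
        data->callbackSite == CUPTI_API_ENTER) {
        std::printf("entering %s\n", data->functionName);
    }
}

void installLaunchTracer() {
    CUpti_SubscriberHandle subscriber;
    cuptiSubscribe(&subscriber, (CUpti_CallbackFunc)onRuntimeApi, nullptr);
    // Enable only this one callback ID in the runtime-API domain.
    cuptiEnableCallback(1, subscriber, CUPTI_CB_DOMAIN_RUNTIME_API,
                        CUPTI_RUNTIME_TRACE_CBID_cudaLaunchKernel_v7000);
}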
+ +// CUDA public interface, for type definitions and api function prototypes +#include "cuda_vdpau_interop.h" + +// ************************************************************************* +// Definitions of structs to hold parameters for each function +// ************************************************************************* + +// Currently used parameter trace structures +typedef struct cudaVDPAUGetDevice_v3020_params_st { + int *device; + VdpDevice vdpDevice; + VdpGetProcAddress *vdpGetProcAddress; +} cudaVDPAUGetDevice_v3020_params; + +typedef struct cudaVDPAUSetVDPAUDevice_v3020_params_st { + int device; + VdpDevice vdpDevice; + VdpGetProcAddress *vdpGetProcAddress; +} cudaVDPAUSetVDPAUDevice_v3020_params; + +typedef struct cudaGraphicsVDPAURegisterVideoSurface_v3020_params_st { + struct cudaGraphicsResource **resource; + VdpVideoSurface vdpSurface; + unsigned int flags; +} cudaGraphicsVDPAURegisterVideoSurface_v3020_params; + +typedef struct cudaGraphicsVDPAURegisterOutputSurface_v3020_params_st { + struct cudaGraphicsResource **resource; + VdpOutputSurface vdpSurface; + unsigned int flags; +} cudaGraphicsVDPAURegisterOutputSurface_v3020_params; + +// Parameter trace structures for removed functions + + +// End of parameter trace structures diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/__init__.py b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/include/__init__.py b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/include/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/include/nvrtc.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/include/nvrtc.h new file mode 100644 index 0000000000000000000000000000000000000000..5f4ab67f81ae661f17628f857c4f6d73711e7d0a --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/include/nvrtc.h @@ -0,0 +1,845 @@ +// +// NVIDIA_COPYRIGHT_BEGIN +// +// Copyright (c) 2014-2023, NVIDIA CORPORATION. All rights reserved. +// +// NVIDIA CORPORATION and its licensors retain all intellectual property +// and proprietary rights in and to this software, related documentation +// and any modifications thereto. Any use, reproduction, disclosure or +// distribution of this software and related documentation without an express +// license agreement from NVIDIA CORPORATION is strictly prohibited. +// +// NVIDIA_COPYRIGHT_END +// + +#ifndef __NVRTC_H__ +#define __NVRTC_H__ + +#ifdef __cplusplus +extern "C" { +#endif /* __cplusplus */ + +#include + + +/*************************************************************************//** + * + * \defgroup error Error Handling + * + * NVRTC defines the following enumeration type and function for API call + * error handling. + * + ****************************************************************************/ + + +/** + * \ingroup error + * \brief The enumerated type nvrtcResult defines API call result codes. + * NVRTC API functions return nvrtcResult to indicate the call + * result. 
+ */ +typedef enum { + NVRTC_SUCCESS = 0, + NVRTC_ERROR_OUT_OF_MEMORY = 1, + NVRTC_ERROR_PROGRAM_CREATION_FAILURE = 2, + NVRTC_ERROR_INVALID_INPUT = 3, + NVRTC_ERROR_INVALID_PROGRAM = 4, + NVRTC_ERROR_INVALID_OPTION = 5, + NVRTC_ERROR_COMPILATION = 6, + NVRTC_ERROR_BUILTIN_OPERATION_FAILURE = 7, + NVRTC_ERROR_NO_NAME_EXPRESSIONS_AFTER_COMPILATION = 8, + NVRTC_ERROR_NO_LOWERED_NAMES_BEFORE_COMPILATION = 9, + NVRTC_ERROR_NAME_EXPRESSION_NOT_VALID = 10, + NVRTC_ERROR_INTERNAL_ERROR = 11, + NVRTC_ERROR_TIME_FILE_WRITE_FAILED = 12 +} nvrtcResult; + + +/** + * \ingroup error + * \brief nvrtcGetErrorString is a helper function that returns a string + * describing the given nvrtcResult code, e.g., mapping NVRTC_SUCCESS to + * \c "NVRTC_SUCCESS". + * For unrecognized enumeration values, it returns + * \c "NVRTC_ERROR unknown". + * + * \param [in] result CUDA Runtime Compilation API result code. + * \return Message string for the given #nvrtcResult code. + */ +const char *nvrtcGetErrorString(nvrtcResult result); + + +/*************************************************************************//** + * + * \defgroup query General Information Query + * + * NVRTC defines the following function for general information query. + * + ****************************************************************************/ + + +/** + * \ingroup query + * \brief nvrtcVersion sets the output parameters \p major and \p minor + * with the CUDA Runtime Compilation version number. + * + * \param [out] major CUDA Runtime Compilation major version number. + * \param [out] minor CUDA Runtime Compilation minor version number. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * + */ +nvrtcResult nvrtcVersion(int *major, int *minor); + + +/** + * \ingroup query + * \brief nvrtcGetNumSupportedArchs sets the output parameter \p numArchs + * with the number of architectures supported by NVRTC. This can + * then be used to pass an array to ::nvrtcGetSupportedArchs to + * get the supported architectures. + * + * \param [out] numArchs number of supported architectures. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * + * \see ::nvrtcGetSupportedArchs + */ +nvrtcResult nvrtcGetNumSupportedArchs(int* numArchs); + + +/** + * \ingroup query + * \brief nvrtcGetSupportedArchs populates the array passed via the output parameter + * \p supportedArchs with the architectures supported by NVRTC. The array is + * sorted in ascending order. The size of the array to be passed can be + * determined using ::nvrtcGetNumSupportedArchs. + * + * \param [out] supportedArchs sorted array of supported architectures. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * + * \see ::nvrtcGetNumSupportedArchs + */ +nvrtcResult nvrtcGetSupportedArchs(int* supportedArchs); + + +/*************************************************************************//** + * + * \defgroup compilation Compilation + * + * NVRTC defines the following type and functions for actual compilation. + * + ****************************************************************************/ + + +/** + * \ingroup compilation + * \brief nvrtcProgram is the unit of compilation, and an opaque handle for + * a program. + * + * To compile a CUDA program string, an instance of nvrtcProgram must be + * created first with ::nvrtcCreateProgram, then compiled with + * ::nvrtcCompileProgram.
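(Annotation, not vendor code: the query and error-handling entry points declared above compose as in this short sketch; the NVRTC_CHECK macro name is illustrative, not part of the API.)

// Sketch: query the NVRTC version and supported architectures, routing
// every failure through nvrtcGetErrorString.
#include <nvrtc.h>
#include <cstdio>
#include <vector>

#define NVRTC_CHECK(call)                                              \
    do {                                                               \
        nvrtcResult r_ = (call);                                       \
        if (r_ != NVRTC_SUCCESS)                                       \
            std::fprintf(stderr, "%s\n", nvrtcGetErrorString(r_));     \
    } while (0)

void printNvrtcInfo() {
    int major = 0, minor = 0;
    NVRTC_CHECK(nvrtcVersion(&major, &minor));
    int numArchs = 0;
    NVRTC_CHECK(nvrtcGetNumSupportedArchs(&numArchs));
    std::vector<int> archs(numArchs);                 // sized via the count query
    NVRTC_CHECK(nvrtcGetSupportedArchs(archs.data()));
    std::printf("NVRTC %d.%d, %d supported archs, newest sm_%d\n",
                major, minor, numArchs, archs.empty() ? 0 : archs.back());
}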
+ */ +typedef struct _nvrtcProgram *nvrtcProgram; + + +/** + * \ingroup compilation + * \brief nvrtcCreateProgram creates an instance of nvrtcProgram with the + * given input parameters, and sets the output parameter \p prog with + * it. + * + * \param [out] prog CUDA Runtime Compilation program. + * \param [in] src CUDA program source. + * \param [in] name CUDA program name.\n + * \p name can be \c NULL; \c "default_program" is + * used when \p name is \c NULL or "". + * \param [in] numHeaders Number of headers used.\n + * \p numHeaders must be greater than or equal to 0. + * \param [in] headers Sources of the headers.\n + * \p headers can be \c NULL when \p numHeaders is + * 0. + * \param [in] includeNames Name of each header by which they can be + * included in the CUDA program source.\n + * \p includeNames can be \c NULL when \p numHeaders + * is 0. These headers must be included with the exact + * names specified here. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_OUT_OF_MEMORY \endlink + * - \link #nvrtcResult NVRTC_ERROR_PROGRAM_CREATION_FAILURE \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcDestroyProgram + */ +nvrtcResult nvrtcCreateProgram(nvrtcProgram *prog, + const char *src, + const char *name, + int numHeaders, + const char * const *headers, + const char * const *includeNames); + + +/** + * \ingroup compilation + * \brief nvrtcDestroyProgram destroys the given program. + * + * \param [in] prog CUDA Runtime Compilation program. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcCreateProgram + */ +nvrtcResult nvrtcDestroyProgram(nvrtcProgram *prog); + + +/** + * \ingroup compilation + * \brief nvrtcCompileProgram compiles the given program. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [in] numOptions Number of compiler options passed. + * \param [in] options Compiler options in the form of C string array.\n + * \p options can be \c NULL when \p numOptions is 0. + * + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_OUT_OF_MEMORY \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_OPTION \endlink + * - \link #nvrtcResult NVRTC_ERROR_COMPILATION \endlink + * - \link #nvrtcResult NVRTC_ERROR_BUILTIN_OPERATION_FAILURE \endlink + * - \link #nvrtcResult NVRTC_ERROR_TIME_FILE_WRITE_FAILED \endlink + * + * It supports compile options listed in \ref options. + */ +nvrtcResult nvrtcCompileProgram(nvrtcProgram prog, + int numOptions, const char * const *options); + + +/** + * \ingroup compilation + * \brief nvrtcGetPTXSize sets the value of \p ptxSizeRet with the size of the PTX + * generated by the previous compilation of \p prog (including the + * trailing \c NULL). + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] ptxSizeRet Size of the generated PTX (including the trailing + * \c NULL). 
+ * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetPTX + */ +nvrtcResult nvrtcGetPTXSize(nvrtcProgram prog, size_t *ptxSizeRet); + + +/** + * \ingroup compilation + * \brief nvrtcGetPTX stores the PTX generated by the previous compilation + * of \p prog in the memory pointed to by \p ptx. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] ptx Compiled result. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetPTXSize + */ +nvrtcResult nvrtcGetPTX(nvrtcProgram prog, char *ptx); + + +/** + * \ingroup compilation + * \brief nvrtcGetCUBINSize sets the value of \p cubinSizeRet with the size of the cubin + * generated by the previous compilation of \p prog. The value of + * \p cubinSizeRet is set to 0 if the value specified to \c -arch is a + * virtual architecture instead of an actual architecture. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] cubinSizeRet Size of the generated cubin. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetCUBIN + */ +nvrtcResult nvrtcGetCUBINSize(nvrtcProgram prog, size_t *cubinSizeRet); + + +/** + * \ingroup compilation + * \brief nvrtcGetCUBIN stores the cubin generated by the previous compilation + * of \p prog in the memory pointed to by \p cubin. No cubin is available + * if the value specified to \c -arch is a virtual architecture instead + * of an actual architecture. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] cubin Compiled and assembled result. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetCUBINSize + */ +nvrtcResult nvrtcGetCUBIN(nvrtcProgram prog, char *cubin); + + +#if defined(_WIN32) +# define __DEPRECATED__(msg) __declspec(deprecated(msg)) +#elif (defined(__GNUC__) && (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 5 && !defined(__clang__)))) +# define __DEPRECATED__(msg) __attribute__((deprecated)) +#elif (defined(__GNUC__)) +# define __DEPRECATED__(msg) __attribute__((deprecated(msg))) +#else +# define __DEPRECATED__(msg) +#endif + +/** + * \ingroup compilation + * \brief + * DEPRECATION NOTICE: This function will be removed in a future release. Please use + * nvrtcGetLTOIRSize (and nvrtcGetLTOIR) instead. + */ +__DEPRECATED__("This function will be removed in a future release. Please use nvrtcGetLTOIRSize instead") +nvrtcResult nvrtcGetNVVMSize(nvrtcProgram prog, size_t *nvvmSizeRet); + +/** + * \ingroup compilation + * \brief + * DEPRECATION NOTICE: This function will be removed in a future release. Please use + * nvrtcGetLTOIR (and nvrtcGetLTOIRSize) instead. + */ +__DEPRECATED__("This function will be removed in a future release. Please use nvrtcGetLTOIR instead") +nvrtcResult nvrtcGetNVVM(nvrtcProgram prog, char *nvvm); + +#undef __DEPRECATED__ + +/** + * \ingroup compilation + * \brief nvrtcGetLTOIRSize sets the value of \p LTOIRSizeRet with the size of the LTO IR + * generated by the previous compilation of \p prog.
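(Annotation, not vendor code: a sketch of the usual create/compile/fetch-PTX round trip using the functions declared above, plus the log getters declared further below in this header; the kernel name and option strings are illustrative.)

// Sketch: compile a CUDA source string to PTX; returns "" on failure.
#include <nvrtc.h>
#include <string>

std::string compileToPtx(const char *src) {
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, src, "example.cu", 0, nullptr, nullptr);
    const char *opts[] = {"--gpu-architecture=compute_70", "--std=c++17"};
    nvrtcResult res = nvrtcCompileProgram(prog, 2, opts);
    // The log can hold warnings even when compilation succeeds.
    size_t logSize = 0;
    nvrtcGetProgramLogSize(prog, &logSize);
    std::string log(logSize, '\0');
    nvrtcGetProgramLog(prog, &log[0]);
    std::string ptx;
    if (res == NVRTC_SUCCESS) {
        size_t ptxSize = 0;
        nvrtcGetPTXSize(prog, &ptxSize);  // size includes the trailing NUL
        ptx.resize(ptxSize);
        nvrtcGetPTX(prog, &ptx[0]);
    }
    nvrtcDestroyProgram(&prog);
    return ptx;
}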
The value of + * \p LTOIRSizeRet is set to 0 if the program was not compiled with + * \c -dlto. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] LTOIRSizeRet Size of the generated LTO IR. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetLTOIR + */ +nvrtcResult nvrtcGetLTOIRSize(nvrtcProgram prog, size_t *LTOIRSizeRet); + + +/** + * \ingroup compilation + * \brief nvrtcGetLTOIR stores the LTO IR generated by the previous compilation + * of \p prog in the memory pointed to by \p LTOIR. No LTO IR is available + * if the program was compiled without \c -dlto. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] LTOIR Compiled result. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetLTOIRSize + */ +nvrtcResult nvrtcGetLTOIR(nvrtcProgram prog, char *LTOIR); + + +/** + * \ingroup compilation + * \brief nvrtcGetOptiXIRSize sets the value of \p optixirSizeRet with the size of the OptiX IR + * generated by the previous compilation of \p prog. The value of + * \p optixirSizeRet is set to 0 if the program was compiled with + * options incompatible with OptiX IR generation. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] optixirSizeRet Size of the generated OptiX IR. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetOptiXIR + */ +nvrtcResult nvrtcGetOptiXIRSize(nvrtcProgram prog, size_t *optixirSizeRet); + + +/** + * \ingroup compilation + * \brief nvrtcGetOptiXIR stores the OptiX IR generated by the previous compilation + * of \p prog in the memory pointed to by \p optixir. No OptiX IR is available + * if the program was compiled with options incompatible with OptiX IR generation. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] optixir Compiled result. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetOptiXIRSize + */ +nvrtcResult nvrtcGetOptiXIR(nvrtcProgram prog, char *optixir); + +/** + * \ingroup compilation + * \brief nvrtcGetProgramLogSize sets \p logSizeRet with the size of the + * log generated by the previous compilation of \p prog (including the + * trailing \c NULL). + * + * Note that a compilation log may be generated with warnings and informative + * messages, even when the compilation of \p prog succeeds. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [out] logSizeRet Size of the compilation log + * (including the trailing \c NULL). + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetProgramLog + */ +nvrtcResult nvrtcGetProgramLogSize(nvrtcProgram prog, size_t *logSizeRet); + + +/** + * \ingroup compilation + * \brief nvrtcGetProgramLog stores the log generated by the previous + * compilation of \p prog in the memory pointed to by \p log. + * + * \param [in] prog CUDA Runtime Compilation program.
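(Annotation, not vendor code: a sketch of retrieving LTO IR, which per the documentation above is only populated when the program was compiled with -dlto; the option strings are illustrative.)

// Sketch: compile for link-time optimization and fetch the LTO IR blob.
#include <nvrtc.h>
#include <vector>

std::vector<char> compileToLtoIr(nvrtcProgram prog) {
    const char *opts[] = {"-dlto", "--gpu-architecture=compute_80"};
    std::vector<char> ltoir;
    if (nvrtcCompileProgram(prog, 2, opts) == NVRTC_SUCCESS) {
        size_t size = 0;
        nvrtcGetLTOIRSize(prog, &size);  // 0 unless compiled with -dlto
        ltoir.resize(size);
        nvrtcGetLTOIR(prog, ltoir.data());
    }
    return ltoir;
}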
+ * \param [out] log Compilation log. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_INPUT \endlink + * - \link #nvrtcResult NVRTC_ERROR_INVALID_PROGRAM \endlink + * + * \see ::nvrtcGetProgramLogSize + */ +nvrtcResult nvrtcGetProgramLog(nvrtcProgram prog, char *log); + + +/** + * \ingroup compilation + * \brief nvrtcAddNameExpression notes the given name expression + * denoting the address of a __global__ function + * or __device__/__constant__ variable. + * + * The identical name expression string must be provided on a subsequent + * call to nvrtcGetLoweredName to extract the lowered name. + * \param [in] prog CUDA Runtime Compilation program. + * \param [in] name_expression constant expression denoting the address of + * a __global__ function or __device__/__constant__ variable. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_NO_NAME_EXPRESSIONS_AFTER_COMPILATION \endlink + * + * \see ::nvrtcGetLoweredName + */ +nvrtcResult nvrtcAddNameExpression(nvrtcProgram prog, + const char * const name_expression); + +/** + * \ingroup compilation + * \brief nvrtcGetLoweredName extracts the lowered (mangled) name + * for a __global__ function or __device__/__constant__ variable, + * and updates *lowered_name to point to it. The memory containing + * the name is released when the NVRTC program is destroyed by + * nvrtcDestroyProgram. + * The identical name expression must have been previously + * provided to nvrtcAddNameExpression. + * + * \param [in] prog CUDA Runtime Compilation program. + * \param [in] name_expression constant expression denoting the address of + * a __global__ function or __device__/__constant__ variable. + * \param [out] lowered_name initialized by the function to point to a + * C string containing the lowered (mangled) + * name corresponding to the provided name expression. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_NO_LOWERED_NAMES_BEFORE_COMPILATION \endlink + * - \link #nvrtcResult NVRTC_ERROR_NAME_EXPRESSION_NOT_VALID \endlink + * + * \see ::nvrtcAddNameExpression + */ +nvrtcResult nvrtcGetLoweredName(nvrtcProgram prog, + const char *const name_expression, + const char** lowered_name); + + +/** + * \defgroup options Supported Compile Options + * + * NVRTC supports the compile options below. + * Option names with two preceding dashes (\c --) are long option names and + * option names with one preceding dash (\c -) are short option names. + * Short option names can be used instead of long option names. + * When a compile option takes an argument, an assignment operator (\c =) + * is used to separate the compile option argument from the compile option + * name, e.g., \c "--gpu-architecture=compute_60". + * Alternatively, the compile option name and the argument can be specified in + * separate strings without an assignment operator, e.g., + * \c "--gpu-architecture" \c "compute_60". + * Single-character short option names, such as \c -D, \c -U, and \c -I, do + * not require an assignment operator, and the compile option name and the + * argument can be present in the same string with or without spaces between + * them. + * For instance, \c "-D=\<def\>", \c "-D\<def\>", and \c "-D \<def\>" are all + * supported.
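(Annotation, not vendor code: name expressions must be registered before compilation and resolved afterwards, roughly as follows; the "kernel&lt;float&gt;" expression in the comment is illustrative.)

// Sketch: recover the mangled name of a templated __global__ instantiation.
#include <nvrtc.h>
#include <string>

std::string loweredNameOf(nvrtcProgram prog, const char *expr,
                          int numOpts, const char *const *opts) {
    nvrtcAddNameExpression(prog, expr);  // e.g. "kernel<float>", before compiling
    if (nvrtcCompileProgram(prog, numOpts, opts) != NVRTC_SUCCESS)
        return {};
    const char *lowered = nullptr;
    nvrtcGetLoweredName(prog, expr, &lowered);
    // Copy out: the storage is owned by prog and freed by nvrtcDestroyProgram.
    return lowered ? std::string(lowered) : std::string();
}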
+ * + * The valid compiler options are: + * + * - Compilation targets + * - \c --gpu-architecture=\<arch\> (\c -arch)\n + * Specify the name of the class of GPU architectures for which the + * input must be compiled.\n + * - Valid \<arch\>s: + * - \c compute_50 + * - \c compute_52 + * - \c compute_53 + * - \c compute_60 + * - \c compute_61 + * - \c compute_62 + * - \c compute_70 + * - \c compute_72 + * - \c compute_75 + * - \c compute_80 + * - \c compute_87 + * - \c compute_89 + * - \c compute_90 + * - \c compute_90a + * - \c sm_50 + * - \c sm_52 + * - \c sm_53 + * - \c sm_60 + * - \c sm_61 + * - \c sm_62 + * - \c sm_70 + * - \c sm_72 + * - \c sm_75 + * - \c sm_80 + * - \c sm_87 + * - \c sm_89 + * - \c sm_90 + * - \c sm_90a + * - Default: \c compute_52 + * - Separate compilation / whole-program compilation + * - \c --device-c (\c -dc)\n + * Generate relocatable code that can be linked with other relocatable + * device code. It is equivalent to \c --relocatable-device-code=true. + * - \c --device-w (\c -dw)\n + * Generate non-relocatable code. It is equivalent to + * \c --relocatable-device-code=false. + * - \c --relocatable-device-code={true|false} (\c -rdc)\n + * Enable (disable) the generation of relocatable device code. + * - Default: \c false + * - \c --extensible-whole-program (\c -ewp)\n + * Do extensible whole program compilation of device code. + * - Default: \c false + * - Debugging support + * - \c --device-debug (\c -G)\n + * Generate debug information. If --dopt is not specified, + * then turns off all optimizations. + * - \c --generate-line-info (\c -lineinfo)\n + * Generate line-number information. + * - Code generation + * - \c --dopt on (\c -dopt)\n + * - \c --dopt=on \n + * Enable device code optimization. When specified along with '-G', enables + * limited debug information generation for optimized device code (currently, + * only line number information). + * When '-G' is not specified, '-dopt=on' is implicit. + * - \c --ptxas-options \<options\> (\c -Xptxas)\n + * - \c --ptxas-options=\<options\> \n + * Specify options directly to ptxas, the PTX optimizing assembler. + * - \c --maxrregcount=\<N\> (\c -maxrregcount)\n + * Specify the maximum number of registers that GPU functions can use. + * Up to a function-specific limit, a higher value will generally + * increase the performance of individual GPU threads that execute this + * function. However, because thread registers are allocated from a + * global register pool on each GPU, a higher value of this option will + * also reduce the maximum thread block size, thereby reducing the amount + * of thread parallelism. Hence, a good maxrregcount value is the result + * of a trade-off. If this option is not specified, then no maximum is + * assumed. A value less than the minimum number of registers required by + * the ABI will be bumped up by the compiler to the ABI minimum limit. + * - \c --ftz={true|false} (\c -ftz)\n + * When performing single-precision floating-point operations, flush + * denormal values to zero or preserve denormal values. + * \c --use_fast_math implies \c --ftz=true. + * - Default: \c false + * - \c --prec-sqrt={true|false} (\c -prec-sqrt)\n + * For single-precision floating-point square root, use IEEE + * round-to-nearest mode or use a faster approximation. + * \c --use_fast_math implies \c --prec-sqrt=false. + * - Default: \c true + * - \c --prec-div={true|false} (\c -prec-div)\n + * For single-precision floating-point division and reciprocals, use IEEE + * round-to-nearest mode or use a faster approximation. + * \c --use_fast_math implies \c --prec-div=false.
+ * - Default: \c true + * - \c --fmad={true|false} (\c -fmad)\n + * Enables (disables) the contraction of floating-point multiplies and + * adds/subtracts into floating-point multiply-add operations (FMAD, + * FFMA, or DFMA). \c --use_fast_math implies \c --fmad=true. + * - Default: \c true + * - \c --use_fast_math (\c -use_fast_math)\n + * Make use of fast math operations. + * \c --use_fast_math implies \c --ftz=true \c --prec-div=false + * \c --prec-sqrt=false \c --fmad=true. + * - \c --extra-device-vectorization (\c -extra-device-vectorization)\n + * Enables more aggressive device code vectorization in the NVVM optimizer. + * - \c --modify-stack-limit={true|false} (\c -modify-stack-limit)\n + * On Linux, during compilation, use \c setrlimit() to increase stack size + * to maximum allowed. The limit is reset to the previous value at the + * end of compilation. + * Note: \c setrlimit() changes the value for the entire process. + * - Default: \c true + * - \c --dlink-time-opt (\c -dlto)\n + * Generate intermediate code for later link-time optimization. + * It implies \c -rdc=true. + * Note: when this option is used the nvrtcGetLTOIR API should be used, + * as PTX or Cubin will not be generated. + * - \c --gen-opt-lto (\c -gen-opt-lto)\n + * Run the optimizer passes before generating the LTO IR. + * - \c --optix-ir (\c -optix-ir)\n + * Generate OptiX IR. The OptiX IR is only intended for consumption by OptiX + * through appropriate APIs. This feature is not supported with + * link-time-optimization (\c -dlto). + * Note: when this option is used the nvrtcGetOptiXIR API should be used, + * as PTX or Cubin will not be generated. + * - Preprocessing + * - \c --define-macro=\<def\> (\c -D)\n + * \c \<def\> can be either \c \<name\> or \c \<name=definition\>. + * - \c \<name\> \n + * Predefine \c \<name\> as a macro with definition \c 1. + * - \c \<name\>=\<definition\> \n + * The contents of \c \<definition\> are tokenized and preprocessed + * as if they appeared during translation phase three in a \c \#define + * directive. In particular, the definition will be truncated by + * embedded new line characters. + * - \c --undefine-macro=\<def\> (\c -U)\n + * Cancel any previous definition of \c \<def\>. + * - \c --include-path=\<dir\> (\c -I)\n + * Add the directory \c \<dir\> to the list of directories to be + * searched for headers. These paths are searched after the list of + * headers given to ::nvrtcCreateProgram. + * - \c --pre-include=\<header\> (\c -include)\n + * Preinclude \c \<header\> during preprocessing. + * - \c --no-source-include (\c -no-source-include)\n + * The preprocessor by default adds the directory of each input source + * to the include path. This option disables this feature and only + * considers the path specified explicitly. + * - Language Dialect + * - \c --std={c++03|c++11|c++14|c++17|c++20} + * (\c -std={c++03|c++11|c++14|c++17|c++20})\n + * Set language dialect to C++03, C++11, C++14, C++17 or C++20 + * - Default: \c c++17 + * - \c --builtin-move-forward={true|false} (\c -builtin-move-forward)\n + * Provide builtin definitions of \c std::move and \c std::forward, + * when C++11 or later language dialect is selected. + * - Default: \c true + * - \c --builtin-initializer-list={true|false} + * (\c -builtin-initializer-list)\n + * Provide builtin definitions of \c std::initializer_list class and + * member functions when C++11 or later language dialect is selected. + * - Default: \c true + * - Misc. + * - \c --disable-warnings (\c -w)\n + * Inhibit all warning messages. + * - \c --restrict (\c -restrict)\n + * Programmer assertion that all kernel pointer parameters are restrict + * pointers.
+ * - \c --device-as-default-execution-space + * (\c -default-device)\n + * Treat entities with no execution space annotation as \c __device__ + * entities. + * - \c --device-int128 (\c -device-int128)\n + * Allow the \c __int128 type in device code. Also causes the macro \c __CUDACC_RTC_INT128__ + * to be defined. + * - \c --optimization-info=\<kind\> (\c -opt-info)\n + * Provide optimization reports for the specified kind of optimization. + * The following kind tags are supported: + * - \c inline : emit a remark when a function is inlined. + * - \c --version-ident={true|false} (\c -dQ)\n + * Embed the used compiler's version info into the generated PTX/CUBIN + * - Default: \c false + * - \c --display-error-number (\c -err-no)\n + * Display diagnostic number for warning messages. (Default) + * - \c --no-display-error-number (\c -no-err-no)\n + * Disables the display of a diagnostic number for warning messages. + * - \c --diag-error=\<error-number\>,... (\c -diag-error)\n + * Emit error for specified diagnostic message number(s). Message numbers can be separated by commas. + * - \c --diag-suppress=\<error-number\>,... (\c -diag-suppress)\n + * Suppress specified diagnostic message number(s). Message numbers can be separated by commas. + * - \c --diag-warn=\<error-number\>,... (\c -diag-warn)\n + * Emit warning for specified diagnostic message number(s). Message numbers can be separated by commas. + * - \c --brief-diagnostics={true|false} (\c -brief-diag)\n + * This option enables or disables showing the source line and column info + * in a diagnostic. + * With \c --brief-diagnostics=true, the source line and column info is not shown. + * - Default: \c false + * - \c --time=\<file name\> (\c -time)\n + * Generate a comma-separated value table with the time taken by each compilation + * phase, and append it at the end of the file given as the option argument. + * If the file does not exist, the column headings are generated in the first row + * of the table. If the file name is '-', the timing data is written to the compilation log. + * + */ + + +#ifdef __cplusplus +} +#endif /* __cplusplus */ + + +/* The utility function 'nvrtcGetTypeName' is not available by default. Define + the macro 'NVRTC_GET_TYPE_NAME' to a non-zero value to make it available. +*/ + +#if NVRTC_GET_TYPE_NAME || __DOXYGEN_ONLY__ + +#if NVRTC_USE_CXXABI || __clang__ || __GNUC__ || __DOXYGEN_ONLY__ +#include <cxxabi.h> +#include <cstdlib> + +#elif defined(_WIN32) +#include <Windows.h> +#include <DbgHelp.h> +#endif /* NVRTC_USE_CXXABI || __clang__ || __GNUC__ */ + + +#include <string> +#include <typeinfo> + +template <typename T> struct __nvrtcGetTypeName_helper_t { }; + +/*************************************************************************//** + * + * \defgroup hosthelper Host Helper + * + * NVRTC defines the following functions for easier interaction with host code. + * + ****************************************************************************/ + +/** + * \ingroup hosthelper + * \brief nvrtcGetTypeName stores the source level name of a type in the given + * std::string location. + * + * This function is only provided when the macro NVRTC_GET_TYPE_NAME is + * defined with a non-zero value. It uses abi::__cxa_demangle or UnDecorateSymbolName + * function calls to extract the type name, when using gcc/clang or cl.exe compilers, + * respectively. If the name extraction fails, it will return NVRTC_ERROR_INTERNAL_ERROR, + * otherwise *result is initialized with the extracted name. + * + * Windows-specific notes: + * - nvrtcGetTypeName() is not multi-thread safe because it calls UnDecorateSymbolName(), + * which is not multi-thread safe.
+ * - The returned string may contain Microsoft-specific keywords such as __ptr64 and __cdecl. + * + * \param [in] tinfo: reference to object of type std::type_info for a given type. + * \param [in] result: pointer to std::string in which to store the type name. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INTERNAL_ERROR \endlink + * + */ +inline nvrtcResult nvrtcGetTypeName(const std::type_info &tinfo, std::string *result) +{ +#if NVRTC_USE_CXXABI || __clang__ || __GNUC__ + const char *name = tinfo.name(); + int status; + char *undecorated_name = abi::__cxa_demangle(name, 0, 0, &status); + if (status == 0) { + *result = undecorated_name; + free(undecorated_name); + return NVRTC_SUCCESS; + } +#elif defined(_WIN32) + const char *name = tinfo.raw_name(); + if (!name || *name != '.') { + return NVRTC_ERROR_INTERNAL_ERROR; + } + char undecorated_name[4096]; + //name+1 skips over the '.' prefix + if(UnDecorateSymbolName(name+1, undecorated_name, + sizeof(undecorated_name) / sizeof(*undecorated_name), + //note: doesn't seem to work correctly without UNDNAME_NO_ARGUMENTS. + UNDNAME_NO_ARGUMENTS | UNDNAME_NAME_ONLY ) ) { + *result = undecorated_name; + return NVRTC_SUCCESS; + } +#endif /* NVRTC_USE_CXXABI || __clang__ || __GNUC__ */ + + return NVRTC_ERROR_INTERNAL_ERROR; +} + +/** + * \ingroup hosthelper + * \brief nvrtcGetTypeName stores the source level name of the template type argument + * T in the given std::string location. + * + * This function is only provided when the macro NVRTC_GET_TYPE_NAME is + * defined with a non-zero value. It uses abi::__cxa_demangle or UnDecorateSymbolName + * function calls to extract the type name, when using gcc/clang or cl.exe compilers, + * respectively. If the name extraction fails, it will return NVRTC_ERROR_INTERNAL_ERROR, + * otherwise *result is initialized with the extracted name. + * + * Windows-specific notes: + * - nvrtcGetTypeName() is not multi-thread safe because it calls UnDecorateSymbolName(), + * which is not multi-thread safe. + * - The returned string may contain Microsoft-specific keywords such as __ptr64 and __cdecl. + * + * \param [in] result: pointer to std::string in which to store the type name. + * \return + * - \link #nvrtcResult NVRTC_SUCCESS \endlink + * - \link #nvrtcResult NVRTC_ERROR_INTERNAL_ERROR \endlink + * + */ + +template <typename T> +nvrtcResult nvrtcGetTypeName(std::string *result) +{ + nvrtcResult res = nvrtcGetTypeName(typeid(__nvrtcGetTypeName_helper_t<T>), + result); + if (res != NVRTC_SUCCESS) + return res; + + std::string repr = *result; + std::size_t idx = repr.find("__nvrtcGetTypeName_helper_t"); + idx = (idx != std::string::npos) ? 
repr.find("<", idx) : idx; + std::size_t last_idx = repr.find_last_of('>'); + if (idx == std::string::npos || last_idx == std::string::npos) { + return NVRTC_ERROR_INTERNAL_ERROR; + } + ++idx; + *result = repr.substr(idx, last_idx - idx); + return NVRTC_SUCCESS; +} + +#endif /* NVRTC_GET_TYPE_NAME */ + +#endif /* __NVRTC_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/__init__.py b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/__pycache__/__init__.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..b1e5f47aa33daec632d6ba8fa95cfa1a3c4764ff Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib/__pycache__/__init__.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/channel_descriptor.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/channel_descriptor.h new file mode 100644 index 0000000000000000000000000000000000000000..c6f039db8effce996015f901562009ebe976d832 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/channel_descriptor.h @@ -0,0 +1,588 @@ +/* + * Copyright 1993-2012 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 
12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__CHANNEL_DESCRIPTOR_H__) +#define __CHANNEL_DESCRIPTOR_H__ + +#if defined(__cplusplus) + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +/** + * \addtogroup CUDART_HIGHLEVEL + * + * @{ + */ + +/** + * \brief \hl Returns a channel descriptor using the specified format + * + * Returns a channel descriptor with format \p f and number of bits of each + * component \p x, \p y, \p z, and \p w. The ::cudaChannelFormatDesc is + * defined as: + * \code + struct cudaChannelFormatDesc { + int x, y, z, w; + enum cudaChannelFormatKind f; + }; + * \endcode + * + * where ::cudaChannelFormatKind is one of ::cudaChannelFormatKindSigned, + * ::cudaChannelFormatKindUnsigned, cudaChannelFormatKindFloat, + * ::cudaChannelFormatKindSignedNormalized8X1, ::cudaChannelFormatKindSignedNormalized8X2, + * ::cudaChannelFormatKindSignedNormalized8X4, + * ::cudaChannelFormatKindUnsignedNormalized8X1, ::cudaChannelFormatKindUnsignedNormalized8X2, + * ::cudaChannelFormatKindUnsignedNormalized8X4, + * ::cudaChannelFormatKindSignedNormalized16X1, ::cudaChannelFormatKindSignedNormalized16X2, + * ::cudaChannelFormatKindSignedNormalized16X4, + * ::cudaChannelFormatKindUnsignedNormalized16X1, ::cudaChannelFormatKindUnsignedNormalized16X2, + * ::cudaChannelFormatKindUnsignedNormalized16X4 + * or ::cudaChannelFormatKindNV12. + * + * The format is specified by the template specialization. + * + * The template function specializes for the following scalar types: + * char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, and float. + * The template function specializes for the following vector types: + * char{1|2|4}, uchar{1|2|4}, short{1|2|4}, ushort{1|2|4}, int{1|2|4}, uint{1|2|4}, long{1|2|4}, ulong{1|2|4}, float{1|2|4}. + * The template function specializes for following cudaChannelFormatKind enum values: + * ::cudaChannelFormatKind{Uns|S}ignedNormalized{8|16}X{1|2|4}, and ::cudaChannelFormatKindNV12. 
+ * + * Invoking the function on a type without a specialization defaults to creating a channel format of kind ::cudaChannelFormatKindNone. + * + * \return + * Channel descriptor with format \p f + * + * \sa \ref ::cudaCreateChannelDesc(int,int,int,int,cudaChannelFormatKind) "cudaCreateChannelDesc (Low level)", + * ::cudaGetChannelDesc, + */ +template<class T> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(0, 0, 0, 0, cudaChannelFormatKindNone); +} + +static __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDescHalf(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindFloat); +} + +static __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDescHalf1(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindFloat); +} + +static __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDescHalf2(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindFloat); +} + +static __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDescHalf4(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindFloat); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<char>(void) +{ + int e = (int)sizeof(char) * 8; + +#if defined(_CHAR_UNSIGNED) || defined(__CHAR_UNSIGNED__) + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +#else /* _CHAR_UNSIGNED || __CHAR_UNSIGNED__ */ + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +#endif /* _CHAR_UNSIGNED || __CHAR_UNSIGNED__ */ +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<signed char>(void) +{ + int e = (int)sizeof(signed char) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<unsigned char>(void) +{ + int e = (int)sizeof(unsigned char) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<char1>(void) +{ + int e = (int)sizeof(signed char) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<uchar1>(void) +{ + int e = (int)sizeof(unsigned char) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<char2>(void) +{ + int e = (int)sizeof(signed char) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<uchar2>(void) +{ + int e = (int)sizeof(unsigned char) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<char4>(void) +{ + int e = (int)sizeof(signed char) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<uchar4>(void) +{ + int e = (int)sizeof(unsigned char) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<short>(void) +{ + int e = (int)sizeof(short) * 8; + + return 
cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(short) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(short) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(short) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned short) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(int) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned int) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(int) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned int) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(int) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned int) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(int) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned int) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindUnsigned); +} + +#if !defined(__LP64__) + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(long) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned long) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); 
+} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(long) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned long) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(long) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned long) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindUnsigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(long) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindSigned); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(unsigned long) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindUnsigned); +} + +#endif /* !__LP64__ */ + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(float) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindFloat); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(float) * 8; + + return cudaCreateChannelDesc(e, 0, 0, 0, cudaChannelFormatKindFloat); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(float) * 8; + + return cudaCreateChannelDesc(e, e, 0, 0, cudaChannelFormatKindFloat); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + int e = (int)sizeof(float) * 8; + + return cudaCreateChannelDesc(e, e, e, e, cudaChannelFormatKindFloat); +} + +static __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDescNV12(void) +{ + int e = (int)sizeof(char) * 8; + + return cudaCreateChannelDesc(e, e, e, 0, cudaChannelFormatKindNV12); +} + +template __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(0, 0, 0, 0, cudaChannelFormatKindNone); +} + +/* Signed 8-bit normalized integer formats */ +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(8, 0, 0, 0, cudaChannelFormatKindSignedNormalized8X1); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(8, 8, 0, 0, cudaChannelFormatKindSignedNormalized8X2); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindSignedNormalized8X4); +} + +/* Unsigned 8-bit normalized integer formats */ +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(8, 0, 0, 0, cudaChannelFormatKindUnsignedNormalized8X1); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(8, 8, 0, 0, cudaChannelFormatKindUnsignedNormalized8X2); +} + +template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc(void) +{ + return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedNormalized8X4); +} 
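Taken together, these specializations map a texel type to its component widths and format kind at compile time. A minimal host-side usage sketch, assuming only the documented runtime calls `cudaCreateChannelDesc<T>()` and `cudaMallocArray` from `cuda_runtime.h`:

```cpp
#include <cuda_runtime.h>

// Allocate a 2D CUDA array of float4 texels. The channel descriptor
// produced here records four 32-bit float components:
// (32, 32, 32, 32, cudaChannelFormatKindFloat).
cudaError_t make_float4_array(cudaArray_t* arr, size_t width, size_t height) {
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float4>();
    return cudaMallocArray(arr, &desc, width, height);
}
```

The same descriptor is what `cudaCreateTextureObject` later consults to interpret the array's texels, which is why the low-level five-argument `cudaCreateChannelDesc` appears as the common implementation in every specialization above.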
+
+/* Signed 16-bit normalized integer formats */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindSignedNormalized16X1>(void)
+{
+  return cudaCreateChannelDesc(16, 0, 0, 0, cudaChannelFormatKindSignedNormalized16X1);
+}
+
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindSignedNormalized16X2>(void)
+{
+  return cudaCreateChannelDesc(16, 16, 0, 0, cudaChannelFormatKindSignedNormalized16X2);
+}
+
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindSignedNormalized16X4>(void)
+{
+  return cudaCreateChannelDesc(16, 16, 16, 16, cudaChannelFormatKindSignedNormalized16X4);
+}
+
+/* Unsigned 16-bit normalized integer formats */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedNormalized16X1>(void)
+{
+  return cudaCreateChannelDesc(16, 0, 0, 0, cudaChannelFormatKindUnsignedNormalized16X1);
+}
+
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedNormalized16X2>(void)
+{
+  return cudaCreateChannelDesc(16, 16, 0, 0, cudaChannelFormatKindUnsignedNormalized16X2);
+}
+
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedNormalized16X4>(void)
+{
+  return cudaCreateChannelDesc(16, 16, 16, 16, cudaChannelFormatKindUnsignedNormalized16X4);
+}
+
+/* NV12 format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindNV12>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 0, cudaChannelFormatKindNV12);
+}
+
+/* BC1 format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed1>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed1);
+}
+
+/* BC1sRGB format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed1SRGB>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed1SRGB);
+}
+
+/* BC2 format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed2>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed2);
+}
+
+/* BC2sRGB format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed2SRGB>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed2SRGB);
+}
+
+/* BC3 format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed3>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed3);
+}
+
+/* BC3sRGB format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed3SRGB>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed3SRGB);
+}
+
+/* BC4 unsigned format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed4>(void)
+{
+  return cudaCreateChannelDesc(8, 0, 0, 0, cudaChannelFormatKindUnsignedBlockCompressed4);
+}
+
+/* BC4 signed format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindSignedBlockCompressed4>(void)
+{
+  return cudaCreateChannelDesc(8, 0, 0, 0, cudaChannelFormatKindSignedBlockCompressed4);
+}
+
+/* BC5 unsigned format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed5>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 0, 0, cudaChannelFormatKindUnsignedBlockCompressed5);
+}
+
+/* BC5 signed format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindSignedBlockCompressed5>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 0, 0, cudaChannelFormatKindSignedBlockCompressed5);
+}
+
+/* BC6H unsigned format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed6H>(void)
+{
+  return cudaCreateChannelDesc(16, 16, 16, 0, cudaChannelFormatKindUnsignedBlockCompressed6H);
+}
+
+/* BC6H signed format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindSignedBlockCompressed6H>(void)
+{
+  return cudaCreateChannelDesc(16, 16, 16, 0, cudaChannelFormatKindSignedBlockCompressed6H);
+}
+
+/* BC7 format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed7>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed7);
+}
+
+/* BC7sRGB format */
+template<> __inline__ __host__ cudaChannelFormatDesc cudaCreateChannelDesc<cudaChannelFormatKindUnsignedBlockCompressed7SRGB>(void)
+{
+  return cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsignedBlockCompressed7SRGB);
+}
+
+#endif /* __cplusplus */
+
+/** @} */
+/** @} */ /* END CUDART_TEXTURE_HL */
+
+#endif /* !__CHANNEL_DESCRIPTOR_H__ */
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/coalesced_reduce.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/coalesced_reduce.h
new file mode 100644
index 0000000000000000000000000000000000000000..c3722fb5c22809027cee66ab05758e477e8ef2bf
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/coalesced_reduce.h
@@ -0,0 +1,108 @@
+ /* Copyright 1993-2016 NVIDIA Corporation.  All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * The source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee.  Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE.  THEY ARE
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users.  These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item.  Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#ifndef _CG_COALESCED_REDUCE_H_
+#define _CG_COALESCED_REDUCE_H_
+
+#include "info.h"
+#include "helpers.h"
+#include "cooperative_groups.h"
+#include "partitioning.h"
+#include "coalesced_scan.h"
+
+_CG_BEGIN_NAMESPACE
+
+namespace details {
+
+template <typename TyVal, typename TyOp>
+_CG_QUALIFIER auto coalesced_reduce_to_one(const coalesced_group& group, TyVal&& val, TyOp&& op) -> decltype(op(val, val)) {
+    if (group.size() == 32) {
+        auto out = val;
+        for (int offset = group.size() >> 1; offset > 0; offset >>= 1) {
+            out = op(out, group.shfl_up(out, offset));
+        }
+        return out;
+    }
+    else {
+        auto scan_result =
+            inclusive_scan_non_contiguous(group, _CG_STL_NAMESPACE::forward<TyVal>(val), _CG_STL_NAMESPACE::forward<TyOp>(op));
+        return scan_result;
+    }
+}
+
+template <typename TyVal, typename TyOp>
+_CG_QUALIFIER auto coalesced_reduce(const coalesced_group& group, TyVal&& val, TyOp&& op) -> decltype(op(val, val)) {
+    auto out = coalesced_reduce_to_one(group, _CG_STL_NAMESPACE::forward<TyVal>(val), _CG_STL_NAMESPACE::forward<TyOp>(op));
+    if (group.size() == 32) {
+        return group.shfl(out, 31);
+    }
+    else {
+        unsigned int group_mask = _coalesced_group_data_access::get_mask(group);
+        unsigned int last_thread_id = 31 - __clz(group_mask);
+        return details::tile::shuffle_dispatch<TyVal>::shfl(
+            _CG_STL_NAMESPACE::forward<TyVal>(out), group_mask, last_thread_id, 32);
+    }
+}
+
+template <unsigned int TySize, typename ParentT, typename TyVal, typename TyOp>
+_CG_QUALIFIER auto coalesced_reduce(const __single_warp_thread_block_tile<TySize, ParentT>& group,
+                                    TyVal&& val,
+                                    TyOp&& op) -> decltype(op(val, val)) {
+    auto out = val;
+    for (int mask = TySize >> 1; mask > 0; mask >>= 1) {
+        out = op(out, group.shfl_xor(out, mask));
+    }
+
+    return out;
+}
+
+} // details
+
+_CG_END_NAMESPACE
+
+#endif // _CG_COALESCED_REDUCE_H_
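The single-warp-tile overload at the end of this file is a classic `shfl_xor` butterfly: every lane ends up holding the full reduction, with no shared memory. A kernel-level sketch of the same pattern, written against the public cooperative-groups API rather than the `details::` internals (names like `warp_sums` are placeholders):

```cpp
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// Each 32-lane tile sums `v` with xor-butterflies, mirroring the
// __single_warp_thread_block_tile overload above; lane 0 writes out.
__global__ void warp_sums(const int* in, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    auto warp = cg::tiled_partition<32>(cg::this_thread_block());
    int v = (i < n) ? in[i] : 0;
    for (int mask = warp.size() / 2; mask > 0; mask >>= 1)
        v += warp.shfl_xor(v, mask);          // after the loop, all lanes hold the sum
    if (warp.thread_rank() == 0)
        out[i / 32] = v;
}
```

The coalesced-group path is subtler: with a partial mask it falls back to a non-contiguous scan, and `coalesced_reduce` then broadcasts from the highest active lane (`31 - __clz(mask)`), which is why `coalesced_reduce_to_one` only guarantees the result in that one lane.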
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/scan.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/scan.h
new file mode 100644
index 0000000000000000000000000000000000000000..96d68350e48307d120289e22872abc66f5188115
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/scan.h
@@ -0,0 +1,320 @@
+/* Copyright 1993-2016 NVIDIA Corporation.  All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * The source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee.  Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE.  THEY ARE
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users.  These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item.  Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#ifndef _CG_SCAN_H_
+#define _CG_SCAN_H_
+
+#include "info.h"
+#include "helpers.h"
+#include "functional.h"
+#include "coalesced_scan.h"
+
+_CG_BEGIN_NAMESPACE
+
+namespace details {
+
+    // Group support for scan.
+    template <class TyGroup> struct _scan_group_supported : public _CG_STL_NAMESPACE::false_type {};
+
+    template <unsigned int Sz, typename TyPar>
+    struct _scan_group_supported<cooperative_groups::thread_block_tile<Sz, TyPar>> : public _CG_STL_NAMESPACE::true_type {};
+    template <unsigned int Sz, typename TyPar>
+    struct _scan_group_supported<internal_thread_block_tile<Sz, TyPar>> : public _CG_STL_NAMESPACE::true_type {};
+    template <>
+    struct _scan_group_supported<cooperative_groups::coalesced_group> : public _CG_STL_NAMESPACE::true_type {};
+
+    template <typename TyGroup>
+    using scan_group_supported = _scan_group_supported<details::remove_qual<TyGroup>>;
+
+    template <typename TyVal, typename TyFn>
+    struct integral_optimized_scan;
+
+    enum class ScanType { exclusive, inclusive };
+
+    template <typename TyGroup, ScanType TyScan>
+    struct scan_dispatch;
+
+    template <typename TyGroup, ScanType TyScan>
+    struct scan_dispatch {
+        template <typename TyVal, typename TyFn>
+        _CG_STATIC_QUALIFIER auto scan(const TyGroup& group, TyVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+            auto scan_result = coalesced_inclusive_scan(group, val, op);
+            if (TyScan == ScanType::exclusive) {
+                scan_result = convert_inclusive_to_exclusive(group,
+                                                             scan_result,
+                                                             _CG_STL_NAMESPACE::forward<TyVal>(val),
+                                                             _CG_STL_NAMESPACE::forward<TyFn>(op));
+            }
+            return scan_result;
+        }
+    };
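Note how the warp-level dispatch above always computes an inclusive scan first and then derives the exclusive result via `convert_inclusive_to_exclusive`. On a full warp that conversion is just a one-lane shift: each lane takes the inclusive value of the lane below it, and lane 0 takes the identity. A minimal sketch with raw warp intrinsics, assuming a full 32-lane warp and integer addition:

```cpp
// Inclusive-to-exclusive conversion for one full warp and operator +.
// Lane k receives lane (k-1)'s inclusive prefix; lane 0 gets 0.
__device__ int inclusive_to_exclusive(int inclusive) {
    int shifted = __shfl_up_sync(0xFFFFFFFFu, inclusive, 1);
    return (threadIdx.x % 32 == 0) ? 0 : shifted;
}
```

For partially active (coalesced) groups the library version has to map logical ranks to active lanes, which is why the conversion is a helper rather than a bare shuffle.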
+
+#if defined(_CG_CPP11_FEATURES)
+    template <unsigned int Size, typename TyPar, ScanType TyScan>
+    struct scan_dispatch<thread_block_tile<Size, TyPar>, TyScan> {
+        template <typename TyVal, typename TyFn>
+        _CG_STATIC_QUALIFIER auto scan(const thread_block_tile<Size, TyPar>& group, TyVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+            using warpType = details::internal_thread_block_tile<32, __static_size_multi_warp_tile_base<Size>>;
+            using TyRet = details::remove_qual<TyVal>;
+            const unsigned int num_warps = Size / 32;
+            // In-warp scan result, calculated in warp_lambda
+            TyRet warp_scan;
+
+            // In-warp scan; the warp's total is stored to warp_scratch_location
+            auto warp_lambda = [&] (const warpType& warp, TyRet* warp_scratch_location) {
+                warp_scan =
+                    details::coalesced_inclusive_scan(warp, _CG_STL_NAMESPACE::forward<TyVal>(val), op);
+                if (warp.thread_rank() + 1 == warp.size()) {
+                    *warp_scratch_location = warp_scan;
+                }
+                if (TyScan == ScanType::exclusive) {
+                    warp_scan = warp.shfl_up(warp_scan, 1);
+                }
+            };
+
+            // Tile of size num_warps performing the final scan part (exclusive scan of warp sums); the other threads
+            // will add it to their in-warp scan result
+            auto inter_warp_lambda =
+                [&] (const details::internal_thread_block_tile<num_warps, warpType>& subwarp, TyRet* thread_scratch_location) {
+                    auto thread_val = *thread_scratch_location;
+                    auto result = coalesced_inclusive_scan(subwarp, thread_val, op);
+                    *thread_scratch_location = convert_inclusive_to_exclusive(subwarp, result, thread_val, op);
+                };
+
+            TyRet previous_warps_sum = details::multi_warp_collectives_helper<TyRet>(group, warp_lambda, inter_warp_lambda);
+            if (TyScan == ScanType::exclusive && warpType::thread_rank() == 0) {
+                return previous_warps_sum;
+            }
+            if (warpType::meta_group_rank() == 0) {
+                return warp_scan;
+            }
+            else {
+                return op(warp_scan, previous_warps_sum);
+            }
+        }
+    };
+
+#if defined(_CG_HAS_STL_ATOMICS)
+    template <typename TyGroup, ScanType TyScan>
+    struct scan_update_dispatch;
+
+    template <typename TyGroup, ScanType TyScan>
+    struct scan_update_dispatch {
+        template <typename TyAtomic, typename TyVal, typename TyFn>
+        _CG_STATIC_QUALIFIER auto scan(const TyGroup& group, TyAtomic& dst, TyVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+            details::remove_qual<TyVal> old;
+
+            // Do regular in group scan
+            auto scan_result = details::coalesced_inclusive_scan(group, val, op);
+
+            // Last thread updates the atomic and distributes its old value to other threads
+            if (group.thread_rank() == group.size() - 1) {
+                old = atomic_update(dst, scan_result, _CG_STL_NAMESPACE::forward<TyFn>(op));
+            }
+            old = group.shfl(old, group.size() - 1);
+            if (TyScan == ScanType::exclusive) {
+                scan_result = convert_inclusive_to_exclusive(group, scan_result, _CG_STL_NAMESPACE::forward<TyVal>(val), op);
+            }
+            scan_result = op(old, scan_result);
+            return scan_result;
+        }
+    };
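The `*_update` dispatchers combine a group-local scan with a single atomic fetch-op: the last lane folds the group total into `dst`, and the old value it reads back becomes a shared offset for every lane. In effect each group atomically reserves a contiguous range, which is the standard building block for stream compaction. A usage sketch, assuming a 32-thread tile, a device-scope `cuda::atomic` from libcu++, and a device of compute capability and CUDA version where `_CG_HAS_STL_ATOMICS` is available (`compact` and `cursor` are illustrative names):

```cpp
#include <cooperative_groups.h>
#include <cooperative_groups/scan.h>
#include <cuda/atomic>
namespace cg = cooperative_groups;

// Each selected thread reserves one output slot; `cursor` hands out
// contiguous index ranges with one atomic RMW per 32-thread tile.
__global__ void compact(const int* in, int n, int* out,
                        cuda::atomic<int, cuda::thread_scope_device>* cursor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    auto tile = cg::tiled_partition<32>(cg::this_thread_block());
    int keep = (i < n && in[i] > 0) ? 1 : 0;
    // exclusive prefix of `keep` plus the old cursor value = output slot
    int slot = cg::exclusive_scan_update(tile, *cursor, keep);
    if (keep) out[slot] = in[i];
}
```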
+
+    template <unsigned int Size, typename TyPar, ScanType TyScan>
+    struct scan_update_dispatch<thread_block_tile<Size, TyPar>, TyScan> {
+        template <typename TyAtomic, typename TyVal, typename TyFn>
+        _CG_STATIC_QUALIFIER auto scan(const thread_block_tile<Size, TyPar>& group, TyAtomic& dst, TyVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+            using warpType = details::internal_thread_block_tile<32, __static_size_multi_warp_tile_base<Size>>;
+            using TyRet = details::remove_qual<TyVal>;
+            const unsigned int num_warps = Size / 32;
+            // In-warp scan result, calculated in warp_lambda
+            TyRet warp_scan;
+
+            // In-warp scan; the warp's total is stored to warp_scratch_location
+            auto warp_lambda = [&] (const warpType& warp, TyRet* warp_scratch_location) {
+                warp_scan =
+                    details::coalesced_inclusive_scan(warp, _CG_STL_NAMESPACE::forward<TyVal>(val), op);
+                if (warp.thread_rank() + 1 == warp.size()) {
+                    *warp_scratch_location = warp_scan;
+                }
+                if (TyScan == ScanType::exclusive) {
+                    warp_scan = warp.shfl_up(warp_scan, 1);
+                }
+            };
+
+            // Tile of size num_warps performing the final scan part (exclusive scan of warp sums); the other threads
+            // will add it to their in-warp scan result
+            auto inter_warp_lambda =
+                [&] (const details::internal_thread_block_tile<num_warps, warpType>& subwarp, TyRet* thread_scratch_location) {
+                    auto thread_val = *thread_scratch_location;
+                    auto scan_result = details::coalesced_inclusive_scan(subwarp, thread_val, op);
+                    TyRet offset;
+                    // A single thread does the atomic update with the sum of all contributions and reads the old value.
+                    if (subwarp.thread_rank() == subwarp.size() - 1) {
+                        offset = details::atomic_update(dst, scan_result, op);
+                    }
+                    offset = subwarp.shfl(offset, subwarp.size() - 1);
+                    scan_result = convert_inclusive_to_exclusive(subwarp, scan_result, thread_val, op);
+                    // Add the offset read from the atomic to the scanned warp sum.
+                    // Skip the first thread, since it got a default-constructed value from the conversion;
+                    // it should just return the offset received from the thread that did the atomic update.
+                    if (subwarp.thread_rank() != 0) {
+                        offset = op(scan_result, offset);
+                    }
+                    *thread_scratch_location = offset;
+                };
+
+            TyRet previous_warps_sum = details::multi_warp_collectives_helper<TyRet>(group, warp_lambda, inter_warp_lambda);
+            if (TyScan == ScanType::exclusive && warpType::thread_rank() == 0) {
+                return previous_warps_sum;
+            }
+            return op(warp_scan, previous_warps_sum);
+        }
+    };
+#endif
+#endif
+
+    template <typename TyGroup, typename TyInputVal, typename TyRetVal>
+    _CG_QUALIFIER void check_scan_params() {
+        static_assert(details::is_op_type_same<TyInputVal, TyRetVal>::value, "Operator input and output types differ");
+        static_assert(details::scan_group_supported<TyGroup>::value, "This group does not exclusively represent a tile");
+    }
+
+#if defined(_CG_HAS_STL_ATOMICS)
+    template <typename TyGroup, typename TyDstVal, typename TyInputVal, typename TyRetVal>
+    _CG_QUALIFIER void check_scan_update_params() {
+        check_scan_params<TyGroup, TyInputVal, TyRetVal>();
+        static_assert(details::is_op_type_same<TyDstVal, TyInputVal>::value, "Destination and input types differ");
+    }
+#endif
+
+} // details
+
+template <typename TyGroup, typename TyVal, typename TyFn>
+_CG_QUALIFIER auto inclusive_scan(const TyGroup& group, TyVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+    details::check_scan_params<TyGroup, details::remove_qual<TyVal>, decltype(op(val, val))>();
+
+    using dispatch = details::scan_dispatch<TyGroup, details::ScanType::inclusive>;
+    return dispatch::scan(group, _CG_STL_NAMESPACE::forward<TyVal>(val), _CG_STL_NAMESPACE::forward<TyFn>(op));
+}
+
+template <typename TyGroup, typename TyVal>
+_CG_QUALIFIER details::remove_qual<TyVal> inclusive_scan(const TyGroup& group, TyVal&& val) {
+    return inclusive_scan(group, _CG_STL_NAMESPACE::forward<TyVal>(val), cooperative_groups::plus<details::remove_qual<TyVal>>());
+}
+
+template <typename TyGroup, typename TyVal, typename TyFn>
+_CG_QUALIFIER auto exclusive_scan(const TyGroup& group, TyVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+    details::check_scan_params<TyGroup, details::remove_qual<TyVal>, decltype(op(val, val))>();
+
+    using dispatch = details::scan_dispatch<TyGroup, details::ScanType::exclusive>;
+    return dispatch::scan(group, _CG_STL_NAMESPACE::forward<TyVal>(val), _CG_STL_NAMESPACE::forward<TyFn>(op));
+}
+
+template <typename TyGroup, typename TyVal>
+_CG_QUALIFIER details::remove_qual<TyVal> exclusive_scan(const TyGroup& group, TyVal&& val) {
+    return exclusive_scan(group, _CG_STL_NAMESPACE::forward<TyVal>(val), cooperative_groups::plus<details::remove_qual<TyVal>>());
+}
+
+#if defined(_CG_HAS_STL_ATOMICS)
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal, typename TyFn>
+_CG_QUALIFIER auto inclusive_scan_update(const TyGroup& group, cuda::atomic<TyVal, Sco>& dst, TyInputVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+    details::check_scan_update_params<TyGroup, TyVal, details::remove_qual<TyInputVal>, decltype(op(val, val))>();
+
+    using dispatch = details::scan_update_dispatch<TyGroup, details::ScanType::inclusive>;
+    return dispatch::scan(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), _CG_STL_NAMESPACE::forward<TyFn>(op));
+}
+
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal>
+_CG_QUALIFIER TyVal inclusive_scan_update(const TyGroup& group, cuda::atomic<TyVal, Sco>& dst, TyInputVal&& val) {
+    return inclusive_scan_update(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), cooperative_groups::plus<TyVal>());
+}
+
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal, typename TyFn>
+_CG_QUALIFIER auto exclusive_scan_update(const TyGroup& group, cuda::atomic<TyVal, Sco>& dst, TyInputVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+    details::check_scan_update_params<TyGroup, TyVal, details::remove_qual<TyInputVal>, decltype(op(val, val))>();
+
+    using dispatch = details::scan_update_dispatch<TyGroup, details::ScanType::exclusive>;
+    return dispatch::scan(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), _CG_STL_NAMESPACE::forward<TyFn>(op));
+}
+
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal>
+_CG_QUALIFIER TyVal exclusive_scan_update(const TyGroup& group, cuda::atomic<TyVal, Sco>& dst, TyInputVal&& val) {
+    return exclusive_scan_update(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), cooperative_groups::plus<TyVal>());
+}
+
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal, typename TyFn>
+_CG_QUALIFIER auto inclusive_scan_update(const TyGroup& group, const cuda::atomic_ref<TyVal, Sco>& dst, TyInputVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+    details::check_scan_update_params<TyGroup, TyVal, details::remove_qual<TyInputVal>, decltype(op(val, val))>();
+
+    using dispatch = details::scan_update_dispatch<TyGroup, details::ScanType::inclusive>;
+    return dispatch::scan(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), _CG_STL_NAMESPACE::forward<TyFn>(op));
+}
+
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal>
+_CG_QUALIFIER TyVal inclusive_scan_update(const TyGroup& group, const cuda::atomic_ref<TyVal, Sco>& dst, TyInputVal&& val) {
+    return inclusive_scan_update(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), cooperative_groups::plus<TyVal>());
+}
+
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal, typename TyFn>
+_CG_QUALIFIER auto exclusive_scan_update(const TyGroup& group, const cuda::atomic_ref<TyVal, Sco>& dst, TyInputVal&& val, TyFn&& op) -> decltype(op(val, val)) {
+    details::check_scan_update_params<TyGroup, TyVal, details::remove_qual<TyInputVal>, decltype(op(val, val))>();
+
+    using dispatch = details::scan_update_dispatch<TyGroup, details::ScanType::exclusive>;
+    return dispatch::scan(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), _CG_STL_NAMESPACE::forward<TyFn>(op));
+}
+
+template <typename TyGroup, typename TyVal, cuda::thread_scope Sco, typename TyInputVal>
+_CG_QUALIFIER TyVal exclusive_scan_update(const TyGroup& group, const cuda::atomic_ref<TyVal, Sco>& dst, TyInputVal&& val) {
+    return exclusive_scan_update(group, dst, _CG_STL_NAMESPACE::forward<TyInputVal>(val), cooperative_groups::plus<TyVal>());
+}
+#endif
+
+_CG_END_NAMESPACE
+
+#endif // _CG_SCAN_H_
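For reference, the tile-level entry points defined above are used like any other cooperative-groups collective; `cooperative_groups::plus` comes from `functional.h`, which this header already includes. A minimal sketch, assuming a launch where every lane of each 32-thread tile is active:

```cpp
#include <cooperative_groups.h>
#include <cooperative_groups/scan.h>
namespace cg = cooperative_groups;

// Per-tile prefix sums: inclusive (default op is cg::plus) and exclusive.
__global__ void prefix_sums(const int* in, int* incl, int* excl) {
    auto tile = cg::tiled_partition<32>(cg::this_thread_block());
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int v = in[i];
    incl[i] = cg::inclusive_scan(tile, v);
    excl[i] = cg::exclusive_scan(tile, v, cg::plus<int>());
}
```

The `static_assert`s in `check_scan_params` are what turn a call on an unsupported group type (anything that is not exactly a tile) into a readable compile-time error instead of a template backtrace.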
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cudaEGLTypedefs.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cudaEGLTypedefs.h
new file mode 100644
index 0000000000000000000000000000000000000000..61b82337dc4bb280869934b11c2105db62ae20c3
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cudaEGLTypedefs.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright 2020-2021 NVIDIA Corporation.  All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee.  Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE.  IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users.  These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item.  Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#ifndef CUDAEGLTYPEDEFS_H
+#define CUDAEGLTYPEDEFS_H
+
+#include <cudaEGL.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif // __cplusplus
+
+/*
+ * Macros for the latest version for each driver function in cudaEGL.h
+ */
+#define PFN_cuGraphicsEGLRegisterImage  PFN_cuGraphicsEGLRegisterImage_v7000
+#define PFN_cuEGLStreamConsumerConnect  PFN_cuEGLStreamConsumerConnect_v7000
+#define PFN_cuEGLStreamConsumerConnectWithFlags  PFN_cuEGLStreamConsumerConnectWithFlags_v8000
+#define PFN_cuEGLStreamConsumerDisconnect  PFN_cuEGLStreamConsumerDisconnect_v7000
+#define PFN_cuEGLStreamConsumerAcquireFrame  PFN_cuEGLStreamConsumerAcquireFrame_v7000
+#define PFN_cuEGLStreamConsumerReleaseFrame  PFN_cuEGLStreamConsumerReleaseFrame_v7000
+#define PFN_cuEGLStreamProducerConnect  PFN_cuEGLStreamProducerConnect_v7000
+#define PFN_cuEGLStreamProducerDisconnect  PFN_cuEGLStreamProducerDisconnect_v7000
+#define PFN_cuEGLStreamProducerPresentFrame  PFN_cuEGLStreamProducerPresentFrame_v7000
+#define PFN_cuEGLStreamProducerReturnFrame  PFN_cuEGLStreamProducerReturnFrame_v7000
+#define PFN_cuGraphicsResourceGetMappedEglFrame  PFN_cuGraphicsResourceGetMappedEglFrame_v7000
+#define PFN_cuEventCreateFromEGLSync  PFN_cuEventCreateFromEGLSync_v9000
+
+
+/**
+ * Type definitions for functions defined in cudaEGL.h
+ */
+typedef CUresult (CUDAAPI *PFN_cuGraphicsEGLRegisterImage_v7000)(CUgraphicsResource *pCudaResource, EGLImageKHR image, unsigned int flags);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamConsumerConnect_v7000)(CUeglStreamConnection *conn, EGLStreamKHR stream);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamConsumerConnectWithFlags_v8000)(CUeglStreamConnection *conn, EGLStreamKHR stream, unsigned int flags);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamConsumerDisconnect_v7000)(CUeglStreamConnection *conn);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamConsumerAcquireFrame_v7000)(CUeglStreamConnection *conn, CUgraphicsResource *pCudaResource, CUstream *pStream, unsigned int timeout);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamConsumerReleaseFrame_v7000)(CUeglStreamConnection *conn, CUgraphicsResource pCudaResource, CUstream *pStream);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamProducerConnect_v7000)(CUeglStreamConnection *conn, EGLStreamKHR stream, EGLint width, EGLint height);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamProducerDisconnect_v7000)(CUeglStreamConnection *conn);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamProducerPresentFrame_v7000)(CUeglStreamConnection *conn, CUeglFrame_v1 eglframe, CUstream *pStream);
+typedef CUresult (CUDAAPI *PFN_cuEGLStreamProducerReturnFrame_v7000)(CUeglStreamConnection *conn, CUeglFrame_v1 *eglframe, CUstream *pStream);
+typedef CUresult (CUDAAPI *PFN_cuGraphicsResourceGetMappedEglFrame_v7000)(CUeglFrame_v1 *eglFrame, CUgraphicsResource resource, unsigned int index, unsigned int mipLevel);
+typedef CUresult (CUDAAPI *PFN_cuEventCreateFromEGLSync_v9000)(CUevent *phEvent, EGLSyncKHR eglSync, unsigned int flags);
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+
+#endif // file guard
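These `PFN_*` aliases exist so a loader can resolve driver entry points at runtime instead of linking against them: the unversioned macro always expands to the newest versioned typedef. A sketch of that pattern using `cuGetProcAddress`, assuming the CUDA 11.3+ driver API (CUDA 12 renames this to a `_v2` entry point that adds a status out-parameter, so treat the exact call as version-dependent):

```cpp
#include <cuda.h>
#include <cudaEGLTypedefs.h>

// Resolve the latest cuEGLStreamConsumerConnect at runtime.
// PFN_cuEGLStreamConsumerConnect expands to the newest versioned typedef.
static PFN_cuEGLStreamConsumerConnect load_consumer_connect() {
    void* fn = nullptr;
    if (cuGetProcAddress("cuEGLStreamConsumerConnect", &fn,
                         CUDA_VERSION, CU_GET_PROC_ADDRESS_DEFAULT) != CUDA_SUCCESS) {
        return nullptr;
    }
    return reinterpret_cast<PFN_cuEGLStreamConsumerConnect>(fn);
}
```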
diff --git
a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cudaGLTypedefs.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cudaGLTypedefs.h new file mode 100644 index 0000000000000000000000000000000000000000..81f0d5349e435159647af9af379d1e8e8441221c --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cudaGLTypedefs.h @@ -0,0 +1,123 @@ +/* + * Copyright 2020-2021 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */
+
+#ifndef CUDAGLTYPEDEFS_H
+#define CUDAGLTYPEDEFS_H
+
+// Dependent includes for cudagl.h
+#include <GL/gl.h>
+
+#include <cudaGL.h>
+
+#if defined(CUDA_API_PER_THREAD_DEFAULT_STREAM)
+    #define __API_TYPEDEF_PTDS(api, default_version, ptds_version) api ## _v ## ptds_version ## _ptds
+    #define __API_TYPEDEF_PTSZ(api, default_version, ptds_version) api ## _v ## ptds_version ## _ptsz
+#else
+    #define __API_TYPEDEF_PTDS(api, default_version, ptds_version) api ## _v ## default_version
+    #define __API_TYPEDEF_PTSZ(api, default_version, ptds_version) api ## _v ## default_version
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif // __cplusplus
+
+/*
+ * Macros for the latest version for each driver function in cudaGL.h
+ */
+#define PFN_cuGraphicsGLRegisterBuffer  PFN_cuGraphicsGLRegisterBuffer_v3000
+#define PFN_cuGraphicsGLRegisterImage  PFN_cuGraphicsGLRegisterImage_v3000
+#define PFN_cuWGLGetDevice  PFN_cuWGLGetDevice_v2020
+#define PFN_cuGLGetDevices  PFN_cuGLGetDevices_v6050
+#define PFN_cuGLCtxCreate  PFN_cuGLCtxCreate_v3020
+#define PFN_cuGLInit  PFN_cuGLInit_v2000
+#define PFN_cuGLRegisterBufferObject  PFN_cuGLRegisterBufferObject_v2000
+#define PFN_cuGLMapBufferObject  __API_TYPEDEF_PTDS(PFN_cuGLMapBufferObject, 3020, 7000)
+#define PFN_cuGLUnmapBufferObject  PFN_cuGLUnmapBufferObject_v2000
+#define PFN_cuGLUnregisterBufferObject  PFN_cuGLUnregisterBufferObject_v2000
+#define PFN_cuGLSetBufferObjectMapFlags  PFN_cuGLSetBufferObjectMapFlags_v2030
+#define PFN_cuGLMapBufferObjectAsync  __API_TYPEDEF_PTSZ(PFN_cuGLMapBufferObjectAsync, 3020, 7000)
+#define PFN_cuGLUnmapBufferObjectAsync  PFN_cuGLUnmapBufferObjectAsync_v2030
+
+
+/**
+ * Type definitions for functions defined in cudaGL.h
+ */
+typedef CUresult (CUDAAPI *PFN_cuGraphicsGLRegisterBuffer_v3000)(CUgraphicsResource *pCudaResource, GLuint buffer, unsigned int Flags);
+typedef CUresult (CUDAAPI *PFN_cuGraphicsGLRegisterImage_v3000)(CUgraphicsResource *pCudaResource, GLuint image, GLenum target, unsigned int Flags);
+#ifdef _WIN32
+typedef CUresult (CUDAAPI *PFN_cuWGLGetDevice_v2020)(CUdevice_v1 *pDevice, HGPUNV hGpu);
+#endif
+typedef CUresult (CUDAAPI *PFN_cuGLGetDevices_v6050)(unsigned int *pCudaDeviceCount, CUdevice_v1 *pCudaDevices, unsigned int cudaDeviceCount, CUGLDeviceList deviceList);
+typedef CUresult (CUDAAPI *PFN_cuGLCtxCreate_v3020)(CUcontext *pCtx, unsigned int Flags, CUdevice_v1 device);
+typedef CUresult (CUDAAPI *PFN_cuGLInit_v2000)(void);
+typedef CUresult (CUDAAPI *PFN_cuGLRegisterBufferObject_v2000)(GLuint buffer);
+typedef CUresult (CUDAAPI *PFN_cuGLMapBufferObject_v7000_ptds)(CUdeviceptr_v2 *dptr, size_t *size, GLuint buffer);
+typedef CUresult (CUDAAPI *PFN_cuGLUnmapBufferObject_v2000)(GLuint buffer);
+typedef CUresult (CUDAAPI *PFN_cuGLUnregisterBufferObject_v2000)(GLuint buffer);
+typedef CUresult (CUDAAPI *PFN_cuGLSetBufferObjectMapFlags_v2030)(GLuint buffer, unsigned int Flags);
+typedef CUresult (CUDAAPI *PFN_cuGLMapBufferObjectAsync_v7000_ptsz)(CUdeviceptr_v2 *dptr, size_t *size, GLuint buffer, CUstream hStream);
+typedef CUresult (CUDAAPI *PFN_cuGLUnmapBufferObjectAsync_v2030)(GLuint buffer, CUstream hStream);
+typedef CUresult (CUDAAPI *PFN_cuGLMapBufferObject_v3020)(CUdeviceptr_v2 *dptr, size_t *size, GLuint buffer);
+typedef CUresult (CUDAAPI *PFN_cuGLMapBufferObjectAsync_v3020)(CUdeviceptr_v2 *dptr, size_t *size, GLuint buffer, CUstream hStream);
+
+/*
+ * Type definitions for older versioned functions in cuda.h
+ */
+#if defined(__CUDA_API_VERSION_INTERNAL)
+typedef CUresult (CUDAAPI *PFN_cuGLGetDevices_v4010)(unsigned int
*pCudaDeviceCount, CUdevice_v1 *pCudaDevices, unsigned int cudaDeviceCount, CUGLDeviceList deviceList); +typedef CUresult (CUDAAPI *PFN_cuGLMapBufferObject_v2000)(CUdeviceptr_v1 *dptr, unsigned int *size, GLuint buffer); +typedef CUresult (CUDAAPI *PFN_cuGLMapBufferObjectAsync_v2030)(CUdeviceptr_v1 *dptr, unsigned int *size, GLuint buffer, CUstream hStream); +typedef CUresult (CUDAAPI *PFN_cuGLCtxCreate_v2000)(CUcontext *pCtx, unsigned int Flags, CUdevice_v1 device); +#endif + +#ifdef __cplusplus +} +#endif // __cplusplus + +#endif // file guard diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_fp8.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_fp8.h new file mode 100644 index 0000000000000000000000000000000000000000..d47c9a9100b13192e1e6376001a4989e9c077340 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_fp8.h @@ -0,0 +1,367 @@ +/* + * Copyright 2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */
+
+#ifndef __CUDA_FP8_H__
+#define __CUDA_FP8_H__
+
+/* Set up function decorations */
+#if defined(__CUDACC__)
+#define __CUDA_FP8_DECL__ static __device__ __inline__
+#define __CUDA_HOSTDEVICE_FP8__ __host__ __device__
+#define __CUDA_HOSTDEVICE_FP8_DECL__ static __host__ __device__ __inline__
+#else /* !defined(__CUDACC__) */
+#if defined(__GNUC__)
+#define __CUDA_HOSTDEVICE_FP8_DECL__ static __attribute__((unused))
+#else
+#define __CUDA_HOSTDEVICE_FP8_DECL__ static
+#endif /* defined(__GNUC__) */
+#define __CUDA_HOSTDEVICE_FP8__
+#endif /* defined(__CUDACC__) */
+
+#if !defined(_MSC_VER) && __cplusplus >= 201103L
+#define __CPP_VERSION_AT_LEAST_11_FP8
+#elif _MSC_FULL_VER >= 190024210 && _MSVC_LANG >= 201103L
+#define __CPP_VERSION_AT_LEAST_11_FP8
+#endif
+
+/* bring in __half_raw data type */
+#include "cuda_fp16.h"
+/* bring in __nv_bfloat16_raw data type */
+#include "cuda_bf16.h"
+/* bring in float2, double4, etc vector types */
+#include "vector_types.h"
+
+/**
+ * \defgroup CUDA_MATH_INTRINSIC_FP8 FP8 Intrinsics
+ * This section describes fp8 intrinsic functions.
+ * To use these functions, include the header file \p cuda_fp8.h in your
+ * program.
+ * The following macros are available to help users selectively enable/disable
+ * various definitions present in the header file:
+ * - \p __CUDA_NO_FP8_CONVERSIONS__ - If defined, this macro will prevent any
+ * use of the C++ type conversions (converting constructors and conversion
+ * operators) defined in the header.
+ * - \p __CUDA_NO_FP8_CONVERSION_OPERATORS__ - If defined, this macro will
+ * prevent any use of the C++ conversion operators from \p fp8 to other types.
+ */
+
+/**
+ * \defgroup CUDA_MATH_FP8_MISC FP8 Conversion and Data Movement
+ * \ingroup CUDA_MATH_INTRINSIC_FP8
+ * To use these functions, include the header file \p cuda_fp8.h in your
+ * program.
+ */
+
+/**
+ * \ingroup CUDA_MATH_FP8_MISC
+ * \brief 8-bit \p unsigned \p integer
+ * type abstraction used for \p fp8 floating-point
+ * number storage.
+ */
+typedef unsigned char __nv_fp8_storage_t;
+
+/**
+ * \ingroup CUDA_MATH_FP8_MISC
+ * \brief 16-bit \p unsigned \p integer
+ * type abstraction used for storage of pairs of
+ * \p fp8 floating-point numbers.
+ */
+typedef unsigned short int __nv_fp8x2_storage_t;
+
+/**
+ * \ingroup CUDA_MATH_FP8_MISC
+ * \brief 32-bit \p unsigned \p integer
+ * type abstraction used for storage of tetrads of
+ * \p fp8 floating-point numbers.
+ */
+typedef unsigned int __nv_fp8x4_storage_t;
+
+/**
+ * \ingroup CUDA_MATH_FP8_MISC
+ * \brief Enumerates the modes applicable when
+ * performing a narrowing conversion to \p fp8 destination types.
+ */
+typedef enum __nv_saturation_t {
+    /**
+     * Means no saturation to finite is performed when conversion
+     * results in rounding values outside the range of destination
+     * type.
+     * NOTE: for fp8 type of e4m3 kind, the results that are larger
+     * than the maximum representable finite number of the target
+     * format become NaN.
+     */
+    __NV_NOSAT,
+    /**
+     * Means input larger than the maximum representable
+     * finite number MAXNORM of the target format round to the
+     * MAXNORM of the same sign as input.
+     */
+    __NV_SATFINITE,
+} __nv_saturation_t;
+
+/**
+ * \ingroup CUDA_MATH_FP8_MISC
+ * \brief Enumerates the possible
+ * interpretations of the 8-bit values when referring to them as
+ * \p fp8 types.
+ */
+typedef enum __nv_fp8_interpretation_t {
+    __NV_E4M3, /**< Stands for \p fp8 numbers of \p e4m3 kind. */
+    __NV_E5M2, /**< Stands for \p fp8 numbers of \p e5m2 kind.
*/ +} __nv_fp8_interpretation_t; + +/* Forward-declaration of C-style APIs */ + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input \p double precision \p x to \p fp8 type of the + * requested kind using round-to-nearest-even rounding and requested saturation + * mode. + * + * \details Converts input \p x to \p fp8 type of the kind specified by + * \p fp8_interpretation parameter, + * using round-to-nearest-even rounding and + * saturation mode specified by \p saturate parameter. + * + * \returns + * - The \p __nv_fp8_storage_t value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t +__nv_cvt_double_to_fp8(const double x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input vector of two \p double precision numbers packed + * in \p double2 \p x into a vector of two values of \p fp8 type of + * the requested kind using round-to-nearest-even rounding and requested + * saturation mode. + * + * \details Converts input vector \p x to a vector of two \p fp8 values of the + * kind specified by \p fp8_interpretation parameter, using + * round-to-nearest-even rounding and saturation mode specified by \p saturate + * parameter. + * + * \returns + * - The \p __nv_fp8x2_storage_t value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t +__nv_cvt_double2_to_fp8x2(const double2 x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input \p single precision \p x to \p fp8 type of the + * requested kind using round-to-nearest-even rounding and requested saturation + * mode. + * + * \details Converts input \p x to \p fp8 type of the kind specified by + * \p fp8_interpretation parameter, + * using round-to-nearest-even rounding and + * saturation mode specified by \p saturate parameter. + * + * \returns + * - The \p __nv_fp8_storage_t value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t +__nv_cvt_float_to_fp8(const float x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input vector of two \p single precision numbers packed + * in \p float2 \p x into a vector of two values of \p fp8 type of + * the requested kind using round-to-nearest-even rounding and requested + * saturation mode. + * + * \details Converts input vector \p x to a vector of two \p fp8 values of the + * kind specified by \p fp8_interpretation parameter, using + * round-to-nearest-even rounding and saturation mode specified by \p saturate + * parameter. + * + * \returns + * - The \p __nv_fp8x2_storage_t value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t +__nv_cvt_float2_to_fp8x2(const float2 x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input \p half precision \p x to \p fp8 type of the requested + * kind using round-to-nearest-even rounding and requested saturation mode. + * + * \details Converts input \p x to \p fp8 type of the kind specified by + * \p fp8_interpretation parameter, + * using round-to-nearest-even rounding and + * saturation mode specified by \p saturate parameter. + * + * \returns + * - The \p __nv_fp8_storage_t value holds the result of conversion. 
+ */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t +__nv_cvt_halfraw_to_fp8(const __half_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input vector of two \p half precision numbers packed + * in \p __half2_raw \p x into a vector of two values of \p fp8 type of + * the requested kind using round-to-nearest-even rounding and requested + * saturation mode. + * + * \details Converts input vector \p x to a vector of two \p fp8 values of the + * kind specified by \p fp8_interpretation parameter, using + * round-to-nearest-even rounding and saturation mode specified by \p saturate + * parameter. + * + * \returns + * - The \p __nv_fp8x2_storage_t value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t __nv_cvt_halfraw2_to_fp8x2( + const __half2_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input \p nv_bfloat16 precision \p x to \p fp8 type of the + * requested kind using round-to-nearest-even rounding and requested saturation + * mode. + * + * \details Converts input \p x to \p fp8 type of the kind specified by + * \p fp8_interpretation parameter, + * using round-to-nearest-even rounding and + * saturation mode specified by \p saturate parameter. + * + * \returns + * - The \p __nv_fp8_storage_t value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t __nv_cvt_bfloat16raw_to_fp8( + const __nv_bfloat16_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input vector of two \p nv_bfloat16 precision numbers packed + * in \p __nv_bfloat162_raw \p x into a vector of two values of \p fp8 type of + * the requested kind using round-to-nearest-even rounding and requested + * saturation mode. + * + * \details Converts input vector \p x to a vector of two \p fp8 values of the + * kind specified by \p fp8_interpretation parameter, using + * round-to-nearest-even rounding and saturation mode specified by \p saturate + * parameter. + * + * \returns + * - The \p __nv_fp8x2_storage_t value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t +__nv_cvt_bfloat16raw2_to_fp8x2( + const __nv_bfloat162_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation); + +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input \p fp8 \p x of the specified kind + * to \p half precision. + * + * \details Converts input \p x of \p fp8 type of the kind specified by + * \p fp8_interpretation parameter + * to \p half precision. + * + * \returns + * - The \p __half_raw value holds the result of conversion. + */ +__CUDA_HOSTDEVICE_FP8_DECL__ __half_raw +__nv_cvt_fp8_to_halfraw(const __nv_fp8_storage_t x, + const __nv_fp8_interpretation_t fp8_interpretation); +/** + * \ingroup CUDA_MATH_FP8_MISC + * \brief Converts input vector of two \p fp8 values of the specified kind + * to a vector of two \p half precision values packed in \p __half2_raw + * structure. + * + * \details Converts input vector \p x of \p fp8 type of the kind specified by + * \p fp8_interpretation parameter + * to a vector of two \p half precision values and returns as \p __half2_raw + * structure. + * + * \returns + * - The \p __half2_raw value holds the result of conversion. 
+ */ +__CUDA_HOSTDEVICE_FP8_DECL__ __half2_raw +__nv_cvt_fp8x2_to_halfraw2(const __nv_fp8x2_storage_t x, + const __nv_fp8_interpretation_t fp8_interpretation); + +#if defined(__cplusplus) + +#define __CUDA_FP8_TYPES_EXIST__ + +/* Forward-declaration of structures defined in "cuda_fp8.hpp" */ +struct __nv_fp8_e5m2; +struct __nv_fp8x2_e5m2; +struct __nv_fp8x4_e5m2; + +struct __nv_fp8_e4m3; +struct __nv_fp8x2_e4m3; +struct __nv_fp8x4_e4m3; + +#endif /* defined(__cplusplus) */ + +#include "cuda_fp8.hpp" + +#undef __CUDA_FP8_DECL__ +#undef __CUDA_HOSTDEVICE_FP8__ +#undef __CUDA_HOSTDEVICE_FP8_DECL__ + +#if defined(__CPP_VERSION_AT_LEAST_11_FP8) +#undef __CPP_VERSION_AT_LEAST_11_FP8 +#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */ + +#endif /* end of include guard: __CUDA_FP8_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_fp8.hpp b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_fp8.hpp new file mode 100644 index 0000000000000000000000000000000000000000..9212081df2c7ea8938cb1142a3b2ba5750b7329b --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_fp8.hpp @@ -0,0 +1,1546 @@ +/* + * Copyright 2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. 
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#if !defined(__CUDA_FP8_HPP__)
+#define __CUDA_FP8_HPP__
+
+#if !defined(__CUDA_FP8_H__)
+#error "Do not include this file directly. Instead, include cuda_fp8.h."
+#endif
+
+/* C++ header for std::memcpy (used for type punning in host-side
+ * implementations). When compiling as a CUDA source file memcpy is provided
+ * implicitly. !defined(__CUDACC__) implies !defined(__CUDACC_RTC__).
+ */
+#if defined(__cplusplus) && !defined(__CUDACC__)
+#include <cstring>
+#elif !defined(__cplusplus) && !defined(__CUDACC__)
+#include <string.h>
+#endif /* defined(__cplusplus) && !defined(__CUDACC__) */
+
+/* Set up structure-alignment attribute */
+#if !(defined __CUDA_ALIGN__)
+#if defined(__CUDACC__)
+#define __CUDA_ALIGN__(align) __align__(align)
+#else
+/* Define alignment macro based on compiler type (cannot assume C11 "_Alignas"
+ * is available) */
+#if __cplusplus >= 201103L
+#define __CUDA_ALIGN__(n) \
+    alignas(n) /* C++11 kindly gives us a keyword for this */
+#else /* !defined(__CPP_VERSION_AT_LEAST_11_FP8)*/
+#if defined(__GNUC__)
+#define __CUDA_ALIGN__(n) __attribute__((aligned(n)))
+#elif defined(_MSC_VER)
+#define __CUDA_ALIGN__(n) __declspec(align(n))
+#else
+#define __CUDA_ALIGN__(n)
+#endif /* defined(__GNUC__) */
+#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */
+#endif /* defined(__CUDACC__) */
+#endif /* !(defined __CUDA_ALIGN__) */
+
+#if !(defined __CPP_VERSION_AT_LEAST_11_FP8)
+/* need c++11 for explicit operators */
+#define __CUDA_NO_FP8_CONVERSION_OPERATORS__
+#endif
+
+__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t
+__nv_cvt_double_to_fp8(const double x, const __nv_saturation_t saturate,
+                       const __nv_fp8_interpretation_t fp8_interpretation) {
+    unsigned char res;
+    unsigned long long int xbits;
+
+#if defined(__CUDACC__) || (!defined __cplusplus)
+    (void)memcpy(&xbits, &x, sizeof(x));
+#else
+    (void)std::memcpy(&xbits, &x, sizeof(x));
+#endif
+    unsigned char FP8_MAXNORM;
+    unsigned char FP8_MANTISSA_MASK;
+    unsigned short int FP8_EXP_BIAS;
+    unsigned long long int FP8_SIGNIFICAND_BITS;
+    const unsigned long long int DP_INF_BITS = 0x7FF0000000000000ULL;
+    unsigned long long int FP8_MINDENORM_O2;
+    unsigned long long int FP8_OVERFLOW_THRESHOLD;
+    unsigned long long int FP8_MINNORM;
+
+    if (fp8_interpretation == __NV_E4M3) {
+        FP8_EXP_BIAS = 7U;
+        FP8_SIGNIFICAND_BITS = 4ULL;
+        FP8_MANTISSA_MASK = 0x7U;
+        FP8_MINDENORM_O2 = 0x3F50000000000000ULL; // mindenorm/2 = 2^-10
+        FP8_OVERFLOW_THRESHOLD =
+            0x407D000000000000ULL; // maxnorm + 1/2ulp = 0x1.Cp+8 + 0x1p+4
+        FP8_MAXNORM = 0x7EU;
+        FP8_MINNORM = 0x3F90000000000000ULL; // minnorm = 2^-6
+    } else { //__NV_E5M2
+        FP8_EXP_BIAS = 15U;
+        FP8_SIGNIFICAND_BITS = 3ULL;
+        FP8_MANTISSA_MASK = 0x3U;
+        FP8_MINDENORM_O2 = 0x3EE0000000000000ULL; // mindenorm/2 = 2^-17
+        FP8_OVERFLOW_THRESHOLD =
+            0x40EE000000000000ULL -
+            1ULL; // maxnorm + 1/2ulp = 0x1.Ep+15, and -1 to have common code
+        FP8_MAXNORM = 0x7BU;
+        FP8_MINNORM = 0x3F10000000000000ULL; // minnorm = 2^-14
+    }
+
+    // 1/2 LSB of the target format, positioned in double precision mantissa
+    // helpful in midpoints detection during round-to-nearest-even step
+    const unsigned long long int FP8_DP_HALF_ULP =
+        (unsigned long long int)1ULL << (53ULL - FP8_SIGNIFICAND_BITS - 1ULL);
+    // prepare sign bit in target format
+    unsigned char sign = (unsigned char)((xbits >>
63ULL) << 7U); + // prepare exponent field in target format + unsigned char exp = + (unsigned char)((((unsigned short int)(xbits >> 52ULL)) & 0x7FFU) - + 1023U + FP8_EXP_BIAS); + // round mantissa to target format width, rounding towards zero + unsigned char mantissa = + (unsigned char)(xbits >> (53ULL - FP8_SIGNIFICAND_BITS)) & + FP8_MANTISSA_MASK; + unsigned long long int absx = xbits & 0x7FFFFFFFFFFFFFFFULL; + + if (absx <= FP8_MINDENORM_O2) { + // zero or underflow + res = 0U; + } else if (absx > DP_INF_BITS) { + // NaN + if (fp8_interpretation == __NV_E4M3) { + res = 0x7FU; + } else { + // NaN --> QNaN + res = 0x7EU | mantissa; + } + } else if (absx > FP8_OVERFLOW_THRESHOLD) { + if (saturate == __NV_SATFINITE) { + res = FP8_MAXNORM; + } else { + // __NV_NOSAT + if (fp8_interpretation == __NV_E4M3) { + // no Inf in E4M3 + res = 0x7FU; // NaN + } else { + res = 0x7CU; // Inf in E5M2 + } + } + } else if (absx >= FP8_MINNORM) { + res = (unsigned char)((exp << (FP8_SIGNIFICAND_BITS - 1U)) | mantissa); + // rounded-off bits + unsigned long long int round = + xbits & ((FP8_DP_HALF_ULP << 1ULL) - 1ULL); + // round-to-nearest-even adjustment + if ((round > FP8_DP_HALF_ULP) || + ((round == FP8_DP_HALF_ULP) && (mantissa & 1U))) { + res = (unsigned char)(res + 1U); + } + } else // Denormal range + { + unsigned char shift = (unsigned char)(1U - exp); + // add implicit leading bit + mantissa |= (unsigned char)(1U << (FP8_SIGNIFICAND_BITS - 1U)); + // additional round-off due to denormalization + res = (unsigned char)(mantissa >> shift); + + // rounded-off bits, including implicit leading bit + unsigned long long int round = + (xbits | ((unsigned long long int)1ULL << (53ULL - 1ULL))) & + ((FP8_DP_HALF_ULP << (shift + 1ULL)) - 1ULL); + // round-to-nearest-even adjustment + if ((round > (FP8_DP_HALF_ULP << shift)) || + ((round == (FP8_DP_HALF_ULP << shift)) && (res & 1U))) { + res = (unsigned char)(res + 1U); + } + } + + res |= sign; + + return (__nv_fp8_storage_t)res; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t +__nv_cvt_double2_to_fp8x2(const double2 x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation) { + __nv_fp8x2_storage_t storage = (__nv_fp8x2_storage_t)__nv_cvt_double_to_fp8( + x.y, saturate, fp8_interpretation); + storage = (__nv_fp8x2_storage_t)(storage << 8U); + storage = (__nv_fp8x2_storage_t)(storage | + __nv_cvt_double_to_fp8( + x.x, saturate, fp8_interpretation)); + return storage; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t +__nv_cvt_float_to_fp8(const float x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation) { + __nv_fp8_storage_t res = 0U; +#if (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 890) + if (saturate == __NV_SATFINITE) { + __nv_fp8x2_storage_t storage; + if (fp8_interpretation == __NV_E5M2) { + asm("{cvt.rn.satfinite.e5m2x2.f32 %0, %2, %1;}\n" + : "=h"(storage) + : "f"(x), "f"(0.0f)); + } else { + asm("{cvt.rn.satfinite.e4m3x2.f32 %0, %2, %1;}\n" + : "=h"(storage) + : "f"(x), "f"(0.0f)); + } + res = (__nv_fp8_storage_t)storage; + } else +#endif + { + unsigned int xbits; +#if defined(__CUDACC__) || (!defined __cplusplus) + (void)memcpy(&xbits, &x, sizeof(x)); +#else + (void)std::memcpy(&xbits, &x, sizeof(x)); +#endif + + // isnan + if ((xbits & 0x7FFFFFFFU) > 0x7F800000U) { + // Canonical NaN + xbits = 0x7FFFFFFFU; + } + + float fx; +#if defined(__CUDACC__) || (!defined __cplusplus) + (void)memcpy(&fx, &xbits, sizeof(xbits)); +#else + (void)std::memcpy(&fx, &xbits, 
sizeof(xbits)); +#endif + + const double dx = (double)fx; + res = __nv_cvt_double_to_fp8(dx, saturate, fp8_interpretation); + } + return res; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t +__nv_cvt_float2_to_fp8x2(const float2 x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation) { + __nv_fp8x2_storage_t storage; +#if (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 890) + if (saturate == __NV_SATFINITE) { + if (fp8_interpretation == __NV_E5M2) { + asm("{cvt.rn.satfinite.e5m2x2.f32 %0, %2, %1;}\n" + : "=h"(storage) + : "f"(x.x), "f"(x.y)); + } else { + asm("{cvt.rn.satfinite.e4m3x2.f32 %0, %2, %1;}\n" + : "=h"(storage) + : "f"(x.x), "f"(x.y)); + } + } else +#endif + { + storage = (__nv_fp8x2_storage_t)__nv_cvt_float_to_fp8( + x.y, saturate, fp8_interpretation); + storage = (__nv_fp8x2_storage_t)(storage << 8U); + storage = (__nv_fp8x2_storage_t)(storage | __nv_cvt_float_to_fp8( + x.x, saturate, + fp8_interpretation)); + } + return storage; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ float +__internal_halfraw_to_float(const __half_raw x) { + float f; +#if (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 530) + asm("{cvt.f32.f16 %0, %1;}\n" : "=f"(f) : "h"(x.x)); +#else + const unsigned int ux = (unsigned int)x.x; + unsigned int sign = (ux >> 15U) & 1U; + unsigned int exponent = (ux >> 10U) & 0x1fU; + unsigned int mantissa = (ux & 0x3ffU) << 13U; + if (exponent == 0x1fU) { /* NaN or Inf */ + /* discard sign of a NaN */ + sign = ((mantissa != 0U) ? (sign >> 1U) : sign); + mantissa = ((mantissa != 0U) ? 0x7fffffU : 0U); + exponent = 0xffU; + } else if (exponent == 0U) { /* Denorm or Zero */ + if (mantissa != 0U) { + unsigned int msb; + exponent = 0x71U; + do { + msb = (mantissa & 0x400000U); + mantissa <<= 1U; /* normalize */ + --exponent; + } while (msb == 0U); + mantissa &= 0x7fffffU; /* 1.mantissa is implicit */ + } + } else { + exponent += 0x70U; + } + const unsigned int u = ((sign << 31U) | (exponent << 23U) | mantissa); +#if defined(__CUDACC__) || (!defined __cplusplus) + (void)memcpy(&f, &u, sizeof(u)); +#else + (void)std::memcpy(&f, &u, sizeof(u)); +#endif +#endif /* (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 530) */ + return f; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ float2 +__internal_halfraw2_to_float2(const __half2_raw x) { + __half_raw raw; + float2 res; + raw.x = x.x; + res.x = __internal_halfraw_to_float(raw); + raw.x = x.y; + res.y = __internal_halfraw_to_float(raw); + return res; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t +__nv_cvt_halfraw_to_fp8(const __half_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation) { + __nv_fp8_storage_t res = 0U; +#if (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 890) + if (saturate == __NV_SATFINITE) { + unsigned int half2_storage = (unsigned int)(x.x); + __nv_fp8x2_storage_t tmp; + if (fp8_interpretation == __NV_E5M2) { + asm("{cvt.rn.satfinite.e5m2x2.f16x2 %0, %1;}\n" + : "=h"(tmp) + : "r"(half2_storage)); + } else { + asm("{cvt.rn.satfinite.e4m3x2.f16x2 %0, %1;}\n" + : "=h"(tmp) + : "r"(half2_storage)); + } + res = (__nv_fp8_storage_t)tmp; + } else +#endif + { + float fx = __internal_halfraw_to_float(x); + res = __nv_cvt_float_to_fp8(fx, saturate, fp8_interpretation); + } + return res; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t __nv_cvt_halfraw2_to_fp8x2( + const __half2_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation) { + __nv_fp8x2_storage_t tmp; +#if (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 890) + if 
(saturate == __NV_SATFINITE) { + unsigned int half2_storage; + (void)memcpy(&half2_storage, &x, sizeof(x)); + + if (fp8_interpretation == __NV_E5M2) { + asm("{cvt.rn.satfinite.e5m2x2.f16x2 %0, %1;}\n" + : "=h"(tmp) + : "r"(half2_storage)); + } else { + asm("{cvt.rn.satfinite.e4m3x2.f16x2 %0, %1;}\n" + : "=h"(tmp) + : "r"(half2_storage)); + } + } else +#endif + { + __half_raw raw; + raw.x = x.x; + __nv_fp8_storage_t lo = + __nv_cvt_halfraw_to_fp8(raw, saturate, fp8_interpretation); + raw.x = x.y; + __nv_fp8_storage_t hi = + __nv_cvt_halfraw_to_fp8(raw, saturate, fp8_interpretation); + tmp = hi; + tmp = (__nv_fp8x2_storage_t)(tmp << 8U); + tmp = (__nv_fp8x2_storage_t)(tmp | lo); + } + return tmp; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ float +__internal_bf16raw_to_float(const __nv_bfloat16_raw x) { + const unsigned int ux = ((unsigned int)x.x) << 16U; + float fx; +#if defined(__CUDACC__) || (!defined __cplusplus) + (void)memcpy(&fx, &ux, sizeof(ux)); +#else + (void)std::memcpy(&fx, &ux, sizeof(ux)); +#endif + return fx; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_bfloat16_raw +__internal_float_to_bf16raw_rz(const float x) { + unsigned int ux; + __nv_bfloat16_raw r; +#if defined(__CUDACC__) || (!defined __cplusplus) + (void)memcpy(&ux, &x, sizeof(x)); +#else + (void)std::memcpy(&ux, &x, sizeof(x)); +#endif + r.x = (unsigned short int)(ux >> 16U); + return r; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8_storage_t __nv_cvt_bfloat16raw_to_fp8( + const __nv_bfloat16_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation) { + const float fx = __internal_bf16raw_to_float(x); + const __nv_fp8_storage_t res = + __nv_cvt_float_to_fp8(fx, saturate, fp8_interpretation); + return res; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __nv_fp8x2_storage_t +__nv_cvt_bfloat16raw2_to_fp8x2( + const __nv_bfloat162_raw x, const __nv_saturation_t saturate, + const __nv_fp8_interpretation_t fp8_interpretation) { + __nv_bfloat16_raw raw; + raw.x = x.y; + __nv_fp8x2_storage_t storage = + (__nv_fp8x2_storage_t)__nv_cvt_bfloat16raw_to_fp8(raw, saturate, + fp8_interpretation); + storage = (__nv_fp8x2_storage_t)(storage << 8U); + raw.x = x.x; + storage = (__nv_fp8x2_storage_t)(storage | + __nv_cvt_bfloat16raw_to_fp8( + raw, saturate, fp8_interpretation)); + return storage; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __half2_raw +__nv_cvt_fp8x2_to_halfraw2(const __nv_fp8x2_storage_t x, + const __nv_fp8_interpretation_t fp8_interpretation); +__CUDA_HOSTDEVICE_FP8_DECL__ __half_raw +__nv_cvt_fp8_to_halfraw(const __nv_fp8_storage_t x, + const __nv_fp8_interpretation_t fp8_interpretation) { + __half_raw res; + res.x = 0U; +#if (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 890) + res.x = + __nv_cvt_fp8x2_to_halfraw2((__nv_fp8x2_storage_t)x, fp8_interpretation) + .x; +#else + unsigned short int ur = (unsigned short int)x; + ur = (unsigned short int)(ur << 8U); + + if (fp8_interpretation == __NV_E5M2) { + if ((ur & 0x7FFFU) > 0x7C00U) { + /* If NaN, return canonical NaN */ + ur = 0x7FFFU; + } + } else { // __NV_E4M3 + unsigned short int sign = ur & 0x8000U; + unsigned short int exponent = + (unsigned short int)(((ur & 0x7800U) >> 1U) + 0x2000U); + unsigned short int mantissa = (ur & 0x0700U) >> 1U; + unsigned char absx = 0x7FU & (unsigned char)x; + + if (absx == 0x7FU) // NaN + { + ur = 0x7FFFU; // fp16 canonical NaN, discard sign + } else if (exponent == 0x2000U) { + // zero or denormal + if (mantissa != 0U) { + // normalize + mantissa = (unsigned short int)(mantissa << 1U); + while ((mantissa & 0x0400U) == 0U) { + 
mantissa = (unsigned short int)(mantissa << 1U); + exponent = (unsigned short int)(exponent - 0x0400U); + } + // discard implicit leading bit + mantissa &= 0x03FFU; + } else { // Zero + exponent = 0U; + } + + ur = (sign | exponent) | mantissa; + } else { + ur = (sign | exponent) | mantissa; + } + } + res.x = ur; +#endif + return res; +} + +__CUDA_HOSTDEVICE_FP8_DECL__ __half2_raw +__nv_cvt_fp8x2_to_halfraw2(const __nv_fp8x2_storage_t x, + const __nv_fp8_interpretation_t fp8_interpretation) { + __half2_raw res; +#if (defined __CUDA_ARCH__) && (__CUDA_ARCH__ >= 890) + unsigned int half2_storage; + if (fp8_interpretation == __NV_E5M2) { + asm("{cvt.rn.f16x2.e5m2x2 %0, %1;}\n" : "=r"(half2_storage) : "h"(x)); + } else { + asm("{cvt.rn.f16x2.e4m3x2 %0, %1;}\n" : "=r"(half2_storage) : "h"(x)); + } + (void)memcpy(&res, &half2_storage, sizeof(half2_storage)); +#else + res.x = + __nv_cvt_fp8_to_halfraw((__nv_fp8_storage_t)x, fp8_interpretation).x; + res.y = __nv_cvt_fp8_to_halfraw((__nv_fp8_storage_t)(x >> 8U), + fp8_interpretation) + .x; +#endif + return res; +} + +/* All other definitions in this file are only visible to C++ compilers */ +#if defined(__cplusplus) + +/** + * \defgroup CUDA_MATH_FP8_E5M2_STRUCT C++ struct for handling fp8 data type of e5m2 kind. + * \ingroup CUDA_MATH_INTRINSIC_FP8 + */ + +/** + * \ingroup CUDA_MATH_FP8_E5M2_STRUCT + * \brief __nv_fp8_e5m2 datatype + * + * \details This structure implements the datatype for handling + * \p fp8 floating-point numbers of \p e5m2 kind: + * with 1 sign, 5 exponent, 1 implicit and 2 explicit mantissa bits. + * + * The structure implements converting constructors and operators. + */ +struct __CUDA_ALIGN__(1) __nv_fp8_e5m2 { + public: + /** + * \ingroup CUDA_MATH_FP8_E5M2_STRUCT + * Storage variable contains the \p fp8 floating-point data. + */ + __nv_fp8_storage_t __x; + + /** + * \ingroup CUDA_MATH_FP8_E5M2_STRUCT + * Constructor by default. + */ +#if defined(__CPP_VERSION_AT_LEAST_11_FP8) + __nv_fp8_e5m2() = default; +#else + __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2() {} +#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */ + +#if !defined(__CUDA_NO_FP8_CONVERSIONS__) + + /* Construct from wider FP types */ + /* Note we do avoid constructor init-list because of special host/device + * compilation rules */ + + /** + * \ingroup CUDA_MATH_FP8_E5M2_STRUCT + * Constructor from \p __half data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const __half f) { + __x = __nv_cvt_halfraw_to_fp8(static_cast<__half_raw>(f), + __NV_SATFINITE, __NV_E5M2); + } + /** + * \ingroup CUDA_MATH_FP8_E5M2_STRUCT + * Constructor from \p __nv_bfloat16 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const __nv_bfloat16 f) { + __x = __nv_cvt_bfloat16raw_to_fp8(static_cast<__nv_bfloat16_raw>(f), + __NV_SATFINITE, __NV_E5M2); + } + /** + * \ingroup CUDA_MATH_FP8_E5M2_STRUCT + * Constructor from \p float data type, relies on \p __NV_SATFINITE behavior + * for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const float f) { + __x = __nv_cvt_float_to_fp8(f, __NV_SATFINITE, __NV_E5M2); + } + /** + * \ingroup CUDA_MATH_FP8_E5M2_STRUCT + * Constructor from \p double data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. 
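+     *
+     * For example (a host-side sketch; 0.5 is exactly representable in
+     * \p e5m2, while 1.0e6 exceeds the format's maximum of 57344.0 and
+     * saturates under \p __NV_SATFINITE):
+     *
+     *    __nv_fp8_e5m2 a(0.5);   // stores 0.5
+     *    __nv_fp8_e5m2 b(1.0e6); // stores 57344.0, the e5m2 maxnorm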
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const double f) {
+        __x = __nv_cvt_double_to_fp8(f, __NV_SATFINITE, __NV_E5M2);
+    }
+
+    /* Converts from integral */
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Constructor from \p unsigned \p short \p int data type, relies on \p
+     * __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__
+    __nv_fp8_e5m2(const unsigned short int val) {
+        __x = static_cast<__nv_fp8_e5m2>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Constructor from \p unsigned \p int data type, relies on \p
+     * __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const unsigned int val) {
+        __x = static_cast<__nv_fp8_e5m2>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Constructor from \p unsigned \p long \p long \p int data type, relies on
+     * \p __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__
+    __nv_fp8_e5m2(const unsigned long long int val) {
+        __x = static_cast<__nv_fp8_e5m2>(static_cast<float>(val)).__x;
+    }
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Constructor from \p short \p int data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const short int val) {
+        __x = static_cast<__nv_fp8_e5m2>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Constructor from \p int data type, relies on \p __NV_SATFINITE behavior
+     * for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const int val) {
+        __x = static_cast<__nv_fp8_e5m2>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Constructor from \p long \p long \p int data type, relies on \p
+     * __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e5m2(const long long int val) {
+        __x = static_cast<__nv_fp8_e5m2>(static_cast<float>(val)).__x;
+    }
+
+#if !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__)
+    /* Widening FP converts */
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p __half data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator __half() const {
+        return static_cast<__half>(__nv_cvt_fp8_to_halfraw(__x, __NV_E5M2));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p float data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator float() const {
+        return __internal_halfraw_to_float(
+            __nv_cvt_fp8_to_halfraw(__x, __NV_E5M2));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p __nv_bfloat16 data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator __nv_bfloat16() const {
+        return static_cast<__nv_bfloat16>(
+            __internal_float_to_bf16raw_rz(float(*this)));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p double data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator double() const {
+        return static_cast<double>(float(*this));
+    }
+
+    /* Convert to integral */
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p unsigned \p char data type.
+     * Clamps negative and too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
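+     *
+     * For example (a host-side sketch):
+     *
+     *    (unsigned char)__nv_fp8_e5m2(-3.0f)   // 0: negative clamps to 0
+     *    (unsigned char)__nv_fp8_e5m2(300.0f)  // 255: too large clamps to 255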
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned char() const {
+        unsigned char i;
+        const float f = float(*this);
+        const unsigned char max_val = 0xFFU;
+        const unsigned char min_val = 0U;
+        const unsigned char bits = (*this).__x;
+        // saturation fixup
+        if ((bits & 0x7FU) > 0x7CU) {
+            // NaN
+            i = 0;
+        } else if (f > static_cast<float>(max_val)) {
+            // saturate maximum
+            i = max_val;
+        } else if (f < static_cast<float>(min_val)) {
+            // saturate minimum
+            i = min_val;
+        } else {
+            // normal value
+            i = static_cast<unsigned char>(f);
+        }
+        return i;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p unsigned \p short \p int data type.
+     * Clamps negative and too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned short int() const {
+        return __half2ushort_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p unsigned \p int data type.
+     * Clamps negative and too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned int() const {
+        return __half2uint_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p unsigned \p long \p long \p int data type.
+     * Clamps negative and too large inputs to the output range.
+     * \p NaN inputs convert to \p 0x8000000000000000ULL.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned long long int() const {
+        return __half2ull_rz(__half(*this));
+    }
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p signed \p char data type.
+     * Clamps too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator signed char() const {
+        signed char i;
+        const float f = float(*this);
+        const signed char max_val = (signed char)0x7FU;
+        const signed char min_val = (signed char)0x80U;
+        const unsigned char bits = (*this).__x;
+        // saturation fixup
+        if ((bits & 0x7FU) > 0x7CU) {
+            // NaN
+            i = 0;
+        } else if (f > static_cast<float>(max_val)) {
+            // saturate maximum
+            i = max_val;
+        } else if (f < static_cast<float>(min_val)) {
+            // saturate minimum
+            i = min_val;
+        } else {
+            // normal value
+            i = static_cast<signed char>(f);
+        }
+        return i;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p short \p int data type.
+     * Clamps too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator short int() const {
+        return __half2short_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p int data type.
+     * Clamps too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator int() const {
+        return __half2int_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p long \p long \p int data type.
+     * Clamps too large inputs to the output range.
+     * \p NaN inputs convert to \p 0x8000000000000000LL.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator long long int() const {
+        return __half2ll_rz(__half(*this));
+    }
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E5M2_STRUCT
+     * Conversion operator to \p bool data type.
+     * +0 and -0 inputs convert to \p false.
+     * Non-zero inputs convert to \p true.
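+     *
+     * For example, both __nv_fp8_e5m2(0.0f) and __nv_fp8_e5m2(-0.0f)
+     * convert to \p false, while every other encoding, including \p NaN
+     * and \p Inf, converts to \p true.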
+ */ + explicit __CUDA_HOSTDEVICE_FP8__ operator bool() const { + return (__x & 0x7FU) != 0U; + } +#endif /* !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) */ +#endif /* !defined(__CUDA_NO_FP8_CONVERSIONS__) */ +}; + +/** + * \defgroup CUDA_MATH_FP8X2_E5M2_STRUCT C++ struct for handling vector type of two fp8 values of e5m2 kind. + * \ingroup CUDA_MATH_INTRINSIC_FP8 + */ + +/** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * \brief __nv_fp8x2_e5m2 datatype + * + * \details This structure implements the datatype for handling two + * \p fp8 floating-point numbers of \p e5m2 kind each: + * with 1 sign, 5 exponent, 1 implicit and 2 explicit mantissa bits. + * + * The structure implements converting constructors and operators. + */ +struct __CUDA_ALIGN__(2) __nv_fp8x2_e5m2 { + public: + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Storage variable contains the vector of two \p fp8 floating-point data + * values. + */ + __nv_fp8x2_storage_t __x; + + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Constructor by default. + */ +#if defined(__CPP_VERSION_AT_LEAST_11_FP8) + __nv_fp8x2_e5m2() = default; +#else + __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e5m2() {} +#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */ + +#if !defined(__CUDA_NO_FP8_CONVERSIONS__) + + /* Construct from wider types */ + + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Constructor from \p __half2 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e5m2(const __half2 f) { + __x = __nv_cvt_halfraw2_to_fp8x2(static_cast<__half2_raw>(f), + __NV_SATFINITE, __NV_E5M2); + } + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Constructor from \p __nv_bfloat162 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e5m2(const __nv_bfloat162 f) { + __x = __nv_cvt_bfloat16raw2_to_fp8x2(static_cast<__nv_bfloat162_raw>(f), + __NV_SATFINITE, __NV_E5M2); + } + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Constructor from \p float2 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e5m2(const float2 f) { + __x = __nv_cvt_float2_to_fp8x2(f, __NV_SATFINITE, __NV_E5M2); + } + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Constructor from \p double2 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e5m2(const double2 f) { + __x = __nv_cvt_double2_to_fp8x2(f, __NV_SATFINITE, __NV_E5M2); + } + +#if !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) + /* Widening converts */ + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Conversion operator to \p __half2 data type. + */ + explicit __CUDA_HOSTDEVICE_FP8__ operator __half2() const { + return static_cast<__half2>(__nv_cvt_fp8x2_to_halfraw2(__x, __NV_E5M2)); + } + /** + * \ingroup CUDA_MATH_FP8X2_E5M2_STRUCT + * Conversion operator to \p float2 data type. 
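+     *
+     * For example, a round-trip sketch (both values are exactly
+     * representable in \p e5m2):
+     *
+     *    __nv_fp8x2_e5m2 p(make_float2(1.0f, -2.5f));
+     *    float2 f = float2(p); // {1.0f, -2.5f}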
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator float2() const {
+        return __internal_halfraw2_to_float2(
+            __nv_cvt_fp8x2_to_halfraw2(__x, __NV_E5M2));
+    }
+#endif /* !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) */
+#endif /* !defined(__CUDA_NO_FP8_CONVERSIONS__) */
+};
+
+__CUDA_HOSTDEVICE_FP8_DECL__ unsigned int
+__internal_pack_u16x2_to_u32(const unsigned short int src_lo,
+                             const unsigned short int src_hi) {
+    unsigned int dst;
+#if (defined __CUDACC__) && (defined __CUDA_ARCH__)
+    asm("{ mov.b32 %0, {%1,%2};}\n" : "=r"(dst) : "h"(src_lo), "h"(src_hi));
+#else
+    dst = (static_cast<unsigned int>(src_hi) << 16U) |
+          static_cast<unsigned int>(src_lo);
+#endif
+    return dst;
+}
+
+/**
+ * \defgroup CUDA_MATH_FP8X4_E5M2_STRUCT C++ struct for handling vector type of four fp8 values of e5m2 kind.
+ * \ingroup CUDA_MATH_INTRINSIC_FP8
+ */
+
+/**
+ * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT
+ * \brief __nv_fp8x4_e5m2 datatype
+ *
+ * \details This structure implements the datatype for handling four
+ * \p fp8 floating-point numbers of \p e5m2 kind each:
+ * with 1 sign, 5 exponent, 1 implicit and 2 explicit mantissa bits.
+ *
+ * The structure implements converting constructors and operators.
+ */
+struct __CUDA_ALIGN__(4) __nv_fp8x4_e5m2 {
+  public:
+    /**
+     * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT
+     * Storage variable contains the vector of four \p fp8 floating-point data
+     * values.
+     */
+    __nv_fp8x4_storage_t __x;
+
+    /**
+     * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT
+     * Constructor by default.
+     */
+#if defined(__CPP_VERSION_AT_LEAST_11_FP8)
+    __nv_fp8x4_e5m2() = default;
+#else
+    __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e5m2() {}
+#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */
+
+#if !defined(__CUDA_NO_FP8_CONVERSIONS__)
+
+    /* Construct from wider types */
+
+    /**
+     * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT
+     * Constructor from a pair of \p __half2 data type values,
+     * relies on \p __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e5m2(const __half2 flo,
+                                                     const __half2 fhi) {
+        const __nv_fp8x2_storage_t rlo = __nv_cvt_halfraw2_to_fp8x2(
+            static_cast<__half2_raw>(flo), __NV_SATFINITE, __NV_E5M2);
+        const __nv_fp8x2_storage_t rhi = __nv_cvt_halfraw2_to_fp8x2(
+            static_cast<__half2_raw>(fhi), __NV_SATFINITE, __NV_E5M2);
+        __x = __internal_pack_u16x2_to_u32(rlo, rhi);
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT
+     * Constructor from a pair of \p __nv_bfloat162 data type values,
+     * relies on \p __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e5m2(const __nv_bfloat162 flo,
+                                                     const __nv_bfloat162 fhi) {
+        const __nv_fp8x2_storage_t rlo = __nv_cvt_bfloat16raw2_to_fp8x2(
+            static_cast<__nv_bfloat162_raw>(flo), __NV_SATFINITE, __NV_E5M2);
+        const __nv_fp8x2_storage_t rhi = __nv_cvt_bfloat16raw2_to_fp8x2(
+            static_cast<__nv_bfloat162_raw>(fhi), __NV_SATFINITE, __NV_E5M2);
+        __x = __internal_pack_u16x2_to_u32(rlo, rhi);
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT
+     * Constructor from \p float4 vector data type,
+     * relies on \p __NV_SATFINITE behavior for out-of-range values.
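+     *
+     * The \p x component of the input lands in the least significant byte
+     * of \p __x, e.g.:
+     *
+     *    float4 v = make_float4(1.0f, 2.0f, 3.0f, 4.0f);
+     *    __nv_fp8x4_e5m2 r(v); // v.x in byte 0, v.w in byte 3 of r.__x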
+ */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e5m2(const float4 f) { + const float2 flo = {f.x, f.y}; + const float2 fhi = {f.z, f.w}; + const __nv_fp8x2_storage_t rlo = + __nv_cvt_float2_to_fp8x2(flo, __NV_SATFINITE, __NV_E5M2); + const __nv_fp8x2_storage_t rhi = + __nv_cvt_float2_to_fp8x2(fhi, __NV_SATFINITE, __NV_E5M2); + __x = __internal_pack_u16x2_to_u32(rlo, rhi); + } + /** + * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT + * Constructor from \p double4 vector data type, + * relies on \p __NV_SATFINITE behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e5m2(const double4 f) { + const double2 flo = {f.x, f.y}; + const double2 fhi = {f.z, f.w}; + const __nv_fp8x2_storage_t rlo = + __nv_cvt_double2_to_fp8x2(flo, __NV_SATFINITE, __NV_E5M2); + const __nv_fp8x2_storage_t rhi = + __nv_cvt_double2_to_fp8x2(fhi, __NV_SATFINITE, __NV_E5M2); + __x = __internal_pack_u16x2_to_u32(rlo, rhi); + } + +#if !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) + /* Widening converts */ + + /** + * \ingroup CUDA_MATH_FP8X4_E5M2_STRUCT + * Conversion operator to \p float4 vector data type. + */ + explicit __CUDA_HOSTDEVICE_FP8__ operator float4() const { + const __nv_fp8x2_storage_t slo = static_cast<__nv_fp8x2_storage_t>(__x); + const __nv_fp8x2_storage_t shi = + static_cast<__nv_fp8x2_storage_t>(__x >> 16U); + float2 rlo = __internal_halfraw2_to_float2( + __nv_cvt_fp8x2_to_halfraw2(slo, __NV_E5M2)); + float2 rhi = __internal_halfraw2_to_float2( + __nv_cvt_fp8x2_to_halfraw2(shi, __NV_E5M2)); + float4 res = {rlo.x, rlo.y, rhi.x, rhi.y}; + return res; + } +#endif /* !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) */ +#endif /* !defined(__CUDA_NO_FP8_CONVERSIONS__) */ +}; + +/** + * \defgroup CUDA_MATH_FP8_E4M3_STRUCT C++ struct for handling fp8 data type of e4m3 kind. + * \ingroup CUDA_MATH_INTRINSIC_FP8 + */ + +/** + * \ingroup CUDA_MATH_FP8_E4M3_STRUCT + * \brief __nv_fp8_e4m3 datatype + * + * \details This structure implements the datatype for storing + * \p fp8 floating-point numbers of \p e4m3 kind: + * with 1 sign, 4 exponent, 1 implicit and 3 explicit mantissa bits. + * The encoding doesn't support Infinity. + * NaNs are limited to 0x7F and 0xFF values. + * + * The structure implements converting constructors and operators. + */ +struct __CUDA_ALIGN__(1) __nv_fp8_e4m3 { + public: + /** + * \ingroup CUDA_MATH_FP8_E4M3_STRUCT + * Storage variable contains the \p fp8 floating-point data. + */ + __nv_fp8_storage_t __x; + + /** + * \ingroup CUDA_MATH_FP8_E4M3_STRUCT + * Constructor by default. + */ +#if defined(__CPP_VERSION_AT_LEAST_11_FP8) + __nv_fp8_e4m3() = default; +#else + __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3() {} +#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */ + +#if !defined(__CUDA_NO_FP8_CONVERSIONS__) + + /* Construct from wider FP types */ + /* Note we do avoid constructor init-list because of special host/device + * compilation rules */ + + /** + * \ingroup CUDA_MATH_FP8_E4M3_STRUCT + * Constructor from \p __half data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const __half f) { + __x = __nv_cvt_halfraw_to_fp8(static_cast<__half_raw>(f), + __NV_SATFINITE, __NV_E4M3); + } + /** + * \ingroup CUDA_MATH_FP8_E4M3_STRUCT + * Constructor from \p __nv_bfloat16 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. 
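+     *
+     * Internally the input is widened to \p float first (via
+     * __nv_cvt_bfloat16raw_to_fp8 above), so the result matches
+     * __nv_fp8_e4m3(float(f)).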
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const __nv_bfloat16 f) {
+        __x = __nv_cvt_bfloat16raw_to_fp8(static_cast<__nv_bfloat16_raw>(f),
+                                          __NV_SATFINITE, __NV_E4M3);
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p float data type, relies on \p __NV_SATFINITE behavior
+     * for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const float f) {
+        __x = __nv_cvt_float_to_fp8(f, __NV_SATFINITE, __NV_E4M3);
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p double data type, relies on \p __NV_SATFINITE
+     * behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const double f) {
+        __x = __nv_cvt_double_to_fp8(f, __NV_SATFINITE, __NV_E4M3);
+    }
+
+    /* Converts from integral */
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p unsigned \p short \p int data type, relies on \p
+     * __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__
+    __nv_fp8_e4m3(const unsigned short int val) {
+        __x = static_cast<__nv_fp8_e4m3>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p unsigned \p int data type, relies on \p
+     * __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const unsigned int val) {
+        __x = static_cast<__nv_fp8_e4m3>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p unsigned \p long \p long \p int data type, relies on
+     * \p __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__
+    __nv_fp8_e4m3(const unsigned long long int val) {
+        __x = static_cast<__nv_fp8_e4m3>(static_cast<float>(val)).__x;
+    }
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p short \p int data type, relies on \p
+     * __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const short int val) {
+        __x = static_cast<__nv_fp8_e4m3>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p int data type, relies on \p __NV_SATFINITE behavior
+     * for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const int val) {
+        __x = static_cast<__nv_fp8_e4m3>(static_cast<float>(val)).__x;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Constructor from \p long \p long \p int data type, relies on \p
+     * __NV_SATFINITE behavior for out-of-range values.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8_e4m3(const long long int val) {
+        __x = static_cast<__nv_fp8_e4m3>(static_cast<float>(val)).__x;
+    }
+
+#if !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__)
+    /* Widening FP converts */
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p __half data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator __half() const {
+        return static_cast<__half>(__nv_cvt_fp8_to_halfraw(__x, __NV_E4M3));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p float data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator float() const {
+        return __internal_halfraw_to_float(
+            __nv_cvt_fp8_to_halfraw(__x, __NV_E4M3));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p __nv_bfloat16 data type.
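+     *
+     * The value is widened to \p float and then truncated to
+     * \p __nv_bfloat16 (round-towards-zero); this is exact, since every
+     * \p e4m3 value is representable in \p bfloat16.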
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator __nv_bfloat16() const {
+        return static_cast<__nv_bfloat16>(
+            __internal_float_to_bf16raw_rz(float(*this)));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p double data type.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator double() const {
+        return static_cast<double>(float(*this));
+    }
+
+    /* Convert to integral */
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p unsigned \p char data type.
+     * Clamps negative and too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned char() const {
+        unsigned char i;
+        const float f = float(*this);
+        const unsigned char max_val = 0xFFU;
+        const unsigned char min_val = 0U;
+        const unsigned char bits = (*this).__x;
+        // saturation fixup
+        if ((bits & 0x7FU) == 0x7FU) {
+            // NaN
+            i = 0;
+        } else if (f > static_cast<float>(max_val)) {
+            // saturate maximum
+            i = max_val;
+        } else if (f < static_cast<float>(min_val)) {
+            // saturate minimum
+            i = min_val;
+        } else {
+            // normal value
+            i = static_cast<unsigned char>(f);
+        }
+        return i;
+    }
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p unsigned \p short \p int data type.
+     * Clamps negative inputs to zero.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned short int() const {
+        return __half2ushort_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p unsigned \p int data type.
+     * Clamps negative inputs to zero.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned int() const {
+        return __half2uint_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p unsigned \p long \p long \p int data type.
+     * Clamps negative inputs to zero.
+     * \p NaN inputs convert to \p 0x8000000000000000ULL.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator unsigned long long int() const {
+        return __half2ull_rz(__half(*this));
+    }
+
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p signed \p char data type.
+     * Clamps too large inputs to the output range.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator signed char() const {
+        signed char i;
+        const float f = float(*this);
+        const signed char max_val = (signed char)0x7FU;
+        const signed char min_val = (signed char)0x80U;
+        const unsigned char bits = (*this).__x;
+        // saturation fixup
+        if ((bits & 0x7FU) == 0x7FU) {
+            // NaN
+            i = 0;
+        } else if (f > static_cast<float>(max_val)) {
+            // saturate maximum
+            i = max_val;
+        } else if (f < static_cast<float>(min_val)) {
+            // saturate minimum
+            i = min_val;
+        } else {
+            // normal value
+            i = static_cast<signed char>(f);
+        }
+        return i;
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p short \p int data type.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator short int() const {
+        return __half2short_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p int data type.
+     * \p NaN inputs convert to \p zero.
+     */
+    explicit __CUDA_HOSTDEVICE_FP8__ operator int() const {
+        return __half2int_rz(__half(*this));
+    }
+    /**
+     * \ingroup CUDA_MATH_FP8_E4M3_STRUCT
+     * Conversion operator to \p long \p long \p int data type.
+     * \p NaN inputs convert to \p 0x8000000000000000LL.
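+     *
+     * For example, (long long int)__nv_fp8_e4m3(-1.75f) evaluates to -1
+     * under the round-towards-zero conversion used here.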
+ */ + explicit __CUDA_HOSTDEVICE_FP8__ operator long long int() const { + return __half2ll_rz(__half(*this)); + } + + /** + * \ingroup CUDA_MATH_FP8_E4M3_STRUCT + * Conversion operator to \p bool data type. + * +0 and -0 inputs convert to \p false. + * Non-zero inputs convert to \p true. + */ + explicit __CUDA_HOSTDEVICE_FP8__ operator bool() const { + return (__x & 0x7FU) != 0U; + } +#endif /* !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) */ +#endif /* !defined(__CUDA_NO_FP8_CONVERSIONS__) */ +}; + +/** + * \defgroup CUDA_MATH_FP8X2_E4M3_STRUCT C++ struct for handling vector type of two fp8 values of e4m3 kind. + * \ingroup CUDA_MATH_INTRINSIC_FP8 + */ + +/** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * \brief __nv_fp8x2_e4m3 datatype + * + * \details This structure implements the datatype for storage + * and operations on the vector of two \p fp8 values of \p e4m3 kind each: + * with 1 sign, 4 exponent, 1 implicit and 3 explicit mantissa bits. + * The encoding doesn't support Infinity. + * NaNs are limited to 0x7F and 0xFF values. + */ +struct __CUDA_ALIGN__(2) __nv_fp8x2_e4m3 { + public: + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Storage variable contains the vector of two \p fp8 floating-point data + * values. + */ + __nv_fp8x2_storage_t __x; + + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Constructor by default. + */ +#if defined(__CPP_VERSION_AT_LEAST_11_FP8) + __nv_fp8x2_e4m3() = default; +#else + __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e4m3() {} +#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */ + +#if !defined(__CUDA_NO_FP8_CONVERSIONS__) + + /* Construct from wider types */ + + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Constructor from \p __half2 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e4m3(const __half2 f) { + __x = __nv_cvt_halfraw2_to_fp8x2(static_cast<__half2_raw>(f), + __NV_SATFINITE, __NV_E4M3); + } + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Constructor from \p __nv_bfloat162 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e4m3(const __nv_bfloat162 f) { + __x = __nv_cvt_bfloat16raw2_to_fp8x2(static_cast<__nv_bfloat162_raw>(f), + __NV_SATFINITE, __NV_E4M3); + } + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Constructor from \p float2 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e4m3(const float2 f) { + __x = __nv_cvt_float2_to_fp8x2(f, __NV_SATFINITE, __NV_E4M3); + } + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Constructor from \p double2 data type, relies on \p __NV_SATFINITE + * behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x2_e4m3(const double2 f) { + __x = __nv_cvt_double2_to_fp8x2(f, __NV_SATFINITE, __NV_E4M3); + } + +#if !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) + /* Widening converts */ + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Conversion operator to \p __half2 data type. + */ + explicit __CUDA_HOSTDEVICE_FP8__ operator __half2() const { + return static_cast<__half2>(__nv_cvt_fp8x2_to_halfraw2(__x, __NV_E4M3)); + } + /** + * \ingroup CUDA_MATH_FP8X2_E4M3_STRUCT + * Conversion operator to \p float2 data type. 
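+     *
+     * For example, a round-trip sketch (448.0 is the \p e4m3 maxnorm):
+     *
+     *    __nv_fp8x2_e4m3 p(make_float2(0.5f, 448.0f));
+     *    float2 f = float2(p); // {0.5f, 448.0f}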
+ */ + explicit __CUDA_HOSTDEVICE_FP8__ operator float2() const { + return __internal_halfraw2_to_float2( + __nv_cvt_fp8x2_to_halfraw2(__x, __NV_E4M3)); + } +#endif /* !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) */ +#endif /* !defined(__CUDA_NO_FP8_CONVERSIONS__) */ +}; + +/** + * \defgroup CUDA_MATH_FP8X4_E4M3_STRUCT C++ struct for handling vector type of four fp8 values of e4m3 kind. + * \ingroup CUDA_MATH_INTRINSIC_FP8 + */ + +/** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * \brief __nv_fp8x4_e4m3 datatype + * + * \details This structure implements the datatype for storage + * and operations on the vector of four \p fp8 values of \p e4m3 kind each: + * with 1 sign, 4 exponent, 1 implicit and 3 explicit mantissa bits. + * The encoding doesn't support Infinity. + * NaNs are limited to 0x7F and 0xFF values. + */ +struct __CUDA_ALIGN__(4) __nv_fp8x4_e4m3 { + public: + /** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * Storage variable contains the vector of four \p fp8 floating-point data + * values. + */ + __nv_fp8x4_storage_t __x; + + /** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * Constructor by default. + */ +#if defined(__CPP_VERSION_AT_LEAST_11_FP8) + __nv_fp8x4_e4m3() = default; +#else + __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e4m3() {} +#endif /* defined(__CPP_VERSION_AT_LEAST_11_FP8) */ + +#if !defined(__CUDA_NO_FP8_CONVERSIONS__) + + /* Construct from wider types */ + + /** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * Constructor from a pair of \p __half2 data type values, + * relies on \p __NV_SATFINITE behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e4m3(const __half2 flo, + const __half2 fhi) { + const __nv_fp8x2_storage_t rlo = __nv_cvt_halfraw2_to_fp8x2( + static_cast<__half2_raw>(flo), __NV_SATFINITE, __NV_E4M3); + const __nv_fp8x2_storage_t rhi = __nv_cvt_halfraw2_to_fp8x2( + static_cast<__half2_raw>(fhi), __NV_SATFINITE, __NV_E4M3); + __x = __internal_pack_u16x2_to_u32(rlo, rhi); + } + /** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * Constructor from a pair of \p __nv_bfloat162 data type values, + * relies on \p __NV_SATFINITE behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e4m3(const __nv_bfloat162 flo, + const __nv_bfloat162 fhi) { + const __nv_fp8x2_storage_t rlo = __nv_cvt_bfloat16raw2_to_fp8x2( + static_cast<__nv_bfloat162_raw>(flo), __NV_SATFINITE, __NV_E4M3); + const __nv_fp8x2_storage_t rhi = __nv_cvt_bfloat16raw2_to_fp8x2( + static_cast<__nv_bfloat162_raw>(fhi), __NV_SATFINITE, __NV_E4M3); + __x = __internal_pack_u16x2_to_u32(rlo, rhi); + } + /** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * Constructor from \p float4 vector data type, + * relies on \p __NV_SATFINITE behavior for out-of-range values. + */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e4m3(const float4 f) { + const float2 flo = {f.x, f.y}; + const float2 fhi = {f.z, f.w}; + const __nv_fp8x2_storage_t rlo = + __nv_cvt_float2_to_fp8x2(flo, __NV_SATFINITE, __NV_E4M3); + const __nv_fp8x2_storage_t rhi = + __nv_cvt_float2_to_fp8x2(fhi, __NV_SATFINITE, __NV_E4M3); + __x = __internal_pack_u16x2_to_u32(rlo, rhi); + } + /** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * Constructor from \p double4 vector data type, + * relies on \p __NV_SATFINITE behavior for out-of-range values. 
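+     *
+     * Each lane saturates independently under \p __NV_SATFINITE; e.g. a
+     * lane holding 1.0e6 becomes 448.0, the \p e4m3 maximum, rather than
+     * \p Inf or \p NaN.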
+ */ + explicit __CUDA_HOSTDEVICE_FP8__ __nv_fp8x4_e4m3(const double4 f) { + const double2 flo = {f.x, f.y}; + const double2 fhi = {f.z, f.w}; + const __nv_fp8x2_storage_t rlo = + __nv_cvt_double2_to_fp8x2(flo, __NV_SATFINITE, __NV_E4M3); + const __nv_fp8x2_storage_t rhi = + __nv_cvt_double2_to_fp8x2(fhi, __NV_SATFINITE, __NV_E4M3); + __x = __internal_pack_u16x2_to_u32(rlo, rhi); + } + +#if !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) + /* Widening converts */ + + /** + * \ingroup CUDA_MATH_FP8X4_E4M3_STRUCT + * Conversion operator to \p float4 vector data type. + */ + explicit __CUDA_HOSTDEVICE_FP8__ operator float4() const { + const __nv_fp8x2_storage_t slo = static_cast<__nv_fp8x2_storage_t>(__x); + const __nv_fp8x2_storage_t shi = + static_cast<__nv_fp8x2_storage_t>(__x >> 16U); + float2 rlo = __internal_halfraw2_to_float2( + __nv_cvt_fp8x2_to_halfraw2(slo, __NV_E4M3)); + float2 rhi = __internal_halfraw2_to_float2( + __nv_cvt_fp8x2_to_halfraw2(shi, __NV_E4M3)); + float4 res = {rlo.x, rlo.y, rhi.x, rhi.y}; + return res; + } +#endif /* !defined(__CUDA_NO_FP8_CONVERSION_OPERATORS__) */ +#endif /* !defined(__CUDA_NO_FP8_CONVERSIONS__) */ +}; + +#endif /* defined(__cplusplus) */ + +#endif /* end of include guard: __CUDA_FP8_HPP__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_occupancy.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_occupancy.h new file mode 100644 index 0000000000000000000000000000000000000000..ffe55709f8ccdebf7341180f043006b68c08e104 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_occupancy.h @@ -0,0 +1,1958 @@ +/* + * Copyright 1993-2017 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 
2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+/**
+ * CUDA Occupancy Calculator
+ *
+ * NAME
+ *
+ *   cudaOccMaxActiveBlocksPerMultiprocessor,
+ *   cudaOccMaxPotentialOccupancyBlockSize,
+ *   cudaOccMaxPotentialOccupancyBlockSizeVariableSMem,
+ *   cudaOccAvailableDynamicSMemPerBlock
+ *
+ * DESCRIPTION
+ *
+ *   The CUDA occupancy calculator provides a standalone, programmatic
+ *   interface to compute the occupancy of a function on a device. It can also
+ *   provide occupancy-oriented launch configuration suggestions.
+ *
+ *   The function and device are defined by the user through
+ *   cudaOccFuncAttributes, cudaOccDeviceProp, and cudaOccDeviceState
+ *   structures. All APIs require all 3 of them.
+ *
+ *   See the structure definition for more details about the device / function
+ *   descriptors.
+ *
+ *   See each API's prototype for API usage.
+ *
+ * COMPATIBILITY
+ *
+ *   The occupancy calculator will be updated on each major CUDA toolkit
+ *   release. It does not provide forward compatibility, i.e. newer hardware
+ *   released after this implementation's release will not be supported.
+ *
+ * NOTE
+ *
+ *   If there is access to the CUDA runtime, and the sole intent is to
+ *   calculate occupancy-related values on one of the accessible CUDA devices,
+ *   using the CUDA runtime's occupancy calculation APIs is recommended.
+ *
+ */
+
+#ifndef __cuda_occupancy_h__
+#define __cuda_occupancy_h__
+
+#include <stddef.h>
+#include <limits.h>
+#include <string.h>
+
+
+// __OCC_INLINE will be undefined at the end of this header
+//
+#ifdef __CUDACC__
+#define __OCC_INLINE inline __host__ __device__
+#elif defined _MSC_VER
+#define __OCC_INLINE __inline
+#else // GNUCC assumed
+#define __OCC_INLINE inline
+#endif
+
+enum cudaOccError_enum {
+    CUDA_OCC_SUCCESS = 0,             // no error encountered
+    CUDA_OCC_ERROR_INVALID_INPUT = 1, // input parameter is invalid
+    CUDA_OCC_ERROR_UNKNOWN_DEVICE = 2, // requested device is not supported in
+                                       // current implementation or device is
+                                       // invalid
+};
+typedef enum cudaOccError_enum cudaOccError;
+
+typedef struct cudaOccResult cudaOccResult;
+typedef struct cudaOccDeviceProp cudaOccDeviceProp;
+typedef struct cudaOccFuncAttributes cudaOccFuncAttributes;
+typedef struct cudaOccDeviceState cudaOccDeviceState;
+
+/**
+ * The CUDA occupancy calculator computes the occupancy of the function
+ * described by attributes with the given block size (blockSize), static device
+ * properties (properties), dynamic device states (state) and per-block dynamic
+ * shared memory allocation (dynamicSMemSize) in bytes, and outputs it through
+ * result along with other useful information. The occupancy is computed in
+ * terms of the maximum number of active blocks per multiprocessor. The user can
+ * then convert it to other metrics, such as the number of active warps.
+ *
+ * RETURN VALUE
+ *
+ * The occupancy and related information is returned through result.
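+ *
+ * For example, for block sizes that are a multiple of properties->warpSize,
+ * the number of active warps per multiprocessor can then be derived as:
+ *
+ *    activeWarps = result->activeBlocksPerMultiprocessor
+ *                  * blockSize / properties->warpSize;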
+ *
+ * If result->activeBlocksPerMultiprocessor is 0, then the given parameter
+ * combination cannot run on the device.
+ *
+ * ERRORS
+ *
+ *     CUDA_OCC_ERROR_INVALID_INPUT   input parameter is invalid.
+ *     CUDA_OCC_ERROR_UNKNOWN_DEVICE  requested device is not supported in
+ *     current implementation or device is invalid
+ */
+static __OCC_INLINE
+cudaOccError cudaOccMaxActiveBlocksPerMultiprocessor(
+    cudaOccResult *result, // out
+    const cudaOccDeviceProp *properties, // in
+    const cudaOccFuncAttributes *attributes, // in
+    const cudaOccDeviceState *state, // in
+    int blockSize, // in
+    size_t dynamicSmemSize); // in
+
+/**
+ * The CUDA launch configurator C API suggests a grid / block size pair (in
+ * minGridSize and blockSize) that achieves the best potential occupancy
+ * (i.e. maximum number of active warps with the smallest number of blocks) for
+ * the given function described by attributes, on a device described by
+ * properties with settings in state.
+ *
+ * If per-block dynamic shared memory allocation is not needed, the user should
+ * leave both blockSizeToDynamicSMemSize and dynamicSMemSize as 0.
+ *
+ * If per-block dynamic shared memory allocation is needed, then if the dynamic
+ * shared memory size is constant regardless of block size, the size should be
+ * passed through dynamicSMemSize, and blockSizeToDynamicSMemSize should be
+ * NULL.
+ *
+ * Otherwise, if the per-block dynamic shared memory size varies with different
+ * block sizes, the user needs to provide a pointer to a unary function through
+ * blockSizeToDynamicSMemSize that computes the dynamic shared memory needed by
+ * a block of the function for any given block size. dynamicSMemSize is
+ * ignored. An example signature is:
+ *
+ *    // Take block size, returns dynamic shared memory needed
+ *    size_t blockToSmem(int blockSize);
+ *
+ * RETURN VALUE
+ *
+ * The suggested block size and the minimum number of blocks needed to achieve
+ * the maximum occupancy are returned through blockSize and minGridSize.
+ *
+ * If *blockSize is 0, then the given combination cannot run on the device.
+ *
+ * ERRORS
+ *
+ *     CUDA_OCC_ERROR_INVALID_INPUT   input parameter is invalid.
+ *     CUDA_OCC_ERROR_UNKNOWN_DEVICE  requested device is not supported in
+ *     current implementation or device is invalid
+ *
+ */
+static __OCC_INLINE
+cudaOccError cudaOccMaxPotentialOccupancyBlockSize(
+    int *minGridSize, // out
+    int *blockSize, // out
+    const cudaOccDeviceProp *properties, // in
+    const cudaOccFuncAttributes *attributes, // in
+    const cudaOccDeviceState *state, // in
+    size_t (*blockSizeToDynamicSMemSize)(int), // in
+    size_t dynamicSMemSize); // in
+
+/**
+ * The CUDA launch configurator C++ API suggests a grid / block size pair (in
+ * minGridSize and blockSize) that achieves the best potential occupancy
+ * (i.e. the maximum number of active warps with the smallest number of blocks)
+ * for the given function described by attributes, on a device described by
+ * properties with settings in state.
+ *
+ * If per-block dynamic shared memory allocation is 0 or constant regardless of
+ * block size, the user can use cudaOccMaxPotentialOccupancyBlockSize to
+ * configure the launch. A constant dynamic shared memory allocation size in
+ * bytes can be passed through dynamicSMemSize.
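+ *
+ * A minimal sketch, assuming occProp, occAttr and occState have already
+ * been initialized for the target function and device:
+ *
+ *    int minGridSize = 0;
+ *    int blockSize = 0;
+ *    cudaOccMaxPotentialOccupancyBlockSize(&minGridSize, &blockSize,
+ *                                          &occProp, &occAttr, &occState,
+ *                                          4096); // 4 KiB dynamic smem/block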
+
+ * Otherwise, if the per-block dynamic shared memory size varies with different
+ * block sizes, the user needs to use
+ * cudaOccMaxPotentialOccupancyBlockSizeVariableSMem instead, and provide a
+ * functor / pointer to a unary function (blockSizeToDynamicSMemSize) that
+ * computes the dynamic shared memory needed by func for any given block
+ * size. An example signature is:
+ *
+ *    // Take block size, returns per-block dynamic shared memory needed
+ *    size_t blockToSmem(int blockSize);
+ *
+ * RETURN VALUE
+ *
+ * The suggested block size and the minimum number of blocks needed to achieve
+ * the maximum occupancy are returned through blockSize and minGridSize.
+ *
+ * If *blockSize is 0, then the given combination cannot run on the device.
+ *
+ * ERRORS
+ *
+ *     CUDA_OCC_ERROR_INVALID_INPUT   input parameter is invalid.
+ *     CUDA_OCC_ERROR_UNKNOWN_DEVICE  requested device is not supported in
+ *     current implementation or device is invalid
+ *
+ */
+
+#if defined(__cplusplus)
+namespace {
+
+__OCC_INLINE
+cudaOccError cudaOccMaxPotentialOccupancyBlockSize(
+    int                         *minGridSize,          // out
+    int                         *blockSize,            // out
+    const cudaOccDeviceProp     *properties,           // in
+    const cudaOccFuncAttributes *attributes,           // in
+    const cudaOccDeviceState    *state,                // in
+    size_t                       dynamicSMemSize = 0); // in
+
+template <typename UnaryFunction>
+__OCC_INLINE
+cudaOccError cudaOccMaxPotentialOccupancyBlockSizeVariableSMem(
+    int                         *minGridSize,          // out
+    int                         *blockSize,            // out
+    const cudaOccDeviceProp     *properties,           // in
+    const cudaOccFuncAttributes *attributes,           // in
+    const cudaOccDeviceState    *state,                // in
+    UnaryFunction                blockSizeToDynamicSMemSize); // in
+
+} // namespace anonymous
+#endif // defined(__cplusplus)
+
+/**
+ *
+ * The CUDA dynamic shared memory calculator computes the maximum size of
+ * per-block dynamic shared memory if we want to place numBlocks blocks
+ * on an SM.
+ *
+ * RETURN VALUE
+ *
+ * Returns in *dynamicSmemSize the maximum size of dynamic shared memory to allow
+ * numBlocks blocks per SM.
+ *
+ * ERRORS
+ *
+ *     CUDA_OCC_ERROR_INVALID_INPUT   input parameter is invalid.
+ *     CUDA_OCC_ERROR_UNKNOWN_DEVICE  requested device is not supported in
+ *     current implementation or device is invalid
+ *
+ */
+static __OCC_INLINE
+cudaOccError cudaOccAvailableDynamicSMemPerBlock(
+    size_t                      *dynamicSmemSize,
+    const cudaOccDeviceProp     *properties,
+    const cudaOccFuncAttributes *attributes,
+    const cudaOccDeviceState    *state,
+    int                          numBlocks,
+    int                          blockSize);
+
+/**
+ * Data structures
+ *
+ * These structures are subject to change for future architecture and CUDA
+ * releases. C users should initialize the structure as {0}.
+ *
+ */
+
+/**
+ * Device descriptor
+ *
+ * This structure describes a device.
+ */
+struct cudaOccDeviceProp {
+    int    computeMajor;                // Compute capability major version
+    int    computeMinor;                // Compute capability minor version.
+                                        // An unsupported minor version may
+                                        // cause an error
+    int    maxThreadsPerBlock;          // Maximum number of threads per block
+    int    maxThreadsPerMultiprocessor; // Maximum number of threads per SM
+                                        // i.e. (Max.
number of warps) x (warp
+                                        // size)
+    int    regsPerBlock;                // Maximum number of registers per block
+    int    regsPerMultiprocessor;       // Maximum number of registers per SM
+    int    warpSize;                    // Warp size
+    size_t sharedMemPerBlock;           // Maximum shared memory size per block
+    size_t sharedMemPerMultiprocessor;  // Maximum shared memory size per SM
+    int    numSms;                      // Number of SMs available
+    size_t sharedMemPerBlockOptin;      // Maximum optin shared memory size per block
+    size_t reservedSharedMemPerBlock;   // Shared memory per block reserved by driver
+
+#ifdef __cplusplus
+    // This structure can be converted from a cudaDeviceProp structure for users
+    // that use this header in their CUDA applications.
+    //
+    // If the application has access to the CUDA Runtime API, the application
+    // can obtain the device properties of a CUDA device through
+    // cudaGetDeviceProperties, and initialize a cudaOccDeviceProp with the
+    // cudaDeviceProp structure.
+    //
+    // Example:
+    /*
+     {
+         cudaDeviceProp prop;
+
+         cudaGetDeviceProperties(&prop, ...);
+
+         cudaOccDeviceProp occProp = prop;
+
+         ...
+
+         cudaOccMaxPotentialOccupancyBlockSize(..., &occProp, ...);
+     }
+     */
+    //
+    template <typename DeviceProp>
+    __OCC_INLINE
+    cudaOccDeviceProp(const DeviceProp &props)
+    :   computeMajor                (props.major),
+        computeMinor                (props.minor),
+        maxThreadsPerBlock          (props.maxThreadsPerBlock),
+        maxThreadsPerMultiprocessor (props.maxThreadsPerMultiProcessor),
+        regsPerBlock                (props.regsPerBlock),
+        regsPerMultiprocessor       (props.regsPerMultiprocessor),
+        warpSize                    (props.warpSize),
+        sharedMemPerBlock           (props.sharedMemPerBlock),
+        sharedMemPerMultiprocessor  (props.sharedMemPerMultiprocessor),
+        numSms                      (props.multiProcessorCount),
+        sharedMemPerBlockOptin      (props.sharedMemPerBlockOptin),
+        reservedSharedMemPerBlock   (props.reservedSharedMemPerBlock)
+    {}
+
+    __OCC_INLINE
+    cudaOccDeviceProp()
+    :   computeMajor                (0),
+        computeMinor                (0),
+        maxThreadsPerBlock          (0),
+        maxThreadsPerMultiprocessor (0),
+        regsPerBlock                (0),
+        regsPerMultiprocessor       (0),
+        warpSize                    (0),
+        sharedMemPerBlock           (0),
+        sharedMemPerMultiprocessor  (0),
+        numSms                      (0),
+        sharedMemPerBlockOptin      (0),
+        reservedSharedMemPerBlock   (0)
+    {}
+#endif // __cplusplus
+};
+
+/**
+ * Partitioned global caching option
+ */
+typedef enum cudaOccPartitionedGCConfig_enum {
+    PARTITIONED_GC_OFF,        // Disable partitioned global caching
+    PARTITIONED_GC_ON,         // Prefer partitioned global caching
+    PARTITIONED_GC_ON_STRICT   // Force partitioned global caching
+} cudaOccPartitionedGCConfig;
+
+/**
+ * Per function opt in maximum dynamic shared memory limit
+ */
+typedef enum cudaOccFuncShmemConfig_enum {
+    FUNC_SHMEM_LIMIT_DEFAULT,  // Default shmem limit
+    FUNC_SHMEM_LIMIT_OPTIN,    // Use the optin shmem limit
+} cudaOccFuncShmemConfig;
+
+/**
+ * Function descriptor
+ *
+ * This structure describes a CUDA function.
+ */
+struct cudaOccFuncAttributes {
+    int maxThreadsPerBlock; // Maximum block size the function can work with. If
+                            // unlimited, use INT_MAX or any value greater than
+                            // or equal to maxThreadsPerBlock of the device
+    int numRegs;            // Number of registers used. When the function is
+                            // launched on device, the register count may change
+                            // due to internal tools requirements.
+    size_t sharedSizeBytes; // Number of static shared memory used
+
+    cudaOccPartitionedGCConfig partitionedGCConfig;
+                            // Partitioned global caching is required to enable
+                            // caching on certain chips, such as sm_52
+                            // devices.
+                            // Partitioned global caching can be
+                            // automatically disabled if the occupancy
+                            // requirement of the launch cannot support caching.
+                            //
+                            // To override this behavior with caching on and
+                            // calculate occupancy strictly according to the
+                            // preference, set partitionedGCConfig to
+                            // PARTITIONED_GC_ON_STRICT. This is especially
+                            // useful for experimenting and finding launch
+                            // configurations (MaxPotentialOccupancyBlockSize)
+                            // that allow global caching to take effect.
+                            //
+                            // This flag only affects the occupancy calculation.
+
+    cudaOccFuncShmemConfig shmemLimitConfig;
+                            // Certain chips like sm_70 allow a user to opt into
+                            // a higher per-block limit of dynamic shared memory.
+                            // This opt-in is performed on a per-function basis
+                            // using the cuFuncSetAttribute function.
+
+    size_t maxDynamicSharedSizeBytes;
+                            // User-set limit on the maximum dynamic shared
+                            // memory usable by the kernel.
+                            // This limit is set using the cuFuncSetAttribute
+                            // function.
+
+    int numBlockBarriers;   // Number of block barriers used (defaults to 1)
+#ifdef __cplusplus
+    // This structure can be converted from a cudaFuncAttributes structure for
+    // users that use this header in their CUDA applications.
+    //
+    // If the application has access to the CUDA Runtime API, the application
+    // can obtain the function attributes of a CUDA kernel function through
+    // cudaFuncGetAttributes, and initialize a cudaOccFuncAttributes with the
+    // cudaFuncAttributes structure.
+    //
+    // Example:
+    /*
+     __global__ void foo() {...}
+
+     ...
+
+     {
+         cudaFuncAttributes attr;
+
+         cudaFuncGetAttributes(&attr, foo);
+
+         cudaOccFuncAttributes occAttr = attr;
+
+         ...
+
+         cudaOccMaxPotentialOccupancyBlockSize(..., &occAttr, ...);
+     }
+     */
+    //
+    template <typename FuncAttributes>
+    __OCC_INLINE
+    cudaOccFuncAttributes(const FuncAttributes &attr)
+    :   maxThreadsPerBlock        (attr.maxThreadsPerBlock),
+        numRegs                   (attr.numRegs),
+        sharedSizeBytes           (attr.sharedSizeBytes),
+        partitionedGCConfig       (PARTITIONED_GC_OFF),
+        shmemLimitConfig          (FUNC_SHMEM_LIMIT_OPTIN),
+        maxDynamicSharedSizeBytes (attr.maxDynamicSharedSizeBytes),
+        numBlockBarriers          (1)
+    {}
+
+    __OCC_INLINE
+    cudaOccFuncAttributes()
+    :   maxThreadsPerBlock        (0),
+        numRegs                   (0),
+        sharedSizeBytes           (0),
+        partitionedGCConfig       (PARTITIONED_GC_OFF),
+        shmemLimitConfig          (FUNC_SHMEM_LIMIT_DEFAULT),
+        maxDynamicSharedSizeBytes (0),
+        numBlockBarriers          (0)
+    {}
+#endif
+};
+
+typedef enum cudaOccCacheConfig_enum {
+    CACHE_PREFER_NONE   = 0x00, // no preference for shared memory or L1 (default)
+    CACHE_PREFER_SHARED = 0x01, // prefer larger shared memory and smaller L1 cache
+    CACHE_PREFER_L1     = 0x02, // prefer larger L1 cache and smaller shared memory
+    CACHE_PREFER_EQUAL  = 0x03  // prefer equal sized L1 cache and shared memory
+} cudaOccCacheConfig;
+
+typedef enum cudaOccCarveoutConfig_enum {
+    SHAREDMEM_CARVEOUT_DEFAULT    = -1,  // no preference for shared memory or L1 (default)
+    SHAREDMEM_CARVEOUT_MAX_SHARED = 100, // prefer maximum available shared memory, minimum L1 cache
+    SHAREDMEM_CARVEOUT_MAX_L1     = 0,   // prefer maximum available L1 cache, minimum shared memory
+    SHAREDMEM_CARVEOUT_HALF       = 50   // prefer half of maximum available shared memory, with the rest as L1 cache
+} cudaOccCarveoutConfig;
+
+/**
+ * Device state descriptor
+ *
+ * This structure describes device settings that affect occupancy calculation.
+ */
+struct cudaOccDeviceState
+{
+    // Cache / shared memory split preference. Deprecated on Volta
+    cudaOccCacheConfig cacheConfig;
+    // Shared memory / L1 split preference.
+    // Supported only on Volta and newer (compute capability 7.0+)
+    int carveoutConfig;
+
+#ifdef __cplusplus
+    __OCC_INLINE
+    cudaOccDeviceState()
+    :   cacheConfig    (CACHE_PREFER_NONE),
+        carveoutConfig (SHAREDMEM_CARVEOUT_DEFAULT)
+    {}
+#endif
+};
+
+typedef enum cudaOccLimitingFactor_enum {
+                                    // Occupancy limited due to:
+    OCC_LIMIT_WARPS         = 0x01, // - warps available
+    OCC_LIMIT_REGISTERS     = 0x02, // - registers available
+    OCC_LIMIT_SHARED_MEMORY = 0x04, // - shared memory available
+    OCC_LIMIT_BLOCKS        = 0x08, // - blocks available
+    OCC_LIMIT_BARRIERS      = 0x10  // - barriers available
+} cudaOccLimitingFactor;
+
+/**
+ * Occupancy output
+ *
+ * This structure contains the occupancy calculator's output.
+ */
+struct cudaOccResult {
+    int activeBlocksPerMultiprocessor; // Occupancy
+    unsigned int limitingFactors;      // Factors that limited occupancy. A bit
+                                       // field that counts the limiting
+                                       // factors, see cudaOccLimitingFactor
+    int blockLimitRegs;                // Occupancy due to register
+                                       // usage, INT_MAX if the kernel does not
+                                       // use any register.
+    int blockLimitSharedMem;           // Occupancy due to shared memory
+                                       // usage, INT_MAX if the kernel does not
+                                       // use shared memory.
+    int blockLimitWarps;               // Occupancy due to block size limit
+    int blockLimitBlocks;              // Occupancy due to maximum number of
+                                       // blocks manageable per SM
+    int blockLimitBarriers;            // Occupancy due to block barrier usage
+    int allocatedRegistersPerBlock;    // Actual number of registers allocated
+                                       // per block
+    size_t allocatedSharedMemPerBlock; // Actual size of shared memory allocated
+                                       // per block
+    cudaOccPartitionedGCConfig partitionedGCConfig;
+                                       // Report if partitioned global caching
+                                       // is actually enabled.
+};
+
+/**
+ * Partitioned global caching support
+ *
+ * See cudaOccPartitionedGlobalCachingModeSupport
+ */
+typedef enum cudaOccPartitionedGCSupport_enum {
+    PARTITIONED_GC_NOT_SUPPORTED, // Partitioned global caching is not supported
+    PARTITIONED_GC_SUPPORTED,     // Partitioned global caching is supported
+} cudaOccPartitionedGCSupport;
+
+/**
+ * Implementation
+ */
+
+/**
+ * Max compute capability supported
+ */
+#define __CUDA_OCC_MAJOR__ 9
+#define __CUDA_OCC_MINOR__ 0
+
+//////////////////////////////////////////
+//    Mathematical Helper Functions     //
+//////////////////////////////////////////
+
+static __OCC_INLINE int __occMin(int lhs, int rhs)
+{
+    return rhs < lhs ?
rhs : lhs; +} + +static __OCC_INLINE int __occDivideRoundUp(int x, int y) +{ + return (x + (y - 1)) / y; +} + +static __OCC_INLINE int __occRoundUp(int x, int y) +{ + return y * __occDivideRoundUp(x, y); +} + +////////////////////////////////////////// +// Architectural Properties // +////////////////////////////////////////// + +/** + * Granularity of shared memory allocation + */ +static __OCC_INLINE cudaOccError cudaOccSMemAllocationGranularity(int *limit, const cudaOccDeviceProp *properties) +{ + int value; + + switch(properties->computeMajor) { + case 3: + case 5: + case 6: + case 7: + value = 256; + break; + case 8: + case 9: + value = 128; + break; + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + *limit = value; + + return CUDA_OCC_SUCCESS; +} + +/** + * Maximum number of registers per thread + */ +static __OCC_INLINE cudaOccError cudaOccRegAllocationMaxPerThread(int *limit, const cudaOccDeviceProp *properties) +{ + int value; + + switch(properties->computeMajor) { + case 3: + case 5: + case 6: + value = 255; + break; + case 7: + case 8: + case 9: + value = 256; + break; + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + *limit = value; + + return CUDA_OCC_SUCCESS; +} + +/** + * Granularity of register allocation + */ +static __OCC_INLINE cudaOccError cudaOccRegAllocationGranularity(int *limit, const cudaOccDeviceProp *properties) +{ + int value; + + switch(properties->computeMajor) { + case 3: + case 5: + case 6: + case 7: + case 8: + case 9: + value = 256; + break; + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + *limit = value; + + return CUDA_OCC_SUCCESS; +} + +/** + * Number of sub-partitions + */ +static __OCC_INLINE cudaOccError cudaOccSubPartitionsPerMultiprocessor(int *limit, const cudaOccDeviceProp *properties) +{ + int value; + + switch(properties->computeMajor) { + case 3: + case 5: + case 7: + case 8: + case 9: + value = 4; + break; + case 6: + value = properties->computeMinor ? 4 : 2; + break; + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + *limit = value; + + return CUDA_OCC_SUCCESS; +} + + +/** + * Maximum number of blocks that can run simultaneously on a multiprocessor + */ +static __OCC_INLINE cudaOccError cudaOccMaxBlocksPerMultiprocessor(int* limit, const cudaOccDeviceProp *properties) +{ + int value; + + switch(properties->computeMajor) { + case 3: + value = 16; + break; + case 5: + case 6: + value = 32; + break; + case 7: { + int isTuring = properties->computeMinor == 5; + value = (isTuring) ? 16 : 32; + break; + } + case 8: + if (properties->computeMinor == 0) { + value = 32; + } + else if (properties->computeMinor == 9) { + value = 24; + } + else { + value = 16; + } + break; + case 9: + value = 32; + break; + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + *limit = value; + + return CUDA_OCC_SUCCESS; +} + +/** + * Align up shared memory based on compute major configurations + */ +static __OCC_INLINE cudaOccError cudaOccAlignUpShmemSizeVoltaPlus(size_t *shMemSize, const cudaOccDeviceProp *properties) +{ + // Volta and Turing have shared L1 cache / shared memory, and support cache + // configuration to trade one for the other. These values are needed to + // map carveout config ratio to the next available architecture size + size_t size = *shMemSize; + + switch (properties->computeMajor) { + case 7: { + // Turing supports 32KB and 64KB shared mem. 
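+            // For illustration: the requested size is rounded up to the
+            // next supported carveout, so e.g. a 20 KB request on Turing
+            // maps to 32 KB; only a request larger than the largest
+            // carveout fails with CUDA_OCC_ERROR_INVALID_INPUT.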
+ int isTuring = properties->computeMinor == 5; + if (isTuring) { + if (size <= 32 * 1024) { + *shMemSize = 32 * 1024; + } + else if (size <= 64 * 1024) { + *shMemSize = 64 * 1024; + } + else { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + } + // Volta supports 0KB, 8KB, 16KB, 32KB, 64KB, and 96KB shared mem. + else { + if (size == 0) { + *shMemSize = 0; + } + else if (size <= 8 * 1024) { + *shMemSize = 8 * 1024; + } + else if (size <= 16 * 1024) { + *shMemSize = 16 * 1024; + } + else if (size <= 32 * 1024) { + *shMemSize = 32 * 1024; + } + else if (size <= 64 * 1024) { + *shMemSize = 64 * 1024; + } + else if (size <= 96 * 1024) { + *shMemSize = 96 * 1024; + } + else { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + } + break; + } + case 8: + if (properties->computeMinor == 0 || properties->computeMinor == 7) { + if (size == 0) { + *shMemSize = 0; + } + else if (size <= 8 * 1024) { + *shMemSize = 8 * 1024; + } + else if (size <= 16 * 1024) { + *shMemSize = 16 * 1024; + } + else if (size <= 32 * 1024) { + *shMemSize = 32 * 1024; + } + else if (size <= 64 * 1024) { + *shMemSize = 64 * 1024; + } + else if (size <= 100 * 1024) { + *shMemSize = 100 * 1024; + } + else if (size <= 132 * 1024) { + *shMemSize = 132 * 1024; + } + else if (size <= 164 * 1024) { + *shMemSize = 164 * 1024; + } + else { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + } + else { + if (size == 0) { + *shMemSize = 0; + } + else if (size <= 8 * 1024) { + *shMemSize = 8 * 1024; + } + else if (size <= 16 * 1024) { + *shMemSize = 16 * 1024; + } + else if (size <= 32 * 1024) { + *shMemSize = 32 * 1024; + } + else if (size <= 64 * 1024) { + *shMemSize = 64 * 1024; + } + else if (size <= 100 * 1024) { + *shMemSize = 100 * 1024; + } + else { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + } + break; + case 9: { + if (size == 0) { + *shMemSize = 0; + } + else if (size <= 8 * 1024) { + *shMemSize = 8 * 1024; + } + else if (size <= 16 * 1024) { + *shMemSize = 16 * 1024; + } + else if (size <= 32 * 1024) { + *shMemSize = 32 * 1024; + } + else if (size <= 64 * 1024) { + *shMemSize = 64 * 1024; + } + else if (size <= 100 * 1024) { + *shMemSize = 100 * 1024; + } + else if (size <= 132 * 1024) { + *shMemSize = 132 * 1024; + } + else if (size <= 164 * 1024) { + *shMemSize = 164 * 1024; + } + else if (size <= 196 * 1024) { + *shMemSize = 196 * 1024; + } + else if (size <= 228 * 1024) { + *shMemSize = 228 * 1024; + } + else { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + break; + } + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + return CUDA_OCC_SUCCESS; +} + +/** + * Shared memory based on the new carveoutConfig API introduced with Volta + */ +static __OCC_INLINE cudaOccError cudaOccSMemPreferenceVoltaPlus(size_t *limit, const cudaOccDeviceProp *properties, const cudaOccDeviceState *state) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + size_t preferenceShmemSize; + + // CUDA 9.0 introduces a new API to set shared memory - L1 configuration on supported + // devices. This preference will take precedence over the older cacheConfig setting. + // Map cacheConfig to its effective preference value. 
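+    // For illustration: carveoutConfig is interpreted below as a percentage
+    // of sharedMemPerMultiprocessor. On a hypothetical 96 KB SM,
+    // SHAREDMEM_CARVEOUT_HALF (50) requests 48 KB, which
+    // cudaOccAlignUpShmemSizeVoltaPlus then aligns up to the next supported
+    // carveout (64 KB on Volta).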
+ int effectivePreference = state->carveoutConfig; + if ((effectivePreference < SHAREDMEM_CARVEOUT_DEFAULT) || (effectivePreference > SHAREDMEM_CARVEOUT_MAX_SHARED)) { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + + if (effectivePreference == SHAREDMEM_CARVEOUT_DEFAULT) { + switch (state->cacheConfig) + { + case CACHE_PREFER_L1: + effectivePreference = SHAREDMEM_CARVEOUT_MAX_L1; + break; + case CACHE_PREFER_SHARED: + effectivePreference = SHAREDMEM_CARVEOUT_MAX_SHARED; + break; + case CACHE_PREFER_EQUAL: + effectivePreference = SHAREDMEM_CARVEOUT_HALF; + break; + default: + effectivePreference = SHAREDMEM_CARVEOUT_DEFAULT; + break; + } + } + + if (effectivePreference == SHAREDMEM_CARVEOUT_DEFAULT) { + preferenceShmemSize = properties->sharedMemPerMultiprocessor; + } + else { + preferenceShmemSize = (size_t) (effectivePreference * properties->sharedMemPerMultiprocessor) / 100; + } + + status = cudaOccAlignUpShmemSizeVoltaPlus(&preferenceShmemSize, properties); + *limit = preferenceShmemSize; + return status; +} + +/** + * Shared memory based on the cacheConfig + */ +static __OCC_INLINE cudaOccError cudaOccSMemPreference(size_t *limit, const cudaOccDeviceProp *properties, const cudaOccDeviceState *state) +{ + size_t bytes = 0; + size_t sharedMemPerMultiprocessorHigh = properties->sharedMemPerMultiprocessor; + cudaOccCacheConfig cacheConfig = state->cacheConfig; + + // Kepler has shared L1 cache / shared memory, and support cache + // configuration to trade one for the other. These values are needed to + // calculate the correct shared memory size for user requested cache + // configuration. + // + size_t minCacheSize = 16384; + size_t maxCacheSize = 49152; + size_t cacheAndSharedTotal = sharedMemPerMultiprocessorHigh + minCacheSize; + size_t sharedMemPerMultiprocessorLow = cacheAndSharedTotal - maxCacheSize; + + switch (properties->computeMajor) { + case 3: + // Kepler supports 16KB, 32KB, or 48KB partitions for L1. The rest + // is shared memory. + // + switch (cacheConfig) { + default : + case CACHE_PREFER_NONE: + case CACHE_PREFER_SHARED: + bytes = sharedMemPerMultiprocessorHigh; + break; + case CACHE_PREFER_L1: + bytes = sharedMemPerMultiprocessorLow; + break; + case CACHE_PREFER_EQUAL: + // Equal is the mid-point between high and low. It should be + // equivalent to low + 16KB. + // + bytes = (sharedMemPerMultiprocessorHigh + sharedMemPerMultiprocessorLow) / 2; + break; + } + break; + case 5: + case 6: + // Maxwell and Pascal have dedicated shared memory. + // + bytes = sharedMemPerMultiprocessorHigh; + break; + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + *limit = bytes; + + return CUDA_OCC_SUCCESS; +} + +/** + * Shared memory based on config requested by User + */ +static __OCC_INLINE cudaOccError cudaOccSMemPerMultiprocessor(size_t *limit, const cudaOccDeviceProp *properties, const cudaOccDeviceState *state) +{ + // Volta introduces a new API that allows for shared memory carveout preference. Because it is a shared memory preference, + // it is handled separately from the cache config preference. 
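+    // Dispatch sketch: compute capability 7.0 and newer take the carveout
+    // path; older architectures keep the legacy cacheConfig-based split.
+    // Note that a legacy cacheConfig value is still honored on 7.0+ when
+    // carveoutConfig is SHAREDMEM_CARVEOUT_DEFAULT, via the mapping in
+    // cudaOccSMemPreferenceVoltaPlus above.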
+ if (properties->computeMajor >= 7) { + return cudaOccSMemPreferenceVoltaPlus(limit, properties, state); + } + return cudaOccSMemPreference(limit, properties, state); +} + +/** + * Return the per block shared memory limit based on function config + */ +static __OCC_INLINE cudaOccError cudaOccSMemPerBlock(size_t *limit, const cudaOccDeviceProp *properties, cudaOccFuncShmemConfig shmemLimitConfig, size_t smemPerCta) +{ + switch (properties->computeMajor) { + case 2: + case 3: + case 4: + case 5: + case 6: + *limit = properties->sharedMemPerBlock; + break; + case 7: + case 8: + case 9: + switch (shmemLimitConfig) { + default: + case FUNC_SHMEM_LIMIT_DEFAULT: + *limit = properties->sharedMemPerBlock; + break; + case FUNC_SHMEM_LIMIT_OPTIN: + if (smemPerCta > properties->sharedMemPerBlock) { + *limit = properties->sharedMemPerBlockOptin; + } + else { + *limit = properties->sharedMemPerBlock; + } + break; + } + break; + default: + return CUDA_OCC_ERROR_UNKNOWN_DEVICE; + } + + // Starting Ampere, CUDA driver reserves additional shared memory per block + if (properties->computeMajor >= 8) { + *limit += properties->reservedSharedMemPerBlock; + } + + return CUDA_OCC_SUCCESS; +} + +/** + * Partitioned global caching mode support + */ +static __OCC_INLINE cudaOccError cudaOccPartitionedGlobalCachingModeSupport(cudaOccPartitionedGCSupport *limit, const cudaOccDeviceProp *properties) +{ + *limit = PARTITIONED_GC_NOT_SUPPORTED; + + if ((properties->computeMajor == 5 && (properties->computeMinor == 2 || properties->computeMinor == 3)) || + properties->computeMajor == 6) { + *limit = PARTITIONED_GC_SUPPORTED; + } + + if (properties->computeMajor == 6 && properties->computeMinor == 0) { + *limit = PARTITIONED_GC_NOT_SUPPORTED; + } + + return CUDA_OCC_SUCCESS; +} + +/////////////////////////////////////////////// +// User Input Sanity // +/////////////////////////////////////////////// + +static __OCC_INLINE cudaOccError cudaOccDevicePropCheck(const cudaOccDeviceProp *properties) +{ + // Verify device properties + // + // Each of these limits must be a positive number. + // + // Compute capacity is checked during the occupancy calculation + // + if (properties->maxThreadsPerBlock <= 0 || + properties->maxThreadsPerMultiprocessor <= 0 || + properties->regsPerBlock <= 0 || + properties->regsPerMultiprocessor <= 0 || + properties->warpSize <= 0 || + properties->sharedMemPerBlock <= 0 || + properties->sharedMemPerMultiprocessor <= 0 || + properties->numSms <= 0) { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + + return CUDA_OCC_SUCCESS; +} + +static __OCC_INLINE cudaOccError cudaOccFuncAttributesCheck(const cudaOccFuncAttributes *attributes) +{ + // Verify function attributes + // + if (attributes->maxThreadsPerBlock <= 0 || + attributes->numRegs < 0) { // Compiler may choose not to use + // any register (empty kernels, + // etc.) 
+ return CUDA_OCC_ERROR_INVALID_INPUT; + } + + return CUDA_OCC_SUCCESS; +} + +static __OCC_INLINE cudaOccError cudaOccDeviceStateCheck(const cudaOccDeviceState *state) +{ + (void)state; // silence unused-variable warning + // Placeholder + // + + return CUDA_OCC_SUCCESS; +} + +static __OCC_INLINE cudaOccError cudaOccInputCheck( + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes, + const cudaOccDeviceState *state) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + + status = cudaOccDevicePropCheck(properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + status = cudaOccFuncAttributesCheck(attributes); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + status = cudaOccDeviceStateCheck(state); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + return status; +} + +/////////////////////////////////////////////// +// Occupancy calculation Functions // +/////////////////////////////////////////////// + +static __OCC_INLINE cudaOccPartitionedGCConfig cudaOccPartitionedGCExpected( + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes) +{ + cudaOccPartitionedGCSupport gcSupport; + cudaOccPartitionedGCConfig gcConfig; + + cudaOccPartitionedGlobalCachingModeSupport(&gcSupport, properties); + + gcConfig = attributes->partitionedGCConfig; + + if (gcSupport == PARTITIONED_GC_NOT_SUPPORTED) { + gcConfig = PARTITIONED_GC_OFF; + } + + return gcConfig; +} + +// Warp limit +// +static __OCC_INLINE cudaOccError cudaOccMaxBlocksPerSMWarpsLimit( + int *limit, + cudaOccPartitionedGCConfig gcConfig, + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes, + int blockSize) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + int maxWarpsPerSm; + int warpsAllocatedPerCTA; + int maxBlocks; + (void)attributes; // silence unused-variable warning + + if (blockSize > properties->maxThreadsPerBlock) { + maxBlocks = 0; + } + else { + maxWarpsPerSm = properties->maxThreadsPerMultiprocessor / properties->warpSize; + warpsAllocatedPerCTA = __occDivideRoundUp(blockSize, properties->warpSize); + maxBlocks = 0; + + if (gcConfig != PARTITIONED_GC_OFF) { + int maxBlocksPerSmPartition; + int maxWarpsPerSmPartition; + + // If partitioned global caching is on, then a CTA can only use a SM + // partition (a half SM), and thus a half of the warp slots + // available per SM + // + maxWarpsPerSmPartition = maxWarpsPerSm / 2; + maxBlocksPerSmPartition = maxWarpsPerSmPartition / warpsAllocatedPerCTA; + maxBlocks = maxBlocksPerSmPartition * 2; + } + // On hardware that supports partitioned global caching, each half SM is + // guaranteed to support at least 32 warps (maximum number of warps of a + // CTA), so caching will not cause 0 occupancy due to insufficient warp + // allocation slots. 
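+        // Worked example with illustrative numbers: with 64 warp slots per
+        // SM, warp size 32 and blockSize 96, warpsAllocatedPerCTA is 3.
+        // With partitioned caching on, each half SM fits 32 / 3 = 10 CTAs,
+        // so maxBlocks = 20; with it off (else branch below), 64 / 3 = 21.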
+ // + else { + maxBlocks = maxWarpsPerSm / warpsAllocatedPerCTA; + } + } + + *limit = maxBlocks; + + return status; +} + +// Shared memory limit +// +static __OCC_INLINE cudaOccError cudaOccMaxBlocksPerSMSmemLimit( + int *limit, + cudaOccResult *result, + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes, + const cudaOccDeviceState *state, + int blockSize, + size_t dynamicSmemSize) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + int allocationGranularity; + size_t userSmemPreference = 0; + size_t totalSmemUsagePerCTA; + size_t maxSmemUsagePerCTA; + size_t smemAllocatedPerCTA; + size_t staticSmemSize; + size_t sharedMemPerMultiprocessor; + size_t smemLimitPerCTA; + int maxBlocks; + int dynamicSmemSizeExceeded = 0; + int totalSmemSizeExceeded = 0; + (void)blockSize; // silence unused-variable warning + + status = cudaOccSMemAllocationGranularity(&allocationGranularity, properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + // Obtain the user preferred shared memory size. This setting is ignored if + // user requests more shared memory than preferred. + // + status = cudaOccSMemPerMultiprocessor(&userSmemPreference, properties, state); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + staticSmemSize = attributes->sharedSizeBytes + properties->reservedSharedMemPerBlock; + totalSmemUsagePerCTA = staticSmemSize + dynamicSmemSize; + smemAllocatedPerCTA = __occRoundUp((int)totalSmemUsagePerCTA, (int)allocationGranularity); + + maxSmemUsagePerCTA = staticSmemSize + attributes->maxDynamicSharedSizeBytes; + + dynamicSmemSizeExceeded = 0; + totalSmemSizeExceeded = 0; + + // Obtain the user set maximum dynamic size if it exists + // If so, the current launch dynamic shared memory must not + // exceed the set limit + if (attributes->shmemLimitConfig != FUNC_SHMEM_LIMIT_DEFAULT && + dynamicSmemSize > attributes->maxDynamicSharedSizeBytes) { + dynamicSmemSizeExceeded = 1; + } + + status = cudaOccSMemPerBlock(&smemLimitPerCTA, properties, attributes->shmemLimitConfig, maxSmemUsagePerCTA); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + if (smemAllocatedPerCTA > smemLimitPerCTA) { + totalSmemSizeExceeded = 1; + } + + if (dynamicSmemSizeExceeded || totalSmemSizeExceeded) { + maxBlocks = 0; + } + else { + // User requested shared memory limit is used as long as it is greater + // than the total shared memory used per CTA, i.e. as long as at least + // one CTA can be launched. + if (userSmemPreference >= smemAllocatedPerCTA) { + sharedMemPerMultiprocessor = userSmemPreference; + } + else { + // On Volta+, user requested shared memory will limit occupancy + // if it's less than shared memory per CTA. Otherwise, the + // maximum shared memory limit is used. 
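+            // Worked example with illustrative numbers: if the user
+            // preference is 64 KB but a CTA needs 96 KB, the preference is
+            // too small to fit even one CTA, so a larger limit (the
+            // aligned-up per-CTA size on Volta+, or the device maximum
+            // before Volta) is used instead, and occupancy becomes 1 block
+            // per SM rather than 0.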
+ if (properties->computeMajor >= 7) { + sharedMemPerMultiprocessor = smemAllocatedPerCTA; + status = cudaOccAlignUpShmemSizeVoltaPlus(&sharedMemPerMultiprocessor, properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + } + else { + sharedMemPerMultiprocessor = properties->sharedMemPerMultiprocessor; + } + } + + if (smemAllocatedPerCTA > 0) { + maxBlocks = (int)(sharedMemPerMultiprocessor / smemAllocatedPerCTA); + } + else { + maxBlocks = INT_MAX; + } + } + + result->allocatedSharedMemPerBlock = smemAllocatedPerCTA; + + *limit = maxBlocks; + + return status; +} + +static __OCC_INLINE +cudaOccError cudaOccMaxBlocksPerSMRegsLimit( + int *limit, + cudaOccPartitionedGCConfig *gcConfig, + cudaOccResult *result, + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes, + int blockSize) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + int allocationGranularity; + int warpsAllocatedPerCTA; + int regsAllocatedPerCTA; + int regsAssumedPerCTA; + int regsPerWarp; + int regsAllocatedPerWarp; + int numSubPartitions; + int numRegsPerSubPartition; + int numWarpsPerSubPartition; + int numWarpsPerSM; + int maxBlocks; + int maxRegsPerThread; + + status = cudaOccRegAllocationGranularity( + &allocationGranularity, + properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + status = cudaOccRegAllocationMaxPerThread( + &maxRegsPerThread, + properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + status = cudaOccSubPartitionsPerMultiprocessor(&numSubPartitions, properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + warpsAllocatedPerCTA = __occDivideRoundUp(blockSize, properties->warpSize); + + // GPUs of compute capability 2.x and higher allocate registers to warps + // + // Number of regs per warp is regs per thread x warp size, rounded up to + // register allocation granularity + // + regsPerWarp = attributes->numRegs * properties->warpSize; + regsAllocatedPerWarp = __occRoundUp(regsPerWarp, allocationGranularity); + regsAllocatedPerCTA = regsAllocatedPerWarp * warpsAllocatedPerCTA; + + // Hardware verifies if a launch fits the per-CTA register limit. For + // historical reasons, the verification logic assumes register + // allocations are made to all partitions simultaneously. Therefore, to + // simulate the hardware check, the warp allocation needs to be rounded + // up to the number of partitions. + // + regsAssumedPerCTA = regsAllocatedPerWarp * __occRoundUp(warpsAllocatedPerCTA, numSubPartitions); + + if (properties->regsPerBlock < regsAssumedPerCTA || // Hardware check + properties->regsPerBlock < regsAllocatedPerCTA || // Software check + attributes->numRegs > maxRegsPerThread) { // Per thread limit check + maxBlocks = 0; + } + else { + if (regsAllocatedPerWarp > 0) { + // Registers are allocated in each sub-partition. The max number + // of warps that can fit on an SM is equal to the max number of + // warps per sub-partition x number of sub-partitions. 
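+            // Worked example with illustrative numbers: with 65536
+            // registers per SM split across 4 sub-partitions, each
+            // sub-partition owns 16384 registers. A kernel using 32
+            // registers per thread with warp size 32 allocates 1024
+            // registers per warp (already a multiple of the 256-register
+            // granularity), so each sub-partition fits 16384 / 1024 = 16
+            // warps.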
+ // + numRegsPerSubPartition = properties->regsPerMultiprocessor / numSubPartitions; + numWarpsPerSubPartition = numRegsPerSubPartition / regsAllocatedPerWarp; + + maxBlocks = 0; + + if (*gcConfig != PARTITIONED_GC_OFF) { + int numSubPartitionsPerSmPartition; + int numWarpsPerSmPartition; + int maxBlocksPerSmPartition; + + // If partitioned global caching is on, then a CTA can only + // use a half SM, and thus a half of the registers available + // per SM + // + numSubPartitionsPerSmPartition = numSubPartitions / 2; + numWarpsPerSmPartition = numWarpsPerSubPartition * numSubPartitionsPerSmPartition; + maxBlocksPerSmPartition = numWarpsPerSmPartition / warpsAllocatedPerCTA; + maxBlocks = maxBlocksPerSmPartition * 2; + } + + // Try again if partitioned global caching is not enabled, or if + // the CTA cannot fit on the SM with caching on (maxBlocks == 0). In the latter + // case, the device will automatically turn off caching, except + // if the user forces enablement via PARTITIONED_GC_ON_STRICT to calculate + // occupancy and launch configuration. + // + if (maxBlocks == 0 && *gcConfig != PARTITIONED_GC_ON_STRICT) { + // In case *gcConfig was PARTITIONED_GC_ON flip it OFF since + // this is what it will be if we spread CTA across partitions. + // + *gcConfig = PARTITIONED_GC_OFF; + numWarpsPerSM = numWarpsPerSubPartition * numSubPartitions; + maxBlocks = numWarpsPerSM / warpsAllocatedPerCTA; + } + } + else { + maxBlocks = INT_MAX; + } + } + + + result->allocatedRegistersPerBlock = regsAllocatedPerCTA; + + *limit = maxBlocks; + + return status; +} + +// Barrier limit +// +static __OCC_INLINE cudaOccError cudaOccMaxBlocksPerSMBlockBarrierLimit( + int *limit, + int ctaLimitBlocks, + const cudaOccFuncAttributes *attributes) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + int numBarriersAvailable = ctaLimitBlocks * 2; + int numBarriersUsed = attributes->numBlockBarriers; + int maxBlocks = INT_MAX; + + if (numBarriersUsed) { + maxBlocks = numBarriersAvailable / numBarriersUsed; + } + + *limit = maxBlocks; + + return status; +} + +/////////////////////////////////// +// API Implementations // +/////////////////////////////////// + +static __OCC_INLINE +cudaOccError cudaOccMaxActiveBlocksPerMultiprocessor( + cudaOccResult *result, + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes, + const cudaOccDeviceState *state, + int blockSize, + size_t dynamicSmemSize) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + int ctaLimitWarps = 0; + int ctaLimitBlocks = 0; + int ctaLimitSMem = 0; + int ctaLimitRegs = 0; + int ctaLimitBars = 0; + int ctaLimit = 0; + unsigned int limitingFactors = 0; + + cudaOccPartitionedGCConfig gcConfig = PARTITIONED_GC_OFF; + + if (!result || !properties || !attributes || !state || blockSize <= 0) { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + + /////////////////////////// + // Check user input + /////////////////////////// + + status = cudaOccInputCheck(properties, attributes, state); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + /////////////////////////// + // Initialization + /////////////////////////// + + gcConfig = cudaOccPartitionedGCExpected(properties, attributes); + + /////////////////////////// + // Compute occupancy + /////////////////////////// + + // Limits due to registers/SM + // Also compute if partitioned global caching has to be turned off + // + status = cudaOccMaxBlocksPerSMRegsLimit(&ctaLimitRegs, &gcConfig, result, properties, attributes, blockSize); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + // 
SMs on GP100 (6.0) have 2 subpartitions, while those on GP10x have 4. + // As a result, an SM on GP100 may be able to run more CTAs than the one on GP10x. + // For forward compatibility within Pascal family, if a function cannot run on GP10x (maxBlock == 0), + // we do not let it run on any Pascal processor, even though it may be able to run on GP100. + // Therefore, we check the occupancy on GP10x when it can run on GP100 + // + if (properties->computeMajor == 6 && properties->computeMinor == 0 && ctaLimitRegs) { + cudaOccDeviceProp propertiesGP10x; + cudaOccPartitionedGCConfig gcConfigGP10x = gcConfig; + int ctaLimitRegsGP10x = 0; + + // Set up properties for GP10x + memcpy(&propertiesGP10x, properties, sizeof(propertiesGP10x)); + propertiesGP10x.computeMinor = 1; + + status = cudaOccMaxBlocksPerSMRegsLimit(&ctaLimitRegsGP10x, &gcConfigGP10x, result, &propertiesGP10x, attributes, blockSize); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + if (ctaLimitRegsGP10x == 0) { + ctaLimitRegs = 0; + } + } + + // Limits due to warps/SM + // + status = cudaOccMaxBlocksPerSMWarpsLimit(&ctaLimitWarps, gcConfig, properties, attributes, blockSize); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + // Limits due to blocks/SM + // + status = cudaOccMaxBlocksPerMultiprocessor(&ctaLimitBlocks, properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + // Limits due to shared memory/SM + // + status = cudaOccMaxBlocksPerSMSmemLimit(&ctaLimitSMem, result, properties, attributes, state, blockSize, dynamicSmemSize); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + /////////////////////////// + // Overall occupancy + /////////////////////////// + + // Overall limit is min() of limits due to above reasons + // + ctaLimit = __occMin(ctaLimitRegs, __occMin(ctaLimitSMem, __occMin(ctaLimitWarps, ctaLimitBlocks))); + + // Determine occupancy limiting factors + // + if (ctaLimit == ctaLimitWarps) { + limitingFactors |= OCC_LIMIT_WARPS; + } + if (ctaLimit == ctaLimitRegs) { + limitingFactors |= OCC_LIMIT_REGISTERS; + } + if (ctaLimit == ctaLimitSMem) { + limitingFactors |= OCC_LIMIT_SHARED_MEMORY; + } + if (ctaLimit == ctaLimitBlocks) { + limitingFactors |= OCC_LIMIT_BLOCKS; + } + + // For Hopper onwards compute the limits to occupancy based on block barrier count + // + if (properties->computeMajor >= 9 && attributes->numBlockBarriers > 0) { + // Limits due to barrier/SM + // + status = cudaOccMaxBlocksPerSMBlockBarrierLimit(&ctaLimitBars, ctaLimitBlocks, attributes); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + // Recompute overall limit based on barrier/SM + // + ctaLimit = __occMin(ctaLimitBars, ctaLimit); + + // Determine if this is occupancy limiting factor + // + if (ctaLimit == ctaLimitBars) { + limitingFactors |= OCC_LIMIT_BARRIERS; + } + } + else { + ctaLimitBars = INT_MAX; + } + + // Fill in the return values + // + result->limitingFactors = limitingFactors; + + result->blockLimitRegs = ctaLimitRegs; + result->blockLimitSharedMem = ctaLimitSMem; + result->blockLimitWarps = ctaLimitWarps; + result->blockLimitBlocks = ctaLimitBlocks; + result->blockLimitBarriers = ctaLimitBars; + result->partitionedGCConfig = gcConfig; + + // Final occupancy + result->activeBlocksPerMultiprocessor = ctaLimit; + + return CUDA_OCC_SUCCESS; +} + +static __OCC_INLINE +cudaOccError cudaOccAvailableDynamicSMemPerBlock( + size_t *bytesAvailable, + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes, + const cudaOccDeviceState *state, + int 
numBlocks, + int blockSize) +{ + int allocationGranularity; + size_t smemLimitPerBlock; + size_t smemAvailableForDynamic; + size_t userSmemPreference = 0; + size_t sharedMemPerMultiprocessor; + cudaOccResult result; + cudaOccError status = CUDA_OCC_SUCCESS; + + if (numBlocks <= 0) + return CUDA_OCC_ERROR_INVALID_INPUT; + + // First compute occupancy of potential kernel launch. + // + status = cudaOccMaxActiveBlocksPerMultiprocessor(&result, properties, attributes, state, blockSize, 0); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + // Check if occupancy is achievable given user requested number of blocks. + // + if (result.activeBlocksPerMultiprocessor < numBlocks) { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + + status = cudaOccSMemAllocationGranularity(&allocationGranularity, properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + // Return the per block shared memory limit based on function config. + // + status = cudaOccSMemPerBlock(&smemLimitPerBlock, properties, attributes->shmemLimitConfig, properties->sharedMemPerMultiprocessor); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + // If there is only a single block needed per SM, then the user preference can be ignored and the fully SW + // limit is allowed to be used as shared memory otherwise if more than one block is needed, then the user + // preference sets the total limit of available shared memory. + // + cudaOccSMemPerMultiprocessor(&userSmemPreference, properties, state); + if (numBlocks == 1) { + sharedMemPerMultiprocessor = smemLimitPerBlock; + } + else { + if (!userSmemPreference) { + userSmemPreference = 1 ; + status = cudaOccAlignUpShmemSizeVoltaPlus(&userSmemPreference, properties); + if (status != CUDA_OCC_SUCCESS) { + return status; + } + } + sharedMemPerMultiprocessor = userSmemPreference; + } + + // Compute total shared memory available per SM + // + smemAvailableForDynamic = sharedMemPerMultiprocessor / numBlocks; + smemAvailableForDynamic = (smemAvailableForDynamic / allocationGranularity) * allocationGranularity; + + // Cap shared memory + // + if (smemAvailableForDynamic > smemLimitPerBlock) { + smemAvailableForDynamic = smemLimitPerBlock; + } + + // Now compute dynamic shared memory size + smemAvailableForDynamic = smemAvailableForDynamic - attributes->sharedSizeBytes; + + // Cap computed dynamic SM by user requested limit specified via cuFuncSetAttribute() + // + if (smemAvailableForDynamic > attributes->maxDynamicSharedSizeBytes) + smemAvailableForDynamic = attributes->maxDynamicSharedSizeBytes; + + *bytesAvailable = smemAvailableForDynamic; + return CUDA_OCC_SUCCESS; +} + +static __OCC_INLINE +cudaOccError cudaOccMaxPotentialOccupancyBlockSize( + int *minGridSize, + int *blockSize, + const cudaOccDeviceProp *properties, + const cudaOccFuncAttributes *attributes, + const cudaOccDeviceState *state, + size_t (*blockSizeToDynamicSMemSize)(int), + size_t dynamicSMemSize) +{ + cudaOccError status = CUDA_OCC_SUCCESS; + cudaOccResult result; + + // Limits + int occupancyLimit; + int granularity; + int blockSizeLimit; + + // Recorded maximum + int maxBlockSize = 0; + int numBlocks = 0; + int maxOccupancy = 0; + + // Temporary + int blockSizeToTryAligned; + int blockSizeToTry; + int blockSizeLimitAligned; + int occupancyInBlocks; + int occupancyInThreads; + + /////////////////////////// + // Check user input + /////////////////////////// + + if (!minGridSize || !blockSize || !properties || !attributes || !state) { + return CUDA_OCC_ERROR_INVALID_INPUT; + } + + status = 
cudaOccInputCheck(properties, attributes, state);
+    if (status != CUDA_OCC_SUCCESS) {
+        return status;
+    }
+
+    /////////////////////////////////////////////////////////////////////////////////
+    // Try each block size, and pick the block size with maximum occupancy
+    /////////////////////////////////////////////////////////////////////////////////
+
+    occupancyLimit = properties->maxThreadsPerMultiprocessor;
+    granularity    = properties->warpSize;
+
+    blockSizeLimit        = __occMin(properties->maxThreadsPerBlock, attributes->maxThreadsPerBlock);
+    blockSizeLimitAligned = __occRoundUp(blockSizeLimit, granularity);
+
+    for (blockSizeToTryAligned = blockSizeLimitAligned; blockSizeToTryAligned > 0; blockSizeToTryAligned -= granularity) {
+        blockSizeToTry = __occMin(blockSizeLimit, blockSizeToTryAligned);
+
+        // Ignore dynamicSMemSize if the user provides a mapping
+        //
+        if (blockSizeToDynamicSMemSize) {
+            dynamicSMemSize = (*blockSizeToDynamicSMemSize)(blockSizeToTry);
+        }
+
+        status = cudaOccMaxActiveBlocksPerMultiprocessor(
+            &result,
+            properties,
+            attributes,
+            state,
+            blockSizeToTry,
+            dynamicSMemSize);
+
+        if (status != CUDA_OCC_SUCCESS) {
+            return status;
+        }
+
+        occupancyInBlocks  = result.activeBlocksPerMultiprocessor;
+        occupancyInThreads = blockSizeToTry * occupancyInBlocks;
+
+        if (occupancyInThreads > maxOccupancy) {
+            maxBlockSize = blockSizeToTry;
+            numBlocks    = occupancyInBlocks;
+            maxOccupancy = occupancyInThreads;
+        }
+
+        // Early out if we have reached the maximum
+        //
+        if (occupancyLimit == maxOccupancy) {
+            break;
+        }
+    }
+
+    ///////////////////////////
+    // Return best available
+    ///////////////////////////
+
+    // Suggested min grid size to achieve a full machine launch
+    //
+    *minGridSize = numBlocks * properties->numSms;
+    *blockSize   = maxBlockSize;
+
+    return status;
+}
+
+
+#if defined(__cplusplus)
+
+namespace {
+
+__OCC_INLINE
+cudaOccError cudaOccMaxPotentialOccupancyBlockSize(
+    int                         *minGridSize,
+    int                         *blockSize,
+    const cudaOccDeviceProp     *properties,
+    const cudaOccFuncAttributes *attributes,
+    const cudaOccDeviceState    *state,
+    size_t                       dynamicSMemSize)
+{
+    return cudaOccMaxPotentialOccupancyBlockSize(
+        minGridSize,
+        blockSize,
+        properties,
+        attributes,
+        state,
+        NULL,
+        dynamicSMemSize);
+}
+
+template <typename UnaryFunction>
+__OCC_INLINE
+cudaOccError cudaOccMaxPotentialOccupancyBlockSizeVariableSMem(
+    int                         *minGridSize,
+    int                         *blockSize,
+    const cudaOccDeviceProp     *properties,
+    const cudaOccFuncAttributes *attributes,
+    const cudaOccDeviceState    *state,
+    UnaryFunction                blockSizeToDynamicSMemSize)
+{
+    cudaOccError  status = CUDA_OCC_SUCCESS;
+    cudaOccResult result;
+
+    // Limits
+    int occupancyLimit;
+    int granularity;
+    int blockSizeLimit;
+
+    // Recorded maximum
+    int maxBlockSize = 0;
+    int numBlocks    = 0;
+    int maxOccupancy = 0;
+
+    // Temporary
+    int blockSizeToTryAligned;
+    int blockSizeToTry;
+    int blockSizeLimitAligned;
+    int occupancyInBlocks;
+    int occupancyInThreads;
+    size_t dynamicSMemSize;
+
+    ///////////////////////////
+    // Check user input
+    ///////////////////////////
+
+    if (!minGridSize || !blockSize || !properties || !attributes || !state) {
+        return CUDA_OCC_ERROR_INVALID_INPUT;
+    }
+
+    status = cudaOccInputCheck(properties, attributes, state);
+    if (status != CUDA_OCC_SUCCESS) {
+        return status;
+    }
+
+    /////////////////////////////////////////////////////////////////////////////////
+    // Try each block size, and pick the block size with maximum occupancy
+    /////////////////////////////////////////////////////////////////////////////////
+
+    occupancyLimit =
properties->maxThreadsPerMultiprocessor; + granularity = properties->warpSize; + blockSizeLimit = __occMin(properties->maxThreadsPerBlock, attributes->maxThreadsPerBlock); + blockSizeLimitAligned = __occRoundUp(blockSizeLimit, granularity); + + for (blockSizeToTryAligned = blockSizeLimitAligned; blockSizeToTryAligned > 0; blockSizeToTryAligned -= granularity) { + blockSizeToTry = __occMin(blockSizeLimit, blockSizeToTryAligned); + + dynamicSMemSize = blockSizeToDynamicSMemSize(blockSizeToTry); + + status = cudaOccMaxActiveBlocksPerMultiprocessor( + &result, + properties, + attributes, + state, + blockSizeToTry, + dynamicSMemSize); + + if (status != CUDA_OCC_SUCCESS) { + return status; + } + + occupancyInBlocks = result.activeBlocksPerMultiprocessor; + + occupancyInThreads = blockSizeToTry * occupancyInBlocks; + + if (occupancyInThreads > maxOccupancy) { + maxBlockSize = blockSizeToTry; + numBlocks = occupancyInBlocks; + maxOccupancy = occupancyInThreads; + } + + // Early out if we have reached the maximum + // + if (occupancyLimit == maxOccupancy) { + break; + } + } + + /////////////////////////// + // Return best available + /////////////////////////// + + // Suggested min grid size to achieve a full machine launch + // + *minGridSize = numBlocks * properties->numSms; + *blockSize = maxBlockSize; + + return status; +} + +} // namespace anonymous + +#endif /*__cplusplus */ + +#undef __OCC_INLINE + +#endif /*__cuda_occupancy_h__*/ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_pipeline_primitives.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_pipeline_primitives.h new file mode 100644 index 0000000000000000000000000000000000000000..eaba0cfb5ac9184bec5e837d2ec2f9db11d873ae --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_pipeline_primitives.h @@ -0,0 +1,148 @@ +/* + * Copyright 1993-2019 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#ifndef _CUDA_PIPELINE_PRIMITIVES_H_
+# define _CUDA_PIPELINE_PRIMITIVES_H_
+
+# include "cuda_pipeline_helpers.h"
+
+_CUDA_PIPELINE_STATIC_QUALIFIER
+void __pipeline_memcpy_async(void* __restrict__ dst_shared, const void* __restrict__ src_global, size_t size_and_align,
+                             size_t zfill = 0)
+{
+    _CUDA_PIPELINE_ASSERT(size_and_align == 4 || size_and_align == 8 || size_and_align == 16);
+    _CUDA_PIPELINE_ASSERT(zfill <= size_and_align);
+    _CUDA_PIPELINE_ASSERT(__isShared(dst_shared));
+    _CUDA_PIPELINE_ASSERT(__isGlobal(src_global));
+    _CUDA_PIPELINE_ASSERT(!(reinterpret_cast<uintptr_t>(dst_shared) & (size_and_align - 1)));
+    _CUDA_PIPELINE_ASSERT(!(reinterpret_cast<uintptr_t>(src_global) & (size_and_align - 1)));
+
+    switch (size_and_align) {
+    case 16:
+        switch (zfill) {
+        case  0: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 16>(dst_shared, src_global); return;
+        case  1: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 15>(dst_shared, src_global); return;
+        case  2: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 14>(dst_shared, src_global); return;
+        case  3: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 13>(dst_shared, src_global); return;
+        case  4: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 12>(dst_shared, src_global); return;
+        case  5: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 11>(dst_shared, src_global); return;
+        case  6: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 10>(dst_shared, src_global); return;
+        case  7: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16,  9>(dst_shared, src_global); return;
+        case  8: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16,  8>(dst_shared, src_global); return;
+        case  9: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16,  7>(dst_shared, src_global); return;
+        case 10: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16,  6>(dst_shared, src_global); return;
+        case 11: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16,  5>(dst_shared, src_global); return;
+        case 12: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16,  4>(dst_shared, src_global); return;
+        case 13: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16,  3>(dst_shared, src_global); return;
+        case 14:
_CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 2>(dst_shared, src_global); return; + case 15: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 1>(dst_shared, src_global); return; + case 16: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async<16, 0>(dst_shared, src_global); return; + default: _CUDA_PIPELINE_ABORT(); return; + } + case 8: + switch (zfill) { + case 0: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 8>(dst_shared, src_global); return; + case 1: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 7>(dst_shared, src_global); return; + case 2: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 6>(dst_shared, src_global); return; + case 3: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 5>(dst_shared, src_global); return; + case 4: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 4>(dst_shared, src_global); return; + case 5: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 3>(dst_shared, src_global); return; + case 6: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 2>(dst_shared, src_global); return; + case 7: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 1>(dst_shared, src_global); return; + case 8: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 8, 0>(dst_shared, src_global); return; + default: _CUDA_PIPELINE_ABORT(); return; + } + case 4: + switch (zfill) { + case 0: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 4, 4>(dst_shared, src_global); return; + case 1: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 4, 3>(dst_shared, src_global); return; + case 2: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 4, 2>(dst_shared, src_global); return; + case 3: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 4, 1>(dst_shared, src_global); return; + case 4: _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_memcpy_async< 4, 0>(dst_shared, src_global); return; + default: _CUDA_PIPELINE_ABORT(); return; + } + default: + _CUDA_PIPELINE_ABORT(); + return; + } +} + +_CUDA_PIPELINE_STATIC_QUALIFIER +void __pipeline_commit() +{ + _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_commit(); +} + +_CUDA_PIPELINE_STATIC_QUALIFIER +void __pipeline_wait_prior(size_t prior) +{ + switch (prior) { + case 0 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<0>(); return; + case 1 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<1>(); return; + case 2 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<2>(); return; + case 3 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<3>(); return; + case 4 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<4>(); return; + case 5 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<5>(); return; + case 6 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<6>(); return; + case 7 : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<7>(); return; + default : _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_wait_prior<8>(); return; + } +} + +# if defined(_CUDA_PIPELINE_ARCH_700_OR_LATER) +# include "cuda_awbarrier_primitives.h" + +_CUDA_PIPELINE_STATIC_QUALIFIER +void __pipeline_arrive_on(__mbarrier_t* barrier) +{ + _CUDA_PIPELINE_INTERNAL_NAMESPACE::pipeline_arrive_on(barrier); +} +# endif + +#endif /* !_CUDA_PIPELINE_PRIMITIVES_H_ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_vdpau_interop.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_vdpau_interop.h new file mode 100644 index 
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_vdpau_interop.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_vdpau_interop.h
new file mode 100644
index 0000000000000000000000000000000000000000..2cf1ba357eb02ed82afc2f1812627a8a2d88c6f7
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cuda_vdpau_interop.h
@@ -0,0 +1,201 @@
+/*
+ * Copyright 1993-2012 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#if !defined(__CUDA_VDPAU_INTEROP_H__)
+#define __CUDA_VDPAU_INTEROP_H__
+
+#include "cuda_runtime_api.h"
+
+#include <vdpau/vdpau.h>
+
+#if defined(__cplusplus)
+extern "C" {
+#endif /* __cplusplus */
+
+/**
+ * \addtogroup CUDART_VDPAU VDPAU Interoperability
+ * This section describes the VDPAU interoperability functions of the CUDA
+ * runtime application programming interface.
+ *
+ * @{
+ */
+
+/**
+ * \brief Gets the CUDA device associated with a VdpDevice.
+ *
+ * Returns the CUDA device associated with a VdpDevice, if applicable.
+ *
+ * \param device - Returns the device associated with vdpDevice, or -1 if
+ * the device associated with vdpDevice is not a compute device.
+ * \param vdpDevice - A VdpDevice handle + * \param vdpGetProcAddress - VDPAU's VdpGetProcAddress function pointer + * + * \return + * ::cudaSuccess + * \notefnerr + * + * \sa + * ::cudaVDPAUSetVDPAUDevice, + * ::cuVDPAUGetDevice + */ +extern __host__ cudaError_t CUDARTAPI cudaVDPAUGetDevice(int *device, VdpDevice vdpDevice, VdpGetProcAddress *vdpGetProcAddress); + +/** + * \brief Sets a CUDA device to use VDPAU interoperability + * + * Records \p vdpDevice as the VdpDevice for VDPAU interoperability + * with the CUDA device \p device and sets \p device as the current + * device for the calling host thread. + * + * This function will immediately initialize the primary context on + * \p device if needed. + * + * If \p device has already been initialized then this call will fail + * with the error ::cudaErrorSetOnActiveProcess. In this case it is + * necessary to reset \p device using ::cudaDeviceReset() before + * VDPAU interoperability on \p device may be enabled. + * + * \param device - Device to use for VDPAU interoperability + * \param vdpDevice - The VdpDevice to interoperate with + * \param vdpGetProcAddress - VDPAU's VdpGetProcAddress function pointer + * + * \return + * ::cudaSuccess, + * ::cudaErrorInvalidDevice, + * ::cudaErrorSetOnActiveProcess + * \notefnerr + * + * \sa ::cudaGraphicsVDPAURegisterVideoSurface, + * ::cudaGraphicsVDPAURegisterOutputSurface, + * ::cudaDeviceReset + */ +extern __host__ cudaError_t CUDARTAPI cudaVDPAUSetVDPAUDevice(int device, VdpDevice vdpDevice, VdpGetProcAddress *vdpGetProcAddress); + +/** + * \brief Register a VdpVideoSurface object + * + * Registers the VdpVideoSurface specified by \p vdpSurface for access by CUDA. + * A handle to the registered object is returned as \p resource. + * The surface's intended usage is specified using \p flags, as follows: + * + * - ::cudaGraphicsMapFlagsNone: Specifies no hints about how this + * resource will be used. It is therefore assumed that this resource will be + * read from and written to by CUDA. This is the default value. + * - ::cudaGraphicsMapFlagsReadOnly: Specifies that CUDA + * will not write to this resource. + * - ::cudaGraphicsMapFlagsWriteDiscard: Specifies that + * CUDA will not read from this resource and will write over the + * entire contents of the resource, so none of the data previously + * stored in the resource will be preserved. + * + * \param resource - Pointer to the returned object handle + * \param vdpSurface - VDPAU object to be registered + * \param flags - Map flags + * + * \return + * ::cudaSuccess, + * ::cudaErrorInvalidDevice, + * ::cudaErrorInvalidValue, + * ::cudaErrorInvalidResourceHandle, + * ::cudaErrorUnknown + * \notefnerr + * + * \sa + * ::cudaVDPAUSetVDPAUDevice, + * ::cudaGraphicsUnregisterResource, + * ::cudaGraphicsSubResourceGetMappedArray, + * ::cuGraphicsVDPAURegisterVideoSurface + */ +extern __host__ cudaError_t CUDARTAPI cudaGraphicsVDPAURegisterVideoSurface(struct cudaGraphicsResource **resource, VdpVideoSurface vdpSurface, unsigned int flags); + +/** + * \brief Register a VdpOutputSurface object + * + * Registers the VdpOutputSurface specified by \p vdpSurface for access by CUDA. + * A handle to the registered object is returned as \p resource. + * The surface's intended usage is specified using \p flags, as follows: + * + * - ::cudaGraphicsMapFlagsNone: Specifies no hints about how this + * resource will be used. It is therefore assumed that this resource will be + * read from and written to by CUDA. This is the default value. 
+ * - ::cudaGraphicsMapFlagsReadOnly: Specifies that CUDA + * will not write to this resource. + * - ::cudaGraphicsMapFlagsWriteDiscard: Specifies that + * CUDA will not read from this resource and will write over the + * entire contents of the resource, so none of the data previously + * stored in the resource will be preserved. + * + * \param resource - Pointer to the returned object handle + * \param vdpSurface - VDPAU object to be registered + * \param flags - Map flags + * + * \return + * ::cudaSuccess, + * ::cudaErrorInvalidDevice, + * ::cudaErrorInvalidValue, + * ::cudaErrorInvalidResourceHandle, + * ::cudaErrorUnknown + * \notefnerr + * + * \sa + * ::cudaVDPAUSetVDPAUDevice, + * ::cudaGraphicsUnregisterResource, + * ::cudaGraphicsSubResourceGetMappedArray, + * ::cuGraphicsVDPAURegisterOutputSurface + */ +extern __host__ cudaError_t CUDARTAPI cudaGraphicsVDPAURegisterOutputSurface(struct cudaGraphicsResource **resource, VdpOutputSurface vdpSurface, unsigned int flags); + +/** @} */ /* END CUDART_VDPAU */ + +#if defined(__cplusplus) +} +#endif /* __cplusplus */ + +#endif /* __CUDA_VDPAU_INTEROP_H__ */ + diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_atomic_functions.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_atomic_functions.h new file mode 100644 index 0000000000000000000000000000000000000000..611b715edfb82b6ee731419c30505676e1f07c52 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_atomic_functions.h @@ -0,0 +1,217 @@ +/* + * Copyright 1993-2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 
2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__DEVICE_ATOMIC_FUNCTIONS_H__) +#define __DEVICE_ATOMIC_FUNCTIONS_H__ + +#if defined(__CUDACC_RTC__) +#define __DEVICE_ATOMIC_FUNCTIONS_DECL__ __device__ +#elif defined(_NVHPC_CUDA) +# define __DEVICE_ATOMIC_FUNCTIONS_DECL__ extern __device__ __cudart_builtin__ +#else /* __CUDACC_RTC__ */ +#define __DEVICE_ATOMIC_FUNCTIONS_DECL__ static __inline__ __device__ +#endif /* __CUDACC_RTC__ */ + +#if defined(__cplusplus) && defined(__CUDACC__) + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +/* Add !defined(_NVHPC_CUDA) to avoid empty function definition in PGI CUDA + * C++ compiler where the macro __CUDA_ARCH__ is not defined. */ +#if !defined(__CUDA_ARCH__) && !defined(_NVHPC_CUDA) +#define __DEF_IF_HOST { } +#else /* !__CUDA_ARCH__ */ +#define __DEF_IF_HOST ; +#endif /* __CUDA_ARCH__ */ + +#if defined(__CUDA_ARCH__) || defined(_NVHPC_CUDA) +extern "C" +{ +extern __device__ __device_builtin__ int __iAtomicAdd(int *address, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicAdd(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ int __iAtomicExch(int *address, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicExch(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ float __fAtomicExch(float *address, float val); +extern __device__ __device_builtin__ int __iAtomicMin(int *address, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicMin(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ int __iAtomicMax(int *address, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicMax(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ unsigned int __uAtomicInc(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ unsigned int __uAtomicDec(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ int __iAtomicAnd(int *address, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicAnd(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ int __iAtomicOr(int *address, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicOr(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ int __iAtomicXor(int *address, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicXor(unsigned int *address, unsigned int val); +extern __device__ __device_builtin__ int __iAtomicCAS(int *address, int compare, int val); +extern __device__ __device_builtin__ unsigned int __uAtomicCAS(unsigned int *address, unsigned int compare, unsigned int val); 
+} +#endif /* __CUDA_ARCH__ || defined(_NVHPC_CUDA) */ + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicAdd(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicAdd(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicSub(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicSub(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicExch(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicExch(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ float atomicExch(float *address, float val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicMin(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicMin(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicMax(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicMax(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicInc(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicDec(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicAnd(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicAnd(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicOr(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicOr(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicXor(int *address, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicXor(unsigned int *address, unsigned int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ int atomicCAS(int *address, int compare, int val) __DEF_IF_HOST + +__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned int atomicCAS(unsigned int *address, unsigned int compare, unsigned int val) __DEF_IF_HOST + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +#if defined(_WIN32) +# define __DEPRECATED__(msg) __declspec(deprecated(msg)) +#elif (defined(__GNUC__) && (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 5 && !defined(__clang__)))) +# define __DEPRECATED__(msg) __attribute__((deprecated)) +#else +# define __DEPRECATED__(msg) __attribute__((deprecated(msg))) +#endif + +#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 700 +#define __WSB_DEPRECATION_MESSAGE(x) #x"() is not valid on compute_70 and above, and should be replaced with "#x"_sync()."\ + "To continue using "#x"(), specify virtual architecture compute_60 when targeting sm_70 and above, for example, using the pair of compiler options: -arch=compute_60 -code=sm_70." +#elif defined(_NVHPC_CUDA) +#define __WSB_DEPRECATION_MESSAGE(x) #x"() is not valid on cc70 and above, and should be replaced with "#x"_sync()." 
+#else
+#define __WSB_DEPRECATION_MESSAGE(x) #x"() is deprecated in favor of "#x"_sync() and may be removed in a future release (Use -Wno-deprecated-declarations to suppress this warning)."
+#endif
+
+extern "C"
+{
+#if defined(__CUDA_ARCH__) || defined(_NVHPC_CUDA)
+extern __device__ __device_builtin__ unsigned long long int __ullAtomicAdd(unsigned long long int *address, unsigned long long int val);
+extern __device__ __device_builtin__ unsigned long long int __ullAtomicExch(unsigned long long int *address, unsigned long long int val);
+extern __device__ __device_builtin__ unsigned long long int __ullAtomicCAS(unsigned long long int *address, unsigned long long int compare, unsigned long long int val);
+#endif /* __CUDA_ARCH__ || _NVHPC_CUDA */
+extern __device__ __device_builtin__ __DEPRECATED__(__WSB_DEPRECATION_MESSAGE(__any)) int __any(int cond);
+extern __device__ __device_builtin__ __DEPRECATED__(__WSB_DEPRECATION_MESSAGE(__all)) int __all(int cond);
+}
+
+
+/*******************************************************************************
+*                                                                              *
+*                                                                              *
+*                                                                              *
+*******************************************************************************/
+
+__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned long long int atomicAdd(unsigned long long int *address, unsigned long long int val) __DEF_IF_HOST
+
+__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned long long int atomicExch(unsigned long long int *address, unsigned long long int val) __DEF_IF_HOST
+
+__DEVICE_ATOMIC_FUNCTIONS_DECL__ unsigned long long int atomicCAS(unsigned long long int *address, unsigned long long int compare, unsigned long long int val) __DEF_IF_HOST
+
+__DEVICE_ATOMIC_FUNCTIONS_DECL__ __DEPRECATED__(__WSB_DEPRECATION_MESSAGE(__any)) bool any(bool cond) __DEF_IF_HOST
+
+__DEVICE_ATOMIC_FUNCTIONS_DECL__ __DEPRECATED__(__WSB_DEPRECATION_MESSAGE(__all)) bool all(bool cond) __DEF_IF_HOST
+
+#undef __DEPRECATED__
+#undef __WSB_DEPRECATION_MESSAGE
+
+#endif /* __cplusplus && __CUDACC__ */
+
+#undef __DEF_IF_HOST
+#undef __DEVICE_ATOMIC_FUNCTIONS_DECL__
+
+#if !defined(__CUDACC_RTC__) && defined(__CUDA_ARCH__)
+#include "device_atomic_functions.hpp"
+#endif /* !__CUDACC_RTC__ && defined(__CUDA_ARCH__) */
+
+#endif /* !__DEVICE_ATOMIC_FUNCTIONS_H__ */
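The header above only declares the built-in atomics; new atomics are conventionally composed from atomicCAS. As a reference illustration (the classic compare-and-swap retry loop from the CUDA C++ Programming Guide, not part of the vendored header; the function name is hypothetical, and sm_60+ devices already provide a native atomicAdd(double*, double)):

// Emulate atomicAdd on a double using the 64-bit atomicCAS declared above.
__device__ double atomicAddDouble(double* address, double val)
{
    unsigned long long int* address_as_ull = (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;

    do {
        assumed = old;
        // Reinterpret the payload bits, add, and publish only if the word
        // still holds the value we read; otherwise retry with the new value.
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);

    return __longlong_as_double(old);
}
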
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_double_functions.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_double_functions.h
new file mode 100644
index 0000000000000000000000000000000000000000..82b25e59b40aeaf1e475ff3179e49640a44918b8
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_double_functions.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright 1993-2018 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#if !defined(__CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__)
+#if defined(_MSC_VER)
+#pragma message("device_double_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead.")
+#else
+#warning "device_double_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead."
+#endif
+#define __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__
+#define __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DEVICE_DOUBLE_FUNCTIONS_H_WRAPPER__
+#endif
+
+#include "crt/device_double_functions.h"
+
+#if defined(__UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DEVICE_DOUBLE_FUNCTIONS_H_WRAPPER__)
+#undef __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__
+#undef __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DEVICE_DOUBLE_FUNCTIONS_H_WRAPPER__
+#endif
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_functions.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_functions.h
new file mode 100644
index 0000000000000000000000000000000000000000..0094cc9a0a57f53f47421a8ecc400fb84c26babe
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/device_functions.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright 1993-2018 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__) +#if defined(_MSC_VER) +#pragma message("device_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead.") +#else +#warning "device_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." 
+#endif
+#define __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__
+#define __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DEVICE_FUNCTIONS_H_WRAPPER__
+#endif
+
+#include "crt/device_functions.h"
+
+#if defined(__UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DEVICE_FUNCTIONS_H_WRAPPER__)
+#undef __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__
+#undef __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DEVICE_FUNCTIONS_H_WRAPPER__
+#endif
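Both wrapper headers above exist only to keep legacy includes compiling while the warning steers callers away. The migration they ask for is a one-line change, sketched here for reference (illustrative, not vendored content):

// Before: #include <device_functions.h>   // deprecated shim, emits the warning
// After: include the public umbrella header, which provides the same
// declarations via the crt/ implementation headers.
#include <cuda_runtime.h>
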
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/driver_types.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/driver_types.h
new file mode 100644
index 0000000000000000000000000000000000000000..988702c0e7c6b796c286ef3237c8af3341bad846
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/driver_types.h
@@ -0,0 +1,3162 @@
+/*
+ * Copyright 1993-2018 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#if !defined(__DRIVER_TYPES_H__)
+#define __DRIVER_TYPES_H__
+
+#if !defined(__CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__)
+#define __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__
+#define __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DRIVER_TYPES_H__
+#endif
+
+#ifndef __DOXYGEN_ONLY__
+#include "crt/host_defines.h"
+#endif
+#include "vector_types.h"
+
+
+
+/**
+ * \defgroup CUDART_TYPES Data types used by CUDA Runtime
+ * \ingroup CUDART
+ *
+ * @{
+ */
+
+/*******************************************************************************
+*                                                                              *
+*  TYPE DEFINITIONS USED BY RUNTIME API                                        *
+*                                                                              *
+*******************************************************************************/
+
+#if !defined(__CUDA_INTERNAL_COMPILATION__)
+
+#if !defined(__CUDACC_RTC__)
+#include <limits.h>
+#include <stddef.h>
+#endif /* !defined(__CUDACC_RTC__) */
+
+#define cudaHostAllocDefault                0x00  /**< Default page-locked allocation flag */
+#define cudaHostAllocPortable               0x01  /**< Pinned memory accessible by all CUDA contexts */
+#define cudaHostAllocMapped                 0x02  /**< Map allocation into device space */
+#define cudaHostAllocWriteCombined          0x04  /**< Write-combined memory */
+
+#define cudaHostRegisterDefault             0x00  /**< Default host memory registration flag */
+#define cudaHostRegisterPortable            0x01  /**< Pinned memory accessible by all CUDA contexts */
+#define cudaHostRegisterMapped              0x02  /**< Map registered memory into device space */
+#define cudaHostRegisterIoMemory            0x04  /**< Memory-mapped I/O space */
+#define cudaHostRegisterReadOnly            0x08  /**< Memory-mapped read-only */
+
+#define cudaPeerAccessDefault               0x00  /**< Default peer addressing enable flag */
+
+#define cudaStreamDefault                   0x00  /**< Default stream flag */
+#define cudaStreamNonBlocking               0x01  /**< Stream does not synchronize with stream 0 (the NULL stream) */
+
+ /**
+ * Legacy stream handle
+ *
+ * Stream handle that can be passed as a cudaStream_t to use an implicit stream
+ * with legacy synchronization behavior.
+ *
+ * See details of the \link_sync_behavior
+ */
+#define cudaStreamLegacy                    ((cudaStream_t)0x1)
+
+/**
+ * Per-thread stream handle
+ *
+ * Stream handle that can be passed as a cudaStream_t to use an implicit stream
+ * with per-thread synchronization behavior.
+ *
+ * See details of the \link_sync_behavior
+ */
+#define cudaStreamPerThread                 ((cudaStream_t)0x2)
+
+#define cudaEventDefault                    0x00  /**< Default event flag */
+#define cudaEventBlockingSync               0x01  /**< Event uses blocking synchronization */
+#define cudaEventDisableTiming              0x02  /**< Event will not record timing data */
+#define cudaEventInterprocess               0x04  /**< Event is suitable for interprocess use.
cudaEventDisableTiming must be set */ + +#define cudaEventRecordDefault 0x00 /**< Default event record flag */ +#define cudaEventRecordExternal 0x01 /**< Event is captured in the graph as an external event node when performing stream capture */ + +#define cudaEventWaitDefault 0x00 /**< Default event wait flag */ +#define cudaEventWaitExternal 0x01 /**< Event is captured in the graph as an external event node when performing stream capture */ + +#define cudaDeviceScheduleAuto 0x00 /**< Device flag - Automatic scheduling */ +#define cudaDeviceScheduleSpin 0x01 /**< Device flag - Spin default scheduling */ +#define cudaDeviceScheduleYield 0x02 /**< Device flag - Yield default scheduling */ +#define cudaDeviceScheduleBlockingSync 0x04 /**< Device flag - Use blocking synchronization */ +#define cudaDeviceBlockingSync 0x04 /**< Device flag - Use blocking synchronization + * \deprecated This flag was deprecated as of CUDA 4.0 and + * replaced with ::cudaDeviceScheduleBlockingSync. */ +#define cudaDeviceScheduleMask 0x07 /**< Device schedule flags mask */ +#define cudaDeviceMapHost 0x08 /**< Device flag - Support mapped pinned allocations */ +#define cudaDeviceLmemResizeToMax 0x10 /**< Device flag - Keep local memory allocation after launch */ +#define cudaDeviceSyncMemops 0x80 /**< Device flag - Use synchronous behavior for cudaMemcpy/cudaMemset */ +#define cudaDeviceMask 0xff /**< Device flags mask */ + +#define cudaArrayDefault 0x00 /**< Default CUDA array allocation flag */ +#define cudaArrayLayered 0x01 /**< Must be set in cudaMalloc3DArray to create a layered CUDA array */ +#define cudaArraySurfaceLoadStore 0x02 /**< Must be set in cudaMallocArray or cudaMalloc3DArray in order to bind surfaces to the CUDA array */ +#define cudaArrayCubemap 0x04 /**< Must be set in cudaMalloc3DArray to create a cubemap CUDA array */ +#define cudaArrayTextureGather 0x08 /**< Must be set in cudaMallocArray or cudaMalloc3DArray in order to perform texture gather operations on the CUDA array */ +#define cudaArrayColorAttachment 0x20 /**< Must be set in cudaExternalMemoryGetMappedMipmappedArray if the mipmapped array is used as a color target in a graphics API */ +#define cudaArraySparse 0x40 /**< Must be set in cudaMallocArray, cudaMalloc3DArray or cudaMallocMipmappedArray in order to create a sparse CUDA array or CUDA mipmapped array */ +#define cudaArrayDeferredMapping 0x80 /**< Must be set in cudaMallocArray, cudaMalloc3DArray or cudaMallocMipmappedArray in order to create a deferred mapping CUDA array or CUDA mipmapped array */ + +#define cudaIpcMemLazyEnablePeerAccess 0x01 /**< Automatically enable peer access between remote devices as needed */ + +#define cudaMemAttachGlobal 0x01 /**< Memory can be accessed by any stream on any device*/ +#define cudaMemAttachHost 0x02 /**< Memory cannot be accessed by any stream on any device */ +#define cudaMemAttachSingle 0x04 /**< Memory can only be accessed by a single stream on the associated device */ + +#define cudaOccupancyDefault 0x00 /**< Default behavior */ +#define cudaOccupancyDisableCachingOverride 0x01 /**< Assume global caching is enabled and cannot be automatically turned off */ + +#define cudaCpuDeviceId ((int)-1) /**< Device id that represents the CPU */ +#define cudaInvalidDeviceId ((int)-2) /**< Device id that represents an invalid device */ +#define cudaInitDeviceFlagsAreValid 0x01 /**< Tell the CUDA runtime that DeviceFlags is being set in cudaInitDevice call */ +/** + * If set, each kernel launched as part of ::cudaLaunchCooperativeKernelMultiDevice 
only + * waits for prior work in the stream corresponding to that GPU to complete before the + * kernel begins execution. + */ +#define cudaCooperativeLaunchMultiDeviceNoPreSync 0x01 + +/** + * If set, any subsequent work pushed in a stream that participated in a call to + * ::cudaLaunchCooperativeKernelMultiDevice will only wait for the kernel launched on + * the GPU corresponding to that stream to complete before it begins execution. + */ +#define cudaCooperativeLaunchMultiDeviceNoPostSync 0x02 + +#endif /* !__CUDA_INTERNAL_COMPILATION__ */ + +/** \cond impl_private */ +#if defined(__DOXYGEN_ONLY__) || defined(CUDA_ENABLE_DEPRECATED) +#define __CUDA_DEPRECATED +#elif defined(_MSC_VER) +#define __CUDA_DEPRECATED __declspec(deprecated) +#elif defined(__GNUC__) +#define __CUDA_DEPRECATED __attribute__((deprecated)) +#else +#define __CUDA_DEPRECATED +#endif +/** \endcond impl_private */ + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +/** + * CUDA error types + */ +enum __device_builtin__ cudaError +{ + /** + * The API call returned with no errors. In the case of query calls, this + * also means that the operation being queried is complete (see + * ::cudaEventQuery() and ::cudaStreamQuery()). + */ + cudaSuccess = 0, + + /** + * This indicates that one or more of the parameters passed to the API call + * is not within an acceptable range of values. + */ + cudaErrorInvalidValue = 1, + + /** + * The API call failed because it was unable to allocate enough memory to + * perform the requested operation. + */ + cudaErrorMemoryAllocation = 2, + + /** + * The API call failed because the CUDA driver and runtime could not be + * initialized. + */ + cudaErrorInitializationError = 3, + + /** + * This indicates that a CUDA Runtime API call cannot be executed because + * it is being called during process shut down, at a point in time after + * CUDA driver has been unloaded. + */ + cudaErrorCudartUnloading = 4, + + /** + * This indicates profiler is not initialized for this run. This can + * happen when the application is running with external profiling tools + * like visual profiler. + */ + cudaErrorProfilerDisabled = 5, + + /** + * \deprecated + * This error return is deprecated as of CUDA 5.0. It is no longer an error + * to attempt to enable/disable the profiling via ::cudaProfilerStart or + * ::cudaProfilerStop without initialization. + */ + cudaErrorProfilerNotInitialized = 6, + + /** + * \deprecated + * This error return is deprecated as of CUDA 5.0. It is no longer an error + * to call cudaProfilerStart() when profiling is already enabled. + */ + cudaErrorProfilerAlreadyStarted = 7, + + /** + * \deprecated + * This error return is deprecated as of CUDA 5.0. It is no longer an error + * to call cudaProfilerStop() when profiling is already disabled. + */ + cudaErrorProfilerAlreadyStopped = 8, + + /** + * This indicates that a kernel launch is requesting resources that can + * never be satisfied by the current device. Requesting more shared memory + * per block than the device supports will trigger this error, as will + * requesting too many threads or blocks. See ::cudaDeviceProp for more + * device limitations. + */ + cudaErrorInvalidConfiguration = 9, + + /** + * This indicates that one or more of the pitch-related parameters passed + * to the API call is not within the acceptable range for pitch. 
+ */ + cudaErrorInvalidPitchValue = 12, + + /** + * This indicates that the symbol name/identifier passed to the API call + * is not a valid name or identifier. + */ + cudaErrorInvalidSymbol = 13, + + /** + * This indicates that at least one host pointer passed to the API call is + * not a valid host pointer. + * \deprecated + * This error return is deprecated as of CUDA 10.1. + */ + cudaErrorInvalidHostPointer = 16, + + /** + * This indicates that at least one device pointer passed to the API call is + * not a valid device pointer. + * \deprecated + * This error return is deprecated as of CUDA 10.1. + */ + cudaErrorInvalidDevicePointer = 17, + + /** + * This indicates that the texture passed to the API call is not a valid + * texture. + */ + cudaErrorInvalidTexture = 18, + + /** + * This indicates that the texture binding is not valid. This occurs if you + * call ::cudaGetTextureAlignmentOffset() with an unbound texture. + */ + cudaErrorInvalidTextureBinding = 19, + + /** + * This indicates that the channel descriptor passed to the API call is not + * valid. This occurs if the format is not one of the formats specified by + * ::cudaChannelFormatKind, or if one of the dimensions is invalid. + */ + cudaErrorInvalidChannelDescriptor = 20, + + /** + * This indicates that the direction of the memcpy passed to the API call is + * not one of the types specified by ::cudaMemcpyKind. + */ + cudaErrorInvalidMemcpyDirection = 21, + + /** + * This indicated that the user has taken the address of a constant variable, + * which was forbidden up until the CUDA 3.1 release. + * \deprecated + * This error return is deprecated as of CUDA 3.1. Variables in constant + * memory may now have their address taken by the runtime via + * ::cudaGetSymbolAddress(). + */ + cudaErrorAddressOfConstant = 22, + + /** + * This indicated that a texture fetch was not able to be performed. + * This was previously used for device emulation of texture operations. + * \deprecated + * This error return is deprecated as of CUDA 3.1. Device emulation mode was + * removed with the CUDA 3.1 release. + */ + cudaErrorTextureFetchFailed = 23, + + /** + * This indicated that a texture was not bound for access. + * This was previously used for device emulation of texture operations. + * \deprecated + * This error return is deprecated as of CUDA 3.1. Device emulation mode was + * removed with the CUDA 3.1 release. + */ + cudaErrorTextureNotBound = 24, + + /** + * This indicated that a synchronization operation had failed. + * This was previously used for some device emulation functions. + * \deprecated + * This error return is deprecated as of CUDA 3.1. Device emulation mode was + * removed with the CUDA 3.1 release. + */ + cudaErrorSynchronizationError = 25, + + /** + * This indicates that a non-float texture was being accessed with linear + * filtering. This is not supported by CUDA. + */ + cudaErrorInvalidFilterSetting = 26, + + /** + * This indicates that an attempt was made to read a non-float texture as a + * normalized float. This is not supported by CUDA. + */ + cudaErrorInvalidNormSetting = 27, + + /** + * Mixing of device and device emulation code was not allowed. + * \deprecated + * This error return is deprecated as of CUDA 3.1. Device emulation mode was + * removed with the CUDA 3.1 release. + */ + cudaErrorMixedDeviceExecution = 28, + + /** + * This indicates that the API call is not yet implemented. Production + * releases of CUDA will never return this error. 
+     * \deprecated
+     * This error return is deprecated as of CUDA 4.1.
+     */
+    cudaErrorNotYetImplemented            = 31,
+
+    /**
+     * This indicated that an emulated device pointer exceeded the 32-bit address
+     * range.
+     * \deprecated
+     * This error return is deprecated as of CUDA 3.1. Device emulation mode was
+     * removed with the CUDA 3.1 release.
+     */
+    cudaErrorMemoryValueTooLarge          = 32,
+
+    /**
+     * This indicates that the CUDA driver that the application has loaded is a
+     * stub library. Applications that run with the stub rather than a real
+     * driver loaded will result in CUDA API returning this error.
+     */
+    cudaErrorStubLibrary                  = 34,
+
+    /**
+     * This indicates that the installed NVIDIA CUDA driver is older than the
+     * CUDA runtime library. This is not a supported configuration. Users should
+     * install an updated NVIDIA display driver to allow the application to run.
+     */
+    cudaErrorInsufficientDriver           = 35,
+
+    /**
+     * This indicates that the API call requires a newer CUDA driver than the one
+     * currently installed. Users should install an updated NVIDIA CUDA driver
+     * to allow the API call to succeed.
+     */
+    cudaErrorCallRequiresNewerDriver      = 36,
+
+    /**
+     * This indicates that the surface passed to the API call is not a valid
+     * surface.
+     */
+    cudaErrorInvalidSurface               = 37,
+
+    /**
+     * This indicates that multiple global or constant variables (across separate
+     * CUDA source files in the application) share the same string name.
+     */
+    cudaErrorDuplicateVariableName        = 43,
+
+    /**
+     * This indicates that multiple textures (across separate CUDA source
+     * files in the application) share the same string name.
+     */
+    cudaErrorDuplicateTextureName         = 44,
+
+    /**
+     * This indicates that multiple surfaces (across separate CUDA source
+     * files in the application) share the same string name.
+     */
+    cudaErrorDuplicateSurfaceName         = 45,
+
+    /**
+     * This indicates that all CUDA devices are busy or unavailable at the current
+     * time. Devices are often busy/unavailable due to use of
+     * ::cudaComputeModeProhibited, ::cudaComputeModeExclusiveProcess, or when long
+     * running CUDA kernels have filled up the GPU and are blocking new work
+     * from starting. They can also be unavailable due to memory constraints
+     * on a device that already has active CUDA work being performed.
+     */
+    cudaErrorDevicesUnavailable           = 46,
+
+    /**
+     * This indicates that the current context is not compatible with
+     * the CUDA Runtime. This can only occur if you are using CUDA
+     * Runtime/Driver interoperability and have created an existing Driver
+     * context using the driver API. The Driver context may be incompatible
+     * either because the Driver context was created using an older version
+     * of the API, because the Runtime API call expects a primary driver
+     * context and the Driver context is not primary, or because the Driver
+     * context has been destroyed. Please see \ref CUDART_DRIVER "Interactions
+     * with the CUDA Driver API" for more information.
+     */
+    cudaErrorIncompatibleDriverContext    = 49,
+
+    /**
+     * The device function being invoked (usually via ::cudaLaunchKernel()) was not
+     * previously configured via the ::cudaConfigureCall() function.
+     */
+    cudaErrorMissingConfiguration         = 52,
+
+    /**
+     * This indicated that a previous kernel launch failed. This was previously
+     * used for device emulation of kernel launches.
+     * \deprecated
+     * This error return is deprecated as of CUDA 3.1. Device emulation mode was
+     * removed with the CUDA 3.1 release.
+     */
+    cudaErrorPriorLaunchFailure           = 53,
+
+    /**
+     * This error indicates that a device runtime grid launch did not occur
+     * because the depth of the child grid would exceed the maximum supported
+     * number of nested grid launches.
+     */
+    cudaErrorLaunchMaxDepthExceeded       = 65,
+
+    /**
+     * This error indicates that a grid launch did not occur because the kernel
+     * uses file-scoped textures which are unsupported by the device runtime.
+     * Kernels launched via the device runtime only support textures created with
+     * the Texture Object API.
+     */
+    cudaErrorLaunchFileScopedTex          = 66,
+
+    /**
+     * This error indicates that a grid launch did not occur because the kernel
+     * uses file-scoped surfaces which are unsupported by the device runtime.
+     * Kernels launched via the device runtime only support surfaces created with
+     * the Surface Object API.
+     */
+    cudaErrorLaunchFileScopedSurf         = 67,
+
+    /**
+     * This error indicates that a call to ::cudaDeviceSynchronize made from
+     * the device runtime failed because the call was made at grid depth greater
+     * than either the default (2 levels of grids) or user specified device
+     * limit ::cudaLimitDevRuntimeSyncDepth. To be able to synchronize on
+     * launched grids at a greater depth successfully, the maximum nested
+     * depth at which ::cudaDeviceSynchronize will be called must be specified
+     * with the ::cudaLimitDevRuntimeSyncDepth limit to the ::cudaDeviceSetLimit
+     * API before the host-side launch of a kernel using the device runtime.
+     * Keep in mind that additional levels of sync depth require the runtime
+     * to reserve large amounts of device memory that cannot be used for
+     * user allocations. Note that ::cudaDeviceSynchronize made from device
+     * runtime is only supported on devices of compute capability < 9.0.
+     */
+    cudaErrorSyncDepthExceeded            = 68,
+
+    /**
+     * This error indicates that a device runtime grid launch failed because
+     * the launch would exceed the limit ::cudaLimitDevRuntimePendingLaunchCount.
+     * For this launch to proceed successfully, ::cudaDeviceSetLimit must be
+     * called to set the ::cudaLimitDevRuntimePendingLaunchCount to be higher
+     * than the upper bound of outstanding launches that can be issued to the
+     * device runtime. Keep in mind that raising the limit of pending device
+     * runtime launches will require the runtime to reserve device memory that
+     * cannot be used for user allocations.
+     */
+    cudaErrorLaunchPendingCountExceeded   = 69,
+
+    /**
+     * The requested device function does not exist or is not compiled for the
+     * proper device architecture.
+     */
+    cudaErrorInvalidDeviceFunction        = 98,
+
+    /**
+     * This indicates that no CUDA-capable devices were detected by the installed
+     * CUDA driver.
+     */
+    cudaErrorNoDevice                     = 100,
+
+    /**
+     * This indicates that the device ordinal supplied by the user does not
+     * correspond to a valid CUDA device or that the action requested is
+     * invalid for the specified device.
+     */
+    cudaErrorInvalidDevice                = 101,
+
+    /**
+     * This indicates that the device doesn't have a valid Grid License.
+     */
+    cudaErrorDeviceNotLicensed            = 102,
+
+    /**
+     * By default, the CUDA runtime may perform a minimal set of self-tests,
+     * as well as CUDA driver tests, to establish the validity of both.
+     * Introduced in CUDA 11.2, this error return indicates that at least one
+     * of these tests has failed and the validity of either the runtime
+     * or the driver could not be established.
+ */ + cudaErrorSoftwareValidityNotEstablished = 103, + + /** + * This indicates an internal startup failure in the CUDA runtime. + */ + cudaErrorStartupFailure = 127, + + /** + * This indicates that the device kernel image is invalid. + */ + cudaErrorInvalidKernelImage = 200, + + /** + * This most frequently indicates that there is no context bound to the + * current thread. This can also be returned if the context passed to an + * API call is not a valid handle (such as a context that has had + * ::cuCtxDestroy() invoked on it). This can also be returned if a user + * mixes different API versions (i.e. 3010 context with 3020 API calls). + * See ::cuCtxGetApiVersion() for more details. + */ + cudaErrorDeviceUninitialized = 201, + + /** + * This indicates that the buffer object could not be mapped. + */ + cudaErrorMapBufferObjectFailed = 205, + + /** + * This indicates that the buffer object could not be unmapped. + */ + cudaErrorUnmapBufferObjectFailed = 206, + + /** + * This indicates that the specified array is currently mapped and thus + * cannot be destroyed. + */ + cudaErrorArrayIsMapped = 207, + + /** + * This indicates that the resource is already mapped. + */ + cudaErrorAlreadyMapped = 208, + + /** + * This indicates that there is no kernel image available that is suitable + * for the device. This can occur when a user specifies code generation + * options for a particular CUDA source file that do not include the + * corresponding device configuration. + */ + cudaErrorNoKernelImageForDevice = 209, + + /** + * This indicates that a resource has already been acquired. + */ + cudaErrorAlreadyAcquired = 210, + + /** + * This indicates that a resource is not mapped. + */ + cudaErrorNotMapped = 211, + + /** + * This indicates that a mapped resource is not available for access as an + * array. + */ + cudaErrorNotMappedAsArray = 212, + + /** + * This indicates that a mapped resource is not available for access as a + * pointer. + */ + cudaErrorNotMappedAsPointer = 213, + + /** + * This indicates that an uncorrectable ECC error was detected during + * execution. + */ + cudaErrorECCUncorrectable = 214, + + /** + * This indicates that the ::cudaLimit passed to the API call is not + * supported by the active device. + */ + cudaErrorUnsupportedLimit = 215, + + /** + * This indicates that a call tried to access an exclusive-thread device that + * is already in use by a different thread. + */ + cudaErrorDeviceAlreadyInUse = 216, + + /** + * This error indicates that P2P access is not supported across the given + * devices. + */ + cudaErrorPeerAccessUnsupported = 217, + + /** + * A PTX compilation failed. The runtime may fall back to compiling PTX if + * an application does not contain a suitable binary for the current device. + */ + cudaErrorInvalidPtx = 218, + + /** + * This indicates an error with the OpenGL or DirectX context. + */ + cudaErrorInvalidGraphicsContext = 219, + + /** + * This indicates that an uncorrectable NVLink error was detected during the + * execution. + */ + cudaErrorNvlinkUncorrectable = 220, + + /** + * This indicates that the PTX JIT compiler library was not found. The JIT Compiler + * library is used for PTX compilation. The runtime may fall back to compiling PTX + * if an application does not contain a suitable binary for the current device. + */ + cudaErrorJitCompilerNotFound = 221, + + /** + * This indicates that the provided PTX was compiled with an unsupported toolchain. 
+ * The most common reason for this, is the PTX was generated by a compiler newer + * than what is supported by the CUDA driver and PTX JIT compiler. + */ + cudaErrorUnsupportedPtxVersion = 222, + + /** + * This indicates that the JIT compilation was disabled. The JIT compilation compiles + * PTX. The runtime may fall back to compiling PTX if an application does not contain + * a suitable binary for the current device. + */ + cudaErrorJitCompilationDisabled = 223, + + /** + * This indicates that the provided execution affinity is not supported by the device. + */ + cudaErrorUnsupportedExecAffinity = 224, + + /** + * This indicates that the code to be compiled by the PTX JIT contains + * unsupported call to cudaDeviceSynchronize. + */ + cudaErrorUnsupportedDevSideSync = 225, + + /** + * This indicates that the device kernel source is invalid. + */ + cudaErrorInvalidSource = 300, + + /** + * This indicates that the file specified was not found. + */ + cudaErrorFileNotFound = 301, + + /** + * This indicates that a link to a shared object failed to resolve. + */ + cudaErrorSharedObjectSymbolNotFound = 302, + + /** + * This indicates that initialization of a shared object failed. + */ + cudaErrorSharedObjectInitFailed = 303, + + /** + * This error indicates that an OS call failed. + */ + cudaErrorOperatingSystem = 304, + + /** + * This indicates that a resource handle passed to the API call was not + * valid. Resource handles are opaque types like ::cudaStream_t and + * ::cudaEvent_t. + */ + cudaErrorInvalidResourceHandle = 400, + + /** + * This indicates that a resource required by the API call is not in a + * valid state to perform the requested operation. + */ + cudaErrorIllegalState = 401, + + /** + * This indicates that a named symbol was not found. Examples of symbols + * are global/constant variable names, driver function names, texture names, + * and surface names. + */ + cudaErrorSymbolNotFound = 500, + + /** + * This indicates that asynchronous operations issued previously have not + * completed yet. This result is not actually an error, but must be indicated + * differently than ::cudaSuccess (which indicates completion). Calls that + * may return this value include ::cudaEventQuery() and ::cudaStreamQuery(). + */ + cudaErrorNotReady = 600, + + /** + * The device encountered a load or store instruction on an invalid memory address. + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. + */ + cudaErrorIllegalAddress = 700, + + /** + * This indicates that a launch did not occur because it did not have + * appropriate resources. Although this error is similar to + * ::cudaErrorInvalidConfiguration, this error usually indicates that the + * user has attempted to pass too many arguments to the device kernel, or the + * kernel launch specifies too many threads for the kernel's register count. + */ + cudaErrorLaunchOutOfResources = 701, + + /** + * This indicates that the device kernel took too long to execute. This can + * only occur if timeouts are enabled - see the device property + * \ref ::cudaDeviceProp::kernelExecTimeoutEnabled "kernelExecTimeoutEnabled" + * for more information. + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. 
+ */ + cudaErrorLaunchTimeout = 702, + + /** + * This error indicates a kernel launch that uses an incompatible texturing + * mode. + */ + cudaErrorLaunchIncompatibleTexturing = 703, + + /** + * This error indicates that a call to ::cudaDeviceEnablePeerAccess() is + * trying to re-enable peer addressing on from a context which has already + * had peer addressing enabled. + */ + cudaErrorPeerAccessAlreadyEnabled = 704, + + /** + * This error indicates that ::cudaDeviceDisablePeerAccess() is trying to + * disable peer addressing which has not been enabled yet via + * ::cudaDeviceEnablePeerAccess(). + */ + cudaErrorPeerAccessNotEnabled = 705, + + /** + * This indicates that the user has called ::cudaSetValidDevices(), + * ::cudaSetDeviceFlags(), ::cudaD3D9SetDirect3DDevice(), + * ::cudaD3D10SetDirect3DDevice, ::cudaD3D11SetDirect3DDevice(), or + * ::cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by + * calling non-device management operations (allocating memory and + * launching kernels are examples of non-device management operations). + * This error can also be returned if using runtime/driver + * interoperability and there is an existing ::CUcontext active on the + * host thread. + */ + cudaErrorSetOnActiveProcess = 708, + + /** + * This error indicates that the context current to the calling thread + * has been destroyed using ::cuCtxDestroy, or is a primary context which + * has not yet been initialized. + */ + cudaErrorContextIsDestroyed = 709, + + /** + * An assert triggered in device code during kernel execution. The device + * cannot be used again. All existing allocations are invalid. To continue + * using CUDA, the process must be terminated and relaunched. + */ + cudaErrorAssert = 710, + + /** + * This error indicates that the hardware resources required to enable + * peer access have been exhausted for one or more of the devices + * passed to ::cudaEnablePeerAccess(). + */ + cudaErrorTooManyPeers = 711, + + /** + * This error indicates that the memory range passed to ::cudaHostRegister() + * has already been registered. + */ + cudaErrorHostMemoryAlreadyRegistered = 712, + + /** + * This error indicates that the pointer passed to ::cudaHostUnregister() + * does not correspond to any currently registered memory region. + */ + cudaErrorHostMemoryNotRegistered = 713, + + /** + * Device encountered an error in the call stack during kernel execution, + * possibly due to stack corruption or exceeding the stack size limit. + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. + */ + cudaErrorHardwareStackError = 714, + + /** + * The device encountered an illegal instruction during kernel execution + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. + */ + cudaErrorIllegalInstruction = 715, + + /** + * The device encountered a load or store instruction + * on a memory address which is not aligned. + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. 
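+ *
+ * For illustration only, a sketch of a typical cause (\p buf is a
+ * hypothetical device allocation):
+ * \code
+ * char *buf;
+ * cudaMalloc((void **)&buf, 64);
+ * int *p = (int *)(buf + 1);   // 4-byte type at a 1-byte offset
+ * // dereferencing p in a kernel may raise cudaErrorMisalignedAddress
+ * \endcode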
+ */ + cudaErrorMisalignedAddress = 716, + + /** + * While executing a kernel, the device encountered an instruction + * which can only operate on memory locations in certain address spaces + * (global, shared, or local), but was supplied a memory address not + * belonging to an allowed address space. + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. + */ + cudaErrorInvalidAddressSpace = 717, + + /** + * The device encountered an invalid program counter. + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. + */ + cudaErrorInvalidPc = 718, + + /** + * An exception occurred on the device while executing a kernel. Common + * causes include dereferencing an invalid device pointer and accessing + * out of bounds shared memory. Less common cases can be system specific - more + * information about these cases can be found in the system specific user guide. + * This leaves the process in an inconsistent state and any further CUDA work + * will return the same error. To continue using CUDA, the process must be terminated + * and relaunched. + */ + cudaErrorLaunchFailure = 719, + + /** + * This error indicates that the number of blocks launched per grid for a kernel that was + * launched via either ::cudaLaunchCooperativeKernel or ::cudaLaunchCooperativeKernelMultiDevice + * exceeds the maximum number of blocks as allowed by ::cudaOccupancyMaxActiveBlocksPerMultiprocessor + * or ::cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors + * as specified by the device attribute ::cudaDevAttrMultiProcessorCount. + */ + cudaErrorCooperativeLaunchTooLarge = 720, + + /** + * This error indicates the attempted operation is not permitted. + */ + cudaErrorNotPermitted = 800, + + /** + * This error indicates the attempted operation is not supported + * on the current system or device. + */ + cudaErrorNotSupported = 801, + + /** + * This error indicates that the system is not yet ready to start any CUDA + * work. To continue using CUDA, verify the system configuration is in a + * valid state and all required driver daemons are actively running. + * More information about this error can be found in the system specific + * user guide. + */ + cudaErrorSystemNotReady = 802, + + /** + * This error indicates that there is a mismatch between the versions of + * the display driver and the CUDA driver. Refer to the compatibility documentation + * for supported versions. + */ + cudaErrorSystemDriverMismatch = 803, + + /** + * This error indicates that the system was upgraded to run with forward compatibility + * but the visible hardware detected by CUDA does not support this configuration. + * Refer to the compatibility documentation for the supported hardware matrix or ensure + * that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES + * environment variable. + */ + cudaErrorCompatNotSupportedOnDevice = 804, + + /** + * This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server. + */ + cudaErrorMpsConnectionFailed = 805, + + /** + * This error indicates that the remote procedural call between the MPS server and the MPS client failed. 
+ */
+ cudaErrorMpsRpcFailure = 806,
+
+ /**
+ * This error indicates that the MPS server is not ready to accept new MPS client requests.
+ * This error can be returned when the MPS server is in the process of recovering from a fatal failure.
+ */
+ cudaErrorMpsServerNotReady = 807,
+
+ /**
+ * This error indicates that the hardware resources required to create an MPS client have been exhausted.
+ */
+ cudaErrorMpsMaxClientsReached = 808,
+
+ /**
+ * This error indicates that the hardware resources required to support device connections have been exhausted.
+ */
+ cudaErrorMpsMaxConnectionsReached = 809,
+
+ /**
+ * This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.
+ */
+ cudaErrorMpsClientTerminated = 810,
+
+ /**
+ * This error indicates that the program is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.
+ */
+ cudaErrorCdpNotSupported = 811,
+
+ /**
+ * This error indicates that the program contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.
+ */
+ cudaErrorCdpVersionMismatch = 812,
+
+ /**
+ * The operation is not permitted when the stream is capturing.
+ */
+ cudaErrorStreamCaptureUnsupported = 900,
+
+ /**
+ * The current capture sequence on the stream has been invalidated due to
+ * a previous error.
+ */
+ cudaErrorStreamCaptureInvalidated = 901,
+
+ /**
+ * The operation would have resulted in a merge of two independent capture
+ * sequences.
+ */
+ cudaErrorStreamCaptureMerge = 902,
+
+ /**
+ * The capture was not initiated in this stream.
+ */
+ cudaErrorStreamCaptureUnmatched = 903,
+
+ /**
+ * The capture sequence contains a fork that was not joined to the primary
+ * stream.
+ */
+ cudaErrorStreamCaptureUnjoined = 904,
+
+ /**
+ * A dependency would have been created which crosses the capture sequence
+ * boundary. Only implicit in-stream ordering dependencies are allowed to
+ * cross the boundary.
+ */
+ cudaErrorStreamCaptureIsolation = 905,
+
+ /**
+ * The operation would have resulted in a disallowed implicit dependency on
+ * a current capture sequence from cudaStreamLegacy.
+ */
+ cudaErrorStreamCaptureImplicit = 906,
+
+ /**
+ * The operation is not permitted on an event which was last recorded in a
+ * capturing stream.
+ */
+ cudaErrorCapturedEvent = 907,
+
+ /**
+ * A stream capture sequence not initiated with the ::cudaStreamCaptureModeRelaxed
+ * argument to ::cudaStreamBeginCapture was passed to ::cudaStreamEndCapture in a
+ * different thread.
+ */
+ cudaErrorStreamCaptureWrongThread = 908,
+
+ /**
+ * This indicates that the wait operation has timed out.
+ */
+ cudaErrorTimeout = 909,
+
+ /**
+ * This error indicates that the graph update was not performed because it included
+ * changes which violated constraints specific to instantiated graph update.
+ */
+ cudaErrorGraphExecUpdateFailure = 910,
+
+ /**
+ * This indicates that an async error has occurred in a device outside of CUDA.
+ * If CUDA was waiting for an external device's signal before consuming shared data,
+ * the external device signaled an error indicating that the data is not valid for
+ * consumption. This leaves the process in an inconsistent state and any further CUDA
+ * work will return the same error. To continue using CUDA, the process must be
+ * terminated and relaunched.
+ */
+ cudaErrorExternalDevice = 911,
+
+ /**
+ * This indicates that a kernel launch error has occurred due to cluster
+ * misconfiguration.
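+ *
+ * A hedged sketch of a cluster launch through the extensible launch API
+ * (the kernel, argument, and dim3 values below are hypothetical; see
+ * ::cudaLaunchKernelEx):
+ * \code
+ * cudaLaunchConfig_t cfg = {0};
+ * cudaLaunchAttribute attr;
+ * attr.id = cudaLaunchAttributeClusterDimension;
+ * attr.val.clusterDim.x = 2;   // cluster dims must evenly divide the grid
+ * attr.val.clusterDim.y = 1;
+ * attr.val.clusterDim.z = 1;
+ * cfg.gridDim  = grid;
+ * cfg.blockDim = block;
+ * cfg.attrs    = &attr;
+ * cfg.numAttrs = 1;
+ * cudaLaunchKernelEx(&cfg, myKernel, myArg);
+ * \endcode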
+ */ + cudaErrorInvalidClusterSize = 912, + + /** + * This indicates that an unknown internal error has occurred. + */ + cudaErrorUnknown = 999, + + /** + * Any unhandled CUDA driver error is added to this value and returned via + * the runtime. Production releases of CUDA should not return such errors. + * \deprecated + * This error return is deprecated as of CUDA 4.1. + */ + cudaErrorApiFailureBase = 10000 +}; + +/** + * Channel format kind + */ +enum __device_builtin__ cudaChannelFormatKind +{ + cudaChannelFormatKindSigned = 0, /**< Signed channel format */ + cudaChannelFormatKindUnsigned = 1, /**< Unsigned channel format */ + cudaChannelFormatKindFloat = 2, /**< Float channel format */ + cudaChannelFormatKindNone = 3, /**< No channel format */ + cudaChannelFormatKindNV12 = 4, /**< Unsigned 8-bit integers, planar 4:2:0 YUV format */ + cudaChannelFormatKindUnsignedNormalized8X1 = 5, /**< 1 channel unsigned 8-bit normalized integer */ + cudaChannelFormatKindUnsignedNormalized8X2 = 6, /**< 2 channel unsigned 8-bit normalized integer */ + cudaChannelFormatKindUnsignedNormalized8X4 = 7, /**< 4 channel unsigned 8-bit normalized integer */ + cudaChannelFormatKindUnsignedNormalized16X1 = 8, /**< 1 channel unsigned 16-bit normalized integer */ + cudaChannelFormatKindUnsignedNormalized16X2 = 9, /**< 2 channel unsigned 16-bit normalized integer */ + cudaChannelFormatKindUnsignedNormalized16X4 = 10, /**< 4 channel unsigned 16-bit normalized integer */ + cudaChannelFormatKindSignedNormalized8X1 = 11, /**< 1 channel signed 8-bit normalized integer */ + cudaChannelFormatKindSignedNormalized8X2 = 12, /**< 2 channel signed 8-bit normalized integer */ + cudaChannelFormatKindSignedNormalized8X4 = 13, /**< 4 channel signed 8-bit normalized integer */ + cudaChannelFormatKindSignedNormalized16X1 = 14, /**< 1 channel signed 16-bit normalized integer */ + cudaChannelFormatKindSignedNormalized16X2 = 15, /**< 2 channel signed 16-bit normalized integer */ + cudaChannelFormatKindSignedNormalized16X4 = 16, /**< 4 channel signed 16-bit normalized integer */ + cudaChannelFormatKindUnsignedBlockCompressed1 = 17, /**< 4 channel unsigned normalized block-compressed (BC1 compression) format */ + cudaChannelFormatKindUnsignedBlockCompressed1SRGB = 18, /**< 4 channel unsigned normalized block-compressed (BC1 compression) format with sRGB encoding*/ + cudaChannelFormatKindUnsignedBlockCompressed2 = 19, /**< 4 channel unsigned normalized block-compressed (BC2 compression) format */ + cudaChannelFormatKindUnsignedBlockCompressed2SRGB = 20, /**< 4 channel unsigned normalized block-compressed (BC2 compression) format with sRGB encoding */ + cudaChannelFormatKindUnsignedBlockCompressed3 = 21, /**< 4 channel unsigned normalized block-compressed (BC3 compression) format */ + cudaChannelFormatKindUnsignedBlockCompressed3SRGB = 22, /**< 4 channel unsigned normalized block-compressed (BC3 compression) format with sRGB encoding */ + cudaChannelFormatKindUnsignedBlockCompressed4 = 23, /**< 1 channel unsigned normalized block-compressed (BC4 compression) format */ + cudaChannelFormatKindSignedBlockCompressed4 = 24, /**< 1 channel signed normalized block-compressed (BC4 compression) format */ + cudaChannelFormatKindUnsignedBlockCompressed5 = 25, /**< 2 channel unsigned normalized block-compressed (BC5 compression) format */ + cudaChannelFormatKindSignedBlockCompressed5 = 26, /**< 2 channel signed normalized block-compressed (BC5 compression) format */ + cudaChannelFormatKindUnsignedBlockCompressed6H = 27, /**< 3 channel unsigned 
half-float block-compressed (BC6H compression) format */ + cudaChannelFormatKindSignedBlockCompressed6H = 28, /**< 3 channel signed half-float block-compressed (BC6H compression) format */ + cudaChannelFormatKindUnsignedBlockCompressed7 = 29, /**< 4 channel unsigned normalized block-compressed (BC7 compression) format */ + cudaChannelFormatKindUnsignedBlockCompressed7SRGB = 30 /**< 4 channel unsigned normalized block-compressed (BC7 compression) format with sRGB encoding */ +}; + +/** + * CUDA Channel format descriptor + */ +struct __device_builtin__ cudaChannelFormatDesc +{ + int x; /**< x */ + int y; /**< y */ + int z; /**< z */ + int w; /**< w */ + enum cudaChannelFormatKind f; /**< Channel format kind */ +}; + +/** + * CUDA array + */ +typedef struct cudaArray *cudaArray_t; + +/** + * CUDA array (as source copy argument) + */ +typedef const struct cudaArray *cudaArray_const_t; + +struct cudaArray; + +/** + * CUDA mipmapped array + */ +typedef struct cudaMipmappedArray *cudaMipmappedArray_t; + +/** + * CUDA mipmapped array (as source argument) + */ +typedef const struct cudaMipmappedArray *cudaMipmappedArray_const_t; + +struct cudaMipmappedArray; + +/** + * Indicates that the layered sparse CUDA array or CUDA mipmapped array has a single mip tail region for all layers + */ +#define cudaArraySparsePropertiesSingleMipTail 0x1 + +/** + * Sparse CUDA array and CUDA mipmapped array properties + */ +struct __device_builtin__ cudaArraySparseProperties { + struct { + unsigned int width; /**< Tile width in elements */ + unsigned int height; /**< Tile height in elements */ + unsigned int depth; /**< Tile depth in elements */ + } tileExtent; + unsigned int miptailFirstLevel; /**< First mip level at which the mip tail begins */ + unsigned long long miptailSize; /**< Total size of the mip tail. */ + unsigned int flags; /**< Flags will either be zero or ::cudaArraySparsePropertiesSingleMipTail */ + unsigned int reserved[4]; +}; + +/** + * CUDA array and CUDA mipmapped array memory requirements + */ +struct __device_builtin__ cudaArrayMemoryRequirements { + size_t size; /**< Total size of the array. */ + size_t alignment; /**< Alignment necessary for mapping the array. */ + unsigned int reserved[4]; +}; + +/** + * CUDA memory types + */ +enum __device_builtin__ cudaMemoryType +{ + cudaMemoryTypeUnregistered = 0, /**< Unregistered memory */ + cudaMemoryTypeHost = 1, /**< Host memory */ + cudaMemoryTypeDevice = 2, /**< Device memory */ + cudaMemoryTypeManaged = 3 /**< Managed memory */ +}; + +/** + * CUDA memory copy types + */ +enum __device_builtin__ cudaMemcpyKind +{ + cudaMemcpyHostToHost = 0, /**< Host -> Host */ + cudaMemcpyHostToDevice = 1, /**< Host -> Device */ + cudaMemcpyDeviceToHost = 2, /**< Device -> Host */ + cudaMemcpyDeviceToDevice = 3, /**< Device -> Device */ + cudaMemcpyDefault = 4 /**< Direction of the transfer is inferred from the pointer values. 
Requires unified virtual addressing. */
+};
+
+/**
+ * CUDA Pitched memory pointer
+ *
+ * \sa ::make_cudaPitchedPtr
+ */
+struct __device_builtin__ cudaPitchedPtr
+{
+ void *ptr; /**< Pointer to allocated memory */
+ size_t pitch; /**< Pitch of allocated memory in bytes */
+ size_t xsize; /**< Logical width of allocation in elements */
+ size_t ysize; /**< Logical height of allocation in elements */
+};
+
+/**
+ * CUDA extent
+ *
+ * \sa ::make_cudaExtent
+ */
+struct __device_builtin__ cudaExtent
+{
+ size_t width; /**< Width in elements when referring to array memory, in bytes when referring to linear memory */
+ size_t height; /**< Height in elements */
+ size_t depth; /**< Depth in elements */
+};
+
+/**
+ * CUDA 3D position
+ *
+ * \sa ::make_cudaPos
+ */
+struct __device_builtin__ cudaPos
+{
+ size_t x; /**< x */
+ size_t y; /**< y */
+ size_t z; /**< z */
+};
+
+/**
+ * CUDA 3D memory copying parameters
+ */
+struct __device_builtin__ cudaMemcpy3DParms
+{
+ cudaArray_t srcArray; /**< Source memory address */
+ struct cudaPos srcPos; /**< Source position offset */
+ struct cudaPitchedPtr srcPtr; /**< Pitched source memory address */
+
+ cudaArray_t dstArray; /**< Destination memory address */
+ struct cudaPos dstPos; /**< Destination position offset */
+ struct cudaPitchedPtr dstPtr; /**< Pitched destination memory address */
+
+ struct cudaExtent extent; /**< Requested memory copy size */
+ enum cudaMemcpyKind kind; /**< Type of transfer */
+};
+
+/**
+ * CUDA 3D cross-device memory copying parameters
+ */
+struct __device_builtin__ cudaMemcpy3DPeerParms
+{
+ cudaArray_t srcArray; /**< Source memory address */
+ struct cudaPos srcPos; /**< Source position offset */
+ struct cudaPitchedPtr srcPtr; /**< Pitched source memory address */
+ int srcDevice; /**< Source device */
+
+ cudaArray_t dstArray; /**< Destination memory address */
+ struct cudaPos dstPos; /**< Destination position offset */
+ struct cudaPitchedPtr dstPtr; /**< Pitched destination memory address */
+ int dstDevice; /**< Destination device */
+
+ struct cudaExtent extent; /**< Requested memory copy size */
+};
+
+/**
+ * CUDA Memset node parameters
+ */
+struct __device_builtin__ cudaMemsetParams {
+ void *dst; /**< Destination device pointer */
+ size_t pitch; /**< Pitch of destination device pointer. Unused if height is 1 */
+ unsigned int value; /**< Value to be set */
+ unsigned int elementSize; /**< Size of each element in bytes. Must be 1, 2, or 4. */
+ size_t width; /**< Width of the row in elements */
+ size_t height; /**< Number of rows */
+};
+
+/**
+ * Specifies performance hint with ::cudaAccessPolicyWindow for hitProp and missProp members.
+ */
+enum __device_builtin__ cudaAccessProperty {
+ cudaAccessPropertyNormal = 0, /**< Normal cache persistence. */
+ cudaAccessPropertyStreaming = 1, /**< Streaming access is less likely to persist in cache. */
+ cudaAccessPropertyPersisting = 2 /**< Persisting access is more likely to persist in cache. */
+};
+
+/**
+ * Specifies an access policy for a window, a contiguous extent of memory
+ * beginning at base_ptr and ending at base_ptr + num_bytes.
+ * The window is partitioned into many segments, and segments are assigned such that:
+ * sum of "hit segments" / window == approx. ratio,
+ * sum of "miss segments" / window == approx. 1 - ratio.
+ * Segments and ratio specifications are fitted to the capabilities of
+ * the architecture.
+ * Accesses in a hit segment apply the hitProp access policy.
+ * Accesses in a miss segment apply the missProp access policy.
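+ *
+ * A minimal sketch (assuming \p devPtr is an existing device allocation,
+ * \p nbytes its size, and \p stream an existing stream; not part of this
+ * header):
+ * \code
+ * cudaStreamAttrValue attr;
+ * attr.accessPolicyWindow.base_ptr  = devPtr;
+ * attr.accessPolicyWindow.num_bytes = nbytes;
+ * attr.accessPolicyWindow.hitRatio  = 0.6f;
+ * attr.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting;
+ * attr.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;
+ * cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &attr);
+ * \endcode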
+ */ +struct __device_builtin__ cudaAccessPolicyWindow { + void *base_ptr; /**< Starting address of the access policy window. CUDA driver may align it. */ + size_t num_bytes; /**< Size in bytes of the window policy. CUDA driver may restrict the maximum size and alignment. */ + float hitRatio; /**< hitRatio specifies percentage of lines assigned hitProp, rest are assigned missProp. */ + enum cudaAccessProperty hitProp; /**< ::CUaccessProperty set for hit. */ + enum cudaAccessProperty missProp; /**< ::CUaccessProperty set for miss. Must be either NORMAL or STREAMING. */ +}; + +#ifdef _WIN32 +#define CUDART_CB __stdcall +#else +#define CUDART_CB +#endif + +/** + * CUDA host function + * \param userData Argument value passed to the function + */ +typedef void (CUDART_CB *cudaHostFn_t)(void *userData); + +/** + * CUDA host node parameters + */ +struct __device_builtin__ cudaHostNodeParams { + cudaHostFn_t fn; /**< The function to call when the node executes */ + void* userData; /**< Argument to pass to the function */ +}; + +/** + * Possible stream capture statuses returned by ::cudaStreamIsCapturing + */ +enum __device_builtin__ cudaStreamCaptureStatus { + cudaStreamCaptureStatusNone = 0, /**< Stream is not capturing */ + cudaStreamCaptureStatusActive = 1, /**< Stream is actively capturing */ + cudaStreamCaptureStatusInvalidated = 2 /**< Stream is part of a capture sequence that + has been invalidated, but not terminated */ +}; + +/** + * Possible modes for stream capture thread interactions. For more details see + * ::cudaStreamBeginCapture and ::cudaThreadExchangeStreamCaptureMode + */ +enum __device_builtin__ cudaStreamCaptureMode { + cudaStreamCaptureModeGlobal = 0, + cudaStreamCaptureModeThreadLocal = 1, + cudaStreamCaptureModeRelaxed = 2 +}; + +enum __device_builtin__ cudaSynchronizationPolicy { + cudaSyncPolicyAuto = 1, + cudaSyncPolicySpin = 2, + cudaSyncPolicyYield = 3, + cudaSyncPolicyBlockingSync = 4 +}; + +/** + * Cluster scheduling policies. These may be passed to ::cudaFuncSetAttribute + */ +enum __device_builtin__ cudaClusterSchedulingPolicy { + cudaClusterSchedulingPolicyDefault = 0, /**< the default policy */ + cudaClusterSchedulingPolicySpread = 1, /**< spread the blocks within a cluster to the SMs */ + cudaClusterSchedulingPolicyLoadBalancing = 2 /**< allow the hardware to load-balance the blocks in a cluster to the SMs */ +}; + +/** + * Flags for ::cudaStreamUpdateCaptureDependencies + */ +enum __device_builtin__ cudaStreamUpdateCaptureDependenciesFlags { + cudaStreamAddCaptureDependencies = 0x0, /**< Add new nodes to the dependency set */ + cudaStreamSetCaptureDependencies = 0x1 /**< Replace the dependency set with the new nodes */ +}; + +/** + * Flags for user objects for graphs + */ +enum __device_builtin__ cudaUserObjectFlags { + cudaUserObjectNoDestructorSync = 0x1 /**< Indicates the destructor execution is not synchronized by any CUDA handle. */ +}; + +/** + * Flags for retaining user object references for graphs + */ +enum __device_builtin__ cudaUserObjectRetainFlags { + cudaGraphUserObjectMove = 0x1 /**< Transfer references from the caller rather than creating new references. 
*/ +}; + +/** + * CUDA graphics interop resource + */ +struct cudaGraphicsResource; + +/** + * CUDA graphics interop register flags + */ +enum __device_builtin__ cudaGraphicsRegisterFlags +{ + cudaGraphicsRegisterFlagsNone = 0, /**< Default */ + cudaGraphicsRegisterFlagsReadOnly = 1, /**< CUDA will not write to this resource */ + cudaGraphicsRegisterFlagsWriteDiscard = 2, /**< CUDA will only write to and will not read from this resource */ + cudaGraphicsRegisterFlagsSurfaceLoadStore = 4, /**< CUDA will bind this resource to a surface reference */ + cudaGraphicsRegisterFlagsTextureGather = 8 /**< CUDA will perform texture gather operations on this resource */ +}; + +/** + * CUDA graphics interop map flags + */ +enum __device_builtin__ cudaGraphicsMapFlags +{ + cudaGraphicsMapFlagsNone = 0, /**< Default; Assume resource can be read/written */ + cudaGraphicsMapFlagsReadOnly = 1, /**< CUDA will not write to this resource */ + cudaGraphicsMapFlagsWriteDiscard = 2 /**< CUDA will only write to and will not read from this resource */ +}; + +/** + * CUDA graphics interop array indices for cube maps + */ +enum __device_builtin__ cudaGraphicsCubeFace +{ + cudaGraphicsCubeFacePositiveX = 0x00, /**< Positive X face of cubemap */ + cudaGraphicsCubeFaceNegativeX = 0x01, /**< Negative X face of cubemap */ + cudaGraphicsCubeFacePositiveY = 0x02, /**< Positive Y face of cubemap */ + cudaGraphicsCubeFaceNegativeY = 0x03, /**< Negative Y face of cubemap */ + cudaGraphicsCubeFacePositiveZ = 0x04, /**< Positive Z face of cubemap */ + cudaGraphicsCubeFaceNegativeZ = 0x05 /**< Negative Z face of cubemap */ +}; + +/** + * CUDA resource types + */ +enum __device_builtin__ cudaResourceType +{ + cudaResourceTypeArray = 0x00, /**< Array resource */ + cudaResourceTypeMipmappedArray = 0x01, /**< Mipmapped array resource */ + cudaResourceTypeLinear = 0x02, /**< Linear resource */ + cudaResourceTypePitch2D = 0x03 /**< Pitch 2D resource */ +}; + +/** + * CUDA texture resource view formats + */ +enum __device_builtin__ cudaResourceViewFormat +{ + cudaResViewFormatNone = 0x00, /**< No resource view format (use underlying resource format) */ + cudaResViewFormatUnsignedChar1 = 0x01, /**< 1 channel unsigned 8-bit integers */ + cudaResViewFormatUnsignedChar2 = 0x02, /**< 2 channel unsigned 8-bit integers */ + cudaResViewFormatUnsignedChar4 = 0x03, /**< 4 channel unsigned 8-bit integers */ + cudaResViewFormatSignedChar1 = 0x04, /**< 1 channel signed 8-bit integers */ + cudaResViewFormatSignedChar2 = 0x05, /**< 2 channel signed 8-bit integers */ + cudaResViewFormatSignedChar4 = 0x06, /**< 4 channel signed 8-bit integers */ + cudaResViewFormatUnsignedShort1 = 0x07, /**< 1 channel unsigned 16-bit integers */ + cudaResViewFormatUnsignedShort2 = 0x08, /**< 2 channel unsigned 16-bit integers */ + cudaResViewFormatUnsignedShort4 = 0x09, /**< 4 channel unsigned 16-bit integers */ + cudaResViewFormatSignedShort1 = 0x0a, /**< 1 channel signed 16-bit integers */ + cudaResViewFormatSignedShort2 = 0x0b, /**< 2 channel signed 16-bit integers */ + cudaResViewFormatSignedShort4 = 0x0c, /**< 4 channel signed 16-bit integers */ + cudaResViewFormatUnsignedInt1 = 0x0d, /**< 1 channel unsigned 32-bit integers */ + cudaResViewFormatUnsignedInt2 = 0x0e, /**< 2 channel unsigned 32-bit integers */ + cudaResViewFormatUnsignedInt4 = 0x0f, /**< 4 channel unsigned 32-bit integers */ + cudaResViewFormatSignedInt1 = 0x10, /**< 1 channel signed 32-bit integers */ + cudaResViewFormatSignedInt2 = 0x11, /**< 2 channel signed 32-bit integers */ + 
cudaResViewFormatSignedInt4 = 0x12, /**< 4 channel signed 32-bit integers */ + cudaResViewFormatHalf1 = 0x13, /**< 1 channel 16-bit floating point */ + cudaResViewFormatHalf2 = 0x14, /**< 2 channel 16-bit floating point */ + cudaResViewFormatHalf4 = 0x15, /**< 4 channel 16-bit floating point */ + cudaResViewFormatFloat1 = 0x16, /**< 1 channel 32-bit floating point */ + cudaResViewFormatFloat2 = 0x17, /**< 2 channel 32-bit floating point */ + cudaResViewFormatFloat4 = 0x18, /**< 4 channel 32-bit floating point */ + cudaResViewFormatUnsignedBlockCompressed1 = 0x19, /**< Block compressed 1 */ + cudaResViewFormatUnsignedBlockCompressed2 = 0x1a, /**< Block compressed 2 */ + cudaResViewFormatUnsignedBlockCompressed3 = 0x1b, /**< Block compressed 3 */ + cudaResViewFormatUnsignedBlockCompressed4 = 0x1c, /**< Block compressed 4 unsigned */ + cudaResViewFormatSignedBlockCompressed4 = 0x1d, /**< Block compressed 4 signed */ + cudaResViewFormatUnsignedBlockCompressed5 = 0x1e, /**< Block compressed 5 unsigned */ + cudaResViewFormatSignedBlockCompressed5 = 0x1f, /**< Block compressed 5 signed */ + cudaResViewFormatUnsignedBlockCompressed6H = 0x20, /**< Block compressed 6 unsigned half-float */ + cudaResViewFormatSignedBlockCompressed6H = 0x21, /**< Block compressed 6 signed half-float */ + cudaResViewFormatUnsignedBlockCompressed7 = 0x22 /**< Block compressed 7 */ +}; + +/** + * CUDA resource descriptor + */ +struct __device_builtin__ cudaResourceDesc { + enum cudaResourceType resType; /**< Resource type */ + + union { + struct { + cudaArray_t array; /**< CUDA array */ + } array; + struct { + cudaMipmappedArray_t mipmap; /**< CUDA mipmapped array */ + } mipmap; + struct { + void *devPtr; /**< Device pointer */ + struct cudaChannelFormatDesc desc; /**< Channel descriptor */ + size_t sizeInBytes; /**< Size in bytes */ + } linear; + struct { + void *devPtr; /**< Device pointer */ + struct cudaChannelFormatDesc desc; /**< Channel descriptor */ + size_t width; /**< Width of the array in elements */ + size_t height; /**< Height of the array in elements */ + size_t pitchInBytes; /**< Pitch between two rows in bytes */ + } pitch2D; + } res; +}; + +/** + * CUDA resource view descriptor + */ +struct __device_builtin__ cudaResourceViewDesc +{ + enum cudaResourceViewFormat format; /**< Resource view format */ + size_t width; /**< Width of the resource view */ + size_t height; /**< Height of the resource view */ + size_t depth; /**< Depth of the resource view */ + unsigned int firstMipmapLevel; /**< First defined mipmap level */ + unsigned int lastMipmapLevel; /**< Last defined mipmap level */ + unsigned int firstLayer; /**< First layer index */ + unsigned int lastLayer; /**< Last layer index */ +}; + +/** + * CUDA pointer attributes + */ +struct __device_builtin__ cudaPointerAttributes +{ + /** + * The type of memory - ::cudaMemoryTypeUnregistered, ::cudaMemoryTypeHost, + * ::cudaMemoryTypeDevice or ::cudaMemoryTypeManaged. + */ + enum cudaMemoryType type; + + /** + * The device against which the memory was allocated or registered. + * If the memory type is ::cudaMemoryTypeDevice then this identifies + * the device on which the memory referred physically resides. If + * the memory type is ::cudaMemoryTypeHost or::cudaMemoryTypeManaged then + * this identifies the device which was current when the memory was allocated + * or registered (and if that device is deinitialized then this allocation + * will vanish with that device's state). 
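+ *
+ * A minimal query sketch (not part of this header; \p ptr is an assumed
+ * pointer):
+ * \code
+ * struct cudaPointerAttributes attrs;
+ * if (cudaPointerGetAttributes(&attrs, ptr) == cudaSuccess &&
+ *     attrs.type == cudaMemoryTypeDevice) {
+ *     int owningDevice = attrs.device;   // device that owns ptr
+ * }
+ * \endcode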
+ */ + int device; + + /** + * The address which may be dereferenced on the current device to access + * the memory or NULL if no such address exists. + */ + void *devicePointer; + + /** + * The address which may be dereferenced on the host to access the + * memory or NULL if no such address exists. + * + * \note CUDA doesn't check if unregistered memory is allocated so this field + * may contain invalid pointer if an invalid pointer has been passed to CUDA. + */ + void *hostPointer; +}; + +/** + * CUDA function attributes + */ +struct __device_builtin__ cudaFuncAttributes +{ + /** + * The size in bytes of statically-allocated shared memory per block + * required by this function. This does not include dynamically-allocated + * shared memory requested by the user at runtime. + */ + size_t sharedSizeBytes; + + /** + * The size in bytes of user-allocated constant memory required by this + * function. + */ + size_t constSizeBytes; + + /** + * The size in bytes of local memory used by each thread of this function. + */ + size_t localSizeBytes; + + /** + * The maximum number of threads per block, beyond which a launch of the + * function would fail. This number depends on both the function and the + * device on which the function is currently loaded. + */ + int maxThreadsPerBlock; + + /** + * The number of registers used by each thread of this function. + */ + int numRegs; + + /** + * The PTX virtual architecture version for which the function was + * compiled. This value is the major PTX version * 10 + the minor PTX + * version, so a PTX version 1.3 function would return the value 13. + */ + int ptxVersion; + + /** + * The binary architecture version for which the function was compiled. + * This value is the major binary version * 10 + the minor binary version, + * so a binary version 1.3 function would return the value 13. + */ + int binaryVersion; + + /** + * The attribute to indicate whether the function has been compiled with + * user specified option "-Xptxas --dlcm=ca" set. + */ + int cacheModeCA; + + /** + * The maximum size in bytes of dynamic shared memory per block for + * this function. Any launch must have a dynamic shared memory size + * smaller than this value. + */ + int maxDynamicSharedSizeBytes; + + /** + * On devices where the L1 cache and shared memory use the same hardware resources, + * this sets the shared memory carveout preference, in percent of the maximum shared memory. + * Refer to ::cudaDevAttrMaxSharedMemoryPerMultiprocessor. + * This is only a hint, and the driver can choose a different ratio if required to execute the function. + * See ::cudaFuncSetAttribute + */ + int preferredShmemCarveout; + + /** + * If this attribute is set, the kernel must launch with a valid cluster dimension + * specified. + */ + int clusterDimMustBeSet; + + /** + * The required cluster width/height/depth in blocks. The values must either + * all be 0 or all be positive. The validity of the cluster dimensions is + * otherwise checked at launch time. + * + * If the value is set during compile time, it cannot be set at runtime. + * Setting it at runtime should return cudaErrorNotPermitted. + * See ::cudaFuncSetAttribute + */ + int requiredClusterWidth; + int requiredClusterHeight; + int requiredClusterDepth; + + /** + * The block scheduling policy of a function. + * See ::cudaFuncSetAttribute + */ + int clusterSchedulingPolicyPreference; + + /** + * Whether the function can be launched with non-portable cluster size. 1 is + * allowed, 0 is disallowed. 
A non-portable cluster size may only function
+ * on the specific SKUs the program is tested on. The launch might fail if
+ * the program is run on a different hardware platform.
+ *
+ * The CUDA API provides ::cudaOccupancyMaxActiveClusters to assist with checking
+ * whether the desired size can be launched on the current device.
+ *
+ * Portable Cluster Size
+ *
+ * A portable cluster size is guaranteed to be functional on all compute
+ * capabilities higher than the target compute capability. The portable
+ * cluster size for sm_90 is 8 blocks per cluster. This value may increase
+ * for future compute capabilities.
+ *
+ * Specific hardware units may support cluster sizes larger than the portable
+ * size; such sizes are not guaranteed to be portable.
+ * See ::cudaFuncSetAttribute
+ */
+ int nonPortableClusterSizeAllowed;
+
+ /**
+ * Reserved for future use.
+ */
+ int reserved[16];
+};
+
+/**
+ * CUDA function attributes that can be set using ::cudaFuncSetAttribute
+ */
+enum __device_builtin__ cudaFuncAttribute
+{
+ cudaFuncAttributeMaxDynamicSharedMemorySize = 8, /**< Maximum dynamic shared memory size */
+ cudaFuncAttributePreferredSharedMemoryCarveout = 9, /**< Preferred shared memory-L1 cache split */
+ cudaFuncAttributeClusterDimMustBeSet = 10, /**< Indicator to enforce valid cluster dimension specification on kernel launch */
+ cudaFuncAttributeRequiredClusterWidth = 11, /**< Required cluster width */
+ cudaFuncAttributeRequiredClusterHeight = 12, /**< Required cluster height */
+ cudaFuncAttributeRequiredClusterDepth = 13, /**< Required cluster depth */
+ cudaFuncAttributeNonPortableClusterSizeAllowed = 14, /**< Whether a non-portable cluster size is allowed */
+ cudaFuncAttributeClusterSchedulingPolicyPreference = 15, /**< The cluster scheduling policy preference */
+ cudaFuncAttributeMax
+};
+
+/**
+ * CUDA function cache configurations
+ */
+enum __device_builtin__ cudaFuncCache
+{
+ cudaFuncCachePreferNone = 0, /**< Default function cache configuration, no preference */
+ cudaFuncCachePreferShared = 1, /**< Prefer larger shared memory and smaller L1 cache */
+ cudaFuncCachePreferL1 = 2, /**< Prefer larger L1 cache and smaller shared memory */
+ cudaFuncCachePreferEqual = 3 /**< Prefer equal size L1 cache and shared memory */
+};
+
+/**
+ * CUDA shared memory configuration
+ */
+
+enum __device_builtin__ cudaSharedMemConfig
+{
+ cudaSharedMemBankSizeDefault = 0,
+ cudaSharedMemBankSizeFourByte = 1,
+ cudaSharedMemBankSizeEightByte = 2
+};
+
+/**
+ * Shared memory carveout configurations. These may be passed to ::cudaFuncSetAttribute.
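+ *
+ * A minimal sketch (assuming a __global__ kernel \p myKernel; not part of
+ * this header):
+ * \code
+ * cudaFuncSetAttribute(myKernel,
+ *                      cudaFuncAttributePreferredSharedMemoryCarveout,
+ *                      cudaSharedmemCarveoutMaxShared);
+ * \endcode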
+ */
+enum __device_builtin__ cudaSharedCarveout {
+ cudaSharedmemCarveoutDefault = -1, /**< No preference for shared memory or L1 (default) */
+ cudaSharedmemCarveoutMaxShared = 100, /**< Prefer maximum available shared memory, minimum L1 cache */
+ cudaSharedmemCarveoutMaxL1 = 0 /**< Prefer maximum available L1 cache, minimum shared memory */
+};
+
+/**
+ * CUDA device compute modes
+ */
+enum __device_builtin__ cudaComputeMode
+{
+ cudaComputeModeDefault = 0, /**< Default compute mode (Multiple threads can use ::cudaSetDevice() with this device) */
+ cudaComputeModeExclusive = 1, /**< Compute-exclusive-thread mode (Only one thread in one process will be able to use ::cudaSetDevice() with this device) */
+ cudaComputeModeProhibited = 2, /**< Compute-prohibited mode (No threads can use ::cudaSetDevice() with this device) */
+ cudaComputeModeExclusiveProcess = 3 /**< Compute-exclusive-process mode (Many threads in one process will be able to use ::cudaSetDevice() with this device) */
+};
+
+/**
+ * CUDA Limits
+ */
+enum __device_builtin__ cudaLimit
+{
+ cudaLimitStackSize = 0x00, /**< GPU thread stack size */
+ cudaLimitPrintfFifoSize = 0x01, /**< GPU printf FIFO size */
+ cudaLimitMallocHeapSize = 0x02, /**< GPU malloc heap size */
+ cudaLimitDevRuntimeSyncDepth = 0x03, /**< GPU device runtime synchronize depth */
+ cudaLimitDevRuntimePendingLaunchCount = 0x04, /**< GPU device runtime pending launch count */
+ cudaLimitMaxL2FetchGranularity = 0x05, /**< A value between 0 and 128 that indicates the maximum fetch granularity of L2 (in Bytes). This is a hint */
+ cudaLimitPersistingL2CacheSize = 0x06 /**< A size in bytes for L2 persisting lines cache size */
+};
+
+/**
+ * CUDA Memory Advise values
+ */
+enum __device_builtin__ cudaMemoryAdvise
+{
+ cudaMemAdviseSetReadMostly = 1, /**< Data will mostly be read and only occasionally be written to */
+ cudaMemAdviseUnsetReadMostly = 2, /**< Undo the effect of ::cudaMemAdviseSetReadMostly */
+ cudaMemAdviseSetPreferredLocation = 3, /**< Set the preferred location for the data as the specified device */
+ cudaMemAdviseUnsetPreferredLocation = 4, /**< Clear the preferred location for the data */
+ cudaMemAdviseSetAccessedBy = 5, /**< Data will be accessed by the specified device, so prevent page faults as much as possible */
+ cudaMemAdviseUnsetAccessedBy = 6 /**< Let the Unified Memory subsystem decide on the page faulting policy for the specified device */
+};
+
+/**
+ * CUDA range attributes
+ */
+enum __device_builtin__ cudaMemRangeAttribute
+{
+ cudaMemRangeAttributeReadMostly = 1, /**< Whether the range will mostly be read and only occasionally be written to */
+ cudaMemRangeAttributePreferredLocation = 2, /**< The preferred location of the range */
+ cudaMemRangeAttributeAccessedBy = 3, /**< Memory range has ::cudaMemAdviseSetAccessedBy set for specified device */
+ cudaMemRangeAttributeLastPrefetchLocation = 4 /**< The last location to which the range was prefetched */
+};
+
+/**
+ * CUDA GPUDirect RDMA flush writes APIs supported on the device
+ */
+enum __device_builtin__ cudaFlushGPUDirectRDMAWritesOptions {
+ cudaFlushGPUDirectRDMAWritesOptionHost = 1<<0, /**< ::cudaDeviceFlushGPUDirectRDMAWrites() and its CUDA Driver API counterpart are supported on the device. */
+ cudaFlushGPUDirectRDMAWritesOptionMemOps = 1<<1 /**< The ::CU_STREAM_WAIT_VALUE_FLUSH flag and the ::CU_STREAM_MEM_OP_FLUSH_REMOTE_WRITES MemOp are supported on the CUDA device. 
*/ +}; + +/** + * CUDA GPUDirect RDMA flush writes ordering features of the device + */ +enum __device_builtin__ cudaGPUDirectRDMAWritesOrdering { + cudaGPUDirectRDMAWritesOrderingNone = 0, /**< The device does not natively support ordering of GPUDirect RDMA writes. ::cudaFlushGPUDirectRDMAWrites() can be leveraged if supported. */ + cudaGPUDirectRDMAWritesOrderingOwner = 100, /**< Natively, the device can consistently consume GPUDirect RDMA writes, although other CUDA devices may not. */ + cudaGPUDirectRDMAWritesOrderingAllDevices = 200 /**< Any CUDA device in the system can consistently consume GPUDirect RDMA writes to this device. */ +}; + +/** + * CUDA GPUDirect RDMA flush writes scopes + */ +enum __device_builtin__ cudaFlushGPUDirectRDMAWritesScope { + cudaFlushGPUDirectRDMAWritesToOwner = 100, /**< Blocks until remote writes are visible to the CUDA device context owning the data. */ + cudaFlushGPUDirectRDMAWritesToAllDevices = 200 /**< Blocks until remote writes are visible to all CUDA device contexts. */ +}; + +/** + * CUDA GPUDirect RDMA flush writes targets + */ +enum __device_builtin__ cudaFlushGPUDirectRDMAWritesTarget { + cudaFlushGPUDirectRDMAWritesTargetCurrentDevice /**< Sets the target for ::cudaDeviceFlushGPUDirectRDMAWrites() to the currently active CUDA device context. */ +}; + + +/** + * CUDA device attributes + */ +enum __device_builtin__ cudaDeviceAttr +{ + cudaDevAttrMaxThreadsPerBlock = 1, /**< Maximum number of threads per block */ + cudaDevAttrMaxBlockDimX = 2, /**< Maximum block dimension X */ + cudaDevAttrMaxBlockDimY = 3, /**< Maximum block dimension Y */ + cudaDevAttrMaxBlockDimZ = 4, /**< Maximum block dimension Z */ + cudaDevAttrMaxGridDimX = 5, /**< Maximum grid dimension X */ + cudaDevAttrMaxGridDimY = 6, /**< Maximum grid dimension Y */ + cudaDevAttrMaxGridDimZ = 7, /**< Maximum grid dimension Z */ + cudaDevAttrMaxSharedMemoryPerBlock = 8, /**< Maximum shared memory available per block in bytes */ + cudaDevAttrTotalConstantMemory = 9, /**< Memory available on device for __constant__ variables in a CUDA C kernel in bytes */ + cudaDevAttrWarpSize = 10, /**< Warp size in threads */ + cudaDevAttrMaxPitch = 11, /**< Maximum pitch in bytes allowed by memory copies */ + cudaDevAttrMaxRegistersPerBlock = 12, /**< Maximum number of 32-bit registers available per block */ + cudaDevAttrClockRate = 13, /**< Peak clock frequency in kilohertz */ + cudaDevAttrTextureAlignment = 14, /**< Alignment requirement for textures */ + cudaDevAttrGpuOverlap = 15, /**< Device can possibly copy memory and execute a kernel concurrently */ + cudaDevAttrMultiProcessorCount = 16, /**< Number of multiprocessors on device */ + cudaDevAttrKernelExecTimeout = 17, /**< Specifies whether there is a run time limit on kernels */ + cudaDevAttrIntegrated = 18, /**< Device is integrated with host memory */ + cudaDevAttrCanMapHostMemory = 19, /**< Device can map host memory into CUDA address space */ + cudaDevAttrComputeMode = 20, /**< Compute mode (See ::cudaComputeMode for details) */ + cudaDevAttrMaxTexture1DWidth = 21, /**< Maximum 1D texture width */ + cudaDevAttrMaxTexture2DWidth = 22, /**< Maximum 2D texture width */ + cudaDevAttrMaxTexture2DHeight = 23, /**< Maximum 2D texture height */ + cudaDevAttrMaxTexture3DWidth = 24, /**< Maximum 3D texture width */ + cudaDevAttrMaxTexture3DHeight = 25, /**< Maximum 3D texture height */ + cudaDevAttrMaxTexture3DDepth = 26, /**< Maximum 3D texture depth */ + cudaDevAttrMaxTexture2DLayeredWidth = 27, /**< Maximum 2D layered texture width */ + 
cudaDevAttrMaxTexture2DLayeredHeight = 28, /**< Maximum 2D layered texture height */ + cudaDevAttrMaxTexture2DLayeredLayers = 29, /**< Maximum layers in a 2D layered texture */ + cudaDevAttrSurfaceAlignment = 30, /**< Alignment requirement for surfaces */ + cudaDevAttrConcurrentKernels = 31, /**< Device can possibly execute multiple kernels concurrently */ + cudaDevAttrEccEnabled = 32, /**< Device has ECC support enabled */ + cudaDevAttrPciBusId = 33, /**< PCI bus ID of the device */ + cudaDevAttrPciDeviceId = 34, /**< PCI device ID of the device */ + cudaDevAttrTccDriver = 35, /**< Device is using TCC driver model */ + cudaDevAttrMemoryClockRate = 36, /**< Peak memory clock frequency in kilohertz */ + cudaDevAttrGlobalMemoryBusWidth = 37, /**< Global memory bus width in bits */ + cudaDevAttrL2CacheSize = 38, /**< Size of L2 cache in bytes */ + cudaDevAttrMaxThreadsPerMultiProcessor = 39, /**< Maximum resident threads per multiprocessor */ + cudaDevAttrAsyncEngineCount = 40, /**< Number of asynchronous engines */ + cudaDevAttrUnifiedAddressing = 41, /**< Device shares a unified address space with the host */ + cudaDevAttrMaxTexture1DLayeredWidth = 42, /**< Maximum 1D layered texture width */ + cudaDevAttrMaxTexture1DLayeredLayers = 43, /**< Maximum layers in a 1D layered texture */ + cudaDevAttrMaxTexture2DGatherWidth = 45, /**< Maximum 2D texture width if cudaArrayTextureGather is set */ + cudaDevAttrMaxTexture2DGatherHeight = 46, /**< Maximum 2D texture height if cudaArrayTextureGather is set */ + cudaDevAttrMaxTexture3DWidthAlt = 47, /**< Alternate maximum 3D texture width */ + cudaDevAttrMaxTexture3DHeightAlt = 48, /**< Alternate maximum 3D texture height */ + cudaDevAttrMaxTexture3DDepthAlt = 49, /**< Alternate maximum 3D texture depth */ + cudaDevAttrPciDomainId = 50, /**< PCI domain ID of the device */ + cudaDevAttrTexturePitchAlignment = 51, /**< Pitch alignment requirement for textures */ + cudaDevAttrMaxTextureCubemapWidth = 52, /**< Maximum cubemap texture width/height */ + cudaDevAttrMaxTextureCubemapLayeredWidth = 53, /**< Maximum cubemap layered texture width/height */ + cudaDevAttrMaxTextureCubemapLayeredLayers = 54, /**< Maximum layers in a cubemap layered texture */ + cudaDevAttrMaxSurface1DWidth = 55, /**< Maximum 1D surface width */ + cudaDevAttrMaxSurface2DWidth = 56, /**< Maximum 2D surface width */ + cudaDevAttrMaxSurface2DHeight = 57, /**< Maximum 2D surface height */ + cudaDevAttrMaxSurface3DWidth = 58, /**< Maximum 3D surface width */ + cudaDevAttrMaxSurface3DHeight = 59, /**< Maximum 3D surface height */ + cudaDevAttrMaxSurface3DDepth = 60, /**< Maximum 3D surface depth */ + cudaDevAttrMaxSurface1DLayeredWidth = 61, /**< Maximum 1D layered surface width */ + cudaDevAttrMaxSurface1DLayeredLayers = 62, /**< Maximum layers in a 1D layered surface */ + cudaDevAttrMaxSurface2DLayeredWidth = 63, /**< Maximum 2D layered surface width */ + cudaDevAttrMaxSurface2DLayeredHeight = 64, /**< Maximum 2D layered surface height */ + cudaDevAttrMaxSurface2DLayeredLayers = 65, /**< Maximum layers in a 2D layered surface */ + cudaDevAttrMaxSurfaceCubemapWidth = 66, /**< Maximum cubemap surface width */ + cudaDevAttrMaxSurfaceCubemapLayeredWidth = 67, /**< Maximum cubemap layered surface width */ + cudaDevAttrMaxSurfaceCubemapLayeredLayers = 68, /**< Maximum layers in a cubemap layered surface */ + cudaDevAttrMaxTexture1DLinearWidth = 69, /**< Maximum 1D linear texture width */ + cudaDevAttrMaxTexture2DLinearWidth = 70, /**< Maximum 2D linear texture width */ + 
cudaDevAttrMaxTexture2DLinearHeight = 71, /**< Maximum 2D linear texture height */ + cudaDevAttrMaxTexture2DLinearPitch = 72, /**< Maximum 2D linear texture pitch in bytes */ + cudaDevAttrMaxTexture2DMipmappedWidth = 73, /**< Maximum mipmapped 2D texture width */ + cudaDevAttrMaxTexture2DMipmappedHeight = 74, /**< Maximum mipmapped 2D texture height */ + cudaDevAttrComputeCapabilityMajor = 75, /**< Major compute capability version number */ + cudaDevAttrComputeCapabilityMinor = 76, /**< Minor compute capability version number */ + cudaDevAttrMaxTexture1DMipmappedWidth = 77, /**< Maximum mipmapped 1D texture width */ + cudaDevAttrStreamPrioritiesSupported = 78, /**< Device supports stream priorities */ + cudaDevAttrGlobalL1CacheSupported = 79, /**< Device supports caching globals in L1 */ + cudaDevAttrLocalL1CacheSupported = 80, /**< Device supports caching locals in L1 */ + cudaDevAttrMaxSharedMemoryPerMultiprocessor = 81, /**< Maximum shared memory available per multiprocessor in bytes */ + cudaDevAttrMaxRegistersPerMultiprocessor = 82, /**< Maximum number of 32-bit registers available per multiprocessor */ + cudaDevAttrManagedMemory = 83, /**< Device can allocate managed memory on this system */ + cudaDevAttrIsMultiGpuBoard = 84, /**< Device is on a multi-GPU board */ + cudaDevAttrMultiGpuBoardGroupID = 85, /**< Unique identifier for a group of devices on the same multi-GPU board */ + cudaDevAttrHostNativeAtomicSupported = 86, /**< Link between the device and the host supports native atomic operations */ + cudaDevAttrSingleToDoublePrecisionPerfRatio = 87, /**< Ratio of single precision performance (in floating-point operations per second) to double precision performance */ + cudaDevAttrPageableMemoryAccess = 88, /**< Device supports coherently accessing pageable memory without calling cudaHostRegister on it */ + cudaDevAttrConcurrentManagedAccess = 89, /**< Device can coherently access managed memory concurrently with the CPU */ + cudaDevAttrComputePreemptionSupported = 90, /**< Device supports Compute Preemption */ + cudaDevAttrCanUseHostPointerForRegisteredMem = 91, /**< Device can access host registered memory at the same virtual address as the CPU */ + cudaDevAttrReserved92 = 92, + cudaDevAttrReserved93 = 93, + cudaDevAttrReserved94 = 94, + cudaDevAttrCooperativeLaunch = 95, /**< Device supports launching cooperative kernels via ::cudaLaunchCooperativeKernel*/ + cudaDevAttrCooperativeMultiDeviceLaunch = 96, /**< Deprecated, cudaLaunchCooperativeKernelMultiDevice is deprecated. */ + cudaDevAttrMaxSharedMemoryPerBlockOptin = 97, /**< The maximum optin shared memory per block. This value may vary by chip. See ::cudaFuncSetAttribute */ + cudaDevAttrCanFlushRemoteWrites = 98, /**< Device supports flushing of outstanding remote writes. */ + cudaDevAttrHostRegisterSupported = 99, /**< Device supports host memory registration via ::cudaHostRegister. */ + cudaDevAttrPageableMemoryAccessUsesHostPageTables = 100, /**< Device accesses pageable memory via the host's page tables. */ + cudaDevAttrDirectManagedMemAccessFromHost = 101, /**< Host can directly access managed memory on the device without migration. */ + cudaDevAttrMaxBlocksPerMultiprocessor = 106, /**< Maximum number of blocks per multiprocessor */ + cudaDevAttrMaxPersistingL2CacheSize = 108, /**< Maximum L2 persisting lines capacity setting in bytes. */ + cudaDevAttrMaxAccessPolicyWindowSize = 109, /**< Maximum value of cudaAccessPolicyWindow::num_bytes. 
*/
+ cudaDevAttrReservedSharedMemoryPerBlock = 111, /**< Shared memory reserved by CUDA driver per block in bytes */
+ cudaDevAttrSparseCudaArraySupported = 112, /**< Device supports sparse CUDA arrays and sparse CUDA mipmapped arrays */
+ cudaDevAttrHostRegisterReadOnlySupported = 113, /**< Device supports using the ::cudaHostRegister flag cudaHostRegisterReadOnly to register memory that must be mapped as read-only to the GPU */
+ cudaDevAttrTimelineSemaphoreInteropSupported = 114, /**< External timeline semaphore interop is supported on the device */
+ cudaDevAttrMaxTimelineSemaphoreInteropSupported = 114, /**< Deprecated, External timeline semaphore interop is supported on the device */
+ cudaDevAttrMemoryPoolsSupported = 115, /**< Device supports using the ::cudaMallocAsync and ::cudaMemPool family of APIs */
+ cudaDevAttrGPUDirectRDMASupported = 116, /**< Device supports GPUDirect RDMA APIs, like nvidia_p2p_get_pages (see https://docs.nvidia.com/cuda/gpudirect-rdma for more information) */
+ cudaDevAttrGPUDirectRDMAFlushWritesOptions = 117, /**< The returned attribute shall be interpreted as a bitmask, where the individual bits are listed in the ::cudaFlushGPUDirectRDMAWritesOptions enum */
+ cudaDevAttrGPUDirectRDMAWritesOrdering = 118, /**< GPUDirect RDMA writes to the device do not need to be flushed for consumers within the scope indicated by the returned attribute. See ::cudaGPUDirectRDMAWritesOrdering for the numerical values returned here. */
+ cudaDevAttrMemoryPoolSupportedHandleTypes = 119, /**< Handle types supported with mempool-based IPC */
+ cudaDevAttrClusterLaunch = 120, /**< Indicates device supports cluster launch */
+ cudaDevAttrDeferredMappingCudaArraySupported = 121, /**< Device supports deferred mapping CUDA arrays and CUDA mipmapped arrays */
+ cudaDevAttrReserved122 = 122,
+ cudaDevAttrReserved123 = 123,
+ cudaDevAttrReserved124 = 124,
+ cudaDevAttrIpcEventSupport = 125, /**< Device supports IPC Events. */
+ cudaDevAttrMemSyncDomainCount = 126, /**< Number of memory synchronization domains the device supports. */
+ cudaDevAttrReserved127 = 127,
+ cudaDevAttrReserved128 = 128,
+ cudaDevAttrReserved129 = 129,
+ cudaDevAttrReserved132 = 132,
+ cudaDevAttrMax
+};
+
+/**
+ * CUDA memory pool attributes
+ */
+enum __device_builtin__ cudaMemPoolAttr
+{
+ /**
+ * (value type = int)
+ * Allow cuMemAllocAsync to use memory asynchronously freed
+ * in another stream as long as a stream ordering dependency
+ * of the allocating stream on the free action exists.
+ * CUDA events and null stream interactions can create the required
+ * stream ordered dependencies. (default enabled)
+ */
+ cudaMemPoolReuseFollowEventDependencies = 0x1,
+
+ /**
+ * (value type = int)
+ * Allow reuse of already completed frees when there is no dependency
+ * between the free and allocation. (default enabled)
+ */
+ cudaMemPoolReuseAllowOpportunistic = 0x2,
+
+ /**
+ * (value type = int)
+ * Allow cuMemAllocAsync to insert new stream dependencies
+ * in order to establish the stream ordering required to reuse
+ * a piece of memory released by cuFreeAsync (default enabled).
+ */
+ cudaMemPoolReuseAllowInternalDependencies = 0x3,
+
+
+ /**
+ * (value type = cuuint64_t)
+ * Amount of reserved memory in bytes to hold onto before trying
+ * to release memory back to the OS. When more than the release
+ * threshold bytes of memory are held by the memory pool, the
+ * allocator will try to release memory back to the OS on the
+ * next call to stream, event or context synchronize. (default 0)
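+ *
+ * A hedged usage sketch (\p pool is an assumed ::cudaMemPool_t; not part
+ * of this header):
+ * \code
+ * cuuint64_t threshold = UINT64_MAX;   // hold memory until explicitly trimmed
+ * cudaMemPoolSetAttribute(pool, cudaMemPoolAttrReleaseThreshold, &threshold);
+ * \endcode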
+ */
+ cudaMemPoolAttrReleaseThreshold = 0x4,
+
+ /**
+ * (value type = cuuint64_t)
+ * Amount of backing memory currently allocated for the mempool.
+ */
+ cudaMemPoolAttrReservedMemCurrent = 0x5,
+
+ /**
+ * (value type = cuuint64_t)
+ * High watermark of backing memory allocated for the mempool since the
+ * last time it was reset. High watermark can only be reset to zero.
+ */
+ cudaMemPoolAttrReservedMemHigh = 0x6,
+
+ /**
+ * (value type = cuuint64_t)
+ * Amount of memory from the pool that is currently in use by the application.
+ */
+ cudaMemPoolAttrUsedMemCurrent = 0x7,
+
+ /**
+ * (value type = cuuint64_t)
+ * High watermark of the amount of memory from the pool that was in use by the application since
+ * the last time it was reset. High watermark can only be reset to zero.
+ */
+ cudaMemPoolAttrUsedMemHigh = 0x8
+};
+
+/**
+ * Specifies the type of location
+ */
+enum __device_builtin__ cudaMemLocationType {
+ cudaMemLocationTypeInvalid = 0,
+ cudaMemLocationTypeDevice = 1 /**< Location is a device location, thus id is a device ordinal */
+};
+
+/**
+ * Specifies a memory location.
+ *
+ * To specify a gpu, set type = ::cudaMemLocationTypeDevice and set id = the gpu's device ordinal.
+ */
+struct __device_builtin__ cudaMemLocation {
+ enum cudaMemLocationType type; /**< Specifies the location type, which modifies the meaning of id. */
+ int id; /**< Identifier for the location; for ::cudaMemLocationTypeDevice this is the device ordinal. */
+};
+
+/**
+ * Specifies the memory protection flags for mapping.
+ */
+enum __device_builtin__ cudaMemAccessFlags {
+ cudaMemAccessFlagsProtNone = 0, /**< Default, make the address range not accessible */
+ cudaMemAccessFlagsProtRead = 1, /**< Make the address range read accessible */
+ cudaMemAccessFlagsProtReadWrite = 3 /**< Make the address range read-write accessible */
+};
+
+/**
+ * Memory access descriptor
+ */
+struct __device_builtin__ cudaMemAccessDesc {
+ struct cudaMemLocation location; /**< Location on which the request is to change its accessibility */
+ enum cudaMemAccessFlags flags; /**< ::CUmemProt accessibility flags to set on the request */
+};
+
+/**
+ * Defines the allocation types available
+ */
+enum __device_builtin__ cudaMemAllocationType {
+ cudaMemAllocationTypeInvalid = 0x0,
+ /** This allocation type is 'pinned', i.e. cannot migrate from its current
+ * location while the application is actively using it
+ */
+ cudaMemAllocationTypePinned = 0x1,
+ cudaMemAllocationTypeMax = 0x7FFFFFFF
+};
+
+/**
+ * Flags for specifying particular handle types
+ */
+enum __device_builtin__ cudaMemAllocationHandleType {
+ cudaMemHandleTypeNone = 0x0, /**< Does not allow any export mechanism. */
+ cudaMemHandleTypePosixFileDescriptor = 0x1, /**< Allows a file descriptor to be used for exporting. Permitted only on POSIX systems. (int) */
+ cudaMemHandleTypeWin32 = 0x2, /**< Allows a Win32 NT handle to be used for exporting. (HANDLE) */
+ cudaMemHandleTypeWin32Kmt = 0x4 /**< Allows a Win32 KMT handle to be used for exporting. (D3DKMT_HANDLE) */
+};
+
+/**
+ * Specifies the properties of allocations made from the pool.
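+ *
+ * A hedged creation sketch (not part of this header):
+ * \code
+ * struct cudaMemPoolProps props = {0};
+ * props.allocType     = cudaMemAllocationTypePinned;
+ * props.location.type = cudaMemLocationTypeDevice;
+ * props.location.id   = 0;                      // device ordinal 0
+ * cudaMemPool_t pool;
+ * cudaMemPoolCreate(&pool, &props);
+ * \endcode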
*/
+    /**
+     * Windows-specific LPSECURITYATTRIBUTES required when
+     * ::cudaMemHandleTypeWin32 is specified. This security attribute defines
+     * the scope of processes to which exported allocations may be
+     * transferred. In all other cases, this field is required to be zero.
+     */
+    void *win32SecurityAttributes;
+    unsigned char reserved[64]; /**< reserved for future use, must be 0 */
+};
+
+/**
+ * Opaque data for exporting a pool allocation
+ */
+struct __device_builtin__ cudaMemPoolPtrExportData {
+    unsigned char reserved[64];
+};
+
+/**
+ * Memory allocation node parameters
+ */
+struct __device_builtin__ cudaMemAllocNodeParams {
+    /**
+     * in: location where the allocation should reside (specified in ::location).
+     * ::handleTypes must be ::cudaMemHandleTypeNone. IPC is not supported.
+     */
+    struct cudaMemPoolProps poolProps;
+    const struct cudaMemAccessDesc *accessDescs; /**< in: array of memory access descriptors. Used to describe peer GPU access */
+    size_t accessDescCount;                      /**< in: number of memory access descriptors. Must not exceed the number of GPUs. */
+    size_t bytesize;                             /**< in: size in bytes of the requested allocation */
+    void *dptr;                                  /**< out: address of the allocation returned by CUDA */
+};
+
+/**
+ * Graph memory attributes
+ */
+enum __device_builtin__ cudaGraphMemAttributeType {
+    /**
+     * (value type = cuuint64_t)
+     * Amount of memory, in bytes, currently associated with graphs.
+     */
+    cudaGraphMemAttrUsedMemCurrent = 0x0,
+
+    /**
+     * (value type = cuuint64_t)
+     * High watermark of memory, in bytes, associated with graphs since the
+     * last time it was reset. High watermark can only be reset to zero.
+     */
+    cudaGraphMemAttrUsedMemHigh = 0x1,
+
+    /**
+     * (value type = cuuint64_t)
+     * Amount of memory, in bytes, currently allocated for use by
+     * the CUDA graphs asynchronous allocator.
+     */
+    cudaGraphMemAttrReservedMemCurrent = 0x2,
+
+    /**
+     * (value type = cuuint64_t)
+     * High watermark of memory, in bytes, allocated for use by the CUDA
+     * graphs asynchronous allocator since the last time it was reset.
+     */
+    cudaGraphMemAttrReservedMemHigh = 0x3
+};
+
+/**
+ * CUDA device P2P attributes
+ */
+enum __device_builtin__ cudaDeviceP2PAttr {
+    cudaDevP2PAttrPerformanceRank          = 1, /**< A relative value indicating the performance of the link between two devices */
+    cudaDevP2PAttrAccessSupported          = 2, /**< Peer access is enabled */
+    cudaDevP2PAttrNativeAtomicSupported    = 3, /**< Native atomic operation over the link supported */
+    cudaDevP2PAttrCudaArrayAccessSupported = 4  /**< Accessing CUDA arrays over the link supported */
+};
+
+/**
+ * CUDA UUID types
+ */
+#ifndef CU_UUID_HAS_BEEN_DEFINED
+#define CU_UUID_HAS_BEEN_DEFINED
+struct __device_builtin__ CUuuid_st { /**< CUDA definition of UUID */
+    char bytes[16];
+};
+typedef __device_builtin__ struct CUuuid_st CUuuid;
+#endif
+typedef __device_builtin__ struct CUuuid_st cudaUUID_t;
+
+/**
+ * CUDA device properties
+ */
+struct __device_builtin__ cudaDeviceProp
+{
+    char name[256];                  /**< ASCII string identifying device */
+    cudaUUID_t uuid;                 /**< 16-byte unique identifier */
+    char luid[8];                    /**< 8-byte locally unique identifier. Value is undefined on TCC and non-Windows platforms */
+    unsigned int luidDeviceNodeMask; /**< LUID device node mask.
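+ *
+ * The whole struct is typically populated with ::cudaGetDeviceProperties;
+ * a minimal sketch, assuming device 0:
+ *
+ * \code
+ * struct cudaDeviceProp prop;
+ * cudaGetDeviceProperties(&prop, 0);
+ * printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
+ * \endcode
+ *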
Value is undefined on TCC and non-Windows platforms */
+    size_t totalGlobalMem;          /**< Global memory available on device in bytes */
+    size_t sharedMemPerBlock;       /**< Shared memory available per block in bytes */
+    int    regsPerBlock;            /**< 32-bit registers available per block */
+    int    warpSize;                /**< Warp size in threads */
+    size_t memPitch;                /**< Maximum pitch in bytes allowed by memory copies */
+    int    maxThreadsPerBlock;      /**< Maximum number of threads per block */
+    int    maxThreadsDim[3];        /**< Maximum size of each dimension of a block */
+    int    maxGridSize[3];          /**< Maximum size of each dimension of a grid */
+    int    clockRate;               /**< Deprecated, Clock frequency in kilohertz */
+    size_t totalConstMem;           /**< Constant memory available on device in bytes */
+    int    major;                   /**< Major compute capability */
+    int    minor;                   /**< Minor compute capability */
+    size_t textureAlignment;        /**< Alignment requirement for textures */
+    size_t texturePitchAlignment;   /**< Pitch alignment requirement for texture references bound to pitched memory */
+    int    deviceOverlap;           /**< Device can concurrently copy memory and execute a kernel. Deprecated. Use asyncEngineCount instead. */
+    int    multiProcessorCount;     /**< Number of multiprocessors on device */
+    int    kernelExecTimeoutEnabled;/**< Deprecated, specifies whether there is a run time limit on kernels */
+    int    integrated;              /**< Device is integrated as opposed to discrete */
+    int    canMapHostMemory;        /**< Device can map host memory with cudaHostAlloc/cudaHostGetDevicePointer */
+    int    computeMode;             /**< Deprecated, Compute mode (See ::cudaComputeMode) */
+    int    maxTexture1D;            /**< Maximum 1D texture size */
+    int    maxTexture1DMipmap;      /**< Maximum 1D mipmapped texture size */
+    int    maxTexture1DLinear;      /**< Deprecated, do not use. Use cudaDeviceGetTexture1DLinearMaxWidth() or cuDeviceGetTexture1DLinearMaxWidth() instead.
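+ *
+ * A sketch of the replacement query; the float channel format and device 0
+ * are assumptions (the C++ helper cudaCreateChannelDesc is used):
+ *
+ * \code
+ * size_t maxWidth = 0;
+ * struct cudaChannelFormatDesc fmt = cudaCreateChannelDesc<float>();
+ * cudaDeviceGetTexture1DLinearMaxWidth(&maxWidth, &fmt, 0);
+ * \endcode
+ *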
*/ + int maxTexture2D[2]; /**< Maximum 2D texture dimensions */ + int maxTexture2DMipmap[2]; /**< Maximum 2D mipmapped texture dimensions */ + int maxTexture2DLinear[3]; /**< Maximum dimensions (width, height, pitch) for 2D textures bound to pitched memory */ + int maxTexture2DGather[2]; /**< Maximum 2D texture dimensions if texture gather operations have to be performed */ + int maxTexture3D[3]; /**< Maximum 3D texture dimensions */ + int maxTexture3DAlt[3]; /**< Maximum alternate 3D texture dimensions */ + int maxTextureCubemap; /**< Maximum Cubemap texture dimensions */ + int maxTexture1DLayered[2]; /**< Maximum 1D layered texture dimensions */ + int maxTexture2DLayered[3]; /**< Maximum 2D layered texture dimensions */ + int maxTextureCubemapLayered[2];/**< Maximum Cubemap layered texture dimensions */ + int maxSurface1D; /**< Maximum 1D surface size */ + int maxSurface2D[2]; /**< Maximum 2D surface dimensions */ + int maxSurface3D[3]; /**< Maximum 3D surface dimensions */ + int maxSurface1DLayered[2]; /**< Maximum 1D layered surface dimensions */ + int maxSurface2DLayered[3]; /**< Maximum 2D layered surface dimensions */ + int maxSurfaceCubemap; /**< Maximum Cubemap surface dimensions */ + int maxSurfaceCubemapLayered[2];/**< Maximum Cubemap layered surface dimensions */ + size_t surfaceAlignment; /**< Alignment requirements for surfaces */ + int concurrentKernels; /**< Device can possibly execute multiple kernels concurrently */ + int ECCEnabled; /**< Device has ECC support enabled */ + int pciBusID; /**< PCI bus ID of the device */ + int pciDeviceID; /**< PCI device ID of the device */ + int pciDomainID; /**< PCI domain ID of the device */ + int tccDriver; /**< 1 if device is a Tesla device using TCC driver, 0 otherwise */ + int asyncEngineCount; /**< Number of asynchronous engines */ + int unifiedAddressing; /**< Device shares a unified address space with the host */ + int memoryClockRate; /**< Deprecated, Peak memory clock frequency in kilohertz */ + int memoryBusWidth; /**< Global memory bus width in bits */ + int l2CacheSize; /**< Size of L2 cache in bytes */ + int persistingL2CacheMaxSize; /**< Device's maximum l2 persisting lines capacity setting in bytes */ + int maxThreadsPerMultiProcessor;/**< Maximum resident threads per multiprocessor */ + int streamPrioritiesSupported; /**< Device supports stream priorities */ + int globalL1CacheSupported; /**< Device supports caching globals in L1 */ + int localL1CacheSupported; /**< Device supports caching locals in L1 */ + size_t sharedMemPerMultiprocessor; /**< Shared memory available per multiprocessor in bytes */ + int regsPerMultiprocessor; /**< 32-bit registers available per multiprocessor */ + int managedMemory; /**< Device supports allocating managed memory on this system */ + int isMultiGpuBoard; /**< Device is on a multi-GPU board */ + int multiGpuBoardGroupID; /**< Unique identifier for a group of devices on the same multi-GPU board */ + int hostNativeAtomicSupported; /**< Link between the device and the host supports native atomic operations */ + int singleToDoublePrecisionPerfRatio; /**< Deprecated, Ratio of single precision performance (in floating-point operations per second) to double precision performance */ + int pageableMemoryAccess; /**< Device supports coherently accessing pageable memory without calling cudaHostRegister on it */ + int concurrentManagedAccess; /**< Device can coherently access managed memory concurrently with the CPU */ + int computePreemptionSupported; /**< Device supports Compute Preemption */ + 
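+    /* Many of the deprecated fields above have ::cudaDeviceGetAttribute
+     * replacements; a minimal sketch, assuming device 0:
+     *
+     * \code
+     * int maxThreads = 0;
+     * cudaDeviceGetAttribute(&maxThreads,
+     *                        cudaDevAttrMaxThreadsPerMultiProcessor, 0);
+     * \endcode
+     */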
int canUseHostPointerForRegisteredMem; /**< Device can access host registered memory at the same virtual address as the CPU */ + int cooperativeLaunch; /**< Device supports launching cooperative kernels via ::cudaLaunchCooperativeKernel */ + int cooperativeMultiDeviceLaunch; /**< Deprecated, cudaLaunchCooperativeKernelMultiDevice is deprecated. */ + size_t sharedMemPerBlockOptin; /**< Per device maximum shared memory per block usable by special opt in */ + int pageableMemoryAccessUsesHostPageTables; /**< Device accesses pageable memory via the host's page tables */ + int directManagedMemAccessFromHost; /**< Host can directly access managed memory on the device without migration. */ + int maxBlocksPerMultiProcessor; /**< Maximum number of resident blocks per multiprocessor */ + int accessPolicyMaxWindowSize; /**< The maximum value of ::cudaAccessPolicyWindow::num_bytes. */ + size_t reservedSharedMemPerBlock; /**< Shared memory reserved by CUDA driver per block in bytes */ + int hostRegisterSupported; /**< Device supports host memory registration via ::cudaHostRegister. */ + int sparseCudaArraySupported; /**< 1 if the device supports sparse CUDA arrays and sparse CUDA mipmapped arrays, 0 otherwise */ + int hostRegisterReadOnlySupported; /**< Device supports using the ::cudaHostRegister flag cudaHostRegisterReadOnly to register memory that must be mapped as read-only to the GPU */ + int timelineSemaphoreInteropSupported; /**< External timeline semaphore interop is supported on the device */ + int memoryPoolsSupported; /**< 1 if the device supports using the cudaMallocAsync and cudaMemPool family of APIs, 0 otherwise */ + int gpuDirectRDMASupported; /**< 1 if the device supports GPUDirect RDMA APIs, 0 otherwise */ + unsigned int gpuDirectRDMAFlushWritesOptions; /**< Bitmask to be interpreted according to the ::cudaFlushGPUDirectRDMAWritesOptions enum */ + int gpuDirectRDMAWritesOrdering;/**< See the ::cudaGPUDirectRDMAWritesOrdering enum for numerical values */ + unsigned int memoryPoolSupportedHandleTypes; /**< Bitmask of handle types supported with mempool-based IPC */ + int deferredMappingCudaArraySupported; /**< 1 if the device supports deferred mapping CUDA arrays and CUDA mipmapped arrays */ + int ipcEventSupported; /**< Device supports IPC Events. 
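+ *
+ * Capability fields such as ::cudaDeviceProp::memoryPoolsSupported are
+ * commonly used as feature gates; a minimal sketch (the stream is assumed
+ * to exist):
+ *
+ * \code
+ * struct cudaDeviceProp prop;
+ * cudaGetDeviceProperties(&prop, 0);
+ * if (prop.memoryPoolsSupported) {
+ *     void *p = NULL;
+ *     cudaMallocAsync(&p, 1 << 20, stream);
+ *     cudaFreeAsync(p, stream);
+ * }
+ * \endcode
+ *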
*/ + int clusterLaunch; /**< Indicates device supports cluster launch */ + int unifiedFunctionPointers; /**< Indicates device supports unified pointers */ + int reserved2[2]; + int reserved[61]; /**< Reserved for future use */ +}; + +/** + * CUDA IPC Handle Size + */ +#define CUDA_IPC_HANDLE_SIZE 64 + +/** + * CUDA IPC event handle + */ +typedef __device_builtin__ struct __device_builtin__ cudaIpcEventHandle_st +{ + char reserved[CUDA_IPC_HANDLE_SIZE]; +}cudaIpcEventHandle_t; + +/** + * CUDA IPC memory handle + */ +typedef __device_builtin__ struct __device_builtin__ cudaIpcMemHandle_st +{ + char reserved[CUDA_IPC_HANDLE_SIZE]; +}cudaIpcMemHandle_t; + +/** + * External memory handle types + */ +enum __device_builtin__ cudaExternalMemoryHandleType { + /** + * Handle is an opaque file descriptor + */ + cudaExternalMemoryHandleTypeOpaqueFd = 1, + /** + * Handle is an opaque shared NT handle + */ + cudaExternalMemoryHandleTypeOpaqueWin32 = 2, + /** + * Handle is an opaque, globally shared handle + */ + cudaExternalMemoryHandleTypeOpaqueWin32Kmt = 3, + /** + * Handle is a D3D12 heap object + */ + cudaExternalMemoryHandleTypeD3D12Heap = 4, + /** + * Handle is a D3D12 committed resource + */ + cudaExternalMemoryHandleTypeD3D12Resource = 5, + /** + * Handle is a shared NT handle to a D3D11 resource + */ + cudaExternalMemoryHandleTypeD3D11Resource = 6, + /** + * Handle is a globally shared handle to a D3D11 resource + */ + cudaExternalMemoryHandleTypeD3D11ResourceKmt = 7, + /** + * Handle is an NvSciBuf object + */ + cudaExternalMemoryHandleTypeNvSciBuf = 8 +}; + +/** + * Indicates that the external memory object is a dedicated resource + */ +#define cudaExternalMemoryDedicated 0x1 + +/** When the /p flags parameter of ::cudaExternalSemaphoreSignalParams + * contains this flag, it indicates that signaling an external semaphore object + * should skip performing appropriate memory synchronization operations over all + * the external memory objects that are imported as ::cudaExternalMemoryHandleTypeNvSciBuf, + * which otherwise are performed by default to ensure data coherency with other + * importers of the same NvSciBuf memory objects. + */ +#define cudaExternalSemaphoreSignalSkipNvSciBufMemSync 0x01 + +/** When the /p flags parameter of ::cudaExternalSemaphoreWaitParams + * contains this flag, it indicates that waiting an external semaphore object + * should skip performing appropriate memory synchronization operations over all + * the external memory objects that are imported as ::cudaExternalMemoryHandleTypeNvSciBuf, + * which otherwise are performed by default to ensure data coherency with other + * importers of the same NvSciBuf memory objects. + */ +#define cudaExternalSemaphoreWaitSkipNvSciBufMemSync 0x02 + +/** + * When /p flags of ::cudaDeviceGetNvSciSyncAttributes is set to this, + * it indicates that application need signaler specific NvSciSyncAttr + * to be filled by ::cudaDeviceGetNvSciSyncAttributes. + */ +#define cudaNvSciSyncAttrSignal 0x1 + +/** + * When /p flags of ::cudaDeviceGetNvSciSyncAttributes is set to this, + * it indicates that application need waiter specific NvSciSyncAttr + * to be filled by ::cudaDeviceGetNvSciSyncAttributes. + */ +#define cudaNvSciSyncAttrWait 0x2 + +/** + * External memory handle descriptor + */ +struct __device_builtin__ cudaExternalMemoryHandleDesc { + /** + * Type of the handle + */ + enum cudaExternalMemoryHandleType type; + union { + /** + * File descriptor referencing the memory object. 
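+ *
+ * A minimal import sketch; the fd and size are assumptions and would come
+ * from an exporting API (e.g. Vulkan):
+ *
+ * \code
+ * struct cudaExternalMemoryHandleDesc desc = {0};
+ * desc.type      = cudaExternalMemoryHandleTypeOpaqueFd;
+ * desc.handle.fd = fd;                     // exported fd (assumed)
+ * desc.size      = sizeBytes;              // allocation size (assumed)
+ * cudaExternalMemory_t extMem;
+ * cudaImportExternalMemory(&extMem, &desc);
+ * \endcode
+ *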
Valid + * when type is + * ::cudaExternalMemoryHandleTypeOpaqueFd + */ + int fd; + /** + * Win32 handle referencing the semaphore object. Valid when + * type is one of the following: + * - ::cudaExternalMemoryHandleTypeOpaqueWin32 + * - ::cudaExternalMemoryHandleTypeOpaqueWin32Kmt + * - ::cudaExternalMemoryHandleTypeD3D12Heap + * - ::cudaExternalMemoryHandleTypeD3D12Resource + * - ::cudaExternalMemoryHandleTypeD3D11Resource + * - ::cudaExternalMemoryHandleTypeD3D11ResourceKmt + * Exactly one of 'handle' and 'name' must be non-NULL. If + * type is one of the following: + * ::cudaExternalMemoryHandleTypeOpaqueWin32Kmt + * ::cudaExternalMemoryHandleTypeD3D11ResourceKmt + * then 'name' must be NULL. + */ + struct { + /** + * Valid NT handle. Must be NULL if 'name' is non-NULL + */ + void *handle; + /** + * Name of a valid memory object. + * Must be NULL if 'handle' is non-NULL. + */ + const void *name; + } win32; + /** + * A handle representing NvSciBuf Object. Valid when type + * is ::cudaExternalMemoryHandleTypeNvSciBuf + */ + const void *nvSciBufObject; + } handle; + /** + * Size of the memory allocation + */ + unsigned long long size; + /** + * Flags must either be zero or ::cudaExternalMemoryDedicated + */ + unsigned int flags; +}; + +/** + * External memory buffer descriptor + */ +struct __device_builtin__ cudaExternalMemoryBufferDesc { + /** + * Offset into the memory object where the buffer's base is + */ + unsigned long long offset; + /** + * Size of the buffer + */ + unsigned long long size; + /** + * Flags reserved for future use. Must be zero. + */ + unsigned int flags; +}; + +/** + * External memory mipmap descriptor + */ +struct __device_builtin__ cudaExternalMemoryMipmappedArrayDesc { + /** + * Offset into the memory object where the base level of the + * mipmap chain is. + */ + unsigned long long offset; + /** + * Format of base level of the mipmap chain + */ + struct cudaChannelFormatDesc formatDesc; + /** + * Dimensions of base level of the mipmap chain + */ + struct cudaExtent extent; + /** + * Flags associated with CUDA mipmapped arrays. 
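+ *
+ * The buffer descriptor above is consumed by
+ * ::cudaExternalMemoryGetMappedBuffer; a minimal sketch (extMem and
+ * sizeBytes are assumed from an earlier import):
+ *
+ * \code
+ * struct cudaExternalMemoryBufferDesc bufDesc = {0};
+ * bufDesc.offset = 0;
+ * bufDesc.size   = sizeBytes;
+ * void *devPtr = NULL;
+ * cudaExternalMemoryGetMappedBuffer(&devPtr, extMem, &bufDesc);
+ * \endcode
+ *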
+ * See ::cudaMallocMipmappedArray + */ + unsigned int flags; + /** + * Total number of levels in the mipmap chain + */ + unsigned int numLevels; +}; + +/** + * External semaphore handle types + */ +enum __device_builtin__ cudaExternalSemaphoreHandleType { + /** + * Handle is an opaque file descriptor + */ + cudaExternalSemaphoreHandleTypeOpaqueFd = 1, + /** + * Handle is an opaque shared NT handle + */ + cudaExternalSemaphoreHandleTypeOpaqueWin32 = 2, + /** + * Handle is an opaque, globally shared handle + */ + cudaExternalSemaphoreHandleTypeOpaqueWin32Kmt = 3, + /** + * Handle is a shared NT handle referencing a D3D12 fence object + */ + cudaExternalSemaphoreHandleTypeD3D12Fence = 4, + /** + * Handle is a shared NT handle referencing a D3D11 fence object + */ + cudaExternalSemaphoreHandleTypeD3D11Fence = 5, + /** + * Opaque handle to NvSciSync Object + */ + cudaExternalSemaphoreHandleTypeNvSciSync = 6, + /** + * Handle is a shared NT handle referencing a D3D11 keyed mutex object + */ + cudaExternalSemaphoreHandleTypeKeyedMutex = 7, + /** + * Handle is a shared KMT handle referencing a D3D11 keyed mutex object + */ + cudaExternalSemaphoreHandleTypeKeyedMutexKmt = 8, + /** + * Handle is an opaque handle file descriptor referencing a timeline semaphore + */ + cudaExternalSemaphoreHandleTypeTimelineSemaphoreFd = 9, + /** + * Handle is an opaque handle file descriptor referencing a timeline semaphore + */ + cudaExternalSemaphoreHandleTypeTimelineSemaphoreWin32 = 10 +}; + +/** + * External semaphore handle descriptor + */ +struct __device_builtin__ cudaExternalSemaphoreHandleDesc { + /** + * Type of the handle + */ + enum cudaExternalSemaphoreHandleType type; + union { + /** + * File descriptor referencing the semaphore object. Valid when + * type is one of the following: + * - ::cudaExternalSemaphoreHandleTypeOpaqueFd + * - ::cudaExternalSemaphoreHandleTypeTimelineSemaphoreFd + */ + int fd; + /** + * Win32 handle referencing the semaphore object. Valid when + * type is one of the following: + * - ::cudaExternalSemaphoreHandleTypeOpaqueWin32 + * - ::cudaExternalSemaphoreHandleTypeOpaqueWin32Kmt + * - ::cudaExternalSemaphoreHandleTypeD3D12Fence + * - ::cudaExternalSemaphoreHandleTypeD3D11Fence + * - ::cudaExternalSemaphoreHandleTypeKeyedMutex + * - ::cudaExternalSemaphoreHandleTypeTimelineSemaphoreWin32 + * Exactly one of 'handle' and 'name' must be non-NULL. If + * type is one of the following: + * ::cudaExternalSemaphoreHandleTypeOpaqueWin32Kmt + * ::cudaExternalSemaphoreHandleTypeKeyedMutexKmt + * then 'name' must be NULL. + */ + struct { + /** + * Valid NT handle. Must be NULL if 'name' is non-NULL + */ + void *handle; + /** + * Name of a valid synchronization primitive. + * Must be NULL if 'handle' is non-NULL. + */ + const void *name; + } win32; + /** + * Valid NvSciSyncObj. Must be non NULL + */ + const void* nvSciSyncObj; + } handle; + /** + * Flags reserved for the future. Must be zero. + */ + unsigned int flags; +}; + +/** + * External semaphore signal parameters(deprecated) + */ +struct __device_builtin__ cudaExternalSemaphoreSignalParams_v1 { + struct { + /** + * Parameters for fence objects + */ + struct { + /** + * Value of fence to be signaled + */ + unsigned long long value; + } fence; + union { + /** + * Pointer to NvSciSyncFence. Valid if ::cudaExternalSemaphoreHandleType + * is of type ::cudaExternalSemaphoreHandleTypeNvSciSync. 
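+ *
+ * A minimal fence-signal sketch using the current (non-deprecated) struct;
+ * extSem, stream and the fence value are assumptions:
+ *
+ * \code
+ * struct cudaExternalSemaphoreSignalParams sp = {0};
+ * sp.params.fence.value = 1;
+ * cudaSignalExternalSemaphoresAsync(&extSem, &sp, 1, stream);
+ * \endcode
+ *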
+ */ + void *fence; + unsigned long long reserved; + } nvSciSync; + /** + * Parameters for keyed mutex objects + */ + struct { + /* + * Value of key to release the mutex with + */ + unsigned long long key; + } keyedMutex; + } params; + /** + * Only when ::cudaExternalSemaphoreSignalParams is used to + * signal a ::cudaExternalSemaphore_t of type + * ::cudaExternalSemaphoreHandleTypeNvSciSync, the valid flag is + * ::cudaExternalSemaphoreSignalSkipNvSciBufMemSync: which indicates + * that while signaling the ::cudaExternalSemaphore_t, no memory + * synchronization operations should be performed for any external memory + * object imported as ::cudaExternalMemoryHandleTypeNvSciBuf. + * For all other types of ::cudaExternalSemaphore_t, flags must be zero. + */ + unsigned int flags; +}; + +/** +* External semaphore wait parameters(deprecated) +*/ +struct __device_builtin__ cudaExternalSemaphoreWaitParams_v1 { + struct { + /** + * Parameters for fence objects + */ + struct { + /** + * Value of fence to be waited on + */ + unsigned long long value; + } fence; + union { + /** + * Pointer to NvSciSyncFence. Valid if ::cudaExternalSemaphoreHandleType + * is of type ::cudaExternalSemaphoreHandleTypeNvSciSync. + */ + void *fence; + unsigned long long reserved; + } nvSciSync; + /** + * Parameters for keyed mutex objects + */ + struct { + /** + * Value of key to acquire the mutex with + */ + unsigned long long key; + /** + * Timeout in milliseconds to wait to acquire the mutex + */ + unsigned int timeoutMs; + } keyedMutex; + } params; + /** + * Only when ::cudaExternalSemaphoreSignalParams is used to + * signal a ::cudaExternalSemaphore_t of type + * ::cudaExternalSemaphoreHandleTypeNvSciSync, the valid flag is + * ::cudaExternalSemaphoreSignalSkipNvSciBufMemSync: which indicates + * that while waiting for the ::cudaExternalSemaphore_t, no memory + * synchronization operations should be performed for any external memory + * object imported as ::cudaExternalMemoryHandleTypeNvSciBuf. + * For all other types of ::cudaExternalSemaphore_t, flags must be zero. + */ + unsigned int flags; +}; + +/** + * External semaphore signal parameters, compatible with driver type + */ +struct __device_builtin__ cudaExternalSemaphoreSignalParams{ + struct { + /** + * Parameters for fence objects + */ + struct { + /** + * Value of fence to be signaled + */ + unsigned long long value; + } fence; + union { + /** + * Pointer to NvSciSyncFence. Valid if ::cudaExternalSemaphoreHandleType + * is of type ::cudaExternalSemaphoreHandleTypeNvSciSync. + */ + void *fence; + unsigned long long reserved; + } nvSciSync; + /** + * Parameters for keyed mutex objects + */ + struct { + /* + * Value of key to release the mutex with + */ + unsigned long long key; + } keyedMutex; + unsigned int reserved[12]; + } params; + /** + * Only when ::cudaExternalSemaphoreSignalParams is used to + * signal a ::cudaExternalSemaphore_t of type + * ::cudaExternalSemaphoreHandleTypeNvSciSync, the valid flag is + * ::cudaExternalSemaphoreSignalSkipNvSciBufMemSync: which indicates + * that while signaling the ::cudaExternalSemaphore_t, no memory + * synchronization operations should be performed for any external memory + * object imported as ::cudaExternalMemoryHandleTypeNvSciBuf. + * For all other types of ::cudaExternalSemaphore_t, flags must be zero. 
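+ *
+ * The wait side is symmetric; a minimal sketch (extSem, stream and the
+ * fence value are assumptions):
+ *
+ * \code
+ * struct cudaExternalSemaphoreWaitParams wp = {0};
+ * wp.params.fence.value = 1;
+ * cudaWaitExternalSemaphoresAsync(&extSem, &wp, 1, stream);
+ * \endcode
+ *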
+ */ + unsigned int flags; + unsigned int reserved[16]; +}; + +/** + * External semaphore wait parameters, compatible with driver type + */ +struct __device_builtin__ cudaExternalSemaphoreWaitParams { + struct { + /** + * Parameters for fence objects + */ + struct { + /** + * Value of fence to be waited on + */ + unsigned long long value; + } fence; + union { + /** + * Pointer to NvSciSyncFence. Valid if ::cudaExternalSemaphoreHandleType + * is of type ::cudaExternalSemaphoreHandleTypeNvSciSync. + */ + void *fence; + unsigned long long reserved; + } nvSciSync; + /** + * Parameters for keyed mutex objects + */ + struct { + /** + * Value of key to acquire the mutex with + */ + unsigned long long key; + /** + * Timeout in milliseconds to wait to acquire the mutex + */ + unsigned int timeoutMs; + } keyedMutex; + unsigned int reserved[10]; + } params; + /** + * Only when ::cudaExternalSemaphoreSignalParams is used to + * signal a ::cudaExternalSemaphore_t of type + * ::cudaExternalSemaphoreHandleTypeNvSciSync, the valid flag is + * ::cudaExternalSemaphoreSignalSkipNvSciBufMemSync: which indicates + * that while waiting for the ::cudaExternalSemaphore_t, no memory + * synchronization operations should be performed for any external memory + * object imported as ::cudaExternalMemoryHandleTypeNvSciBuf. + * For all other types of ::cudaExternalSemaphore_t, flags must be zero. + */ + unsigned int flags; + unsigned int reserved[16]; +}; + +/******************************************************************************* +* * +* SHORTHAND TYPE DEFINITION USED BY RUNTIME API * +* * +*******************************************************************************/ + +/** + * CUDA Error types + */ +typedef __device_builtin__ enum cudaError cudaError_t; + +/** + * CUDA stream + */ +typedef __device_builtin__ struct CUstream_st *cudaStream_t; + +/** + * CUDA event types + */ +typedef __device_builtin__ struct CUevent_st *cudaEvent_t; + +/** + * CUDA graphics resource types + */ +typedef __device_builtin__ struct cudaGraphicsResource *cudaGraphicsResource_t; + +/** + * CUDA external memory + */ +typedef __device_builtin__ struct CUexternalMemory_st *cudaExternalMemory_t; + +/** + * CUDA external semaphore + */ +typedef __device_builtin__ struct CUexternalSemaphore_st *cudaExternalSemaphore_t; + +/** + * CUDA graph + */ +typedef __device_builtin__ struct CUgraph_st *cudaGraph_t; + +/** + * CUDA graph node. 
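+ *
+ * Graphs are often built by stream capture; a minimal sketch (the stream
+ * and myKernel are assumptions):
+ *
+ * \code
+ * cudaGraph_t graph;
+ * cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
+ * myKernel<<<grid, block, 0, stream>>>(args);
+ * cudaStreamEndCapture(stream, &graph);
+ * \endcode
+ *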
+ */ +typedef __device_builtin__ struct CUgraphNode_st *cudaGraphNode_t; + +/** + * CUDA user object for graphs + */ +typedef __device_builtin__ struct CUuserObject_st *cudaUserObject_t; + +/** + * CUDA function + */ +typedef __device_builtin__ struct CUfunc_st *cudaFunction_t; + +/** + * CUDA kernel + */ +typedef __device_builtin__ struct CUkern_st *cudaKernel_t; + +/** + * CUDA memory pool + */ +typedef __device_builtin__ struct CUmemPoolHandle_st *cudaMemPool_t; + +/** + * CUDA cooperative group scope + */ +enum __device_builtin__ cudaCGScope { + cudaCGScopeInvalid = 0, /**< Invalid cooperative group scope */ + cudaCGScopeGrid = 1, /**< Scope represented by a grid_group */ + cudaCGScopeMultiGrid = 2 /**< Scope represented by a multi_grid_group */ +}; + +/** + * CUDA launch parameters + */ +struct __device_builtin__ cudaLaunchParams +{ + void *func; /**< Device function symbol */ + dim3 gridDim; /**< Grid dimentions */ + dim3 blockDim; /**< Block dimentions */ + void **args; /**< Arguments */ + size_t sharedMem; /**< Shared memory */ + cudaStream_t stream; /**< Stream identifier */ +}; + +/** + * CUDA GPU kernel node parameters + */ +struct __device_builtin__ cudaKernelNodeParams { + void* func; /**< Kernel to launch */ + dim3 gridDim; /**< Grid dimensions */ + dim3 blockDim; /**< Block dimensions */ + unsigned int sharedMemBytes; /**< Dynamic shared-memory size per thread block in bytes */ + void **kernelParams; /**< Array of pointers to individual kernel arguments*/ + void **extra; /**< Pointer to kernel arguments in the "extra" format */ +}; + +/** + * External semaphore signal node parameters + */ +struct __device_builtin__ cudaExternalSemaphoreSignalNodeParams { + cudaExternalSemaphore_t* extSemArray; /**< Array of external semaphore handles. */ + const struct cudaExternalSemaphoreSignalParams* paramsArray; /**< Array of external semaphore signal parameters. */ + unsigned int numExtSems; /**< Number of handles and parameters supplied in extSemArray and paramsArray. */ +}; + +/** + * External semaphore wait node parameters + */ +struct __device_builtin__ cudaExternalSemaphoreWaitNodeParams { + cudaExternalSemaphore_t* extSemArray; /**< Array of external semaphore handles. */ + const struct cudaExternalSemaphoreWaitParams* paramsArray; /**< Array of external semaphore wait parameters. */ + unsigned int numExtSems; /**< Number of handles and parameters supplied in extSemArray and paramsArray. 
*/ +}; + +/** +* CUDA Graph node types +*/ +enum __device_builtin__ cudaGraphNodeType { + cudaGraphNodeTypeKernel = 0x00, /**< GPU kernel node */ + cudaGraphNodeTypeMemcpy = 0x01, /**< Memcpy node */ + cudaGraphNodeTypeMemset = 0x02, /**< Memset node */ + cudaGraphNodeTypeHost = 0x03, /**< Host (executable) node */ + cudaGraphNodeTypeGraph = 0x04, /**< Node which executes an embedded graph */ + cudaGraphNodeTypeEmpty = 0x05, /**< Empty (no-op) node */ + cudaGraphNodeTypeWaitEvent = 0x06, /**< External event wait node */ + cudaGraphNodeTypeEventRecord = 0x07, /**< External event record node */ + cudaGraphNodeTypeExtSemaphoreSignal = 0x08, /**< External semaphore signal node */ + cudaGraphNodeTypeExtSemaphoreWait = 0x09, /**< External semaphore wait node */ + cudaGraphNodeTypeMemAlloc = 0x0a, /**< Memory allocation node */ + cudaGraphNodeTypeMemFree = 0x0b, /**< Memory free node */ + cudaGraphNodeTypeCount +}; + +/** + * CUDA executable (launchable) graph + */ +typedef struct CUgraphExec_st* cudaGraphExec_t; + +/** +* CUDA Graph Update error types +*/ +enum __device_builtin__ cudaGraphExecUpdateResult { + cudaGraphExecUpdateSuccess = 0x0, /**< The update succeeded */ + cudaGraphExecUpdateError = 0x1, /**< The update failed for an unexpected reason which is described in the return value of the function */ + cudaGraphExecUpdateErrorTopologyChanged = 0x2, /**< The update failed because the topology changed */ + cudaGraphExecUpdateErrorNodeTypeChanged = 0x3, /**< The update failed because a node type changed */ + cudaGraphExecUpdateErrorFunctionChanged = 0x4, /**< The update failed because the function of a kernel node changed (CUDA driver < 11.2) */ + cudaGraphExecUpdateErrorParametersChanged = 0x5, /**< The update failed because the parameters changed in a way that is not supported */ + cudaGraphExecUpdateErrorNotSupported = 0x6, /**< The update failed because something about the node is not supported */ + cudaGraphExecUpdateErrorUnsupportedFunctionChange = 0x7, /**< The update failed because the function of a kernel node changed in an unsupported way */ + cudaGraphExecUpdateErrorAttributesChanged = 0x8 /**< The update failed because the node attributes changed in a way that is not supported */ +}; + +/** + * Graph instantiation results +*/ +typedef __device_builtin__ enum cudaGraphInstantiateResult { + cudaGraphInstantiateSuccess = 0, /**< Instantiation succeeded */ + cudaGraphInstantiateError = 1, /**< Instantiation failed for an unexpected reason which is described in the return value of the function */ + cudaGraphInstantiateInvalidStructure = 2, /**< Instantiation failed due to invalid structure, such as cycles */ + cudaGraphInstantiateNodeOperationNotSupported = 3, /**< Instantiation for device launch failed because the graph contained an unsupported operation */ + cudaGraphInstantiateMultipleDevicesNotSupported = 4 /**< Instantiation for device launch failed due to the nodes belonging to different contexts */ +} cudaGraphInstantiateResult; + +/** + * Graph instantiation parameters + */ +typedef __device_builtin__ struct cudaGraphInstantiateParams_st +{ + unsigned long long flags; /**< Instantiation flags */ + cudaStream_t uploadStream; /**< Upload stream */ + cudaGraphNode_t errNode_out; /**< The node which caused instantiation to fail, if any */ + cudaGraphInstantiateResult result_out; /**< Whether instantiation was successful. 
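+ *
+ * A minimal instantiation sketch with this struct (the graph and the flag
+ * choice are assumptions; ::cudaGraphInstantiateWithParams fills errNode_out
+ * and result_out on failure):
+ *
+ * \code
+ * cudaGraphInstantiateParams ip = {0};
+ * ip.flags = cudaGraphInstantiateFlagAutoFreeOnLaunch;
+ * cudaGraphExec_t exec;
+ * cudaGraphInstantiateWithParams(&exec, graph, &ip);
+ * \endcode
+ *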
If it failed, the reason why */
+} cudaGraphInstantiateParams;
+
+/**
+ * Result information returned by cudaGraphExecUpdate
+ */
+typedef __device_builtin__ struct cudaGraphExecUpdateResultInfo_st {
+    /**
+     * Gives more specific detail when a CUDA graph update fails.
+     */
+    enum cudaGraphExecUpdateResult result;
+
+    /**
+     * The "to node" of the error edge when the topologies do not match.
+     * The error node when the error is associated with a specific node.
+     * NULL when the error is generic.
+     */
+    cudaGraphNode_t errorNode;
+
+    /**
+     * The "from node" of the error edge when the topologies do not match. Otherwise NULL.
+     */
+    cudaGraphNode_t errorFromNode;
+} cudaGraphExecUpdateResultInfo;
+
+/**
+ * Flags to specify search options to be used with ::cudaGetDriverEntryPoint
+ * For more details see ::cuGetProcAddress
+ */
+enum __device_builtin__ cudaGetDriverEntryPointFlags {
+    cudaEnableDefault                = 0x0, /**< Default search mode for driver symbols. */
+    cudaEnableLegacyStream           = 0x1, /**< Search for legacy versions of driver symbols. */
+    cudaEnablePerThreadDefaultStream = 0x2  /**< Search for per-thread versions of driver symbols. */
+};
+
+/**
+ * Enum for status from obtaining driver entry points, used with ::cudaApiGetDriverEntryPoint
+ */
+enum __device_builtin__ cudaDriverEntryPointQueryResult {
+    cudaDriverEntryPointSuccess             = 0, /**< Search for symbol found a match */
+    cudaDriverEntryPointSymbolNotFound      = 1, /**< Search for symbol was not found */
+    cudaDriverEntryPointVersionNotSufficent = 2  /**< Search for symbol was found but the version was not sufficient */
+};
+
+/**
+ * CUDA Graph debug write options
+ */
+enum __device_builtin__ cudaGraphDebugDotFlags {
+    cudaGraphDebugDotFlagsVerbose                  = 1<<0,  /**< Output all debug data as if every debug flag is enabled */
+    cudaGraphDebugDotFlagsKernelNodeParams         = 1<<2,  /**< Adds cudaKernelNodeParams to output */
+    cudaGraphDebugDotFlagsMemcpyNodeParams         = 1<<3,  /**< Adds cudaMemcpy3DParms to output */
+    cudaGraphDebugDotFlagsMemsetNodeParams         = 1<<4,  /**< Adds cudaMemsetParams to output */
+    cudaGraphDebugDotFlagsHostNodeParams           = 1<<5,  /**< Adds cudaHostNodeParams to output */
+    cudaGraphDebugDotFlagsEventNodeParams          = 1<<6,  /**< Adds cudaEvent_t handle from record and wait nodes to output */
+    cudaGraphDebugDotFlagsExtSemasSignalNodeParams = 1<<7,  /**< Adds cudaExternalSemaphoreSignalNodeParams values to output */
+    cudaGraphDebugDotFlagsExtSemasWaitNodeParams   = 1<<8,  /**< Adds cudaExternalSemaphoreWaitNodeParams to output */
+    cudaGraphDebugDotFlagsKernelNodeAttributes     = 1<<9,  /**< Adds cudaKernelNodeAttrID values to output */
+    cudaGraphDebugDotFlagsHandles                  = 1<<10  /**< Adds node handles and every kernel function handle to output */
+};
+
+/**
+ * Flags for instantiating a graph
+ */
+enum __device_builtin__ cudaGraphInstantiateFlags {
+    cudaGraphInstantiateFlagAutoFreeOnLaunch = 1 /**< Automatically free memory allocated in a graph before relaunching. */
+  , cudaGraphInstantiateFlagUpload           = 2 /**< Automatically upload the graph after instantiation. */
+  , cudaGraphInstantiateFlagDeviceLaunch     = 4 /**< Instantiate the graph to be launchable from the device. */
+  , cudaGraphInstantiateFlagUseNodePriority  = 8 /**< Run the graph using the per-node priority attributes rather than the
+                                                      priority of the stream it is launched into.
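+ *
+ * Rather than re-instantiating after small changes, an executable graph can
+ * often be patched with ::cudaGraphExecUpdate; a minimal sketch (exec and
+ * graph are assumptions):
+ *
+ * \code
+ * cudaGraphExecUpdateResultInfo info;
+ * if (cudaGraphExecUpdate(exec, graph, &info) != cudaSuccess) {
+ *     // e.g. info.result == cudaGraphExecUpdateErrorTopologyChanged:
+ *     // destroy exec and instantiate the modified graph again
+ * }
+ * \endcode
+ *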
*/ +}; + +typedef __device_builtin__ enum cudaLaunchMemSyncDomain { + cudaLaunchMemSyncDomainDefault = 0, + cudaLaunchMemSyncDomainRemote = 1 +} cudaLaunchMemSyncDomain; + +typedef __device_builtin__ struct cudaLaunchMemSyncDomainMap_st { + unsigned char default_; + unsigned char remote; +} cudaLaunchMemSyncDomainMap; + +/** + * Launch attributes enum; used as id field of ::cudaLaunchAttribute + */ +typedef __device_builtin__ enum cudaLaunchAttributeID { + cudaLaunchAttributeIgnore = 0 /**< Ignored entry, for convenient composition */ + , cudaLaunchAttributeAccessPolicyWindow = 1 /**< Valid for streams, graph nodes, launches. */ + , cudaLaunchAttributeCooperative = 2 /**< Valid for graph nodes, launches. */ + , cudaLaunchAttributeSynchronizationPolicy = 3 /**< Valid for streams. */ + , cudaLaunchAttributeClusterDimension = 4 /**< Valid for graph nodes, launches. */ + , cudaLaunchAttributeClusterSchedulingPolicyPreference = 5 /**< Valid for graph nodes, launches. */ + , cudaLaunchAttributeProgrammaticStreamSerialization = 6 /**< Valid for launches. Setting + programmaticStreamSerializationAllowed to non-0 + signals that the kernel will use programmatic + means to resolve its stream dependency, so that + the CUDA runtime should opportunistically allow + the grid's execution to overlap with the previous + kernel in the stream, if that kernel requests the + overlap. The dependent launches can choose to wait on + the dependency using the programmatic sync + (cudaGridDependencySynchronize() or equivalent PTX + instructions). */ + , cudaLaunchAttributeProgrammaticEvent = 7 /**< Valid for launches. Event recorded through this + launch attribute is guaranteed to only trigger after + all block in the associated kernel trigger the event. + A block can trigger the event programmatically in a + future CUDA release. A trigger can also be inserted at + the beginning of each block's execution if + triggerAtBlockStart is set to non-0. The dependent + launches can choose to wait on the dependency using + the programmatic sync (cudaGridDependencySynchronize() + or equivalent PTX instructions). Note that dependents + (including the CPU thread calling + cudaEventSynchronize()) are not guaranteed to observe + the release precisely when it is released. For + example, cudaEventSynchronize() may only observe the + event trigger long after the associated kernel has + completed. This recording type is primarily meant for + establishing programmatic dependency between device + tasks. The event supplied must not be an interprocess + or interop event. The event must disable timing (i.e. + created with ::cudaEventDisableTiming flag set). */ + , cudaLaunchAttributePriority = 8 /**< Valid for streams, graph nodes, launches. */ + , cudaLaunchAttributeMemSyncDomainMap = 9 + , cudaLaunchAttributeMemSyncDomain = 10 +} cudaLaunchAttributeID; + +/** + * Launch attributes union; used as value field of ::cudaLaunchAttribute + */ +typedef __device_builtin__ union cudaLaunchAttributeValue { + char pad[64]; /* Pad to 64 bytes */ + struct cudaAccessPolicyWindow accessPolicyWindow; /**< Attribute ::cudaAccessPolicyWindow. */ + int cooperative; /**< Nonzero indicates a cooperative kernel (see ::cudaLaunchCooperativeKernel). */ + enum cudaSynchronizationPolicy syncPolicy; /**< ::cudaSynchronizationPolicy for work queued up in this stream */ + struct { + unsigned int x; + unsigned int y; + unsigned int z; + } clusterDim; /**< Cluster dimensions for the kernel node. 
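+ *
+ * A minimal extensible-launch sketch setting this attribute (myKernel,
+ * devPtr, stream and the shapes are assumptions; cluster launch requires
+ * ::cudaDevAttrClusterLaunch support):
+ *
+ * \code
+ * cudaLaunchAttribute attr;
+ * attr.id = cudaLaunchAttributeClusterDimension;
+ * attr.val.clusterDim.x = 2;   // 2 blocks per cluster
+ * attr.val.clusterDim.y = 1;
+ * attr.val.clusterDim.z = 1;
+ * cudaLaunchConfig_t cfg = {};
+ * cfg.gridDim  = dim3(32, 1, 1);
+ * cfg.blockDim = dim3(256, 1, 1);
+ * cfg.stream   = stream;
+ * cfg.attrs    = &attr;
+ * cfg.numAttrs = 1;
+ * cudaLaunchKernelEx(&cfg, myKernel, devPtr);
+ * \endcode
+ *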
*/ + enum cudaClusterSchedulingPolicy clusterSchedulingPolicyPreference; /**< Cluster scheduling policy preference for the kernel node. */ + int programmaticStreamSerializationAllowed; + struct { + cudaEvent_t event; + int flags; + int triggerAtBlockStart; + } programmaticEvent; + int priority; /**< Execution priority of the kernel. */ + cudaLaunchMemSyncDomainMap memSyncDomainMap; + cudaLaunchMemSyncDomain memSyncDomain; +} cudaLaunchAttributeValue; + +/** + * Launch attribute + */ +typedef __device_builtin__ struct cudaLaunchAttribute_st { + cudaLaunchAttributeID id; + char pad[8 - sizeof(cudaLaunchAttributeID)]; + cudaLaunchAttributeValue val; +} cudaLaunchAttribute; + +/** + * CUDA extensible launch configuration + */ +typedef __device_builtin__ struct cudaLaunchConfig_st { + dim3 gridDim; /**< Grid dimensions */ + dim3 blockDim; /**< Block dimensions */ + size_t dynamicSmemBytes; /**< Dynamic shared-memory size per thread block in bytes */ + cudaStream_t stream; /**< Stream identifier */ + cudaLaunchAttribute *attrs; /**< nullable if numAttrs == 0 */ + unsigned int numAttrs; /**< Number of attributes populated in attrs */ +} cudaLaunchConfig_t; + +#define cudaStreamAttrID cudaLaunchAttributeID +#define cudaStreamAttributeAccessPolicyWindow cudaLaunchAttributeAccessPolicyWindow +#define cudaStreamAttributeSynchronizationPolicy cudaLaunchAttributeSynchronizationPolicy +#define cudaStreamAttributeMemSyncDomainMap cudaLaunchAttributeMemSyncDomainMap +#define cudaStreamAttributeMemSyncDomain cudaLaunchAttributeMemSyncDomain +#define cudaStreamAttributePriority cudaLaunchAttributePriority + +#define cudaStreamAttrValue cudaLaunchAttributeValue + +#define cudaKernelNodeAttrID cudaLaunchAttributeID +#define cudaKernelNodeAttributeAccessPolicyWindow cudaLaunchAttributeAccessPolicyWindow +#define cudaKernelNodeAttributeCooperative cudaLaunchAttributeCooperative +#define cudaKernelNodeAttributePriority cudaLaunchAttributePriority +#define cudaKernelNodeAttributeClusterDimension cudaLaunchAttributeClusterDimension +#define cudaKernelNodeAttributeClusterSchedulingPolicyPreference cudaLaunchAttributeClusterSchedulingPolicyPreference +#define cudaKernelNodeAttributeMemSyncDomainMap cudaLaunchAttributeMemSyncDomainMap +#define cudaKernelNodeAttributeMemSyncDomain cudaLaunchAttributeMemSyncDomain + +#define cudaKernelNodeAttrValue cudaLaunchAttributeValue + +/** @} */ +/** @} */ /* END CUDART_TYPES */ + +#if defined(__UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DRIVER_TYPES_H__) +#undef __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__ +#undef __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_DRIVER_TYPES_H__ +#endif + +#undef __CUDA_DEPRECATED + +#endif /* !__DRIVER_TYPES_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/library_types.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/library_types.h new file mode 100644 index 0000000000000000000000000000000000000000..4a7e42c6b89ba4b446d4cf3d52c8bacd74e73b0d --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/library_types.h @@ -0,0 +1,103 @@ +/* + * Copyright 1993-2015 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
+ * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(__LIBRARY_TYPES_H__) +#define __LIBRARY_TYPES_H__ + + + +typedef enum cudaDataType_t +{ + CUDA_R_16F = 2, /* real as a half */ + CUDA_C_16F = 6, /* complex as a pair of half numbers */ + CUDA_R_16BF = 14, /* real as a nv_bfloat16 */ + CUDA_C_16BF = 15, /* complex as a pair of nv_bfloat16 numbers */ + CUDA_R_32F = 0, /* real as a float */ + CUDA_C_32F = 4, /* complex as a pair of float numbers */ + CUDA_R_64F = 1, /* real as a double */ + CUDA_C_64F = 5, /* complex as a pair of double numbers */ + CUDA_R_4I = 16, /* real as a signed 4-bit int */ + CUDA_C_4I = 17, /* complex as a pair of signed 4-bit int numbers */ + CUDA_R_4U = 18, /* real as a unsigned 4-bit int */ + CUDA_C_4U = 19, /* complex as a pair of unsigned 4-bit int numbers */ + CUDA_R_8I = 3, /* real as a signed 8-bit int */ + CUDA_C_8I = 7, /* complex as a pair of signed 8-bit int numbers */ + CUDA_R_8U = 8, /* real as a unsigned 8-bit int */ + CUDA_C_8U = 9, /* complex as a pair of unsigned 8-bit int numbers */ + CUDA_R_16I = 20, /* real as a signed 16-bit int */ + CUDA_C_16I = 21, /* complex as a pair of signed 16-bit int numbers */ + CUDA_R_16U = 22, /* real as a unsigned 16-bit int */ + CUDA_C_16U = 23, /* complex as a pair of unsigned 16-bit int numbers */ + CUDA_R_32I = 10, /* real as a signed 32-bit int */ + CUDA_C_32I = 11, /* complex as a pair of signed 32-bit int numbers */ + CUDA_R_32U = 12, /* real as a unsigned 32-bit int */ + CUDA_C_32U = 13, /* complex as a pair of unsigned 32-bit int numbers */ + CUDA_R_64I = 24, /* real as a signed 64-bit int */ + CUDA_C_64I = 25, /* complex as a pair of signed 64-bit int numbers */ + CUDA_R_64U = 26, /* real as a unsigned 64-bit int */ + CUDA_C_64U = 27, /* complex as a pair of unsigned 64-bit int numbers */ + CUDA_R_8F_E4M3 = 28, /* real as a nv_fp8_e4m3 */ + CUDA_R_8F_E5M2 = 29, /* real as a nv_fp8_e5m2 */ +} cudaDataType; + + +typedef enum libraryPropertyType_t +{ + MAJOR_VERSION, + MINOR_VERSION, + PATCH_LEVEL +} libraryPropertyType; + + +#ifndef __cplusplus +typedef enum cudaDataType_t cudaDataType_t; +typedef enum libraryPropertyType_t libraryPropertyType_t; +#endif + +#endif /* !__LIBRARY_TYPES_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/math_functions.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/math_functions.h new file mode 100644 index 0000000000000000000000000000000000000000..bc806976784e494edc905d8b8bd9ad138054bbea --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/math_functions.h @@ -0,0 +1,65 @@ +/* + * Copyright 1993-2018 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. 
+ * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__) +#if defined(_MSC_VER) +#pragma message("math_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead.") +#else +#warning "math_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." +#endif +#define __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__ +#define __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_MATH_FUNCTIONS_H_WRAPPER__ +#endif + +#include "crt/math_functions.h" + +#if defined(__UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_MATH_FUNCTIONS_H_WRAPPER__) +#undef __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__ +#undef __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_MATH_FUNCTIONS_H_WRAPPER__ +#endif diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_20_atomic_functions.hpp b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_20_atomic_functions.hpp new file mode 100644 index 0000000000000000000000000000000000000000..ac4aa9bfc6b8d5d4d240e05a2fd557889f30c47f --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_20_atomic_functions.hpp @@ -0,0 +1,85 @@ +/* + * Copyright 1993-2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. 
+ * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(__SM_20_ATOMIC_FUNCTIONS_HPP__) +#define __SM_20_ATOMIC_FUNCTIONS_HPP__ + +#if defined(__CUDACC_RTC__) +#define __SM_20_ATOMIC_FUNCTIONS_DECL__ __device__ +#else /* __CUDACC_RTC__ */ +#define __SM_20_ATOMIC_FUNCTIONS_DECL__ static __inline__ __device__ +#endif /* __CUDACC_RTC__ */ + +#if defined(__cplusplus) && defined(__CUDACC__) + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +__SM_20_ATOMIC_FUNCTIONS_DECL__ float atomicAdd(float *address, float val) +{ + return __fAtomicAdd(address, val); +} + +#endif /* __cplusplus && __CUDACC__ */ + +#undef __SM_20_ATOMIC_FUNCTIONS_DECL__ + +#endif /* !__SM_20_ATOMIC_FUNCTIONS_HPP__ */ + diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_20_intrinsics.hpp b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_20_intrinsics.hpp new file mode 100644 index 0000000000000000000000000000000000000000..30c1ab99e0d66ebbceb8fe88b1122443cbf5f998 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_20_intrinsics.hpp @@ -0,0 +1,221 @@ +/* + * Copyright 1993-2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. 
Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__SM_20_INTRINSICS_HPP__) +#define __SM_20_INTRINSICS_HPP__ + +#if defined(__CUDACC_RTC__) +#define __SM_20_INTRINSICS_DECL__ __device__ +#else /* __CUDACC_RTC__ */ +#define __SM_20_INTRINSICS_DECL__ static __inline__ __device__ +#endif /* __CUDACC_RTC__ */ + +#if defined(__cplusplus) && defined(__CUDACC__) + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +__SM_20_INTRINSICS_DECL__ unsigned int ballot(bool pred) +{ + return __ballot((int)pred); +} + +__SM_20_INTRINSICS_DECL__ int syncthreads_count(bool pred) +{ + return __syncthreads_count((int)pred); +} + +__SM_20_INTRINSICS_DECL__ bool syncthreads_and(bool pred) +{ + return (bool)__syncthreads_and((int)pred); +} + +__SM_20_INTRINSICS_DECL__ bool syncthreads_or(bool pred) +{ + return (bool)__syncthreads_or((int)pred); +} + + +extern "C" { + __device__ unsigned __nv_isGlobal_impl(const void *); + __device__ unsigned __nv_isShared_impl(const void *); + __device__ unsigned __nv_isConstant_impl(const void *); + __device__ unsigned __nv_isLocal_impl(const void *); + __device__ unsigned __nv_isGridConstant_impl(const void *); +} + +__SM_20_INTRINSICS_DECL__ unsigned int __isGlobal(const void *ptr) +{ + return __nv_isGlobal_impl(ptr); +} + +__SM_20_INTRINSICS_DECL__ unsigned int __isShared(const void *ptr) +{ + return __nv_isShared_impl(ptr); +} + +__SM_20_INTRINSICS_DECL__ unsigned int __isConstant(const void *ptr) +{ + return __nv_isConstant_impl(ptr); +} + +__SM_20_INTRINSICS_DECL__ unsigned int __isLocal(const void *ptr) +{ + return __nv_isLocal_impl(ptr); +} + +#if !defined(__CUDA_ARCH__) || (__CUDA_ARCH__ >= 700) +__SM_20_INTRINSICS_DECL__ unsigned int __isGridConstant(const void *ptr) +{ + return __nv_isGridConstant_impl(ptr); +} +#endif /* !defined(__CUDA_ARCH__) || (__CUDA_ARCH__ >= 700) */ + +extern "C" { + __device__ size_t __nv_cvta_generic_to_global_impl(const void *); + __device__ size_t __nv_cvta_generic_to_shared_impl(const void *); + __device__ size_t __nv_cvta_generic_to_constant_impl(const void *); + __device__ size_t __nv_cvta_generic_to_local_impl(const void *); + __device__ void * __nv_cvta_global_to_generic_impl(size_t); + __device__ void * __nv_cvta_shared_to_generic_impl(size_t); + __device__ void * __nv_cvta_constant_to_generic_impl(size_t); + __device__ void * __nv_cvta_local_to_generic_impl(size_t); +} + +__SM_20_INTRINSICS_DECL__ size_t __cvta_generic_to_global(const void *p) +{ + return __nv_cvta_generic_to_global_impl(p); +} + +__SM_20_INTRINSICS_DECL__ size_t __cvta_generic_to_shared(const void *p) +{ + return __nv_cvta_generic_to_shared_impl(p); +} + +__SM_20_INTRINSICS_DECL__ size_t __cvta_generic_to_constant(const void *p) +{ + return __nv_cvta_generic_to_constant_impl(p); +} + +__SM_20_INTRINSICS_DECL__ size_t 
__cvta_generic_to_local(const void *p) +{ + return __nv_cvta_generic_to_local_impl(p); +} + +__SM_20_INTRINSICS_DECL__ void * __cvta_global_to_generic(size_t rawbits) +{ + return __nv_cvta_global_to_generic_impl(rawbits); +} + +__SM_20_INTRINSICS_DECL__ void * __cvta_shared_to_generic(size_t rawbits) +{ + return __nv_cvta_shared_to_generic_impl(rawbits); +} + +__SM_20_INTRINSICS_DECL__ void * __cvta_constant_to_generic(size_t rawbits) +{ + return __nv_cvta_constant_to_generic_impl(rawbits); +} + +__SM_20_INTRINSICS_DECL__ void * __cvta_local_to_generic(size_t rawbits) +{ + return __nv_cvta_local_to_generic_impl(rawbits); +} + +#if !defined(__CUDA_ARCH__) || (__CUDA_ARCH__ >= 700) +#if (defined(_MSC_VER) && defined(_WIN64)) || defined(__LP64__) || defined(__CUDACC_RTC__) +#define __CVTA_PTR_64 1 +#endif + +__SM_20_INTRINSICS_DECL__ size_t __cvta_generic_to_grid_constant(const void *ptr) +{ +#if __CVTA_PTR_64 + unsigned long long ret; + asm("cvta.to.param.u64 %0, %1;" : "=l"(ret) : "l"(ptr)); +#else /* !__CVTA_PTR_64 */ + unsigned ret; + asm("cvta.to.param.u32 %0, %1;" : "=r"(ret) : "r"(ptr)); +#endif /* __CVTA_PTR_64 */ + return (size_t)ret; + +} + +__SM_20_INTRINSICS_DECL__ void * __cvta_grid_constant_to_generic(size_t rawbits) +{ + void *ret; +#if __CVTA_PTR_64 + unsigned long long in = rawbits; + asm("cvta.param.u64 %0, %1;" : "=l"(ret) : "l"(in)); +#else /* !__CVTA_PTR_64 */ + unsigned in = rawbits; + asm("cvta.param.u32 %0, %1;" : "=r"(ret) : "r"(in)); +#endif /* __CVTA_PTR_64 */ + return ret; +} +#undef __CVTA_PTR_64 +#endif /* !defined(__CUDA_ARCH__) || (__CUDA_ARCH__ >= 700) */ + + +#endif /* __cplusplus && __CUDACC__ */ + +#undef __SM_20_INTRINSICS_DECL__ + +#endif /* !__SM_20_INTRINSICS_HPP__ */ + diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_30_intrinsics.hpp b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_30_intrinsics.hpp new file mode 100644 index 0000000000000000000000000000000000000000..a5bcac5ee68c0cf547e4de7c08badf37106639dc --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_30_intrinsics.hpp @@ -0,0 +1,604 @@ +/* + * Copyright 1993-2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. 
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__SM_30_INTRINSICS_HPP__) +#define __SM_30_INTRINSICS_HPP__ + +#if defined(__CUDACC_RTC__) +#define __SM_30_INTRINSICS_DECL__ __device__ +#else /* !__CUDACC_RTC__ */ +#define __SM_30_INTRINSICS_DECL__ static __device__ __inline__ +#endif /* __CUDACC_RTC__ */ + +#if defined(__cplusplus) && defined(__CUDACC__) + +#if defined(_NVHPC_CUDA) || !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 300 + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +// In here are intrinsics which are built in to the compiler. These may be +// referenced by intrinsic implementations from this file. 
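+// ---------------------------------------------------------------------------
+// Editor's note: illustrative usage sketch, not part of the original header,
+// guarded out of compilation. The warp-level vote and shuffle primitives
+// implemented below exchange registers between the lanes of a warp without
+// going through shared memory. A common pattern is a warp-wide reduction
+// built on __shfl_down_sync(); the hypothetical kernel here assumes one full
+// 32-lane warp per block:
+#if 0
+__global__ void warp_sum(const float *in, float *out)
+{
+    float v = in[threadIdx.x];
+    // Halve the stride each step; 0xffffffff is the full-warp mask.
+    for (int offset = 16; offset > 0; offset >>= 1)
+        v += __shfl_down_sync(0xffffffffu, v, offset);
+    if (threadIdx.x == 0)
+        *out = v;   // lane 0 now holds the sum of all 32 lanes
+}
+#endif
+// ---------------------------------------------------------------------------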
+extern "C" +{ +} + +/******************************************************************************* +* * +* Below are implementations of SM-3.0 intrinsics which are included as * +* source (instead of being built in to the compiler) * +* * +*******************************************************************************/ + +#if !defined warpSize && !defined __local_warpSize +#define warpSize 32 +#define __local_warpSize +#endif + +__SM_30_INTRINSICS_DECL__ +unsigned __fns(unsigned mask, unsigned base, int offset) { + extern __device__ __device_builtin__ unsigned int __nvvm_fns(unsigned int mask, unsigned int base, int offset); + return __nvvm_fns(mask, base, offset); +} + +__SM_30_INTRINSICS_DECL__ +void __barrier_sync(unsigned id) { + extern __device__ __device_builtin__ void __nvvm_barrier_sync(unsigned id); + return __nvvm_barrier_sync(id); +} + +__SM_30_INTRINSICS_DECL__ +void __barrier_sync_count(unsigned id, unsigned cnt) { + extern __device__ __device_builtin__ void __nvvm_barrier_sync_cnt(unsigned id, unsigned cnt); + return __nvvm_barrier_sync_cnt(id, cnt); +} + +__SM_30_INTRINSICS_DECL__ +void __syncwarp(unsigned mask) { + extern __device__ __device_builtin__ void __nvvm_bar_warp_sync(unsigned mask); + return __nvvm_bar_warp_sync(mask); +} + +__SM_30_INTRINSICS_DECL__ +int __all_sync(unsigned mask, int pred) { + extern __device__ __device_builtin__ int __nvvm_vote_all_sync(unsigned int mask, int pred); + return __nvvm_vote_all_sync(mask, pred); +} + +__SM_30_INTRINSICS_DECL__ +int __any_sync(unsigned mask, int pred) { + extern __device__ __device_builtin__ int __nvvm_vote_any_sync(unsigned int mask, int pred); + return __nvvm_vote_any_sync(mask, pred); +} + +__SM_30_INTRINSICS_DECL__ +int __uni_sync(unsigned mask, int pred) { + extern __device__ __device_builtin__ int __nvvm_vote_uni_sync(unsigned int mask, int pred); + return __nvvm_vote_uni_sync(mask, pred); +} + +__SM_30_INTRINSICS_DECL__ +unsigned __ballot_sync(unsigned mask, int pred) { + extern __device__ __device_builtin__ unsigned int __nvvm_vote_ballot_sync(unsigned int mask, int pred); + return __nvvm_vote_ballot_sync(mask, pred); +} + +__SM_30_INTRINSICS_DECL__ +unsigned __activemask() { + unsigned ret; + asm volatile ("activemask.b32 %0;" : "=r"(ret)); + return ret; +} + +// These are removed starting with compute_70 and onwards +#if defined(_NVHPC_CUDA) || !defined(__CUDA_ARCH__) || __CUDA_ARCH__ < 700 + +__SM_30_INTRINSICS_DECL__ int __shfl(int var, int srcLane, int width) { + int ret; + int c = ((warpSize-width) << 8) | 0x1f; + asm volatile ("shfl.idx.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(var), "r"(srcLane), "r"(c)); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl(unsigned int var, int srcLane, int width) { + return (unsigned int) __shfl((int)var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ int __shfl_up(int var, unsigned int delta, int width) { + int ret; + int c = (warpSize-width) << 8; + asm volatile ("shfl.up.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(var), "r"(delta), "r"(c)); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl_up(unsigned int var, unsigned int delta, int width) { + return (unsigned int) __shfl_up((int)var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ int __shfl_down(int var, unsigned int delta, int width) { + int ret; + int c = ((warpSize-width) << 8) | 0x1f; + asm volatile ("shfl.down.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(var), "r"(delta), "r"(c)); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl_down(unsigned int var, unsigned int 
delta, int width) { + return (unsigned int) __shfl_down((int)var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ int __shfl_xor(int var, int laneMask, int width) { + int ret; + int c = ((warpSize-width) << 8) | 0x1f; + asm volatile ("shfl.bfly.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(var), "r"(laneMask), "r"(c)); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl_xor(unsigned int var, int laneMask, int width) { + return (unsigned int) __shfl_xor((int)var, laneMask, width); +} + +__SM_30_INTRINSICS_DECL__ float __shfl(float var, int srcLane, int width) { + float ret; + int c; + c = ((warpSize-width) << 8) | 0x1f; + asm volatile ("shfl.idx.b32 %0, %1, %2, %3;" : "=f"(ret) : "f"(var), "r"(srcLane), "r"(c)); + return ret; +} + +__SM_30_INTRINSICS_DECL__ float __shfl_up(float var, unsigned int delta, int width) { + float ret; + int c; + c = (warpSize-width) << 8; + asm volatile ("shfl.up.b32 %0, %1, %2, %3;" : "=f"(ret) : "f"(var), "r"(delta), "r"(c)); + return ret; +} + +__SM_30_INTRINSICS_DECL__ float __shfl_down(float var, unsigned int delta, int width) { + float ret; + int c; + c = ((warpSize-width) << 8) | 0x1f; + asm volatile ("shfl.down.b32 %0, %1, %2, %3;" : "=f"(ret) : "f"(var), "r"(delta), "r"(c)); + return ret; +} + +__SM_30_INTRINSICS_DECL__ float __shfl_xor(float var, int laneMask, int width) { + float ret; + int c; + c = ((warpSize-width) << 8) | 0x1f; + asm volatile ("shfl.bfly.b32 %0, %1, %2, %3;" : "=f"(ret) : "f"(var), "r"(laneMask), "r"(c)); + return ret; +} + +// 64-bits SHFL + +__SM_30_INTRINSICS_DECL__ long long __shfl(long long var, int srcLane, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl(hi, srcLane, width); + lo = __shfl(lo, srcLane, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl(unsigned long long var, int srcLane, int width) { + return (unsigned long long) __shfl((long long) var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ long long __shfl_up(long long var, unsigned int delta, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl_up(hi, delta, width); + lo = __shfl_up(lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl_up(unsigned long long var, unsigned int delta, int width) { + return (unsigned long long) __shfl_up((long long) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long long __shfl_down(long long var, unsigned int delta, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl_down(hi, delta, width); + lo = __shfl_down(lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl_down(unsigned long long var, unsigned int delta, int width) { + return (unsigned long long) __shfl_down((long long) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long long __shfl_xor(long long var, int laneMask, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl_xor(hi, laneMask, width); + lo = __shfl_xor(lo, laneMask, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl_xor(unsigned long long var, int 
laneMask, int width) { + return (unsigned long long) __shfl_xor((long long) var, laneMask, width); +} + +__SM_30_INTRINSICS_DECL__ double __shfl(double var, int srcLane, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl(hi, srcLane, width); + lo = __shfl(lo, srcLane, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ double __shfl_up(double var, unsigned int delta, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl_up(hi, delta, width); + lo = __shfl_up(lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ double __shfl_down(double var, unsigned int delta, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl_down(hi, delta, width); + lo = __shfl_down(lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ double __shfl_xor(double var, int laneMask, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl_xor(hi, laneMask, width); + lo = __shfl_xor(lo, laneMask, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ long __shfl(long var, int srcLane, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl((long long) var, srcLane, width) : + __shfl((int) var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl(unsigned long var, int srcLane, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl((unsigned long long) var, srcLane, width) : + __shfl((unsigned int) var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ long __shfl_up(long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_up((long long) var, delta, width) : + __shfl_up((int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl_up(unsigned long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_up((unsigned long long) var, delta, width) : + __shfl_up((unsigned int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long __shfl_down(long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_down((long long) var, delta, width) : + __shfl_down((int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl_down(unsigned long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_down((unsigned long long) var, delta, width) : + __shfl_down((unsigned int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long __shfl_xor(long var, int laneMask, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_xor((long long) var, laneMask, width) : + __shfl_xor((int) var, laneMask, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl_xor(unsigned long var, int laneMask, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_xor((unsigned long long) var, laneMask, width) : + __shfl_xor((unsigned int) var, laneMask, width); +} + +#endif /* defined(_NVHPC_CUDA) || !defined(__CUDA_ARCH__) || __CUDA_ARCH__ < 700 */ + +// Warp register exchange (shuffle) intrinsics. 
+// Notes: +// a) Warp size is hardcoded to 32 here, because the compiler does not know +// the "warpSize" constant at this time +// b) we cannot map the float __shfl to the int __shfl because it'll mess with +// the register number (especially if you're doing two shfls to move a double). +__SM_30_INTRINSICS_DECL__ int __shfl_sync(unsigned mask, int var, int srcLane, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_idx_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c = ((warpSize-width) << 8) | 0x1f; + ret = __nvvm_shfl_idx_sync(mask, var, srcLane, c); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl_sync(unsigned mask, unsigned int var, int srcLane, int width) { + return (unsigned int) __shfl_sync(mask, (int)var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ int __shfl_up_sync(unsigned mask, int var, unsigned int delta, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_up_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c = (warpSize-width) << 8; + ret = __nvvm_shfl_up_sync(mask, var, delta, c); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl_up_sync(unsigned mask, unsigned int var, unsigned int delta, int width) { + return (unsigned int) __shfl_up_sync(mask, (int)var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ int __shfl_down_sync(unsigned mask, int var, unsigned int delta, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_down_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c = ((warpSize-width) << 8) | 0x1f; + ret = __nvvm_shfl_down_sync(mask, var, delta, c); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl_down_sync(unsigned mask, unsigned int var, unsigned int delta, int width) { + return (unsigned int) __shfl_down_sync(mask, (int)var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ int __shfl_xor_sync(unsigned mask, int var, int laneMask, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_bfly_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c = ((warpSize-width) << 8) | 0x1f; + ret = __nvvm_shfl_bfly_sync(mask, var, laneMask, c); + return ret; +} + +__SM_30_INTRINSICS_DECL__ unsigned int __shfl_xor_sync(unsigned mask, unsigned int var, int laneMask, int width) { + return (unsigned int) __shfl_xor_sync(mask, (int)var, laneMask, width); +} + +__SM_30_INTRINSICS_DECL__ float __shfl_sync(unsigned mask, float var, int srcLane, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_idx_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c; + c = ((warpSize-width) << 8) | 0x1f; + ret = __nvvm_shfl_idx_sync(mask, __float_as_int(var), srcLane, c); + return __int_as_float(ret); +} + +__SM_30_INTRINSICS_DECL__ float __shfl_up_sync(unsigned mask, float var, unsigned int delta, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_up_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c; + c = (warpSize-width) << 8; + ret = __nvvm_shfl_up_sync(mask, __float_as_int(var), delta, c); + return __int_as_float(ret); +} + +__SM_30_INTRINSICS_DECL__ float __shfl_down_sync(unsigned mask, float var, unsigned int delta, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_down_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c; + c = ((warpSize-width) << 8) | 0x1f; + ret = __nvvm_shfl_down_sync(mask, __float_as_int(var), delta, c); + 
return __int_as_float(ret); +} + +__SM_30_INTRINSICS_DECL__ float __shfl_xor_sync(unsigned mask, float var, int laneMask, int width) { + extern __device__ __device_builtin__ unsigned __nvvm_shfl_bfly_sync(unsigned mask, unsigned a, unsigned b, unsigned c); + int ret; + int c; + c = ((warpSize-width) << 8) | 0x1f; + ret = __nvvm_shfl_bfly_sync(mask, __float_as_int(var), laneMask, c); + return __int_as_float(ret); +} + +// 64-bits SHFL +__SM_30_INTRINSICS_DECL__ long long __shfl_sync(unsigned mask, long long var, int srcLane, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl_sync(mask, hi, srcLane, width); + lo = __shfl_sync(mask, lo, srcLane, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl_sync(unsigned mask, unsigned long long var, int srcLane, int width) { + return (unsigned long long) __shfl_sync(mask, (long long) var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ long long __shfl_up_sync(unsigned mask, long long var, unsigned int delta, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl_up_sync(mask, hi, delta, width); + lo = __shfl_up_sync(mask, lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl_up_sync(unsigned mask, unsigned long long var, unsigned int delta, int width) { + return (unsigned long long) __shfl_up_sync(mask, (long long) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long long __shfl_down_sync(unsigned mask, long long var, unsigned int delta, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl_down_sync(mask, hi, delta, width); + lo = __shfl_down_sync(mask, lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl_down_sync(unsigned mask, unsigned long long var, unsigned int delta, int width) { + return (unsigned long long) __shfl_down_sync(mask, (long long) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long long __shfl_xor_sync(unsigned mask, long long var, int laneMask, int width) { + int lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "l"(var)); + hi = __shfl_xor_sync(mask, hi, laneMask, width); + lo = __shfl_xor_sync(mask, lo, laneMask, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=l"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ unsigned long long __shfl_xor_sync(unsigned mask, unsigned long long var, int laneMask, int width) { + return (unsigned long long) __shfl_xor_sync(mask, (long long) var, laneMask, width); +} + +__SM_30_INTRINSICS_DECL__ double __shfl_sync(unsigned mask, double var, int srcLane, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl_sync(mask, hi, srcLane, width); + lo = __shfl_sync(mask, lo, srcLane, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ double __shfl_up_sync(unsigned mask, double var, unsigned int delta, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl_up_sync(mask, hi, delta, width); + lo = __shfl_up_sync(mask, lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" 
: "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ double __shfl_down_sync(unsigned mask, double var, unsigned int delta, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl_down_sync(mask, hi, delta, width); + lo = __shfl_down_sync(mask, lo, delta, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +__SM_30_INTRINSICS_DECL__ double __shfl_xor_sync(unsigned mask, double var, int laneMask, int width) { + unsigned lo, hi; + asm volatile("mov.b64 {%0,%1}, %2;" : "=r"(lo), "=r"(hi) : "d"(var)); + hi = __shfl_xor_sync(mask, hi, laneMask, width); + lo = __shfl_xor_sync(mask, lo, laneMask, width); + asm volatile("mov.b64 %0, {%1,%2};" : "=d"(var) : "r"(lo), "r"(hi)); + return var; +} + +// long needs some help to choose between 32-bits and 64-bits + +__SM_30_INTRINSICS_DECL__ long __shfl_sync(unsigned mask, long var, int srcLane, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_sync(mask, (long long) var, srcLane, width) : + __shfl_sync(mask, (int) var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl_sync(unsigned mask, unsigned long var, int srcLane, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_sync(mask, (unsigned long long) var, srcLane, width) : + __shfl_sync(mask, (unsigned int) var, srcLane, width); +} + +__SM_30_INTRINSICS_DECL__ long __shfl_up_sync(unsigned mask, long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_up_sync(mask, (long long) var, delta, width) : + __shfl_up_sync(mask, (int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl_up_sync(unsigned mask, unsigned long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_up_sync(mask, (unsigned long long) var, delta, width) : + __shfl_up_sync(mask, (unsigned int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long __shfl_down_sync(unsigned mask, long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_down_sync(mask, (long long) var, delta, width) : + __shfl_down_sync(mask, (int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl_down_sync(unsigned mask, unsigned long var, unsigned int delta, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_down_sync(mask, (unsigned long long) var, delta, width) : + __shfl_down_sync(mask, (unsigned int) var, delta, width); +} + +__SM_30_INTRINSICS_DECL__ long __shfl_xor_sync(unsigned mask, long var, int laneMask, int width) { + return (sizeof(long) == sizeof(long long)) ? + __shfl_xor_sync(mask, (long long) var, laneMask, width) : + __shfl_xor_sync(mask, (int) var, laneMask, width); +} + +__SM_30_INTRINSICS_DECL__ unsigned long __shfl_xor_sync(unsigned mask, unsigned long var, int laneMask, int width) { + return (sizeof(long) == sizeof(long long)) ? 
+ __shfl_xor_sync(mask, (unsigned long long) var, laneMask, width) : + __shfl_xor_sync(mask, (unsigned int) var, laneMask, width); +} + +#if defined(__local_warpSize) +#undef warpSize +#undef __local_warpSize +#endif + +#endif /* _NVHPC_CUDA || !__CUDA_ARCH__ || __CUDA_ARCH__ >= 300 */ + +#endif /* __cplusplus && __CUDACC__ */ + +#undef __SM_30_INTRINSICS_DECL__ + +#endif /* !__SM_30_INTRINSICS_HPP__ */ + diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_32_intrinsics.hpp b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_32_intrinsics.hpp new file mode 100644 index 0000000000000000000000000000000000000000..d50f9cea5c4d89bc555855a8ca73d617bcfa461a --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_32_intrinsics.hpp @@ -0,0 +1,588 @@ +/* + * Copyright 1993-2020 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(__SM_32_INTRINSICS_HPP__) +#define __SM_32_INTRINSICS_HPP__ + +#if defined(__CUDACC_RTC__) +#define __SM_32_INTRINSICS_DECL__ __device__ +#else /* !__CUDACC_RTC__ */ +#define __SM_32_INTRINSICS_DECL__ static __device__ __inline__ +#endif /* __CUDACC_RTC__ */ + +#if defined(__cplusplus) && defined(__CUDACC__) + +#if defined(_NVHPC_CUDA) || !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 320 + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +// In here are intrinsics which are built in to the compiler. These may be +// referenced by intrinsic implementations from this file. +extern "C" +{ + // There are no intrinsics built in to the compiler for SM-3.5, + // all intrinsics are now implemented as inline PTX below. +} + +/******************************************************************************* +* * +* Below are implementations of SM-3.5 intrinsics which are included as * +* source (instead of being built in to the compiler) * +* * +*******************************************************************************/ + +// LDG is a "load from global via texture path" command which can exhibit higher +// bandwidth on GK110 than a regular LD. +// Define a different pointer storage size for 64 and 32 bit +#if (defined(_MSC_VER) && defined(_WIN64)) || defined(__LP64__) || defined(__CUDACC_RTC__) +#define __LDG_PTR "l" +#else +#define __LDG_PTR "r" +#endif + +/****************************************************************************** + * __ldg * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
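+// ---------------------------------------------------------------------------
+// Editor's note: illustrative usage sketch, not part of the original header,
+// guarded out of compilation. __ldg() issues a read-only, non-coherent load
+// over the texture path (ld.global.nc). It is only safe when no thread
+// writes the location for the duration of the kernel. Hypothetical kernel:
+#if 0
+__global__ void scale(const float *__restrict__ in, float *out, float k, int n)
+{
+    int i = blockIdx.x * blockDim.x + threadIdx.x;
+    if (i < n)
+        out[i] = k * __ldg(&in[i]);   // compiles to ld.global.nc.f32
+}
+#endif
+// ---------------------------------------------------------------------------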
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ long __ldg(const long *ptr) { unsigned long ret; asm volatile ("ld.global.nc.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldg(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.nc.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ long __ldg(const long *ptr) { unsigned long ret; asm volatile ("ld.global.nc.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldg(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.nc.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +#endif + + +__SM_32_INTRINSICS_DECL__ char __ldg(const char *ptr) { unsigned int ret; asm volatile ("ld.global.nc.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (char)ret; } +__SM_32_INTRINSICS_DECL__ signed char __ldg(const signed char *ptr) { unsigned int ret; asm volatile ("ld.global.nc.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (signed char)ret; } +__SM_32_INTRINSICS_DECL__ short __ldg(const short *ptr) { unsigned short ret; asm volatile ("ld.global.nc.s16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr)); return (short)ret; } +__SM_32_INTRINSICS_DECL__ int __ldg(const int *ptr) { unsigned int ret; asm volatile ("ld.global.nc.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (int)ret; } +__SM_32_INTRINSICS_DECL__ long long __ldg(const long long *ptr) { unsigned long long ret; asm volatile ("ld.global.nc.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long long)ret; } +__SM_32_INTRINSICS_DECL__ char2 __ldg(const char2 *ptr) { char2 ret; int2 tmp; asm volatile ("ld.global.nc.v2.s8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ char4 __ldg(const char4 *ptr) { char4 ret; int4 tmp; asm volatile ("ld.global.nc.v4.s8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; ret.z = (char)tmp.z; ret.w = (char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ short2 __ldg(const short2 *ptr) { short2 ret; asm volatile ("ld.global.nc.v2.s16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ short4 __ldg(const short4 *ptr) { short4 ret; asm volatile ("ld.global.nc.v4.s16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int2 __ldg(const int2 *ptr) { int2 ret; asm volatile ("ld.global.nc.v2.s32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int4 __ldg(const int4 *ptr) { int4 ret; asm volatile ("ld.global.nc.v4.s32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ longlong2 __ldg(const longlong2 *ptr) { longlong2 ret; asm volatile ("ld.global.nc.v2.s64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ unsigned char __ldg(const unsigned char *ptr) { unsigned int ret; asm volatile ("ld.global.nc.u8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (unsigned char)ret; } +__SM_32_INTRINSICS_DECL__ unsigned short __ldg(const unsigned short *ptr) { unsigned short ret; asm volatile ("ld.global.nc.u16 %0, [%1];" : "=h"(ret) : __LDG_PTR 
(ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned int __ldg(const unsigned int *ptr) { unsigned int ret; asm volatile ("ld.global.nc.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned long long __ldg(const unsigned long long *ptr) { unsigned long long ret; asm volatile ("ld.global.nc.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uchar2 __ldg(const uchar2 *ptr) { uchar2 ret; uint2 tmp; asm volatile ("ld.global.nc.v2.u8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ uchar4 __ldg(const uchar4 *ptr) { uchar4 ret; uint4 tmp; asm volatile ("ld.global.nc.v4.u8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; ret.z = (unsigned char)tmp.z; ret.w = (unsigned char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ ushort2 __ldg(const ushort2 *ptr) { ushort2 ret; asm volatile ("ld.global.nc.v2.u16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ushort4 __ldg(const ushort4 *ptr) { ushort4 ret; asm volatile ("ld.global.nc.v4.u16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint2 __ldg(const uint2 *ptr) { uint2 ret; asm volatile ("ld.global.nc.v2.u32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint4 __ldg(const uint4 *ptr) { uint4 ret; asm volatile ("ld.global.nc.v4.u32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ulonglong2 __ldg(const ulonglong2 *ptr) { ulonglong2 ret; asm volatile ("ld.global.nc.v2.u64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ float __ldg(const float *ptr) { float ret; asm volatile ("ld.global.nc.f32 %0, [%1];" : "=f"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double __ldg(const double *ptr) { double ret; asm volatile ("ld.global.nc.f64 %0, [%1];" : "=d"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float2 __ldg(const float2 *ptr) { float2 ret; asm volatile ("ld.global.nc.v2.f32 {%0,%1}, [%2];" : "=f"(ret.x), "=f"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float4 __ldg(const float4 *ptr) { float4 ret; asm volatile ("ld.global.nc.v4.f32 {%0,%1,%2,%3}, [%4];" : "=f"(ret.x), "=f"(ret.y), "=f"(ret.z), "=f"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double2 __ldg(const double2 *ptr) { double2 ret; asm volatile ("ld.global.nc.v2.f64 {%0,%1}, [%2];" : "=d"(ret.x), "=d"(ret.y) : __LDG_PTR (ptr)); return ret; } + + +/****************************************************************************** + * __ldcg * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
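+// ---------------------------------------------------------------------------
+// Editor's note: illustrative sketch, not part of the original header.
+// __ldcg() maps to ld.global.cg, which caches only at the global (L2) level
+// and bypasses L1. That suits data each SM reads once, e.g. a streaming
+// copy. Hypothetical kernel, guarded out of compilation:
+#if 0
+__global__ void stream_copy(const double *in, double *out, int n)
+{
+    int i = blockIdx.x * blockDim.x + threadIdx.x;
+    if (i < n)
+        out[i] = __ldcg(&in[i]);      // ld.global.cg.f64
+}
+#endif
+// ---------------------------------------------------------------------------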
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ long __ldcg(const long *ptr) { unsigned long ret; asm volatile ("ld.global.cg.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldcg(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.cg.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ long __ldcg(const long *ptr) { unsigned long ret; asm volatile ("ld.global.cg.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldcg(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.cg.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +#endif + + +__SM_32_INTRINSICS_DECL__ char __ldcg(const char *ptr) { unsigned int ret; asm volatile ("ld.global.cg.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (char)ret; } +__SM_32_INTRINSICS_DECL__ signed char __ldcg(const signed char *ptr) { unsigned int ret; asm volatile ("ld.global.cg.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (signed char)ret; } +__SM_32_INTRINSICS_DECL__ short __ldcg(const short *ptr) { unsigned short ret; asm volatile ("ld.global.cg.s16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr)); return (short)ret; } +__SM_32_INTRINSICS_DECL__ int __ldcg(const int *ptr) { unsigned int ret; asm volatile ("ld.global.cg.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (int)ret; } +__SM_32_INTRINSICS_DECL__ long long __ldcg(const long long *ptr) { unsigned long long ret; asm volatile ("ld.global.cg.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long long)ret; } +__SM_32_INTRINSICS_DECL__ char2 __ldcg(const char2 *ptr) { char2 ret; int2 tmp; asm volatile ("ld.global.cg.v2.s8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ char4 __ldcg(const char4 *ptr) { char4 ret; int4 tmp; asm volatile ("ld.global.cg.v4.s8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; ret.z = (char)tmp.z; ret.w = (char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ short2 __ldcg(const short2 *ptr) { short2 ret; asm volatile ("ld.global.cg.v2.s16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ short4 __ldcg(const short4 *ptr) { short4 ret; asm volatile ("ld.global.cg.v4.s16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int2 __ldcg(const int2 *ptr) { int2 ret; asm volatile ("ld.global.cg.v2.s32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int4 __ldcg(const int4 *ptr) { int4 ret; asm volatile ("ld.global.cg.v4.s32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ longlong2 __ldcg(const longlong2 *ptr) { longlong2 ret; asm volatile ("ld.global.cg.v2.s64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ unsigned char __ldcg(const unsigned char *ptr) { unsigned int ret; asm volatile ("ld.global.cg.u8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (unsigned char)ret; } +__SM_32_INTRINSICS_DECL__ unsigned short __ldcg(const unsigned short *ptr) { unsigned short ret; asm volatile ("ld.global.cg.u16 %0, [%1];" : 
"=h"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned int __ldcg(const unsigned int *ptr) { unsigned int ret; asm volatile ("ld.global.cg.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned long long __ldcg(const unsigned long long *ptr) { unsigned long long ret; asm volatile ("ld.global.cg.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uchar2 __ldcg(const uchar2 *ptr) { uchar2 ret; uint2 tmp; asm volatile ("ld.global.cg.v2.u8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ uchar4 __ldcg(const uchar4 *ptr) { uchar4 ret; uint4 tmp; asm volatile ("ld.global.cg.v4.u8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; ret.z = (unsigned char)tmp.z; ret.w = (unsigned char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ ushort2 __ldcg(const ushort2 *ptr) { ushort2 ret; asm volatile ("ld.global.cg.v2.u16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ushort4 __ldcg(const ushort4 *ptr) { ushort4 ret; asm volatile ("ld.global.cg.v4.u16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint2 __ldcg(const uint2 *ptr) { uint2 ret; asm volatile ("ld.global.cg.v2.u32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint4 __ldcg(const uint4 *ptr) { uint4 ret; asm volatile ("ld.global.cg.v4.u32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ulonglong2 __ldcg(const ulonglong2 *ptr) { ulonglong2 ret; asm volatile ("ld.global.cg.v2.u64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ float __ldcg(const float *ptr) { float ret; asm volatile ("ld.global.cg.f32 %0, [%1];" : "=f"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double __ldcg(const double *ptr) { double ret; asm volatile ("ld.global.cg.f64 %0, [%1];" : "=d"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float2 __ldcg(const float2 *ptr) { float2 ret; asm volatile ("ld.global.cg.v2.f32 {%0,%1}, [%2];" : "=f"(ret.x), "=f"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float4 __ldcg(const float4 *ptr) { float4 ret; asm volatile ("ld.global.cg.v4.f32 {%0,%1,%2,%3}, [%4];" : "=f"(ret.x), "=f"(ret.y), "=f"(ret.z), "=f"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double2 __ldcg(const double2 *ptr) { double2 ret; asm volatile ("ld.global.cg.v2.f64 {%0,%1}, [%2];" : "=d"(ret.x), "=d"(ret.y) : __LDG_PTR (ptr)); return ret; } + +/****************************************************************************** + * __ldca * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ long __ldca(const long *ptr) { unsigned long ret; asm volatile ("ld.global.ca.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldca(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.ca.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ long __ldca(const long *ptr) { unsigned long ret; asm volatile ("ld.global.ca.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldca(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.ca.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +#endif + + +__SM_32_INTRINSICS_DECL__ char __ldca(const char *ptr) { unsigned int ret; asm volatile ("ld.global.ca.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (char)ret; } +__SM_32_INTRINSICS_DECL__ signed char __ldca(const signed char *ptr) { unsigned int ret; asm volatile ("ld.global.ca.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (signed char)ret; } +__SM_32_INTRINSICS_DECL__ short __ldca(const short *ptr) { unsigned short ret; asm volatile ("ld.global.ca.s16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr)); return (short)ret; } +__SM_32_INTRINSICS_DECL__ int __ldca(const int *ptr) { unsigned int ret; asm volatile ("ld.global.ca.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (int)ret; } +__SM_32_INTRINSICS_DECL__ long long __ldca(const long long *ptr) { unsigned long long ret; asm volatile ("ld.global.ca.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long long)ret; } +__SM_32_INTRINSICS_DECL__ char2 __ldca(const char2 *ptr) { char2 ret; int2 tmp; asm volatile ("ld.global.ca.v2.s8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ char4 __ldca(const char4 *ptr) { char4 ret; int4 tmp; asm volatile ("ld.global.ca.v4.s8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; ret.z = (char)tmp.z; ret.w = (char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ short2 __ldca(const short2 *ptr) { short2 ret; asm volatile ("ld.global.ca.v2.s16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ short4 __ldca(const short4 *ptr) { short4 ret; asm volatile ("ld.global.ca.v4.s16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int2 __ldca(const int2 *ptr) { int2 ret; asm volatile ("ld.global.ca.v2.s32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int4 __ldca(const int4 *ptr) { int4 ret; asm volatile ("ld.global.ca.v4.s32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ longlong2 __ldca(const longlong2 *ptr) { longlong2 ret; asm volatile ("ld.global.ca.v2.s64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ unsigned char __ldca(const unsigned char *ptr) { unsigned int ret; asm volatile ("ld.global.ca.u8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (unsigned char)ret; } +__SM_32_INTRINSICS_DECL__ unsigned short __ldca(const unsigned short *ptr) { unsigned short ret; asm volatile ("ld.global.ca.u16 %0, [%1];" : 
"=h"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned int __ldca(const unsigned int *ptr) { unsigned int ret; asm volatile ("ld.global.ca.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned long long __ldca(const unsigned long long *ptr) { unsigned long long ret; asm volatile ("ld.global.ca.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uchar2 __ldca(const uchar2 *ptr) { uchar2 ret; uint2 tmp; asm volatile ("ld.global.ca.v2.u8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ uchar4 __ldca(const uchar4 *ptr) { uchar4 ret; uint4 tmp; asm volatile ("ld.global.ca.v4.u8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; ret.z = (unsigned char)tmp.z; ret.w = (unsigned char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ ushort2 __ldca(const ushort2 *ptr) { ushort2 ret; asm volatile ("ld.global.ca.v2.u16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ushort4 __ldca(const ushort4 *ptr) { ushort4 ret; asm volatile ("ld.global.ca.v4.u16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint2 __ldca(const uint2 *ptr) { uint2 ret; asm volatile ("ld.global.ca.v2.u32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint4 __ldca(const uint4 *ptr) { uint4 ret; asm volatile ("ld.global.ca.v4.u32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ulonglong2 __ldca(const ulonglong2 *ptr) { ulonglong2 ret; asm volatile ("ld.global.ca.v2.u64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ float __ldca(const float *ptr) { float ret; asm volatile ("ld.global.ca.f32 %0, [%1];" : "=f"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double __ldca(const double *ptr) { double ret; asm volatile ("ld.global.ca.f64 %0, [%1];" : "=d"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float2 __ldca(const float2 *ptr) { float2 ret; asm volatile ("ld.global.ca.v2.f32 {%0,%1}, [%2];" : "=f"(ret.x), "=f"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float4 __ldca(const float4 *ptr) { float4 ret; asm volatile ("ld.global.ca.v4.f32 {%0,%1,%2,%3}, [%4];" : "=f"(ret.x), "=f"(ret.y), "=f"(ret.z), "=f"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double2 __ldca(const double2 *ptr) { double2 ret; asm volatile ("ld.global.ca.v2.f64 {%0,%1}, [%2];" : "=d"(ret.x), "=d"(ret.y) : __LDG_PTR (ptr)); return ret; } + +/****************************************************************************** + * __ldcs * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ long __ldcs(const long *ptr) { unsigned long ret; asm volatile ("ld.global.cs.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldcs(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.cs.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ long __ldcs(const long *ptr) { unsigned long ret; asm volatile ("ld.global.cs.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldcs(const unsigned long *ptr) { unsigned long ret; asm volatile ("ld.global.cs.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +#endif + + +__SM_32_INTRINSICS_DECL__ char __ldcs(const char *ptr) { unsigned int ret; asm volatile ("ld.global.cs.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (char)ret; } +__SM_32_INTRINSICS_DECL__ signed char __ldcs(const signed char *ptr) { unsigned int ret; asm volatile ("ld.global.cs.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (signed char)ret; } +__SM_32_INTRINSICS_DECL__ short __ldcs(const short *ptr) { unsigned short ret; asm volatile ("ld.global.cs.s16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr)); return (short)ret; } +__SM_32_INTRINSICS_DECL__ int __ldcs(const int *ptr) { unsigned int ret; asm volatile ("ld.global.cs.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (int)ret; } +__SM_32_INTRINSICS_DECL__ long long __ldcs(const long long *ptr) { unsigned long long ret; asm volatile ("ld.global.cs.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return (long long)ret; } +__SM_32_INTRINSICS_DECL__ char2 __ldcs(const char2 *ptr) { char2 ret; int2 tmp; asm volatile ("ld.global.cs.v2.s8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ char4 __ldcs(const char4 *ptr) { char4 ret; int4 tmp; asm volatile ("ld.global.cs.v4.s8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (char)tmp.x; ret.y = (char)tmp.y; ret.z = (char)tmp.z; ret.w = (char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ short2 __ldcs(const short2 *ptr) { short2 ret; asm volatile ("ld.global.cs.v2.s16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ short4 __ldcs(const short4 *ptr) { short4 ret; asm volatile ("ld.global.cs.v4.s16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int2 __ldcs(const int2 *ptr) { int2 ret; asm volatile ("ld.global.cs.v2.s32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ int4 __ldcs(const int4 *ptr) { int4 ret; asm volatile ("ld.global.cs.v4.s32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ longlong2 __ldcs(const longlong2 *ptr) { longlong2 ret; asm volatile ("ld.global.cs.v2.s64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ unsigned char __ldcs(const unsigned char *ptr) { unsigned int ret; asm volatile ("ld.global.cs.u8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return (unsigned char)ret; } +__SM_32_INTRINSICS_DECL__ unsigned short __ldcs(const unsigned short *ptr) { unsigned short ret; asm volatile ("ld.global.cs.u16 %0, [%1];" : 
"=h"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned int __ldcs(const unsigned int *ptr) { unsigned int ret; asm volatile ("ld.global.cs.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned long long __ldcs(const unsigned long long *ptr) { unsigned long long ret; asm volatile ("ld.global.cs.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uchar2 __ldcs(const uchar2 *ptr) { uchar2 ret; uint2 tmp; asm volatile ("ld.global.cs.v2.u8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ uchar4 __ldcs(const uchar4 *ptr) { uchar4 ret; uint4 tmp; asm volatile ("ld.global.cs.v4.u8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr)); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; ret.z = (unsigned char)tmp.z; ret.w = (unsigned char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ ushort2 __ldcs(const ushort2 *ptr) { ushort2 ret; asm volatile ("ld.global.cs.v2.u16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ushort4 __ldcs(const ushort4 *ptr) { ushort4 ret; asm volatile ("ld.global.cs.v4.u16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint2 __ldcs(const uint2 *ptr) { uint2 ret; asm volatile ("ld.global.cs.v2.u32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ uint4 __ldcs(const uint4 *ptr) { uint4 ret; asm volatile ("ld.global.cs.v4.u32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ ulonglong2 __ldcs(const ulonglong2 *ptr) { ulonglong2 ret; asm volatile ("ld.global.cs.v2.u64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr)); return ret; } + +__SM_32_INTRINSICS_DECL__ float __ldcs(const float *ptr) { float ret; asm volatile ("ld.global.cs.f32 %0, [%1];" : "=f"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double __ldcs(const double *ptr) { double ret; asm volatile ("ld.global.cs.f64 %0, [%1];" : "=d"(ret) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float2 __ldcs(const float2 *ptr) { float2 ret; asm volatile ("ld.global.cs.v2.f32 {%0,%1}, [%2];" : "=f"(ret.x), "=f"(ret.y) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ float4 __ldcs(const float4 *ptr) { float4 ret; asm volatile ("ld.global.cs.v4.f32 {%0,%1,%2,%3}, [%4];" : "=f"(ret.x), "=f"(ret.y), "=f"(ret.z), "=f"(ret.w) : __LDG_PTR (ptr)); return ret; } +__SM_32_INTRINSICS_DECL__ double2 __ldcs(const double2 *ptr) { double2 ret; asm volatile ("ld.global.cs.v2.f64 {%0,%1}, [%2];" : "=d"(ret.x), "=d"(ret.y) : __LDG_PTR (ptr)); return ret; } + +/****************************************************************************** + * __ldlu * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ long __ldlu(const long *ptr) { unsigned long ret; asm ("ld.global.lu.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldlu(const unsigned long *ptr) { unsigned long ret; asm ("ld.global.lu.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ long __ldlu(const long *ptr) { unsigned long ret; asm ("ld.global.lu.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldlu(const unsigned long *ptr) { unsigned long ret; asm ("ld.global.lu.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +#endif + + +__SM_32_INTRINSICS_DECL__ char __ldlu(const char *ptr) { unsigned int ret; asm ("ld.global.lu.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (char)ret; } +__SM_32_INTRINSICS_DECL__ signed char __ldlu(const signed char *ptr) { unsigned int ret; asm ("ld.global.lu.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (signed char)ret; } +__SM_32_INTRINSICS_DECL__ short __ldlu(const short *ptr) { unsigned short ret; asm ("ld.global.lu.s16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr) : "memory"); return (short)ret; } +__SM_32_INTRINSICS_DECL__ int __ldlu(const int *ptr) { unsigned int ret; asm ("ld.global.lu.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (int)ret; } +__SM_32_INTRINSICS_DECL__ long long __ldlu(const long long *ptr) { unsigned long long ret; asm ("ld.global.lu.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return (long long)ret; } +__SM_32_INTRINSICS_DECL__ char2 __ldlu(const char2 *ptr) { char2 ret; int2 tmp; asm ("ld.global.lu.v2.s8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr) : "memory"); ret.x = (char)tmp.x; ret.y = (char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ char4 __ldlu(const char4 *ptr) { char4 ret; int4 tmp; asm ("ld.global.lu.v4.s8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr) : "memory"); ret.x = (char)tmp.x; ret.y = (char)tmp.y; ret.z = (char)tmp.z; ret.w = (char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ short2 __ldlu(const short2 *ptr) { short2 ret; asm ("ld.global.lu.v2.s16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ short4 __ldlu(const short4 *ptr) { short4 ret; asm ("ld.global.lu.v4.s16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ int2 __ldlu(const int2 *ptr) { int2 ret; asm ("ld.global.lu.v2.s32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ int4 __ldlu(const int4 *ptr) { int4 ret; asm ("ld.global.lu.v4.s32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ longlong2 __ldlu(const longlong2 *ptr) { longlong2 ret; asm ("ld.global.lu.v2.s64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } + +__SM_32_INTRINSICS_DECL__ unsigned char __ldlu(const unsigned char *ptr) { unsigned int ret; asm ("ld.global.lu.u8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (unsigned char)ret; } +__SM_32_INTRINSICS_DECL__ unsigned short __ldlu(const unsigned short *ptr) { unsigned short ret; asm 
("ld.global.lu.u16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned int __ldlu(const unsigned int *ptr) { unsigned int ret; asm ("ld.global.lu.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned long long __ldlu(const unsigned long long *ptr) { unsigned long long ret; asm ("ld.global.lu.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ uchar2 __ldlu(const uchar2 *ptr) { uchar2 ret; uint2 tmp; asm ("ld.global.lu.v2.u8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr) : "memory"); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ uchar4 __ldlu(const uchar4 *ptr) { uchar4 ret; uint4 tmp; asm ("ld.global.lu.v4.u8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr) : "memory"); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; ret.z = (unsigned char)tmp.z; ret.w = (unsigned char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ ushort2 __ldlu(const ushort2 *ptr) { ushort2 ret; asm ("ld.global.lu.v2.u16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ ushort4 __ldlu(const ushort4 *ptr) { ushort4 ret; asm ("ld.global.lu.v4.u16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ uint2 __ldlu(const uint2 *ptr) { uint2 ret; asm ("ld.global.lu.v2.u32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ uint4 __ldlu(const uint4 *ptr) { uint4 ret; asm ("ld.global.lu.v4.u32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ ulonglong2 __ldlu(const ulonglong2 *ptr) { ulonglong2 ret; asm ("ld.global.lu.v2.u64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } + +__SM_32_INTRINSICS_DECL__ float __ldlu(const float *ptr) { float ret; asm ("ld.global.lu.f32 %0, [%1];" : "=f"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ double __ldlu(const double *ptr) { double ret; asm ("ld.global.lu.f64 %0, [%1];" : "=d"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ float2 __ldlu(const float2 *ptr) { float2 ret; asm ("ld.global.lu.v2.f32 {%0,%1}, [%2];" : "=f"(ret.x), "=f"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ float4 __ldlu(const float4 *ptr) { float4 ret; asm ("ld.global.lu.v4.f32 {%0,%1,%2,%3}, [%4];" : "=f"(ret.x), "=f"(ret.y), "=f"(ret.z), "=f"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ double2 __ldlu(const double2 *ptr) { double2 ret; asm ("ld.global.lu.v2.f64 {%0,%1}, [%2];" : "=d"(ret.x), "=d"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } + +/****************************************************************************** + * __ldcv * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
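+// [Editorial note - not part of the original NVIDIA header. __ldcv loads with
+// the .cv ("don't cache, fetch again") operator: any cached copy is treated as
+// stale and the value is refetched on every access, which is useful when
+// polling a location another processor writes. A minimal hypothetical sketch:
+//
+//     __device__ void wait_for_flag(const int *flag) {
+//         while (__ldcv(flag) == 0) { /* spin until the producer stores 1 */ }
+//     }
+// ]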
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ long __ldcv(const long *ptr) { unsigned long ret; asm ("ld.global.cv.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldcv(const unsigned long *ptr) { unsigned long ret; asm ("ld.global.cv.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ long __ldcv(const long *ptr) { unsigned long ret; asm ("ld.global.cv.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (long)ret; } +__SM_32_INTRINSICS_DECL__ unsigned long __ldcv(const unsigned long *ptr) { unsigned long ret; asm ("ld.global.cv.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +#endif + + +__SM_32_INTRINSICS_DECL__ char __ldcv(const char *ptr) { unsigned int ret; asm ("ld.global.cv.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (char)ret; } +__SM_32_INTRINSICS_DECL__ signed char __ldcv(const signed char *ptr) { unsigned int ret; asm ("ld.global.cv.s8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (signed char)ret; } +__SM_32_INTRINSICS_DECL__ short __ldcv(const short *ptr) { unsigned short ret; asm ("ld.global.cv.s16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr) : "memory"); return (short)ret; } +__SM_32_INTRINSICS_DECL__ int __ldcv(const int *ptr) { unsigned int ret; asm ("ld.global.cv.s32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (int)ret; } +__SM_32_INTRINSICS_DECL__ long long __ldcv(const long long *ptr) { unsigned long long ret; asm ("ld.global.cv.s64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return (long long)ret; } +__SM_32_INTRINSICS_DECL__ char2 __ldcv(const char2 *ptr) { char2 ret; int2 tmp; asm ("ld.global.cv.v2.s8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr) : "memory"); ret.x = (char)tmp.x; ret.y = (char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ char4 __ldcv(const char4 *ptr) { char4 ret; int4 tmp; asm ("ld.global.cv.v4.s8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr) : "memory"); ret.x = (char)tmp.x; ret.y = (char)tmp.y; ret.z = (char)tmp.z; ret.w = (char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ short2 __ldcv(const short2 *ptr) { short2 ret; asm ("ld.global.cv.v2.s16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ short4 __ldcv(const short4 *ptr) { short4 ret; asm ("ld.global.cv.v4.s16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ int2 __ldcv(const int2 *ptr) { int2 ret; asm ("ld.global.cv.v2.s32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ int4 __ldcv(const int4 *ptr) { int4 ret; asm ("ld.global.cv.v4.s32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ longlong2 __ldcv(const longlong2 *ptr) { longlong2 ret; asm ("ld.global.cv.v2.s64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } + +__SM_32_INTRINSICS_DECL__ unsigned char __ldcv(const unsigned char *ptr) { unsigned int ret; asm ("ld.global.cv.u8 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return (unsigned char)ret; } +__SM_32_INTRINSICS_DECL__ unsigned short __ldcv(const unsigned short *ptr) { unsigned short ret; asm 
("ld.global.cv.u16 %0, [%1];" : "=h"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned int __ldcv(const unsigned int *ptr) { unsigned int ret; asm ("ld.global.cv.u32 %0, [%1];" : "=r"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ unsigned long long __ldcv(const unsigned long long *ptr) { unsigned long long ret; asm ("ld.global.cv.u64 %0, [%1];" : "=l"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ uchar2 __ldcv(const uchar2 *ptr) { uchar2 ret; uint2 tmp; asm ("ld.global.cv.v2.u8 {%0,%1}, [%2];" : "=r"(tmp.x), "=r"(tmp.y) : __LDG_PTR (ptr) : "memory"); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; return ret; } +__SM_32_INTRINSICS_DECL__ uchar4 __ldcv(const uchar4 *ptr) { uchar4 ret; uint4 tmp; asm ("ld.global.cv.v4.u8 {%0,%1,%2,%3}, [%4];" : "=r"(tmp.x), "=r"(tmp.y), "=r"(tmp.z), "=r"(tmp.w) : __LDG_PTR (ptr) : "memory"); ret.x = (unsigned char)tmp.x; ret.y = (unsigned char)tmp.y; ret.z = (unsigned char)tmp.z; ret.w = (unsigned char)tmp.w; return ret; } +__SM_32_INTRINSICS_DECL__ ushort2 __ldcv(const ushort2 *ptr) { ushort2 ret; asm ("ld.global.cv.v2.u16 {%0,%1}, [%2];" : "=h"(ret.x), "=h"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ ushort4 __ldcv(const ushort4 *ptr) { ushort4 ret; asm ("ld.global.cv.v4.u16 {%0,%1,%2,%3}, [%4];" : "=h"(ret.x), "=h"(ret.y), "=h"(ret.z), "=h"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ uint2 __ldcv(const uint2 *ptr) { uint2 ret; asm ("ld.global.cv.v2.u32 {%0,%1}, [%2];" : "=r"(ret.x), "=r"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ uint4 __ldcv(const uint4 *ptr) { uint4 ret; asm ("ld.global.cv.v4.u32 {%0,%1,%2,%3}, [%4];" : "=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ ulonglong2 __ldcv(const ulonglong2 *ptr) { ulonglong2 ret; asm ("ld.global.cv.v2.u64 {%0,%1}, [%2];" : "=l"(ret.x), "=l"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } + +__SM_32_INTRINSICS_DECL__ float __ldcv(const float *ptr) { float ret; asm ("ld.global.cv.f32 %0, [%1];" : "=f"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ double __ldcv(const double *ptr) { double ret; asm ("ld.global.cv.f64 %0, [%1];" : "=d"(ret) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ float2 __ldcv(const float2 *ptr) { float2 ret; asm ("ld.global.cv.v2.f32 {%0,%1}, [%2];" : "=f"(ret.x), "=f"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ float4 __ldcv(const float4 *ptr) { float4 ret; asm ("ld.global.cv.v4.f32 {%0,%1,%2,%3}, [%4];" : "=f"(ret.x), "=f"(ret.y), "=f"(ret.z), "=f"(ret.w) : __LDG_PTR (ptr) : "memory"); return ret; } +__SM_32_INTRINSICS_DECL__ double2 __ldcv(const double2 *ptr) { double2 ret; asm ("ld.global.cv.v2.f64 {%0,%1}, [%2];" : "=d"(ret.x), "=d"(ret.y) : __LDG_PTR (ptr) : "memory"); return ret; } + +/****************************************************************************** + * __stwb * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
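+// [Editorial note - not part of the original NVIDIA header. __stwb stores with
+// the .wb (write-back) cache operator, the default policy for global stores,
+// so it mainly exists for symmetry with the other __st* variants. A minimal
+// hypothetical sketch:
+//
+//     __global__ void fill(float *out, float v, int n) {
+//         int i = blockIdx.x * blockDim.x + threadIdx.x;
+//         if (i < n) __stwb(&out[i], v);
+//     }
+// ]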
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ void __stwb(long *ptr, long value) { asm ("st.global.wb.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(unsigned long *ptr, unsigned long value) { asm ("st.global.wb.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ void __stwb(long *ptr, long value) { asm ("st.global.wb.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(unsigned long *ptr, unsigned long value) { asm ("st.global.wb.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +#endif + + +__SM_32_INTRINSICS_DECL__ void __stwb(char *ptr, char value) { asm ("st.global.wb.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(signed char *ptr, signed char value) { asm ("st.global.wb.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(short *ptr, short value) { asm ("st.global.wb.s16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(int *ptr, int value) { asm ("st.global.wb.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(long long *ptr, long long value) { asm ("st.global.wb.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(char2 *ptr, char2 value) { const int x = value.x, y = value.y; asm ("st.global.wb.v2.s8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(char4 *ptr, char4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.wb.v4.s8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(short2 *ptr, short2 value) { asm ("st.global.wb.v2.s16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(short4 *ptr, short4 value) { asm ("st.global.wb.v4.s16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(int2 *ptr, int2 value) { asm ("st.global.wb.v2.s32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(int4 *ptr, int4 value) { asm ("st.global.wb.v4.s32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(longlong2 *ptr, longlong2 value) { asm ("st.global.wb.v2.s64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); } + +__SM_32_INTRINSICS_DECL__ void __stwb(unsigned char *ptr, unsigned char value) { asm ("st.global.wb.u8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(unsigned short *ptr, unsigned short value) { asm ("st.global.wb.u16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(unsigned int *ptr, unsigned int value) { asm ("st.global.wb.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(unsigned long long *ptr, unsigned long long value) { asm ("st.global.wb.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(uchar2 *ptr, uchar2 value) { const int x = value.x, y = 
value.y; asm ("st.global.wb.v2.u8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(uchar4 *ptr, uchar4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.wb.v4.u8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(ushort2 *ptr, ushort2 value) { asm ("st.global.wb.v2.u16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(ushort4 *ptr, ushort4 value) { asm ("st.global.wb.v4.u16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(uint2 *ptr, uint2 value) { asm ("st.global.wb.v2.u32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(uint4 *ptr, uint4 value) { asm ("st.global.wb.v4.u32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(ulonglong2 *ptr, ulonglong2 value) { asm ("st.global.wb.v2.u64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); } + +__SM_32_INTRINSICS_DECL__ void __stwb(float *ptr, float value) { asm ("st.global.wb.f32 [%0], %1;" :: __LDG_PTR (ptr), "f"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(double *ptr, double value) { asm ("st.global.wb.f64 [%0], %1;" :: __LDG_PTR (ptr), "d"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(float2 *ptr, float2 value) { asm ("st.global.wb.v2.f32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(float4 *ptr, float4 value) { asm ("st.global.wb.v4.f32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y), "f"(value.z), "f"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwb(double2 *ptr, double2 value) { asm ("st.global.wb.v2.f64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "d"(value.x), "d"(value.y) : "memory"); } + +/****************************************************************************** + * __stcg * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
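+// [Editorial note - not part of the original NVIDIA header. __stcg stores with
+// the .cg operator, caching at the global (L2) level and bypassing L1. A
+// minimal hypothetical sketch in which one thread publishes a value that other
+// blocks will read:
+//
+//     __global__ void publish(int *mailbox, int payload) {
+//         if (threadIdx.x == 0) __stcg(mailbox, payload);
+//     }
+// ]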
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ void __stcg(long *ptr, long value) { asm ("st.global.cg.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(unsigned long *ptr, unsigned long value) { asm ("st.global.cg.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ void __stcg(long *ptr, long value) { asm ("st.global.cg.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(unsigned long *ptr, unsigned long value) { asm ("st.global.cg.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +#endif + + +__SM_32_INTRINSICS_DECL__ void __stcg(char *ptr, char value) { asm ("st.global.cg.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(signed char *ptr, signed char value) { asm ("st.global.cg.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(short *ptr, short value) { asm ("st.global.cg.s16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(int *ptr, int value) { asm ("st.global.cg.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(long long *ptr, long long value) { asm ("st.global.cg.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(char2 *ptr, char2 value) { const int x = value.x, y = value.y; asm ("st.global.cg.v2.s8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(char4 *ptr, char4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.cg.v4.s8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(short2 *ptr, short2 value) { asm ("st.global.cg.v2.s16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(short4 *ptr, short4 value) { asm ("st.global.cg.v4.s16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(int2 *ptr, int2 value) { asm ("st.global.cg.v2.s32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(int4 *ptr, int4 value) { asm ("st.global.cg.v4.s32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(longlong2 *ptr, longlong2 value) { asm ("st.global.cg.v2.s64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); } + +__SM_32_INTRINSICS_DECL__ void __stcg(unsigned char *ptr, unsigned char value) { asm ("st.global.cg.u8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(unsigned short *ptr, unsigned short value) { asm ("st.global.cg.u16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(unsigned int *ptr, unsigned int value) { asm ("st.global.cg.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(unsigned long long *ptr, unsigned long long value) { asm ("st.global.cg.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(uchar2 *ptr, uchar2 value) { const int x = value.x, y = 
value.y; asm ("st.global.cg.v2.u8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(uchar4 *ptr, uchar4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.cg.v4.u8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(ushort2 *ptr, ushort2 value) { asm ("st.global.cg.v2.u16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(ushort4 *ptr, ushort4 value) { asm ("st.global.cg.v4.u16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(uint2 *ptr, uint2 value) { asm ("st.global.cg.v2.u32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(uint4 *ptr, uint4 value) { asm ("st.global.cg.v4.u32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(ulonglong2 *ptr, ulonglong2 value) { asm ("st.global.cg.v2.u64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); } + +__SM_32_INTRINSICS_DECL__ void __stcg(float *ptr, float value) { asm ("st.global.cg.f32 [%0], %1;" :: __LDG_PTR (ptr), "f"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(double *ptr, double value) { asm ("st.global.cg.f64 [%0], %1;" :: __LDG_PTR (ptr), "d"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(float2 *ptr, float2 value) { asm ("st.global.cg.v2.f32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(float4 *ptr, float4 value) { asm ("st.global.cg.v4.f32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y), "f"(value.z), "f"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcg(double2 *ptr, double2 value) { asm ("st.global.cg.v2.f64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "d"(value.x), "d"(value.y) : "memory"); } + +/****************************************************************************** + * __stcs * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
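+// [Editorial note - not part of the original NVIDIA header. __stcs stores with
+// the .cs (cache-streaming, evict-first) operator - the store-side counterpart
+// of __ldcs above - suited to results that are written once and not re-read by
+// the kernel. A minimal hypothetical sketch pairing the two:
+//
+//     __global__ void copy_stream(float4 *dst, const float4 *src, int n) {
+//         int i = blockIdx.x * blockDim.x + threadIdx.x;
+//         if (i < n) __stcs(&dst[i], __ldcs(&src[i]));
+//     }
+// ]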
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ void __stcs(long *ptr, long value) { asm ("st.global.cs.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(unsigned long *ptr, unsigned long value) { asm ("st.global.cs.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ void __stcs(long *ptr, long value) { asm ("st.global.cs.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(unsigned long *ptr, unsigned long value) { asm ("st.global.cs.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +#endif + + +__SM_32_INTRINSICS_DECL__ void __stcs(char *ptr, char value) { asm ("st.global.cs.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(signed char *ptr, signed char value) { asm ("st.global.cs.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(short *ptr, short value) { asm ("st.global.cs.s16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(int *ptr, int value) { asm ("st.global.cs.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(long long *ptr, long long value) { asm ("st.global.cs.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(char2 *ptr, char2 value) { const int x = value.x, y = value.y; asm ("st.global.cs.v2.s8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(char4 *ptr, char4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.cs.v4.s8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(short2 *ptr, short2 value) { asm ("st.global.cs.v2.s16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(short4 *ptr, short4 value) { asm ("st.global.cs.v4.s16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(int2 *ptr, int2 value) { asm ("st.global.cs.v2.s32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(int4 *ptr, int4 value) { asm ("st.global.cs.v4.s32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(longlong2 *ptr, longlong2 value) { asm ("st.global.cs.v2.s64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); } + +__SM_32_INTRINSICS_DECL__ void __stcs(unsigned char *ptr, unsigned char value) { asm ("st.global.cs.u8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(unsigned short *ptr, unsigned short value) { asm ("st.global.cs.u16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(unsigned int *ptr, unsigned int value) { asm ("st.global.cs.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(unsigned long long *ptr, unsigned long long value) { asm ("st.global.cs.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(uchar2 *ptr, uchar2 value) { const int x = value.x, y = 
value.y; asm ("st.global.cs.v2.u8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(uchar4 *ptr, uchar4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.cs.v4.u8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(ushort2 *ptr, ushort2 value) { asm ("st.global.cs.v2.u16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(ushort4 *ptr, ushort4 value) { asm ("st.global.cs.v4.u16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(uint2 *ptr, uint2 value) { asm ("st.global.cs.v2.u32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(uint4 *ptr, uint4 value) { asm ("st.global.cs.v4.u32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(ulonglong2 *ptr, ulonglong2 value) { asm ("st.global.cs.v2.u64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); } + +__SM_32_INTRINSICS_DECL__ void __stcs(float *ptr, float value) { asm ("st.global.cs.f32 [%0], %1;" :: __LDG_PTR (ptr), "f"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(double *ptr, double value) { asm ("st.global.cs.f64 [%0], %1;" :: __LDG_PTR (ptr), "d"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(float2 *ptr, float2 value) { asm ("st.global.cs.v2.f32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(float4 *ptr, float4 value) { asm ("st.global.cs.v4.f32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y), "f"(value.z), "f"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stcs(double2 *ptr, double2 value) { asm ("st.global.cs.v2.f64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "d"(value.x), "d"(value.y) : "memory"); } + +/****************************************************************************** + * __stwt * + ******************************************************************************/ + +// Size of long is architecture and OS specific. 
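+// [Editorial note - not part of the original NVIDIA header. __stwt stores with
+// the .wt (write-through) operator, writing through the L2 cache toward system
+// memory, e.g. so that a host thread polling a mapped buffer observes the
+// value. A minimal hypothetical sketch:
+//
+//     __global__ void signal_host(int *mapped_flag) {
+//         if (threadIdx.x == 0) __stwt(mapped_flag, 1);
+//     }
+// ]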
+#if defined(__LP64__) // 64 bits +__SM_32_INTRINSICS_DECL__ void __stwt(long *ptr, long value) { asm ("st.global.wt.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(unsigned long *ptr, unsigned long value) { asm ("st.global.wt.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +#else // 32 bits +__SM_32_INTRINSICS_DECL__ void __stwt(long *ptr, long value) { asm ("st.global.wt.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(unsigned long *ptr, unsigned long value) { asm ("st.global.wt.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +#endif + + +__SM_32_INTRINSICS_DECL__ void __stwt(char *ptr, char value) { asm ("st.global.wt.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(signed char *ptr, signed char value) { asm ("st.global.wt.s8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(short *ptr, short value) { asm ("st.global.wt.s16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(int *ptr, int value) { asm ("st.global.wt.s32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(long long *ptr, long long value) { asm ("st.global.wt.s64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(char2 *ptr, char2 value) { const int x = value.x, y = value.y; asm ("st.global.wt.v2.s8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(char4 *ptr, char4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.wt.v4.s8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(short2 *ptr, short2 value) { asm ("st.global.wt.v2.s16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(short4 *ptr, short4 value) { asm ("st.global.wt.v4.s16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(int2 *ptr, int2 value) { asm ("st.global.wt.v2.s32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(int4 *ptr, int4 value) { asm ("st.global.wt.v4.s32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(longlong2 *ptr, longlong2 value) { asm ("st.global.wt.v2.s64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); } + +__SM_32_INTRINSICS_DECL__ void __stwt(unsigned char *ptr, unsigned char value) { asm ("st.global.wt.u8 [%0], %1;" :: __LDG_PTR (ptr), "r"((int)value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(unsigned short *ptr, unsigned short value) { asm ("st.global.wt.u16 [%0], %1;" :: __LDG_PTR (ptr), "h"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(unsigned int *ptr, unsigned int value) { asm ("st.global.wt.u32 [%0], %1;" :: __LDG_PTR (ptr), "r"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(unsigned long long *ptr, unsigned long long value) { asm ("st.global.wt.u64 [%0], %1;" :: __LDG_PTR (ptr), "l"(value) : "memory"); } +__SM_32_INTRINSICS_DECL__ void __stwt(uchar2 *ptr, uchar2 value) { const int x = value.x, y = 
value.y; asm ("st.global.wt.v2.u8 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(x), "r"(y) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(uchar4 *ptr, uchar4 value) { const int x = value.x, y = value.y, z = value.z, w = value.w; asm ("st.global.wt.v4.u8 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(x), "r"(y), "r"(z), "r"(w) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(ushort2 *ptr, ushort2 value) { asm ("st.global.wt.v2.u16 [%0], {%1,%2};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(ushort4 *ptr, ushort4 value) { asm ("st.global.wt.v4.u16 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "h"(value.x), "h"(value.y), "h"(value.z), "h"(value.w) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(uint2 *ptr, uint2 value) { asm ("st.global.wt.v2.u32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(uint4 *ptr, uint4 value) { asm ("st.global.wt.v4.u32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "r"(value.x), "r"(value.y), "r"(value.z), "r"(value.w) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(ulonglong2 *ptr, ulonglong2 value) { asm ("st.global.wt.v2.u64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "l"(value.x), "l"(value.y) : "memory"); }
+
+__SM_32_INTRINSICS_DECL__ void __stwt(float *ptr, float value) { asm ("st.global.wt.f32 [%0], %1;" :: __LDG_PTR (ptr), "f"(value) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(double *ptr, double value) { asm ("st.global.wt.f64 [%0], %1;" :: __LDG_PTR (ptr), "d"(value) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(float2 *ptr, float2 value) { asm ("st.global.wt.v2.f32 [%0], {%1,%2};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(float4 *ptr, float4 value) { asm ("st.global.wt.v4.f32 [%0], {%1,%2,%3,%4};" :: __LDG_PTR (ptr), "f"(value.x), "f"(value.y), "f"(value.z), "f"(value.w) : "memory"); }
+__SM_32_INTRINSICS_DECL__ void __stwt(double2 *ptr, double2 value) { asm ("st.global.wt.v2.f64 [%0], {%1,%2};" :: __LDG_PTR (ptr), "d"(value.x), "d"(value.y) : "memory"); }
+
+#undef __LDG_PTR
+
+
+// SHF is the "funnel shift" operation - an accelerated left/right shift with carry
+// operating on 64-bit quantities, which are concatenations of two 32-bit registers.
+
+// This shifts [hi:lo] left by "shift" bits, returning the most significant 32 bits of the result.
+__SM_32_INTRINSICS_DECL__ unsigned int __funnelshift_l(unsigned int lo, unsigned int hi, unsigned int shift)
+{
+  unsigned int ret;
+  asm volatile ("shf.l.wrap.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(lo), "r"(hi), "r"(shift));
+  return ret;
+}
+__SM_32_INTRINSICS_DECL__ unsigned int __funnelshift_lc(unsigned int lo, unsigned int hi, unsigned int shift)
+{
+  unsigned int ret;
+  asm volatile ("shf.l.clamp.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(lo), "r"(hi), "r"(shift));
+  return ret;
+}
+
+// This shifts [hi:lo] right by "shift" bits, returning the least significant 32 bits of the result.
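+// [Editorial note - not part of the original NVIDIA header. The .wrap variants
+// use shift & 31 and the .clamp variants use min(shift, 32). Passing the same
+// word as both halves turns the wrapping left funnel shift into a 32-bit
+// rotate, a common idiom (hypothetical helper):
+//
+//     __device__ unsigned int rotl32(unsigned int x, unsigned int s) {
+//         return __funnelshift_l(x, x, s);  // high 32 bits of (x:x) << (s & 31)
+//     }
+// ]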
+__SM_32_INTRINSICS_DECL__ unsigned int __funnelshift_r(unsigned int lo, unsigned int hi, unsigned int shift)
+{
+  unsigned int ret;
+  asm volatile ("shf.r.wrap.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(lo), "r"(hi), "r"(shift));
+  return ret;
+}
+__SM_32_INTRINSICS_DECL__ unsigned int __funnelshift_rc(unsigned int lo, unsigned int hi, unsigned int shift)
+{
+  unsigned int ret;
+  asm volatile ("shf.r.clamp.b32 %0, %1, %2, %3;" : "=r"(ret) : "r"(lo), "r"(hi), "r"(shift));
+  return ret;
+}
+
+
+#endif /* _NVHPC_CUDA || !__CUDA_ARCH__ || __CUDA_ARCH__ >= 320 */
+
+#endif /* __cplusplus && __CUDACC__ */
+
+#undef __SM_32_INTRINSICS_DECL__
+
+#endif /* !__SM_32_INTRINSICS_HPP__ */
+
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_35_atomic_functions.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_35_atomic_functions.h
new file mode 100644
index 0000000000000000000000000000000000000000..c8961079aeac4c9e73a7c2825cf9ea10b171af09
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_35_atomic_functions.h
@@ -0,0 +1,58 @@
+/*
+ * Copyright 1993-2012 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__SM_35_ATOMIC_FUNCTIONS_H__) +#define __SM_35_ATOMIC_FUNCTIONS_H__ + +/******************************************************************************* +* All sm_35 atomics are supported by sm_32 so simply include its header file * +*******************************************************************************/ +#include "sm_32_atomic_functions.h" + +#endif /* !__SM_35_ATOMIC_FUNCTIONS_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_35_intrinsics.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_35_intrinsics.h new file mode 100644 index 0000000000000000000000000000000000000000..da1e823a24171ed1ca9414955c6c68159a4411f5 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_35_intrinsics.h @@ -0,0 +1,116 @@ +/* + + * Copyright 1993-2012 NVIDIA Corporation. All rights reserved. + + * + + * NOTICE TO LICENSEE: + + * + + * This source code and/or documentation ("Licensed Deliverables") are + + * subject to NVIDIA intellectual property rights under U.S. and + + * international Copyright laws. + + * + + * These Licensed Deliverables contained herein is PROPRIETARY and + + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + + * conditions of a form of NVIDIA software license agreement by and + + * between NVIDIA and Licensee ("License Agreement") or electronically + + * accepted by Licensee. Notwithstanding any terms or conditions to + + * the contrary in the License Agreement, reproduction or disclosure + + * of the Licensed Deliverables to any third party without the express + + * written consent of NVIDIA is prohibited. + + * + + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + + * OF THESE LICENSED DELIVERABLES. + + * + + * U.S. Government End Users. These Licensed Deliverables are a + + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + + * 1995), consisting of "commercial computer software" and "commercial + + * computer software documentation" as such terms are used in 48 + + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + + * U.S. Government End Users acquire the Licensed Deliverables with + + * only those rights set forth herein. 
+ + * + + * Any use of the Licensed Deliverables in individual and commercial + + * software must include, in the user documentation and internal + + * comments to the code, the above Disclaimer and U.S. Government End + + * Users Notice. + + */ + + + +#if !defined(__SM_35_INTRINSICS_H__) + +#define __SM_35_INTRINSICS_H__ + + + +/********************************************************************************** + +* All sm_35 intrinsics are supported by sm_32 so simply include its header file * + +**********************************************************************************/ + +#include "sm_32_intrinsics.h" + + + +#endif /* !__SM_35_INTRINSICS_H__ */ + diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_60_atomic_functions.hpp b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_60_atomic_functions.hpp new file mode 100644 index 0000000000000000000000000000000000000000..858b373238a4d87f8d5fc669bf4145f4f2a6e877 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/sm_60_atomic_functions.hpp @@ -0,0 +1,527 @@ +/* + * Copyright 1993-2014 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. 
+ * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +#if !defined(__SM_60_ATOMIC_FUNCTIONS_HPP__) +#define __SM_60_ATOMIC_FUNCTIONS_HPP__ + +#if defined(__CUDACC_RTC__) +#define __SM_60_ATOMIC_FUNCTIONS_DECL__ __device__ +#else /* __CUDACC_RTC__ */ +#define __SM_60_ATOMIC_FUNCTIONS_DECL__ static __inline__ __device__ +#endif /* __CUDACC_RTC__ */ + +#if defined(__cplusplus) && defined(__CUDACC__) + +#if defined(_NVHPC_CUDA) || !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 600 + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "cuda_runtime_api.h" + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +__SM_60_ATOMIC_FUNCTIONS_DECL__ double atomicAdd(double *address, double val) +{ + return __dAtomicAdd(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicAdd_block(int *address, int val) +{ + return __iAtomicAdd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicAdd_system(int *address, int val) +{ + return __iAtomicAdd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicAdd_block(unsigned int *address, unsigned int val) +{ + return __uAtomicAdd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicAdd_system(unsigned int *address, unsigned int val) +{ + return __uAtomicAdd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicAdd_block(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicAdd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicAdd_system(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicAdd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +float atomicAdd_block(float *address, float val) +{ + return __fAtomicAdd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +float atomicAdd_system(float *address, float val) +{ + return __fAtomicAdd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +double atomicAdd_block(double *address, double val) +{ + return __dAtomicAdd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +double atomicAdd_system(double *address, double val) +{ + return __dAtomicAdd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicSub_block(int *address, int val) +{ + return __iAtomicAdd_block(address, (unsigned int)-(int)val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicSub_system(int *address, int val) +{ + return __iAtomicAdd_system(address, (unsigned int)-(int)val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicSub_block(unsigned int *address, unsigned int val) +{ + return __uAtomicAdd_block(address, (unsigned int)-(int)val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicSub_system(unsigned int *address, unsigned int val) +{ + return __uAtomicAdd_system(address, (unsigned int)-(int)val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicExch_block(int *address, int val) +{ + return __iAtomicExch_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicExch_system(int *address, int val) +{ + return __iAtomicExch_system(address, 
val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicExch_block(unsigned int *address, unsigned int val) +{ + return __uAtomicExch_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicExch_system(unsigned int *address, unsigned int val) +{ + return __uAtomicExch_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicExch_block(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicExch_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicExch_system(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicExch_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +float atomicExch_block(float *address, float val) +{ + return __fAtomicExch_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +float atomicExch_system(float *address, float val) +{ + return __fAtomicExch_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicMin_block(int *address, int val) +{ + return __iAtomicMin_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicMin_system(int *address, int val) +{ + return __iAtomicMin_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicMin_block(long long *address, long long val) +{ + return __illAtomicMin_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicMin_system(long long *address, long long val) +{ + return __illAtomicMin_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicMin_block(unsigned int *address, unsigned int val) +{ + return __uAtomicMin_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicMin_system(unsigned int *address, unsigned int val) +{ + return __uAtomicMin_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicMin_block(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicMin_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicMin_system(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicMin_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicMax_block(int *address, int val) +{ + return __iAtomicMax_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicMax_system(int *address, int val) +{ + return __iAtomicMax_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicMax_block(long long *address, long long val) +{ + return __illAtomicMax_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicMax_system(long long *address, long long val) +{ + return __illAtomicMax_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicMax_block(unsigned int *address, unsigned int val) +{ + return __uAtomicMax_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicMax_system(unsigned int *address, unsigned int val) +{ + return __uAtomicMax_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicMax_block(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicMax_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicMax_system(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicMax_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicInc_block(unsigned int *address, unsigned int val) +{ + return __uAtomicInc_block(address, val); +} + 
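+// [Editorial note - not part of the original NVIDIA header. The _block variants
+// are atomic only with respect to threads in the same thread block, while the
+// _system variants extend atomicity to the whole system (peer devices and, for
+// managed or host-mapped memory, the CPU). A minimal hypothetical sketch:
+//
+//     __global__ void tally(int *perBlock, int *systemWide) {
+//         atomicAdd_block(&perBlock[blockIdx.x], 1);  // block scope
+//         atomicAdd_system(systemWide, 1);            // system scope
+//     }
+// ]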
+__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicInc_system(unsigned int *address, unsigned int val) +{ + return __uAtomicInc_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicDec_block(unsigned int *address, unsigned int val) +{ + return __uAtomicDec_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicDec_system(unsigned int *address, unsigned int val) +{ + return __uAtomicDec_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicCAS_block(int *address, int compare, int val) +{ + return __iAtomicCAS_block(address, compare, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicCAS_system(int *address, int compare, int val) +{ + return __iAtomicCAS_system(address, compare, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicCAS_block(unsigned int *address, unsigned int compare, + unsigned int val) +{ + return __uAtomicCAS_block(address, compare, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicCAS_system(unsigned int *address, unsigned int compare, + unsigned int val) +{ + return __uAtomicCAS_system(address, compare, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long int atomicCAS_block(unsigned long long int *address, + unsigned long long int compare, + unsigned long long int val) +{ + return __ullAtomicCAS_block(address, compare, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long int atomicCAS_system(unsigned long long int *address, + unsigned long long int compare, + unsigned long long int val) +{ + return __ullAtomicCAS_system(address, compare, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicAnd_block(int *address, int val) +{ + return __iAtomicAnd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicAnd_system(int *address, int val) +{ + return __iAtomicAnd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicAnd_block(long long *address, long long val) +{ + return __llAtomicAnd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicAnd_system(long long *address, long long val) +{ + return __llAtomicAnd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicAnd_block(unsigned int *address, unsigned int val) +{ + return __uAtomicAnd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicAnd_system(unsigned int *address, unsigned int val) +{ + return __uAtomicAnd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicAnd_block(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicAnd_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicAnd_system(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicAnd_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicOr_block(int *address, int val) +{ + return __iAtomicOr_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicOr_system(int *address, int val) +{ + return __iAtomicOr_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicOr_block(long long *address, long long val) +{ + return __llAtomicOr_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicOr_system(long long *address, long long val) +{ + return __llAtomicOr_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicOr_block(unsigned int *address, unsigned int val) +{ + return __uAtomicOr_block(address, val); +} + 
+__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicOr_system(unsigned int *address, unsigned int val) +{ + return __uAtomicOr_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicOr_block(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicOr_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicOr_system(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicOr_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicXor_block(int *address, int val) +{ + return __iAtomicXor_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +int atomicXor_system(int *address, int val) +{ + return __iAtomicXor_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicXor_block(long long *address, long long val) +{ + return __llAtomicXor_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +long long atomicXor_system(long long *address, long long val) +{ + return __llAtomicXor_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicXor_block(unsigned int *address, unsigned int val) +{ + return __uAtomicXor_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned int atomicXor_system(unsigned int *address, unsigned int val) +{ + return __uAtomicXor_system(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicXor_block(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicXor_block(address, val); +} + +__SM_60_ATOMIC_FUNCTIONS_DECL__ +unsigned long long atomicXor_system(unsigned long long *address, unsigned long long val) +{ + return __ullAtomicXor_system(address, val); +} + +#endif /* !__CUDA_ARCH__ || __CUDA_ARCH__ >= 600 */ + +#endif /* __cplusplus && __CUDACC__ */ + +#undef __SM_60_ATOMIC_FUNCTIONS_DECL__ + +#endif /* !__SM_60_ATOMIC_FUNCTIONS_HPP__ */ + diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/texture_fetch_functions.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/texture_fetch_functions.h new file mode 100644 index 0000000000000000000000000000000000000000..704e8518da6b3cf7b77e7b9d34638bc06dd3937f --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/texture_fetch_functions.h @@ -0,0 +1,223 @@ +/* + * Copyright 1993-2022 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. 
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#if !defined(__TEXTURE_FETCH_FUNCTIONS_H__)
+#define __TEXTURE_FETCH_FUNCTIONS_H__
+
+
+#if defined(__cplusplus) && defined(__CUDACC__)
+
+/*******************************************************************************
+* *
+* *
+* *
+*******************************************************************************/
+
+#include "cuda_runtime_api.h"
+#include "cuda_texture_types.h"
+
+#if defined(_WIN32)
+# define __DEPRECATED__ __declspec(deprecated)
+#else
+# define __DEPRECATED__ __attribute__((deprecated))
+#endif
+
+
+template <typename T>
+struct __nv_tex_rmet_ret { };
+
+template<> struct __nv_tex_rmet_ret<char> { typedef char type; };
+template<> struct __nv_tex_rmet_ret<signed char> { typedef signed char type; };
+template<> struct __nv_tex_rmet_ret<unsigned char> { typedef unsigned char type; };
+template<> struct __nv_tex_rmet_ret<char1> { typedef char1 type; };
+template<> struct __nv_tex_rmet_ret<uchar1> { typedef uchar1 type; };
+template<> struct __nv_tex_rmet_ret<char2> { typedef char2 type; };
+template<> struct __nv_tex_rmet_ret<uchar2> { typedef uchar2 type; };
+template<> struct __nv_tex_rmet_ret<char4> { typedef char4 type; };
+template<> struct __nv_tex_rmet_ret<uchar4> { typedef uchar4 type; };
+
+template<> struct __nv_tex_rmet_ret<short> { typedef short type; };
+template<> struct __nv_tex_rmet_ret<unsigned short> { typedef unsigned short type; };
+template<> struct __nv_tex_rmet_ret<short1> { typedef short1 type; };
+template<> struct __nv_tex_rmet_ret<ushort1> { typedef ushort1 type; };
+template<> struct __nv_tex_rmet_ret<short2> { typedef short2 type; };
+template<> struct __nv_tex_rmet_ret<ushort2> { typedef ushort2 type; };
+template<> struct __nv_tex_rmet_ret<short4> { typedef short4 type; };
+template<> struct __nv_tex_rmet_ret<ushort4> { typedef ushort4 type; };
+
+template<> struct __nv_tex_rmet_ret<int> { typedef int type; };
+template<> struct __nv_tex_rmet_ret<unsigned int> { typedef unsigned int type; };
+template<> struct __nv_tex_rmet_ret<int1> { typedef int1 type; };
+template<> struct __nv_tex_rmet_ret<uint1> { typedef uint1 type; };
+template<> struct __nv_tex_rmet_ret<int2> { typedef int2 type; };
+template<> struct __nv_tex_rmet_ret<uint2> { typedef uint2 type; };
+template<> struct __nv_tex_rmet_ret<int4> { typedef int4 type; };
+template<> struct __nv_tex_rmet_ret<uint4> { typedef uint4 type; };
+
+#if !defined(__LP64__)
+template<> struct __nv_tex_rmet_ret<long> { typedef long type; };
+template<> struct __nv_tex_rmet_ret<unsigned long> { typedef unsigned long type; };
+template<> struct __nv_tex_rmet_ret<long1> { typedef long1 type; };
+template<> struct __nv_tex_rmet_ret<ulong1> { typedef ulong1 type; };
+template<> struct __nv_tex_rmet_ret<long2> { typedef long2 type; };
+template<> struct __nv_tex_rmet_ret<ulong2> { typedef ulong2 type; };
+template<> struct __nv_tex_rmet_ret<long4> { typedef long4 type; };
+template<> struct __nv_tex_rmet_ret<ulong4> { typedef ulong4 type; };
+#endif /* !__LP64__ */
+template<> struct __nv_tex_rmet_ret<float> { typedef float type; };
+template<> struct __nv_tex_rmet_ret<float1> { typedef float1 type; };
+template<> struct __nv_tex_rmet_ret<float2> { typedef float2 type; };
+template<> struct __nv_tex_rmet_ret<float4> { typedef float4 type; };
+
+
+template <typename T> struct __nv_tex_rmet_cast { typedef T* type; };
+#if !defined(__LP64__)
+template<> struct __nv_tex_rmet_cast<long> { typedef int *type; };
+template<> struct __nv_tex_rmet_cast<unsigned long> { typedef unsigned int *type; };
+template<> struct __nv_tex_rmet_cast<long1> { typedef int1 *type; };
+template<> struct __nv_tex_rmet_cast<ulong1> { typedef uint1 *type; };
+template<> struct __nv_tex_rmet_cast<long2> { typedef int2 *type; };
+template<> struct __nv_tex_rmet_cast<ulong2> { typedef uint2 *type; };
+template<> struct __nv_tex_rmet_cast<long4> { typedef int4 *type; };
+template<> struct __nv_tex_rmet_cast<ulong4> { typedef uint4 *type; };
+#endif /* !__LP64__ */
+
+template <typename T>
+struct __nv_tex_rmnf_ret { };
+
+template <> struct __nv_tex_rmnf_ret<char> { typedef float type; };
+template <> struct __nv_tex_rmnf_ret<signed char> { typedef float type; };
+template <> struct __nv_tex_rmnf_ret<unsigned char> { typedef float type; };
+template <> struct __nv_tex_rmnf_ret<short> { typedef float type; };
+template <> struct __nv_tex_rmnf_ret<unsigned short> { typedef float type; };
+template <> struct __nv_tex_rmnf_ret<char1> { typedef float1 type; };
+template <> struct __nv_tex_rmnf_ret<uchar1> { typedef float1 type; };
+template <> struct __nv_tex_rmnf_ret<short1> { typedef float1 type; };
+template <> struct __nv_tex_rmnf_ret<ushort1> { typedef float1 type; };
+template <> struct __nv_tex_rmnf_ret<char2> { typedef float2 type; };
+template <> struct __nv_tex_rmnf_ret<uchar2> { typedef float2 type; };
+template <> struct __nv_tex_rmnf_ret<short2> { typedef float2 type; };
+template <> struct __nv_tex_rmnf_ret<ushort2> { typedef float2 type; };
+template <> struct __nv_tex_rmnf_ret<char4> { typedef float4 type; };
+template <> struct __nv_tex_rmnf_ret<uchar4> { typedef float4 type; };
+template <> struct __nv_tex_rmnf_ret<short4> { typedef float4 type; };
+template <> struct __nv_tex_rmnf_ret<ushort4> { typedef float4 type; };
+
+
+template <typename T>
+struct __nv_tex2dgather_ret { };
+template <> struct __nv_tex2dgather_ret<char> { typedef char4 type; };
+template <> struct __nv_tex2dgather_ret<signed char> { typedef char4 type; };
+template <> struct __nv_tex2dgather_ret<char1> { typedef char4 type; };
+template <> struct __nv_tex2dgather_ret<char2> { typedef char4 type; };
+template <> struct __nv_tex2dgather_ret<char3> { typedef char4 type; };
+template <> struct __nv_tex2dgather_ret<char4> { typedef char4 type; };
+template <> struct __nv_tex2dgather_ret<unsigned char> { typedef uchar4 type; };
+template <> struct __nv_tex2dgather_ret<uchar1> { typedef uchar4 type; };
+template <> struct __nv_tex2dgather_ret<uchar2> { typedef uchar4 type; };
+template <> struct __nv_tex2dgather_ret<uchar3> { typedef uchar4 type; };
+template <> struct __nv_tex2dgather_ret<uchar4> { typedef uchar4 type; };
+
+template <> struct __nv_tex2dgather_ret<short> { typedef short4 type; };
+template <> struct __nv_tex2dgather_ret<short1> { typedef short4 type; };
+template <> struct __nv_tex2dgather_ret<short2> { typedef short4 type; };
+template <> struct __nv_tex2dgather_ret<short3> { typedef short4 type; };
+template <> struct __nv_tex2dgather_ret<short4> { typedef short4 type; };
+template <> struct __nv_tex2dgather_ret<unsigned short> { typedef ushort4 type; };
+template <> struct __nv_tex2dgather_ret<ushort1> { typedef ushort4 type; };
+template <> struct __nv_tex2dgather_ret<ushort2> { typedef ushort4 type; };
+template <> struct __nv_tex2dgather_ret<ushort3> { typedef ushort4 type; };
+template <> struct __nv_tex2dgather_ret<ushort4> { typedef ushort4 type; };
+
+template <> struct __nv_tex2dgather_ret<int> { typedef int4 type; };
+template <> struct __nv_tex2dgather_ret<int1> { typedef int4 type; };
+template <> struct __nv_tex2dgather_ret<int2> { typedef int4 type; };
+template <> struct __nv_tex2dgather_ret<int3> { typedef int4 type; };
+template <> struct __nv_tex2dgather_ret<int4> { typedef int4 type; };
+template <> struct __nv_tex2dgather_ret<unsigned int> { typedef uint4 type; };
+template <> struct __nv_tex2dgather_ret<uint1> { typedef uint4 type; };
+template <> struct __nv_tex2dgather_ret<uint2> { typedef uint4 type; };
+template <> struct __nv_tex2dgather_ret<uint3> { typedef uint4 type; };
+template <> struct __nv_tex2dgather_ret<uint4> { typedef uint4 type; };
+
+template <> struct __nv_tex2dgather_ret<float> { typedef float4 type; };
+template <> struct __nv_tex2dgather_ret<float1> { typedef float4 type; };
+template <> struct __nv_tex2dgather_ret<float2> { typedef float4 type; };
+template <> struct __nv_tex2dgather_ret<float3> { typedef float4 type; };
+template <> struct __nv_tex2dgather_ret<float4> { typedef float4 type; };
+
+
+template <typename T> struct __nv_tex2dgather_rmnf_ret { };
+template<> struct __nv_tex2dgather_rmnf_ret<char> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<signed char> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<unsigned char> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<char1> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<uchar1> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<char2> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<uchar2> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<char3> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<uchar3> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<char4> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<uchar4> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<short> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<unsigned short> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<short1> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<ushort1> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<short2> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<ushort2> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<short3> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<ushort3> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<short4> { typedef float4 type; };
+template<> struct __nv_tex2dgather_rmnf_ret<ushort4> { typedef float4 type; };
+
+#undef __DEPRECATED__
+
+#endif /* __cplusplus && __CUDACC__ */
+
+#endif /* !__TEXTURE_FETCH_FUNCTIONS_H__ */
diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/texture_types.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/texture_types.h
new file mode 100644
index 0000000000000000000000000000000000000000..5289dc6bbcab9f08bfad11e0a3dde1acca6adde4
--- /dev/null
+++ 
b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/texture_types.h @@ -0,0 +1,177 @@ +/* + * Copyright 1993-2012 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(__TEXTURE_TYPES_H__) +#define __TEXTURE_TYPES_H__ + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#include "driver_types.h" + +/** + * \addtogroup CUDART_TYPES + * + * @{ + */ + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#define cudaTextureType1D 0x01 +#define cudaTextureType2D 0x02 +#define cudaTextureType3D 0x03 +#define cudaTextureTypeCubemap 0x0C +#define cudaTextureType1DLayered 0xF1 +#define cudaTextureType2DLayered 0xF2 +#define cudaTextureTypeCubemapLayered 0xFC + +/** + * CUDA texture address modes + */ +enum __device_builtin__ cudaTextureAddressMode +{ + cudaAddressModeWrap = 0, /**< Wrapping address mode */ + cudaAddressModeClamp = 1, /**< Clamp to edge address mode */ + cudaAddressModeMirror = 2, /**< Mirror address mode */ + cudaAddressModeBorder = 3 /**< Border address mode */ +}; + +/** + * CUDA texture filter modes + */ +enum __device_builtin__ cudaTextureFilterMode +{ + cudaFilterModePoint = 0, /**< Point filter mode */ + cudaFilterModeLinear = 1 /**< Linear filter mode */ +}; + +/** + * CUDA texture read modes + */ +enum __device_builtin__ cudaTextureReadMode +{ + cudaReadModeElementType = 0, /**< Read texture as specified element type */ + cudaReadModeNormalizedFloat = 1 /**< Read texture as normalized float */ +}; + +/** + * CUDA texture descriptor + */ +struct __device_builtin__ cudaTextureDesc +{ + /** + * Texture address mode for up to 3 dimensions + */ + enum cudaTextureAddressMode addressMode[3]; + /** + * Texture filter mode + */ + enum cudaTextureFilterMode filterMode; + /** + * Texture read mode + */ + enum cudaTextureReadMode readMode; + /** + * Perform sRGB->linear conversion during texture read + */ + int sRGB; + /** + * Texture Border Color + */ + float borderColor[4]; + /** + * Indicates whether texture reads are normalized or not + */ + int normalizedCoords; + /** + * Limit to the anisotropy ratio + */ + unsigned int maxAnisotropy; + /** + * Mipmap filter mode + */ + enum cudaTextureFilterMode mipmapFilterMode; + /** + * Offset applied to the supplied mipmap level + */ + float mipmapLevelBias; + /** + * Lower end of the mipmap level range to clamp access to + */ + float minMipmapLevelClamp; + /** + * Upper end of the mipmap level range to clamp access to + */ + float maxMipmapLevelClamp; + /** + * Disable any trilinear filtering optimizations. + */ + int disableTrilinearOptimization; + /** + * Enable seamless cube map filtering. + */ + int seamlessCubemap; +}; + +/** + * An opaque value that represents a CUDA texture object + */ +typedef __device_builtin__ unsigned long long cudaTextureObject_t; + +/** @} */ +/** @} */ /* END CUDART_TYPES */ + +#endif /* !__TEXTURE_TYPES_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/vector_types.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/vector_types.h new file mode 100644 index 0000000000000000000000000000000000000000..4cfabcff8a25adaf3f589d38d531bc63cae2fcf6 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/include/vector_types.h @@ -0,0 +1,443 @@ +/* + * Copyright 1993-2018 NVIDIA Corporation. All rights reserved. 
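
The cudaTextureDesc fields shown above are consumed by cudaCreateTextureObject() from the runtime API. A hedged host-side sketch follows before the vector_types.h listing resumes, assuming a 2D float image; makeTexture2D is an illustrative helper name and error checks are omitted:

/* --- illustrative usage sketch, not part of the header --- */
#include <cuda_runtime.h>

cudaTextureObject_t makeTexture2D(const float *hostData, int width, int height)
{
    cudaChannelFormatDesc ch = cudaCreateChannelDesc<float>();
    cudaArray_t arr;
    cudaMallocArray(&arr, &ch, width, height);
    cudaMemcpy2DToArray(arr, 0, 0, hostData, width * sizeof(float),
                        width * sizeof(float), height, cudaMemcpyHostToDevice);

    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = arr;

    cudaTextureDesc texDesc = {};
    texDesc.addressMode[0]   = cudaAddressModeClamp;    /* clamp x to the edge */
    texDesc.addressMode[1]   = cudaAddressModeClamp;    /* clamp y to the edge */
    texDesc.filterMode       = cudaFilterModeLinear;    /* bilinear interpolation */
    texDesc.readMode         = cudaReadModeElementType; /* fetch raw float values */
    texDesc.normalizedCoords = 1;                       /* coordinates in [0,1) */

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);
    return tex;                                         /* sample with tex2D<float>() */
}
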
+ * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. 
+ */ + +#if !defined(__VECTOR_TYPES_H__) +#define __VECTOR_TYPES_H__ + +#if !defined(__CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__) +#define __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__ +#define __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_VECTOR_TYPES_H__ +#endif + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#ifndef __DOXYGEN_ONLY__ +#include "crt/host_defines.h" +#endif + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +#if !defined(__CUDACC__) && !defined(__CUDACC_RTC__) && \ + defined(_WIN32) && !defined(_WIN64) + +#pragma warning(push) +#pragma warning(disable: 4201 4408) + +#define __cuda_builtin_vector_align8(tag, members) \ +struct __device_builtin__ tag \ +{ \ + union \ + { \ + struct { members }; \ + struct { long long int :1,:0; }; \ + }; \ +} + +#else /* !__CUDACC__ && !__CUDACC_RTC__ && _WIN32 && !_WIN64 */ + +#define __cuda_builtin_vector_align8(tag, members) \ +struct __device_builtin__ __align__(8) tag \ +{ \ + members \ +} + +#endif /* !__CUDACC__ && !__CUDACC_RTC__ && _WIN32 && !_WIN64 */ + +struct __device_builtin__ char1 +{ + signed char x; +}; + +struct __device_builtin__ uchar1 +{ + unsigned char x; +}; + + +struct __device_builtin__ __align__(2) char2 +{ + signed char x, y; +}; + +struct __device_builtin__ __align__(2) uchar2 +{ + unsigned char x, y; +}; + +struct __device_builtin__ char3 +{ + signed char x, y, z; +}; + +struct __device_builtin__ uchar3 +{ + unsigned char x, y, z; +}; + +struct __device_builtin__ __align__(4) char4 +{ + signed char x, y, z, w; +}; + +struct __device_builtin__ __align__(4) uchar4 +{ + unsigned char x, y, z, w; +}; + +struct __device_builtin__ short1 +{ + short x; +}; + +struct __device_builtin__ ushort1 +{ + unsigned short x; +}; + +struct __device_builtin__ __align__(4) short2 +{ + short x, y; +}; + +struct __device_builtin__ __align__(4) ushort2 +{ + unsigned short x, y; +}; + +struct __device_builtin__ short3 +{ + short x, y, z; +}; + +struct __device_builtin__ ushort3 +{ + unsigned short x, y, z; +}; + +__cuda_builtin_vector_align8(short4, short x; short y; short z; short w;); +__cuda_builtin_vector_align8(ushort4, unsigned short x; unsigned short y; unsigned short z; unsigned short w;); + +struct __device_builtin__ int1 +{ + int x; +}; + +struct __device_builtin__ uint1 +{ + unsigned int x; +}; + +__cuda_builtin_vector_align8(int2, int x; int y;); +__cuda_builtin_vector_align8(uint2, unsigned int x; unsigned int y;); + +struct __device_builtin__ int3 +{ + int x, y, z; +}; + +struct __device_builtin__ uint3 +{ + unsigned int x, y, z; +}; + +struct __device_builtin__ __builtin_align__(16) int4 +{ + int x, y, z, w; +}; + +struct __device_builtin__ __builtin_align__(16) uint4 +{ + unsigned int x, y, z, w; +}; + +struct __device_builtin__ long1 +{ + long int x; +}; + +struct __device_builtin__ ulong1 +{ + unsigned long x; +}; + +#if defined(_WIN32) +__cuda_builtin_vector_align8(long2, long int x; long int y;); +__cuda_builtin_vector_align8(ulong2, unsigned long int x; unsigned long int y;); +#else /* !_WIN32 */ + +struct __device_builtin__ __align__(2*sizeof(long int)) long2 +{ + long int x, y; +}; + +struct __device_builtin__ __align__(2*sizeof(unsigned long int)) ulong2 +{ + unsigned long int x, y; +}; + +#endif /* _WIN32 */ + +struct __device_builtin__ long3 +{ + long 
int x, y, z; +}; + +struct __device_builtin__ ulong3 +{ + unsigned long int x, y, z; +}; + +struct __device_builtin__ __builtin_align__(16) long4 +{ + long int x, y, z, w; +}; + +struct __device_builtin__ __builtin_align__(16) ulong4 +{ + unsigned long int x, y, z, w; +}; + +struct __device_builtin__ float1 +{ + float x; +}; + +#if !defined(__CUDACC__) && defined(__arm__) && \ + defined(__ARM_PCS_VFP) && __GNUC__ == 4 && __GNUC_MINOR__ == 6 + +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-pedantic" + +struct __device_builtin__ __attribute__((aligned(8))) float2 +{ + float x; float y; float __cuda_gnu_arm_ice_workaround[0]; +}; + +#pragma GCC poison __cuda_gnu_arm_ice_workaround +#pragma GCC diagnostic pop + +#else /* !__CUDACC__ && __arm__ && __ARM_PCS_VFP && + __GNUC__ == 4&& __GNUC_MINOR__ == 6 */ + +__cuda_builtin_vector_align8(float2, float x; float y;); + +#endif /* !__CUDACC__ && __arm__ && __ARM_PCS_VFP && + __GNUC__ == 4&& __GNUC_MINOR__ == 6 */ + +struct __device_builtin__ float3 +{ + float x, y, z; +}; + +struct __device_builtin__ __builtin_align__(16) float4 +{ + float x, y, z, w; +}; + +struct __device_builtin__ longlong1 +{ + long long int x; +}; + +struct __device_builtin__ ulonglong1 +{ + unsigned long long int x; +}; + +struct __device_builtin__ __builtin_align__(16) longlong2 +{ + long long int x, y; +}; + +struct __device_builtin__ __builtin_align__(16) ulonglong2 +{ + unsigned long long int x, y; +}; + +struct __device_builtin__ longlong3 +{ + long long int x, y, z; +}; + +struct __device_builtin__ ulonglong3 +{ + unsigned long long int x, y, z; +}; + +struct __device_builtin__ __builtin_align__(16) longlong4 +{ + long long int x, y, z ,w; +}; + +struct __device_builtin__ __builtin_align__(16) ulonglong4 +{ + unsigned long long int x, y, z, w; +}; + +struct __device_builtin__ double1 +{ + double x; +}; + +struct __device_builtin__ __builtin_align__(16) double2 +{ + double x, y; +}; + +struct __device_builtin__ double3 +{ + double x, y, z; +}; + +struct __device_builtin__ __builtin_align__(16) double4 +{ + double x, y, z, w; +}; + +#if !defined(__CUDACC__) && defined(_WIN32) && !defined(_WIN64) + +#pragma warning(pop) + +#endif /* !__CUDACC__ && _WIN32 && !_WIN64 */ + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +typedef __device_builtin__ struct char1 char1; +typedef __device_builtin__ struct uchar1 uchar1; +typedef __device_builtin__ struct char2 char2; +typedef __device_builtin__ struct uchar2 uchar2; +typedef __device_builtin__ struct char3 char3; +typedef __device_builtin__ struct uchar3 uchar3; +typedef __device_builtin__ struct char4 char4; +typedef __device_builtin__ struct uchar4 uchar4; +typedef __device_builtin__ struct short1 short1; +typedef __device_builtin__ struct ushort1 ushort1; +typedef __device_builtin__ struct short2 short2; +typedef __device_builtin__ struct ushort2 ushort2; +typedef __device_builtin__ struct short3 short3; +typedef __device_builtin__ struct ushort3 ushort3; +typedef __device_builtin__ struct short4 short4; +typedef __device_builtin__ struct ushort4 ushort4; +typedef __device_builtin__ struct int1 int1; +typedef __device_builtin__ struct uint1 uint1; +typedef __device_builtin__ struct int2 int2; +typedef __device_builtin__ struct uint2 uint2; +typedef __device_builtin__ struct int3 int3; +typedef __device_builtin__ struct uint3 uint3; +typedef __device_builtin__ struct int4 int4; 
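
Worth noting before the remaining typedefs: the __align__/__builtin_align__ attributes on the two- and four-element structs above are exactly what lets the compiler move them with single 64- or 128-bit memory instructions. A small sketch of the resulting vectorized access pattern (not part of the header):

/* --- illustrative sketch, not part of the header --- */
__global__ void copyVec4(const float4 *__restrict__ in,
                         float4 *__restrict__ out, int n4)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    /* one 16-byte-aligned float4 load/store = one 128-bit transaction,
       4x the bytes per instruction of an element-wise float copy */
    if (i < n4) out[i] = in[i];
}
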
+typedef __device_builtin__ struct uint4 uint4; +typedef __device_builtin__ struct long1 long1; +typedef __device_builtin__ struct ulong1 ulong1; +typedef __device_builtin__ struct long2 long2; +typedef __device_builtin__ struct ulong2 ulong2; +typedef __device_builtin__ struct long3 long3; +typedef __device_builtin__ struct ulong3 ulong3; +typedef __device_builtin__ struct long4 long4; +typedef __device_builtin__ struct ulong4 ulong4; +typedef __device_builtin__ struct float1 float1; +typedef __device_builtin__ struct float2 float2; +typedef __device_builtin__ struct float3 float3; +typedef __device_builtin__ struct float4 float4; +typedef __device_builtin__ struct longlong1 longlong1; +typedef __device_builtin__ struct ulonglong1 ulonglong1; +typedef __device_builtin__ struct longlong2 longlong2; +typedef __device_builtin__ struct ulonglong2 ulonglong2; +typedef __device_builtin__ struct longlong3 longlong3; +typedef __device_builtin__ struct ulonglong3 ulonglong3; +typedef __device_builtin__ struct longlong4 longlong4; +typedef __device_builtin__ struct ulonglong4 ulonglong4; +typedef __device_builtin__ struct double1 double1; +typedef __device_builtin__ struct double2 double2; +typedef __device_builtin__ struct double3 double3; +typedef __device_builtin__ struct double4 double4; + +/******************************************************************************* +* * +* * +* * +*******************************************************************************/ + +struct __device_builtin__ dim3 +{ + unsigned int x, y, z; +#if defined(__cplusplus) +#if __cplusplus >= 201103L + __host__ __device__ constexpr dim3(unsigned int vx = 1, unsigned int vy = 1, unsigned int vz = 1) : x(vx), y(vy), z(vz) {} + __host__ __device__ constexpr dim3(uint3 v) : x(v.x), y(v.y), z(v.z) {} + __host__ __device__ constexpr operator uint3(void) const { return uint3{x, y, z}; } +#else + __host__ __device__ dim3(unsigned int vx = 1, unsigned int vy = 1, unsigned int vz = 1) : x(vx), y(vy), z(vz) {} + __host__ __device__ dim3(uint3 v) : x(v.x), y(v.y), z(v.z) {} + __host__ __device__ operator uint3(void) const { uint3 t; t.x = x; t.y = y; t.z = z; return t; } +#endif +#endif /* __cplusplus */ +}; + +typedef __device_builtin__ struct dim3 dim3; + +#undef __cuda_builtin_vector_align8 + +#if defined(__UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_VECTOR_TYPES_H__) +#undef __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__ +#undef __UNDEF_CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS_VECTOR_TYPES_H__ +#endif + +#endif /* !__VECTOR_TYPES_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/__init__.py b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/__pycache__/__init__.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..4c599fc7d196e95eaf4de7a4ff53b2cfbd62bf23 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/__pycache__/__init__.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer.h new file mode 100644 index 
0000000000000000000000000000000000000000..3c8ddabbf3325e2eafce645da039f91226d93237
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_adv_infer.h
@@ -0,0 +1,658 @@
+/*
+ * Copyright 2014-2023 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+/* cudnn_adv_infer : cuDNN's advanced and experimental features.
+
+*/
+
+#if !defined(CUDNN_ADV_INFER_H_)
+#define CUDNN_ADV_INFER_H_
+
+#include <cuda_runtime.h>
+#include <stdint.h>
+
+#include "cudnn_version.h"
+#include "cudnn_ops_infer.h"
+
+/* These version numbers are autogenerated, do not edit manually. */
+#define CUDNN_ADV_INFER_MAJOR 8
+#define CUDNN_ADV_INFER_MINOR 9
+#define CUDNN_ADV_INFER_PATCH 2
+
+#if (CUDNN_ADV_INFER_MAJOR != CUDNN_MAJOR) || (CUDNN_ADV_INFER_MINOR != CUDNN_MINOR) || \
+    (CUDNN_ADV_INFER_PATCH != CUDNN_PATCHLEVEL)
+#error Version mismatch in cuDNN ADV INFER!!!
+#endif
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+/* BASIC RNN API */
+
+typedef enum {
+    CUDNN_FWD_MODE_INFERENCE = 0,
+    CUDNN_FWD_MODE_TRAINING  = 1,
+} cudnnForwardMode_t;
+
+typedef enum {
+    CUDNN_RNN_RELU = 0, /* basic RNN cell type with ReLU activation */
+    CUDNN_RNN_TANH = 1, /* basic RNN cell type with tanh activation */
+    CUDNN_LSTM     = 2, /* LSTM with optional recurrent projection and clipping */
+    CUDNN_GRU      = 3, /* Using h' = tanh(r * Uh(t-1) + Wx) and h = (1 - z) * h' + z * h(t-1); */
+} cudnnRNNMode_t;
+
+typedef enum {
+    CUDNN_RNN_NO_BIAS         = 0, /* rnn cell formulas do not use biases */
+    CUDNN_RNN_SINGLE_INP_BIAS = 1, /* rnn cell formulas use one input bias in input GEMM */
+    CUDNN_RNN_DOUBLE_BIAS     = 2, /* default, rnn cell formulas use two bias vectors */
+    CUDNN_RNN_SINGLE_REC_BIAS = 3  /* rnn cell formulas use one recurrent bias in recurrent GEMM */
+} cudnnRNNBiasMode_t;
+
+typedef enum {
+    CUDNN_UNIDIRECTIONAL = 0, /* single direction network */
+    CUDNN_BIDIRECTIONAL  = 1, /* output concatenation at each layer */
+} cudnnDirectionMode_t;
+
+typedef enum {
+    CUDNN_LINEAR_INPUT = 0, /* adjustable weight matrix in first layer input GEMM */
+    CUDNN_SKIP_INPUT   = 1, /* fixed identity matrix in the first layer input GEMM */
+} cudnnRNNInputMode_t;
+
+typedef enum {
+    CUDNN_RNN_CLIP_NONE   = 0, /* disables LSTM cell clipping */
+    CUDNN_RNN_CLIP_MINMAX = 1, /* enables LSTM cell clipping */
+} cudnnRNNClipMode_t;
+
+typedef enum {
+    CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_UNPACKED   = 0, /* padded, outer stride from one time-step to the next */
+    CUDNN_RNN_DATA_LAYOUT_SEQ_MAJOR_PACKED     = 1, /* sequence length sorted and packed as in basic RNN api */
+    CUDNN_RNN_DATA_LAYOUT_BATCH_MAJOR_UNPACKED = 2, /* padded, outer stride from one batch to the next */
+} cudnnRNNDataLayout_t;
+
+/* Legacy type for backward compatibility */
+typedef unsigned cudnnRNNPaddingMode_t;
+
+/* For auxFlags in cudnnSetRNNDescriptor_v8() and cudnnSetRNNPaddingMode() */
+#define CUDNN_RNN_PADDED_IO_DISABLED 0
+#define CUDNN_RNN_PADDED_IO_ENABLED (1U << 0)
+
+struct cudnnRNNStruct;
+typedef struct cudnnRNNStruct *cudnnRNNDescriptor_t;
+
+struct cudnnPersistentRNNPlan;
+typedef struct cudnnPersistentRNNPlan *cudnnPersistentRNNPlan_t;
+
+struct cudnnRNNDataStruct;
+typedef struct cudnnRNNDataStruct *cudnnRNNDataDescriptor_t;
+
+cudnnStatus_t CUDNNWINAPI
+cudnnCreateRNNDescriptor(cudnnRNNDescriptor_t *rnnDesc);
+
+cudnnStatus_t CUDNNWINAPI
+cudnnDestroyRNNDescriptor(cudnnRNNDescriptor_t rnnDesc);
+
+cudnnStatus_t CUDNNWINAPI
+cudnnSetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc,
+                         cudnnRNNAlgo_t algo,
+                         cudnnRNNMode_t cellMode,
+                         cudnnRNNBiasMode_t biasMode,
+                         cudnnDirectionMode_t dirMode,
+                         cudnnRNNInputMode_t inputMode,
+                         cudnnDataType_t dataType,
+                         cudnnDataType_t mathPrec,
+                         cudnnMathType_t mathType,
+                         int32_t inputSize,
+                         int32_t hiddenSize,
+                         int32_t projSize,
+                         int32_t numLayers,
+                         cudnnDropoutDescriptor_t dropoutDesc,
+                         uint32_t auxFlags);
+
+cudnnStatus_t CUDNNWINAPI
+cudnnGetRNNDescriptor_v8(cudnnRNNDescriptor_t rnnDesc,
+                         cudnnRNNAlgo_t *algo,
+                         cudnnRNNMode_t *cellMode,
+                         cudnnRNNBiasMode_t *biasMode,
+                         cudnnDirectionMode_t *dirMode,
+                         cudnnRNNInputMode_t *inputMode,
+                         cudnnDataType_t *dataType,
+                         cudnnDataType_t *mathPrec,
+                         cudnnMathType_t *mathType,
+                         int32_t *inputSize,
+                         int32_t *hiddenSize,
+                         int32_t *projSize,
+                         int32_t *numLayers,
+                         cudnnDropoutDescriptor_t *dropoutDesc,
+                         uint32_t *auxFlags);
+
+/*
+ * mathPrec in cudnnSetRNNDescriptor_v6() specifies compute precision
+ * compute precision is
further modified by cudnnSetRNNMatrixMathType() + * dataType in cudnnGetRNNParamsSize() and wDesc specify weight storage + * dropout is between RNN layers, not between recurrent steps + */ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDescriptor_v6(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int hiddenSize, + const int numLayers, + cudnnDropoutDescriptor_t dropoutDesc, + cudnnRNNInputMode_t inputMode, + cudnnDirectionMode_t direction, + cudnnRNNMode_t cellMode, + cudnnRNNAlgo_t algo, + cudnnDataType_t mathPrec); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDescriptor_v6(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + int *hiddenSize, + int *numLayers, + cudnnDropoutDescriptor_t *dropoutDesc, + cudnnRNNInputMode_t *inputMode, + cudnnDirectionMode_t *direction, + cudnnRNNMode_t *cellMode, + cudnnRNNAlgo_t *algo, + cudnnDataType_t *mathPrec); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNMatrixMathType(cudnnRNNDescriptor_t rnnDesc, cudnnMathType_t mType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNMatrixMathType(cudnnRNNDescriptor_t rnnDesc, cudnnMathType_t *mType); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNBiasMode(cudnnRNNDescriptor_t rnnDesc, cudnnRNNBiasMode_t biasMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNBiasMode(cudnnRNNDescriptor_t rnnDesc, cudnnRNNBiasMode_t *biasMode); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t clipMode, + cudnnNanPropagation_t clipNanOpt, + double lclip, + double rclip); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip_v8(cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t *clipMode, + cudnnNanPropagation_t *clipNanOpt, + double *lclip, + double *rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNSetClip(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t clipMode, + cudnnNanPropagation_t clipNanOpt, + double lclip, + double rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNGetClip(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnRNNClipMode_t *clipMode, + cudnnNanPropagation_t *clipNanOpt, + double *lclip, + double *rclip); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNProjectionLayers(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + const int recProjSize, + const int outProjSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNProjectionLayers(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + int *recProjSize, + int *outProjSize); + +/* Expensive. Creates the plan for the specific settings. 
*/ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCreatePersistentRNNPlan(cudnnRNNDescriptor_t rnnDesc, + const int minibatch, + const cudnnDataType_t dataType, + cudnnPersistentRNNPlan_t *plan); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnDestroyPersistentRNNPlan(cudnnPersistentRNNPlan_t plan); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetPersistentRNNPlan(cudnnRNNDescriptor_t rnnDesc, cudnnPersistentRNNPlan_t plan); + +cudnnStatus_t CUDNNWINAPI +cudnnBuildRNNDynamic(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, int miniBatch); + +/* dataType in weight descriptors and input descriptors is used to describe storage */ +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWorkspaceSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + size_t *sizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNTrainingReserveSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNTempSpaceSizes(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fwdMode, + cudnnRNNDataDescriptor_t xDesc, + size_t *workSpaceSize, + size_t *reserveSpaceSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNParamsSize(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes, + cudnnDataType_t dataType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightSpaceSize(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, size_t *weightSpaceSize); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNLinLayerMatrixParams(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int pseudoLayer, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const int linLayerID, + cudnnFilterDescriptor_t linLayerMatDesc, + void **linLayerMat); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNLinLayerBiasParams(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int pseudoLayer, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const int linLayerID, + cudnnFilterDescriptor_t linLayerBiasDesc, + void **linLayerBias); + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNWeightParams(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + int32_t pseudoLayer, + size_t weightSpaceSize, + const void *weightSpace, + int32_t linLayerID, + cudnnTensorDescriptor_t mDesc, + void **mAddr, + cudnnTensorDescriptor_t bDesc, + void **bAddr); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardInference(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + void *workSpace, + size_t workSpaceSizeInBytes); + +/* RNN EX API */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNPaddingMode(cudnnRNNDescriptor_t rnnDesc, unsigned paddingMode); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNPaddingMode(cudnnRNNDescriptor_t rnnDesc, unsigned *paddingMode); + +cudnnStatus_t 
CUDNNWINAPI +cudnnCreateRNNDataDescriptor(cudnnRNNDataDescriptor_t *rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t dataType, + cudnnRNNDataLayout_t layout, + int maxSeqLength, + int batchSize, + int vectorSize, + const int seqLengthArray[], /* length of each sequence in the batch */ + void *paddingFill); /* symbol for filling padding position in output */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetRNNDataDescriptor(cudnnRNNDataDescriptor_t rnnDataDesc, + cudnnDataType_t *dataType, + cudnnRNNDataLayout_t *layout, + int *maxSeqLength, + int *batchSize, + int *vectorSize, + int arrayLengthRequested, + int seqLengthArray[], + void *paddingFill); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRNNForwardInferenceEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const cudnnRNNDataDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnRNNDataDescriptor_t yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const cudnnRNNDataDescriptor_t kDesc, /* reserved, should pass NULL */ + const void *keys, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t cDesc, /* reserved, should pass NULL */ + void *cAttn, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t iDesc, /* reserved, should pass NULL */ + void *iAttn, /* reserved, should pass NULL */ + const cudnnRNNDataDescriptor_t qDesc, /* reserved, should pass NULL */ + void *queries, /* reserved, should pass NULL */ + void *workSpace, + size_t workSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnRNNForward(cudnnHandle_t handle, + cudnnRNNDescriptor_t rnnDesc, + cudnnForwardMode_t fwdMode, + const int32_t devSeqLengths[], + cudnnRNNDataDescriptor_t xDesc, + const void *x, + cudnnRNNDataDescriptor_t yDesc, + void *y, + cudnnTensorDescriptor_t hDesc, + const void *hx, + void *hy, + cudnnTensorDescriptor_t cDesc, + const void *cx, + void *cy, + size_t weightSpaceSize, + const void *weightSpace, + size_t workSpaceSize, + void *workSpace, + size_t reserveSpaceSize, + void *reserveSpace); + +/* RNN FIND API */ + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetRNNAlgorithmDescriptor(cudnnHandle_t handle, cudnnRNNDescriptor_t rnnDesc, cudnnAlgorithmDescriptor_t algoDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetRNNForwardInferenceAlgorithmMaxCount(cudnnHandle_t handle, const cudnnRNNDescriptor_t rnnDesc, int *count); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnFindRNNForwardInferenceAlgorithmEx(cudnnHandle_t handle, + const cudnnRNNDescriptor_t rnnDesc, + const int seqLength, + const cudnnTensorDescriptor_t *xDesc, + const void *x, + const cudnnTensorDescriptor_t hxDesc, + const void *hx, + const cudnnTensorDescriptor_t cxDesc, + const void *cx, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t *yDesc, + void *y, + const cudnnTensorDescriptor_t hyDesc, + void *hy, + const cudnnTensorDescriptor_t cyDesc, + void *cy, + const float findIntensity, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnAlgorithmPerformance_t *perfResults, + void *workspace, + size_t workSpaceSizeInBytes); + +/* Sequence data descriptor */ + +typedef enum { 
+ CUDNN_SEQDATA_TIME_DIM = 0, /* index in time */ + CUDNN_SEQDATA_BATCH_DIM = 1, /* index in batch */ + CUDNN_SEQDATA_BEAM_DIM = 2, /* index in beam */ + CUDNN_SEQDATA_VECT_DIM = 3 /* index in vector */ +} cudnnSeqDataAxis_t; + +struct cudnnSeqDataStruct; +typedef struct cudnnSeqDataStruct *cudnnSeqDataDescriptor_t; + +#define CUDNN_SEQDATA_DIM_COUNT 4 /* dimension count */ + +cudnnStatus_t CUDNNWINAPI +cudnnCreateSeqDataDescriptor(cudnnSeqDataDescriptor_t *seqDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroySeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetSeqDataDescriptor(cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t dataType, + int nbDims, + const int dimA[], + const cudnnSeqDataAxis_t axes[], + size_t seqLengthArraySize, + const int seqLengthArray[], + void *paddingFill); + +cudnnStatus_t CUDNNWINAPI +cudnnGetSeqDataDescriptor(const cudnnSeqDataDescriptor_t seqDataDesc, + cudnnDataType_t *dataType, + int *nbDims, + int nbDimsRequested, + int dimA[], + cudnnSeqDataAxis_t axes[], + size_t *seqLengthArraySize, + size_t seqLengthSizeRequested, + int seqLengthArray[], + void *paddingFill); + +/* Multihead Attention */ + +/* Legacy type for backward compatibility */ +typedef unsigned cudnnAttnQueryMap_t; + +/* + * Multi-head attention options passed via 'attnMode' in cudnnSetAttnDescriptor(). + * Use the bitwise OR operator to combine several settings listed below. Additional + * minor options can be added here w/o changing or introducing new API functions. + */ +#define CUDNN_ATTN_QUERYMAP_ALL_TO_ONE 0 /* multiple Q-s map to a single (K,V) set when beam size > 1 */ +#define CUDNN_ATTN_QUERYMAP_ONE_TO_ONE (1U << 0) /* multiple Q-s map to multiple (K,V) sets when beam size > 1 */ +#define CUDNN_ATTN_DISABLE_PROJ_BIASES 0 /* no biases in attention input and output projections */ +#define CUDNN_ATTN_ENABLE_PROJ_BIASES (1U << 1) /* use biases in attention input and output projections */ + +struct cudnnAttnStruct; +typedef struct cudnnAttnStruct *cudnnAttnDescriptor_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateAttnDescriptor(cudnnAttnDescriptor_t *attnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyAttnDescriptor(cudnnAttnDescriptor_t attnDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned attnMode, + int nHeads, + double smScaler, + cudnnDataType_t dataType, + cudnnDataType_t computePrec, + cudnnMathType_t mathType, + cudnnDropoutDescriptor_t attnDropoutDesc, + cudnnDropoutDescriptor_t postDropoutDesc, + int qSize, + int kSize, + int vSize, + int qProjSize, + int kProjSize, + int vProjSize, + int oProjSize, + int qoMaxSeqLength, + int kvMaxSeqLength, + int maxBatchSize, + int maxBeamSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetAttnDescriptor(cudnnAttnDescriptor_t attnDesc, + unsigned *attnMode, + int *nHeads, + double *smScaler, + cudnnDataType_t *dataType, + cudnnDataType_t *computePrec, + cudnnMathType_t *mathType, + cudnnDropoutDescriptor_t *attnDropoutDesc, + cudnnDropoutDescriptor_t *postDropoutDesc, + int *qSize, + int *kSize, + int *vSize, + int *qProjSize, + int *kProjSize, + int *vProjSize, + int *oProjSize, + int *qoMaxSeqLength, + int *kvMaxSeqLength, + int *maxBatchSize, + int *maxBeamSize); + +cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnBuffers(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + size_t *weightSizeInBytes, + size_t *workSpaceSizeInBytes, + size_t *reserveSpaceSizeInBytes); + +typedef enum { + CUDNN_MH_ATTN_Q_WEIGHTS = 0, /* input projection 
weights for 'queries' */ + CUDNN_MH_ATTN_K_WEIGHTS = 1, /* input projection weights for 'keys' */ + CUDNN_MH_ATTN_V_WEIGHTS = 2, /* input projection weights for 'values' */ + CUDNN_MH_ATTN_O_WEIGHTS = 3, /* output projection weights */ + CUDNN_MH_ATTN_Q_BIASES = 4, /* input projection bias tensor for 'queries' */ + CUDNN_MH_ATTN_K_BIASES = 5, /* input projection bias for 'keys' */ + CUDNN_MH_ATTN_V_BIASES = 6, /* input projection bias for 'values' */ + CUDNN_MH_ATTN_O_BIASES = 7, /* output projection biases */ +} cudnnMultiHeadAttnWeightKind_t; + +#define CUDNN_ATTN_WKIND_COUNT 8 /* Number of attention weight/bias tensors */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetMultiHeadAttnWeights(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + cudnnMultiHeadAttnWeightKind_t wKind, + size_t weightSizeInBytes, + const void *weights, + cudnnTensorDescriptor_t wDesc, + void **wAddr); + +cudnnStatus_t CUDNNWINAPI +cudnnMultiHeadAttnForward(cudnnHandle_t handle, + const cudnnAttnDescriptor_t attnDesc, + int currIdx, + const int loWinIdx[], + const int hiWinIdx[], + const int devSeqLengthsQO[], + const int devSeqLengthsKV[], + const cudnnSeqDataDescriptor_t qDesc, + const void *queries, + const void *residuals, + const cudnnSeqDataDescriptor_t kDesc, + const void *keys, + const cudnnSeqDataDescriptor_t vDesc, + const void *values, + const cudnnSeqDataDescriptor_t oDesc, + void *out, + size_t weightSizeInBytes, + const void *weights, + size_t workSpaceSizeInBytes, + void *workSpace, + size_t reserveSpaceSizeInBytes, + void *reserveSpace); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnAdvInferVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_ADV_INFER_H_ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_backend.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_backend.h new file mode 100644 index 0000000000000000000000000000000000000000..b0f41de3b1e87286037ed7d0351057d93287d88f --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_backend.h @@ -0,0 +1,608 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. 
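
Before the cudnn_backend.h portion of the diff, a hedged usage sketch for the v8 RNN entry points declared in cudnn_adv_infer.h above. Assumptions: handle, dropoutDesc, the RNN data/tensor descriptors (xDesc, yDesc, hDesc, cDesc) and all device buffers (x, y, hx, hy, cx, cy, devSeqLengths, weightSpace, workSpace, reserveSpace) were created earlier; enum values such as CUDNN_RNN_ALGO_STANDARD, CUDNN_DATA_FLOAT and CUDNN_DEFAULT_MATH come from cudnn_ops_infer.h, which this header includes; error checking is omitted:

/* --- illustrative sketch, not a complete program --- */
cudnnRNNDescriptor_t rnn;
cudnnCreateRNNDescriptor(&rnn);
cudnnSetRNNDescriptor_v8(rnn,
                         CUDNN_RNN_ALGO_STANDARD,      /* algo */
                         CUDNN_LSTM,                   /* cellMode */
                         CUDNN_RNN_DOUBLE_BIAS,        /* biasMode (the default) */
                         CUDNN_UNIDIRECTIONAL,         /* dirMode */
                         CUDNN_LINEAR_INPUT,           /* inputMode */
                         CUDNN_DATA_FLOAT,             /* dataType (storage) */
                         CUDNN_DATA_FLOAT,             /* mathPrec */
                         CUDNN_DEFAULT_MATH,           /* mathType */
                         inputSize, hiddenSize,
                         hiddenSize,                   /* projSize == hiddenSize: no projection */
                         numLayers, dropoutDesc,
                         CUDNN_RNN_PADDED_IO_ENABLED); /* auxFlags */

size_t weightSpaceSize, workSpaceSize, reserveSpaceSize;
cudnnGetRNNWeightSpaceSize(handle, rnn, &weightSpaceSize);
cudnnGetRNNTempSpaceSizes(handle, rnn, CUDNN_FWD_MODE_TRAINING, xDesc,
                          &workSpaceSize, &reserveSpaceSize);

/* allocate weightSpace/workSpace/reserveSpace with cudaMalloc, then: */
cudnnRNNForward(handle, rnn, CUDNN_FWD_MODE_TRAINING, devSeqLengths,
                xDesc, x, yDesc, y,
                hDesc, hx, hy,
                cDesc, cx, cy,
                weightSpaceSize, weightSpace,
                workSpaceSize, workSpace,
                reserveSpaceSize, reserveSpace);

This single call covers the whole, optionally padded, batch and supersedes the per-time-step cudnnRNNForwardInference / cudnnRNNForwardInferenceEx paths that the header marks CUDNN_DEPRECATED above.
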
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+#ifndef _CUDNN_BACKEND_H_
+#define _CUDNN_BACKEND_H_
+
+/*
+ * The content of this header file is under development and is planned to be included in cudnn.h in the future.
+ * Production code should have all includes of this header file removed.
+ */
+
+#include "cudnn_ops_infer.h"
+#include "cudnn_cnn_infer.h"
+
+/* NOTE: definitions below live inside extern "C" so they can be copied to the public header later */
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+typedef void *cudnnBackendDescriptor_t;
+
+typedef struct cudnnFractionStruct {
+    int64_t numerator;
+    int64_t denominator;
+} cudnnFraction_t;
+
+typedef enum {
+    CUDNN_POINTWISE_ADD        = 0,
+    CUDNN_POINTWISE_ADD_SQUARE = 5,
+    CUDNN_POINTWISE_DIV        = 6,
+    CUDNN_POINTWISE_MAX        = 3,
+    CUDNN_POINTWISE_MIN        = 2,
+    CUDNN_POINTWISE_MOD        = 7,
+    CUDNN_POINTWISE_MUL        = 1,
+    CUDNN_POINTWISE_POW        = 8,
+    CUDNN_POINTWISE_SUB        = 9,
+
+    CUDNN_POINTWISE_ABS        = 10,
+    CUDNN_POINTWISE_CEIL       = 11,
+    CUDNN_POINTWISE_COS        = 12,
+    CUDNN_POINTWISE_EXP        = 13,
+    CUDNN_POINTWISE_FLOOR      = 14,
+    CUDNN_POINTWISE_LOG        = 15,
+    CUDNN_POINTWISE_NEG        = 16,
+    CUDNN_POINTWISE_RSQRT      = 17,
+    CUDNN_POINTWISE_SIN        = 18,
+    CUDNN_POINTWISE_SQRT       = 4,
+    CUDNN_POINTWISE_TAN        = 19,
+    CUDNN_POINTWISE_ERF        = 20,
+    CUDNN_POINTWISE_IDENTITY   = 21,
+    CUDNN_POINTWISE_RECIPROCAL = 22,
+
+    CUDNN_POINTWISE_RELU_FWD             = 100,
+    CUDNN_POINTWISE_TANH_FWD             = 101,
+    CUDNN_POINTWISE_SIGMOID_FWD          = 102,
+    CUDNN_POINTWISE_ELU_FWD              = 103,
+    CUDNN_POINTWISE_GELU_FWD             = 104,
+    CUDNN_POINTWISE_SOFTPLUS_FWD         = 105,
+    CUDNN_POINTWISE_SWISH_FWD            = 106,
+    CUDNN_POINTWISE_GELU_APPROX_TANH_FWD = 107,
+
+    CUDNN_POINTWISE_RELU_BWD             = 200,
+    CUDNN_POINTWISE_TANH_BWD             = 201,
+    CUDNN_POINTWISE_SIGMOID_BWD          = 202,
+    CUDNN_POINTWISE_ELU_BWD              = 203,
+    CUDNN_POINTWISE_GELU_BWD             = 204,
+    CUDNN_POINTWISE_SOFTPLUS_BWD         = 205,
+    CUDNN_POINTWISE_SWISH_BWD            = 206,
+    CUDNN_POINTWISE_GELU_APPROX_TANH_BWD = 207,
+
+    CUDNN_POINTWISE_CMP_EQ  = 300,
+    CUDNN_POINTWISE_CMP_NEQ = 301,
+    CUDNN_POINTWISE_CMP_GT  = 302,
+    CUDNN_POINTWISE_CMP_GE  = 303,
+    CUDNN_POINTWISE_CMP_LT  = 304,
+    CUDNN_POINTWISE_CMP_LE  = 305,
+
+    CUDNN_POINTWISE_LOGICAL_AND = 400,
+
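+    /* Editorial note (not in the original header): mode values are grouped by
+       range -- 0-22 arithmetic and unary functions, 100s forward activations,
+       200s backward activations, 300s comparisons, 400s logical operations,
+       500s index generation, 600s selection. */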
CUDNN_POINTWISE_LOGICAL_OR = 401, + CUDNN_POINTWISE_LOGICAL_NOT = 402, + + CUDNN_POINTWISE_GEN_INDEX = 501, + + CUDNN_POINTWISE_BINARY_SELECT = 601, +} cudnnPointwiseMode_t; + +typedef enum { + CUDNN_RESAMPLE_NEAREST = 0, + CUDNN_RESAMPLE_BILINEAR = 1, + CUDNN_RESAMPLE_AVGPOOL = 2, + CUDNN_RESAMPLE_AVGPOOL_INCLUDE_PADDING = 2, + CUDNN_RESAMPLE_AVGPOOL_EXCLUDE_PADDING = 4, + CUDNN_RESAMPLE_MAXPOOL = 3, +} cudnnResampleMode_t; + +typedef enum { + CUDNN_SIGNAL_SET = 0, + CUDNN_SIGNAL_WAIT = 1, +} cudnnSignalMode_t; + +typedef enum { + CUDNN_GENSTATS_SUM_SQSUM = 0, +} cudnnGenStatsMode_t; + +typedef enum { + CUDNN_BN_FINALIZE_STATISTICS_TRAINING = 0, + CUDNN_BN_FINALIZE_STATISTICS_INFERENCE = 1, +} cudnnBnFinalizeStatsMode_t; + +typedef enum { + CUDNN_RNG_DISTRIBUTION_BERNOULLI, + CUDNN_RNG_DISTRIBUTION_UNIFORM, + CUDNN_RNG_DISTRIBUTION_NORMAL, +} cudnnRngDistribution_t; + +typedef enum { + CUDNN_ATTR_POINTWISE_MODE = 0, + CUDNN_ATTR_POINTWISE_MATH_PREC = 1, + CUDNN_ATTR_POINTWISE_NAN_PROPAGATION = 2, + CUDNN_ATTR_POINTWISE_RELU_LOWER_CLIP = 3, + CUDNN_ATTR_POINTWISE_RELU_UPPER_CLIP = 4, + CUDNN_ATTR_POINTWISE_RELU_LOWER_CLIP_SLOPE = 5, + CUDNN_ATTR_POINTWISE_ELU_ALPHA = 6, + CUDNN_ATTR_POINTWISE_SOFTPLUS_BETA = 7, + CUDNN_ATTR_POINTWISE_SWISH_BETA = 8, + CUDNN_ATTR_POINTWISE_AXIS = 9, + + CUDNN_ATTR_CONVOLUTION_COMP_TYPE = 100, + CUDNN_ATTR_CONVOLUTION_CONV_MODE = 101, + CUDNN_ATTR_CONVOLUTION_DILATIONS = 102, + CUDNN_ATTR_CONVOLUTION_FILTER_STRIDES = 103, + CUDNN_ATTR_CONVOLUTION_POST_PADDINGS = 104, + CUDNN_ATTR_CONVOLUTION_PRE_PADDINGS = 105, + CUDNN_ATTR_CONVOLUTION_SPATIAL_DIMS = 106, + + CUDNN_ATTR_ENGINEHEUR_MODE = 200, + CUDNN_ATTR_ENGINEHEUR_OPERATION_GRAPH = 201, + CUDNN_ATTR_ENGINEHEUR_RESULTS = 202, + + CUDNN_ATTR_ENGINECFG_ENGINE = 300, + CUDNN_ATTR_ENGINECFG_INTERMEDIATE_INFO = 301, + CUDNN_ATTR_ENGINECFG_KNOB_CHOICES = 302, + + CUDNN_ATTR_EXECUTION_PLAN_HANDLE = 400, + CUDNN_ATTR_EXECUTION_PLAN_ENGINE_CONFIG = 401, + CUDNN_ATTR_EXECUTION_PLAN_WORKSPACE_SIZE = 402, + CUDNN_ATTR_EXECUTION_PLAN_COMPUTED_INTERMEDIATE_UIDS = 403, + CUDNN_ATTR_EXECUTION_PLAN_RUN_ONLY_INTERMEDIATE_UIDS = 404, + CUDNN_ATTR_EXECUTION_PLAN_JSON_REPRESENTATION = 405, + + CUDNN_ATTR_INTERMEDIATE_INFO_UNIQUE_ID = 500, + CUDNN_ATTR_INTERMEDIATE_INFO_SIZE = 501, + CUDNN_ATTR_INTERMEDIATE_INFO_DEPENDENT_DATA_UIDS = 502, + CUDNN_ATTR_INTERMEDIATE_INFO_DEPENDENT_ATTRIBUTES = 503, + + CUDNN_ATTR_KNOB_CHOICE_KNOB_TYPE = 600, + CUDNN_ATTR_KNOB_CHOICE_KNOB_VALUE = 601, + + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_ALPHA = 700, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_BETA = 701, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_CONV_DESC = 702, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_W = 703, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_X = 704, + CUDNN_ATTR_OPERATION_CONVOLUTION_FORWARD_Y = 705, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_ALPHA = 706, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_BETA = 707, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_CONV_DESC = 708, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_W = 709, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_DX = 710, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_DATA_DY = 711, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_ALPHA = 712, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_BETA = 713, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_CONV_DESC = 714, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_DW = 715, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_X = 716, + CUDNN_ATTR_OPERATION_CONVOLUTION_BWD_FILTER_DY = 717, + + CUDNN_ATTR_OPERATION_POINTWISE_PW_DESCRIPTOR = 750, 
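+
+    /* Editorial note (not in the original header): an op-node assembled from
+       the attributes below binds the pointwise descriptor above
+       (CUDNN_ATTR_OPERATION_POINTWISE_PW_DESCRIPTOR) to its tensors --
+       XDESC/BDESC/YDESC for the forward direction, DXDESC/DYDESC for the
+       backward one. */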
+ CUDNN_ATTR_OPERATION_POINTWISE_XDESC = 751, + CUDNN_ATTR_OPERATION_POINTWISE_BDESC = 752, + CUDNN_ATTR_OPERATION_POINTWISE_YDESC = 753, + CUDNN_ATTR_OPERATION_POINTWISE_ALPHA1 = 754, + CUDNN_ATTR_OPERATION_POINTWISE_ALPHA2 = 755, + CUDNN_ATTR_OPERATION_POINTWISE_DXDESC = 756, + CUDNN_ATTR_OPERATION_POINTWISE_DYDESC = 757, + CUDNN_ATTR_OPERATION_POINTWISE_TDESC = 758, + + CUDNN_ATTR_OPERATION_GENSTATS_MODE = 770, + CUDNN_ATTR_OPERATION_GENSTATS_MATH_PREC = 771, + CUDNN_ATTR_OPERATION_GENSTATS_XDESC = 772, + CUDNN_ATTR_OPERATION_GENSTATS_SUMDESC = 773, + CUDNN_ATTR_OPERATION_GENSTATS_SQSUMDESC = 774, + + CUDNN_ATTR_OPERATION_BN_FINALIZE_STATS_MODE = 780, + CUDNN_ATTR_OPERATION_BN_FINALIZE_MATH_PREC = 781, + CUDNN_ATTR_OPERATION_BN_FINALIZE_Y_SUM_DESC = 782, + CUDNN_ATTR_OPERATION_BN_FINALIZE_Y_SQ_SUM_DESC = 783, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SCALE_DESC = 784, + CUDNN_ATTR_OPERATION_BN_FINALIZE_BIAS_DESC = 785, + CUDNN_ATTR_OPERATION_BN_FINALIZE_PREV_RUNNING_MEAN_DESC = 786, + CUDNN_ATTR_OPERATION_BN_FINALIZE_PREV_RUNNING_VAR_DESC = 787, + CUDNN_ATTR_OPERATION_BN_FINALIZE_UPDATED_RUNNING_MEAN_DESC = 788, + CUDNN_ATTR_OPERATION_BN_FINALIZE_UPDATED_RUNNING_VAR_DESC = 789, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SAVED_MEAN_DESC = 790, + CUDNN_ATTR_OPERATION_BN_FINALIZE_SAVED_INV_STD_DESC = 791, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EQ_SCALE_DESC = 792, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EQ_BIAS_DESC = 793, + CUDNN_ATTR_OPERATION_BN_FINALIZE_ACCUM_COUNT_DESC = 794, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EPSILON_DESC = 795, + CUDNN_ATTR_OPERATION_BN_FINALIZE_EXP_AVERATE_FACTOR_DESC = 796, + + CUDNN_ATTR_OPERATIONGRAPH_HANDLE = 800, + CUDNN_ATTR_OPERATIONGRAPH_OPS = 801, + CUDNN_ATTR_OPERATIONGRAPH_ENGINE_GLOBAL_COUNT = 802, + + CUDNN_ATTR_TENSOR_BYTE_ALIGNMENT = 900, + CUDNN_ATTR_TENSOR_DATA_TYPE = 901, + CUDNN_ATTR_TENSOR_DIMENSIONS = 902, + CUDNN_ATTR_TENSOR_STRIDES = 903, + CUDNN_ATTR_TENSOR_VECTOR_COUNT = 904, + CUDNN_ATTR_TENSOR_VECTORIZED_DIMENSION = 905, + CUDNN_ATTR_TENSOR_UNIQUE_ID = 906, + CUDNN_ATTR_TENSOR_IS_VIRTUAL = 907, + CUDNN_ATTR_TENSOR_IS_BY_VALUE = 908, + CUDNN_ATTR_TENSOR_REORDERING_MODE = 909, + CUDNN_ATTR_TENSOR_RAGGED_OFFSET_DESC = 913, + + CUDNN_ATTR_VARIANT_PACK_UNIQUE_IDS = 1000, + CUDNN_ATTR_VARIANT_PACK_DATA_POINTERS = 1001, + CUDNN_ATTR_VARIANT_PACK_INTERMEDIATES = 1002, + CUDNN_ATTR_VARIANT_PACK_WORKSPACE = 1003, + + CUDNN_ATTR_LAYOUT_INFO_TENSOR_UID = 1100, + CUDNN_ATTR_LAYOUT_INFO_TYPES = 1101, + + CUDNN_ATTR_KNOB_INFO_TYPE = 1200, + CUDNN_ATTR_KNOB_INFO_MAXIMUM_VALUE = 1201, + CUDNN_ATTR_KNOB_INFO_MINIMUM_VALUE = 1202, + CUDNN_ATTR_KNOB_INFO_STRIDE = 1203, + + CUDNN_ATTR_ENGINE_OPERATION_GRAPH = 1300, + CUDNN_ATTR_ENGINE_GLOBAL_INDEX = 1301, + CUDNN_ATTR_ENGINE_KNOB_INFO = 1302, + CUDNN_ATTR_ENGINE_NUMERICAL_NOTE = 1303, + CUDNN_ATTR_ENGINE_LAYOUT_INFO = 1304, + CUDNN_ATTR_ENGINE_BEHAVIOR_NOTE = 1305, + + CUDNN_ATTR_MATMUL_COMP_TYPE = 1500, + CUDNN_ATTR_MATMUL_PADDING_VALUE = 1503, + + CUDNN_ATTR_OPERATION_MATMUL_ADESC = 1520, + CUDNN_ATTR_OPERATION_MATMUL_BDESC = 1521, + CUDNN_ATTR_OPERATION_MATMUL_CDESC = 1522, + CUDNN_ATTR_OPERATION_MATMUL_DESC = 1523, + CUDNN_ATTR_OPERATION_MATMUL_IRREGULARLY_STRIDED_BATCH_COUNT = 1524, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_M_OVERRIDE_DESC = 1525, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_N_OVERRIDE_DESC = 1526, + CUDNN_ATTR_OPERATION_MATMUL_GEMM_K_OVERRIDE_DESC = 1527, + + CUDNN_ATTR_REDUCTION_OPERATOR = 1600, + CUDNN_ATTR_REDUCTION_COMP_TYPE = 1601, + + CUDNN_ATTR_OPERATION_REDUCTION_XDESC = 1610, + CUDNN_ATTR_OPERATION_REDUCTION_YDESC = 
1611, + CUDNN_ATTR_OPERATION_REDUCTION_DESC = 1612, + + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_MATH_PREC = 1620, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_MEAN_DESC = 1621, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_INVSTD_DESC = 1622, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_BN_SCALE_DESC = 1623, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_X_DESC = 1624, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DY_DESC = 1625, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DBN_SCALE_DESC = 1626, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_DBN_BIAS_DESC = 1627, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_DY_SCALE_DESC = 1628, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_X_SCALE_DESC = 1629, + CUDNN_ATTR_OPERATION_BN_BWD_WEIGHTS_EQ_BIAS = 1630, + + CUDNN_ATTR_RESAMPLE_MODE = 1700, + CUDNN_ATTR_RESAMPLE_COMP_TYPE = 1701, + CUDNN_ATTR_RESAMPLE_SPATIAL_DIMS = 1702, + CUDNN_ATTR_RESAMPLE_POST_PADDINGS = 1703, + CUDNN_ATTR_RESAMPLE_PRE_PADDINGS = 1704, + CUDNN_ATTR_RESAMPLE_STRIDES = 1705, + CUDNN_ATTR_RESAMPLE_WINDOW_DIMS = 1706, + CUDNN_ATTR_RESAMPLE_NAN_PROPAGATION = 1707, + CUDNN_ATTR_RESAMPLE_PADDING_MODE = 1708, + + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_XDESC = 1710, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_YDESC = 1711, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_IDXDESC = 1712, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_ALPHA = 1713, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_BETA = 1714, + CUDNN_ATTR_OPERATION_RESAMPLE_FWD_DESC = 1716, + + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DXDESC = 1720, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DYDESC = 1721, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_IDXDESC = 1722, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_ALPHA = 1723, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_BETA = 1724, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_DESC = 1725, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_XDESC = 1726, + CUDNN_ATTR_OPERATION_RESAMPLE_BWD_YDESC = 1727, + + CUDNN_ATTR_OPERATION_CONCAT_AXIS = 1800, + CUDNN_ATTR_OPERATION_CONCAT_INPUT_DESCS = 1801, + CUDNN_ATTR_OPERATION_CONCAT_INPLACE_INDEX = 1802, + CUDNN_ATTR_OPERATION_CONCAT_OUTPUT_DESC = 1803, + + CUDNN_ATTR_OPERATION_SIGNAL_MODE = 1900, + CUDNN_ATTR_OPERATION_SIGNAL_FLAGDESC = 1901, + CUDNN_ATTR_OPERATION_SIGNAL_VALUE = 1902, + CUDNN_ATTR_OPERATION_SIGNAL_XDESC = 1903, + CUDNN_ATTR_OPERATION_SIGNAL_YDESC = 1904, + + CUDNN_ATTR_OPERATION_NORM_FWD_MODE = 2000, + CUDNN_ATTR_OPERATION_NORM_FWD_PHASE = 2001, + CUDNN_ATTR_OPERATION_NORM_FWD_XDESC = 2002, + CUDNN_ATTR_OPERATION_NORM_FWD_MEAN_DESC = 2003, + CUDNN_ATTR_OPERATION_NORM_FWD_INV_VARIANCE_DESC = 2004, + CUDNN_ATTR_OPERATION_NORM_FWD_SCALE_DESC = 2005, + CUDNN_ATTR_OPERATION_NORM_FWD_BIAS_DESC = 2006, + CUDNN_ATTR_OPERATION_NORM_FWD_EPSILON_DESC = 2007, + CUDNN_ATTR_OPERATION_NORM_FWD_EXP_AVG_FACTOR_DESC = 2008, + CUDNN_ATTR_OPERATION_NORM_FWD_INPUT_RUNNING_MEAN_DESC = 2009, + CUDNN_ATTR_OPERATION_NORM_FWD_INPUT_RUNNING_VAR_DESC = 2010, + CUDNN_ATTR_OPERATION_NORM_FWD_OUTPUT_RUNNING_MEAN_DESC = 2011, + CUDNN_ATTR_OPERATION_NORM_FWD_OUTPUT_RUNNING_VAR_DESC = 2012, + CUDNN_ATTR_OPERATION_NORM_FWD_YDESC = 2013, + CUDNN_ATTR_OPERATION_NORM_FWD_PEER_STAT_DESCS = 2014, + + CUDNN_ATTR_OPERATION_NORM_BWD_MODE = 2100, + CUDNN_ATTR_OPERATION_NORM_BWD_XDESC = 2101, + CUDNN_ATTR_OPERATION_NORM_BWD_MEAN_DESC = 2102, + CUDNN_ATTR_OPERATION_NORM_BWD_INV_VARIANCE_DESC = 2103, + CUDNN_ATTR_OPERATION_NORM_BWD_DYDESC = 2104, + CUDNN_ATTR_OPERATION_NORM_BWD_SCALE_DESC = 2105, + CUDNN_ATTR_OPERATION_NORM_BWD_EPSILON_DESC = 2106, + CUDNN_ATTR_OPERATION_NORM_BWD_DSCALE_DESC = 2107, + CUDNN_ATTR_OPERATION_NORM_BWD_DBIAS_DESC = 2108, + CUDNN_ATTR_OPERATION_NORM_BWD_DXDESC = 2109, + 
CUDNN_ATTR_OPERATION_NORM_BWD_PEER_STAT_DESCS = 2110, + + CUDNN_ATTR_OPERATION_RESHAPE_XDESC = 2200, + CUDNN_ATTR_OPERATION_RESHAPE_YDESC = 2201, + + CUDNN_ATTR_RNG_DISTRIBUTION = 2300, + CUDNN_ATTR_RNG_NORMAL_DIST_MEAN = 2301, + CUDNN_ATTR_RNG_NORMAL_DIST_STANDARD_DEVIATION = 2302, + CUDNN_ATTR_RNG_UNIFORM_DIST_MAXIMUM = 2303, + CUDNN_ATTR_RNG_UNIFORM_DIST_MINIMUM = 2304, + CUDNN_ATTR_RNG_BERNOULLI_DIST_PROBABILITY = 2305, + + CUDNN_ATTR_OPERATION_RNG_YDESC = 2310, + CUDNN_ATTR_OPERATION_RNG_SEED = 2311, + CUDNN_ATTR_OPERATION_RNG_DESC = 2312, + CUDNN_ATTR_OPERATION_RNG_OFFSET_DESC = 2313, + +} cudnnBackendAttributeName_t; + +typedef enum { + CUDNN_TYPE_HANDLE = 0, + CUDNN_TYPE_DATA_TYPE, + CUDNN_TYPE_BOOLEAN, + CUDNN_TYPE_INT64, + CUDNN_TYPE_FLOAT, + CUDNN_TYPE_DOUBLE, + CUDNN_TYPE_VOID_PTR, + CUDNN_TYPE_CONVOLUTION_MODE, + CUDNN_TYPE_HEUR_MODE, + CUDNN_TYPE_KNOB_TYPE, + CUDNN_TYPE_NAN_PROPOGATION, + CUDNN_TYPE_NUMERICAL_NOTE, + CUDNN_TYPE_LAYOUT_TYPE, + CUDNN_TYPE_ATTRIB_NAME, + CUDNN_TYPE_POINTWISE_MODE, + CUDNN_TYPE_BACKEND_DESCRIPTOR, + CUDNN_TYPE_GENSTATS_MODE, + CUDNN_TYPE_BN_FINALIZE_STATS_MODE, + CUDNN_TYPE_REDUCTION_OPERATOR_TYPE, + CUDNN_TYPE_BEHAVIOR_NOTE, + CUDNN_TYPE_TENSOR_REORDERING_MODE, + CUDNN_TYPE_RESAMPLE_MODE, + CUDNN_TYPE_PADDING_MODE, + CUDNN_TYPE_INT32, + CUDNN_TYPE_CHAR, + CUDNN_TYPE_SIGNAL_MODE, + CUDNN_TYPE_FRACTION, + CUDNN_TYPE_NORM_MODE, + CUDNN_TYPE_NORM_FWD_PHASE, + CUDNN_TYPE_RNG_DISTRIBUTION +} cudnnBackendAttributeType_t; + +typedef enum { + CUDNN_BACKEND_POINTWISE_DESCRIPTOR = 0, + CUDNN_BACKEND_CONVOLUTION_DESCRIPTOR, + CUDNN_BACKEND_ENGINE_DESCRIPTOR, + CUDNN_BACKEND_ENGINECFG_DESCRIPTOR, + CUDNN_BACKEND_ENGINEHEUR_DESCRIPTOR, + CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR, + CUDNN_BACKEND_INTERMEDIATE_INFO_DESCRIPTOR, + CUDNN_BACKEND_KNOB_CHOICE_DESCRIPTOR, + CUDNN_BACKEND_KNOB_INFO_DESCRIPTOR, + CUDNN_BACKEND_LAYOUT_INFO_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_FORWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_BACKWARD_FILTER_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONVOLUTION_BACKWARD_DATA_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_POINTWISE_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_GEN_STATS_DESCRIPTOR, + CUDNN_BACKEND_OPERATIONGRAPH_DESCRIPTOR, + CUDNN_BACKEND_VARIANT_PACK_DESCRIPTOR, + CUDNN_BACKEND_TENSOR_DESCRIPTOR, + CUDNN_BACKEND_MATMUL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_MATMUL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_BN_FINALIZE_STATISTICS_DESCRIPTOR, + CUDNN_BACKEND_REDUCTION_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_REDUCTION_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_BN_BWD_WEIGHTS_DESCRIPTOR, + CUDNN_BACKEND_RESAMPLE_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESAMPLE_FWD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESAMPLE_BWD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_CONCAT_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_SIGNAL_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_NORM_FORWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_NORM_BACKWARD_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RESHAPE_DESCRIPTOR, + CUDNN_BACKEND_RNG_DESCRIPTOR, + CUDNN_BACKEND_OPERATION_RNG_DESCRIPTOR +} cudnnBackendDescriptorType_t; + +typedef enum { + CUDNN_NUMERICAL_NOTE_TENSOR_CORE = 0, + CUDNN_NUMERICAL_NOTE_DOWN_CONVERT_INPUTS, + CUDNN_NUMERICAL_NOTE_REDUCED_PRECISION_REDUCTION, + CUDNN_NUMERICAL_NOTE_FFT, + CUDNN_NUMERICAL_NOTE_NONDETERMINISTIC, + CUDNN_NUMERICAL_NOTE_WINOGRAD, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_4x4, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_6x6, + CUDNN_NUMERICAL_NOTE_WINOGRAD_TILE_13x13, + CUDNN_NUMERICAL_NOTE_TYPE_COUNT, +} cudnnBackendNumericalNote_t; + +typedef enum { + 
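+
+/*
+ * Usage sketch (editorial addition, not in the original header): a typical
+ * backend-API call sequence with the create/set/finalize/execute functions
+ * declared near the end of this file. Error checking is omitted; 'handle',
+ * 'engCfg', and 'varPack' are assumed to have been built beforehand.
+ *
+ *   cudnnBackendDescriptor_t plan;
+ *   cudnnBackendCreateDescriptor(CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR, &plan);
+ *   cudnnBackendSetAttribute(plan, CUDNN_ATTR_EXECUTION_PLAN_HANDLE,
+ *                            CUDNN_TYPE_HANDLE, 1, &handle);
+ *   cudnnBackendSetAttribute(plan, CUDNN_ATTR_EXECUTION_PLAN_ENGINE_CONFIG,
+ *                            CUDNN_TYPE_BACKEND_DESCRIPTOR, 1, &engCfg);
+ *   cudnnBackendFinalize(plan);
+ *   cudnnBackendExecute(handle, plan, varPack);
+ *   cudnnBackendDestroyDescriptor(plan);
+ */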
CUDNN_BEHAVIOR_NOTE_RUNTIME_COMPILATION = 0, + CUDNN_BEHAVIOR_NOTE_REQUIRES_FILTER_INT8x32_REORDER = 1, + CUDNN_BEHAVIOR_NOTE_REQUIRES_BIAS_INT8x32_REORDER = 2, + CUDNN_BEHAVIOR_NOTE_TYPE_COUNT, +} cudnnBackendBehaviorNote_t; + +typedef enum { + CUDNN_KNOB_TYPE_SPLIT_K = 0, + CUDNN_KNOB_TYPE_SWIZZLE = 1, + CUDNN_KNOB_TYPE_TILE_SIZE = 2, + CUDNN_KNOB_TYPE_USE_TEX = 3, + CUDNN_KNOB_TYPE_EDGE = 4, + CUDNN_KNOB_TYPE_KBLOCK = 5, + CUDNN_KNOB_TYPE_LDGA = 6, + CUDNN_KNOB_TYPE_LDGB = 7, + CUDNN_KNOB_TYPE_CHUNK_K = 8, + CUDNN_KNOB_TYPE_SPLIT_H = 9, + CUDNN_KNOB_TYPE_WINO_TILE = 10, + CUDNN_KNOB_TYPE_MULTIPLY = 11, + CUDNN_KNOB_TYPE_SPLIT_K_BUF = 12, + CUDNN_KNOB_TYPE_TILEK = 13, + CUDNN_KNOB_TYPE_STAGES = 14, + CUDNN_KNOB_TYPE_REDUCTION_MODE = 15, + CUDNN_KNOB_TYPE_CTA_SPLIT_K_MODE = 16, + CUDNN_KNOB_TYPE_SPLIT_K_SLC = 17, + CUDNN_KNOB_TYPE_IDX_MODE = 18, + CUDNN_KNOB_TYPE_SLICED = 19, + CUDNN_KNOB_TYPE_SPLIT_RS = 20, + CUDNN_KNOB_TYPE_SINGLEBUFFER = 21, + CUDNN_KNOB_TYPE_LDGC = 22, + CUDNN_KNOB_TYPE_SPECFILT = 23, + CUDNN_KNOB_TYPE_KERNEL_CFG = 24, + CUDNN_KNOB_TYPE_WORKSPACE = 25, + CUDNN_KNOB_TYPE_TILE_CGA = 26, + CUDNN_KNOB_TYPE_TILE_CGA_M = 27, + CUDNN_KNOB_TYPE_TILE_CGA_N = 28, + CUDNN_KNOB_TYPE_BLOCK_SIZE = 29, + CUDNN_KNOB_TYPE_OCCUPANCY = 30, + CUDNN_KNOB_TYPE_ARRAY_SIZE_PER_THREAD = 31, + CUDNN_KNOB_TYPE_NUM_C_PER_BLOCK = 32, + CUDNN_KNOB_TYPE_COUNTS, +} cudnnBackendKnobType_t; + +typedef enum { + CUDNN_LAYOUT_TYPE_PREFERRED_NCHW = 0, + CUDNN_LAYOUT_TYPE_PREFERRED_NHWC = 1, + CUDNN_LAYOUT_TYPE_PREFERRED_PAD4CK = 2, + CUDNN_LAYOUT_TYPE_PREFERRED_PAD8CK = 3, + CUDNN_LAYOUT_TYPE_COUNT = 4, +} cudnnBackendLayoutType_t; + +typedef enum { + CUDNN_HEUR_MODE_INSTANT = 0, + CUDNN_HEUR_MODE_B = 1, + CUDNN_HEUR_MODE_FALLBACK = 2, + CUDNN_HEUR_MODE_A = 3, + CUDNN_HEUR_MODES_COUNT = 4, +} cudnnBackendHeurMode_t; + +typedef enum { + CUDNN_TENSOR_REORDERING_NONE = 0, + CUDNN_TENSOR_REORDERING_INT8x32 = 1, + CUDNN_TENSOR_REORDERING_F16x16 = 2, +} cudnnBackendTensorReordering_t; + +typedef enum { + CUDNN_ZERO_PAD = 0, + CUDNN_NEG_INF_PAD = 1, + CUDNN_EDGE_VAL_PAD = 2, +} cudnnPaddingMode_t; + +typedef enum { + CUDNN_LAYER_NORM = 0, + CUDNN_INSTANCE_NORM = 1, + CUDNN_BATCH_NORM = 2, + CUDNN_GROUP_NORM = 3, +} cudnnBackendNormMode_t; + +typedef enum { + CUDNN_NORM_FWD_INFERENCE = 0, + CUDNN_NORM_FWD_TRAINING = 1, +} cudnnBackendNormFwdPhase_t; + +cudnnStatus_t CUDNNWINAPI +cudnnBackendCreateDescriptor(cudnnBackendDescriptorType_t descriptorType, cudnnBackendDescriptor_t *descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendDestroyDescriptor(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendInitialize(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendFinalize(cudnnBackendDescriptor_t descriptor); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendSetAttribute(cudnnBackendDescriptor_t descriptor, + cudnnBackendAttributeName_t attributeName, + cudnnBackendAttributeType_t attributeType, + int64_t elementCount, + const void *arrayOfElements); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendGetAttribute(cudnnBackendDescriptor_t const descriptor, + cudnnBackendAttributeName_t attributeName, + cudnnBackendAttributeType_t attributeType, + int64_t requestedElementCount, + int64_t *elementCount, + void *arrayOfElements); + +cudnnStatus_t CUDNNWINAPI +cudnnBackendExecute(cudnnHandle_t handle, cudnnBackendDescriptor_t executionPlan, cudnnBackendDescriptor_t variantPack); + +#if defined(__cplusplus) +} +#endif + +#endif /* _CUDNN_BACKEND_H_ */ diff --git 
a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_infer_v8.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_infer_v8.h
new file mode 100644
index 0000000000000000000000000000000000000000..5e4c91c93bdc0b5e69d9d6326b4e7384e35a8ca6
--- /dev/null
+++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_infer_v8.h
@@ -0,0 +1,571 @@
+/*
+ * Copyright 2014-2023 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+/*
+ * cudnn_cnn_infer : cuDNN's basic definitions and inference CNN functions.
+ */
+
+#if !defined(CUDNN_CNN_INFER_H_)
+#define CUDNN_CNN_INFER_H_
+
+#pragma once
+#include <cuda_runtime.h>
+#include <stdint.h>
+
+#include "cudnn_version.h"
+#include "cudnn_ops_infer.h"
+
+/* These version numbers are autogenerated, do not edit manually.
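+   (Editorial note, not in the original header: the values below must stay in
+   sync with CUDNN_MAJOR, CUDNN_MINOR, and CUDNN_PATCHLEVEL from
+   cudnn_version.h; the #if check that follows enforces this at compile time.)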
*/ +#define CUDNN_CNN_INFER_MAJOR 8 +#define CUDNN_CNN_INFER_MINOR 9 +#define CUDNN_CNN_INFER_PATCH 2 + +#if (CUDNN_CNN_INFER_MAJOR != CUDNN_MAJOR) || (CUDNN_CNN_INFER_MINOR != CUDNN_MINOR) || \ + (CUDNN_CNN_INFER_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN CNN INFER!!! +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +typedef struct cudnnConvolutionStruct *cudnnConvolutionDescriptor_t; + +/* + * convolution mode + */ +typedef enum { CUDNN_CONVOLUTION = 0, CUDNN_CROSS_CORRELATION = 1 } cudnnConvolutionMode_t; + +/* + * CUDNN Reorder + */ +typedef enum { + CUDNN_DEFAULT_REORDER = 0, + CUDNN_NO_REORDER = 1, +} cudnnReorderType_t; + +typedef struct cudnnConvolutionFwdAlgoPerfStruct { + cudnnConvolutionFwdAlgo_t algo; + cudnnStatus_t status; + float time; + size_t memory; + cudnnDeterminism_t determinism; + cudnnMathType_t mathType; + int reserved[3]; +} cudnnConvolutionFwdAlgoPerf_t; + +/* Create an instance of convolution descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnCreateConvolutionDescriptor(cudnnConvolutionDescriptor_t *convDesc); + +/* Destroy an instance of convolution descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnDestroyConvolutionDescriptor(cudnnConvolutionDescriptor_t convDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetConvolutionMathType(cudnnConvolutionDescriptor_t convDesc, cudnnMathType_t mathType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionMathType(cudnnConvolutionDescriptor_t convDesc, cudnnMathType_t *mathType); + +cudnnStatus_t CUDNNWINAPI +cudnnSetConvolutionGroupCount(cudnnConvolutionDescriptor_t convDesc, int groupCount); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionGroupCount(cudnnConvolutionDescriptor_t convDesc, int *groupCount); + +cudnnStatus_t CUDNNWINAPI +cudnnSetConvolutionReorderType(cudnnConvolutionDescriptor_t convDesc, cudnnReorderType_t reorderType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionReorderType(cudnnConvolutionDescriptor_t convDesc, cudnnReorderType_t *reorderType); + +cudnnStatus_t CUDNNWINAPI +cudnnSetConvolution2dDescriptor(cudnnConvolutionDescriptor_t convDesc, + int pad_h, /* zero-padding height */ + int pad_w, /* zero-padding width */ + int u, /* vertical filter stride */ + int v, /* horizontal filter stride */ + int dilation_h, /* filter dilation in the vertical dimension */ + int dilation_w, /* filter dilation in the horizontal dimension */ + cudnnConvolutionMode_t mode, + cudnnDataType_t computeType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolution2dDescriptor(const cudnnConvolutionDescriptor_t convDesc, + int *pad_h, /* zero-padding height */ + int *pad_w, /* zero-padding width */ + int *u, /* vertical filter stride */ + int *v, /* horizontal filter stride */ + int *dilation_h, /* filter dilation in the vertical dimension */ + int *dilation_w, /* filter dilation in the horizontal dimension */ + cudnnConvolutionMode_t *mode, + cudnnDataType_t *computeType); + +cudnnStatus_t CUDNNWINAPI +cudnnSetConvolutionNdDescriptor(cudnnConvolutionDescriptor_t convDesc, + int arrayLength, /* nbDims-2 size */ + const int padA[], + const int filterStrideA[], + const int dilationA[], + cudnnConvolutionMode_t mode, + cudnnDataType_t computeType); /* convolution data type */ + +/* Helper function to return the dimensions of the output tensor given a convolution descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionNdDescriptor(const cudnnConvolutionDescriptor_t convDesc, + int arrayLengthRequested, + int *arrayLength, + int padA[], + int strideA[], + int dilationA[], + cudnnConvolutionMode_t *mode, + 
cudnnDataType_t *computeType); /* convolution data type */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolution2dForwardOutputDim(const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t inputTensorDesc, + const cudnnFilterDescriptor_t filterDesc, + int *n, + int *c, + int *h, + int *w); + +/* Helper function to return the dimensions of the output tensor given a convolution descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionNdForwardOutputDim(const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t inputTensorDesc, + const cudnnFilterDescriptor_t filterDesc, + int nbDims, + int tensorOuputDimA[]); + +/* helper function to provide the convolution forward algo that fit best the requirement */ +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionForwardAlgorithmMaxCount(cudnnHandle_t handle, int *count); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionForwardAlgorithm_v7(cudnnHandle_t handle, + const cudnnTensorDescriptor_t srcDesc, + const cudnnFilterDescriptor_t filterDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t destDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionFwdAlgoPerf_t *perfResults); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionForwardAlgorithm(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t yDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionFwdAlgoPerf_t *perfResults); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionForwardAlgorithmEx(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t yDesc, + void *y, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionFwdAlgoPerf_t *perfResults, + void *workSpace, + size_t workSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnIm2Col(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnFilterDescriptor_t wDesc, + const cudnnConvolutionDescriptor_t convDesc, + void *colBuffer); + +cudnnStatus_t CUDNNWINAPI +cudnnReorderFilterAndBias(cudnnHandle_t handle, + const cudnnFilterDescriptor_t filterDesc, + cudnnReorderType_t reorderType, + const void *filterData, + void *reorderedFilterData, + int reorderBias, + const void *biasData, + void *reorderedBiasData); + +/* Helper function to return the minimum size of the workspace to be passed to the convolution given an algo*/ +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionForwardWorkspaceSize(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const cudnnFilterDescriptor_t wDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t yDesc, + cudnnConvolutionFwdAlgo_t algo, + size_t *sizeInBytes); + +/* Convolution functions: All of the form "output = alpha * Op(inputs) + beta * output" */ + +/* Function to perform the forward pass for batch convolution */ +cudnnStatus_t CUDNNWINAPI +cudnnConvolutionForward(cudnnHandle_t handle, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnConvolutionDescriptor_t convDesc, + cudnnConvolutionFwdAlgo_t algo, + void *workSpace, + size_t workSpaceSizeInBytes, + const void *beta, + const cudnnTensorDescriptor_t yDesc, + void *y); + +/* Fused 
conv/bias/activation operation : y = Act( alpha1 * conv(x) + alpha2 * z + bias ) */ +cudnnStatus_t CUDNNWINAPI +cudnnConvolutionBiasActivationForward(cudnnHandle_t handle, + const void *alpha1, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnConvolutionDescriptor_t convDesc, + cudnnConvolutionFwdAlgo_t algo, + void *workSpace, + size_t workSpaceSizeInBytes, + const void *alpha2, + const cudnnTensorDescriptor_t zDesc, + const void *z, + const cudnnTensorDescriptor_t biasDesc, + const void *bias, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t yDesc, + void *y); + +/* helper function to provide the convolution backward data algo that fit best the requirement */ + +typedef struct cudnnConvolutionBwdDataAlgoPerfStruct { + cudnnConvolutionBwdDataAlgo_t algo; + cudnnStatus_t status; + float time; + size_t memory; + cudnnDeterminism_t determinism; + cudnnMathType_t mathType; + int reserved[3]; +} cudnnConvolutionBwdDataAlgoPerf_t; + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardDataAlgorithmMaxCount(cudnnHandle_t handle, int *count); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionBackwardDataAlgorithm(cudnnHandle_t handle, + const cudnnFilterDescriptor_t wDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t dxDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdDataAlgoPerf_t *perfResults); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionBackwardDataAlgorithmEx(cudnnHandle_t handle, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t dxDesc, + void *dx, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdDataAlgoPerf_t *perfResults, + void *workSpace, + size_t workSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardDataAlgorithm_v7(cudnnHandle_t handle, + const cudnnFilterDescriptor_t filterDesc, + const cudnnTensorDescriptor_t diffDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t gradDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdDataAlgoPerf_t *perfResults); + +/* + * convolution algorithm (which requires potentially some workspace) + */ + +/* Helper function to return the minimum size of the workspace to be passed to the convolution given an algo*/ +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardDataWorkspaceSize(cudnnHandle_t handle, + const cudnnFilterDescriptor_t wDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t dxDesc, + cudnnConvolutionBwdDataAlgo_t algo, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnConvolutionBackwardData(cudnnHandle_t handle, + const void *alpha, + const cudnnFilterDescriptor_t wDesc, + const void *w, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnConvolutionDescriptor_t convDesc, + cudnnConvolutionBwdDataAlgo_t algo, + void *workSpace, + size_t workSpaceSizeInBytes, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* Helper function to calculate folding descriptors for dgrad */ +cudnnStatus_t CUDNNWINAPI +cudnnGetFoldedConvBackwardDataDescriptors(const cudnnHandle_t handle, + const cudnnFilterDescriptor_t filterDesc, + const 
cudnnTensorDescriptor_t diffDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnTensorDescriptor_t gradDesc, + const cudnnTensorFormat_t transformFormat, + cudnnFilterDescriptor_t foldedFilterDesc, + cudnnTensorDescriptor_t paddedDiffDesc, + cudnnConvolutionDescriptor_t foldedConvDesc, + cudnnTensorDescriptor_t foldedGradDesc, + cudnnTensorTransformDescriptor_t filterFoldTransDesc, + cudnnTensorTransformDescriptor_t diffPadTransDesc, + cudnnTensorTransformDescriptor_t gradFoldTransDesc, + cudnnTensorTransformDescriptor_t gradUnfoldTransDesc); + +/* cudnnFusedOps... */ +struct cudnnFusedOpsConstParamStruct; +typedef struct cudnnFusedOpsConstParamStruct *cudnnFusedOpsConstParamPack_t; + +struct cudnnFusedOpsVariantParamStruct; +typedef struct cudnnFusedOpsVariantParamStruct *cudnnFusedOpsVariantParamPack_t; + +struct cudnnFusedOpsPlanStruct; +typedef struct cudnnFusedOpsPlanStruct *cudnnFusedOpsPlan_t; + +typedef enum { + /* each op in [ ] can be disabled by passing NULL ptr */ + /* [per channel scale], [per channel bias], [activation], convolution, [generate BN stats] */ + CUDNN_FUSED_SCALE_BIAS_ACTIVATION_CONV_BNSTATS = 0, + /* [per channel scale], [per channel bias], [activation], convolutionBackwardWeights */ + CUDNN_FUSED_SCALE_BIAS_ACTIVATION_WGRAD = 1, + /* utility for BN training in BN-conv fusion */ + /* computes the equivalent scale and bias from ySum ySqSum and learned scale, bias */ + /* optionally update running stats and generate saved stats */ + CUDNN_FUSED_BN_FINALIZE_STATISTICS_TRAINING = 2, + /* utility for BN inference in BN-conv fusion */ + /* computes the equivalent scale and bias from learned running stats and learned scale, bias */ + CUDNN_FUSED_BN_FINALIZE_STATISTICS_INFERENCE = 3, + /* reserved for future use: convolution, [per channel scale], [per channel bias], [residual add], [activation] */ + CUDNN_FUSED_CONV_SCALE_BIAS_ADD_ACTIVATION = 4, + /* reserved for future use: [per channel scale], [per channel bias], [residual add], activation, bitmask */ + CUDNN_FUSED_SCALE_BIAS_ADD_ACTIVATION_GEN_BITMASK = 5, + /* reserved for future use */ + CUDNN_FUSED_DACTIVATION_FORK_DBATCHNORM = 6, +} cudnnFusedOps_t; + +typedef enum { + /* set XDESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get XDESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_XDESC = 0, + /* set/get XDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_XDATA_PLACEHOLDER = 1, + /* set/get BN_MODE: pass cudnnBatchNormMode_t* */ + CUDNN_PARAM_BN_MODE = 2, + /* set CUDNN_PARAM_BN_EQSCALEBIAS_DESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get CUDNN_PARAM_BN_EQSCALEBIAS_DESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_BN_EQSCALEBIAS_DESC = 3, + /* set/get BN_EQSCALE_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_EQSCALE_PLACEHOLDER = 4, + /* set/get BN_EQBIAS_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_EQBIAS_PLACEHOLDER = 5, + /* set ACTIVATION_DESC: pass previously initialized cudnnActivationDescriptor_t */ + /* get ACTIVATION_DESC: pass previously created cudnnActivationDescriptor_t */ + CUDNN_PARAM_ACTIVATION_DESC = 6, + /* set CONV_DESC: pass previously initialized cudnnConvolutionDescriptor_t */ + /* get CONV_DESC: pass previously created cudnnConvolutionDescriptor_t */ + CUDNN_PARAM_CONV_DESC = 7, + /* set WDESC: pass previously initialized cudnnFilterDescriptor_t */ + /* get WDESC: pass previously created cudnnFilterDescriptor_t */ + 
CUDNN_PARAM_WDESC = 8, + /* set/get WDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_WDATA_PLACEHOLDER = 9, + /* set DWDESC: pass previously initialized cudnnFilterDescriptor_t */ + /* get DWDESC: pass previously created cudnnFilterDescriptor_t */ + CUDNN_PARAM_DWDESC = 10, + /* set/get DWDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_DWDATA_PLACEHOLDER = 11, + /* set YDESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get YDESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_YDESC = 12, + /* set/get YDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_YDATA_PLACEHOLDER = 13, + /* set DYDESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get DYDESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_DYDESC = 14, + /* set/get DYDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_DYDATA_PLACEHOLDER = 15, + /* set YSTATS_DESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get YSTATS_DESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_YSTATS_DESC = 16, + /* set/get YSUM_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_YSUM_PLACEHOLDER = 17, + /* set/get YSQSUM_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_YSQSUM_PLACEHOLDER = 18, + /* set CUDNN_PARAM_BN_SCALEBIAS_MEANVAR_DESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get CUDNN_PARAM_BN_SCALEBIAS_MEANVAR_DESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_BN_SCALEBIAS_MEANVAR_DESC = 19, + /* set/get CUDNN_PARAM_BN_SCALE_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_SCALE_PLACEHOLDER = 20, + /* set/get CUDNN_PARAM_BN_BIAS_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_BIAS_PLACEHOLDER = 21, + /* set/get CUDNN_PARAM_BN_SAVED_MEAN_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_SAVED_MEAN_PLACEHOLDER = 22, + /* set/get CUDNN_PARAM_BN_SAVED_INVSTD_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_SAVED_INVSTD_PLACEHOLDER = 23, + /* set/get CUDNN_PARAM_BN_RUNNING_MEAN_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_RUNNING_MEAN_PLACEHOLDER = 24, + /* set/get CUDNN_PARAM_BN_RUNNING_VAR_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_RUNNING_VAR_PLACEHOLDER = 25, + + /* set ZDESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get ZDESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_ZDESC = 26, + /* set/get ZDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_ZDATA_PLACEHOLDER = 27, + /* set BN_Z_EQSCALEBIAS_DESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get BN_Z_EQSCALEBIAS_DESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_BN_Z_EQSCALEBIAS_DESC = 28, + /* set/get BN_Z_EQSCALE_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_Z_EQSCALE_PLACEHOLDER = 29, + /* set/get BN_Z_EQBIAS_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_Z_EQBIAS_PLACEHOLDER = 30, + + /* set ACTIVATION_BITMASK_DESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get ACTIVATION_BITMASK_DESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_ACTIVATION_BITMASK_DESC = 31, + /* set/get ACTIVATION_BITMASK_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + 
CUDNN_PARAM_ACTIVATION_BITMASK_PLACEHOLDER = 32, + + /* set DXDESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get DXDESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_DXDESC = 33, + /* set/get DXDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_DXDATA_PLACEHOLDER = 34, + /* set DZDESC: pass previously initialized cudnnTensorDescriptor_t */ + /* get DZDESC: pass previously created cudnnTensorDescriptor_t */ + CUDNN_PARAM_DZDESC = 35, + /* set/get DZDATA_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_DZDATA_PLACEHOLDER = 36, + /* set/get CUDNN_PARAM_BN_DSCALE_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_DSCALE_PLACEHOLDER = 37, + /* set/get CUDNN_PARAM_BN_DBIAS_PLACEHOLDER: pass cudnnFusedOpsPointerPlaceHolder_t* */ + CUDNN_PARAM_BN_DBIAS_PLACEHOLDER = 38, +} cudnnFusedOpsConstParamLabel_t; + +typedef enum { + CUDNN_PTR_NULL = 0, + CUDNN_PTR_ELEM_ALIGNED = 1, + CUDNN_PTR_16B_ALIGNED = 2, +} cudnnFusedOpsPointerPlaceHolder_t; + +typedef enum { + /* set: pass void* pointing to dev memory */ + /* get: pass void** pointing to host memory */ + CUDNN_PTR_XDATA = 0, + CUDNN_PTR_BN_EQSCALE = 1, + CUDNN_PTR_BN_EQBIAS = 2, + CUDNN_PTR_WDATA = 3, + CUDNN_PTR_DWDATA = 4, + CUDNN_PTR_YDATA = 5, + CUDNN_PTR_DYDATA = 6, + CUDNN_PTR_YSUM = 7, + CUDNN_PTR_YSQSUM = 8, + CUDNN_PTR_WORKSPACE = 9, + CUDNN_PTR_BN_SCALE = 10, + CUDNN_PTR_BN_BIAS = 11, + CUDNN_PTR_BN_SAVED_MEAN = 12, + CUDNN_PTR_BN_SAVED_INVSTD = 13, + CUDNN_PTR_BN_RUNNING_MEAN = 14, + CUDNN_PTR_BN_RUNNING_VAR = 15, + CUDNN_PTR_ZDATA = 16, + CUDNN_PTR_BN_Z_EQSCALE = 17, + CUDNN_PTR_BN_Z_EQBIAS = 18, + CUDNN_PTR_ACTIVATION_BITMASK = 19, + CUDNN_PTR_DXDATA = 20, + CUDNN_PTR_DZDATA = 21, + CUDNN_PTR_BN_DSCALE = 22, + CUDNN_PTR_BN_DBIAS = 23, + + /* set/get: pass size_t* pointing to host memory */ + CUDNN_SCALAR_SIZE_T_WORKSPACE_SIZE_IN_BYTES = 100, + /* set/get: pass int64_t* pointing to host memory */ + CUDNN_SCALAR_INT64_T_BN_ACCUMULATION_COUNT = 101, + /* set/get: pass double* pointing to host memory */ + CUDNN_SCALAR_DOUBLE_BN_EXP_AVG_FACTOR = 102, + /* set/get: pass double* pointing to host memory */ + CUDNN_SCALAR_DOUBLE_BN_EPSILON = 103, +} cudnnFusedOpsVariantParamLabel_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCnnInferVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_CNN_INFER_H_ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_train_v8.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_train_v8.h new file mode 100644 index 0000000000000000000000000000000000000000..ee0358b51d8b2c48880cf2f3cde7adf83c112336 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_cnn_train_v8.h @@ -0,0 +1,219 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. 
Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+/*
+ * cudnn_cnn_train : cuDNN's basic definitions and training CNN functions.
+ */
+
+#pragma once
+#include <cuda_runtime.h>
+#include <stdint.h>
+
+#include "cudnn_version.h"
+#include "cudnn_ops_infer.h"
+#include "cudnn_ops_train.h"
+#include "cudnn_cnn_infer.h"
+
+/* These version numbers are autogenerated, do not edit manually. */
+#define CUDNN_CNN_TRAIN_MAJOR 8
+#define CUDNN_CNN_TRAIN_MINOR 9
+#define CUDNN_CNN_TRAIN_PATCH 2
+
+#if (CUDNN_CNN_TRAIN_MAJOR != CUDNN_MAJOR) || (CUDNN_CNN_TRAIN_MINOR != CUDNN_MINOR) || \
+    (CUDNN_CNN_TRAIN_PATCH != CUDNN_PATCHLEVEL)
+#error Version mismatch in cuDNN CNN TRAIN!!!
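+
+/*
+ * Illustrative sketch (editorial addition, not in the original header): the
+ * check above catches a version mismatch at compile time; the same
+ * consistency can also be verified at runtime with the checker declared at
+ * the end of this file, e.g.
+ *
+ *   if (cudnnCnnTrainVersionCheck() != CUDNN_STATUS_SUCCESS) {
+ *       // handle inconsistent cuDNN sub-library versions
+ *   }
+ */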
+#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* helper function to provide the convolution backward filter algo that fit best the requirement */ + +typedef struct cudnnConvolutionBwdFilterAlgoPerfStruct { + cudnnConvolutionBwdFilterAlgo_t algo; + cudnnStatus_t status; + float time; + size_t memory; + cudnnDeterminism_t determinism; + cudnnMathType_t mathType; + int reserved[3]; +} cudnnConvolutionBwdFilterAlgoPerf_t; + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardFilterAlgorithmMaxCount(cudnnHandle_t handle, int *count); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionBackwardFilterAlgorithm(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t dwDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdFilterAlgoPerf_t *perfResults); + +cudnnStatus_t CUDNNWINAPI +cudnnFindConvolutionBackwardFilterAlgorithmEx(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t dyDesc, + const void *y, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t dwDesc, + void *dw, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdFilterAlgoPerf_t *perfResults, + void *workSpace, + size_t workSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardFilterAlgorithm_v7(cudnnHandle_t handle, + const cudnnTensorDescriptor_t srcDesc, + const cudnnTensorDescriptor_t diffDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t gradDesc, + const int requestedAlgoCount, + int *returnedAlgoCount, + cudnnConvolutionBwdFilterAlgoPerf_t *perfResults); + +/* + * convolution algorithm (which requires potentially some workspace) + */ + +/* Helper function to return the minimum size of the workspace to be passed to the convolution given an algo*/ +cudnnStatus_t CUDNNWINAPI +cudnnGetConvolutionBackwardFilterWorkspaceSize(cudnnHandle_t handle, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnConvolutionDescriptor_t convDesc, + const cudnnFilterDescriptor_t gradDesc, + cudnnConvolutionBwdFilterAlgo_t algo, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnConvolutionBackwardFilter(cudnnHandle_t handle, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnConvolutionDescriptor_t convDesc, + cudnnConvolutionBwdFilterAlgo_t algo, + void *workSpace, + size_t workSpaceSizeInBytes, + const void *beta, + const cudnnFilterDescriptor_t dwDesc, + void *dw); + +/* Function to compute the bias gradient for batch convolution */ +cudnnStatus_t CUDNNWINAPI +cudnnConvolutionBackwardBias(cudnnHandle_t handle, + const void *alpha, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *beta, + const cudnnTensorDescriptor_t dbDesc, + void *db); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateFusedOpsConstParamPack(cudnnFusedOpsConstParamPack_t *constPack, cudnnFusedOps_t ops); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyFusedOpsConstParamPack(cudnnFusedOpsConstParamPack_t constPack); + +cudnnStatus_t CUDNNWINAPI +cudnnSetFusedOpsConstParamPackAttribute(cudnnFusedOpsConstParamPack_t constPack, + cudnnFusedOpsConstParamLabel_t paramLabel, + const void *param); + +cudnnStatus_t CUDNNWINAPI +cudnnGetFusedOpsConstParamPackAttribute(const cudnnFusedOpsConstParamPack_t 
constPack, + cudnnFusedOpsConstParamLabel_t paramLabel, + void *param, + int *isNULL); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateFusedOpsVariantParamPack(cudnnFusedOpsVariantParamPack_t *varPack, cudnnFusedOps_t ops); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyFusedOpsVariantParamPack(cudnnFusedOpsVariantParamPack_t varPack); + +cudnnStatus_t CUDNNWINAPI +cudnnSetFusedOpsVariantParamPackAttribute(cudnnFusedOpsVariantParamPack_t varPack, + cudnnFusedOpsVariantParamLabel_t paramLabel, + void *ptr); + +cudnnStatus_t CUDNNWINAPI +cudnnGetFusedOpsVariantParamPackAttribute(const cudnnFusedOpsVariantParamPack_t varPack, + cudnnFusedOpsVariantParamLabel_t paramLabel, + void *ptr); + +cudnnStatus_t CUDNNWINAPI +cudnnCreateFusedOpsPlan(cudnnFusedOpsPlan_t *plan, cudnnFusedOps_t ops); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyFusedOpsPlan(cudnnFusedOpsPlan_t plan); + +cudnnStatus_t CUDNNWINAPI +cudnnMakeFusedOpsPlan(cudnnHandle_t handle, + cudnnFusedOpsPlan_t plan, + const cudnnFusedOpsConstParamPack_t constPack, + size_t *workspaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnFusedOpsExecute(cudnnHandle_t handle, const cudnnFusedOpsPlan_t plan, cudnnFusedOpsVariantParamPack_t varPack); + +cudnnStatus_t CUDNNWINAPI +cudnnCnnTrainVersionCheck(void); + +#if defined(__cplusplus) +} +#endif diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_infer_v8.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_infer_v8.h new file mode 100644 index 0000000000000000000000000000000000000000..79ba34cc1a1557462d49b63a9cb52d9bfe149693 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_infer_v8.h @@ -0,0 +1,1183 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. 
These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+/*
+ * cudnn_ops_infer : cuDNN's basic definitions and inference operations.
+ */
+
+#if !defined(CUDNN_OPS_INFER_H_)
+#define CUDNN_OPS_INFER_H_
+
+#include <cuda_runtime.h>
+#include <stdint.h>
+
+#include "cudnn_version.h"
+
+/* These version numbers are autogenerated, do not edit manually. */
+#define CUDNN_OPS_INFER_MAJOR 8
+#define CUDNN_OPS_INFER_MINOR 9
+#define CUDNN_OPS_INFER_PATCH 2
+
+#if (CUDNN_OPS_INFER_MAJOR != CUDNN_MAJOR) || (CUDNN_OPS_INFER_MINOR != CUDNN_MINOR) || \
+    (CUDNN_OPS_INFER_PATCH != CUDNN_PATCHLEVEL)
+#error Version mismatch in cuDNN OPS INFER!!!
+#endif
+
+#ifndef CUDNNWINAPI
+#ifdef _WIN32
+#define CUDNNWINAPI __stdcall
+#else
+#define CUDNNWINAPI
+#endif
+#endif
+
+/* Warnings for deprecated APIs are enabled using the CUDNN_WARN_DEPRECATED macro */
+#if defined(CUDNN_WARN_DEPRECATED) && (defined(__GNUC__) || defined(__clang__))
+/* GCC, Intel C/C++, Cray C/C++, CLANG, IBM XL C/C++ little endian */
+#define CUDNN_DEPRECATED __attribute__((deprecated))
+#elif defined(CUDNN_WARN_DEPRECATED) && defined(_MSC_VER)
+/* Microsoft Visual C++ */
+#define CUDNN_DEPRECATED __declspec(deprecated)
+#elif defined(CUDNN_WARN_DEPRECATED) && (__cplusplus >= 201402L)
+/* C++14 compilers */
+#define CUDNN_DEPRECATED [[deprecated]]
+#else
+/* No support for the deprecated attribute */
+#define CUDNN_DEPRECATED
+#endif
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+struct cudnnContext;
+typedef struct cudnnContext *cudnnHandle_t;
+
+size_t CUDNNWINAPI
+cudnnGetVersion(void);
+
+size_t CUDNNWINAPI
+cudnnGetMaxDeviceVersion(void);
+
+/* Returns CUDA Runtime version statically linked against cudnn */
+size_t CUDNNWINAPI
+cudnnGetCudartVersion(void);
+
+/*
+ * CUDNN return codes
+ */
+typedef enum {
+    CUDNN_STATUS_SUCCESS                      = 0,
+    CUDNN_STATUS_NOT_INITIALIZED              = 1,
+    CUDNN_STATUS_ALLOC_FAILED                 = 2,
+    CUDNN_STATUS_BAD_PARAM                    = 3,
+    CUDNN_STATUS_INTERNAL_ERROR               = 4,
+    CUDNN_STATUS_INVALID_VALUE                = 5,
+    CUDNN_STATUS_ARCH_MISMATCH                = 6,
+    CUDNN_STATUS_MAPPING_ERROR                = 7,
+    CUDNN_STATUS_EXECUTION_FAILED             = 8,
+    CUDNN_STATUS_NOT_SUPPORTED                = 9,
+    CUDNN_STATUS_LICENSE_ERROR                = 10,
+    CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING = 11,
+    CUDNN_STATUS_RUNTIME_IN_PROGRESS          = 12,
+    CUDNN_STATUS_RUNTIME_FP_OVERFLOW          = 13,
+    CUDNN_STATUS_VERSION_MISMATCH             = 14,
+} cudnnStatus_t;
+
+/* human-readable error messages */
+const char *CUDNNWINAPI
+cudnnGetErrorString(cudnnStatus_t status);
+
+/* Forward definition in this version only */
+typedef struct cudnnRuntimeTag_t cudnnRuntimeTag_t;
+
+typedef enum {
+    CUDNN_ERRQUERY_RAWCODE     = 0,
+    CUDNN_ERRQUERY_NONBLOCKING = 1,
+    CUDNN_ERRQUERY_BLOCKING    = 2,
+} cudnnErrQueryMode_t;
+
+cudnnStatus_t CUDNNWINAPI
+cudnnQueryRuntimeError(cudnnHandle_t handle, cudnnStatus_t *rstatus, cudnnErrQueryMode_t mode, cudnnRuntimeTag_t
*tag); + +#ifndef __LIBRARY_TYPES_H__ + +typedef enum libraryPropertyType_t { MAJOR_VERSION, MINOR_VERSION, PATCH_LEVEL } libraryPropertyType; + +#endif + +cudnnStatus_t CUDNNWINAPI +cudnnGetProperty(libraryPropertyType type, int *value); + +cudnnStatus_t CUDNNWINAPI +cudnnCreate(cudnnHandle_t *handle); +cudnnStatus_t CUDNNWINAPI +cudnnDestroy(cudnnHandle_t handle); +cudnnStatus_t CUDNNWINAPI +cudnnSetStream(cudnnHandle_t handle, cudaStream_t streamId); +cudnnStatus_t CUDNNWINAPI +cudnnGetStream(cudnnHandle_t handle, cudaStream_t *streamId); + +/* Data structures to represent Image/Filter and the Neural Network Layer */ +typedef struct cudnnTensorStruct *cudnnTensorDescriptor_t; +typedef struct cudnnPoolingStruct *cudnnPoolingDescriptor_t; +typedef struct cudnnFilterStruct *cudnnFilterDescriptor_t; +typedef struct cudnnLRNStruct *cudnnLRNDescriptor_t; +typedef struct cudnnActivationStruct *cudnnActivationDescriptor_t; +typedef struct cudnnSpatialTransformerStruct *cudnnSpatialTransformerDescriptor_t; +typedef struct cudnnOpTensorStruct *cudnnOpTensorDescriptor_t; +typedef struct cudnnReduceTensorStruct *cudnnReduceTensorDescriptor_t; +typedef struct cudnnCTCLossStruct *cudnnCTCLossDescriptor_t; +typedef struct cudnnTensorTransformStruct *cudnnTensorTransformDescriptor_t; +/* + * CUDNN data type + */ +typedef enum { + CUDNN_DATA_FLOAT = 0, + CUDNN_DATA_DOUBLE = 1, + CUDNN_DATA_HALF = 2, + CUDNN_DATA_INT8 = 3, + CUDNN_DATA_INT32 = 4, + CUDNN_DATA_INT8x4 = 5, + CUDNN_DATA_UINT8 = 6, + CUDNN_DATA_UINT8x4 = 7, + CUDNN_DATA_INT8x32 = 8, + CUDNN_DATA_BFLOAT16 = 9, + CUDNN_DATA_INT64 = 10, + CUDNN_DATA_BOOLEAN = 11, + CUDNN_DATA_FP8_E4M3 = 12, + CUDNN_DATA_FP8_E5M2 = 13, + CUDNN_DATA_FAST_FLOAT_FOR_FP8 = 14, +} cudnnDataType_t; + +/* + * CUDNN math type + */ +typedef enum { + CUDNN_DEFAULT_MATH = 0, + CUDNN_TENSOR_OP_MATH = 1, + CUDNN_TENSOR_OP_MATH_ALLOW_CONVERSION = 2, + CUDNN_FMA_MATH = 3, +} cudnnMathType_t; + +/* + * CUDNN propagate Nan + */ +typedef enum { + CUDNN_NOT_PROPAGATE_NAN = 0, + CUDNN_PROPAGATE_NAN = 1, +} cudnnNanPropagation_t; + +/* + * CUDNN Determinism + */ +typedef enum { + CUDNN_NON_DETERMINISTIC = 0, + CUDNN_DETERMINISTIC = 1, +} cudnnDeterminism_t; + +/* Maximum supported number of tensor dimensions */ +#define CUDNN_DIM_MAX 8 + +/* Create an instance of a generic Tensor descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnCreateTensorDescriptor(cudnnTensorDescriptor_t *tensorDesc); + +typedef enum { + CUDNN_TENSOR_NCHW = 0, /* row major (wStride = 1, hStride = w) */ + CUDNN_TENSOR_NHWC = 1, /* feature maps interleaved ( cStride = 1 )*/ + CUDNN_TENSOR_NCHW_VECT_C = 2, /* each image point is vector of element of C, vector length in data type */ +} cudnnTensorFormat_t; + +cudnnStatus_t CUDNNWINAPI +cudnnSetTensor4dDescriptor(cudnnTensorDescriptor_t tensorDesc, + cudnnTensorFormat_t format, + cudnnDataType_t dataType, /* image data type */ + int n, /* number of inputs (batch size) */ + int c, /* number of input feature maps */ + int h, /* height of input section */ + int w); /* width of input section */ + +cudnnStatus_t CUDNNWINAPI +cudnnSetTensor4dDescriptorEx(cudnnTensorDescriptor_t tensorDesc, + cudnnDataType_t dataType, /* image data type */ + int n, /* number of inputs (batch size) */ + int c, /* number of input feature maps */ + int h, /* height of input section */ + int w, /* width of input section */ + int nStride, + int cStride, + int hStride, + int wStride); + +cudnnStatus_t CUDNNWINAPI +cudnnGetTensor4dDescriptor(const cudnnTensorDescriptor_t tensorDesc, + cudnnDataType_t 
*dataType, /* image data type */ + int *n, /* number of inputs (batch size) */ + int *c, /* number of input feature maps */ + int *h, /* height of input section */ + int *w, /* width of input section */ + int *nStride, + int *cStride, + int *hStride, + int *wStride); + +cudnnStatus_t CUDNNWINAPI +cudnnSetTensorNdDescriptor(cudnnTensorDescriptor_t tensorDesc, + cudnnDataType_t dataType, + int nbDims, + const int dimA[], + const int strideA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnSetTensorNdDescriptorEx(cudnnTensorDescriptor_t tensorDesc, + cudnnTensorFormat_t format, + cudnnDataType_t dataType, + int nbDims, + const int dimA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnGetTensorNdDescriptor(const cudnnTensorDescriptor_t tensorDesc, + int nbDimsRequested, + cudnnDataType_t *dataType, + int *nbDims, + int dimA[], + int strideA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnGetTensorSizeInBytes(const cudnnTensorDescriptor_t tensorDesc, size_t *size); + +/* PixelOffset( n, c, h, w ) = n *input_stride + c * feature_stride + h * h_stride + w * w_stride + + 1)Example of all images in row major order one batch of features after the other (with an optional padding on row) + input_stride : c x h x h_stride + feature_stride : h x h_stride + h_stride : >= w ( h_stride = w if no padding) + w_stride : 1 + + + 2)Example of all images in row major with features maps interleaved + input_stride : c x h x h_stride + feature_stride : 1 + h_stride : w x c + w_stride : c + + 3)Example of all images in column major order one batch of features after the other (with optional padding on column) + input_stride : c x w x w_stride + feature_stride : w x w_stride + h_stride : 1 + w_stride : >= h + +*/ + +/* Destroy an instance of Tensor4d descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnDestroyTensorDescriptor(cudnnTensorDescriptor_t tensorDesc); + +/* Fold/unfold transforms */ +typedef enum { + CUDNN_TRANSFORM_FOLD = 0U, + CUDNN_TRANSFORM_UNFOLD = 1U, +} cudnnFoldingDirection_t; + +/** Create a destination descriptor for cudnnTransformTensor */ +cudnnStatus_t CUDNNWINAPI +cudnnInitTransformDest(const cudnnTensorTransformDescriptor_t transformDesc, + const cudnnTensorDescriptor_t srcDesc, + cudnnTensorDescriptor_t destDesc, + size_t *destSizeInBytes); + +/** Create an empty tensor transform descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnCreateTensorTransformDescriptor(cudnnTensorTransformDescriptor_t *transformDesc); + +/** Initialize a previously created tensor transform descriptor. */ +cudnnStatus_t CUDNNWINAPI +cudnnSetTensorTransformDescriptor(cudnnTensorTransformDescriptor_t transformDesc, + const uint32_t nbDims, + const cudnnTensorFormat_t destFormat, + const int32_t padBeforeA[], + const int32_t padAfterA[], + const uint32_t foldA[], + const cudnnFoldingDirection_t direction); + +/** + * Retrieves the values stored in a previously initialized tensor transform + * descriptor. + */ +cudnnStatus_t CUDNNWINAPI +cudnnGetTensorTransformDescriptor(cudnnTensorTransformDescriptor_t transformDesc, + uint32_t nbDimsRequested, + cudnnTensorFormat_t *destFormat, + int32_t padBeforeA[], + int32_t padAfterA[], + uint32_t foldA[], + cudnnFoldingDirection_t *direction); + +/** + * Destroys a previously created tensor transform descriptor. 
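+ *
+ * [Editor's note: the lines below are an illustrative sketch added for this
+ * document, not part of the original header. They show a typical transform
+ * descriptor lifecycle for context; td, padBefore and padAfter are
+ * placeholder names, and the sketch assumes a NULL foldA is acceptable
+ * when no folding is requested.]
+ *
+ *   cudnnTensorTransformDescriptor_t td;
+ *   cudnnCreateTensorTransformDescriptor(&td);
+ *   // 4-D NCHW transform that pads each spatial dimension by one element
+ *   int32_t padBefore[4] = {0, 0, 1, 1}, padAfter[4] = {0, 0, 1, 1};
+ *   cudnnSetTensorTransformDescriptor(td, 4, CUDNN_TENSOR_NCHW,
+ *                                     padBefore, padAfter, NULL,
+ *                                     CUDNN_TRANSFORM_FOLD);
+ *   // ... used with cudnnInitTransformDest() / cudnnTransformTensorEx() ...
+ *   cudnnDestroyTensorTransformDescriptor(td);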
+ */ +cudnnStatus_t CUDNNWINAPI +cudnnDestroyTensorTransformDescriptor(cudnnTensorTransformDescriptor_t transformDesc); + +/* Tensor layout conversion helper (y = alpha * x + beta * y) */ +cudnnStatus_t CUDNNWINAPI +cudnnTransformTensor(cudnnHandle_t handle, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t yDesc, + void *y); + +cudnnStatus_t CUDNNWINAPI +cudnnTransformTensorEx(cudnnHandle_t handle, + const cudnnTensorTransformDescriptor_t transDesc, + const void *alpha, + const cudnnTensorDescriptor_t srcDesc, + const void *srcData, + const void *beta, + const cudnnTensorDescriptor_t destDesc, + void *destData); + +/* Tensor Bias addition : C = alpha * A + beta * C */ +cudnnStatus_t CUDNNWINAPI +cudnnAddTensor(cudnnHandle_t handle, + const void *alpha, + const cudnnTensorDescriptor_t aDesc, + const void *A, + const void *beta, + const cudnnTensorDescriptor_t cDesc, + void *C); + +/* + * CUDNN OpTensor op type + */ +typedef enum { + CUDNN_OP_TENSOR_ADD = 0, + CUDNN_OP_TENSOR_MUL = 1, + CUDNN_OP_TENSOR_MIN = 2, + CUDNN_OP_TENSOR_MAX = 3, + CUDNN_OP_TENSOR_SQRT = 4, + CUDNN_OP_TENSOR_NOT = 5, +} cudnnOpTensorOp_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateOpTensorDescriptor(cudnnOpTensorDescriptor_t *opTensorDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetOpTensorDescriptor(cudnnOpTensorDescriptor_t opTensorDesc, + cudnnOpTensorOp_t opTensorOp, + cudnnDataType_t opTensorCompType, + cudnnNanPropagation_t opTensorNanOpt); + +cudnnStatus_t CUDNNWINAPI +cudnnGetOpTensorDescriptor(const cudnnOpTensorDescriptor_t opTensorDesc, + cudnnOpTensorOp_t *opTensorOp, + cudnnDataType_t *opTensorCompType, + cudnnNanPropagation_t *opTensorNanOpt); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyOpTensorDescriptor(cudnnOpTensorDescriptor_t opTensorDesc); + +/* Tensor operation : C = op( alpha1 * A, alpha2 * B ) + beta * C */ +/* B tensor is ignored for CUDNN_OP_TENSOR_SQRT, CUDNN_OP_TENSOR_NOT. */ +cudnnStatus_t CUDNNWINAPI +cudnnOpTensor(cudnnHandle_t handle, + const cudnnOpTensorDescriptor_t opTensorDesc, + const void *alpha1, + const cudnnTensorDescriptor_t aDesc, + const void *A, + const void *alpha2, + const cudnnTensorDescriptor_t bDesc, + const void *B, + const void *beta, + const cudnnTensorDescriptor_t cDesc, + void *C); + +/* + * CUDNN ReduceTensor op type + */ +typedef enum { + CUDNN_REDUCE_TENSOR_ADD = 0, + CUDNN_REDUCE_TENSOR_MUL = 1, + CUDNN_REDUCE_TENSOR_MIN = 2, + CUDNN_REDUCE_TENSOR_MAX = 3, + CUDNN_REDUCE_TENSOR_AMAX = 4, + CUDNN_REDUCE_TENSOR_AVG = 5, + CUDNN_REDUCE_TENSOR_NORM1 = 6, + CUDNN_REDUCE_TENSOR_NORM2 = 7, + CUDNN_REDUCE_TENSOR_MUL_NO_ZEROS = 8, +} cudnnReduceTensorOp_t; + +/* + * CUDNN ReduceTensor indices type + */ +typedef enum { + CUDNN_REDUCE_TENSOR_NO_INDICES = 0, + CUDNN_REDUCE_TENSOR_FLATTENED_INDICES = 1, +} cudnnReduceTensorIndices_t; + +/* + * CUDNN tensor indices type size (all unsigned) + * Currently not supported, default is 32 bit unsigned. 
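+ *
+ * [Editor's note: the lines below are an illustrative sketch added for this
+ * document, not part of the original header; rd is a placeholder name.]
+ * A descriptor for a max-reduction that also returns flattened 32-bit
+ * indices would typically be configured with the functions declared below:
+ *
+ *   cudnnReduceTensorDescriptor_t rd;
+ *   cudnnCreateReduceTensorDescriptor(&rd);
+ *   cudnnSetReduceTensorDescriptor(rd,
+ *                                  CUDNN_REDUCE_TENSOR_MAX,
+ *                                  CUDNN_DATA_FLOAT,
+ *                                  CUDNN_NOT_PROPAGATE_NAN,
+ *                                  CUDNN_REDUCE_TENSOR_FLATTENED_INDICES,
+ *                                  CUDNN_32BIT_INDICES);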
+ */ +typedef enum { + CUDNN_32BIT_INDICES = 0, + CUDNN_64BIT_INDICES = 1, + CUDNN_16BIT_INDICES = 2, + CUDNN_8BIT_INDICES = 3, +} cudnnIndicesType_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateReduceTensorDescriptor(cudnnReduceTensorDescriptor_t *reduceTensorDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetReduceTensorDescriptor(cudnnReduceTensorDescriptor_t reduceTensorDesc, + cudnnReduceTensorOp_t reduceTensorOp, + cudnnDataType_t reduceTensorCompType, + cudnnNanPropagation_t reduceTensorNanOpt, + cudnnReduceTensorIndices_t reduceTensorIndices, + cudnnIndicesType_t reduceTensorIndicesType); + +cudnnStatus_t CUDNNWINAPI +cudnnGetReduceTensorDescriptor(const cudnnReduceTensorDescriptor_t reduceTensorDesc, + cudnnReduceTensorOp_t *reduceTensorOp, + cudnnDataType_t *reduceTensorCompType, + cudnnNanPropagation_t *reduceTensorNanOpt, + cudnnReduceTensorIndices_t *reduceTensorIndices, + cudnnIndicesType_t *reduceTensorIndicesType); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyReduceTensorDescriptor(cudnnReduceTensorDescriptor_t reduceTensorDesc); + +/* Helper function to return the minimum size of the index space to be passed to the reduction given the input and + * output tensors */ +cudnnStatus_t CUDNNWINAPI +cudnnGetReductionIndicesSize(cudnnHandle_t handle, + const cudnnReduceTensorDescriptor_t reduceTensorDesc, + const cudnnTensorDescriptor_t aDesc, + const cudnnTensorDescriptor_t cDesc, + size_t *sizeInBytes); + +/* Helper function to return the minimum size of the workspace to be passed to the reduction given the input and output + * tensors */ +cudnnStatus_t CUDNNWINAPI +cudnnGetReductionWorkspaceSize(cudnnHandle_t handle, + const cudnnReduceTensorDescriptor_t reduceTensorDesc, + const cudnnTensorDescriptor_t aDesc, + const cudnnTensorDescriptor_t cDesc, + size_t *sizeInBytes); + +/* Tensor operation : C = reduce op( alpha * A ) + beta * C */ +/* The NaN propagation enum applies to only the min and max reduce ops; the other reduce ops propagate NaN as usual. */ +/* The indices space is ignored for reduce ops other than min or max. 
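+ *
+ * [Editor's note: the lines below are an illustrative sketch added for this
+ * document, not part of the original header. handle, rd, aDesc, cDesc, d_A
+ * and d_C are placeholders assumed to be set up elsewhere, and rd is assumed
+ * to use CUDNN_REDUCE_TENSOR_NO_INDICES so the indices buffer may be NULL.]
+ *
+ *   size_t wsBytes = 0;
+ *   cudnnGetReductionWorkspaceSize(handle, rd, aDesc, cDesc, &wsBytes);
+ *   void *ws = NULL;
+ *   cudaMalloc(&ws, wsBytes);
+ *   float alpha = 1.0f, beta = 0.0f;  // compute type of rd is FLOAT here
+ *   cudnnReduceTensor(handle, rd, NULL, 0, ws, wsBytes,
+ *                     &alpha, aDesc, d_A, &beta, cDesc, d_C);
+ *   cudaFree(ws);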
*/ +cudnnStatus_t CUDNNWINAPI +cudnnReduceTensor(cudnnHandle_t handle, + const cudnnReduceTensorDescriptor_t reduceTensorDesc, + void *indices, + size_t indicesSizeInBytes, + void *workspace, + size_t workspaceSizeInBytes, + const void *alpha, + const cudnnTensorDescriptor_t aDesc, + const void *A, + const void *beta, + const cudnnTensorDescriptor_t cDesc, + void *C); + +/* Set all values of a tensor to a given value : y[i] = value[0] */ +cudnnStatus_t CUDNNWINAPI +cudnnSetTensor(cudnnHandle_t handle, const cudnnTensorDescriptor_t yDesc, void *y, const void *valuePtr); + +/* Scale all values of a tensor by a given factor : y[i] = alpha * y[i] */ +cudnnStatus_t CUDNNWINAPI +cudnnScaleTensor(cudnnHandle_t handle, const cudnnTensorDescriptor_t yDesc, void *y, const void *alpha); + +/* Create an instance of FilterStruct */ +cudnnStatus_t CUDNNWINAPI +cudnnCreateFilterDescriptor(cudnnFilterDescriptor_t *filterDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetFilter4dDescriptor(cudnnFilterDescriptor_t filterDesc, + cudnnDataType_t dataType, /* image data type */ + cudnnTensorFormat_t format, + int k, /* number of output feature maps */ + int c, /* number of input feature maps */ + int h, /* height of each input filter */ + int w); /* width of each input filter */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetFilter4dDescriptor(const cudnnFilterDescriptor_t filterDesc, + cudnnDataType_t *dataType, /* image data type */ + cudnnTensorFormat_t *format, + int *k, /* number of output feature maps */ + int *c, /* number of input feature maps */ + int *h, /* height of each input filter */ + int *w); /* width of each input filter */ + +cudnnStatus_t CUDNNWINAPI +cudnnSetFilterNdDescriptor(cudnnFilterDescriptor_t filterDesc, + cudnnDataType_t dataType, /* image data type */ + cudnnTensorFormat_t format, + int nbDims, + const int filterDimA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnGetFilterNdDescriptor(const cudnnFilterDescriptor_t filterDesc, + int nbDimsRequested, + cudnnDataType_t *dataType, /* image data type */ + cudnnTensorFormat_t *format, + int *nbDims, + int filterDimA[]); +cudnnStatus_t CUDNNWINAPI +cudnnGetFilterSizeInBytes(const cudnnFilterDescriptor_t filterDesc, size_t *size); + +cudnnStatus_t CUDNNWINAPI +cudnnTransformFilter(cudnnHandle_t handle, + const cudnnTensorTransformDescriptor_t transDesc, + const void *alpha, + const cudnnFilterDescriptor_t srcDesc, + const void *srcData, + const void *beta, + const cudnnFilterDescriptor_t destDesc, + void *destData); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyFilterDescriptor(cudnnFilterDescriptor_t filterDesc); + +/* + * softmax algorithm + */ +typedef enum { + CUDNN_SOFTMAX_FAST = 0, /* straightforward implementation */ + CUDNN_SOFTMAX_ACCURATE = 1, /* subtract max from every point to avoid overflow */ + CUDNN_SOFTMAX_LOG = 2 +} cudnnSoftmaxAlgorithm_t; + +typedef enum { + CUDNN_SOFTMAX_MODE_INSTANCE = 0, /* compute the softmax over all C, H, W for each N */ + CUDNN_SOFTMAX_MODE_CHANNEL = 1 /* compute the softmax over all C for each H, W, N */ +} cudnnSoftmaxMode_t; + +/* Softmax functions: All of the form "output = alpha * Op(inputs) + beta * output" */ + +/* Function to perform forward softmax */ +cudnnStatus_t CUDNNWINAPI +cudnnSoftmaxForward(cudnnHandle_t handle, + cudnnSoftmaxAlgorithm_t algo, + cudnnSoftmaxMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t yDesc, + void *y); + +/* + * pooling mode + */ +typedef enum { + CUDNN_POOLING_MAX = 0, + 
CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING = 1, /* count for average includes padded values */ + CUDNN_POOLING_AVERAGE_COUNT_EXCLUDE_PADDING = 2, /* count for average does not include padded values */ + CUDNN_POOLING_MAX_DETERMINISTIC = 3 +} cudnnPoolingMode_t; + +/* Create an instance of pooling descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnCreatePoolingDescriptor(cudnnPoolingDescriptor_t *poolingDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetPooling2dDescriptor(cudnnPoolingDescriptor_t poolingDesc, + cudnnPoolingMode_t mode, + cudnnNanPropagation_t maxpoolingNanOpt, + int windowHeight, + int windowWidth, + int verticalPadding, + int horizontalPadding, + int verticalStride, + int horizontalStride); + +cudnnStatus_t CUDNNWINAPI +cudnnGetPooling2dDescriptor(const cudnnPoolingDescriptor_t poolingDesc, + cudnnPoolingMode_t *mode, + cudnnNanPropagation_t *maxpoolingNanOpt, + int *windowHeight, + int *windowWidth, + int *verticalPadding, + int *horizontalPadding, + int *verticalStride, + int *horizontalStride); + +cudnnStatus_t CUDNNWINAPI +cudnnSetPoolingNdDescriptor(cudnnPoolingDescriptor_t poolingDesc, + const cudnnPoolingMode_t mode, + const cudnnNanPropagation_t maxpoolingNanOpt, + int nbDims, + const int windowDimA[], + const int paddingA[], + const int strideA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnGetPoolingNdDescriptor(const cudnnPoolingDescriptor_t poolingDesc, + int nbDimsRequested, + cudnnPoolingMode_t *mode, + cudnnNanPropagation_t *maxpoolingNanOpt, + int *nbDims, + int windowDimA[], + int paddingA[], + int strideA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnGetPoolingNdForwardOutputDim(const cudnnPoolingDescriptor_t poolingDesc, + const cudnnTensorDescriptor_t inputTensorDesc, + int nbDims, + int outputTensorDimA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnGetPooling2dForwardOutputDim(const cudnnPoolingDescriptor_t poolingDesc, + const cudnnTensorDescriptor_t inputTensorDesc, + int *n, + int *c, + int *h, + int *w); + +/* Destroy an instance of pooling descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnDestroyPoolingDescriptor(cudnnPoolingDescriptor_t poolingDesc); + +/* Pooling functions: All of the form "output = alpha * Op(inputs) + beta * output" */ + +/* Function to perform forward pooling */ +cudnnStatus_t CUDNNWINAPI +cudnnPoolingForward(cudnnHandle_t handle, + const cudnnPoolingDescriptor_t poolingDesc, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t yDesc, + void *y); + +/* + * activation mode + */ +typedef enum { + CUDNN_ACTIVATION_SIGMOID = 0, + CUDNN_ACTIVATION_RELU = 1, + CUDNN_ACTIVATION_TANH = 2, + CUDNN_ACTIVATION_CLIPPED_RELU = 3, + CUDNN_ACTIVATION_ELU = 4, + CUDNN_ACTIVATION_IDENTITY = 5, + CUDNN_ACTIVATION_SWISH = 6 +} cudnnActivationMode_t; + +/* Activation functions: All of the form "output = alpha * Op(inputs) + beta * output" */ +cudnnStatus_t CUDNNWINAPI +cudnnCreateActivationDescriptor(cudnnActivationDescriptor_t *activationDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetActivationDescriptor(cudnnActivationDescriptor_t activationDesc, + cudnnActivationMode_t mode, + cudnnNanPropagation_t reluNanOpt, + double coef); /* ceiling for clipped RELU, alpha for ELU */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetActivationDescriptor(const cudnnActivationDescriptor_t activationDesc, + cudnnActivationMode_t *mode, + cudnnNanPropagation_t *reluNanOpt, + double *coef); /* ceiling for clipped RELU, alpha for ELU */ + +cudnnStatus_t CUDNNWINAPI +cudnnSetActivationDescriptorSwishBeta(cudnnActivationDescriptor_t 
activationDesc, double swish_beta); + +cudnnStatus_t CUDNNWINAPI +cudnnGetActivationDescriptorSwishBeta(cudnnActivationDescriptor_t activationDesc, double *swish_beta); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyActivationDescriptor(cudnnActivationDescriptor_t activationDesc); + +/* Function to perform forward activation */ +cudnnStatus_t CUDNNWINAPI +cudnnActivationForward(cudnnHandle_t handle, + cudnnActivationDescriptor_t activationDesc, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t yDesc, + void *y); + +/* + * Create an instance of LRN (Local Response Normalization) descriptor + * Uses lrnN=5, lrnAlpha=1e-4, lrnBeta=0.75, lrnK=2.0 as defaults from Krizhevsky'12 ImageNet paper + */ +cudnnStatus_t CUDNNWINAPI +cudnnCreateLRNDescriptor(cudnnLRNDescriptor_t *normDesc); + +#define CUDNN_LRN_MIN_N 1 /* minimum allowed lrnN */ +#define CUDNN_LRN_MAX_N 16 /* maximum allowed lrnN */ +#define CUDNN_LRN_MIN_K 1e-5 /* minimum allowed lrnK */ +#define CUDNN_LRN_MIN_BETA 0.01 /* minimum allowed lrnBeta */ + +/* LRN layer mode */ +typedef enum { + CUDNN_LRN_CROSS_CHANNEL_DIM1 = 0, /* Normalize across tensor's dimA[1] dimension */ +} cudnnLRNMode_t; + +/* + * Uses a window [center-lookBehind, center+lookAhead], where + * lookBehind = floor( (lrnN-1)/2 ), lookAhead = lrnN-lookBehind-1. + * Values of double parameters cast to tensor data type. + */ +cudnnStatus_t CUDNNWINAPI +cudnnSetLRNDescriptor(cudnnLRNDescriptor_t normDesc, unsigned lrnN, double lrnAlpha, double lrnBeta, double lrnK); +/* + * Retrieve the settings currently stored in an LRN layer descriptor + * Any of the provided pointers can be NULL (no corresponding value will be returned) + */ +cudnnStatus_t CUDNNWINAPI +cudnnGetLRNDescriptor(cudnnLRNDescriptor_t normDesc, unsigned *lrnN, double *lrnAlpha, double *lrnBeta, double *lrnK); + +/* Destroy an instance of LRN descriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnDestroyLRNDescriptor(cudnnLRNDescriptor_t lrnDesc); + +/* LRN functions: output = alpha * normalize(x) + beta * old_y */ + +/* LRN cross-channel forward computation. Double parameters cast to tensor data type */ +cudnnStatus_t CUDNNWINAPI +cudnnLRNCrossChannelForward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnLRNMode_t lrnMode, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t yDesc, + void *y); + +typedef enum { + CUDNN_DIVNORM_PRECOMPUTED_MEANS = 0, +} cudnnDivNormMode_t; + +/* LCN/divisive normalization functions: y = alpha * normalize(x) + beta * y */ +cudnnStatus_t CUDNNWINAPI +cudnnDivisiveNormalizationForward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnDivNormMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, /* same desc for means, temp, temp2 */ + const void *x, + const void *means, /* if NULL, means are assumed to be zero */ + void *temp, + void *temp2, + const void *beta, + const cudnnTensorDescriptor_t yDesc, + void *y); + +typedef enum { + /* bnScale, bnBias tensor dims are 1xCxHxWx.. (one value per CHW...-slice, normalized over N slice) */ + CUDNN_BATCHNORM_PER_ACTIVATION = 0, + + /* bnScale, bnBias tensor dims are 1xCx1x1 (one value per C-dim normalized over Nx1xHxW subtensors) */ + CUDNN_BATCHNORM_SPATIAL = 1, + + /* + * bnScale, bnBias tensor dims are 1xCx1x1 (one value per C-dim normalized over Nx1xHxW subtensors). 
+ * May be faster than CUDNN_BATCHNORM_SPATIAL but imposes some limits on the range of values + */ + CUDNN_BATCHNORM_SPATIAL_PERSISTENT = 2, +} cudnnBatchNormMode_t; + +#define CUDNN_BN_MIN_EPSILON 0.0 /* Minimum epsilon allowed to be used in the Batch Normalization formula */ + +/* + * Derives a tensor descriptor from layer data descriptor for BatchNormalization + * scale, invVariance, bnBias, bnScale tensors. Use this tensor desc for + * bnScaleBiasMeanVarDesc and bnScaleBiasDiffDesc in Batch Normalization forward and backward functions. + */ +cudnnStatus_t CUDNNWINAPI +cudnnDeriveBNTensorDescriptor(cudnnTensorDescriptor_t derivedBnDesc, + const cudnnTensorDescriptor_t xDesc, + cudnnBatchNormMode_t mode); + +typedef enum { + CUDNN_BATCHNORM_OPS_BN = 0, /* do batch normalization only */ + CUDNN_BATCHNORM_OPS_BN_ACTIVATION = 1, /* do batchNorm, then activation */ + CUDNN_BATCHNORM_OPS_BN_ADD_ACTIVATION = 2, /* do batchNorm, then elemWiseAdd, then activation */ +} cudnnBatchNormOps_t; + +/* + * Performs Batch Normalization during Inference: + * y[i] = bnScale[k]*(x[i]-estimatedMean[k])/sqrt(epsilon+estimatedVariance[k]) + bnBias[k] + * with bnScale, bnBias, runningMean, runningInvVariance tensors indexed + * according to spatial or per-activation mode. Refer to cudnnBatchNormalizationForwardTraining + * above for notes on function arguments. + */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationForwardInference(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + const cudnnTensorDescriptor_t xDesc, + const void *x, /* NxCxHxW */ + const cudnnTensorDescriptor_t yDesc, + void *y, /* NxCxHxW */ + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + const void *bnScale, + const void *bnBias, + const void *estimatedMean, + const void *estimatedVariance, + double epsilon); + +typedef enum { + /* bnScale, bnBias tensor dims are 1xCxHxWx.. (one value per CHW...-slice, normalized over N slice) */ + CUDNN_NORM_PER_ACTIVATION = 0, + + /* bnScale, bnBias tensor dims are 1xCx1x1 (one value per C-dim normalized over Nx1xHxW subtensors) */ + CUDNN_NORM_PER_CHANNEL = 1, +} cudnnNormMode_t; + +typedef enum { CUDNN_NORM_ALGO_STANDARD = 0, CUDNN_NORM_ALGO_PERSIST = 1 } cudnnNormAlgo_t; + +/* + * Derives a tensor descriptor from layer data descriptor for Normalization + * scale, invVariance, bnBias, bnScale tensors. Use this tensor desc for + * normScaleBiasMeanVarDesc and normScaleBiasDiffDesc in Normalization forward and backward functions. + */ +cudnnStatus_t CUDNNWINAPI +cudnnDeriveNormTensorDescriptor(cudnnTensorDescriptor_t derivedNormScaleBiasDesc, + cudnnTensorDescriptor_t derivedNormMeanVarDesc, + const cudnnTensorDescriptor_t xDesc, + cudnnNormMode_t mode, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +typedef enum { + CUDNN_NORM_OPS_NORM = 0, /* do normalization only */ + CUDNN_NORM_OPS_NORM_ACTIVATION = 1, /* do Norm, then activation */ + CUDNN_NORM_OPS_NORM_ADD_ACTIVATION = 2, /* do Norm, then elemWiseAdd, then activation */ +} cudnnNormOps_t; + +/* + * Performs Normalization during Inference: + * y[i] = normScale[k]*(x[i]-estimatedMean[k])/sqrt(epsilon+estimatedVariance[k]) + normBias[k] + * with normScale, normBias, runningMean, runningInvVariance tensors indexed + * according to per-channel or per-activation mode. Refer to cudnnNormalizationForwardTraining + * above for notes on function arguments. 
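+ *
+ * [Editor's note: the lines below are an illustrative sketch added for this
+ * document, not part of the original header. Names prefixed d_ are
+ * placeholder device buffers and all descriptors are assumed created
+ * elsewhere; the sketch also assumes NULL is accepted for the unused
+ * z/activation arguments.] For the plain normalization case
+ * (CUDNN_NORM_OPS_NORM), a call typically looks like:
+ *
+ *   float alpha = 1.0f, beta = 0.0f;
+ *   cudnnNormalizationForwardInference(handle,
+ *       CUDNN_NORM_PER_CHANNEL, CUDNN_NORM_OPS_NORM,
+ *       CUDNN_NORM_ALGO_STANDARD, &alpha, &beta,
+ *       xDesc, d_x, normScaleBiasDesc, d_scale, d_bias,
+ *       normMeanVarDesc, d_mean, d_var,
+ *       NULL, NULL, NULL,   // zDesc, z, activationDesc: unused here
+ *       yDesc, d_y,
+ *       1e-5, 1);           // epsilon, groupCnt (must be 1)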
+ */ +cudnnStatus_t CUDNNWINAPI +cudnnNormalizationForwardInference(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + const cudnnTensorDescriptor_t xDesc, + const void *x, /* NxCxHxW */ + const cudnnTensorDescriptor_t normScaleBiasDesc, + const void *normScale, + const void *normBias, + const cudnnTensorDescriptor_t normMeanVarDesc, + const void *estimatedMean, + const void *estimatedVariance, + const cudnnTensorDescriptor_t zDesc, + const void *z, + cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t yDesc, + void *y, /* NxCxHxW */ + double epsilon, + int groupCnt); /* Place hold for future work*/ + +/* APIs for spatial transformer network*/ +typedef enum { + CUDNN_SAMPLER_BILINEAR = 0, +} cudnnSamplerType_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateSpatialTransformerDescriptor(cudnnSpatialTransformerDescriptor_t *stDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSetSpatialTransformerNdDescriptor(cudnnSpatialTransformerDescriptor_t stDesc, + cudnnSamplerType_t samplerType, + cudnnDataType_t dataType, + const int nbDims, + const int dimA[]); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroySpatialTransformerDescriptor(cudnnSpatialTransformerDescriptor_t stDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfGridGeneratorForward(cudnnHandle_t handle, + const cudnnSpatialTransformerDescriptor_t stDesc, + const void *theta, + void *grid); + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfSamplerForward(cudnnHandle_t handle, + cudnnSpatialTransformerDescriptor_t stDesc, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *grid, + const void *beta, + cudnnTensorDescriptor_t yDesc, + void *y); + +typedef struct cudnnDropoutStruct *cudnnDropoutDescriptor_t; + +cudnnStatus_t CUDNNWINAPI +cudnnCreateDropoutDescriptor(cudnnDropoutDescriptor_t *dropoutDesc); + +cudnnStatus_t CUDNNWINAPI +cudnnDestroyDropoutDescriptor(cudnnDropoutDescriptor_t dropoutDesc); + +/*helper function to determine size of the states to be passed to cudnnSetDropoutDescriptor */ +cudnnStatus_t CUDNNWINAPI +cudnnDropoutGetStatesSize(cudnnHandle_t handle, size_t *sizeInBytes); + +/*helper function to determine size of the reserve space to be passed to dropout forward/backward calls */ +cudnnStatus_t CUDNNWINAPI +cudnnDropoutGetReserveSpaceSize(cudnnTensorDescriptor_t xdesc, size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnSetDropoutDescriptor(cudnnDropoutDescriptor_t dropoutDesc, + cudnnHandle_t handle, + float dropout, + void *states, + size_t stateSizeInBytes, + unsigned long long seed); + +/* Restores the dropout descriptor to a previously saved-off state */ +cudnnStatus_t CUDNNWINAPI +cudnnRestoreDropoutDescriptor(cudnnDropoutDescriptor_t dropoutDesc, + cudnnHandle_t handle, + float dropout, + void *states, + size_t stateSizeInBytes, + unsigned long long seed); + +cudnnStatus_t CUDNNWINAPI +cudnnGetDropoutDescriptor(cudnnDropoutDescriptor_t dropoutDesc, + cudnnHandle_t handle, + float *dropout, + void **states, + unsigned long long *seed); + +cudnnStatus_t CUDNNWINAPI +cudnnDropoutForward(cudnnHandle_t handle, + const cudnnDropoutDescriptor_t dropoutDesc, + const cudnnTensorDescriptor_t xdesc, + const void *x, + const cudnnTensorDescriptor_t ydesc, + void *y, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* TODO: remove */ + +typedef struct cudnnAlgorithmStruct *cudnnAlgorithmDescriptor_t; +typedef 
struct cudnnAlgorithmPerformanceStruct *cudnnAlgorithmPerformance_t; + +/* TODO: move these enums out to the appropriate submodule */ +typedef enum { + CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM = 0, + CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM = 1, + CUDNN_CONVOLUTION_FWD_ALGO_GEMM = 2, + CUDNN_CONVOLUTION_FWD_ALGO_DIRECT = 3, + CUDNN_CONVOLUTION_FWD_ALGO_FFT = 4, + CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING = 5, + CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD = 6, + CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED = 7, + CUDNN_CONVOLUTION_FWD_ALGO_COUNT = 8 +} cudnnConvolutionFwdAlgo_t; + +typedef enum { + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0 = 0, /* non-deterministic */ + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_1 = 1, + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_FFT = 2, + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_3 = 3, /* non-deterministic */ + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD = 4, /* not implemented */ + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_WINOGRAD_NONFUSED = 5, + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_FFT_TILING = 6, + CUDNN_CONVOLUTION_BWD_FILTER_ALGO_COUNT = 7 +} cudnnConvolutionBwdFilterAlgo_t; + +typedef enum { + CUDNN_CONVOLUTION_BWD_DATA_ALGO_0 = 0, /* non-deterministic */ + CUDNN_CONVOLUTION_BWD_DATA_ALGO_1 = 1, + CUDNN_CONVOLUTION_BWD_DATA_ALGO_FFT = 2, + CUDNN_CONVOLUTION_BWD_DATA_ALGO_FFT_TILING = 3, + CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD = 4, + CUDNN_CONVOLUTION_BWD_DATA_ALGO_WINOGRAD_NONFUSED = 5, + CUDNN_CONVOLUTION_BWD_DATA_ALGO_COUNT = 6 +} cudnnConvolutionBwdDataAlgo_t; + +typedef enum { + CUDNN_RNN_ALGO_STANDARD = 0, + CUDNN_RNN_ALGO_PERSIST_STATIC = 1, + CUDNN_RNN_ALGO_PERSIST_DYNAMIC = 2, + CUDNN_RNN_ALGO_PERSIST_STATIC_SMALL_H = 3, + CUDNN_RNN_ALGO_COUNT = 4, +} cudnnRNNAlgo_t; + +typedef enum { CUDNN_CTC_LOSS_ALGO_DETERMINISTIC = 0, CUDNN_CTC_LOSS_ALGO_NON_DETERMINISTIC = 1 } cudnnCTCLossAlgo_t; + +/* TODO: remove */ +typedef struct cudnnAlgorithmUnionStruct { + union Algorithm { + cudnnConvolutionFwdAlgo_t convFwdAlgo; + cudnnConvolutionBwdFilterAlgo_t convBwdFilterAlgo; + cudnnConvolutionBwdDataAlgo_t convBwdDataAlgo; + cudnnRNNAlgo_t RNNAlgo; + cudnnCTCLossAlgo_t CTCLossAlgo; + } algo; +} cudnnAlgorithm_t; + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCreateAlgorithmDescriptor(cudnnAlgorithmDescriptor_t *algoDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetAlgorithmDescriptor(cudnnAlgorithmDescriptor_t algoDesc, cudnnAlgorithm_t algorithm); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetAlgorithmDescriptor(const cudnnAlgorithmDescriptor_t algoDesc, cudnnAlgorithm_t *algorithm); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCopyAlgorithmDescriptor(const cudnnAlgorithmDescriptor_t src, cudnnAlgorithmDescriptor_t dest); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnDestroyAlgorithmDescriptor(cudnnAlgorithmDescriptor_t algoDesc); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnCreateAlgorithmPerformance(cudnnAlgorithmPerformance_t *algoPerf, int numberToCreate); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSetAlgorithmPerformance(cudnnAlgorithmPerformance_t algoPerf, + cudnnAlgorithmDescriptor_t algoDesc, + cudnnStatus_t status, + float time, + size_t memory); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnGetAlgorithmPerformance(const cudnnAlgorithmPerformance_t algoPerf, + cudnnAlgorithmDescriptor_t *algoDesc, + cudnnStatus_t *status, + float *time, + size_t *memory); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnDestroyAlgorithmPerformance(cudnnAlgorithmPerformance_t *algoPerf, int numberToDestroy); + +CUDNN_DEPRECATED 
cudnnStatus_t CUDNNWINAPI +cudnnGetAlgorithmSpaceSize(cudnnHandle_t handle, cudnnAlgorithmDescriptor_t algoDesc, size_t *algoSpaceSizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnSaveAlgorithm(cudnnHandle_t handle, + cudnnAlgorithmDescriptor_t algoDesc, + void *algoSpace, + size_t algoSpaceSizeInBytes); + +CUDNN_DEPRECATED cudnnStatus_t CUDNNWINAPI +cudnnRestoreAlgorithm(cudnnHandle_t handle, + void *algoSpace, + size_t algoSpaceSizeInBytes, + cudnnAlgorithmDescriptor_t algoDesc); + +typedef enum { + CUDNN_SEV_FATAL = 0, + CUDNN_SEV_ERROR = 1, + CUDNN_SEV_WARNING = 2, + CUDNN_SEV_INFO = 3, +} cudnnSeverity_t; + +/* Message masks to be used with cudnnSetCallback() */ +#define CUDNN_SEV_ERROR_EN (1U << CUDNN_SEV_ERROR) +#define CUDNN_SEV_WARNING_EN (1U << CUDNN_SEV_WARNING) +#define CUDNN_SEV_INFO_EN (1U << CUDNN_SEV_INFO) + +/* struct containing useful information for each API call */ +typedef struct cudnnDebugStruct { + unsigned cudnn_version; + cudnnStatus_t cudnnStatus; + unsigned time_sec; /* epoch time in seconds */ + unsigned time_usec; /* microseconds part of epoch time */ + unsigned time_delta; /* time since start in seconds */ + cudnnHandle_t handle; /* cudnn handle */ + cudaStream_t stream; /* cuda stream ID */ + unsigned long long pid; /* process ID */ + unsigned long long tid; /* thread ID */ + int cudaDeviceId; /* CUDA device ID */ + int reserved[15]; /* reserved for future use */ +} cudnnDebug_t; + +typedef void (*cudnnCallback_t)(cudnnSeverity_t sev, void *udata, const cudnnDebug_t *dbg, const char *msg); + +cudnnStatus_t CUDNNWINAPI +cudnnSetCallback(unsigned mask, void *udata, cudnnCallback_t fptr); + +cudnnStatus_t CUDNNWINAPI +cudnnGetCallback(unsigned *mask, void **udata, cudnnCallback_t *fptr); + +/* + * \brief Cross-library version checker. + * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnOpsInferVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_OPS_INFER_H_ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train.h new file mode 100644 index 0000000000000000000000000000000000000000..425c7c684968d76e1154de76eac082e61ec62f36 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train.h @@ -0,0 +1,501 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited.
+ * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* + * cudnn_ops_train : cuDNN's basic training operations and algorithms. + */ + +#if !defined(CUDNN_OPS_TRAIN_H_) +#define CUDNN_OPS_TRAIN_H_ + +#include <cuda_runtime.h> +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops_infer.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_OPS_TRAIN_MAJOR 8 +#define CUDNN_OPS_TRAIN_MINOR 9 +#define CUDNN_OPS_TRAIN_PATCH 2 + +#if (CUDNN_OPS_TRAIN_MAJOR != CUDNN_MAJOR) || (CUDNN_OPS_TRAIN_MINOR != CUDNN_MINOR) || \ + (CUDNN_OPS_TRAIN_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN OPS TRAIN!!!
+#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* Function to perform backward softmax */ +cudnnStatus_t CUDNNWINAPI +cudnnSoftmaxBackward(cudnnHandle_t handle, + cudnnSoftmaxAlgorithm_t algo, + cudnnSoftmaxMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* Function to perform backward pooling */ +cudnnStatus_t CUDNNWINAPI +cudnnPoolingBackward(cudnnHandle_t handle, + const cudnnPoolingDescriptor_t poolingDesc, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* Function to perform backward activation */ +cudnnStatus_t CUDNNWINAPI +cudnnActivationBackward(cudnnHandle_t handle, + cudnnActivationDescriptor_t activationDesc, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* LRN cross-channel backward computation. Double parameters cast to tensor data type */ +cudnnStatus_t CUDNNWINAPI +cudnnLRNCrossChannelBackward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnLRNMode_t lrnMode, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +cudnnStatus_t CUDNNWINAPI +cudnnDivisiveNormalizationBackward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnDivNormMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, /* same desc for x, means, dy, temp, temp2 */ + const void *x, + const void *means, /* if NULL, means are assumed to be zero */ + const void *dy, + void *temp, + void *temp2, + const void *beta, + const cudnnTensorDescriptor_t dXdMeansDesc, /* same desc for dx, dMeans */ + void *dx, /* output x differential */ + void *dMeans); /* output means differential, can be NULL */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationForwardTrainingExWorkspaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t zDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + const cudnnActivationDescriptor_t activationDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationBackwardExWorkspaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnTensorDescriptor_t dzDesc, + const cudnnTensorDescriptor_t dxDesc, + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationTrainingExReserveSpaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes); 
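+/*
+ * [Editor's note: the #if 0 block below is an illustrative sketch added for
+ * this document; it is not part of the original header. It shows the usual
+ * size-query pattern for the Ex batch-norm entry points declared above.
+ * handle, xDesc, yDesc and bnDesc are placeholders assumed valid, and the
+ * sketch assumes NULL is accepted for the unused zDesc/activationDesc when
+ * bnOps is CUDNN_BATCHNORM_OPS_BN.]
+ */
+#if 0
+size_t wsBytes = 0, rsBytes = 0;
+void *ws = NULL, *rs = NULL;
+/* Workspace used by the fused forward training call. */
+cudnnGetBatchNormalizationForwardTrainingExWorkspaceSize(
+    handle, CUDNN_BATCHNORM_SPATIAL, CUDNN_BATCHNORM_OPS_BN,
+    xDesc, NULL, yDesc, bnDesc, NULL, &wsBytes);
+/* Reserve space written by the forward pass and later consumed by
+   cudnnBatchNormalizationBackwardEx. */
+cudnnGetBatchNormalizationTrainingExReserveSpaceSize(
+    handle, CUDNN_BATCHNORM_SPATIAL, CUDNN_BATCHNORM_OPS_BN,
+    NULL, xDesc, &rsBytes);
+cudaMalloc(&ws, wsBytes);
+cudaMalloc(&rs, rsBytes);
+/* ws/rs and their sizes are then passed to
+   cudnnBatchNormalizationForwardTrainingEx, declared below. */
+#endif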
+ +/* Computes y = BN(x). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationForwardTraining( + cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + + const cudnnTensorDescriptor_t xDesc, + const void *x, /* NxCxHxW */ + const cudnnTensorDescriptor_t yDesc, + void *y, /* NxCxHxW */ + + /* Shared desc for the next 6 tensors in the argument list. + Data type to be set as follows: + type = (typeOf(x) == double) ? double : float + Dimensions for this descriptor depend on normalization mode + - Spatial Normalization : tensors are expected to have dims 1xCx1x1 + (normalization is performed across NxHxW) + - Per-Activation Normalization : tensors are expected to have dims of 1xCxHxW + (normalization is performed across N) */ + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + + /* 'Gamma' and 'Beta' respectively in Ioffe and Szegedy's paper's notation */ + const void *bnScale, + const void *bnBias, + + /* MUST use factor=1 in the very first call of a complete training cycle. + Use a factor=1/(1+n) at N-th call to the function to get + Cumulative Moving Average (CMA) behavior + CMA[n] = (x[1]+...+x[n])/n + Since CMA[n+1] = (n*CMA[n]+x[n+1])/(n+1) = + ((n+1)*CMA[n]-CMA[n])/(n+1) + x[n+1]/(n+1) = + CMA[n]*(1-1/(n+1)) + x[n+1]*1/(n+1) */ + double exponentialAverageFactor, + + /* Used in Training phase only. + runningMean = newMean*factor + runningMean*(1-factor) */ + void *resultRunningMean, + /* Output in training mode, input in inference. Is the moving average + of variance[x] (factor is applied in the same way as for runningMean) */ + void *resultRunningVariance, + + /* Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ + double epsilon, + + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance); + +/* Computes y = relu(BN(x) + z). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationForwardTrainingEx( + cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t zDesc, + const void *zData, + const cudnnTensorDescriptor_t yDesc, + void *yData, + + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + const void *bnScale, + const void *bnBias, + + double exponentialAverageFactor, + void *resultRunningMean, + void *resultRunningVariance, + + /* Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ + double epsilon, + + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance, + + cudnnActivationDescriptor_t activationDesc, + void *workspace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* Performs backward pass of Batch Normalization layer. 
Returns x gradient, +* bnScale gradient and bnBias gradient */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationBackward(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, /* same desc for x, dx, dy */ + const void *x, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t dxDesc, + void *dx, + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const void *bnScale, /* bnBias doesn't affect backpropagation */ + /* scale and bias diff are not backpropagated below this layer */ + void *dBnScaleResult, + void *dBnBiasResult, + /* Same epsilon as forward pass */ + double epsilon, + + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance); + +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationBackwardEx(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t yDesc, + const void *yData, + const cudnnTensorDescriptor_t dyDesc, + const void *dyData, + const cudnnTensorDescriptor_t dzDesc, + void *dzData, + const cudnnTensorDescriptor_t dxDesc, + void *dxData, + + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const void *bnScaleData, + const void *bnBiasData, /* needed if there is activation */ + void *dBnScaleData, + void *dBnBiasData, + double epsilon, /* Same epsilon as forward pass */ + + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance, + cudnnActivationDescriptor_t activationDesc, + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationForwardTrainingWorkspaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t zDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t normScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t normMeanVarDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationBackwardWorkspaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnTensorDescriptor_t dzDesc, + const cudnnTensorDescriptor_t dxDesc, + const cudnnTensorDescriptor_t dNormScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t normMeanVarDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationTrainingReserveSpaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for 
future work, should be set to 1 now*/ + +/* Computes y = relu(Norm(x) + z). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnNormalizationForwardTraining(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t normScaleBiasDesc, + const void *normScale, + const void *normBias, + double exponentialAverageFactor, + const cudnnTensorDescriptor_t normMeanVarDesc, + void *resultRunningMean, + void *resultRunningVariance, + /* Has to be >= 0. Should be the same in forward and backward functions. */ + double epsilon, + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance, + cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t zDesc, + const void *zData, + const cudnnTensorDescriptor_t yDesc, + void *yData, + void *workspace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnNormalizationBackward(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t yDesc, + const void *yData, + const cudnnTensorDescriptor_t dyDesc, + const void *dyData, + const cudnnTensorDescriptor_t dzDesc, + void *dzData, + const cudnnTensorDescriptor_t dxDesc, + void *dxData, + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dNormScaleBiasDesc, + const void *normScaleData, + const void *normBiasData, /* needed if there is activation */ + void *dNormScaleData, + void *dNormBiasData, + double epsilon, /* Same epsilon as forward pass */ + const cudnnTensorDescriptor_t normMeanVarDesc, + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance, + cudnnActivationDescriptor_t activationDesc, + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfGridGeneratorBackward(cudnnHandle_t handle, + const cudnnSpatialTransformerDescriptor_t stDesc, + const void *dgrid, + void *dtheta); + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfSamplerBackward(cudnnHandle_t handle, + cudnnSpatialTransformerDescriptor_t stDesc, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx, + const void *alphaDgrid, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *grid, + const void *betaDgrid, + void *dgrid); + +cudnnStatus_t CUDNNWINAPI +cudnnDropoutBackward(cudnnHandle_t handle, + const cudnnDropoutDescriptor_t dropoutDesc, + const cudnnTensorDescriptor_t dydesc, + const void *dy, + const cudnnTensorDescriptor_t dxdesc, + void *dx, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* + * \brief Cross-library version checker. 
+ * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnOpsTrainVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_OPS_TRAIN_H_ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train_v8.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train_v8.h new file mode 100644 index 0000000000000000000000000000000000000000..425c7c684968d76e1154de76eac082e61ec62f36 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/cudnn/include/cudnn_ops_train_v8.h @@ -0,0 +1,501 @@ +/* + * Copyright 2014-2023 NVIDIA Corporation. All rights reserved. + * + * NOTICE TO LICENSEE: + * + * This source code and/or documentation ("Licensed Deliverables") are + * subject to NVIDIA intellectual property rights under U.S. and + * international Copyright laws. + * + * These Licensed Deliverables contained herein is PROPRIETARY and + * CONFIDENTIAL to NVIDIA and is being provided under the terms and + * conditions of a form of NVIDIA software license agreement by and + * between NVIDIA and Licensee ("License Agreement") or electronically + * accepted by Licensee. Notwithstanding any terms or conditions to + * the contrary in the License Agreement, reproduction or disclosure + * of the Licensed Deliverables to any third party without the express + * written consent of NVIDIA is prohibited. + * + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE + * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS + * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND. + * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED + * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, + * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. + * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE + * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY + * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY + * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, + * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS + * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THESE LICENSED DELIVERABLES. + * + * U.S. Government End Users. These Licensed Deliverables are a + * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT + * 1995), consisting of "commercial computer software" and "commercial + * computer software documentation" as such terms are used in 48 + * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government + * only as a commercial end item. Consistent with 48 C.F.R.12.212 and + * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all + * U.S. Government End Users acquire the Licensed Deliverables with + * only those rights set forth herein. + * + * Any use of the Licensed Deliverables in individual and commercial + * software must include, in the user documentation and internal + * comments to the code, the above Disclaimer and U.S. Government End + * Users Notice. + */ + +/* + * cudnn_ops_train : cuDNN's basic training operations and algorithms. 
+ */ + +#if !defined(CUDNN_OPS_TRAIN_H_) +#define CUDNN_OPS_TRAIN_H_ + +#include <cuda_runtime.h> +#include <stdint.h> + +#include "cudnn_version.h" +#include "cudnn_ops_infer.h" + +/* These version numbers are autogenerated, do not edit manually. */ +#define CUDNN_OPS_TRAIN_MAJOR 8 +#define CUDNN_OPS_TRAIN_MINOR 9 +#define CUDNN_OPS_TRAIN_PATCH 2 + +#if (CUDNN_OPS_TRAIN_MAJOR != CUDNN_MAJOR) || (CUDNN_OPS_TRAIN_MINOR != CUDNN_MINOR) || \ + (CUDNN_OPS_TRAIN_PATCH != CUDNN_PATCHLEVEL) +#error Version mismatch in cuDNN OPS TRAIN!!! +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +/* Function to perform backward softmax */ +cudnnStatus_t CUDNNWINAPI +cudnnSoftmaxBackward(cudnnHandle_t handle, + cudnnSoftmaxAlgorithm_t algo, + cudnnSoftmaxMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* Function to perform backward pooling */ +cudnnStatus_t CUDNNWINAPI +cudnnPoolingBackward(cudnnHandle_t handle, + const cudnnPoolingDescriptor_t poolingDesc, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* Function to perform backward activation */ +cudnnStatus_t CUDNNWINAPI +cudnnActivationBackward(cudnnHandle_t handle, + cudnnActivationDescriptor_t activationDesc, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +/* LRN cross-channel backward computation.
Double parameters cast to tensor data type */ +cudnnStatus_t CUDNNWINAPI +cudnnLRNCrossChannelBackward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnLRNMode_t lrnMode, + const void *alpha, + const cudnnTensorDescriptor_t yDesc, + const void *y, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx); + +cudnnStatus_t CUDNNWINAPI +cudnnDivisiveNormalizationBackward(cudnnHandle_t handle, + cudnnLRNDescriptor_t normDesc, + cudnnDivNormMode_t mode, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, /* same desc for x, means, dy, temp, temp2 */ + const void *x, + const void *means, /* if NULL, means are assumed to be zero */ + const void *dy, + void *temp, + void *temp2, + const void *beta, + const cudnnTensorDescriptor_t dXdMeansDesc, /* same desc for dx, dMeans */ + void *dx, /* output x differential */ + void *dMeans); /* output means differential, can be NULL */ + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationForwardTrainingExWorkspaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t zDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + const cudnnActivationDescriptor_t activationDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationBackwardExWorkspaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnTensorDescriptor_t dzDesc, + const cudnnTensorDescriptor_t dxDesc, + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + size_t *sizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetBatchNormalizationTrainingExReserveSpaceSize(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes); + +/* Computes y = BN(x). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationForwardTraining( + cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + + const cudnnTensorDescriptor_t xDesc, + const void *x, /* NxCxHxW */ + const cudnnTensorDescriptor_t yDesc, + void *y, /* NxCxHxW */ + + /* Shared desc for the next 6 tensors in the argument list. + Data type to be set as follows: + type = (typeOf(x) == double) ? double : float + Dimensions for this descriptor depend on normalization mode + - Spatial Normalization : tensors are expected to have dims 1xCx1x1 + (normalization is performed across NxHxW) + - Per-Activation Normalization : tensors are expected to have dims of 1xCxHxW + (normalization is performed across N) */ + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + + /* 'Gamma' and 'Beta' respectively in Ioffe and Szegedy's paper's notation */ + const void *bnScale, + const void *bnBias, + + /* MUST use factor=1 in the very first call of a complete training cycle. 
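+       (Worked illustration of the scheme described in the next lines, added
+       for clarity: with factor = 1, 1/2, 1/3 at calls 1, 2, 3, the running
+       mean after call 3 equals (x[1] + x[2] + x[3]) / 3, i.e. the plain
+       average of the three per-batch means.)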
+ Use a factor=1/(1+n) at N-th call to the function to get + Cumulative Moving Average (CMA) behavior + CMA[n] = (x[1]+...+x[n])/n + Since CMA[n+1] = (n*CMA[n]+x[n+1])/(n+1) = + ((n+1)*CMA[n]-CMA[n])/(n+1) + x[n+1]/(n+1) = + CMA[n]*(1-1/(n+1)) + x[n+1]*1/(n+1) */ + double exponentialAverageFactor, + + /* Used in Training phase only. + runningMean = newMean*factor + runningMean*(1-factor) */ + void *resultRunningMean, + /* Output in training mode, input in inference. Is the moving average + of variance[x] (factor is applied in the same way as for runningMean) */ + void *resultRunningVariance, + + /* Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ + double epsilon, + + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance); + +/* Computes y = relu(BN(x) + z). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationForwardTrainingEx( + cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t zDesc, + const void *zData, + const cudnnTensorDescriptor_t yDesc, + void *yData, + + const cudnnTensorDescriptor_t bnScaleBiasMeanVarDesc, + const void *bnScale, + const void *bnBias, + + double exponentialAverageFactor, + void *resultRunningMean, + void *resultRunningVariance, + + /* Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ + double epsilon, + + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance, + + cudnnActivationDescriptor_t activationDesc, + void *workspace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* Performs backward pass of Batch Normalization layer. 
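+   (Before the backward declarations, a hedged sketch of driving the Ex
+   forward path declared above -- all names are hypothetical, and allocation
+   and error checks are omitted:
+
+       size_t wsBytes = 0, rsBytes = 0;
+       cudnnGetBatchNormalizationForwardTrainingExWorkspaceSize(
+           handle, mode, CUDNN_BATCHNORM_OPS_BN, xDesc, zDesc, yDesc,
+           bnScaleBiasMeanVarDesc, activationDesc, &wsBytes);
+       cudnnGetBatchNormalizationTrainingExReserveSpaceSize(
+           handle, mode, CUDNN_BATCHNORM_OPS_BN, activationDesc, xDesc,
+           &rsBytes);
+       // cudaMalloc workspace/reserveSpace of those sizes, then call
+       // cudnnBatchNormalizationForwardTrainingEx(...).
+   )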
Returns x gradient, +* bnScale gradient and bnBias gradient */ +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationBackward(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, /* same desc for x, dx, dy */ + const void *x, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const cudnnTensorDescriptor_t dxDesc, + void *dx, + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const void *bnScale, /* bnBias doesn't affect backpropagation */ + /* scale and bias diff are not backpropagated below this layer */ + void *dBnScaleResult, + void *dBnBiasResult, + /* Same epsilon as forward pass */ + double epsilon, + + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance); + +cudnnStatus_t CUDNNWINAPI +cudnnBatchNormalizationBackwardEx(cudnnHandle_t handle, + cudnnBatchNormMode_t mode, + cudnnBatchNormOps_t bnOps, + + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t yDesc, + const void *yData, + const cudnnTensorDescriptor_t dyDesc, + const void *dyData, + const cudnnTensorDescriptor_t dzDesc, + void *dzData, + const cudnnTensorDescriptor_t dxDesc, + void *dxData, + + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dBnScaleBiasDesc, + const void *bnScaleData, + const void *bnBiasData, /* needed if there is activation */ + void *dBnScaleData, + void *dBnBiasData, + double epsilon, /* Same epsilon as forward pass */ + + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance, + cudnnActivationDescriptor_t activationDesc, + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationForwardTrainingWorkspaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t zDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t normScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t normMeanVarDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationBackwardWorkspaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnTensorDescriptor_t xDesc, + const cudnnTensorDescriptor_t yDesc, + const cudnnTensorDescriptor_t dyDesc, + const cudnnTensorDescriptor_t dzDesc, + const cudnnTensorDescriptor_t dxDesc, + const cudnnTensorDescriptor_t dNormScaleBiasDesc, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t normMeanVarDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnGetNormalizationTrainingReserveSpaceSize(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t xDesc, + size_t *sizeInBytes, + int groupCnt); /* Place hold for 
future work, should be set to 1 now*/ + +/* Computes y = relu(Norm(x) + z). Also accumulates moving averages of mean and inverse variances */ +cudnnStatus_t CUDNNWINAPI +cudnnNormalizationForwardTraining(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const void *alpha, /* alpha[0] = result blend factor */ + const void *beta, /* beta[0] = dest layer blend factor */ + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t normScaleBiasDesc, + const void *normScale, + const void *normBias, + double exponentialAverageFactor, + const cudnnTensorDescriptor_t normMeanVarDesc, + void *resultRunningMean, + void *resultRunningVariance, + /* Has to be >= 0. Should be the same in forward and backward functions. */ + double epsilon, + /* Optionally save intermediate results from the forward pass here + - can be reused to speed up backward pass. NULL if unused */ + void *resultSaveMean, + void *resultSaveInvVariance, + cudnnActivationDescriptor_t activationDesc, + const cudnnTensorDescriptor_t zDesc, + const void *zData, + const cudnnTensorDescriptor_t yDesc, + void *yData, + void *workspace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnNormalizationBackward(cudnnHandle_t handle, + cudnnNormMode_t mode, + cudnnNormOps_t normOps, + cudnnNormAlgo_t algo, + const void *alphaDataDiff, + const void *betaDataDiff, + const void *alphaParamDiff, + const void *betaParamDiff, + const cudnnTensorDescriptor_t xDesc, + const void *xData, + const cudnnTensorDescriptor_t yDesc, + const void *yData, + const cudnnTensorDescriptor_t dyDesc, + const void *dyData, + const cudnnTensorDescriptor_t dzDesc, + void *dzData, + const cudnnTensorDescriptor_t dxDesc, + void *dxData, + /* Shared tensor desc for the 4 tensors below */ + const cudnnTensorDescriptor_t dNormScaleBiasDesc, + const void *normScaleData, + const void *normBiasData, /* needed if there is activation */ + void *dNormScaleData, + void *dNormBiasData, + double epsilon, /* Same epsilon as forward pass */ + const cudnnTensorDescriptor_t normMeanVarDesc, + /* Optionally cached intermediate results from + forward pass */ + const void *savedMean, + const void *savedInvVariance, + cudnnActivationDescriptor_t activationDesc, + void *workSpace, + size_t workSpaceSizeInBytes, + void *reserveSpace, + size_t reserveSpaceSizeInBytes, + int groupCnt); /* Place hold for future work, should be set to 1 now*/ + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfGridGeneratorBackward(cudnnHandle_t handle, + const cudnnSpatialTransformerDescriptor_t stDesc, + const void *dgrid, + void *dtheta); + +cudnnStatus_t CUDNNWINAPI +cudnnSpatialTfSamplerBackward(cudnnHandle_t handle, + cudnnSpatialTransformerDescriptor_t stDesc, + const void *alpha, + const cudnnTensorDescriptor_t xDesc, + const void *x, + const void *beta, + const cudnnTensorDescriptor_t dxDesc, + void *dx, + const void *alphaDgrid, + const cudnnTensorDescriptor_t dyDesc, + const void *dy, + const void *grid, + const void *betaDgrid, + void *dgrid); + +cudnnStatus_t CUDNNWINAPI +cudnnDropoutBackward(cudnnHandle_t handle, + const cudnnDropoutDescriptor_t dropoutDesc, + const cudnnTensorDescriptor_t dydesc, + const void *dy, + const cudnnTensorDescriptor_t dxdesc, + void *dx, + void *reserveSpace, + size_t reserveSpaceSizeInBytes); + +/* + * \brief Cross-library version checker. 
+ * This function is implemented differently in each sub-library. Each sublib + * checks whether its own version matches that of its dependencies. + * \returns CUDNN_STATUS_SUCCESS if the version check passes, + * CUDNN_STATUS_VERSION_MISMATCH if the versions are inconsistent. + */ +cudnnStatus_t CUDNNWINAPI +cudnnOpsTrainVersionCheck(void); + +#if defined(__cplusplus) +} +#endif + +#endif /* CUDNN_OPS_TRAIN_H_ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/__init__.py b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/__pycache__/__init__.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..1e373aa4c7a86649ca547285f3beca7e6b208fd2 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/__pycache__/__init__.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvToolsExtSync.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvToolsExtSync.h new file mode 100644 index 0000000000000000000000000000000000000000..afc3db98fe1b05d8dfb309221243ae3b4c14dd9d --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvToolsExtSync.h @@ -0,0 +1,411 @@ +/* +* Copyright 2009-2016 NVIDIA Corporation. All rights reserved. +* +* NOTICE TO USER: +* +* This source code is subject to NVIDIA ownership rights under U.S. and +* international Copyright laws. +* +* This software and the information contained herein is PROPRIETARY and +* CONFIDENTIAL to NVIDIA and is being provided under the terms and conditions +* of a form of NVIDIA software license agreement. +* +* NVIDIA MAKES NO REPRESENTATION ABOUT THE SUITABILITY OF THIS SOURCE +* CODE FOR ANY PURPOSE. IT IS PROVIDED "AS IS" WITHOUT EXPRESS OR +* IMPLIED WARRANTY OF ANY KIND. NVIDIA DISCLAIMS ALL WARRANTIES WITH +* REGARD TO THIS SOURCE CODE, INCLUDING ALL IMPLIED WARRANTIES OF +* MERCHANTABILITY, NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. +* IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, +* OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS +* OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +* OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE +* OR PERFORMANCE OF THIS SOURCE CODE. +* +* U.S. Government End Users. This source code is a "commercial item" as +* that term is defined at 48 C.F.R. 2.101 (OCT 1995), consisting of +* "commercial computer software" and "commercial computer software +* documentation" as such terms are used in 48 C.F.R. 12.212 (SEPT 1995) +* and is provided to the U.S. Government only as a commercial end item. +* Consistent with 48 C.F.R.12.212 and 48 C.F.R. 227.7202-1 through +* 227.7202-4 (JUNE 1995), all U.S. Government End Users acquire the +* source code with only those rights set forth herein. +* +* Any use of this source code in individual and commercial software must +* include, in the user documentation and internal comments to the code, +* the above Disclaimer and U.S. Government End Users Notice. 
+*/ + +#include "nvToolsExt.h" + +#ifndef NVTOOLSEXT_SYNC_V3 +#define NVTOOLSEXT_SYNC_V3 + +#ifdef __cplusplus +extern "C" { +#endif /* __cplusplus */ + +/* \cond SHOW_HIDDEN +* \version \NVTX_VERSION_2 +*/ +#define NVTX_SYNCUSER_ATTRIB_STRUCT_SIZE ( (uint16_t)( sizeof(nvtxSyncUserAttributes_v0) ) ) +/** \endcond */ + + +/** +* \page PAGE_SYNCHRONIZATION Synchronization +* +* This section covers a subset of the API that allow users to track additional +* synchronization details of their application. Naming OS synchronization primitives +* may allow users to better understand the data collected by traced synchronization +* APIs. Additionally, a user defined synchronization object can allow the users to +* to tell the tools when the user is building their own synchronization system +* that do not rely on the OS to provide behaviors and instead use techniques like +* atomic operations and spinlocks. +* +* See module \ref SYNCHRONIZATION for details. +* +* \par Example: +* \code +* class MyMutex +* { +* volatile long bLocked; +* nvtxSyncUser_t hSync; +* public: +* MyMutex(const char* name, nvtxDomainHandle_t d){ +* bLocked = 0; +* +* nvtxSyncUserAttributes_t attribs = { 0 }; +* attribs.version = NVTX_VERSION; +* attribs.size = NVTX_SYNCUSER_ATTRIB_STRUCT_SIZE; +* attribs.messageType = NVTX_MESSAGE_TYPE_ASCII; +* attribs.message.ascii = name; +* hSync = nvtxDomainSyncUserCreate(d, &attribs); +* } +* +* ~MyMutex() { +* nvtxDomainSyncUserDestroy(hSync); +* } +* +* bool Lock() { +* nvtxDomainSyncUserAcquireStart(hSync); +* bool acquired = __sync_bool_compare_and_swap(&bLocked, 0, 1);//atomic compiler intrinsic + +* if (acquired) { +* nvtxDomainSyncUserAcquireSuccess(hSync); +* } +* else { +* nvtxDomainSyncUserAcquireFailed(hSync); +* } +* return acquired; +* } + +* void Unlock() { +* nvtxDomainSyncUserReleasing(hSync); +* bLocked = false; +* } +* }; +* \endcode +* +* \version \NVTX_VERSION_2 +*/ + +/* ------------------------------------------------------------------------- */ +/* \cond SHOW_HIDDEN +* \brief Used to build a non-colliding value for resource types separated class +* \version \NVTX_VERSION_2 +*/ +#define NVTX_RESOURCE_CLASS_SYNC_OS 2 /**< Synchronization objects that are OS specific. */ +#define NVTX_RESOURCE_CLASS_SYNC_PTHREAD 3 /**< Synchronization objects that are from the POSIX Threads API (pthread)*/ +/** \endcond */ + + +/* ------------------------------------------------------------------------- */ +/** \defgroup SYNCHRONIZATION Synchronization +* See page \ref PAGE_SYNCHRONIZATION. 
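+*
+* (Hedged aside, not in the original header: NVTX_RESOURCE_MAKE_TYPE is
+* defined in nvToolsExt.h and, to the best of our knowledge, packs a resource
+* class and an index into one 32-bit value, roughly
+*     NVTX_RESOURCE_MAKE_TYPE(SYNC_PTHREAD, 1)
+*         == ((uint32_t)NVTX_RESOURCE_CLASS_SYNC_PTHREAD << 16) | 1
+* which keeps the enum values below distinct across the SYNC_OS and
+* SYNC_PTHREAD classes defined above.)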
+* @{ +*/ + +/** \brief Resource type values for OSs with POSIX Thread API support + */ +typedef enum nvtxResourceSyncPosixThreadType_t +{ + NVTX_RESOURCE_TYPE_SYNC_PTHREAD_MUTEX = NVTX_RESOURCE_MAKE_TYPE(SYNC_PTHREAD, 1), /* pthread_mutex_t */ + NVTX_RESOURCE_TYPE_SYNC_PTHREAD_CONDITION = NVTX_RESOURCE_MAKE_TYPE(SYNC_PTHREAD, 2), /* pthread_cond_t */ + NVTX_RESOURCE_TYPE_SYNC_PTHREAD_RWLOCK = NVTX_RESOURCE_MAKE_TYPE(SYNC_PTHREAD, 3), /* pthread_rwlock_t */ + NVTX_RESOURCE_TYPE_SYNC_PTHREAD_BARRIER = NVTX_RESOURCE_MAKE_TYPE(SYNC_PTHREAD, 4), /* pthread_barrier_t */ + NVTX_RESOURCE_TYPE_SYNC_PTHREAD_SPINLOCK = NVTX_RESOURCE_MAKE_TYPE(SYNC_PTHREAD, 5), /* pthread_spinlock_t */ + NVTX_RESOURCE_TYPE_SYNC_PTHREAD_ONCE = NVTX_RESOURCE_MAKE_TYPE(SYNC_PTHREAD, 6) /* pthread_once_t */ +} nvtxResourceSyncPosixThreadType_t; + +/** \brief Resource type values for Windows OSs +*/ +typedef enum nvtxResourceSyncWindowsType_t +{ + NVTX_RESOURCE_TYPE_SYNC_WINDOWS_MUTEX = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 1), + NVTX_RESOURCE_TYPE_SYNC_WINDOWS_SEMAPHORE = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 2), + NVTX_RESOURCE_TYPE_SYNC_WINDOWS_EVENT = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 3), + NVTX_RESOURCE_TYPE_SYNC_WINDOWS_CRITICAL_SECTION = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 4), + NVTX_RESOURCE_TYPE_SYNC_WINDOWS_SRWLOCK = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 5) +} nvtxResourceSyncWindowsType_t; + +/** \brief Resource type values for Linux and Linux derived OSs such as Android +* \sa +* ::nvtxResourceSyncPosixThreadType_t +*/ +typedef enum nvtxResourceSyncLinuxType_t +{ + NVTX_RESOURCE_TYPE_SYNC_LINUX_MUTEX = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 1), + NVTX_RESOURCE_TYPE_SYNC_LINUX_FUTEX = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 2), + NVTX_RESOURCE_TYPE_SYNC_LINUX_SEMAPHORE = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 3), + NVTX_RESOURCE_TYPE_SYNC_LINUX_COMPLETION = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 4), + NVTX_RESOURCE_TYPE_SYNC_LINUX_SPINLOCK = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 5), + NVTX_RESOURCE_TYPE_SYNC_LINUX_SEQLOCK = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 6), + NVTX_RESOURCE_TYPE_SYNC_LINUX_RCU = NVTX_RESOURCE_MAKE_TYPE(SYNC_OS, 7) +} nvtxResourceSyncLinuxType_t; + +/** \brief Resource type values for Android come from Linux. +* \sa +* ::nvtxResourceSyncLinuxType_t +* ::nvtxResourceSyncPosixThreadType_t +*/ +typedef enum nvtxResourceSyncLinuxType_t nvtxResourceSyncAndroidType_t; + +/** \brief User Defined Synchronization Object Handle . +* \anchor SYNCUSER_HANDLE_STRUCTURE +* +* This structure is opaque to the user and is used as a handle to reference +* a user defined syncrhonization object. The tools will return a pointer through the API for the application +* to hold on it's behalf to reference the string in the future. +* +*/ +typedef struct nvtxSyncUser* nvtxSyncUser_t; + +/** \brief User Defined Synchronization Object Attributes Structure. +* \anchor USERDEF_SYNC_ATTRIBUTES_STRUCTURE +* +* This structure is used to describe the attributes of a user defined synchronization +* object. The layout of the structure is defined by a specific version of the tools +* extension library and can change between different versions of the Tools Extension +* library. +* +* \par Initializing the Attributes +* +* The caller should always perform the following three tasks when using +* attributes: +*
+*     - Zero the structure
+*     - Set the version field
+*     - Set the size field
+* +* Zeroing the structure sets all the event attributes types and values +* to the default value. +* +* The version and size field are used by the Tools Extension +* implementation to handle multiple versions of the attributes structure. +* +* It is recommended that the caller use one of the following to methods +* to initialize the event attributes structure: +* +* \par Method 1: Initializing nvtxEventAttributes for future compatibility +* \code +* nvtxSyncUserAttributes_t attribs = {0}; +* attribs.version = NVTX_VERSION; +* attribs.size = NVTX_SYNCUSER_ATTRIB_STRUCT_SIZE; +* \endcode +* +* \par Method 2: Initializing nvtxSyncUserAttributes_t for a specific version +* \code +* nvtxSyncUserAttributes_t attribs = {0}; +* attribs.version = 1; +* attribs.size = (uint16_t)(sizeof(nvtxSyncUserAttributes_t)); +* \endcode +* +* If the caller uses Method 1 it is critical that the entire binary +* layout of the structure be configured to 0 so that all fields +* are initialized to the default value. +* +* The caller should either use both NVTX_VERSION and +* NVTX_SYNCUSER_ATTRIB_STRUCT_SIZE (Method 1) or use explicit values +* and a versioned type (Method 2). Using a mix of the two methods +* will likely cause either source level incompatibility or binary +* incompatibility in the future. +* +* \par Settings Attribute Types and Values +* +* +* \par Example: +* \code +* // Initialize +* nvtxSyncUserAttributes_t attribs = {0}; +* attribs.version = NVTX_VERSION; +* attribs.size = NVTX_SYNCUSER_ATTRIB_STRUCT_SIZE; +* +* // Configure the Attributes +* attribs.messageType = NVTX_MESSAGE_TYPE_ASCII; +* attribs.message.ascii = "Example"; +* \endcode +* +* \sa +* ::nvtxDomainSyncUserCreate +*/ +typedef struct nvtxSyncUserAttributes_v0 +{ + /** + * \brief Version flag of the structure. + * + * Needs to be set to NVTX_VERSION to indicate the version of NVTX APIs + * supported in this header file. This can optionally be overridden to + * another version of the tools extension library. + */ + uint16_t version; + + /** + * \brief Size of the structure. + * + * Needs to be set to the size in bytes of the event attribute + * structure used to specify the event. + */ + uint16_t size; + + /** \brief Message type specified in this attribute structure. + * + * Defines the message format of the attribute structure's \ref nvtxSyncUserAttributes_v0::message + * "message" field. + * + * Default Value is NVTX_MESSAGE_UNKNOWN + */ + int32_t messageType; /* nvtxMessageType_t */ + + /** \brief Message assigned to this attribute structure. + * + * The text message that is attached to an event. + */ + nvtxMessageValue_t message; + +} nvtxSyncUserAttributes_v0; + +typedef struct nvtxSyncUserAttributes_v0 nvtxSyncUserAttributes_t; + +/* ------------------------------------------------------------------------- */ +/** \brief Create a user defined synchronization object +* This is used to track non-OS synchronization working with spinlocks and atomics +* +* \param domain - Domain to own the resource +* \param attribs - A structure to assign multiple attributes to the object. +* +* \return A handle that represents the newly created user defined synchronization object. 
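+*
+* \par Example (a hedged sketch added here; the domain handle d is assumed
+* to have been created with nvtxDomainCreateA):
+* \code
+* nvtxSyncUserAttributes_t attribs = { 0 };
+* attribs.version = NVTX_VERSION;
+* attribs.size = NVTX_SYNCUSER_ATTRIB_STRUCT_SIZE;
+* attribs.messageType = NVTX_MESSAGE_TYPE_ASCII;
+* attribs.message.ascii = "MySpinlock";
+* nvtxSyncUser_t hSync = nvtxDomainSyncUserCreate(d, &attribs);
+* \endcode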
+* +* \sa +* ::nvtxDomainSyncUserCreate +* ::nvtxDomainSyncUserDestroy +* ::nvtxDomainSyncUserAcquireStart +* ::nvtxDomainSyncUserAcquireFailed +* ::nvtxDomainSyncUserAcquireSuccess +* ::nvtxDomainSyncUserReleasing +* +* \version \NVTX_VERSION_2 +*/ +NVTX_DECLSPEC nvtxSyncUser_t NVTX_API nvtxDomainSyncUserCreate(nvtxDomainHandle_t domain, const nvtxSyncUserAttributes_t* attribs); + +/* ------------------------------------------------------------------------- */ +/** \brief Destroy a user defined synchronization object +* This is used to track non-OS synchronization working with spinlocks and atomics +* +* \param handle - A handle to the object to operate on. +* +* \sa +* ::nvtxDomainSyncUserCreate +* ::nvtxDomainSyncUserDestroy +* ::nvtxDomainSyncUserAcquireStart +* ::nvtxDomainSyncUserAcquireFailed +* ::nvtxDomainSyncUserAcquireSuccess +* ::nvtxDomainSyncUserReleasing +* +* \version \NVTX_VERSION_2 +*/ +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserDestroy(nvtxSyncUser_t handle); + +/* ------------------------------------------------------------------------- */ +/** \brief Signal to tools that an attempt to acquire a user defined synchronization object +* +* \param handle - A handle to the object to operate on. +* +* \sa +* ::nvtxDomainSyncUserCreate +* ::nvtxDomainSyncUserDestroy +* ::nvtxDomainSyncUserAcquireStart +* ::nvtxDomainSyncUserAcquireFailed +* ::nvtxDomainSyncUserAcquireSuccess +* ::nvtxDomainSyncUserReleasing +* +* \version \NVTX_VERSION_2 +*/ +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserAcquireStart(nvtxSyncUser_t handle); + +/* ------------------------------------------------------------------------- */ +/** \brief Signal to tools of failure in acquiring a user defined synchronization object +* This should be called after \ref nvtxDomainSyncUserAcquireStart +* +* \param handle - A handle to the object to operate on. +* +* \sa +* ::nvtxDomainSyncUserCreate +* ::nvtxDomainSyncUserDestroy +* ::nvtxDomainSyncUserAcquireStart +* ::nvtxDomainSyncUserAcquireFailed +* ::nvtxDomainSyncUserAcquireSuccess +* ::nvtxDomainSyncUserReleasing +* +* \version \NVTX_VERSION_2 +*/NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserAcquireFailed(nvtxSyncUser_t handle); + +/* ------------------------------------------------------------------------- */ +/** \brief Signal to tools of success in acquiring a user defined synchronization object +* This should be called after \ref nvtxDomainSyncUserAcquireStart. +* +* \param handle - A handle to the object to operate on. +* +* \sa +* ::nvtxDomainSyncUserCreate +* ::nvtxDomainSyncUserDestroy +* ::nvtxDomainSyncUserAcquireStart +* ::nvtxDomainSyncUserAcquireFailed +* ::nvtxDomainSyncUserAcquireSuccess +* ::nvtxDomainSyncUserReleasing +* +* \version \NVTX_VERSION_2 +*/NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserAcquireSuccess(nvtxSyncUser_t handle); + +/* ------------------------------------------------------------------------- */ +/** \brief Signal to tools of releasing a reservation on user defined synchronization object +* This should be called after \ref nvtxDomainSyncUserAcquireSuccess. +* +* \param handle - A handle to the object to operate on. 
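+*
+* \par Example (hedged sketch of the full acquire/release protocol on an
+* existing handle hSync; try_lock and unlock are hypothetical user
+* primitives):
+* \code
+* nvtxDomainSyncUserAcquireStart(hSync);
+* if (try_lock())
+*     nvtxDomainSyncUserAcquireSuccess(hSync);
+* else
+*     nvtxDomainSyncUserAcquireFailed(hSync);
+* // ... critical section ...
+* nvtxDomainSyncUserReleasing(hSync);   // just before the actual release
+* unlock();
+* \endcode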
+* +* \sa +* ::nvtxDomainSyncUserCreate +* ::nvtxDomainSyncUserDestroy +* ::nvtxDomainSyncUserAcquireStart +* ::nvtxDomainSyncUserAcquireFailed +* ::nvtxDomainSyncUserAcquireSuccess +* ::nvtxDomainSyncUserReleasing +* +* \version \NVTX_VERSION_2 +*/ +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserReleasing(nvtxSyncUser_t handle); + + +/** @} */ /*END defgroup*/ + +#ifdef __cplusplus +} +#endif /* __cplusplus */ + +#ifndef NVTX_NO_IMPL +#define NVTX_IMPL_GUARD_SYNC /* Ensure other headers cannot included directly */ +#include "nvtxDetail/nvtxImplSync_v3.h" +#undef NVTX_IMPL_GUARD_SYNC +#endif /*NVTX_NO_IMPL*/ + +#endif /* NVTOOLSEXT_SYNC_V3 */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplCore.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplCore.h new file mode 100644 index 0000000000000000000000000000000000000000..aee1014ecd53f8b980442109e51fdbc7672ff6d0 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplCore.h @@ -0,0 +1,299 @@ +NVTX_DECLSPEC void NVTX_API nvtxMarkEx(const nvtxEventAttributes_t* eventAttrib) +{ +#ifndef NVTX_DISABLE + nvtxMarkEx_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxMarkEx_impl_fnptr; + if(local!=0) + (*local)(eventAttrib); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxMarkA(const char* message) +{ +#ifndef NVTX_DISABLE + nvtxMarkA_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxMarkA_impl_fnptr; + if(local!=0) + (*local)(message); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxMarkW(const wchar_t* message) +{ +#ifndef NVTX_DISABLE + nvtxMarkW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxMarkW_impl_fnptr; + if(local!=0) + (*local)(message); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC nvtxRangeId_t NVTX_API nvtxRangeStartEx(const nvtxEventAttributes_t* eventAttrib) +{ +#ifndef NVTX_DISABLE + nvtxRangeStartEx_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangeStartEx_impl_fnptr; + if(local!=0) + return (*local)(eventAttrib); + else +#endif /*NVTX_DISABLE*/ + return (nvtxRangeId_t)0; +} + +NVTX_DECLSPEC nvtxRangeId_t NVTX_API nvtxRangeStartA(const char* message) +{ +#ifndef NVTX_DISABLE + nvtxRangeStartA_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangeStartA_impl_fnptr; + if(local!=0) + return (*local)(message); + else +#endif /*NVTX_DISABLE*/ + return (nvtxRangeId_t)0; +} + +NVTX_DECLSPEC nvtxRangeId_t NVTX_API nvtxRangeStartW(const wchar_t* message) +{ +#ifndef NVTX_DISABLE + nvtxRangeStartW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangeStartW_impl_fnptr; + if(local!=0) + return (*local)(message); + else +#endif /*NVTX_DISABLE*/ + return (nvtxRangeId_t)0; +} + +NVTX_DECLSPEC void NVTX_API nvtxRangeEnd(nvtxRangeId_t id) +{ +#ifndef NVTX_DISABLE + nvtxRangeEnd_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangeEnd_impl_fnptr; + if(local!=0) + (*local)(id); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC int NVTX_API nvtxRangePushEx(const nvtxEventAttributes_t* eventAttrib) +{ +#ifndef NVTX_DISABLE + nvtxRangePushEx_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangePushEx_impl_fnptr; + if(local!=0) + return (*local)(eventAttrib); + else +#endif /*NVTX_DISABLE*/ + return (int)NVTX_NO_PUSH_POP_TRACKING; +} + +NVTX_DECLSPEC int NVTX_API nvtxRangePushA(const char* message) +{ +#ifndef NVTX_DISABLE + nvtxRangePushA_impl_fntype local = 
NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangePushA_impl_fnptr; + if(local!=0) + return (*local)(message); + else +#endif /*NVTX_DISABLE*/ + return (int)NVTX_NO_PUSH_POP_TRACKING; +} + +NVTX_DECLSPEC int NVTX_API nvtxRangePushW(const wchar_t* message) +{ +#ifndef NVTX_DISABLE + nvtxRangePushW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangePushW_impl_fnptr; + if(local!=0) + return (*local)(message); + else +#endif /*NVTX_DISABLE*/ + return (int)NVTX_NO_PUSH_POP_TRACKING; +} + +NVTX_DECLSPEC int NVTX_API nvtxRangePop(void) +{ +#ifndef NVTX_DISABLE + nvtxRangePop_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxRangePop_impl_fnptr; + if(local!=0) + return (*local)(); + else +#endif /*NVTX_DISABLE*/ + return (int)NVTX_NO_PUSH_POP_TRACKING; +} + +NVTX_DECLSPEC void NVTX_API nvtxNameCategoryA(uint32_t category, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameCategoryA_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameCategoryA_impl_fnptr; + if(local!=0) + (*local)(category, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameCategoryW(uint32_t category, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameCategoryW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameCategoryW_impl_fnptr; + if(local!=0) + (*local)(category, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameOsThreadA(uint32_t threadId, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameOsThreadA_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameOsThreadA_impl_fnptr; + if(local!=0) + (*local)(threadId, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameOsThreadW(uint32_t threadId, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameOsThreadW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameOsThreadW_impl_fnptr; + if(local!=0) + (*local)(threadId, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainMarkEx(nvtxDomainHandle_t domain, const nvtxEventAttributes_t* eventAttrib) +{ +#ifndef NVTX_DISABLE + nvtxDomainMarkEx_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainMarkEx_impl_fnptr; + if(local!=0) + (*local)(domain, eventAttrib); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC nvtxRangeId_t NVTX_API nvtxDomainRangeStartEx(nvtxDomainHandle_t domain, const nvtxEventAttributes_t* eventAttrib) +{ +#ifndef NVTX_DISABLE + nvtxDomainRangeStartEx_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainRangeStartEx_impl_fnptr; + if(local!=0) + return (*local)(domain, eventAttrib); + else +#endif /*NVTX_DISABLE*/ + return (nvtxRangeId_t)0; +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainRangeEnd(nvtxDomainHandle_t domain, nvtxRangeId_t id) +{ +#ifndef NVTX_DISABLE + nvtxDomainRangeEnd_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainRangeEnd_impl_fnptr; + if(local!=0) + (*local)(domain, id); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC int NVTX_API nvtxDomainRangePushEx(nvtxDomainHandle_t domain, const nvtxEventAttributes_t* eventAttrib) +{ +#ifndef NVTX_DISABLE + nvtxDomainRangePushEx_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainRangePushEx_impl_fnptr; + if(local!=0) + return (*local)(domain, eventAttrib); + else +#endif /*NVTX_DISABLE*/ + return (int)NVTX_NO_PUSH_POP_TRACKING; +} + +NVTX_DECLSPEC int NVTX_API nvtxDomainRangePop(nvtxDomainHandle_t domain) +{ +#ifndef NVTX_DISABLE + nvtxDomainRangePop_impl_fntype local = 
NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainRangePop_impl_fnptr; + if(local!=0) + return (*local)(domain); + else +#endif /*NVTX_DISABLE*/ + return (int)NVTX_NO_PUSH_POP_TRACKING; +} + +NVTX_DECLSPEC nvtxResourceHandle_t NVTX_API nvtxDomainResourceCreate(nvtxDomainHandle_t domain, nvtxResourceAttributes_t* attribs) +{ +#ifndef NVTX_DISABLE + nvtxDomainResourceCreate_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainResourceCreate_impl_fnptr; + if(local!=0) + return (*local)(domain, attribs); + else +#endif /*NVTX_DISABLE*/ + return (nvtxResourceHandle_t)0; +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainResourceDestroy(nvtxResourceHandle_t resource) +{ +#ifndef NVTX_DISABLE + nvtxDomainResourceDestroy_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainResourceDestroy_impl_fnptr; + if(local!=0) + (*local)(resource); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainNameCategoryA(nvtxDomainHandle_t domain, uint32_t category, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxDomainNameCategoryA_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainNameCategoryA_impl_fnptr; + if(local!=0) + (*local)(domain, category, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainNameCategoryW(nvtxDomainHandle_t domain, uint32_t category, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxDomainNameCategoryW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainNameCategoryW_impl_fnptr; + if(local!=0) + (*local)(domain, category, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC nvtxStringHandle_t NVTX_API nvtxDomainRegisterStringA(nvtxDomainHandle_t domain, const char* string) +{ +#ifndef NVTX_DISABLE + nvtxDomainRegisterStringA_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainRegisterStringA_impl_fnptr; + if(local!=0) + return (*local)(domain, string); + else +#endif /*NVTX_DISABLE*/ + return (nvtxStringHandle_t)0; +} + +NVTX_DECLSPEC nvtxStringHandle_t NVTX_API nvtxDomainRegisterStringW(nvtxDomainHandle_t domain, const wchar_t* string) +{ +#ifndef NVTX_DISABLE + nvtxDomainRegisterStringW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainRegisterStringW_impl_fnptr; + if(local!=0) + return (*local)(domain, string); + else +#endif /*NVTX_DISABLE*/ + return (nvtxStringHandle_t)0; +} + +NVTX_DECLSPEC nvtxDomainHandle_t NVTX_API nvtxDomainCreateA(const char* message) +{ +#ifndef NVTX_DISABLE + nvtxDomainCreateA_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainCreateA_impl_fnptr; + if(local!=0) + return (*local)(message); + else +#endif /*NVTX_DISABLE*/ + return (nvtxDomainHandle_t)0; +} + +NVTX_DECLSPEC nvtxDomainHandle_t NVTX_API nvtxDomainCreateW(const wchar_t* message) +{ +#ifndef NVTX_DISABLE + nvtxDomainCreateW_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainCreateW_impl_fnptr; + if(local!=0) + return (*local)(message); + else +#endif /*NVTX_DISABLE*/ + return (nvtxDomainHandle_t)0; +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainDestroy(nvtxDomainHandle_t domain) +{ +#ifndef NVTX_DISABLE + nvtxDomainDestroy_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainDestroy_impl_fnptr; + if(local!=0) + (*local)(domain); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxInitialize(const void* reserved) +{ +#ifndef NVTX_DISABLE + nvtxInitialize_impl_fntype local = NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxInitialize_impl_fnptr; + if(local!=0) + (*local)(reserved); +#endif /*NVTX_DISABLE*/ +} diff 
--git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplOpenCL_v3.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplOpenCL_v3.h new file mode 100644 index 0000000000000000000000000000000000000000..0e73224cce26b49c8f7585be3a41dd0f428fe07e --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplOpenCL_v3.h @@ -0,0 +1,192 @@ +/* This file was procedurally generated! Do not modify this file by hand. */ + +/* +* Copyright 2009-2016 NVIDIA Corporation. All rights reserved. +* +* NOTICE TO USER: +* +* This source code is subject to NVIDIA ownership rights under U.S. and +* international Copyright laws. +* +* This software and the information contained herein is PROPRIETARY and +* CONFIDENTIAL to NVIDIA and is being provided under the terms and conditions +* of a form of NVIDIA software license agreement. +* +* NVIDIA MAKES NO REPRESENTATION ABOUT THE SUITABILITY OF THIS SOURCE +* CODE FOR ANY PURPOSE. IT IS PROVIDED "AS IS" WITHOUT EXPRESS OR +* IMPLIED WARRANTY OF ANY KIND. NVIDIA DISCLAIMS ALL WARRANTIES WITH +* REGARD TO THIS SOURCE CODE, INCLUDING ALL IMPLIED WARRANTIES OF +* MERCHANTABILITY, NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. +* IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, +* OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS +* OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +* OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE +* OR PERFORMANCE OF THIS SOURCE CODE. +* +* U.S. Government End Users. This source code is a "commercial item" as +* that term is defined at 48 C.F.R. 2.101 (OCT 1995), consisting of +* "commercial computer software" and "commercial computer software +* documentation" as such terms are used in 48 C.F.R. 12.212 (SEPT 1995) +* and is provided to the U.S. Government only as a commercial end item. +* Consistent with 48 C.F.R.12.212 and 48 C.F.R. 227.7202-1 through +* 227.7202-4 (JUNE 1995), all U.S. Government End Users acquire the +* source code with only those rights set forth herein. +* +* Any use of this source code in individual and commercial software must +* include, in the user documentation and internal comments to the code, +* the above Disclaimer and U.S. Government End Users Notice. +*/ + +#ifndef NVTX_IMPL_GUARD_OPENCL +#error Never include this file directly -- it is automatically included by nvToolsExtCuda.h (except when NVTX_NO_IMPL is defined). 
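+
+/* (Hedged aside, not part of the original header: the supported entry point
+ * is the public OpenCL header, which wraps its include of this file in the
+ * guard, following the same pattern visible at the end of nvToolsExtSync.h
+ * earlier in this diff:
+ *
+ *     #define NVTX_IMPL_GUARD_OPENCL
+ *     #include "nvtxDetail/nvtxImplOpenCL_v3.h"
+ *     #undef NVTX_IMPL_GUARD_OPENCL
+ * ) */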
+#endif + + +#ifdef __cplusplus +extern "C" { +#endif /* __cplusplus */ + +typedef void (NVTX_API * nvtxNameClDeviceA_impl_fntype)(cl_device_id device, const char* name); +typedef void (NVTX_API * nvtxNameClDeviceW_impl_fntype)(cl_device_id device, const wchar_t* name); +typedef void (NVTX_API * nvtxNameClContextA_impl_fntype)(cl_context context, const char* name); +typedef void (NVTX_API * nvtxNameClContextW_impl_fntype)(cl_context context, const wchar_t* name); +typedef void (NVTX_API * nvtxNameClCommandQueueA_impl_fntype)(cl_command_queue command_queue, const char* name); +typedef void (NVTX_API * nvtxNameClCommandQueueW_impl_fntype)(cl_command_queue command_queue, const wchar_t* name); +typedef void (NVTX_API * nvtxNameClMemObjectA_impl_fntype)(cl_mem memobj, const char* name); +typedef void (NVTX_API * nvtxNameClMemObjectW_impl_fntype)(cl_mem memobj, const wchar_t* name); +typedef void (NVTX_API * nvtxNameClSamplerA_impl_fntype)(cl_sampler sampler, const char* name); +typedef void (NVTX_API * nvtxNameClSamplerW_impl_fntype)(cl_sampler sampler, const wchar_t* name); +typedef void (NVTX_API * nvtxNameClProgramA_impl_fntype)(cl_program program, const char* name); +typedef void (NVTX_API * nvtxNameClProgramW_impl_fntype)(cl_program program, const wchar_t* name); +typedef void (NVTX_API * nvtxNameClEventA_impl_fntype)(cl_event evnt, const char* name); +typedef void (NVTX_API * nvtxNameClEventW_impl_fntype)(cl_event evnt, const wchar_t* name); + +NVTX_DECLSPEC void NVTX_API nvtxNameClDeviceA(cl_device_id device, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClDeviceA_impl_fntype local = (nvtxNameClDeviceA_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClDeviceA_impl_fnptr; + if(local!=0) + (*local)(device, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClDeviceW(cl_device_id device, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClDeviceW_impl_fntype local = (nvtxNameClDeviceW_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClDeviceW_impl_fnptr; + if(local!=0) + (*local)(device, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClContextA(cl_context context, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClContextA_impl_fntype local = (nvtxNameClContextA_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClContextA_impl_fnptr; + if(local!=0) + (*local)(context, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClContextW(cl_context context, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClContextW_impl_fntype local = (nvtxNameClContextW_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClContextW_impl_fnptr; + if(local!=0) + (*local)(context, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClCommandQueueA(cl_command_queue command_queue, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClCommandQueueA_impl_fntype local = (nvtxNameClCommandQueueA_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClCommandQueueA_impl_fnptr; + if(local!=0) + (*local)(command_queue, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClCommandQueueW(cl_command_queue command_queue, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClCommandQueueW_impl_fntype local = (nvtxNameClCommandQueueW_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClCommandQueueW_impl_fnptr; + if(local!=0) + (*local)(command_queue, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API 
nvtxNameClMemObjectA(cl_mem memobj, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClMemObjectA_impl_fntype local = (nvtxNameClMemObjectA_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClMemObjectA_impl_fnptr; + if(local!=0) + (*local)(memobj, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClMemObjectW(cl_mem memobj, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClMemObjectW_impl_fntype local = (nvtxNameClMemObjectW_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClMemObjectW_impl_fnptr; + if(local!=0) + (*local)(memobj, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClSamplerA(cl_sampler sampler, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClSamplerA_impl_fntype local = (nvtxNameClSamplerA_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClSamplerA_impl_fnptr; + if(local!=0) + (*local)(sampler, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClSamplerW(cl_sampler sampler, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClSamplerW_impl_fntype local = (nvtxNameClSamplerW_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClSamplerW_impl_fnptr; + if(local!=0) + (*local)(sampler, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClProgramA(cl_program program, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClProgramA_impl_fntype local = (nvtxNameClProgramA_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClProgramA_impl_fnptr; + if(local!=0) + (*local)(program, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClProgramW(cl_program program, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClProgramW_impl_fntype local = (nvtxNameClProgramW_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClProgramW_impl_fnptr; + if(local!=0) + (*local)(program, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClEventA(cl_event evnt, const char* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClEventA_impl_fntype local = (nvtxNameClEventA_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClEventA_impl_fnptr; + if(local!=0) + (*local)(evnt, name); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxNameClEventW(cl_event evnt, const wchar_t* name) +{ +#ifndef NVTX_DISABLE + nvtxNameClEventW_impl_fntype local = (nvtxNameClEventW_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxNameClEventW_impl_fnptr; + if(local!=0) + (*local)(evnt, name); +#endif /*NVTX_DISABLE*/ +} + +#ifdef __cplusplus +} /* extern "C" */ +#endif /* __cplusplus */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplSync_v3.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplSync_v3.h new file mode 100644 index 0000000000000000000000000000000000000000..accc621a3d5071438f6ed7b9e2192ea8ad56977c --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxImplSync_v3.h @@ -0,0 +1,114 @@ +/* This file was procedurally generated! Do not modify this file by hand. */ + +/* +* Copyright 2009-2016 NVIDIA Corporation. All rights reserved. +* +* NOTICE TO USER: +* +* This source code is subject to NVIDIA ownership rights under U.S. and +* international Copyright laws. 
+* +* This software and the information contained herein is PROPRIETARY and +* CONFIDENTIAL to NVIDIA and is being provided under the terms and conditions +* of a form of NVIDIA software license agreement. +* +* NVIDIA MAKES NO REPRESENTATION ABOUT THE SUITABILITY OF THIS SOURCE +* CODE FOR ANY PURPOSE. IT IS PROVIDED "AS IS" WITHOUT EXPRESS OR +* IMPLIED WARRANTY OF ANY KIND. NVIDIA DISCLAIMS ALL WARRANTIES WITH +* REGARD TO THIS SOURCE CODE, INCLUDING ALL IMPLIED WARRANTIES OF +* MERCHANTABILITY, NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE. +* IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, +* OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS +* OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +* OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE +* OR PERFORMANCE OF THIS SOURCE CODE. +* +* U.S. Government End Users. This source code is a "commercial item" as +* that term is defined at 48 C.F.R. 2.101 (OCT 1995), consisting of +* "commercial computer software" and "commercial computer software +* documentation" as such terms are used in 48 C.F.R. 12.212 (SEPT 1995) +* and is provided to the U.S. Government only as a commercial end item. +* Consistent with 48 C.F.R.12.212 and 48 C.F.R. 227.7202-1 through +* 227.7202-4 (JUNE 1995), all U.S. Government End Users acquire the +* source code with only those rights set forth herein. +* +* Any use of this source code in individual and commercial software must +* include, in the user documentation and internal comments to the code, +* the above Disclaimer and U.S. Government End Users Notice. +*/ + +#ifndef NVTX_IMPL_GUARD_SYNC +#error Never include this file directly -- it is automatically included by nvToolsExtCuda.h (except when NVTX_NO_IMPL is defined). 
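+
+/* (Hedged aside, mirroring the dispatch pattern of nvtxImplCore.h above:
+ * each wrapper below loads a function pointer that an attached tool may have
+ * installed and forwards to it, so without a tool the call is just a load
+ * and an untaken branch:
+ *
+ *     if (local != 0)
+ *         (*local)(handle);    // tool attached: forward the call
+ *     // else: no-op, or a zero handle from the creation wrapper
+ * ) */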
+#endif + + +#ifdef __cplusplus +extern "C" { +#endif /* __cplusplus */ + +typedef nvtxSyncUser_t (NVTX_API * nvtxDomainSyncUserCreate_impl_fntype)(nvtxDomainHandle_t domain, const nvtxSyncUserAttributes_t* attribs); +typedef void (NVTX_API * nvtxDomainSyncUserDestroy_impl_fntype)(nvtxSyncUser_t handle); +typedef void (NVTX_API * nvtxDomainSyncUserAcquireStart_impl_fntype)(nvtxSyncUser_t handle); +typedef void (NVTX_API * nvtxDomainSyncUserAcquireFailed_impl_fntype)(nvtxSyncUser_t handle); +typedef void (NVTX_API * nvtxDomainSyncUserAcquireSuccess_impl_fntype)(nvtxSyncUser_t handle); +typedef void (NVTX_API * nvtxDomainSyncUserReleasing_impl_fntype)(nvtxSyncUser_t handle); + +NVTX_DECLSPEC nvtxSyncUser_t NVTX_API nvtxDomainSyncUserCreate(nvtxDomainHandle_t domain, const nvtxSyncUserAttributes_t* attribs) +{ +#ifndef NVTX_DISABLE + nvtxDomainSyncUserCreate_impl_fntype local = (nvtxDomainSyncUserCreate_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainSyncUserCreate_impl_fnptr; + if(local!=0) + return (*local)(domain, attribs); + else +#endif /*NVTX_DISABLE*/ + return (nvtxSyncUser_t)0; +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserDestroy(nvtxSyncUser_t handle) +{ +#ifndef NVTX_DISABLE + nvtxDomainSyncUserDestroy_impl_fntype local = (nvtxDomainSyncUserDestroy_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainSyncUserDestroy_impl_fnptr; + if(local!=0) + (*local)(handle); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserAcquireStart(nvtxSyncUser_t handle) +{ +#ifndef NVTX_DISABLE + nvtxDomainSyncUserAcquireStart_impl_fntype local = (nvtxDomainSyncUserAcquireStart_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainSyncUserAcquireStart_impl_fnptr; + if(local!=0) + (*local)(handle); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserAcquireFailed(nvtxSyncUser_t handle) +{ +#ifndef NVTX_DISABLE + nvtxDomainSyncUserAcquireFailed_impl_fntype local = (nvtxDomainSyncUserAcquireFailed_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainSyncUserAcquireFailed_impl_fnptr; + if(local!=0) + (*local)(handle); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserAcquireSuccess(nvtxSyncUser_t handle) +{ +#ifndef NVTX_DISABLE + nvtxDomainSyncUserAcquireSuccess_impl_fntype local = (nvtxDomainSyncUserAcquireSuccess_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainSyncUserAcquireSuccess_impl_fnptr; + if(local!=0) + (*local)(handle); +#endif /*NVTX_DISABLE*/ +} + +NVTX_DECLSPEC void NVTX_API nvtxDomainSyncUserReleasing(nvtxSyncUser_t handle) +{ +#ifndef NVTX_DISABLE + nvtxDomainSyncUserReleasing_impl_fntype local = (nvtxDomainSyncUserReleasing_impl_fntype)NVTX_VERSIONED_IDENTIFIER(nvtxGlobals).nvtxDomainSyncUserReleasing_impl_fnptr; + if(local!=0) + (*local)(handle); +#endif /*NVTX_DISABLE*/ +} + +#ifdef __cplusplus +} /* extern "C" */ +#endif /* __cplusplus */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxInitDecls.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxInitDecls.h new file mode 100644 index 0000000000000000000000000000000000000000..757a7296093750e75721fb93855d0aa37da64103 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxInitDecls.h @@ -0,0 +1,73 @@ +#ifndef NVTX_IMPL_GUARD +#error Never include this file directly -- it is automatically included by nvToolsExt.h (except when NVTX_NO_IMPL is defined). 
+#endif + +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxMarkEx_impl_init)(const nvtxEventAttributes_t* eventAttrib); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxMarkA_impl_init)(const char* message); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxMarkW_impl_init)(const wchar_t* message); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxRangeId_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangeStartEx_impl_init)(const nvtxEventAttributes_t* eventAttrib); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxRangeId_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangeStartA_impl_init)(const char* message); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxRangeId_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangeStartW_impl_init)(const wchar_t* message); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangeEnd_impl_init)(nvtxRangeId_t id); +NVTX_LINKONCE_FWDDECL_FUNCTION int NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangePushEx_impl_init)(const nvtxEventAttributes_t* eventAttrib); +NVTX_LINKONCE_FWDDECL_FUNCTION int NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangePushA_impl_init)(const char* message); +NVTX_LINKONCE_FWDDECL_FUNCTION int NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangePushW_impl_init)(const wchar_t* message); +NVTX_LINKONCE_FWDDECL_FUNCTION int NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxRangePop_impl_init)(void); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCategoryA_impl_init)(uint32_t category, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCategoryW_impl_init)(uint32_t category, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameOsThreadA_impl_init)(uint32_t threadId, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameOsThreadW_impl_init)(uint32_t threadId, const wchar_t* name); + +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuDeviceA_impl_init)(nvtx_CUdevice device, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuDeviceW_impl_init)(nvtx_CUdevice device, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuContextA_impl_init)(nvtx_CUcontext context, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuContextW_impl_init)(nvtx_CUcontext context, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuStreamA_impl_init)(nvtx_CUstream stream, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuStreamW_impl_init)(nvtx_CUstream stream, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuEventA_impl_init)(nvtx_CUevent event, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCuEventW_impl_init)(nvtx_CUevent event, const wchar_t* name); + +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClDeviceA_impl_init)(nvtx_cl_device_id device, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClDeviceW_impl_init)(nvtx_cl_device_id device, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClContextA_impl_init)(nvtx_cl_context context, const char* name); 
+NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClContextW_impl_init)(nvtx_cl_context context, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClCommandQueueA_impl_init)(nvtx_cl_command_queue command_queue, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClCommandQueueW_impl_init)(nvtx_cl_command_queue command_queue, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClMemObjectA_impl_init)(nvtx_cl_mem memobj, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClMemObjectW_impl_init)(nvtx_cl_mem memobj, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClSamplerA_impl_init)(nvtx_cl_sampler sampler, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClSamplerW_impl_init)(nvtx_cl_sampler sampler, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClProgramA_impl_init)(nvtx_cl_program program, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClProgramW_impl_init)(nvtx_cl_program program, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClEventA_impl_init)(nvtx_cl_event evnt, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameClEventW_impl_init)(nvtx_cl_event evnt, const wchar_t* name); + +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCudaDeviceA_impl_init)(int device, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCudaDeviceW_impl_init)(int device, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCudaStreamA_impl_init)(nvtx_cudaStream_t stream, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCudaStreamW_impl_init)(nvtx_cudaStream_t stream, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCudaEventA_impl_init)(nvtx_cudaEvent_t event, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxNameCudaEventW_impl_init)(nvtx_cudaEvent_t event, const wchar_t* name); + +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainMarkEx_impl_init)(nvtxDomainHandle_t domain, const nvtxEventAttributes_t* eventAttrib); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxRangeId_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainRangeStartEx_impl_init)(nvtxDomainHandle_t domain, const nvtxEventAttributes_t* eventAttrib); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainRangeEnd_impl_init)(nvtxDomainHandle_t domain, nvtxRangeId_t id); +NVTX_LINKONCE_FWDDECL_FUNCTION int NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainRangePushEx_impl_init)(nvtxDomainHandle_t domain, const nvtxEventAttributes_t* eventAttrib); +NVTX_LINKONCE_FWDDECL_FUNCTION int NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainRangePop_impl_init)(nvtxDomainHandle_t domain); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxResourceHandle_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainResourceCreate_impl_init)(nvtxDomainHandle_t domain, nvtxResourceAttributes_t* attribs); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API 
NVTX_VERSIONED_IDENTIFIER(nvtxDomainResourceDestroy_impl_init)(nvtxResourceHandle_t resource); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainNameCategoryA_impl_init)(nvtxDomainHandle_t domain, uint32_t category, const char* name); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainNameCategoryW_impl_init)(nvtxDomainHandle_t domain, uint32_t category, const wchar_t* name); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxStringHandle_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainRegisterStringA_impl_init)(nvtxDomainHandle_t domain, const char* string); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxStringHandle_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainRegisterStringW_impl_init)(nvtxDomainHandle_t domain, const wchar_t* string); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxDomainHandle_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainCreateA_impl_init)(const char* message); +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxDomainHandle_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainCreateW_impl_init)(const wchar_t* message); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainDestroy_impl_init)(nvtxDomainHandle_t domain); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxInitialize_impl_init)(const void* reserved); + +NVTX_LINKONCE_FWDDECL_FUNCTION nvtxSyncUser_t NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainSyncUserCreate_impl_init)(nvtxDomainHandle_t domain, const nvtxSyncUserAttributes_t* attribs); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainSyncUserDestroy_impl_init)(nvtxSyncUser_t handle); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainSyncUserAcquireStart_impl_init)(nvtxSyncUser_t handle); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainSyncUserAcquireFailed_impl_init)(nvtxSyncUser_t handle); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainSyncUserAcquireSuccess_impl_init)(nvtxSyncUser_t handle); +NVTX_LINKONCE_FWDDECL_FUNCTION void NVTX_API NVTX_VERSIONED_IDENTIFIER(nvtxDomainSyncUserReleasing_impl_init)(nvtxSyncUser_t handle); diff --git a/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxLinkOnce.h b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxLinkOnce.h new file mode 100644 index 0000000000000000000000000000000000000000..6c6eaa732464519fc7d93f98b646729dab0db8b9 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/nvidia/nvtx/include/nvtx3/nvtxDetail/nvtxLinkOnce.h @@ -0,0 +1,75 @@ +#ifndef __NVTX_LINKONCE_H__ +#define __NVTX_LINKONCE_H__ + +/* This header defines macros to permit making definitions of global variables + * and functions in C/C++ header files which may be included multiple times in + * a translation unit or linkage unit. It allows authoring header-only libraries + * which can be used by multiple other header-only libraries (either as the same + * copy or multiple copies), and does not require any build changes, such as + * adding another .c file, linking a static library, or deploying a dynamic + * library. Globals defined with these macros have the property that they have + * the same address, pointing to a single instance, for the entire linkage unit. + * It is expected but not guaranteed that each linkage unit will have a separate + * instance. 
+ * + * In some situations it is desirable to declare a variable without initializing + * it, refer to it in code or other variables' initializers, and then initialize + * it later. Similarly, functions can be prototyped, have their address taken, + * and then have their body defined later. In such cases, use the FWDDECL macros + * when forward-declaring LINKONCE global variables without initializers and + * function prototypes, and then use the DEFINE macros when later defining them. + * Although in many cases the FWDDECL macro is equivalent to the DEFINE macro, + * following this pattern makes code maximally portable. + */ + +#if defined(__MINGW32__) /* MinGW */ + #define NVTX_LINKONCE_WEAK __attribute__((section(".gnu.linkonce.0."))) + #if defined(__cplusplus) + #define NVTX_LINKONCE_DEFINE_GLOBAL __declspec(selectany) + #define NVTX_LINKONCE_DEFINE_FUNCTION extern "C" inline NVTX_LINKONCE_WEAK + #else + #define NVTX_LINKONCE_DEFINE_GLOBAL __declspec(selectany) + #define NVTX_LINKONCE_DEFINE_FUNCTION NVTX_LINKONCE_WEAK + #endif +#elif defined(_MSC_VER) /* MSVC */ + #if defined(__cplusplus) + #define NVTX_LINKONCE_DEFINE_GLOBAL extern "C" __declspec(selectany) + #define NVTX_LINKONCE_DEFINE_FUNCTION extern "C" inline + #else + #define NVTX_LINKONCE_DEFINE_GLOBAL __declspec(selectany) + #define NVTX_LINKONCE_DEFINE_FUNCTION __inline + #endif +#elif defined(__CYGWIN__) && defined(__clang__) /* Clang on Cygwin */ + #define NVTX_LINKONCE_WEAK __attribute__((section(".gnu.linkonce.0."))) + #if defined(__cplusplus) + #define NVTX_LINKONCE_DEFINE_GLOBAL NVTX_LINKONCE_WEAK + #define NVTX_LINKONCE_DEFINE_FUNCTION extern "C" NVTX_LINKONCE_WEAK + #else + #define NVTX_LINKONCE_DEFINE_GLOBAL NVTX_LINKONCE_WEAK + #define NVTX_LINKONCE_DEFINE_FUNCTION NVTX_LINKONCE_WEAK + #endif +#elif defined(__CYGWIN__) /* Assume GCC or compatible */ + #define NVTX_LINKONCE_WEAK __attribute__((weak)) + #if defined(__cplusplus) + #define NVTX_LINKONCE_DEFINE_GLOBAL __declspec(selectany) + #define NVTX_LINKONCE_DEFINE_FUNCTION extern "C" inline + #else + #define NVTX_LINKONCE_DEFINE_GLOBAL NVTX_LINKONCE_WEAK + #define NVTX_LINKONCE_DEFINE_FUNCTION NVTX_LINKONCE_WEAK + #endif +#else /* All others: Assume GCC, clang, or compatible */ + #define NVTX_LINKONCE_WEAK __attribute__((weak)) + #define NVTX_LINKONCE_HIDDEN __attribute__((visibility("hidden"))) + #if defined(__cplusplus) + #define NVTX_LINKONCE_DEFINE_GLOBAL NVTX_LINKONCE_HIDDEN NVTX_LINKONCE_WEAK + #define NVTX_LINKONCE_DEFINE_FUNCTION extern "C" NVTX_LINKONCE_HIDDEN inline + #else + #define NVTX_LINKONCE_DEFINE_GLOBAL NVTX_LINKONCE_HIDDEN NVTX_LINKONCE_WEAK + #define NVTX_LINKONCE_DEFINE_FUNCTION NVTX_LINKONCE_HIDDEN NVTX_LINKONCE_WEAK + #endif +#endif + +#define NVTX_LINKONCE_FWDDECL_GLOBAL NVTX_LINKONCE_DEFINE_GLOBAL extern +#define NVTX_LINKONCE_FWDDECL_FUNCTION NVTX_LINKONCE_DEFINE_FUNCTION + +#endif /* __NVTX_LINKONCE_H__ */ diff --git a/evalkit_tf437/lib/python3.10/site-packages/portalocker-2.10.1.dist-info/LICENSE b/evalkit_tf437/lib/python3.10/site-packages/portalocker-2.10.1.dist-info/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..b638bda0d3b793bf3c52b37c9ea1f5b6dc0ce98b --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/portalocker-2.10.1.dist-info/LICENSE @@ -0,0 +1,11 @@ +Copyright 2022 Rick van Hattem + +Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: + +1. 
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/algol_nu.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/algol_nu.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..6706a5d31f39ada90130b19588180eafd7d32865 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/algol_nu.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/coffee.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/coffee.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..bf372338f3d03208c3411e584f68f16a23a823c9 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/coffee.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/friendly.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/friendly.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..fbfc8a8a7d71d225f04df00ef18fe07c64c19af4 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/friendly.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/monokai.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/monokai.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..f7581dfb43b1683cf901f2db84a39ed66b8a9b94 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/monokai.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/paraiso_dark.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/paraiso_dark.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..8f9e083764b1510c428a71ca266e33d5481bdb9d Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/paraiso_dark.cpython-310.pyc differ diff --git 
a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/sas.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/sas.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..0c5e35ba78740eb77ebbe1a2749f070cc9ad5d4b Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/sas.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/staroffice.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/staroffice.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..a62ef05b473291f8aca06ed875f61cf0b1835db5 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/staroffice.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/stata_dark.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/stata_dark.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..e6ccdd6790ca7002d58792e1a4b84f6d9618690a Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/stata_dark.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/xcode.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/xcode.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..dc0dffdcd4cc93cf414d6c2f8c7a7d289db4c639 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/xcode.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/zenburn.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/zenburn.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..38d8af8f7d6eff1f96e7e3439389f5fb9193b04c Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/__pycache__/zenburn.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/borland.py b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/borland.py new file mode 100644 index 0000000000000000000000000000000000000000..6bcc6fb37c4e34fa5ae10bee8e6a5d1b779b89c8 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/borland.py @@ -0,0 +1,53 @@ +""" + pygments.styles.borland + ~~~~~~~~~~~~~~~~~~~~~~~ + + Style similar to the style used in the Borland IDEs. + + :copyright: Copyright 2006-2024 by the Pygments team, see AUTHORS. + :license: BSD, see LICENSE for details. +""" + +from pygments.style import Style +from pygments.token import Keyword, Name, Comment, String, Error, \ + Number, Operator, Generic, Whitespace + + +__all__ = ['BorlandStyle'] + + +class BorlandStyle(Style): + """ + Style similar to the style used in the borland IDEs. 
+ """ + name = 'borland' + + styles = { + Whitespace: '#bbbbbb', + + Comment: 'italic #008800', + Comment.Preproc: 'noitalic #008080', + Comment.Special: 'noitalic bold', + + String: '#0000FF', + String.Char: '#800080', + Number: '#0000FF', + Keyword: 'bold #000080', + Operator.Word: 'bold', + Name.Tag: 'bold #000080', + Name.Attribute: '#FF0000', + + Generic.Heading: '#999999', + Generic.Subheading: '#aaaaaa', + Generic.Deleted: 'bg:#ffdddd #000000', + Generic.Inserted: 'bg:#ddffdd #000000', + Generic.Error: '#aa0000', + Generic.Emph: 'italic', + Generic.Strong: 'bold', + Generic.EmphStrong: 'bold italic', + Generic.Prompt: '#555555', + Generic.Output: '#888888', + Generic.Traceback: '#aa0000', + + Error: 'bg:#e3d2d2 #a61717' + } diff --git a/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/lightbulb.py b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/lightbulb.py new file mode 100644 index 0000000000000000000000000000000000000000..4e5658a9f6f8ab98676312552081027e76832bcb --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/pygments/styles/lightbulb.py @@ -0,0 +1,110 @@ +""" + pygments.styles.lightbulb + ~~~~~~~~~~~~~~~~~~~~~~~~~ + + A minimal dark theme based on the Lightbulb theme for VSCode. + + :copyright: Copyright 2006-2024 by the Pygments team, see AUTHORS. + :license: BSD, see LICENSE for details. +""" + +from pygments.style import Style +from pygments.token import ( + Comment, + Error, + Generic, + Keyword, + Literal, + Name, + Number, + Operator, + Punctuation, + String, + Token, +) + + +__all__ = ['LightbulbStyle'] + + +COLORS = { + "bg": "#1d2331", + "blue_1": "#73D0FF", + "gray_1": "#7e8aa1", + "gray_2": "#3c4354", + "gray_3": "#6e7681", + "red_1": "#f88f7f", + "red_2": "#3d1e20", + "orange_1": "#FFAD66", + "orange_2": "#F29E74", + "yellow_1": "#FFD173", + "white": "#d4d2c8", + "magenta_1": "#DFBFFF", + "green_1": "#D5FF80", + "green_2": "#19362c", + "cyan_1": "#95E6CB", +} + + +class LightbulbStyle(Style): + """ + A minimal dark theme based on the Lightbulb theme for VSCode. 
+ """ + + name = 'lightbulb' + + background_color = COLORS['bg'] + highlight_color = COLORS['gray_3'] + + line_number_color = COLORS['gray_2'] + line_number_special_color = COLORS['gray_2'] + + styles = { + Comment: COLORS["gray_1"], + Comment.Hashbang: "italic " + COLORS['red_1'], + Comment.Preproc: "bold " + COLORS['orange_1'], + Comment.Special: "italic " + COLORS['gray_1'], + Error: COLORS['red_1'], + Generic.Deleted: f"bg:{COLORS['red_2']} #f88f7f", + Generic.Emph: "italic", + Generic.Error: "#f88f7f", + Generic.Inserted: f"bg:{COLORS['green_2']} #6ad4af", + Generic.Output: COLORS['gray_1'], + Generic.Strong: "bold", + Generic.Traceback: COLORS['red_1'], + Keyword: COLORS['orange_1'], + Keyword.Constant: COLORS['orange_1'], + Keyword.Declaration: COLORS['orange_1'], + Keyword.Namespace: COLORS['orange_1'], + Keyword.Reserved: COLORS['orange_1'], + Keyword.Type: COLORS['blue_1'], + Literal: COLORS['green_1'], + Name: COLORS['white'], + Name.Attribute: COLORS['yellow_1'], + Name.Builtin: COLORS['yellow_1'], + Name.Builtin.Pseudo: "#5CCFE6", + Name.Class: COLORS['blue_1'], + Name.Constant: COLORS['yellow_1'], + Name.Decorator: "bold italic " + COLORS['gray_1'], + Name.Entity: COLORS['cyan_1'], + Name.Exception: COLORS['blue_1'], + Name.Function: COLORS['yellow_1'], + Name.Function.Magic: COLORS['yellow_1'], + Name.Other: COLORS['white'], + Name.Property: COLORS['yellow_1'], + Name.Tag: "#5CCFE6", + Name.Variable: COLORS['white'], + Number: COLORS['magenta_1'], + Operator: COLORS['orange_1'], + Operator.Word: COLORS['orange_1'], + Punctuation: COLORS['white'], + String: COLORS['green_1'], + String.Affix: COLORS['orange_2'], + String.Doc: COLORS['gray_1'], + String.Escape: COLORS['cyan_1'], + String.Interpol: COLORS['cyan_1'], + String.Other: COLORS['cyan_1'], + String.Regex: COLORS['cyan_1'], + String.Symbol: COLORS['magenta_1'], + Token: COLORS['white'], + } diff --git a/evalkit_tf437/lib/python3.10/site-packages/traitlets-5.14.3.dist-info/METADATA b/evalkit_tf437/lib/python3.10/site-packages/traitlets-5.14.3.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..777822558d1c4d7344a785e17bf6f4d37887cf83 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/traitlets-5.14.3.dist-info/METADATA @@ -0,0 +1,282 @@ +Metadata-Version: 2.3 +Name: traitlets +Version: 5.14.3 +Summary: Traitlets Python configuration system +Project-URL: Homepage, https://github.com/ipython/traitlets +Project-URL: Documentation, https://traitlets.readthedocs.io +Project-URL: Source, https://github.com/ipython/traitlets +Project-URL: Funding, https://numfocus.org +Project-URL: Tracker, https://github.com/ipython/traitlets/issues +Author-email: IPython Development Team +License: BSD 3-Clause License + + - Copyright (c) 2001-, IPython Development Team + + All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + + 3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+License-File: LICENSE
+Keywords: Interactive,Interpreter,Shell,Web
+Classifier: Framework :: IPython
+Classifier: Framework :: Jupyter
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: Science/Research
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Typing :: Typed
+Requires-Python: >=3.8
+Provides-Extra: docs
+Requires-Dist: myst-parser; extra == 'docs'
+Requires-Dist: pydata-sphinx-theme; extra == 'docs'
+Requires-Dist: sphinx; extra == 'docs'
+Provides-Extra: test
+Requires-Dist: argcomplete>=3.0.3; extra == 'test'
+Requires-Dist: mypy>=1.7.0; extra == 'test'
+Requires-Dist: pre-commit; extra == 'test'
+Requires-Dist: pytest-mock; extra == 'test'
+Requires-Dist: pytest-mypy-testing; extra == 'test'
+Requires-Dist: pytest<8.2,>=7.0; extra == 'test'
+Description-Content-Type: text/markdown
+
+# Traitlets
+
+[![Tests](https://github.com/ipython/traitlets/actions/workflows/tests.yml/badge.svg)](https://github.com/ipython/traitlets/actions/workflows/tests.yml)
+[![Documentation Status](https://readthedocs.org/projects/traitlets/badge/?version=latest)](https://traitlets.readthedocs.io/en/latest/?badge=latest)
+[![Tidelift](https://tidelift.com/subscription/pkg/pypi-traitlets)](https://tidelift.com/badges/package/pypi/traitlets)
+
+| | |
+| ------------- | ------------------------------------ |
+| **home** | https://github.com/ipython/traitlets |
+| **pypi-repo** | https://pypi.org/project/traitlets/ |
+| **docs** | https://traitlets.readthedocs.io/ |
+| **license** | Modified BSD License |
+
+Traitlets is a pure Python library enabling:
+
+- the enforcement of strong typing for attributes of Python objects
+  (typed attributes are called _"traits"_);
+- dynamically calculated default values;
+- automatic validation and coercion of trait attributes when attempting a
+  change;
+- registering for receiving notifications when trait values change;
+- reading configuration values from files or from command-line
+  arguments - a distinct layer on top of traitlets, so you may use
+  traitlets without the configuration machinery.
+
+Its implementation relies on the [descriptor](https://docs.python.org/howto/descriptor.html)
+pattern, and it is a lightweight, pure-Python alternative to the
+[_traits_ library](https://docs.enthought.com/traits/).
+
+Traitlets powers the configuration system of IPython and Jupyter
+and the declarative API of IPython interactive widgets.
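+
+As a quick taste of the typed-attribute behavior described above, here is a
+minimal sketch (the `Server`, `port`, and `host` names are illustrative, not
+part of the traitlets API): assigning a value of the wrong type raises a
+`TraitError` at assignment time.
+
+```Python
+from traitlets import HasTraits, Int, Unicode, TraitError
+
+class Server(HasTraits):
+    port = Int(8080)             # typed attribute with a static default
+    host = Unicode("localhost")  # another typed attribute
+
+s = Server()
+s.port = 9000                    # validated and assigned normally
+
+try:
+    s.port = "not-a-port"        # wrong type: rejected at assignment time
+except TraitError as err:
+    print(err)                   # the message names the expected type
+```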
+
+## Installation
+
+For a local installation, make sure you have
+[pip installed](https://pip.pypa.io/en/stable/installing/) and run:
+
+```bash
+pip install traitlets
+```
+
+For a **development installation**, clone this repository, change into the
+`traitlets` root directory, and run pip:
+
+```bash
+git clone https://github.com/ipython/traitlets.git
+cd traitlets
+pip install -e .
+```
+
+## Running the tests
+
+```bash
+pip install "traitlets[test]"
+pytest traitlets
+```
+
+## Code Styling
+
+`traitlets` has adopted automatic code formatting, so you shouldn't
+need to worry too much about your code style.
+As long as your code is valid,
+the pre-commit hook should take care of how it should look.
+
+To install `pre-commit` locally, run the following:
+
+```
+pip install pre-commit
+pre-commit install
+```
+
+You can invoke the pre-commit hook by hand at any time with:
+
+```
+pre-commit run
+```
+
+which should run any autoformatting on your code
+and tell you about any errors it couldn't fix automatically.
+You may also install [black integration](https://github.com/psf/black#editor-integration)
+into your text editor to format code automatically.
+
+If you have already committed files before setting up the pre-commit
+hook with `pre-commit install`, you can fix everything up using
+`pre-commit run --all-files`. You need to make the fixing commit
+yourself after that.
+
+Some of the hooks only run on CI by default, but you can invoke them by
+running with the `--hook-stage manual` argument.
+
+## Usage
+
+Any class with trait attributes must inherit from `HasTraits`.
+For the list of available trait types and their properties, see the
+[Trait Types](https://traitlets.readthedocs.io/en/latest/trait_types.html)
+section of the documentation.
+
+### Dynamic default values
+
+To calculate a default value dynamically, decorate a method of your class with
+`@default('traitname')`, where `traitname` is the name of the trait. This
+method will be called on the instance, and should return the default value.
+In this example, the `_username_default` method is decorated with
+`@default('username')`:
+
+```Python
+import getpass
+from traitlets import HasTraits, Unicode, default
+
+class Identity(HasTraits):
+    username = Unicode()
+
+    @default('username')
+    def _username_default(self):
+        return getpass.getuser()
+```
+
+### Callbacks when a trait attribute changes
+
+When a trait changes, an application can react to the change with
+additional actions.
+
+To do something when a trait attribute is changed, decorate a method with
+[`traitlets.observe()`](https://traitlets.readthedocs.io/en/latest/api.html?highlight=observe#traitlets.observe).
+The method will be called with a single argument: a dictionary containing the
+owner, the new value, the old value, the name of the changed trait, and the
+event type.
+
+In this example, the `_num_changed` method is decorated with `@observe('num')`:
+
+```Python
+from traitlets import HasTraits, Integer, observe
+
+class TraitletsExample(HasTraits):
+    num = Integer(5, help="a number").tag(config=True)
+
+    @observe('num')
+    def _num_changed(self, change):
+        print("{name} changed from {old} to {new}".format(**change))
+```
+
+and is passed the following dictionary when called:
+
+```Python
+{
+    'owner': object,  # The HasTraits instance
+    'new': 6,         # The new value
+    'old': 5,         # The old value
+    'name': "num",    # The name of the changed trait
+    'type': 'change', # The event type of the notification, usually 'change'
+}
+```
+
+### Validation and coercion
+
+Each trait type (`Int`, `Unicode`, `Dict`, etc.) may have its own validation or
+coercion logic. In addition, we can register custom cross-validators
+that may depend on the state of other attributes. For example:
+
+```Python
+from traitlets import HasTraits, TraitError, Int, validate
+
+class Parity(HasTraits):
+    value = Int()
+    parity = Int()
+
+    @validate('value')
+    def _valid_value(self, proposal):
+        if proposal['value'] % 2 != self.parity:
+            raise TraitError('value and parity should be consistent')
+        return proposal['value']
+
+    @validate('parity')
+    def _valid_parity(self, proposal):
+        parity = proposal['value']
+        if parity not in [0, 1]:
+            raise TraitError('parity should be 0 or 1')
+        if self.value % 2 != parity:
+            raise TraitError('value and parity should be consistent')
+        return proposal['value']
+
+parity_check = Parity(value=2)
+
+# Change value and parity together; cross-validation is deferred until the
+# context manager exits, so the intermediate inconsistent state is allowed.
+with parity_check.hold_trait_notifications():
+    parity_check.value = 1
+    parity_check.parity = 1
+```
+
+However, we **recommend** that custom cross-validators don't modify the state
+of the HasTraits instance.
+
+## About the IPython Development Team
+
+The IPython Development Team is the set of all contributors to the IPython project.
+This includes all of the IPython subprojects.
+
+The core team that coordinates development on GitHub can be found here:
+https://github.com/jupyter/.
+
+## Our Copyright Policy
+
+IPython uses a shared copyright model. Each contributor maintains copyright
+over their contributions to IPython. But it is important to note that these
+contributions are typically only changes to the repositories. Thus, the IPython
+source code, in its entirety, is not the copyright of any single person or
+institution. Instead, it is the collective copyright of the entire IPython
+Development Team. If individual contributors want to maintain a record of what
+changes/contributions they have specific copyright on, they should indicate
+their copyright in the commit message of the change, when they commit the
+change to one of the IPython repositories.
+
+With this in mind, the following banner should be used in any source code file
+to indicate the copyright and license terms:
+
+```
+# Copyright (c) IPython Development Team.
+# Distributed under the terms of the Modified BSD License.
+```