ZTWHHH committed · verified
Commit a3da7bd · Parent: 1b20da6

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +4 -0
  2. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/__pycache__/__init__.cpython-310.pyc +0 -0
  3. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/__init__.py +0 -0
  4. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/__pycache__/__init__.cpython-310.pyc +0 -0
  5. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublas.h +891 -0
  6. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublasLt.h +1845 -0
  7. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublasXt.h +693 -0
  8. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublas_api.h +0 -0
  9. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublas_v2.h +478 -0
  10. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/nvblas.h +824 -0
  11. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/lib/__init__.py +0 -0
  12. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/lib/__pycache__/__init__.cpython-310.pyc +0 -0
  13. infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/lib/libnvblas.so.12 +3 -0
  14. infer_4_37_2/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/info.h +344 -0
  15. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/__init__.py +0 -0
  16. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/__pycache__/__init__.cpython-310.pyc +0 -0
  17. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/__init__.py +0 -0
  18. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/__pycache__/__init__.cpython-310.pyc +0 -0
  19. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cudalibxt.h +97 -0
  20. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cufft.h +334 -0
  21. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cufftXt.h +259 -0
  22. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cufftw.h +465 -0
  23. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/lib/__init__.py +0 -0
  24. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/lib/__pycache__/__init__.cpython-310.pyc +0 -0
  25. infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/lib/libcufftw.so.11 +3 -0
  26. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/__init__.py +0 -0
  27. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/__pycache__/__init__.cpython-310.pyc +0 -0
  28. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/__init__.py +0 -0
  29. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverDn.h +0 -0
  30. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverMg.h +318 -0
  31. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverRf.h +339 -0
  32. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolver_common.h +261 -0
  33. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/lib/__init__.py +0 -0
  34. infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/lib/__pycache__/__init__.cpython-310.pyc +0 -0
  35. janus/lib/python3.10/site-packages/sympy/combinatorics/prufer.py +435 -0
  36. janus/lib/python3.10/site-packages/sympy/core/__pycache__/_print_helpers.cpython-310.pyc +0 -0
  37. janus/lib/python3.10/site-packages/sympy/core/__pycache__/add.cpython-310.pyc +0 -0
  38. janus/lib/python3.10/site-packages/sympy/core/__pycache__/alphabets.cpython-310.pyc +0 -0
  39. janus/lib/python3.10/site-packages/sympy/core/__pycache__/assumptions.cpython-310.pyc +0 -0
  40. janus/lib/python3.10/site-packages/sympy/core/__pycache__/assumptions_generated.cpython-310.pyc +0 -0
  41. janus/lib/python3.10/site-packages/sympy/core/__pycache__/backend.cpython-310.pyc +0 -0
  42. janus/lib/python3.10/site-packages/sympy/core/__pycache__/basic.cpython-310.pyc +0 -0
  43. janus/lib/python3.10/site-packages/sympy/core/__pycache__/cache.cpython-310.pyc +0 -0
  44. janus/lib/python3.10/site-packages/sympy/core/__pycache__/compatibility.cpython-310.pyc +0 -0
  45. janus/lib/python3.10/site-packages/sympy/core/__pycache__/containers.cpython-310.pyc +0 -0
  46. janus/lib/python3.10/site-packages/sympy/core/__pycache__/core.cpython-310.pyc +0 -0
  47. janus/lib/python3.10/site-packages/sympy/core/__pycache__/coreerrors.cpython-310.pyc +0 -0
  48. janus/lib/python3.10/site-packages/sympy/core/__pycache__/facts.cpython-310.pyc +0 -0
  49. janus/lib/python3.10/site-packages/sympy/core/__pycache__/intfunc.cpython-310.pyc +0 -0
  50. janus/lib/python3.10/site-packages/sympy/core/__pycache__/kind.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -1119,3 +1119,7 @@ infer_4_37_2/lib/python3.10/site-packages/fontTools/misc/bezierTools.cpython-310
  infer_4_37_2/lib/python3.10/site-packages/fontTools/ttLib/tables/__pycache__/otData.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
  infer_4_37_2/lib/python3.10/site-packages/fontTools/subset/__pycache__/__init__.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
  janus/lib/python3.10/site-packages/sympy/physics/control/__pycache__/lti.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+ janus/lib/python3.10/site-packages/sympy/matrices/tests/__pycache__/test_matrices.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+ janus/lib/python3.10/site-packages/sympy/physics/continuum_mechanics/__pycache__/beam.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+ infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/lib/libnvblas.so.12 filter=lfs diff=lfs merge=lfs -text
+ infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/lib/libcufftw.so.11 filter=lfs diff=lfs merge=lfs -text
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (171 Bytes). View file
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (179 Bytes). View file
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublas.h ADDED
@@ -0,0 +1,891 @@
+ /*
+ * Copyright 1993-2019 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ /*
+ * This is the public header file for the CUBLAS library, defining the API
+ *
+ * CUBLAS is an implementation of BLAS (Basic Linear Algebra Subroutines)
+ * on top of the CUDA runtime.
+ */
+
+ #if !defined(CUBLAS_H_)
+ #define CUBLAS_H_
+
+ #if defined(CUBLAS_V2_H_)
+ #error "It is an error to include both cublas.h and cublas_v2.h"
+ #endif
+
+ #include <cuda_runtime.h>
+
+ #ifndef CUBLASWINAPI
+ #ifdef _WIN32
+ #define CUBLASWINAPI __stdcall
+ #else
+ #define CUBLASWINAPI
+ #endif
+ #endif
+
+ #undef CUBLASAPI
+ #ifdef __CUDACC__
+ #define CUBLASAPI __host__
+ #else
+ #define CUBLASAPI
+ #endif
+
+ #include "cublas_api.h"
+
+ #if defined(__cplusplus)
+ extern "C" {
+ #endif
+
+ /* CUBLAS data types */
+ #define cublasStatus cublasStatus_t
+
+ cublasStatus CUBLASWINAPI cublasInit(void);
+ cublasStatus CUBLASWINAPI cublasShutdown(void);
+ cublasStatus CUBLASWINAPI cublasGetError(void);
+
+ cublasStatus CUBLASWINAPI cublasGetVersion(int* version);
+ cublasStatus CUBLASWINAPI cublasAlloc(int n, int elemSize, void** devicePtr);
+
+ cublasStatus CUBLASWINAPI cublasFree(void* devicePtr);
+
+ cublasStatus CUBLASWINAPI cublasSetKernelStream(cudaStream_t stream);
+
+ /* ---------------- CUBLAS BLAS1 functions ---------------- */
+ /* NRM2 */
+ float CUBLASWINAPI cublasSnrm2(int n, const float* x, int incx);
+ double CUBLASWINAPI cublasDnrm2(int n, const double* x, int incx);
+ float CUBLASWINAPI cublasScnrm2(int n, const cuComplex* x, int incx);
+ double CUBLASWINAPI cublasDznrm2(int n, const cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* DOT */
+ float CUBLASWINAPI cublasSdot(int n, const float* x, int incx, const float* y, int incy);
+ double CUBLASWINAPI cublasDdot(int n, const double* x, int incx, const double* y, int incy);
+ cuComplex CUBLASWINAPI cublasCdotu(int n, const cuComplex* x, int incx, const cuComplex* y, int incy);
+ cuComplex CUBLASWINAPI cublasCdotc(int n, const cuComplex* x, int incx, const cuComplex* y, int incy);
+ cuDoubleComplex CUBLASWINAPI cublasZdotu(int n, const cuDoubleComplex* x, int incx, const cuDoubleComplex* y, int incy);
+ cuDoubleComplex CUBLASWINAPI cublasZdotc(int n, const cuDoubleComplex* x, int incx, const cuDoubleComplex* y, int incy);
+ /*------------------------------------------------------------------------*/
+ /* SCAL */
+ void CUBLASWINAPI cublasSscal(int n, float alpha, float* x, int incx);
+ void CUBLASWINAPI cublasDscal(int n, double alpha, double* x, int incx);
+ void CUBLASWINAPI cublasCscal(int n, cuComplex alpha, cuComplex* x, int incx);
+ void CUBLASWINAPI cublasZscal(int n, cuDoubleComplex alpha, cuDoubleComplex* x, int incx);
+
+ void CUBLASWINAPI cublasCsscal(int n, float alpha, cuComplex* x, int incx);
+ void CUBLASWINAPI cublasZdscal(int n, double alpha, cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* AXPY */
+ void CUBLASWINAPI cublasSaxpy(int n, float alpha, const float* x, int incx, float* y, int incy);
+ void CUBLASWINAPI cublasDaxpy(int n, double alpha, const double* x, int incx, double* y, int incy);
+ void CUBLASWINAPI cublasCaxpy(int n, cuComplex alpha, const cuComplex* x, int incx, cuComplex* y, int incy);
+ void CUBLASWINAPI
+ cublasZaxpy(int n, cuDoubleComplex alpha, const cuDoubleComplex* x, int incx, cuDoubleComplex* y, int incy);
+ /*------------------------------------------------------------------------*/
+ /* COPY */
+ void CUBLASWINAPI cublasScopy(int n, const float* x, int incx, float* y, int incy);
+ void CUBLASWINAPI cublasDcopy(int n, const double* x, int incx, double* y, int incy);
+ void CUBLASWINAPI cublasCcopy(int n, const cuComplex* x, int incx, cuComplex* y, int incy);
+ void CUBLASWINAPI cublasZcopy(int n, const cuDoubleComplex* x, int incx, cuDoubleComplex* y, int incy);
+ /*------------------------------------------------------------------------*/
+ /* SWAP */
+ void CUBLASWINAPI cublasSswap(int n, float* x, int incx, float* y, int incy);
+ void CUBLASWINAPI cublasDswap(int n, double* x, int incx, double* y, int incy);
+ void CUBLASWINAPI cublasCswap(int n, cuComplex* x, int incx, cuComplex* y, int incy);
+ void CUBLASWINAPI cublasZswap(int n, cuDoubleComplex* x, int incx, cuDoubleComplex* y, int incy);
+ /*------------------------------------------------------------------------*/
+ /* AMAX */
+ int CUBLASWINAPI cublasIsamax(int n, const float* x, int incx);
+ int CUBLASWINAPI cublasIdamax(int n, const double* x, int incx);
+ int CUBLASWINAPI cublasIcamax(int n, const cuComplex* x, int incx);
+ int CUBLASWINAPI cublasIzamax(int n, const cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* AMIN */
+ int CUBLASWINAPI cublasIsamin(int n, const float* x, int incx);
+ int CUBLASWINAPI cublasIdamin(int n, const double* x, int incx);
+
+ int CUBLASWINAPI cublasIcamin(int n, const cuComplex* x, int incx);
+ int CUBLASWINAPI cublasIzamin(int n, const cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* ASUM */
+ float CUBLASWINAPI cublasSasum(int n, const float* x, int incx);
+ double CUBLASWINAPI cublasDasum(int n, const double* x, int incx);
+ float CUBLASWINAPI cublasScasum(int n, const cuComplex* x, int incx);
+ double CUBLASWINAPI cublasDzasum(int n, const cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* ROT */
+ void CUBLASWINAPI cublasSrot(int n, float* x, int incx, float* y, int incy, float sc, float ss);
+ void CUBLASWINAPI cublasDrot(int n, double* x, int incx, double* y, int incy, double sc, double ss);
+ void CUBLASWINAPI cublasCrot(int n, cuComplex* x, int incx, cuComplex* y, int incy, float c, cuComplex s);
+ void CUBLASWINAPI
+ cublasZrot(int n, cuDoubleComplex* x, int incx, cuDoubleComplex* y, int incy, double sc, cuDoubleComplex cs);
+ void CUBLASWINAPI cublasCsrot(int n, cuComplex* x, int incx, cuComplex* y, int incy, float c, float s);
+ void CUBLASWINAPI cublasZdrot(int n, cuDoubleComplex* x, int incx, cuDoubleComplex* y, int incy, double c, double s);
+ /*------------------------------------------------------------------------*/
+ /* ROTG */
+ void CUBLASWINAPI cublasSrotg(float* sa, float* sb, float* sc, float* ss);
+ void CUBLASWINAPI cublasDrotg(double* sa, double* sb, double* sc, double* ss);
+ void CUBLASWINAPI cublasCrotg(cuComplex* ca, cuComplex cb, float* sc, cuComplex* cs);
+ void CUBLASWINAPI cublasZrotg(cuDoubleComplex* ca, cuDoubleComplex cb, double* sc, cuDoubleComplex* cs);
+ /*------------------------------------------------------------------------*/
+ /* ROTM */
+ void CUBLASWINAPI cublasSrotm(int n, float* x, int incx, float* y, int incy, const float* sparam);
+ void CUBLASWINAPI cublasDrotm(int n, double* x, int incx, double* y, int incy, const double* sparam);
+ /*------------------------------------------------------------------------*/
+ /* ROTMG */
+ void CUBLASWINAPI cublasSrotmg(float* sd1, float* sd2, float* sx1, const float* sy1, float* sparam);
+ void CUBLASWINAPI cublasDrotmg(double* sd1, double* sd2, double* sx1, const double* sy1, double* sparam);
+
+ /* --------------- CUBLAS BLAS2 functions ---------------- */
+ /* GEMV */
+ void CUBLASWINAPI cublasSgemv(char trans,
+ int m,
+ int n,
+ float alpha,
+ const float* A,
+ int lda,
+ const float* x,
+ int incx,
+ float beta,
+ float* y,
+ int incy);
+ void CUBLASWINAPI cublasDgemv(char trans,
+ int m,
+ int n,
+ double alpha,
+ const double* A,
+ int lda,
+ const double* x,
+ int incx,
+ double beta,
+ double* y,
+ int incy);
+ void CUBLASWINAPI cublasCgemv(char trans,
+ int m,
+ int n,
+ cuComplex alpha,
+ const cuComplex* A,
+ int lda,
+ const cuComplex* x,
+ int incx,
+ cuComplex beta,
+ cuComplex* y,
+ int incy);
+ void CUBLASWINAPI cublasZgemv(char trans,
+ int m,
+ int n,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* A,
+ int lda,
+ const cuDoubleComplex* x,
+ int incx,
+ cuDoubleComplex beta,
+ cuDoubleComplex* y,
+ int incy);
+ /*------------------------------------------------------------------------*/
+ /* GBMV */
+ void CUBLASWINAPI cublasSgbmv(char trans,
+ int m,
+ int n,
+ int kl,
+ int ku,
+ float alpha,
+ const float* A,
+ int lda,
+ const float* x,
+ int incx,
+ float beta,
+ float* y,
+ int incy);
+ void CUBLASWINAPI cublasDgbmv(char trans,
+ int m,
+ int n,
+ int kl,
+ int ku,
+ double alpha,
+ const double* A,
+ int lda,
+ const double* x,
+ int incx,
+ double beta,
+ double* y,
+ int incy);
+ void CUBLASWINAPI cublasCgbmv(char trans,
+ int m,
+ int n,
+ int kl,
+ int ku,
+ cuComplex alpha,
+ const cuComplex* A,
+ int lda,
+ const cuComplex* x,
+ int incx,
+ cuComplex beta,
+ cuComplex* y,
+ int incy);
+ void CUBLASWINAPI cublasZgbmv(char trans,
+ int m,
+ int n,
+ int kl,
+ int ku,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* A,
+ int lda,
+ const cuDoubleComplex* x,
+ int incx,
+ cuDoubleComplex beta,
+ cuDoubleComplex* y,
+ int incy);
+ /*------------------------------------------------------------------------*/
+ /* TRMV */
+ void CUBLASWINAPI cublasStrmv(char uplo, char trans, char diag, int n, const float* A, int lda, float* x, int incx);
+ void CUBLASWINAPI cublasDtrmv(char uplo, char trans, char diag, int n, const double* A, int lda, double* x, int incx);
+ void CUBLASWINAPI
+ cublasCtrmv(char uplo, char trans, char diag, int n, const cuComplex* A, int lda, cuComplex* x, int incx);
+ void CUBLASWINAPI
+ cublasZtrmv(char uplo, char trans, char diag, int n, const cuDoubleComplex* A, int lda, cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* TBMV */
+ void CUBLASWINAPI
+ cublasStbmv(char uplo, char trans, char diag, int n, int k, const float* A, int lda, float* x, int incx);
+ void CUBLASWINAPI
+ cublasDtbmv(char uplo, char trans, char diag, int n, int k, const double* A, int lda, double* x, int incx);
+ void CUBLASWINAPI
+ cublasCtbmv(char uplo, char trans, char diag, int n, int k, const cuComplex* A, int lda, cuComplex* x, int incx);
+ void CUBLASWINAPI cublasZtbmv(
+ char uplo, char trans, char diag, int n, int k, const cuDoubleComplex* A, int lda, cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* TPMV */
+ void CUBLASWINAPI cublasStpmv(char uplo, char trans, char diag, int n, const float* AP, float* x, int incx);
+
+ void CUBLASWINAPI cublasDtpmv(char uplo, char trans, char diag, int n, const double* AP, double* x, int incx);
+
+ void CUBLASWINAPI cublasCtpmv(char uplo, char trans, char diag, int n, const cuComplex* AP, cuComplex* x, int incx);
+
+ void CUBLASWINAPI
+ cublasZtpmv(char uplo, char trans, char diag, int n, const cuDoubleComplex* AP, cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* TRSV */
+ void CUBLASWINAPI cublasStrsv(char uplo, char trans, char diag, int n, const float* A, int lda, float* x, int incx);
+
+ void CUBLASWINAPI cublasDtrsv(char uplo, char trans, char diag, int n, const double* A, int lda, double* x, int incx);
+
+ void CUBLASWINAPI
+ cublasCtrsv(char uplo, char trans, char diag, int n, const cuComplex* A, int lda, cuComplex* x, int incx);
+
+ void CUBLASWINAPI
+ cublasZtrsv(char uplo, char trans, char diag, int n, const cuDoubleComplex* A, int lda, cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* TPSV */
+ void CUBLASWINAPI cublasStpsv(char uplo, char trans, char diag, int n, const float* AP, float* x, int incx);
+
+ void CUBLASWINAPI cublasDtpsv(char uplo, char trans, char diag, int n, const double* AP, double* x, int incx);
+
+ void CUBLASWINAPI cublasCtpsv(char uplo, char trans, char diag, int n, const cuComplex* AP, cuComplex* x, int incx);
+
+ void CUBLASWINAPI
+ cublasZtpsv(char uplo, char trans, char diag, int n, const cuDoubleComplex* AP, cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* TBSV */
+ void CUBLASWINAPI
+ cublasStbsv(char uplo, char trans, char diag, int n, int k, const float* A, int lda, float* x, int incx);
+
+ void CUBLASWINAPI
+ cublasDtbsv(char uplo, char trans, char diag, int n, int k, const double* A, int lda, double* x, int incx);
+ void CUBLASWINAPI
+ cublasCtbsv(char uplo, char trans, char diag, int n, int k, const cuComplex* A, int lda, cuComplex* x, int incx);
+
+ void CUBLASWINAPI cublasZtbsv(
+ char uplo, char trans, char diag, int n, int k, const cuDoubleComplex* A, int lda, cuDoubleComplex* x, int incx);
+ /*------------------------------------------------------------------------*/
+ /* SYMV/HEMV */
+ void CUBLASWINAPI cublasSsymv(
+ char uplo, int n, float alpha, const float* A, int lda, const float* x, int incx, float beta, float* y, int incy);
+ void CUBLASWINAPI cublasDsymv(char uplo,
+ int n,
+ double alpha,
+ const double* A,
+ int lda,
+ const double* x,
+ int incx,
+ double beta,
+ double* y,
+ int incy);
+ void CUBLASWINAPI cublasChemv(char uplo,
+ int n,
+ cuComplex alpha,
+ const cuComplex* A,
+ int lda,
+ const cuComplex* x,
+ int incx,
+ cuComplex beta,
+ cuComplex* y,
+ int incy);
+ void CUBLASWINAPI cublasZhemv(char uplo,
+ int n,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* A,
+ int lda,
+ const cuDoubleComplex* x,
+ int incx,
+ cuDoubleComplex beta,
+ cuDoubleComplex* y,
+ int incy);
+ /*------------------------------------------------------------------------*/
+ /* SBMV/HBMV */
+ void CUBLASWINAPI cublasSsbmv(char uplo,
+ int n,
+ int k,
+ float alpha,
+ const float* A,
+ int lda,
+ const float* x,
+ int incx,
+ float beta,
+ float* y,
+ int incy);
+ void CUBLASWINAPI cublasDsbmv(char uplo,
+ int n,
+ int k,
+ double alpha,
+ const double* A,
+ int lda,
+ const double* x,
+ int incx,
+ double beta,
+ double* y,
+ int incy);
+ void CUBLASWINAPI cublasChbmv(char uplo,
+ int n,
+ int k,
+ cuComplex alpha,
+ const cuComplex* A,
+ int lda,
+ const cuComplex* x,
+ int incx,
+ cuComplex beta,
+ cuComplex* y,
+ int incy);
+ void CUBLASWINAPI cublasZhbmv(char uplo,
+ int n,
+ int k,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* A,
+ int lda,
+ const cuDoubleComplex* x,
+ int incx,
+ cuDoubleComplex beta,
+ cuDoubleComplex* y,
+ int incy);
+ /*------------------------------------------------------------------------*/
+ /* SPMV/HPMV */
+ void CUBLASWINAPI
+ cublasSspmv(char uplo, int n, float alpha, const float* AP, const float* x, int incx, float beta, float* y, int incy);
+ void CUBLASWINAPI cublasDspmv(
+ char uplo, int n, double alpha, const double* AP, const double* x, int incx, double beta, double* y, int incy);
+ void CUBLASWINAPI cublasChpmv(char uplo,
+ int n,
+ cuComplex alpha,
+ const cuComplex* AP,
+ const cuComplex* x,
+ int incx,
+ cuComplex beta,
+ cuComplex* y,
+ int incy);
+ void CUBLASWINAPI cublasZhpmv(char uplo,
+ int n,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* AP,
+ const cuDoubleComplex* x,
+ int incx,
+ cuDoubleComplex beta,
+ cuDoubleComplex* y,
+ int incy);
+
+ /*------------------------------------------------------------------------*/
+ /* GER */
+ void CUBLASWINAPI
+ cublasSger(int m, int n, float alpha, const float* x, int incx, const float* y, int incy, float* A, int lda);
+ void CUBLASWINAPI
+ cublasDger(int m, int n, double alpha, const double* x, int incx, const double* y, int incy, double* A, int lda);
+
+ void CUBLASWINAPI cublasCgeru(
+ int m, int n, cuComplex alpha, const cuComplex* x, int incx, const cuComplex* y, int incy, cuComplex* A, int lda);
+ void CUBLASWINAPI cublasCgerc(
+ int m, int n, cuComplex alpha, const cuComplex* x, int incx, const cuComplex* y, int incy, cuComplex* A, int lda);
+ void CUBLASWINAPI cublasZgeru(int m,
+ int n,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* x,
+ int incx,
+ const cuDoubleComplex* y,
+ int incy,
+ cuDoubleComplex* A,
+ int lda);
+ void CUBLASWINAPI cublasZgerc(int m,
+ int n,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* x,
+ int incx,
+ const cuDoubleComplex* y,
+ int incy,
+ cuDoubleComplex* A,
+ int lda);
+ /*------------------------------------------------------------------------*/
+ /* SYR/HER */
+ void CUBLASWINAPI cublasSsyr(char uplo, int n, float alpha, const float* x, int incx, float* A, int lda);
+ void CUBLASWINAPI cublasDsyr(char uplo, int n, double alpha, const double* x, int incx, double* A, int lda);
+
+ void CUBLASWINAPI cublasCher(char uplo, int n, float alpha, const cuComplex* x, int incx, cuComplex* A, int lda);
+ void CUBLASWINAPI
+ cublasZher(char uplo, int n, double alpha, const cuDoubleComplex* x, int incx, cuDoubleComplex* A, int lda);
+
+ /*------------------------------------------------------------------------*/
+ /* SPR/HPR */
+ void CUBLASWINAPI cublasSspr(char uplo, int n, float alpha, const float* x, int incx, float* AP);
+ void CUBLASWINAPI cublasDspr(char uplo, int n, double alpha, const double* x, int incx, double* AP);
+ void CUBLASWINAPI cublasChpr(char uplo, int n, float alpha, const cuComplex* x, int incx, cuComplex* AP);
+ void CUBLASWINAPI cublasZhpr(char uplo, int n, double alpha, const cuDoubleComplex* x, int incx, cuDoubleComplex* AP);
+ /*------------------------------------------------------------------------*/
+ /* SYR2/HER2 */
+ void CUBLASWINAPI
+ cublasSsyr2(char uplo, int n, float alpha, const float* x, int incx, const float* y, int incy, float* A, int lda);
+ void CUBLASWINAPI
+ cublasDsyr2(char uplo, int n, double alpha, const double* x, int incx, const double* y, int incy, double* A, int lda);
+ void CUBLASWINAPI cublasCher2(char uplo,
+ int n,
+ cuComplex alpha,
+ const cuComplex* x,
+ int incx,
+ const cuComplex* y,
+ int incy,
+ cuComplex* A,
+ int lda);
+ void CUBLASWINAPI cublasZher2(char uplo,
+ int n,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* x,
+ int incx,
+ const cuDoubleComplex* y,
+ int incy,
+ cuDoubleComplex* A,
+ int lda);
+
+ /*------------------------------------------------------------------------*/
+ /* SPR2/HPR2 */
+ void CUBLASWINAPI
+ cublasSspr2(char uplo, int n, float alpha, const float* x, int incx, const float* y, int incy, float* AP);
+ void CUBLASWINAPI
+ cublasDspr2(char uplo, int n, double alpha, const double* x, int incx, const double* y, int incy, double* AP);
+ void CUBLASWINAPI cublasChpr2(
+ char uplo, int n, cuComplex alpha, const cuComplex* x, int incx, const cuComplex* y, int incy, cuComplex* AP);
+ void CUBLASWINAPI cublasZhpr2(char uplo,
+ int n,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* x,
+ int incx,
+ const cuDoubleComplex* y,
+ int incy,
+ cuDoubleComplex* AP);
+ /* ------------------------BLAS3 Functions ------------------------------- */
+ /* GEMM */
+ void CUBLASWINAPI cublasSgemm(char transa,
+ char transb,
+ int m,
+ int n,
+ int k,
+ float alpha,
+ const float* A,
+ int lda,
+ const float* B,
+ int ldb,
+ float beta,
+ float* C,
+ int ldc);
+ void CUBLASWINAPI cublasDgemm(char transa,
+ char transb,
+ int m,
+ int n,
+ int k,
+ double alpha,
+ const double* A,
+ int lda,
+ const double* B,
+ int ldb,
+ double beta,
+ double* C,
+ int ldc);
+ void CUBLASWINAPI cublasCgemm(char transa,
+ char transb,
+ int m,
+ int n,
+ int k,
+ cuComplex alpha,
+ const cuComplex* A,
+ int lda,
+ const cuComplex* B,
+ int ldb,
+ cuComplex beta,
+ cuComplex* C,
+ int ldc);
+ void CUBLASWINAPI cublasZgemm(char transa,
+ char transb,
+ int m,
+ int n,
+ int k,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* A,
+ int lda,
+ const cuDoubleComplex* B,
+ int ldb,
+ cuDoubleComplex beta,
+ cuDoubleComplex* C,
+ int ldc);
+ /* -------------------------------------------------------*/
+ /* SYRK */
+ void CUBLASWINAPI
+ cublasSsyrk(char uplo, char trans, int n, int k, float alpha, const float* A, int lda, float beta, float* C, int ldc);
+ void CUBLASWINAPI cublasDsyrk(
+ char uplo, char trans, int n, int k, double alpha, const double* A, int lda, double beta, double* C, int ldc);
+
+ void CUBLASWINAPI cublasCsyrk(char uplo,
+ char trans,
+ int n,
+ int k,
+ cuComplex alpha,
+ const cuComplex* A,
+ int lda,
+ cuComplex beta,
+ cuComplex* C,
+ int ldc);
+ void CUBLASWINAPI cublasZsyrk(char uplo,
+ char trans,
+ int n,
+ int k,
+ cuDoubleComplex alpha,
+ const cuDoubleComplex* A,
+ int lda,
+ cuDoubleComplex beta,
+ cuDoubleComplex* C,
+ int ldc);
+ /* ------------------------------------------------------- */
+ /* HERK */
+ void CUBLASWINAPI cublasCherk(
+ char uplo, char trans, int n, int k, float alpha, const cuComplex* A, int lda, float beta, cuComplex* C, int ldc);
622
+ void CUBLASWINAPI cublasZherk(char uplo,
623
+ char trans,
624
+ int n,
625
+ int k,
626
+ double alpha,
627
+ const cuDoubleComplex* A,
628
+ int lda,
629
+ double beta,
630
+ cuDoubleComplex* C,
631
+ int ldc);
632
+ /* ------------------------------------------------------- */
633
+ /* SYR2K */
634
+ void CUBLASWINAPI cublasSsyr2k(char uplo,
635
+ char trans,
636
+ int n,
637
+ int k,
638
+ float alpha,
639
+ const float* A,
640
+ int lda,
641
+ const float* B,
642
+ int ldb,
643
+ float beta,
644
+ float* C,
645
+ int ldc);
646
+
647
+ void CUBLASWINAPI cublasDsyr2k(char uplo,
648
+ char trans,
649
+ int n,
650
+ int k,
651
+ double alpha,
652
+ const double* A,
653
+ int lda,
654
+ const double* B,
655
+ int ldb,
656
+ double beta,
657
+ double* C,
658
+ int ldc);
659
+ void CUBLASWINAPI cublasCsyr2k(char uplo,
660
+ char trans,
661
+ int n,
662
+ int k,
663
+ cuComplex alpha,
664
+ const cuComplex* A,
665
+ int lda,
666
+ const cuComplex* B,
667
+ int ldb,
668
+ cuComplex beta,
669
+ cuComplex* C,
670
+ int ldc);
671
+
672
+ void CUBLASWINAPI cublasZsyr2k(char uplo,
673
+ char trans,
674
+ int n,
675
+ int k,
676
+ cuDoubleComplex alpha,
677
+ const cuDoubleComplex* A,
678
+ int lda,
679
+ const cuDoubleComplex* B,
680
+ int ldb,
681
+ cuDoubleComplex beta,
682
+ cuDoubleComplex* C,
683
+ int ldc);
684
+ /* ------------------------------------------------------- */
685
+ /* HER2K */
686
+ void CUBLASWINAPI cublasCher2k(char uplo,
687
+ char trans,
688
+ int n,
689
+ int k,
690
+ cuComplex alpha,
691
+ const cuComplex* A,
692
+ int lda,
693
+ const cuComplex* B,
694
+ int ldb,
695
+ float beta,
696
+ cuComplex* C,
697
+ int ldc);
698
+
699
+ void CUBLASWINAPI cublasZher2k(char uplo,
700
+ char trans,
701
+ int n,
702
+ int k,
703
+ cuDoubleComplex alpha,
704
+ const cuDoubleComplex* A,
705
+ int lda,
706
+ const cuDoubleComplex* B,
707
+ int ldb,
708
+ double beta,
709
+ cuDoubleComplex* C,
710
+ int ldc);
711
+
712
+ /*------------------------------------------------------------------------*/
713
+ /* SYMM*/
714
+ void CUBLASWINAPI cublasSsymm(char side,
715
+ char uplo,
716
+ int m,
717
+ int n,
718
+ float alpha,
719
+ const float* A,
720
+ int lda,
721
+ const float* B,
722
+ int ldb,
723
+ float beta,
724
+ float* C,
725
+ int ldc);
726
+ void CUBLASWINAPI cublasDsymm(char side,
727
+ char uplo,
728
+ int m,
729
+ int n,
730
+ double alpha,
731
+ const double* A,
732
+ int lda,
733
+ const double* B,
734
+ int ldb,
735
+ double beta,
736
+ double* C,
737
+ int ldc);
738
+
739
+ void CUBLASWINAPI cublasCsymm(char side,
740
+ char uplo,
741
+ int m,
742
+ int n,
743
+ cuComplex alpha,
744
+ const cuComplex* A,
745
+ int lda,
746
+ const cuComplex* B,
747
+ int ldb,
748
+ cuComplex beta,
749
+ cuComplex* C,
750
+ int ldc);
751
+
752
+ void CUBLASWINAPI cublasZsymm(char side,
753
+ char uplo,
754
+ int m,
755
+ int n,
756
+ cuDoubleComplex alpha,
757
+ const cuDoubleComplex* A,
758
+ int lda,
759
+ const cuDoubleComplex* B,
760
+ int ldb,
761
+ cuDoubleComplex beta,
762
+ cuDoubleComplex* C,
763
+ int ldc);
764
+ /*------------------------------------------------------------------------*/
765
+ /* HEMM*/
766
+ void CUBLASWINAPI cublasChemm(char side,
767
+ char uplo,
768
+ int m,
769
+ int n,
770
+ cuComplex alpha,
771
+ const cuComplex* A,
772
+ int lda,
773
+ const cuComplex* B,
774
+ int ldb,
775
+ cuComplex beta,
776
+ cuComplex* C,
777
+ int ldc);
778
+ void CUBLASWINAPI cublasZhemm(char side,
779
+ char uplo,
780
+ int m,
781
+ int n,
782
+ cuDoubleComplex alpha,
783
+ const cuDoubleComplex* A,
784
+ int lda,
785
+ const cuDoubleComplex* B,
786
+ int ldb,
787
+ cuDoubleComplex beta,
788
+ cuDoubleComplex* C,
789
+ int ldc);
790
+
791
+ /*------------------------------------------------------------------------*/
792
+ /* TRSM*/
793
+ void CUBLASWINAPI cublasStrsm(char side,
794
+ char uplo,
795
+ char transa,
796
+ char diag,
797
+ int m,
798
+ int n,
799
+ float alpha,
800
+ const float* A,
801
+ int lda,
802
+ float* B,
803
+ int ldb);
804
+
805
+ void CUBLASWINAPI cublasDtrsm(char side,
806
+ char uplo,
807
+ char transa,
808
+ char diag,
809
+ int m,
810
+ int n,
811
+ double alpha,
812
+ const double* A,
813
+ int lda,
814
+ double* B,
815
+ int ldb);
816
+
817
+ void CUBLASWINAPI cublasCtrsm(char side,
818
+ char uplo,
819
+ char transa,
820
+ char diag,
821
+ int m,
822
+ int n,
823
+ cuComplex alpha,
824
+ const cuComplex* A,
825
+ int lda,
826
+ cuComplex* B,
827
+ int ldb);
828
+
829
+ void CUBLASWINAPI cublasZtrsm(char side,
830
+ char uplo,
831
+ char transa,
832
+ char diag,
833
+ int m,
834
+ int n,
835
+ cuDoubleComplex alpha,
836
+ const cuDoubleComplex* A,
837
+ int lda,
838
+ cuDoubleComplex* B,
839
+ int ldb);
840
+ /*------------------------------------------------------------------------*/
841
+ /* TRMM*/
842
+ void CUBLASWINAPI cublasStrmm(char side,
843
+ char uplo,
844
+ char transa,
845
+ char diag,
846
+ int m,
847
+ int n,
848
+ float alpha,
849
+ const float* A,
850
+ int lda,
851
+ float* B,
852
+ int ldb);
853
+ void CUBLASWINAPI cublasDtrmm(char side,
854
+ char uplo,
855
+ char transa,
856
+ char diag,
857
+ int m,
858
+ int n,
859
+ double alpha,
860
+ const double* A,
861
+ int lda,
862
+ double* B,
863
+ int ldb);
864
+ void CUBLASWINAPI cublasCtrmm(char side,
865
+ char uplo,
866
+ char transa,
867
+ char diag,
868
+ int m,
869
+ int n,
870
+ cuComplex alpha,
871
+ const cuComplex* A,
872
+ int lda,
873
+ cuComplex* B,
874
+ int ldb);
875
+ void CUBLASWINAPI cublasZtrmm(char side,
876
+ char uplo,
877
+ char transa,
878
+ char diag,
879
+ int m,
880
+ int n,
881
+ cuDoubleComplex alpha,
882
+ const cuDoubleComplex* A,
883
+ int lda,
884
+ cuDoubleComplex* B,
885
+ int ldb);
886
+
887
+ #if defined(__cplusplus)
888
+ }
889
+ #endif /* __cplusplus */
890
+
891
+ #endif /* !defined(CUBLAS_H_) */
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublasLt.h ADDED
@@ -0,0 +1,1845 @@
+ /*
+ * Copyright 1993-2022 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+ #pragma once
+
+ #ifndef CUBLASAPI
+ #ifdef __CUDACC__
+ #define CUBLASAPI __host__ __device__
+ #else
+ #define CUBLASAPI
+ #endif
+ #endif
+
+ #include <cublas_api.h>
+
+ #include <stdint.h>
+ #include <stddef.h>
+ #include <stdio.h>
+
+ #if defined(__cplusplus)
+ extern "C" {
+ #endif /* __cplusplus */
+
+ /** Opaque structure holding CUBLASLT context
+ */
+ typedef struct cublasLtContext* cublasLtHandle_t;
+
+ cublasStatus_t CUBLASWINAPI cublasLtCreate(cublasLtHandle_t* lightHandle);
+
+ cublasStatus_t CUBLASWINAPI cublasLtDestroy(cublasLtHandle_t lightHandle);
+
+ const char* CUBLASWINAPI cublasLtGetStatusName(cublasStatus_t status);
+
+ const char* CUBLASWINAPI cublasLtGetStatusString(cublasStatus_t status);
+
+ size_t CUBLASWINAPI cublasLtGetVersion(void);
+
+ size_t CUBLASWINAPI cublasLtGetCudartVersion(void);
+
+ cublasStatus_t CUBLASWINAPI cublasLtGetProperty(libraryPropertyType type, int* value);
+
+ cublasStatus_t CUBLASWINAPI cublasLtHeuristicsCacheGetCapacity(size_t* capacity);
+ cublasStatus_t CUBLASWINAPI cublasLtHeuristicsCacheSetCapacity(size_t capacity);
+
+ /** Restricts usage of CPU instructions (ISA) specified by the flags in the mask.
+ *
+ * Flags can be combined with bitwise OR(|) operator. Supported flags:
+ * - 0x1 -- x86-64 AVX512 ISA
+ *
+ * Default mask: 0 (any applicable ISA is allowed).
+ *
+ * The function returns the previous value of the mask.
+ * The function takes precedence over the environment variable CUBLASLT_DISABLE_CPU_INSTRUCTIONS_MASK.
+ */
+ unsigned CUBLASWINAPI cublasLtDisableCpuInstructionsSetMask(unsigned mask);
+
+ /** Semi-opaque descriptor for matrix memory layout
+ */
+ typedef struct {
+ uint64_t data[8];
+ } cublasLtMatrixLayoutOpaque_t;
+
+ /** Opaque descriptor for matrix memory layout
+ */
+ typedef cublasLtMatrixLayoutOpaque_t* cublasLtMatrixLayout_t;
+
+ /** Semi-opaque algorithm descriptor (to avoid complicated alloc/free schemes)
+ *
+ * This structure can be trivially serialized and later restored for use with the same version of cuBLAS library to save
+ * on selecting the right configuration again.
+ */
+ typedef struct {
+ uint64_t data[8];
+ } cublasLtMatmulAlgo_t;
+
+ /** Semi-opaque descriptor for cublasLtMatmul() operation details
+ */
+ typedef struct {
+ uint64_t data[32];
+ } cublasLtMatmulDescOpaque_t;
+
+ /** Opaque descriptor for cublasLtMatmul() operation details
+ */
+ typedef cublasLtMatmulDescOpaque_t* cublasLtMatmulDesc_t;
+
+ /** Semi-opaque descriptor for cublasLtMatrixTransform() operation details
+ */
+ typedef struct {
+ uint64_t data[8];
+ } cublasLtMatrixTransformDescOpaque_t;
+
+ /** Opaque descriptor for cublasLtMatrixTransform() operation details
+ */
+ typedef cublasLtMatrixTransformDescOpaque_t* cublasLtMatrixTransformDesc_t;
+
+ /** Semi-opaque descriptor for cublasLtMatmulPreference() operation details
+ */
+ typedef struct {
+ uint64_t data[8];
+ } cublasLtMatmulPreferenceOpaque_t;
+
+ /** Opaque descriptor for cublasLtMatmulAlgoGetHeuristic() configuration
+ */
+ typedef cublasLtMatmulPreferenceOpaque_t* cublasLtMatmulPreference_t;
+
+ /** Tile size (in C/D matrix Rows x Cols)
+ *
+ * General order of tile IDs is sorted by size first and by first dimension second.
+ */
+ typedef enum {
+ CUBLASLT_MATMUL_TILE_UNDEFINED = 0,
+ CUBLASLT_MATMUL_TILE_8x8 = 1,
+ CUBLASLT_MATMUL_TILE_8x16 = 2,
+ CUBLASLT_MATMUL_TILE_16x8 = 3,
+ CUBLASLT_MATMUL_TILE_8x32 = 4,
+ CUBLASLT_MATMUL_TILE_16x16 = 5,
+ CUBLASLT_MATMUL_TILE_32x8 = 6,
+ CUBLASLT_MATMUL_TILE_8x64 = 7,
+ CUBLASLT_MATMUL_TILE_16x32 = 8,
+ CUBLASLT_MATMUL_TILE_32x16 = 9,
+ CUBLASLT_MATMUL_TILE_64x8 = 10,
+ CUBLASLT_MATMUL_TILE_32x32 = 11,
+ CUBLASLT_MATMUL_TILE_32x64 = 12,
+ CUBLASLT_MATMUL_TILE_64x32 = 13,
+ CUBLASLT_MATMUL_TILE_32x128 = 14,
+ CUBLASLT_MATMUL_TILE_64x64 = 15,
+ CUBLASLT_MATMUL_TILE_128x32 = 16,
+ CUBLASLT_MATMUL_TILE_64x128 = 17,
+ CUBLASLT_MATMUL_TILE_128x64 = 18,
+ CUBLASLT_MATMUL_TILE_64x256 = 19,
+ CUBLASLT_MATMUL_TILE_128x128 = 20,
+ CUBLASLT_MATMUL_TILE_256x64 = 21,
+ CUBLASLT_MATMUL_TILE_64x512 = 22,
+ CUBLASLT_MATMUL_TILE_128x256 = 23,
+ CUBLASLT_MATMUL_TILE_256x128 = 24,
+ CUBLASLT_MATMUL_TILE_512x64 = 25,
+ CUBLASLT_MATMUL_TILE_64x96 = 26,
+ CUBLASLT_MATMUL_TILE_96x64 = 27,
+ CUBLASLT_MATMUL_TILE_96x128 = 28,
+ CUBLASLT_MATMUL_TILE_128x160 = 29,
+ CUBLASLT_MATMUL_TILE_160x128 = 30,
+ CUBLASLT_MATMUL_TILE_192x128 = 31,
+ CUBLASLT_MATMUL_TILE_128x192 = 32,
+ CUBLASLT_MATMUL_TILE_128x96 = 33,
+ CUBLASLT_MATMUL_TILE_32x256 = 34,
+ CUBLASLT_MATMUL_TILE_256x32 = 35,
+ CUBLASLT_MATMUL_TILE_END
+ } cublasLtMatmulTile_t;
+
+ /** Size and number of stages in which elements are read into shared memory
+ *
+ * General order of stages IDs is sorted by stage size first and by number of stages second.
+ */
+ typedef enum {
+ CUBLASLT_MATMUL_STAGES_UNDEFINED = 0,
+ CUBLASLT_MATMUL_STAGES_16x1 = 1,
+ CUBLASLT_MATMUL_STAGES_16x2 = 2,
+ CUBLASLT_MATMUL_STAGES_16x3 = 3,
+ CUBLASLT_MATMUL_STAGES_16x4 = 4,
+ CUBLASLT_MATMUL_STAGES_16x5 = 5,
+ CUBLASLT_MATMUL_STAGES_16x6 = 6,
+ CUBLASLT_MATMUL_STAGES_32x1 = 7,
+ CUBLASLT_MATMUL_STAGES_32x2 = 8,
+ CUBLASLT_MATMUL_STAGES_32x3 = 9,
+ CUBLASLT_MATMUL_STAGES_32x4 = 10,
+ CUBLASLT_MATMUL_STAGES_32x5 = 11,
+ CUBLASLT_MATMUL_STAGES_32x6 = 12,
+ CUBLASLT_MATMUL_STAGES_64x1 = 13,
+ CUBLASLT_MATMUL_STAGES_64x2 = 14,
+ CUBLASLT_MATMUL_STAGES_64x3 = 15,
+ CUBLASLT_MATMUL_STAGES_64x4 = 16,
+ CUBLASLT_MATMUL_STAGES_64x5 = 17,
+ CUBLASLT_MATMUL_STAGES_64x6 = 18,
+ CUBLASLT_MATMUL_STAGES_128x1 = 19,
+ CUBLASLT_MATMUL_STAGES_128x2 = 20,
+ CUBLASLT_MATMUL_STAGES_128x3 = 21,
+ CUBLASLT_MATMUL_STAGES_128x4 = 22,
+ CUBLASLT_MATMUL_STAGES_128x5 = 23,
+ CUBLASLT_MATMUL_STAGES_128x6 = 24,
+ CUBLASLT_MATMUL_STAGES_32x10 = 25,
+ CUBLASLT_MATMUL_STAGES_8x4 = 26,
+ CUBLASLT_MATMUL_STAGES_16x10 = 27,
+ CUBLASLT_MATMUL_STAGES_8x5 = 28,
+ CUBLASLT_MATMUL_STAGES_8x3 = 31,
+ CUBLASLT_MATMUL_STAGES_8xAUTO = 32,
+ CUBLASLT_MATMUL_STAGES_16xAUTO = 33,
+ CUBLASLT_MATMUL_STAGES_32xAUTO = 34,
+ CUBLASLT_MATMUL_STAGES_64xAUTO = 35,
+ CUBLASLT_MATMUL_STAGES_128xAUTO = 36,
+ CUBLASLT_MATMUL_STAGES_END
+ } cublasLtMatmulStages_t;
+
+ /** Thread Block Cluster size
+ *
+ * Typically dimensioned similar to cublasLtMatmulTile_t, with the third coordinate unused at this time.
+ */
+ typedef enum {
+ /** Let library pick cluster shape automatically */
+ CUBLASLT_CLUSTER_SHAPE_AUTO = 0,
+ CUBLASLT_CLUSTER_SHAPE_1x1x1 = 2,
+ CUBLASLT_CLUSTER_SHAPE_2x1x1 = 3,
+ CUBLASLT_CLUSTER_SHAPE_4x1x1 = 4,
+ CUBLASLT_CLUSTER_SHAPE_1x2x1 = 5,
+ CUBLASLT_CLUSTER_SHAPE_2x2x1 = 6,
+ CUBLASLT_CLUSTER_SHAPE_4x2x1 = 7,
+ CUBLASLT_CLUSTER_SHAPE_1x4x1 = 8,
+ CUBLASLT_CLUSTER_SHAPE_2x4x1 = 9,
+ CUBLASLT_CLUSTER_SHAPE_4x4x1 = 10,
+ CUBLASLT_CLUSTER_SHAPE_8x1x1 = 11,
+ CUBLASLT_CLUSTER_SHAPE_1x8x1 = 12,
+ CUBLASLT_CLUSTER_SHAPE_8x2x1 = 13,
+ CUBLASLT_CLUSTER_SHAPE_2x8x1 = 14,
+ CUBLASLT_CLUSTER_SHAPE_16x1x1 = 15,
+ CUBLASLT_CLUSTER_SHAPE_1x16x1 = 16,
+ CUBLASLT_CLUSTER_SHAPE_3x1x1 = 17,
+ CUBLASLT_CLUSTER_SHAPE_5x1x1 = 18,
+ CUBLASLT_CLUSTER_SHAPE_6x1x1 = 19,
+ CUBLASLT_CLUSTER_SHAPE_7x1x1 = 20,
+ CUBLASLT_CLUSTER_SHAPE_9x1x1 = 21,
+ CUBLASLT_CLUSTER_SHAPE_10x1x1 = 22,
+ CUBLASLT_CLUSTER_SHAPE_11x1x1 = 23,
+ CUBLASLT_CLUSTER_SHAPE_12x1x1 = 24,
+ CUBLASLT_CLUSTER_SHAPE_13x1x1 = 25,
+ CUBLASLT_CLUSTER_SHAPE_14x1x1 = 26,
+ CUBLASLT_CLUSTER_SHAPE_15x1x1 = 27,
+ CUBLASLT_CLUSTER_SHAPE_3x2x1 = 28,
+ CUBLASLT_CLUSTER_SHAPE_5x2x1 = 29,
+ CUBLASLT_CLUSTER_SHAPE_6x2x1 = 30,
+ CUBLASLT_CLUSTER_SHAPE_7x2x1 = 31,
+ CUBLASLT_CLUSTER_SHAPE_1x3x1 = 32,
+ CUBLASLT_CLUSTER_SHAPE_2x3x1 = 33,
+ CUBLASLT_CLUSTER_SHAPE_3x3x1 = 34,
+ CUBLASLT_CLUSTER_SHAPE_4x3x1 = 35,
+ CUBLASLT_CLUSTER_SHAPE_5x3x1 = 36,
+ CUBLASLT_CLUSTER_SHAPE_3x4x1 = 37,
+ CUBLASLT_CLUSTER_SHAPE_1x5x1 = 38,
+ CUBLASLT_CLUSTER_SHAPE_2x5x1 = 39,
+ CUBLASLT_CLUSTER_SHAPE_3x5x1 = 40,
+ CUBLASLT_CLUSTER_SHAPE_1x6x1 = 41,
+ CUBLASLT_CLUSTER_SHAPE_2x6x1 = 42,
+ CUBLASLT_CLUSTER_SHAPE_1x7x1 = 43,
+ CUBLASLT_CLUSTER_SHAPE_2x7x1 = 44,
+ CUBLASLT_CLUSTER_SHAPE_1x9x1 = 45,
+ CUBLASLT_CLUSTER_SHAPE_1x10x1 = 46,
+ CUBLASLT_CLUSTER_SHAPE_1x11x1 = 47,
+ CUBLASLT_CLUSTER_SHAPE_1x12x1 = 48,
+ CUBLASLT_CLUSTER_SHAPE_1x13x1 = 49,
+ CUBLASLT_CLUSTER_SHAPE_1x14x1 = 50,
+ CUBLASLT_CLUSTER_SHAPE_1x15x1 = 51,
+ CUBLASLT_CLUSTER_SHAPE_END
+ } cublasLtClusterShape_t;
+
+ /** Inner size of the kernel
+ *
+ * Represents various aspects of internal kernel design, that don't impact CUDA grid size but may have other more subtle
+ * effects.
+ *
+ */
+ typedef enum {
+ CUBLASLT_MATMUL_INNER_SHAPE_UNDEFINED = 0,
+ CUBLASLT_MATMUL_INNER_SHAPE_MMA884 = 1,
+ CUBLASLT_MATMUL_INNER_SHAPE_MMA1684 = 2,
+ CUBLASLT_MATMUL_INNER_SHAPE_MMA1688 = 3,
+ CUBLASLT_MATMUL_INNER_SHAPE_MMA16816 = 4,
+ CUBLASLT_MATMUL_INNER_SHAPE_END
+ } cublasLtMatmulInnerShape_t;
+
+ /** Pointer mode to use for alpha/beta */
+ typedef enum {
+ /** matches CUBLAS_POINTER_MODE_HOST, pointer targets a single value host memory */
+ CUBLASLT_POINTER_MODE_HOST = CUBLAS_POINTER_MODE_HOST,
+ /** matches CUBLAS_POINTER_MODE_DEVICE, pointer targets a single value device memory */
+ CUBLASLT_POINTER_MODE_DEVICE = CUBLAS_POINTER_MODE_DEVICE,
+ /** pointer targets an array in device memory */
+ CUBLASLT_POINTER_MODE_DEVICE_VECTOR = 2,
+ /** alpha pointer targets an array in device memory, beta is zero. Note:
+ CUBLASLT_MATMUL_DESC_ALPHA_VECTOR_BATCH_STRIDE is not supported, must be 0. */
+ CUBLASLT_POINTER_MODE_ALPHA_DEVICE_VECTOR_BETA_ZERO = 3,
+ /** alpha pointer targets an array in device memory, beta is a single value in host memory. */
+ CUBLASLT_POINTER_MODE_ALPHA_DEVICE_VECTOR_BETA_HOST = 4,
+ } cublasLtPointerMode_t;
+
+ /** Mask to define pointer mode capability */
+ typedef enum {
+ /** see CUBLASLT_POINTER_MODE_HOST */
+ CUBLASLT_POINTER_MODE_MASK_HOST = 1,
+ /** see CUBLASLT_POINTER_MODE_DEVICE */
+ CUBLASLT_POINTER_MODE_MASK_DEVICE = 2,
+ /** see CUBLASLT_POINTER_MODE_DEVICE_VECTOR */
+ CUBLASLT_POINTER_MODE_MASK_DEVICE_VECTOR = 4,
+ /** see CUBLASLT_POINTER_MODE_ALPHA_DEVICE_VECTOR_BETA_ZERO */
+ CUBLASLT_POINTER_MODE_MASK_ALPHA_DEVICE_VECTOR_BETA_ZERO = 8,
+ /** see CUBLASLT_POINTER_MODE_ALPHA_DEVICE_VECTOR_BETA_HOST */
+ CUBLASLT_POINTER_MODE_MASK_ALPHA_DEVICE_VECTOR_BETA_HOST = 16,
+ } cublasLtPointerModeMask_t;
+
+ /** Implementation details that may affect numerical behavior of algorithms. */
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_FMA (0x01ull << 0)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_HMMA (0x02ull << 0)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_IMMA (0x04ull << 0)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_DMMA (0x08ull << 0)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_TENSOR_OP_MASK (0xfeull << 0)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_OP_TYPE_MASK (0xffull << 0)
+
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_ACCUMULATOR_16F (0x01ull << 8)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_ACCUMULATOR_32F (0x02ull << 8)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_ACCUMULATOR_64F (0x04ull << 8)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_ACCUMULATOR_32I (0x08ull << 8)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_ACCUMULATOR_TYPE_MASK (0xffull << 8)
+
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_16F (0x01ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_16BF (0x02ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_TF32 (0x04ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_32F (0x08ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_64F (0x10ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_8I (0x20ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_8F_E4M3 (0x40ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_INPUT_8F_E5M2 (0x80ull << 16)
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_OP_INPUT_TYPE_MASK (0xffull << 16)
+
+ #define CUBLASLT_NUMERICAL_IMPL_FLAGS_GAUSSIAN (0x01ull << 32)
+ typedef uint64_t cublasLtNumericalImplFlags_t;
+
+ /** Execute matrix multiplication (D = alpha * op(A) * op(B) + beta * C).
+ *
+ * \retval CUBLAS_STATUS_NOT_INITIALIZED if cuBLASLt handle has not been initialized
+ * \retval CUBLAS_STATUS_INVALID_VALUE if parameters are in conflict or in an impossible configuration; e.g.
+ * when workspaceSizeInBytes is less than workspace required by configured
+ * algo
+ * \retval CUBLAS_STATUS_NOT_SUPPORTED if current implementation on selected device doesn't support configured
+ * operation
+ * \retval CUBLAS_STATUS_ARCH_MISMATCH if configured operation cannot be run using selected device
+ * \retval CUBLAS_STATUS_EXECUTION_FAILED if cuda reported execution error from the device
+ * \retval CUBLAS_STATUS_SUCCESS if the operation completed successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatmul(cublasLtHandle_t lightHandle,
+ cublasLtMatmulDesc_t computeDesc,
+ const void* alpha, /* host or device pointer */
+ const void* A,
+ cublasLtMatrixLayout_t Adesc,
+ const void* B,
+ cublasLtMatrixLayout_t Bdesc,
+ const void* beta, /* host or device pointer */
+ const void* C,
+ cublasLtMatrixLayout_t Cdesc,
+ void* D,
+ cublasLtMatrixLayout_t Ddesc,
+ const cublasLtMatmulAlgo_t* algo,
+ void* workspace,
+ size_t workspaceSizeInBytes,
+ cudaStream_t stream);
+
+ /** Matrix layout conversion helper (C = alpha * op(A) + beta * op(B))
+ *
+ * Can be used to change memory order of data or to scale and shift the values.
+ *
+ * \retval CUBLAS_STATUS_NOT_INITIALIZED if cuBLASLt handle has not been initialized
+ * \retval CUBLAS_STATUS_INVALID_VALUE if parameters are in conflict or in an impossible configuration; e.g.
+ * when A is not NULL, but Adesc is NULL
+ * \retval CUBLAS_STATUS_NOT_SUPPORTED if current implementation on selected device doesn't support configured
+ * operation
+ * \retval CUBLAS_STATUS_ARCH_MISMATCH if configured operation cannot be run using selected device
+ * \retval CUBLAS_STATUS_EXECUTION_FAILED if cuda reported execution error from the device
+ * \retval CUBLAS_STATUS_SUCCESS if the operation completed successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixTransform(cublasLtHandle_t lightHandle,
+ cublasLtMatrixTransformDesc_t transformDesc,
+ const void* alpha, /* host or device pointer */
+ const void* A,
+ cublasLtMatrixLayout_t Adesc,
+ const void* beta, /* host or device pointer */
+ const void* B,
+ cublasLtMatrixLayout_t Bdesc,
+ void* C,
+ cublasLtMatrixLayout_t Cdesc,
+ cudaStream_t stream);
+
+ /* ---------------------------------------------------------------------------------------*/
+ /* Helper functions for cublasLtMatrixLayout_t */
+ /* ---------------------------------------------------------------------------------------*/
+
+ /** Enum for data ordering */
+ typedef enum {
+ /** Column-major
+ *
+ * Leading dimension is the stride (in elements) to the beginning of next column in memory.
+ */
+ CUBLASLT_ORDER_COL = 0,
+ /** Row major
+ *
+ * Leading dimension is the stride (in elements) to the beginning of next row in memory.
+ */
+ CUBLASLT_ORDER_ROW = 1,
+ /** Column-major ordered tiles of 32 columns.
+ *
+ * Leading dimension is the stride (in elements) to the beginning of next group of 32-columns. E.g. if matrix has 33
+ * columns and 2 rows, ld must be at least (32) * 2 = 64.
+ */
+ CUBLASLT_ORDER_COL32 = 2,
+ /** Column-major ordered tiles of composite tiles with total 32 columns and 8 rows, tile composed of interleaved
+ * inner tiles of 4 columns within 4 even or odd rows in an alternating pattern.
+ *
+ * Leading dimension is the stride (in elements) to the beginning of the first 32 column x 8 row tile for the next
+ * 32-wide group of columns. E.g. if matrix has 33 columns and 1 row, ld must be at least (32 * 8) * 1 = 256.
+ */
+ CUBLASLT_ORDER_COL4_4R2_8C = 3,
+ /** Column-major ordered tiles of composite tiles with total 32 columns and 32 rows.
+ * Element offset within the tile is calculated as (((row%8)/2*4+row/8)*2+row%2)*32+col.
+ *
+ * Leading dimension is the stride (in elements) to the beginning of the first 32 column x 32 row tile for the next
+ * 32-wide group of columns. E.g. if matrix has 33 columns and 1 row, ld must be at least (32*32)*1 = 1024.
+ */
+ CUBLASLT_ORDER_COL32_2R_4R4 = 4,
+
+ } cublasLtOrder_t;
+
+ /** Attributes of memory layout */
+ typedef enum {
+ /** Data type, see cudaDataType.
+ *
+ * uint32_t
+ */
+ CUBLASLT_MATRIX_LAYOUT_TYPE = 0,
+
+ /** Memory order of the data, see cublasLtOrder_t.
+ *
+ * int32_t, default: CUBLASLT_ORDER_COL
+ */
+ CUBLASLT_MATRIX_LAYOUT_ORDER = 1,
+
+ /** Number of rows.
+ *
+ * Usually only values that can be expressed as int32_t are supported.
+ *
+ * uint64_t
+ */
+ CUBLASLT_MATRIX_LAYOUT_ROWS = 2,
+
+ /** Number of columns.
+ *
+ * Usually only values that can be expressed as int32_t are supported.
+ *
+ * uint64_t
+ */
+ CUBLASLT_MATRIX_LAYOUT_COLS = 3,
+
+ /** Matrix leading dimension.
+ *
+ * For CUBLASLT_ORDER_COL this is the stride (in elements) of a matrix column; for more details and documentation for
+ * other memory orders see documentation for cublasLtOrder_t values.
+ *
+ * Currently only non-negative values are supported; must be large enough so that matrix memory locations are not
+ * overlapping (e.g. greater than or equal to CUBLASLT_MATRIX_LAYOUT_ROWS in case of CUBLASLT_ORDER_COL).
+ *
+ * int64_t
+ */
+ CUBLASLT_MATRIX_LAYOUT_LD = 4,
+
+ /** Number of matmul operations to perform in the batch.
+ *
+ * See also CUBLASLT_ALGO_CAP_STRIDED_BATCH_SUPPORT
+ *
+ * int32_t, default: 1
+ */
+ CUBLASLT_MATRIX_LAYOUT_BATCH_COUNT = 5,
+
+ /** Stride (in elements) to the next matrix for strided batch operation.
+ *
+ * When matrix type is planar-complex (CUBLASLT_MATRIX_LAYOUT_PLANE_OFFSET != 0), batch stride
+ * is interpreted by cublasLtMatmul() in number of real valued sub-elements. E.g. for data of type CUDA_C_16F,
+ * an offset of 1024B is encoded as a stride of value 512 (since each element of the real and imaginary matrices
+ * is a 2B (16bit) floating point type).
+ *
+ * NOTE: A bug in cublasLtMatrixTransform() causes it to interpret the batch stride for a planar-complex matrix
+ * as if it was specified in number of complex elements. Therefore an offset of 1024B must be encoded as stride
+ * value 256 when calling cublasLtMatrixTransform() (each complex element is 4B with real and imaginary values 2B
+ * each). This behavior is expected to be corrected in the next major cuBLAS version.
+ *
+ * int64_t, default: 0
+ */
+ CUBLASLT_MATRIX_LAYOUT_STRIDED_BATCH_OFFSET = 6,
+
+ /** Stride (in bytes) to the imaginary plane for planar complex layout.
+ *
+ * int64_t, default: 0 - 0 means that layout is regular (real and imaginary parts of complex numbers are interleaved
+ * in memory in each element)
+ */
+ CUBLASLT_MATRIX_LAYOUT_PLANE_OFFSET = 7,
+ } cublasLtMatrixLayoutAttribute_t;
+
+ /** Internal. Do not use directly.
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixLayoutInit_internal( //
+ cublasLtMatrixLayout_t matLayout,
+ size_t size,
+ cudaDataType type,
+ uint64_t rows,
+ uint64_t cols,
+ int64_t ld);
+
+ /** Initialize matrix layout descriptor in pre-allocated space.
+ *
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if size of the pre-allocated space is insufficient
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was created successfully
+ */
+ static inline cublasStatus_t cublasLtMatrixLayoutInit(
+ cublasLtMatrixLayout_t matLayout, cudaDataType type, uint64_t rows, uint64_t cols, int64_t ld) {
+ return cublasLtMatrixLayoutInit_internal(matLayout, sizeof(*matLayout), type, rows, cols, ld);
+ }
+
+ /** Create new matrix layout descriptor.
+ *
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if memory could not be allocated
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was created successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixLayoutCreate( //
+ cublasLtMatrixLayout_t* matLayout,
+ cudaDataType type,
+ uint64_t rows,
+ uint64_t cols,
+ int64_t ld);
+
+ /** Destroy matrix layout descriptor.
+ *
+ * \retval CUBLAS_STATUS_SUCCESS if operation was successful
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixLayoutDestroy(cublasLtMatrixLayout_t matLayout);
+
+ /** Set matrix layout descriptor attribute.
+ *
+ * \param[in] matLayout The descriptor
+ * \param[in] attr The attribute
+ * \param[in] buf memory address containing the new value
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
+ *
+ * \retval CUBLAS_STATUS_INVALID_VALUE if buf is NULL or sizeInBytes doesn't match size of internal storage for
+ * selected attribute
+ * \retval CUBLAS_STATUS_SUCCESS if attribute was set successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixLayoutSetAttribute( //
+ cublasLtMatrixLayout_t matLayout,
+ cublasLtMatrixLayoutAttribute_t attr,
+ const void* buf,
+ size_t sizeInBytes);
+
+ /** Get matrix layout descriptor attribute.
+ *
+ * \param[in] matLayout The descriptor
+ * \param[in] attr The attribute
+ * \param[out] buf memory address where the value will be written
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
+ * \param[out] sizeWritten only valid when return value is CUBLAS_STATUS_SUCCESS. If sizeInBytes is non-zero: number of
+ * bytes actually written, if sizeInBytes is 0: number of bytes needed to write full contents
+ *
+ * \retval CUBLAS_STATUS_INVALID_VALUE if sizeInBytes is 0 and sizeWritten is NULL, or if sizeInBytes is non-zero
+ * and buf is NULL or sizeInBytes doesn't match size of internal storage for
+ * selected attribute
+ * \retval CUBLAS_STATUS_SUCCESS if attribute's value was successfully written to user memory
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixLayoutGetAttribute( //
+ cublasLtMatrixLayout_t matLayout,
+ cublasLtMatrixLayoutAttribute_t attr,
+ void* buf,
+ size_t sizeInBytes,
+ size_t* sizeWritten);
+
+ /* ---------------------------------------------------------------------------------------*/
+ /* Helper functions for cublasLtMatmulDesc_t */
+ /* ---------------------------------------------------------------------------------------*/
+
+ /** Matmul descriptor attributes to define details of the operation. */
+ typedef enum {
+ /** Compute type, see cudaDataType. Defines data type used for multiply and accumulate operations and the
+ * accumulator during matrix multiplication.
+ *
+ * int32_t
+ */
+ CUBLASLT_MATMUL_DESC_COMPUTE_TYPE = 0,
+
+ /** Scale type, see cudaDataType. Defines data type of alpha and beta. Accumulator and value from matrix C are
+ * typically converted to scale type before final scaling. Value is then converted from scale type to type of matrix
+ * D before being stored in memory.
+ *
+ * int32_t, default: same as CUBLASLT_MATMUL_DESC_COMPUTE_TYPE
+ */
+ CUBLASLT_MATMUL_DESC_SCALE_TYPE = 1,
+
+ /** Pointer mode of alpha and beta, see cublasLtPointerMode_t. When CUBLASLT_POINTER_MODE_DEVICE_VECTOR is in use,
+ * alpha/beta vector lengths must match number of output matrix rows.
+ *
+ * int32_t, default: CUBLASLT_POINTER_MODE_HOST
+ */
+ CUBLASLT_MATMUL_DESC_POINTER_MODE = 2,
+
+ /** Transform of matrix A, see cublasOperation_t.
+ *
+ * int32_t, default: CUBLAS_OP_N
+ */
+ CUBLASLT_MATMUL_DESC_TRANSA = 3,
+
+ /** Transform of matrix B, see cublasOperation_t.
+ *
+ * int32_t, default: CUBLAS_OP_N
+ */
+ CUBLASLT_MATMUL_DESC_TRANSB = 4,
+
+ /** Transform of matrix C, see cublasOperation_t.
+ *
+ * Currently only CUBLAS_OP_N is supported.
+ *
+ * int32_t, default: CUBLAS_OP_N
+ */
+ CUBLASLT_MATMUL_DESC_TRANSC = 5,
+
+ /** Matrix fill mode, see cublasFillMode_t.
+ *
+ * int32_t, default: CUBLAS_FILL_MODE_FULL
+ */
+ CUBLASLT_MATMUL_DESC_FILL_MODE = 6,
+
+ /** Epilogue function, see cublasLtEpilogue_t.
+ *
+ * uint32_t, default: CUBLASLT_EPILOGUE_DEFAULT
+ */
+ CUBLASLT_MATMUL_DESC_EPILOGUE = 7,
+
+ /** Bias or bias gradient vector pointer in the device memory.
+ *
+ * Bias case. See CUBLASLT_EPILOGUE_BIAS.
+ * For bias data type see CUBLASLT_MATMUL_DESC_BIAS_DATA_TYPE.
+ *
+ * Bias vector length must match matrix D rows count.
+ *
+ * Bias gradient case. See CUBLASLT_EPILOGUE_DRELU_BGRAD and CUBLASLT_EPILOGUE_DGELU_BGRAD.
+ * Bias gradient vector elements are the same type as the output elements
+ * (Ctype) with the exception of IMMA kernels (see above).
+ *
+ * Routines that don't dereference this pointer, like cublasLtMatmulAlgoGetHeuristic(),
+ * depend on its value to determine expected pointer alignment.
+ *
+ * Bias case: const void *, default: NULL
+ * Bias gradient case: void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_BIAS_POINTER = 8,
+
+ /** Batch stride for bias or bias gradient vector.
+ *
+ * Used together with CUBLASLT_MATMUL_DESC_BIAS_POINTER when matrix D's CUBLASLT_MATRIX_LAYOUT_BATCH_COUNT > 1.
+ *
+ * int64_t, default: 0
+ */
+ CUBLASLT_MATMUL_DESC_BIAS_BATCH_STRIDE = 10,
+
+ /** Pointer for epilogue auxiliary buffer.
+ *
+ * - Output vector for ReLu bit-mask in forward pass when CUBLASLT_EPILOGUE_RELU_AUX
+ * or CUBLASLT_EPILOGUE_RELU_AUX_BIAS epilogue is used.
+ * - Input vector for ReLu bit-mask in backward pass when
+ * CUBLASLT_EPILOGUE_DRELU_BGRAD epilogue is used.
+ *
+ * - Output of GELU input matrix in forward pass when
+ * CUBLASLT_EPILOGUE_GELU_AUX_BIAS epilogue is used.
+ * - Input of GELU input matrix for backward pass when
+ * CUBLASLT_EPILOGUE_DGELU_BGRAD epilogue is used.
+ *
+ * For aux data type see CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_DATA_TYPE.
+ *
+ * Routines that don't dereference this pointer, like cublasLtMatmulAlgoGetHeuristic(),
+ * depend on its value to determine expected pointer alignment.
+ *
+ * Requires setting CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_LD attribute.
+ *
+ * Forward pass: void *, default: NULL
+ * Backward pass: const void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER = 11,
+
+ /** Leading dimension for epilogue auxiliary buffer.
+ *
+ * - ReLu bit-mask matrix leading dimension in elements (i.e. bits)
+ * when CUBLASLT_EPILOGUE_RELU_AUX, CUBLASLT_EPILOGUE_RELU_AUX_BIAS or CUBLASLT_EPILOGUE_DRELU_BGRAD epilogue is
+ * used. Must be divisible by 128 and be no less than the number of rows in the output matrix.
+ *
+ * - GELU input matrix leading dimension in elements
+ * when CUBLASLT_EPILOGUE_GELU_AUX_BIAS or CUBLASLT_EPILOGUE_DGELU_BGRAD epilogue is used.
+ * Must be divisible by 8 and be no less than the number of rows in the output matrix.
+ *
+ * int64_t, default: 0
+ */
+ CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_LD = 12,
+
+ /** Batch stride for epilogue auxiliary buffer.
+ *
+ * - ReLu bit-mask matrix batch stride in elements (i.e. bits)
+ * when CUBLASLT_EPILOGUE_RELU_AUX, CUBLASLT_EPILOGUE_RELU_AUX_BIAS or CUBLASLT_EPILOGUE_DRELU_BGRAD epilogue is
+ * used. Must be divisible by 128.
+ *
+ * - GELU input matrix batch stride in elements
+ * when CUBLASLT_EPILOGUE_GELU_AUX_BIAS or CUBLASLT_EPILOGUE_DGELU_BGRAD epilogue is used.
+ * Must be divisible by 8.
+ *
+ * int64_t, default: 0
+ */
+ CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_BATCH_STRIDE = 13,
+
+ /** Batch stride for alpha vector.
+ *
+ * Used together with CUBLASLT_POINTER_MODE_ALPHA_DEVICE_VECTOR_BETA_HOST when matrix D's
+ * CUBLASLT_MATRIX_LAYOUT_BATCH_COUNT > 1. If CUBLASLT_POINTER_MODE_ALPHA_DEVICE_VECTOR_BETA_ZERO is set then
+ * CUBLASLT_MATMUL_DESC_ALPHA_VECTOR_BATCH_STRIDE must be set to 0 as this mode doesn't support batched alpha vectors.
+ *
+ * int64_t, default: 0
+ */
+ CUBLASLT_MATMUL_DESC_ALPHA_VECTOR_BATCH_STRIDE = 14,
+
+ /** Number of SMs to target for parallel execution. Optimizes heuristics for execution on a different number of SMs
+ * when user expects a concurrent stream to be using some of the device resources.
+ *
+ * int32_t, default: 0 - use the number reported by the device.
+ */
+ CUBLASLT_MATMUL_DESC_SM_COUNT_TARGET = 15,
+
+ /** Device pointer to the scale factor value that converts data in matrix A to the compute data type range.
+ *
+ * The scaling factor value must have the same type as the compute type.
+ *
+ * If not specified, or set to NULL, the scaling factor is assumed to be 1.
+ *
+ * If set for an unsupported matrix data, scale, and compute type combination, calling cublasLtMatmul()
+ * will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * const void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_A_SCALE_POINTER = 17,
+
+ /** Device pointer to the scale factor value to convert data in matrix B to compute data type range.
+ *
+ * The scaling factor value must have the same type as the compute type.
+ *
+ * If not specified, or set to NULL, the scaling factor is assumed to be 1.
+ *
+ * If set for an unsupported matrix data, scale, and compute type combination, calling cublasLtMatmul()
+ * will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * const void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_B_SCALE_POINTER = 18,
+
+ /** Device pointer to the scale factor value to convert data in matrix C to compute data type range.
+ *
+ * The scaling factor value must have the same type as the compute type.
+ *
+ * If not specified, or set to NULL, the scaling factor is assumed to be 1.
+ *
+ * If set for an unsupported matrix data, scale, and compute type combination, calling cublasLtMatmul()
+ * will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * const void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_C_SCALE_POINTER = 19,
+
+ /** Device pointer to the scale factor value to convert data in matrix D to compute data type range.
+ *
+ * The scaling factor value must have the same type as the compute type.
+ *
+ * If not specified, or set to NULL, the scaling factor is assumed to be 1.
+ *
+ * If set for an unsupported matrix data, scale, and compute type combination, calling cublasLtMatmul()
+ * will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * const void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_D_SCALE_POINTER = 20,
+
+ /** Device pointer to the memory location that on completion will be set to the maximum of absolute values in the
+ * output matrix.
+ *
+ * The computed value has the same type as the compute type.
+ *
+ * If not specified or set to NULL, the maximum absolute value is not computed. If set for an unsupported matrix
+ * data, scale, and compute type combination, calling cublasLtMatmul() will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_AMAX_D_POINTER = 21,
+
+ /** Type of the data to be stored to the memory pointed to by CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
+ *
+ * If unset, the data type defaults to the type of elements of the output matrix with some exceptions, see details
+ * below.
+ *
+ * ReLu uses a bit-mask.
+ *
+ * GELU input matrix elements type is the same as the type of elements of
+ * the output matrix with some exceptions, see details below.
+ *
+ * For fp8 kernels with output type CUDA_R_8F_E4M3 the aux data type can be CUDA_R_8F_E4M3 or CUDA_R_16F with some
+ * restrictions. See https://docs.nvidia.com/cuda/cublas/index.html#cublasLtMatmulDescAttributes_t for more details.
+ *
+ * If set for an unsupported matrix data, scale, and compute type combination, calling cublasLtMatmul()
+ * will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * int32_t based on cudaDataType, default: -1
+ */
+ CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_DATA_TYPE = 22,
+
+ /** Device pointer to the scaling factor value to convert results from compute type data range to storage
+ * data range in the auxiliary matrix that is set via CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
+ *
+ * The scaling factor value must have the same type as the compute type.
+ *
+ * If not specified, or set to NULL, the scaling factor is assumed to be 1. If set for an unsupported matrix data,
+ * scale, and compute type combination, calling cublasLtMatmul() will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_SCALE_POINTER = 23,
+
+ /** Device pointer to the memory location that on completion will be set to the maximum of absolute values in the
+ * buffer that is set via CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
+ *
+ * The computed value has the same type as the compute type.
+ *
+ * If not specified or set to NULL, the maximum absolute value is not computed. If set for an unsupported matrix
+ * data, scale, and compute type combination, calling cublasLtMatmul() will return CUBLAS_STATUS_INVALID_VALUE.
+ *
+ * void *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_AMAX_POINTER = 24,
+
+ /** Flag for managing fp8 fast accumulation mode.
+ * When enabled, problem execution might be faster but at the cost of lower accuracy because intermediate results
+ * will not periodically be promoted to a higher precision.
+ *
+ * int8_t, default: 0 - fast accumulation mode is disabled.
+ */
+ CUBLASLT_MATMUL_DESC_FAST_ACCUM = 25,
+
+ /** Type of bias or bias gradient vector in the device memory.
+ *
+ * Bias case: see CUBLASLT_EPILOGUE_BIAS.
+ *
+ * Bias vector elements are the same type as the elements of output matrix (Dtype) with the following exceptions:
+ * - IMMA kernels with computeType=CUDA_R_32I and Ctype=CUDA_R_8I where the bias vector elements
+ * are the same type as alpha, beta (CUBLASLT_MATMUL_DESC_SCALE_TYPE=CUDA_R_32F)
+ * - fp8 kernels with an output type of CUDA_R_32F, CUDA_R_8F_E4M3 or CUDA_R_8F_E5M2, see
+ * https://docs.nvidia.com/cuda/cublas/index.html#cublasLtMatmul for details.
+ *
+ * int32_t based on cudaDataType, default: -1
+ */
+ CUBLASLT_MATMUL_DESC_BIAS_DATA_TYPE = 26,
+
+ /** EXPERIMENTAL: Number of atomic synchronization chunks in the row dimension of the output matrix D.
+ *
+ * int32_t, default 0 (atomic synchronization disabled)
+ */
+ CUBLASLT_MATMUL_DESC_ATOMIC_SYNC_NUM_CHUNKS_D_ROWS = 27,
+
+ /** EXPERIMENTAL: Number of atomic synchronization chunks in the column dimension of the output matrix D.
+ *
+ * int32_t, default 0 (atomic synchronization disabled)
+ */
+ CUBLASLT_MATMUL_DESC_ATOMIC_SYNC_NUM_CHUNKS_D_COLS = 28,
+
+ /** EXPERIMENTAL: Pointer to a device array of input atomic counters consumed by a matmul.
+ *
+ * int32_t *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_ATOMIC_SYNC_IN_COUNTERS_POINTER = 29,
+
+ /** EXPERIMENTAL: Pointer to a device array of output atomic counters produced by a matmul.
+ *
+ * int32_t *, default: NULL
+ */
+ CUBLASLT_MATMUL_DESC_ATOMIC_SYNC_OUT_COUNTERS_POINTER = 30,
+ } cublasLtMatmulDescAttributes_t;
+
+ /** Internal. Do not use directly.
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulDescInit_internal( //
+ cublasLtMatmulDesc_t matmulDesc,
+ size_t size,
+ cublasComputeType_t computeType,
+ cudaDataType_t scaleType);
+
+ /** Initialize matmul operation descriptor in pre-allocated space.
+ *
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if size of the pre-allocated space is insufficient
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was initialized successfully
+ */
+ static inline cublasStatus_t cublasLtMatmulDescInit( //
+ cublasLtMatmulDesc_t matmulDesc,
+ cublasComputeType_t computeType,
+ cudaDataType_t scaleType) {
+ return cublasLtMatmulDescInit_internal(matmulDesc, sizeof(*matmulDesc), computeType, scaleType);
+ }
+
+ /** Create new matmul operation descriptor.
+ *
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if memory could not be allocated
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was created successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulDescCreate(cublasLtMatmulDesc_t* matmulDesc,
+ cublasComputeType_t computeType,
+ cudaDataType_t scaleType);
+
+ /** Destroy matmul operation descriptor.
+ *
+ * \retval CUBLAS_STATUS_SUCCESS if operation was successful
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulDescDestroy(cublasLtMatmulDesc_t matmulDesc);
+
+ /** Set matmul operation descriptor attribute.
+ *
+ * \param[in] matmulDesc The descriptor
+ * \param[in] attr The attribute
+ * \param[in] buf memory address containing the new value
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
+ *
+ * \retval CUBLAS_STATUS_INVALID_VALUE if buf is NULL or sizeInBytes doesn't match size of internal storage for
+ * selected attribute
+ * \retval CUBLAS_STATUS_SUCCESS if attribute was set successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulDescSetAttribute( //
+ cublasLtMatmulDesc_t matmulDesc,
+ cublasLtMatmulDescAttributes_t attr,
+ const void* buf,
+ size_t sizeInBytes);
+
+ /** Get matmul operation descriptor attribute.
+ *
+ * \param[in] matmulDesc The descriptor
+ * \param[in] attr The attribute
+ * \param[out] buf memory address where the value will be written
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
+ * \param[out] sizeWritten only valid when return value is CUBLAS_STATUS_SUCCESS. If sizeInBytes is non-zero: number of
+ * bytes actually written, if sizeInBytes is 0: number of bytes needed to write full contents
+ *
+ * \retval CUBLAS_STATUS_INVALID_VALUE if sizeInBytes is 0 and sizeWritten is NULL, or if sizeInBytes is non-zero
+ * and buf is NULL or sizeInBytes doesn't match size of internal storage for
+ * selected attribute
+ * \retval CUBLAS_STATUS_SUCCESS if attribute's value was successfully written to user memory
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulDescGetAttribute( //
+ cublasLtMatmulDesc_t matmulDesc,
+ cublasLtMatmulDescAttributes_t attr,
+ void* buf,
+ size_t sizeInBytes,
+ size_t* sizeWritten);
+
+ /* ---------------------------------------------------------------------------------------*/
+ /* Helper functions for cublasLtMatrixTransformDesc_t */
+ /* ---------------------------------------------------------------------------------------*/
+
+ /** Matrix transform descriptor attributes to define details of the operation.
+ */
+ typedef enum {
+ /** Scale type, see cudaDataType. Inputs are converted to scale type for scaling and summation and results are then
+ * converted to output type to store in memory.
+ *
+ * int32_t
+ */
+ CUBLASLT_MATRIX_TRANSFORM_DESC_SCALE_TYPE,
+
+ /** Pointer mode of alpha and beta, see cublasLtPointerMode_t.
+ *
+ * int32_t, default: CUBLASLT_POINTER_MODE_HOST
+ */
+ CUBLASLT_MATRIX_TRANSFORM_DESC_POINTER_MODE,
+
+ /** Transform of matrix A, see cublasOperation_t.
+ *
+ * int32_t, default: CUBLAS_OP_N
+ */
+ CUBLASLT_MATRIX_TRANSFORM_DESC_TRANSA,
+
+ /** Transform of matrix B, see cublasOperation_t.
+ *
+ * int32_t, default: CUBLAS_OP_N
+ */
+ CUBLASLT_MATRIX_TRANSFORM_DESC_TRANSB,
+ } cublasLtMatrixTransformDescAttributes_t;
+
+ /** Internal. Do not use directly.
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixTransformDescInit_internal(cublasLtMatrixTransformDesc_t transformDesc,
+ size_t size,
+ cudaDataType scaleType);
+
+ /** Initialize matrix transform operation descriptor in pre-allocated space.
+ *
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if size of the pre-allocated space is insufficient
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was created successfully
+ */
+ static inline cublasStatus_t cublasLtMatrixTransformDescInit(cublasLtMatrixTransformDesc_t transformDesc,
+ cudaDataType scaleType) {
+ return cublasLtMatrixTransformDescInit_internal(transformDesc, sizeof(*transformDesc), scaleType);
+ }
+
+ /** Create new matrix transform operation descriptor.
+ *
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if memory could not be allocated
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was created successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixTransformDescCreate(cublasLtMatrixTransformDesc_t* transformDesc,
+ cudaDataType scaleType);
+
+ /** Destroy matrix transform operation descriptor.
+ *
+ * \retval CUBLAS_STATUS_SUCCESS if operation was successful
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixTransformDescDestroy(cublasLtMatrixTransformDesc_t transformDesc);
+
+ /** Set matrix transform operation descriptor attribute.
+ *
+ * \param[in] transformDesc The descriptor
+ * \param[in] attr The attribute
+ * \param[in] buf memory address containing the new value
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
+ *
+ * \retval CUBLAS_STATUS_INVALID_VALUE if buf is NULL or sizeInBytes doesn't match size of internal storage for
+ * selected attribute
+ * \retval CUBLAS_STATUS_SUCCESS if attribute was set successfully
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixTransformDescSetAttribute( //
+ cublasLtMatrixTransformDesc_t transformDesc,
+ cublasLtMatrixTransformDescAttributes_t attr,
+ const void* buf,
+ size_t sizeInBytes);
+
+ /** Get matrix transform operation descriptor attribute.
+ *
+ * \param[in] transformDesc The descriptor
+ * \param[in] attr The attribute
+ * \param[out] buf memory address where the value will be written
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
+ * \param[out] sizeWritten only valid when return value is CUBLAS_STATUS_SUCCESS. If sizeInBytes is non-zero: number
+ * of bytes actually written, if sizeInBytes is 0: number of bytes needed to write full contents
+ *
+ * \retval CUBLAS_STATUS_INVALID_VALUE if sizeInBytes is 0 and sizeWritten is NULL, or if sizeInBytes is non-zero
+ * and buf is NULL or sizeInBytes doesn't match size of internal storage for
+ * selected attribute
+ * \retval CUBLAS_STATUS_SUCCESS if attribute's value was successfully written to user memory
+ */
+ cublasStatus_t CUBLASWINAPI cublasLtMatrixTransformDescGetAttribute( //
+ cublasLtMatrixTransformDesc_t transformDesc,
+ cublasLtMatrixTransformDescAttributes_t attr,
+ void* buf,
+ size_t sizeInBytes,
+ size_t* sizeWritten);
+
+ /** Reduction scheme for portions of the dot-product calculated in parallel (a.k.a. "split-K").
+ */
+ typedef enum {
+ /** No reduction scheme, dot-product shall be performed in one sequence.
+ */
+ CUBLASLT_REDUCTION_SCHEME_NONE = 0,
+
+ /** Reduction is performed "in place" - using the output buffer (and output data type) and counters (in workspace) to
+ * guarantee sequentiality.
+ */
+ CUBLASLT_REDUCTION_SCHEME_INPLACE = 1,
+
+ /** Intermediate results are stored in compute type in the workspace and reduced in a separate step.
+ */
+ CUBLASLT_REDUCTION_SCHEME_COMPUTE_TYPE = 2,
+
+ /** Intermediate results are stored in output type in the workspace and reduced in a separate step.
+ */
+ CUBLASLT_REDUCTION_SCHEME_OUTPUT_TYPE = 4,
+
+ CUBLASLT_REDUCTION_SCHEME_MASK = 0x7,
+ } cublasLtReductionScheme_t;
+
1120
+ /** Postprocessing options for the epilogue
1121
+ */
1122
+ typedef enum {
1123
+ /** No special postprocessing, just scale and quantize results if necessary.
1124
+ */
1125
+ CUBLASLT_EPILOGUE_DEFAULT = 1,
1126
+
1127
+ /** ReLu, apply ReLu point-wise transform to the results (x:=max(x, 0)).
1128
+ */
1129
+ CUBLASLT_EPILOGUE_RELU = 2,
1130
+
1131
+ /** ReLu, apply ReLu point-wise transform to the results (x:=max(x, 0)).
1132
+ *
1133
+ * This epilogue mode produces an extra output, a ReLu bit-mask matrix,
1134
+ * see CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1135
+ */
1136
+ CUBLASLT_EPILOGUE_RELU_AUX = (CUBLASLT_EPILOGUE_RELU | 128),
1137
+
1138
+ /** Bias, apply (broadcasted) Bias from bias vector. Bias vector length must match matrix D rows, it must be packed
1139
+ * (stride between vector elements is 1). Bias vector is broadcasted to all columns and added before applying final
1140
+ * postprocessing.
1141
+ */
1142
+ CUBLASLT_EPILOGUE_BIAS = 4,
1143
+
1144
+ /** ReLu and Bias, apply Bias and then ReLu transform
1145
+ */
1146
+ CUBLASLT_EPILOGUE_RELU_BIAS = (CUBLASLT_EPILOGUE_RELU | CUBLASLT_EPILOGUE_BIAS),
1147
+
1148
+ /** ReLu and Bias, apply Bias and then ReLu transform
1149
+ *
1150
+ * This epilogue mode produces an extra output, a ReLu bit-mask matrix,
1151
+ * see CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1152
+ */
1153
+ CUBLASLT_EPILOGUE_RELU_AUX_BIAS = (CUBLASLT_EPILOGUE_RELU_AUX | CUBLASLT_EPILOGUE_BIAS),
1154
+
1155
+ /* ReLu gradient. Apply ReLu gradient to matmul output. Store ReLu gradient in the output matrix.
1156
+ *
1157
+ * This epilogue mode requires an extra input,
1158
+ * see CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1159
+ */
1160
+ CUBLASLT_EPILOGUE_DRELU = 8 | 128,
1161
+
1162
+ /* ReLu and Bias gradients. Apply independently ReLu and Bias gradient to
1163
+ * matmul output. Store ReLu gradient in the output matrix, and Bias gradient
1164
+ * in the auxiliary output (see CUBLASLT_MATMUL_DESC_BIAS_POINTER).
1165
+ *
1166
+ * This epilogue mode requires an extra input,
1167
+ * see CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1168
+ */
1169
+ CUBLASLT_EPILOGUE_DRELU_BGRAD = CUBLASLT_EPILOGUE_DRELU | 16,
1170
+
1171
+ /** GELU, apply GELU point-wise transform to the results (x:=GELU(x)).
1172
+ */
1173
+ CUBLASLT_EPILOGUE_GELU = 32,
1174
+
1175
+ /** GELU, apply GELU point-wise transform to the results (x:=GELU(x)).
1176
+ *
1177
+ * This epilogue mode outputs GELU input as a separate matrix (useful for training).
1178
+ * See CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1179
+ */
1180
+ CUBLASLT_EPILOGUE_GELU_AUX = (CUBLASLT_EPILOGUE_GELU | 128),
1181
+
1182
+ /** GELU and Bias, apply Bias and then GELU transform
1183
+ */
1184
+ CUBLASLT_EPILOGUE_GELU_BIAS = (CUBLASLT_EPILOGUE_GELU | CUBLASLT_EPILOGUE_BIAS),
1185
+
1186
+ /** GELU and Bias, apply Bias and then GELU transform
1187
+ *
1188
+ * This epilogue mode outputs GELU input as a separate matrix (useful for training).
1189
+ * See CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1190
+ */
1191
+ CUBLASLT_EPILOGUE_GELU_AUX_BIAS = (CUBLASLT_EPILOGUE_GELU_AUX | CUBLASLT_EPILOGUE_BIAS),
1192
+
1193
+ /* GELU gradient. Apply GELU gradient to matmul output. Store GELU gradient in the output matrix.
1194
+ *
1195
+ * This epilogue mode requires an extra input,
1196
+ * see CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1197
+ */
1198
+ CUBLASLT_EPILOGUE_DGELU = 64 | 128,
1199
+
1200
+ /* GELU and Bias gradients. Apply independently GELU and Bias gradient to
1201
+ * matmul output. Store GELU gradient in the output matrix, and Bias gradient
1202
+ * in the auxiliary output (see CUBLASLT_MATMUL_DESC_BIAS_POINTER).
1203
+ *
1204
+ * This epilogue mode requires an extra input,
1205
+ * see CUBLASLT_MATMUL_DESC_EPILOGUE_AUX_POINTER.
1206
+ */
1207
+ CUBLASLT_EPILOGUE_DGELU_BGRAD = CUBLASLT_EPILOGUE_DGELU | 16,
1208
+
1209
+ /** Bias gradient based on the input matrix A.
1210
+ *
1211
+ * The bias size corresponds to the number of rows of the matrix D.
1212
+ * The reduction happens over the GEMM's "k" dimension.
1213
+ *
1214
+ * Stores Bias gradient in the auxiliary output
1215
+ * (see CUBLASLT_MATMUL_DESC_BIAS_POINTER).
1216
+ */
1217
+ CUBLASLT_EPILOGUE_BGRADA = 256,
1218
+
1219
+ /** Bias gradient based on the input matrix B.
1220
+ *
1221
+ * The bias size corresponds to the number of columns of the matrix D.
1222
+ * The reduction happens over the GEMM's "k" dimension.
1223
+ *
1224
+ * Stores Bias gradient in the auxiliary output
1225
+ * (see CUBLASLT_MATMUL_DESC_BIAS_POINTER).
1226
+ */
1227
+ CUBLASLT_EPILOGUE_BGRADB = 512,
1228
+ } cublasLtEpilogue_t;
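As a usage sketch for the epilogue enum above: a fused bias + GELU epilogue is requested by setting two attributes on an existing matmul descriptor. This assumes a matmul descriptor `desc` created elsewhere and a hypothetical device pointer `d_bias`; `CUBLASLT_MATMUL_DESC_EPILOGUE` and `CUBLASLT_MATMUL_DESC_BIAS_POINTER` are the corresponding descriptor attributes from the same header.

```c
#include <cublasLt.h>

/* Sketch: enable the CUBLASLT_EPILOGUE_GELU_BIAS epilogue on a matmul
 * descriptor.  d_bias is a (hypothetical) device pointer to the bias vector. */
static cublasStatus_t set_gelu_bias_epilogue(cublasLtMatmulDesc_t desc,
                                             const void* d_bias) {
  cublasLtEpilogue_t epi = CUBLASLT_EPILOGUE_GELU_BIAS;
  cublasStatus_t st = cublasLtMatmulDescSetAttribute(
      desc, CUBLASLT_MATMUL_DESC_EPILOGUE, &epi, sizeof(epi));
  if (st != CUBLAS_STATUS_SUCCESS) return st;
  /* Note: the attribute value is the pointer itself, so pass &d_bias. */
  return cublasLtMatmulDescSetAttribute(
      desc, CUBLASLT_MATMUL_DESC_BIAS_POINTER, &d_bias, sizeof(d_bias));
}
```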
1229
+
1230
+ /** Matmul heuristic search mode
1231
+ */
1232
+ typedef enum {
1233
+ /** ask heuristics for the best algo for the given use case
1234
+ */
1235
+ CUBLASLT_SEARCH_BEST_FIT = 0,
1236
+ /** only try to find best config for preconfigured algo id
1237
+ */
1238
+ CUBLASLT_SEARCH_LIMITED_BY_ALGO_ID = 1,
1239
+ /** reserved for future use
1240
+ */
1241
+ CUBLASLT_SEARCH_RESERVED_02 = 2,
1242
+ /** reserved for future use
1243
+ */
1244
+ CUBLASLT_SEARCH_RESERVED_03 = 3,
1245
+ /** reserved for future use
1246
+ */
1247
+ CUBLASLT_SEARCH_RESERVED_04 = 4,
1248
+ /** reserved for future use
1249
+ */
1250
+ CUBLASLT_SEARCH_RESERVED_05 = 5,
1251
+ } cublasLtMatmulSearch_t;
1252
+
1253
+ /** Algo search preference to fine tune the heuristic function. */
1254
+ typedef enum {
1255
+ /** Search mode, see cublasLtMatmulSearch_t.
1256
+ *
1257
+ * uint32_t, default: CUBLASLT_SEARCH_BEST_FIT
1258
+ */
1259
+ CUBLASLT_MATMUL_PREF_SEARCH_MODE = 0,
1260
+
1261
+ /** Maximum allowed workspace size in bytes.
1262
+ *
1263
+ * uint64_t, default: 0 - no workspace allowed
1264
+ */
1265
+ CUBLASLT_MATMUL_PREF_MAX_WORKSPACE_BYTES = 1,
1266
+
1267
+ /** Reduction scheme mask, see cublasLtReductionScheme_t. Filters heuristic result to only include algo configs that
1268
+ * use one of the required modes.
1269
+ *
1270
+ * E.g. mask value of 0x03 will allow only INPLACE and COMPUTE_TYPE reduction schemes.
1271
+ *
1272
+ * uint32_t, default: CUBLASLT_REDUCTION_SCHEME_MASK (allows all reduction schemes)
1273
+ */
1274
+ CUBLASLT_MATMUL_PREF_REDUCTION_SCHEME_MASK = 3,
1275
+
1276
+ /** Minimum buffer alignment for matrix A (in bytes).
1277
+ *
1278
+ * Selecting a smaller value will exclude algorithms that can not work with matrix A that is not as strictly aligned
1279
+ * as they need.
1280
+ *
1281
+ * uint32_t, default: 256
1282
+ */
1283
+ CUBLASLT_MATMUL_PREF_MIN_ALIGNMENT_A_BYTES = 5,
1284
+
1285
+ /** Minimum buffer alignment for matrix B (in bytes).
1286
+ *
1287
+ * Selecting a smaller value will exclude algorithms that can not work with matrix B that is not as strictly aligned
1288
+ * as they need.
1289
+ *
1290
+ * uint32_t, default: 256
1291
+ */
1292
+ CUBLASLT_MATMUL_PREF_MIN_ALIGNMENT_B_BYTES = 6,
1293
+
1294
+ /** Minimum buffer alignment for matrix C (in bytes).
1295
+ *
1296
+ * Selecting a smaller value will exclude algorithms that can not work with matrix C that is not as strictly aligned
1297
+ * as they need.
1298
+ *
1299
+ * uint32_t, default: 256
1300
+ */
1301
+ CUBLASLT_MATMUL_PREF_MIN_ALIGNMENT_C_BYTES = 7,
1302
+
1303
+ /** Minimum buffer alignment for matrix D (in bytes).
1304
+ *
1305
+ * Selecting a smaller value will exclude algorithms that can not work with matrix D that is not as strictly aligned
1306
+ * as they need.
1307
+ *
1308
+ * uint32_t, default: 256
1309
+ */
1310
+ CUBLASLT_MATMUL_PREF_MIN_ALIGNMENT_D_BYTES = 8,
1311
+
1312
+ /** Maximum wave count.
1313
+ *
1314
+ * See cublasLtMatmulHeuristicResult_t::wavesCount.
1315
+ *
1316
+ * Selecting a non-zero value will exclude algorithms that report device utilization higher than specified.
1317
+ *
1318
+ * float, default: 0.0f
1319
+ */
1320
+ CUBLASLT_MATMUL_PREF_MAX_WAVES_COUNT = 9,
1321
+
1322
+ /** Numerical implementation details mask, see cublasLtNumericalImplFlags_t. Filters heuristic result to only include
1323
+ * algorithms that use the allowed implementations.
1324
+ *
1325
+ * uint64_t, default: uint64_t(-1) (allow everything)
1326
+ */
1327
+ CUBLASLT_MATMUL_PREF_IMPL_MASK = 12,
1328
+ } cublasLtMatmulPreferenceAttributes_t;
1329
+
1330
+ /** Internal. Do not use directly.
1331
+ */
1332
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulPreferenceInit_internal(cublasLtMatmulPreference_t pref, size_t size);
1333
+
1334
+ /** Initialize matmul heuristic search preference descriptor in pre-allocated space.
1335
+ *
1336
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if size of the pre-allocated space is insufficient
1337
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was created successfully
1338
+ */
1339
+ static inline cublasStatus_t cublasLtMatmulPreferenceInit(cublasLtMatmulPreference_t pref) {
1340
+ return cublasLtMatmulPreferenceInit_internal(pref, sizeof(*pref));
1341
+ }
1342
+
1343
+ /** Create new matmul heuristic search preference descriptor.
1344
+ *
1345
+ * \retval CUBLAS_STATUS_ALLOC_FAILED if memory could not be allocated
1346
+ * \retval CUBLAS_STATUS_SUCCESS if descriptor was created successfully
1347
+ */
1348
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulPreferenceCreate(cublasLtMatmulPreference_t* pref);
1349
+
1350
+ /** Destroy matmul heuristic search preference descriptor.
1351
+ *
1352
+ * \retval CUBLAS_STATUS_SUCCESS if operation was successful
1353
+ */
1354
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulPreferenceDestroy(cublasLtMatmulPreference_t pref);
1355
+
1356
+ /** Set matmul heuristic search preference descriptor attribute.
1357
+ *
1358
+ * \param[in] pref The descriptor
1359
+ * \param[in] attr The attribute
1360
+ * \param[in] buf memory address containing the new value
1361
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
1362
+ *
1363
+ * \retval CUBLAS_STATUS_INVALID_VALUE if buf is NULL or sizeInBytes doesn't match size of internal storage for
1364
+ * selected attribute
1365
+ * \retval CUBLAS_STATUS_SUCCESS if attribute was set successfully
1366
+ */
1367
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulPreferenceSetAttribute( //
1368
+ cublasLtMatmulPreference_t pref,
1369
+ cublasLtMatmulPreferenceAttributes_t attr,
1370
+ const void* buf,
1371
+ size_t sizeInBytes);
1372
+
1373
+ /** Get matmul heuristic search preference descriptor attribute.
1374
+ *
1375
+ * \param[in] pref The descriptor
1376
+ * \param[in] attr The attribute
1377
+ * \param[out] buf memory address containing the new value
1378
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
1379
+ * \param[out] sizeWritten only valid when return value is CUBLAS_STATUS_SUCCESS. If sizeInBytes is non-zero: number of
1380
+ * bytes actually written, if sizeInBytes is 0: number of bytes needed to write full contents
1381
+ *
1382
+ * \retval CUBLAS_STATUS_INVALID_VALUE if sizeInBytes is 0 and sizeWritten is NULL, or if sizeInBytes is non-zero
1383
+ * and buf is NULL or sizeInBytes doesn't match size of internal storage for
1384
+ * selected attribute
1385
+ * \retval CUBLAS_STATUS_SUCCESS if attribute's value was successfully written to user memory
1386
+ */
1387
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulPreferenceGetAttribute( //
1388
+ cublasLtMatmulPreference_t pref,
1389
+ cublasLtMatmulPreferenceAttributes_t attr,
1390
+ void* buf,
1391
+ size_t sizeInBytes,
1392
+ size_t* sizeWritten);
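A minimal sketch of the preference create/set/get cycle declared above: create a preference descriptor, cap the workspace at 32 MiB via `CUBLASLT_MATMUL_PREF_MAX_WORKSPACE_BYTES` (a `uint64_t` attribute, default 0), and read it back. The 32 MiB figure is an arbitrary illustrative choice; error handling is reduced to a single check.

```c
#include <cublasLt.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  cublasLtMatmulPreference_t pref;
  if (cublasLtMatmulPreferenceCreate(&pref) != CUBLAS_STATUS_SUCCESS) return 1;

  /* Allow up to 32 MiB of workspace (default is 0 - no workspace allowed). */
  uint64_t workspace = 32ull * 1024 * 1024;
  cublasLtMatmulPreferenceSetAttribute(pref, CUBLASLT_MATMUL_PREF_MAX_WORKSPACE_BYTES,
                                       &workspace, sizeof(workspace));

  /* Read the value back to confirm it was stored. */
  uint64_t stored = 0;
  size_t written = 0;
  cublasLtMatmulPreferenceGetAttribute(pref, CUBLASLT_MATMUL_PREF_MAX_WORKSPACE_BYTES,
                                       &stored, sizeof(stored), &written);
  printf("workspace limit: %llu bytes\n", (unsigned long long)stored);

  cublasLtMatmulPreferenceDestroy(pref);
  return 0;
}
```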
1393
+
1394
+ /** Results structure used by cublasLtMatmulGetAlgo.
1395
+ *
1396
+ * Holds returned configured algo descriptor and its runtime properties.
1397
+ */
1398
+ typedef struct {
1399
+ /** Matmul algorithm descriptor.
1400
+ *
1401
+ * Must be initialized with cublasLtMatmulAlgoInit() if preferences' CUBLASLT_MATMUL_PREF_SEARCH_MODE is set to
1402
+ * CUBLASLT_SEARCH_LIMITED_BY_ALGO_ID
1403
+ */
1404
+ cublasLtMatmulAlgo_t algo;
1405
+
1406
+ /** Actual size of workspace memory required.
1407
+ */
1408
+ size_t workspaceSize;
1409
+
1410
+ /** Result status. Other fields are only valid if, after a call to cublasLtMatmulAlgoGetHeuristic(), this member is set to
1411
+ * CUBLAS_STATUS_SUCCESS.
1412
+ */
1413
+ cublasStatus_t state;
1414
+
1415
+ /** Waves count - a device utilization metric.
1416
+ *
1417
+ * wavesCount value of 1.0f suggests that when kernel is launched it will fully occupy the GPU.
1418
+ */
1419
+ float wavesCount;
1420
+
1421
+ int reserved[4];
1422
+ } cublasLtMatmulHeuristicResult_t;
1423
+
1424
+ /** Query cublasLt heuristic for algorithm appropriate for given use case.
1425
+ *
1426
+ * \param[in] lightHandle Pointer to the allocated cuBLASLt handle for the cuBLASLt
1427
+ * context. See cublasLtHandle_t.
1428
+ * \param[in] operationDesc Handle to the matrix multiplication descriptor.
1429
+ * \param[in] Adesc Handle to the layout descriptors for matrix A.
1430
+ * \param[in] Bdesc Handle to the layout descriptors for matrix B.
1431
+ * \param[in] Cdesc Handle to the layout descriptors for matrix C.
1432
+ * \param[in] Ddesc Handle to the layout descriptors for matrix D.
1433
+ * \param[in] preference Pointer to the structure holding the heuristic search
1434
+ * preferences descriptor. See cublasLtMatmulPreference_t.
1435
+ * \param[in] requestedAlgoCount Size of heuristicResultsArray (in elements) and requested
1436
+ * maximum number of algorithms to return.
1437
+ * \param[in, out] heuristicResultsArray Output algorithms and associated runtime characteristics,
1438
+ * ordered in increasing estimated compute time.
1439
+ * \param[out] returnAlgoCount The number of heuristicResultsArray elements written.
1440
+ *
1441
+ * \retval CUBLAS_STATUS_INVALID_VALUE if requestedAlgoCount is less or equal to zero
1442
+ * \retval CUBLAS_STATUS_NOT_SUPPORTED if no heuristic function available for current configuration
1443
+ * \retval CUBLAS_STATUS_SUCCESS if query was successful, inspect
1444
+ * heuristicResultsArray[0 to (returnAlgoCount - 1)].state
1445
+ * for detail status of results
1446
+ */
1447
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulAlgoGetHeuristic(cublasLtHandle_t lightHandle,
1448
+ cublasLtMatmulDesc_t operationDesc,
1449
+ cublasLtMatrixLayout_t Adesc,
1450
+ cublasLtMatrixLayout_t Bdesc,
1451
+ cublasLtMatrixLayout_t Cdesc,
1452
+ cublasLtMatrixLayout_t Ddesc,
1453
+ cublasLtMatmulPreference_t preference,
1454
+ int requestedAlgoCount,
1455
+ cublasLtMatmulHeuristicResult_t heuristicResultsArray[],
1456
+ int* returnAlgoCount);
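The heuristic query above returns up to `requestedAlgoCount` candidates ordered by increasing estimated time, each with its own per-result `state`. A sketch of the recommended consumption pattern, assuming the handle, operation descriptor, layouts, and preference were created elsewhere:

```c
#include <cublasLt.h>

/* Sketch: take the first heuristic result whose per-result state is
 * CUBLAS_STATUS_SUCCESS (results are ordered fastest-first). */
static cublasStatus_t pick_algo(cublasLtHandle_t lt, cublasLtMatmulDesc_t op,
                                cublasLtMatrixLayout_t A, cublasLtMatrixLayout_t B,
                                cublasLtMatrixLayout_t C, cublasLtMatrixLayout_t D,
                                cublasLtMatmulPreference_t pref,
                                cublasLtMatmulHeuristicResult_t* best) {
  cublasLtMatmulHeuristicResult_t results[8];
  int returned = 0;
  cublasStatus_t st = cublasLtMatmulAlgoGetHeuristic(lt, op, A, B, C, D, pref,
                                                     8, results, &returned);
  if (st != CUBLAS_STATUS_SUCCESS) return st;
  for (int i = 0; i < returned; ++i) {
    if (results[i].state == CUBLAS_STATUS_SUCCESS) {
      *best = results[i];  /* best->workspaceSize bytes must be provided */
      return CUBLAS_STATUS_SUCCESS;
    }
  }
  return CUBLAS_STATUS_NOT_SUPPORTED;
}
```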
1457
+
1458
+ /* ---------------------------------------------------------------------------------------*/
1459
+ /* Lower level API to be able to implement own Heuristic and Find routines */
1460
+ /* ---------------------------------------------------------------------------------------*/
1461
+
1462
+ /** Routine to get all algo IDs that can potentially run
1463
+ *
1464
+ * \param[in] requestedAlgoCount requested number of algos (must be less than or equal to size of algoIdsArray (in elements))
1465
+ * \param[out] algoIdsArray array to write algoIds to
1466
+ * \param[out] returnAlgoCount number of algoIds actually written
1467
+ *
1468
+ * \retval CUBLAS_STATUS_INVALID_VALUE if requestedAlgoCount is less or equal to zero
1469
+ * \retval CUBLAS_STATUS_SUCCESS if query was successful, inspect returnAlgoCount to get actual number of IDs
1470
+ * available
1471
+ */
1472
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulAlgoGetIds(cublasLtHandle_t lightHandle,
1473
+ cublasComputeType_t computeType,
1474
+ cudaDataType_t scaleType,
1475
+ cudaDataType_t Atype,
1476
+ cudaDataType_t Btype,
1477
+ cudaDataType_t Ctype,
1478
+ cudaDataType_t Dtype,
1479
+ int requestedAlgoCount,
1480
+ int algoIdsArray[],
1481
+ int* returnAlgoCount);
1482
+
1483
+ /** Initialize algo structure
1484
+ *
1485
+ * \retval CUBLAS_STATUS_INVALID_VALUE if algo is NULL or algoId is outside of recognized range
1486
+ * \retval CUBLAS_STATUS_NOT_SUPPORTED if algoId is not supported for given combination of data types
1487
+ * \retval CUBLAS_STATUS_SUCCESS if the structure was successfully initialized
1488
+ */
1489
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulAlgoInit(cublasLtHandle_t lightHandle,
1490
+ cublasComputeType_t computeType,
1491
+ cudaDataType_t scaleType,
1492
+ cudaDataType_t Atype,
1493
+ cudaDataType_t Btype,
1494
+ cudaDataType_t Ctype,
1495
+ cudaDataType_t Dtype,
1496
+ int algoId,
1497
+ cublasLtMatmulAlgo_t* algo);
1498
+
1499
+ /** Check configured algo descriptor for correctness and support on current device.
1500
+ *
1501
+ * Result includes required workspace size and calculated wave count.
1502
+ *
1503
+ * CUBLAS_STATUS_SUCCESS doesn't fully guarantee algo will run (will fail if e.g. buffers are not correctly aligned);
1504
+ * but if cublasLtMatmulAlgoCheck fails, the algo will not run.
1505
+ *
1506
+ * \param[in] algo algo configuration to check
1507
+ * \param[out] result result structure to report algo runtime characteristics; algo field is never updated
1508
+ *
1509
+ * \retval CUBLAS_STATUS_INVALID_VALUE if matrix layout descriptors or operation descriptor don't match algo
1510
+ * descriptor
1511
+ * \retval CUBLAS_STATUS_NOT_SUPPORTED if algo configuration or data type combination is not currently supported on
1512
+ * given device
1513
+ * \retval CUBLAS_STATUS_ARCH_MISMATCH if algo configuration cannot be run using the selected device
1514
+ * \retval CUBLAS_STATUS_SUCCESS if check was successful
1515
+ */
1516
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulAlgoCheck( //
1517
+ cublasLtHandle_t lightHandle,
1518
+ cublasLtMatmulDesc_t operationDesc,
1519
+ cublasLtMatrixLayout_t Adesc,
1520
+ cublasLtMatrixLayout_t Bdesc,
1521
+ cublasLtMatrixLayout_t Cdesc,
1522
+ cublasLtMatrixLayout_t Ddesc,
1523
+ const cublasLtMatmulAlgo_t* algo, ///< may point to result->algo
1524
+ cublasLtMatmulHeuristicResult_t* result);
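The three lower-level routines above (GetIds, Init, Check) compose into a simple "find something that runs" loop. This sketch assumes an FP32 GEMM (`CUBLAS_COMPUTE_32F` / `CUDA_R_32F` throughout) and descriptors created elsewhere; a real search routine would additionally enumerate tile IDs, split-K values, and the other config attributes.

```c
#include <cublasLt.h>

/* Sketch: return the id of the first algo that initializes and passes
 * cublasLtMatmulAlgoCheck for the given descriptors, or -1 if none do. */
static int first_runnable_algo(cublasLtHandle_t lt, cublasLtMatmulDesc_t op,
                               cublasLtMatrixLayout_t A, cublasLtMatrixLayout_t B,
                               cublasLtMatrixLayout_t C, cublasLtMatrixLayout_t D,
                               cublasLtMatmulAlgo_t* out) {
  int ids[16];
  int n = 0;
  if (cublasLtMatmulAlgoGetIds(lt, CUBLAS_COMPUTE_32F, CUDA_R_32F, CUDA_R_32F,
                               CUDA_R_32F, CUDA_R_32F, CUDA_R_32F,
                               16, ids, &n) != CUBLAS_STATUS_SUCCESS)
    return -1;
  for (int i = 0; i < n; ++i) {
    cublasLtMatmulAlgo_t algo;
    if (cublasLtMatmulAlgoInit(lt, CUBLAS_COMPUTE_32F, CUDA_R_32F, CUDA_R_32F,
                               CUDA_R_32F, CUDA_R_32F, CUDA_R_32F,
                               ids[i], &algo) != CUBLAS_STATUS_SUCCESS)
      continue;
    cublasLtMatmulHeuristicResult_t res;
    if (cublasLtMatmulAlgoCheck(lt, op, A, B, C, D, &algo, &res) ==
        CUBLAS_STATUS_SUCCESS) {
      *out = algo;
      return ids[i];
    }
  }
  return -1;
}
```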
1525
+
1526
+ /** Capabilities Attributes that can be retrieved from an initialized Algo structure
1527
+ */
1528
+ typedef enum {
1529
+ /** support for split K, see CUBLASLT_ALGO_CONFIG_SPLITK_NUM
1530
+ *
1531
+ * int32_t, 0 means no support, supported otherwise
1532
+ */
1533
+ CUBLASLT_ALGO_CAP_SPLITK_SUPPORT = 0,
1534
+
1535
+ /** reduction scheme mask, see cublasLtReductionScheme_t; shows supported reduction schemes, if reduction scheme is
1536
+ * not masked out it is supported.
1537
+ *
1538
+ * e.g. int isReductionSchemeComputeTypeSupported = (reductionSchemeMask & CUBLASLT_REDUCTION_SCHEME_COMPUTE_TYPE) ==
1539
+ * CUBLASLT_REDUCTION_SCHEME_COMPUTE_TYPE ? 1 : 0;
1540
+ *
1541
+ * uint32_t
1542
+ */
1543
+ CUBLASLT_ALGO_CAP_REDUCTION_SCHEME_MASK = 1,
1544
+
1545
+ /** support for cta swizzling, see CUBLASLT_ALGO_CONFIG_CTA_SWIZZLING
1546
+ *
1547
+ * uint32_t, 0 means no support, 1 means supported value of 1, other values are reserved
1548
+ */
1549
+ CUBLASLT_ALGO_CAP_CTA_SWIZZLING_SUPPORT = 2,
1550
+
1551
+ /** support strided batch
1552
+ *
1553
+ * int32_t, 0 means no support, supported otherwise
1554
+ */
1555
+ CUBLASLT_ALGO_CAP_STRIDED_BATCH_SUPPORT = 3,
1556
+
1557
+ /** support results out of place (D != C in D = alpha.A.B + beta.C)
1558
+ *
1559
+ * int32_t, 0 means no support, supported otherwise
1560
+ */
1561
+ CUBLASLT_ALGO_CAP_OUT_OF_PLACE_RESULT_SUPPORT = 4,
1562
+
1563
+ /** syrk/herk support (on top of regular gemm)
1564
+ *
1565
+ * int32_t, 0 means no support, supported otherwise
1566
+ */
1567
+ CUBLASLT_ALGO_CAP_UPLO_SUPPORT = 5,
1568
+
1569
+ /** tile ids possible to use, see cublasLtMatmulTile_t; if no tile ids are supported use
1570
+ * CUBLASLT_MATMUL_TILE_UNDEFINED
1571
+ *
1572
+ * use cublasLtMatmulAlgoCapGetAttribute() with sizeInBytes=0 to query actual count
1573
+ *
1574
+ * array of uint32_t
1575
+ */
1576
+ CUBLASLT_ALGO_CAP_TILE_IDS = 6,
1577
+
1578
+ /** custom option range is from 0 to CUBLASLT_ALGO_CAP_CUSTOM_OPTION_MAX (inclusive), see
1579
+ * CUBLASLT_ALGO_CONFIG_CUSTOM_OPTION
1580
+ *
1581
+ * int32_t
1582
+ */
1583
+ CUBLASLT_ALGO_CAP_CUSTOM_OPTION_MAX = 7,
1584
+
1585
+ /** whether algorithm supports custom (not COL or ROW memory order), see cublasLtOrder_t
1586
+ *
1587
+ * int32_t 0 means only COL and ROW memory order is allowed, non-zero means that algo might have different
1588
+ * requirements;
1589
+ */
1590
+ CUBLASLT_ALGO_CAP_CUSTOM_MEMORY_ORDER = 10,
1591
+
1592
+ /** bitmask enumerating pointer modes algorithm supports
1593
+ *
1594
+ * uint32_t, see cublasLtPointerModeMask_t
1595
+ */
1596
+ CUBLASLT_ALGO_CAP_POINTER_MODE_MASK = 11,
1597
+
1598
+ /** bitmask enumerating kinds of postprocessing algorithm supports in the epilogue
1599
+ *
1600
+ * uint32_t, see cublasLtEpilogue_t
1601
+ */
1602
+ CUBLASLT_ALGO_CAP_EPILOGUE_MASK = 12,
1603
+
1604
+ /** stages ids possible to use, see cublasLtMatmulStages_t; if no stages ids are supported use
1605
+ * CUBLASLT_MATMUL_STAGES_UNDEFINED
1606
+ *
1607
+ * use cublasLtMatmulAlgoCapGetAttribute() with sizeInBytes=0 to query actual count
1608
+ *
1609
+ * array of uint32_t
1610
+ */
1611
+ CUBLASLT_ALGO_CAP_STAGES_IDS = 13,
1612
+
1613
+ /** support for negative ld for all of the matrices
1614
+ *
1615
+ * int32_t 0 means no support, supported otherwise
1616
+ */
1617
+ CUBLASLT_ALGO_CAP_LD_NEGATIVE = 14,
1618
+
1619
+ /** details about algorithm's implementation that affect its numerical behavior
1620
+ *
1621
+ * uint64_t, see cublasLtNumericalImplFlags_t
1622
+ */
1623
+ CUBLASLT_ALGO_CAP_NUMERICAL_IMPL_FLAGS = 15,
1624
+
1625
+ /** minimum alignment required for A matrix in bytes
1626
+ * (required for buffer pointer, leading dimension, and possibly other strides defined for matrix memory order)
1627
+ *
1628
+ * uint32_t
1629
+ */
1630
+ CUBLASLT_ALGO_CAP_MIN_ALIGNMENT_A_BYTES = 16,
1631
+
1632
+ /** minimum alignment required for B matrix in bytes
1633
+ * (required for buffer pointer, leading dimension, and possibly other strides defined for matrix memory order)
1634
+ *
1635
+ * uint32_t
1636
+ */
1637
+ CUBLASLT_ALGO_CAP_MIN_ALIGNMENT_B_BYTES = 17,
1638
+
1639
+ /** minimum alignment required for C matrix in bytes
1640
+ * (required for buffer pointer, leading dimension, and possibly other strides defined for matrix memory order)
1641
+ *
1642
+ * uint32_t
1643
+ */
1644
+ CUBLASLT_ALGO_CAP_MIN_ALIGNMENT_C_BYTES = 18,
1645
+
1646
+ /** minimum alignment required for D matrix in bytes
1647
+ * (required for buffer pointer, leading dimension, and possibly other strides defined for matrix memory order)
1648
+ *
1649
+ * uint32_t
1650
+ */
1651
+ CUBLASLT_ALGO_CAP_MIN_ALIGNMENT_D_BYTES = 19,
1652
+
1653
+ /** EXPERIMENTAL: support for synchronization via atomic counters
1654
+ *
1655
+ * int32_t
1656
+ */
1657
+ CUBLASLT_ALGO_CAP_ATOMIC_SYNC = 20,
1658
+ } cublasLtMatmulAlgoCapAttributes_t;
1659
+
1660
+ /** Get algo capability attribute.
1661
+ *
1662
+ * E.g. to get list of supported Tile IDs:
1663
+ * cublasLtMatmulTile_t tiles[CUBLASLT_MATMUL_TILE_END];
1664
+ * size_t num_tiles, size_written;
1665
+ * if (cublasLtMatmulAlgoCapGetAttribute(algo, CUBLASLT_ALGO_CAP_TILE_IDS, tiles, sizeof(tiles), &size_written) ==
1666
+ * CUBLAS_STATUS_SUCCESS) { num_tiles = size_written / sizeof(tiles[0]);
1667
+ * }
1668
+ *
1669
+ * \param[in] algo The algo descriptor
1670
+ * \param[in] attr The attribute
1671
+ * \param[out] buf memory address containing the new value
1672
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
1673
+ * \param[out] sizeWritten only valid when return value is CUBLAS_STATUS_SUCCESS. If sizeInBytes is non-zero: number of
1674
+ * bytes actually written, if sizeInBytes is 0: number of bytes needed to write full contents
1675
+ *
1676
+ * \retval CUBLAS_STATUS_INVALID_VALUE if sizeInBytes is 0 and sizeWritten is NULL, or if sizeInBytes is non-zero
1677
+ * and buf is NULL or sizeInBytes doesn't match size of internal storage for
1678
+ * selected attribute
1679
+ * \retval CUBLAS_STATUS_SUCCESS if attribute's value was successfully written to user memory
1680
+ */
1681
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulAlgoCapGetAttribute(const cublasLtMatmulAlgo_t* algo,
1682
+ cublasLtMatmulAlgoCapAttributes_t attr,
1683
+ void* buf,
1684
+ size_t sizeInBytes,
1685
+ size_t* sizeWritten);
1686
+
1687
+ /** Algo Configuration Attributes that can be set according to the Algo capabilities
1688
+ */
1689
+ typedef enum {
1690
+ /** algorithm index, see cublasLtMatmulAlgoGetIds()
1691
+ *
1692
+ * readonly, set by cublasLtMatmulAlgoInit()
1693
+ * int32_t
1694
+ */
1695
+ CUBLASLT_ALGO_CONFIG_ID = 0,
1696
+ /** tile id, see cublasLtMatmulTile_t
1697
+ *
1698
+ * uint32_t, default: CUBLASLT_MATMUL_TILE_UNDEFINED
1699
+ */
1700
+ CUBLASLT_ALGO_CONFIG_TILE_ID = 1,
1701
+ /** Number of K splits. If the number of K splits is greater than one, SPLITK_NUM parts
1702
+ * of matrix multiplication will be computed in parallel. The results will be accumulated
1703
+ * according to CUBLASLT_ALGO_CONFIG_REDUCTION_SCHEME
1704
+ *
1705
+ * int32_t, default: 1
1706
+ */
1707
+ CUBLASLT_ALGO_CONFIG_SPLITK_NUM = 2,
1708
+ /** reduction scheme, see cublasLtReductionScheme_t
1709
+ *
1710
+ * uint32_t, default: CUBLASLT_REDUCTION_SCHEME_NONE
1711
+ */
1712
+ CUBLASLT_ALGO_CONFIG_REDUCTION_SCHEME = 3,
1713
+ /** cta swizzling, change mapping from CUDA grid coordinates to parts of the matrices
1714
+ *
1715
+ * possible values: 0, 1, other values reserved
1716
+ *
1717
+ * uint32_t, default: 0
1718
+ */
1719
+ CUBLASLT_ALGO_CONFIG_CTA_SWIZZLING = 4,
1720
+ /** custom option, each algorithm can support some custom options that don't fit description of the other config
1721
+ * attributes, see CUBLASLT_ALGO_CAP_CUSTOM_OPTION_MAX to get accepted range for any specific case
1722
+ *
1723
+ * uint32_t, default: 0
1724
+ */
1725
+ CUBLASLT_ALGO_CONFIG_CUSTOM_OPTION = 5,
1726
+ /** stages id, see cublasLtMatmulStages_t
1727
+ *
1728
+ * uint32_t, default: CUBLASLT_MATMUL_STAGES_UNDEFINED
1729
+ */
1730
+ CUBLASLT_ALGO_CONFIG_STAGES_ID = 6,
1731
+ /** inner shape id, see cublasLtMatmulInnerShape_t
1732
+ *
1733
+ * uint16_t, default: 0 (CUBLASLT_MATMUL_INNER_SHAPE_UNDEFINED)
1734
+ */
1735
+ CUBLASLT_ALGO_CONFIG_INNER_SHAPE_ID = 7,
1736
+ /** Thread Block Cluster shape id, see cublasLtClusterShape_t. Defines cluster size to use.
1737
+ *
1738
+ * uint16_t, default: 0 (CUBLASLT_CLUSTER_SHAPE_AUTO)
1739
+ */
1740
+ CUBLASLT_ALGO_CONFIG_CLUSTER_SHAPE_ID = 8,
1741
+ } cublasLtMatmulAlgoConfigAttributes_t;
1742
+
1743
+ /** Set algo configuration attribute.
1744
+ *
1745
+ * \param[in] algo The algo descriptor
1746
+ * \param[in] attr The attribute
1747
+ * \param[in] buf memory address containing the new value
1748
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
1749
+ *
1750
+ * \retval CUBLAS_STATUS_INVALID_VALUE if buf is NULL or sizeInBytes doesn't match size of internal storage for
1751
+ * selected attribute
1752
+ * \retval CUBLAS_STATUS_SUCCESS if attribute was set successfully
1753
+ */
1754
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulAlgoConfigSetAttribute(cublasLtMatmulAlgo_t* algo,
1755
+ cublasLtMatmulAlgoConfigAttributes_t attr,
1756
+ const void* buf,
1757
+ size_t sizeInBytes);
1758
+
1759
+ /** Get algo configuration attribute.
1760
+ *
1761
+ * \param[in] algo The algo descriptor
1762
+ * \param[in] attr The attribute
1763
+ * \param[out] buf memory address containing the new value
1764
+ * \param[in] sizeInBytes size of buf buffer for verification (in bytes)
1765
+ * \param[out] sizeWritten only valid when return value is CUBLAS_STATUS_SUCCESS. If sizeInBytes is non-zero: number of
1766
+ * bytes actually written, if sizeInBytes is 0: number of bytes needed to write full contents
1767
+ *
1768
+ * \retval CUBLAS_STATUS_INVALID_VALUE if sizeInBytes is 0 and sizeWritten is NULL, or if sizeInBytes is non-zero
1769
+ * and buf is NULL or sizeInBytes doesn't match size of internal storage for
1770
+ * selected attribute
1771
+ * \retval CUBLAS_STATUS_SUCCESS if attribute's value was successfully written to user memory
1772
+ */
1773
+ cublasStatus_t CUBLASWINAPI cublasLtMatmulAlgoConfigGetAttribute(const cublasLtMatmulAlgo_t* algo,
1774
+ cublasLtMatmulAlgoConfigAttributes_t attr,
1775
+ void* buf,
1776
+ size_t sizeInBytes,
1777
+ size_t* sizeWritten);
1778
+
1779
+ /** Experimental: Logger callback type.
1780
+ */
1781
+ typedef void (*cublasLtLoggerCallback_t)(int logLevel, const char* functionName, const char* message);
1782
+
1783
+ /** Experimental: Logger callback setter.
1784
+ *
1785
+ * \param[in] callback a user defined callback function to be called by the logger
1786
+ *
1787
+ * \retval CUBLAS_STATUS_SUCCESS if callback was set successfully
1788
+ */
1789
+ cublasStatus_t CUBLASWINAPI cublasLtLoggerSetCallback(cublasLtLoggerCallback_t callback);
1790
+
1791
+ /** Experimental: Log file setter.
1792
+ *
1793
+ * \param[in] file an open file with write permissions
1794
+ *
1795
+ * \retval CUBLAS_STATUS_SUCCESS if log file was set successfully
1796
+ */
1797
+ cublasStatus_t CUBLASWINAPI cublasLtLoggerSetFile(FILE* file);
1798
+
1799
+ /** Experimental: Open log file.
1800
+ *
1801
+ * \param[in] logFile log file path. if the log file does not exist, it will be created
1802
+ *
1803
+ * \retval CUBLAS_STATUS_SUCCESS if log file was created successfully
1804
+ */
1805
+ cublasStatus_t CUBLASWINAPI cublasLtLoggerOpenFile(const char* logFile);
1806
+
1807
+ /** Experimental: Log level setter.
1808
+ *
1809
+ * \param[in] level log level, should be one of the following:
1810
+ * 0. Off
1811
+ * 1. Errors
1812
+ * 2. Performance Trace
1813
+ * 3. Performance Hints
1814
+ * 4. Heuristics Trace
1815
+ * 5. API Trace
1816
+ *
1817
+ * \retval CUBLAS_STATUS_INVALID_VALUE if log level is not one of the above levels
1818
+ *
1819
+ * \retval CUBLAS_STATUS_SUCCESS if log level was set successfully
1820
+ */
1821
+ cublasStatus_t CUBLASWINAPI cublasLtLoggerSetLevel(int level);
1822
+
1823
+ /** Experimental: Log mask setter.
1824
+ *
1825
+ * \param[in] mask log mask, should be a combination of the following masks:
1826
+ * 0. Off
1827
+ * 1. Errors
1828
+ * 2. Performance Trace
1829
+ * 4. Performance Hints
1830
+ * 8. Heuristics Trace
1831
+ * 16. API Trace
1832
+ *
1833
+ * \retval CUBLAS_STATUS_SUCCESS if log mask was set successfully
1834
+ */
1835
+ cublasStatus_t CUBLASWINAPI cublasLtLoggerSetMask(int mask);
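The experimental logging API above can be wired up as follows: register a callback, then pick either a cumulative level (`cublasLtLoggerSetLevel`) or an explicit bitmask (`cublasLtLoggerSetMask`). The format string below is an arbitrary illustrative choice.

```c
#include <cublasLt.h>
#include <stdio.h>

/* Sketch: route cuBLASLt log messages to stderr. */
static void my_logger(int logLevel, const char* functionName, const char* message) {
  fprintf(stderr, "[cublasLt:%d] %s: %s\n", logLevel, functionName, message);
}

int main(void) {
  cublasLtLoggerSetCallback(my_logger);
  cublasLtLoggerSetLevel(1);      /* 1 = errors only */
  /* Equivalently, combine masks: 1 (errors) | 2 (performance trace). */
  cublasLtLoggerSetMask(1 | 2);
  return 0;
}
```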
1836
+
1837
+ /** Experimental: Disable logging for the entire session.
1838
+ *
1839
+ * \retval CUBLAS_STATUS_SUCCESS if logging was disabled successfully
1840
+ */
1841
+ cublasStatus_t CUBLASWINAPI cublasLtLoggerForceDisable();
1842
+
1843
+ #if defined(__cplusplus)
1844
+ }
1845
+ #endif /* __cplusplus */
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublasXt.h ADDED
@@ -0,0 +1,693 @@
1
+ /*
2
+ * Copyright 1993-2019 NVIDIA Corporation. All rights reserved.
3
+ *
4
+ * NOTICE TO LICENSEE:
5
+ *
6
+ * This source code and/or documentation ("Licensed Deliverables") are
7
+ * subject to NVIDIA intellectual property rights under U.S. and
8
+ * international Copyright laws.
9
+ *
10
+ * These Licensed Deliverables contained herein is PROPRIETARY and
11
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
12
+ * conditions of a form of NVIDIA software license agreement by and
13
+ * between NVIDIA and Licensee ("License Agreement") or electronically
14
+ * accepted by Licensee. Notwithstanding any terms or conditions to
15
+ * the contrary in the License Agreement, reproduction or disclosure
16
+ * of the Licensed Deliverables to any third party without the express
17
+ * written consent of NVIDIA is prohibited.
18
+ *
19
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
20
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
21
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
22
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
23
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
24
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
25
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
26
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
27
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ /* cublasXt : Host API, Out of Core and Multi-GPU BLAS Library
+
+ */
+
+ #if !defined(CUBLAS_XT_H_)
+ #define CUBLAS_XT_H_
+
+ #include "driver_types.h"
+ #include "cuComplex.h" /* import complex data type */
+
+ #include "cublas_v2.h"
+
+ #if defined(__cplusplus)
+ extern "C" {
+ #endif /* __cplusplus */
+
+ struct cublasXtContext;
+ typedef struct cublasXtContext* cublasXtHandle_t;
+
+ cublasStatus_t CUBLASWINAPI cublasXtCreate(cublasXtHandle_t* handle);
+ cublasStatus_t CUBLASWINAPI cublasXtDestroy(cublasXtHandle_t handle);
+ cublasStatus_t CUBLASWINAPI cublasXtGetNumBoards(int nbDevices, int deviceId[], int* nbBoards);
+ cublasStatus_t CUBLASWINAPI cublasXtMaxBoards(int* nbGpuBoards);
+ /* This routine selects the GPUs that the user wants to use with CUBLAS-XT */
+ cublasStatus_t CUBLASWINAPI cublasXtDeviceSelect(cublasXtHandle_t handle, int nbDevices, int deviceId[]);
+
+ /* This routine allows changing the dimension of the tiles ( blockDim x blockDim ) */
+ cublasStatus_t CUBLASWINAPI cublasXtSetBlockDim(cublasXtHandle_t handle, int blockDim);
+ cublasStatus_t CUBLASWINAPI cublasXtGetBlockDim(cublasXtHandle_t handle, int* blockDim);
+
+ typedef enum { CUBLASXT_PINNING_DISABLED = 0, CUBLASXT_PINNING_ENABLED = 1 } cublasXtPinnedMemMode_t;
+ /* This routine allows CUBLAS-XT to pin the host memory if it finds out that some of the matrices passed
+ are not pinned : pinning/unpinning the host memory is still a costly operation.
+ It is better if the user controls the memory on their own (by pinning/unpinning only when necessary)
+ */
+ cublasStatus_t CUBLASWINAPI cublasXtGetPinningMemMode(cublasXtHandle_t handle, cublasXtPinnedMemMode_t* mode);
+ cublasStatus_t CUBLASWINAPI cublasXtSetPinningMemMode(cublasXtHandle_t handle, cublasXtPinnedMemMode_t mode);
+
+ /* These routines provide CPU BLAS routines, used for sizes too small for the GPU or for hybrid computation */
+ typedef enum {
+ CUBLASXT_FLOAT = 0,
+ CUBLASXT_DOUBLE = 1,
+ CUBLASXT_COMPLEX = 2,
+ CUBLASXT_DOUBLECOMPLEX = 3,
+ } cublasXtOpType_t;
+
+ typedef enum {
+ CUBLASXT_GEMM = 0,
+ CUBLASXT_SYRK = 1,
+ CUBLASXT_HERK = 2,
+ CUBLASXT_SYMM = 3,
+ CUBLASXT_HEMM = 4,
+ CUBLASXT_TRSM = 5,
+ CUBLASXT_SYR2K = 6,
+ CUBLASXT_HER2K = 7,
+
+ CUBLASXT_SPMM = 8,
+ CUBLASXT_SYRKX = 9,
+ CUBLASXT_HERKX = 10,
+ CUBLASXT_TRMM = 11,
+ CUBLASXT_ROUTINE_MAX = 12,
+ } cublasXtBlasOp_t;
+
+ /* Currently only 32-bit integer BLAS routines are supported */
+ cublasStatus_t CUBLASWINAPI cublasXtSetCpuRoutine(cublasXtHandle_t handle,
+ cublasXtBlasOp_t blasOp,
+ cublasXtOpType_t type,
+ void* blasFunctor);
+
+ /* Specifies the percentage of work that should be done by the CPU, default is 0 (no work) */
+ cublasStatus_t CUBLASWINAPI cublasXtSetCpuRatio(cublasXtHandle_t handle,
+ cublasXtBlasOp_t blasOp,
+ cublasXtOpType_t type,
+ float ratio);
+
+ /* GEMM */
+ cublasStatus_t CUBLASWINAPI cublasXtSgemm(cublasXtHandle_t handle,
+ cublasOperation_t transa,
+ cublasOperation_t transb,
+ size_t m,
+ size_t n,
+ size_t k,
+ const float* alpha,
+ const float* A,
+ size_t lda,
+ const float* B,
+ size_t ldb,
+ const float* beta,
+ float* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDgemm(cublasXtHandle_t handle,
+ cublasOperation_t transa,
+ cublasOperation_t transb,
+ size_t m,
+ size_t n,
+ size_t k,
+ const double* alpha,
+ const double* A,
+ size_t lda,
+ const double* B,
+ size_t ldb,
+ const double* beta,
+ double* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCgemm(cublasXtHandle_t handle,
+ cublasOperation_t transa,
+ cublasOperation_t transb,
+ size_t m,
+ size_t n,
+ size_t k,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ const cuComplex* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZgemm(cublasXtHandle_t handle,
+ cublasOperation_t transa,
+ cublasOperation_t transb,
+ size_t m,
+ size_t n,
+ size_t k,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const cuDoubleComplex* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+ /* ------------------------------------------------------- */
+ /* SYRK */
+ cublasStatus_t CUBLASWINAPI cublasXtSsyrk(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const float* alpha,
+ const float* A,
+ size_t lda,
+ const float* beta,
+ float* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDsyrk(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const double* alpha,
+ const double* A,
+ size_t lda,
+ const double* beta,
+ double* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCsyrk(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZsyrk(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+ /* -------------------------------------------------------------------- */
+ /* HERK */
+ cublasStatus_t CUBLASWINAPI cublasXtCherk(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const float* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const float* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZherk(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const double* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const double* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+ /* -------------------------------------------------------------------- */
+ /* SYR2K */
+ cublasStatus_t CUBLASWINAPI cublasXtSsyr2k(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const float* alpha,
+ const float* A,
+ size_t lda,
+ const float* B,
+ size_t ldb,
+ const float* beta,
+ float* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDsyr2k(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const double* alpha,
+ const double* A,
+ size_t lda,
+ const double* B,
+ size_t ldb,
+ const double* beta,
+ double* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCsyr2k(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ const cuComplex* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZsyr2k(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const cuDoubleComplex* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+ /* -------------------------------------------------------------------- */
+ /* HERKX : variant extension of HERK */
+ cublasStatus_t CUBLASWINAPI cublasXtCherkx(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ const float* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZherkx(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const double* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+
+ /* -------------------------------------------------------------------- */
+ /* TRSM */
+ cublasStatus_t CUBLASWINAPI cublasXtStrsm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const float* alpha,
+ const float* A,
+ size_t lda,
+ float* B,
+ size_t ldb);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDtrsm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const double* alpha,
+ const double* A,
+ size_t lda,
+ double* B,
+ size_t ldb);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCtrsm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ cuComplex* B,
+ size_t ldb);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZtrsm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ cuDoubleComplex* B,
+ size_t ldb);
+ /* -------------------------------------------------------------------- */
+ /* SYMM : Symmetric Matrix Multiply */
+ cublasStatus_t CUBLASWINAPI cublasXtSsymm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const float* alpha,
+ const float* A,
+ size_t lda,
+ const float* B,
+ size_t ldb,
+ const float* beta,
+ float* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDsymm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const double* alpha,
+ const double* A,
+ size_t lda,
+ const double* B,
+ size_t ldb,
+ const double* beta,
+ double* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCsymm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ const cuComplex* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZsymm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const cuDoubleComplex* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+ /* -------------------------------------------------------------------- */
+ /* HEMM : Hermitian Matrix Multiply */
+ cublasStatus_t CUBLASWINAPI cublasXtChemm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ const cuComplex* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZhemm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const cuDoubleComplex* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+
+ /* -------------------------------------------------------------------- */
+ /* SYRKX : variant extension of SYRK */
+ cublasStatus_t CUBLASWINAPI cublasXtSsyrkx(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const float* alpha,
+ const float* A,
+ size_t lda,
+ const float* B,
+ size_t ldb,
+ const float* beta,
+ float* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDsyrkx(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const double* alpha,
+ const double* A,
+ size_t lda,
+ const double* B,
+ size_t ldb,
+ const double* beta,
+ double* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCsyrkx(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ const cuComplex* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZsyrkx(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const cuDoubleComplex* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+ /* -------------------------------------------------------------------- */
+ /* HER2K : variant extension of HERK */
+ cublasStatus_t CUBLASWINAPI cublasXtCher2k(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ const float* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZher2k(cublasXtHandle_t handle,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ size_t n,
+ size_t k,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const double* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+
+ /* -------------------------------------------------------------------- */
+ /* SPMM : Symmetric Packed Matrix Multiply */
+ cublasStatus_t CUBLASWINAPI cublasXtSspmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const float* alpha,
+ const float* AP,
+ const float* B,
+ size_t ldb,
+ const float* beta,
+ float* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDspmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const double* alpha,
+ const double* AP,
+ const double* B,
+ size_t ldb,
+ const double* beta,
+ double* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCspmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const cuComplex* alpha,
+ const cuComplex* AP,
+ const cuComplex* B,
+ size_t ldb,
+ const cuComplex* beta,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZspmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ size_t m,
+ size_t n,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* AP,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ const cuDoubleComplex* beta,
+ cuDoubleComplex* C,
+ size_t ldc);
+
+ /* -------------------------------------------------------------------- */
+ /* TRMM */
+ cublasStatus_t CUBLASWINAPI cublasXtStrmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const float* alpha,
+ const float* A,
+ size_t lda,
+ const float* B,
+ size_t ldb,
+ float* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtDtrmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const double* alpha,
+ const double* A,
+ size_t lda,
+ const double* B,
+ size_t ldb,
+ double* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtCtrmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const cuComplex* alpha,
+ const cuComplex* A,
+ size_t lda,
+ const cuComplex* B,
+ size_t ldb,
+ cuComplex* C,
+ size_t ldc);
+
+ cublasStatus_t CUBLASWINAPI cublasXtZtrmm(cublasXtHandle_t handle,
+ cublasSideMode_t side,
+ cublasFillMode_t uplo,
+ cublasOperation_t trans,
+ cublasDiagType_t diag,
+ size_t m,
+ size_t n,
+ const cuDoubleComplex* alpha,
+ const cuDoubleComplex* A,
+ size_t lda,
+ const cuDoubleComplex* B,
+ size_t ldb,
+ cuDoubleComplex* C,
+ size_t ldc);
+
+ #if defined(__cplusplus)
+ }
+ #endif /* __cplusplus */
+
+ #endif /* !defined(CUBLAS_XT_H_) */
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublas_api.h ADDED
The diff for this file is too large to render. See raw diff
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/cublas_v2.h ADDED
@@ -0,0 +1,478 @@
+ /*
+ * Copyright 1993-2019 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ /*
+ * This is the public header file for the new CUBLAS library API; it maps the generic
+ * cuBLAS function names to the actual _v2 implementations.
+ */
+
+ #if !defined(CUBLAS_V2_H_)
+ #define CUBLAS_V2_H_
+
+ #if defined(CUBLAS_H_)
+ #error "It is an error to include both cublas.h and cublas_v2.h"
+ #endif
+
+ #undef CUBLASAPI
+ #ifdef __CUDACC__
+ #define CUBLASAPI __host__ __device__
+ #else
+ #define CUBLASAPI
+ #endif
+
+ #include "cublas_api.h"
+
+ #define cublasCreate cublasCreate_v2
+ #define cublasDestroy cublasDestroy_v2
+ #define cublasGetVersion cublasGetVersion_v2
+ #define cublasSetWorkspace cublasSetWorkspace_v2
+ #define cublasSetStream cublasSetStream_v2
+ #define cublasGetStream cublasGetStream_v2
+ #define cublasGetPointerMode cublasGetPointerMode_v2
+ #define cublasSetPointerMode cublasSetPointerMode_v2
+
+ /* 32-bit integer */
+
+ /* Blas1 Routines */
+
+ #define cublasSnrm2 cublasSnrm2_v2
+ #define cublasDnrm2 cublasDnrm2_v2
+ #define cublasScnrm2 cublasScnrm2_v2
+ #define cublasDznrm2 cublasDznrm2_v2
+
+ #define cublasSdot cublasSdot_v2
+ #define cublasDdot cublasDdot_v2
+ #define cublasCdotu cublasCdotu_v2
+ #define cublasCdotc cublasCdotc_v2
+ #define cublasZdotu cublasZdotu_v2
+ #define cublasZdotc cublasZdotc_v2
+
+ #define cublasSscal cublasSscal_v2
+ #define cublasDscal cublasDscal_v2
+ #define cublasCscal cublasCscal_v2
+ #define cublasCsscal cublasCsscal_v2
+ #define cublasZscal cublasZscal_v2
+ #define cublasZdscal cublasZdscal_v2
+
+ #define cublasSaxpy cublasSaxpy_v2
+ #define cublasDaxpy cublasDaxpy_v2
+ #define cublasCaxpy cublasCaxpy_v2
+ #define cublasZaxpy cublasZaxpy_v2
+
+ #define cublasScopy cublasScopy_v2
+ #define cublasDcopy cublasDcopy_v2
+ #define cublasCcopy cublasCcopy_v2
+ #define cublasZcopy cublasZcopy_v2
+
+ #define cublasSswap cublasSswap_v2
+ #define cublasDswap cublasDswap_v2
+ #define cublasCswap cublasCswap_v2
+ #define cublasZswap cublasZswap_v2
+
+ #define cublasIsamax cublasIsamax_v2
+ #define cublasIdamax cublasIdamax_v2
+ #define cublasIcamax cublasIcamax_v2
+ #define cublasIzamax cublasIzamax_v2
+
+ #define cublasIsamin cublasIsamin_v2
+ #define cublasIdamin cublasIdamin_v2
+ #define cublasIcamin cublasIcamin_v2
+ #define cublasIzamin cublasIzamin_v2
+
+ #define cublasSasum cublasSasum_v2
+ #define cublasDasum cublasDasum_v2
+ #define cublasScasum cublasScasum_v2
+ #define cublasDzasum cublasDzasum_v2
+
+ #define cublasSrot cublasSrot_v2
+ #define cublasDrot cublasDrot_v2
+ #define cublasCrot cublasCrot_v2
+ #define cublasCsrot cublasCsrot_v2
+ #define cublasZrot cublasZrot_v2
+ #define cublasZdrot cublasZdrot_v2
+
+ #define cublasSrotg cublasSrotg_v2
+ #define cublasDrotg cublasDrotg_v2
+ #define cublasCrotg cublasCrotg_v2
+ #define cublasZrotg cublasZrotg_v2
+
+ #define cublasSrotm cublasSrotm_v2
+ #define cublasDrotm cublasDrotm_v2
+
+ #define cublasSrotmg cublasSrotmg_v2
+ #define cublasDrotmg cublasDrotmg_v2
+
+ /* Blas2 Routines */
+
+ #define cublasSgemv cublasSgemv_v2
+ #define cublasDgemv cublasDgemv_v2
+ #define cublasCgemv cublasCgemv_v2
+ #define cublasZgemv cublasZgemv_v2
+
+ #define cublasSgbmv cublasSgbmv_v2
+ #define cublasDgbmv cublasDgbmv_v2
+ #define cublasCgbmv cublasCgbmv_v2
+ #define cublasZgbmv cublasZgbmv_v2
+
+ #define cublasStrmv cublasStrmv_v2
+ #define cublasDtrmv cublasDtrmv_v2
+ #define cublasCtrmv cublasCtrmv_v2
+ #define cublasZtrmv cublasZtrmv_v2
+
+ #define cublasStbmv cublasStbmv_v2
+ #define cublasDtbmv cublasDtbmv_v2
+ #define cublasCtbmv cublasCtbmv_v2
+ #define cublasZtbmv cublasZtbmv_v2
+
+ #define cublasStpmv cublasStpmv_v2
+ #define cublasDtpmv cublasDtpmv_v2
+ #define cublasCtpmv cublasCtpmv_v2
+ #define cublasZtpmv cublasZtpmv_v2
+
+ #define cublasStrsv cublasStrsv_v2
+ #define cublasDtrsv cublasDtrsv_v2
+ #define cublasCtrsv cublasCtrsv_v2
+ #define cublasZtrsv cublasZtrsv_v2
+
+ #define cublasStpsv cublasStpsv_v2
+ #define cublasDtpsv cublasDtpsv_v2
+ #define cublasCtpsv cublasCtpsv_v2
+ #define cublasZtpsv cublasZtpsv_v2
+
+ #define cublasStbsv cublasStbsv_v2
+ #define cublasDtbsv cublasDtbsv_v2
+ #define cublasCtbsv cublasCtbsv_v2
+ #define cublasZtbsv cublasZtbsv_v2
+
+ #define cublasSsymv cublasSsymv_v2
+ #define cublasDsymv cublasDsymv_v2
+ #define cublasCsymv cublasCsymv_v2
+ #define cublasZsymv cublasZsymv_v2
+ #define cublasChemv cublasChemv_v2
+ #define cublasZhemv cublasZhemv_v2
+
+ #define cublasSsbmv cublasSsbmv_v2
+ #define cublasDsbmv cublasDsbmv_v2
+ #define cublasChbmv cublasChbmv_v2
+ #define cublasZhbmv cublasZhbmv_v2
+
+ #define cublasSspmv cublasSspmv_v2
+ #define cublasDspmv cublasDspmv_v2
+ #define cublasChpmv cublasChpmv_v2
+ #define cublasZhpmv cublasZhpmv_v2
+
+ #define cublasSger cublasSger_v2
+ #define cublasDger cublasDger_v2
+ #define cublasCgeru cublasCgeru_v2
+ #define cublasCgerc cublasCgerc_v2
+ #define cublasZgeru cublasZgeru_v2
+ #define cublasZgerc cublasZgerc_v2
+
+ #define cublasSsyr cublasSsyr_v2
+ #define cublasDsyr cublasDsyr_v2
+ #define cublasCsyr cublasCsyr_v2
+ #define cublasZsyr cublasZsyr_v2
+ #define cublasCher cublasCher_v2
+ #define cublasZher cublasZher_v2
+
+ #define cublasSspr cublasSspr_v2
+ #define cublasDspr cublasDspr_v2
+ #define cublasChpr cublasChpr_v2
+ #define cublasZhpr cublasZhpr_v2
+
+ #define cublasSsyr2 cublasSsyr2_v2
+ #define cublasDsyr2 cublasDsyr2_v2
+ #define cublasCsyr2 cublasCsyr2_v2
+ #define cublasZsyr2 cublasZsyr2_v2
+ #define cublasCher2 cublasCher2_v2
+ #define cublasZher2 cublasZher2_v2
+
+ #define cublasSspr2 cublasSspr2_v2
+ #define cublasDspr2 cublasDspr2_v2
+ #define cublasChpr2 cublasChpr2_v2
+ #define cublasZhpr2 cublasZhpr2_v2
+
+ /* Blas3 Routines */
+
+ #define cublasSgemm cublasSgemm_v2
+ #define cublasDgemm cublasDgemm_v2
+ #define cublasCgemm cublasCgemm_v2
+ #define cublasZgemm cublasZgemm_v2
+
+ #define cublasSsyrk cublasSsyrk_v2
+ #define cublasDsyrk cublasDsyrk_v2
+ #define cublasCsyrk cublasCsyrk_v2
+ #define cublasZsyrk cublasZsyrk_v2
+ #define cublasCherk cublasCherk_v2
+ #define cublasZherk cublasZherk_v2
+
+ #define cublasSsyr2k cublasSsyr2k_v2
+ #define cublasDsyr2k cublasDsyr2k_v2
+ #define cublasCsyr2k cublasCsyr2k_v2
+ #define cublasZsyr2k cublasZsyr2k_v2
+ #define cublasCher2k cublasCher2k_v2
+ #define cublasZher2k cublasZher2k_v2
+
+ #define cublasSsymm cublasSsymm_v2
+ #define cublasDsymm cublasDsymm_v2
+ #define cublasCsymm cublasCsymm_v2
+ #define cublasZsymm cublasZsymm_v2
+ #define cublasChemm cublasChemm_v2
+ #define cublasZhemm cublasZhemm_v2
+
+ #define cublasStrsm cublasStrsm_v2
+ #define cublasDtrsm cublasDtrsm_v2
+ #define cublasCtrsm cublasCtrsm_v2
+ #define cublasZtrsm cublasZtrsm_v2
+
+ #define cublasStrmm cublasStrmm_v2
+ #define cublasDtrmm cublasDtrmm_v2
+ #define cublasCtrmm cublasCtrmm_v2
+ #define cublasZtrmm cublasZtrmm_v2
+
+ /* 64-bit integer */
+
+ /* Blas1 Routines */
+
+ #define cublasSnrm2_64 cublasSnrm2_v2_64
+ #define cublasDnrm2_64 cublasDnrm2_v2_64
+ #define cublasScnrm2_64 cublasScnrm2_v2_64
+ #define cublasDznrm2_64 cublasDznrm2_v2_64
+
+ #define cublasSdot_64 cublasSdot_v2_64
+ #define cublasDdot_64 cublasDdot_v2_64
+ #define cublasCdotu_64 cublasCdotu_v2_64
+ #define cublasCdotc_64 cublasCdotc_v2_64
+ #define cublasZdotu_64 cublasZdotu_v2_64
+ #define cublasZdotc_64 cublasZdotc_v2_64
+
+ #define cublasSscal_64 cublasSscal_v2_64
+ #define cublasDscal_64 cublasDscal_v2_64
+ #define cublasCscal_64 cublasCscal_v2_64
+ #define cublasCsscal_64 cublasCsscal_v2_64
+ #define cublasZscal_64 cublasZscal_v2_64
+ #define cublasZdscal_64 cublasZdscal_v2_64
+
+ #define cublasSaxpy_64 cublasSaxpy_v2_64
+ #define cublasDaxpy_64 cublasDaxpy_v2_64
+ #define cublasCaxpy_64 cublasCaxpy_v2_64
+ #define cublasZaxpy_64 cublasZaxpy_v2_64
+
+ #define cublasScopy_64 cublasScopy_v2_64
+ #define cublasDcopy_64 cublasDcopy_v2_64
+ #define cublasCcopy_64 cublasCcopy_v2_64
+ #define cublasZcopy_64 cublasZcopy_v2_64
+
+ #define cublasSswap_64 cublasSswap_v2_64
+ #define cublasDswap_64 cublasDswap_v2_64
+ #define cublasCswap_64 cublasCswap_v2_64
+ #define cublasZswap_64 cublasZswap_v2_64
+
+ #define cublasIsamax_64 cublasIsamax_v2_64
+ #define cublasIdamax_64 cublasIdamax_v2_64
+ #define cublasIcamax_64 cublasIcamax_v2_64
+ #define cublasIzamax_64 cublasIzamax_v2_64
+
+ #define cublasIsamin_64 cublasIsamin_v2_64
+ #define cublasIdamin_64 cublasIdamin_v2_64
+ #define cublasIcamin_64 cublasIcamin_v2_64
+ #define cublasIzamin_64 cublasIzamin_v2_64
+
+ #define cublasSasum_64 cublasSasum_v2_64
+ #define cublasDasum_64 cublasDasum_v2_64
+ #define cublasScasum_64 cublasScasum_v2_64
+ #define cublasDzasum_64 cublasDzasum_v2_64
+
+ #define cublasSrot_64 cublasSrot_v2_64
+ #define cublasDrot_64 cublasDrot_v2_64
+ #define cublasCrot_64 cublasCrot_v2_64
+ #define cublasCsrot_64 cublasCsrot_v2_64
+ #define cublasZrot_64 cublasZrot_v2_64
+ #define cublasZdrot_64 cublasZdrot_v2_64
+
+ #define cublasSrotg_64 cublasSrotg_v2_64
+ #define cublasDrotg_64 cublasDrotg_v2_64
+ #define cublasCrotg_64 cublasCrotg_v2_64
+ #define cublasZrotg_64 cublasZrotg_v2_64
+
+ #define cublasSrotm_64 cublasSrotm_v2_64
+ #define cublasDrotm_64 cublasDrotm_v2_64
+
+ #define cublasSrotmg_64 cublasSrotmg_v2_64
+ #define cublasDrotmg_64 cublasDrotmg_v2_64
+
+ /* Blas2 Routines */
+
+ #define cublasSgemv_64 cublasSgemv_v2_64
+ #define cublasDgemv_64 cublasDgemv_v2_64
+ #define cublasCgemv_64 cublasCgemv_v2_64
+ #define cublasZgemv_64 cublasZgemv_v2_64
+
+ #define cublasSgbmv_64 cublasSgbmv_v2_64
+ #define cublasDgbmv_64 cublasDgbmv_v2_64
+ #define cublasCgbmv_64 cublasCgbmv_v2_64
+ #define cublasZgbmv_64 cublasZgbmv_v2_64
+
+ #define cublasStrmv_64 cublasStrmv_v2_64
+ #define cublasDtrmv_64 cublasDtrmv_v2_64
+ #define cublasCtrmv_64 cublasCtrmv_v2_64
+ #define cublasZtrmv_64 cublasZtrmv_v2_64
+
+ #define cublasStbmv_64 cublasStbmv_v2_64
+ #define cublasDtbmv_64 cublasDtbmv_v2_64
+ #define cublasCtbmv_64 cublasCtbmv_v2_64
+ #define cublasZtbmv_64 cublasZtbmv_v2_64
+
+ #define cublasStpmv_64 cublasStpmv_v2_64
+ #define cublasDtpmv_64 cublasDtpmv_v2_64
+ #define cublasCtpmv_64 cublasCtpmv_v2_64
+ #define cublasZtpmv_64 cublasZtpmv_v2_64
+
+ #define cublasStrsv_64 cublasStrsv_v2_64
+ #define cublasDtrsv_64 cublasDtrsv_v2_64
+ #define cublasCtrsv_64 cublasCtrsv_v2_64
+ #define cublasZtrsv_64 cublasZtrsv_v2_64
+
+ #define cublasStpsv_64 cublasStpsv_v2_64
+ #define cublasDtpsv_64 cublasDtpsv_v2_64
+ #define cublasCtpsv_64 cublasCtpsv_v2_64
+ #define cublasZtpsv_64 cublasZtpsv_v2_64
+
+ #define cublasStbsv_64 cublasStbsv_v2_64
+ #define cublasDtbsv_64 cublasDtbsv_v2_64
+ #define cublasCtbsv_64 cublasCtbsv_v2_64
+ #define cublasZtbsv_64 cublasZtbsv_v2_64
+
+ #define cublasSsymv_64 cublasSsymv_v2_64
+ #define cublasDsymv_64 cublasDsymv_v2_64
+ #define cublasCsymv_64 cublasCsymv_v2_64
+ #define cublasZsymv_64 cublasZsymv_v2_64
+ #define cublasChemv_64 cublasChemv_v2_64
+ #define cublasZhemv_64 cublasZhemv_v2_64
+
+ #define cublasSsbmv_64 cublasSsbmv_v2_64
+ #define cublasDsbmv_64 cublasDsbmv_v2_64
+ #define cublasChbmv_64 cublasChbmv_v2_64
+ #define cublasZhbmv_64 cublasZhbmv_v2_64
403
+
404
+ #define cublasSspmv_64 cublasSspmv_v2_64
405
+ #define cublasDspmv_64 cublasDspmv_v2_64
406
+ #define cublasChpmv_64 cublasChpmv_v2_64
407
+ #define cublasZhpmv_64 cublasZhpmv_v2_64
408
+
409
+ #define cublasSger_64 cublasSger_v2_64
410
+ #define cublasDger_64 cublasDger_v2_64
411
+ #define cublasCgeru_64 cublasCgeru_v2_64
412
+ #define cublasCgerc_64 cublasCgerc_v2_64
413
+ #define cublasZgeru_64 cublasZgeru_v2_64
414
+ #define cublasZgerc_64 cublasZgerc_v2_64
415
+
416
+ #define cublasSsyr_64 cublasSsyr_v2_64
417
+ #define cublasDsyr_64 cublasDsyr_v2_64
418
+ #define cublasCsyr_64 cublasCsyr_v2_64
419
+ #define cublasZsyr_64 cublasZsyr_v2_64
420
+ #define cublasCher_64 cublasCher_v2_64
421
+ #define cublasZher_64 cublasZher_v2_64
422
+
423
+ #define cublasSspr_64 cublasSspr_v2_64
424
+ #define cublasDspr_64 cublasDspr_v2_64
425
+ #define cublasChpr_64 cublasChpr_v2_64
426
+ #define cublasZhpr_64 cublasZhpr_v2_64
427
+
428
+ #define cublasSsyr2_64 cublasSsyr2_v2_64
429
+ #define cublasDsyr2_64 cublasDsyr2_v2_64
430
+ #define cublasCsyr2_64 cublasCsyr2_v2_64
431
+ #define cublasZsyr2_64 cublasZsyr2_v2_64
432
+ #define cublasCher2_64 cublasCher2_v2_64
433
+ #define cublasZher2_64 cublasZher2_v2_64
434
+
435
+ #define cublasSspr2_64 cublasSspr2_v2_64
436
+ #define cublasDspr2_64 cublasDspr2_v2_64
437
+ #define cublasChpr2_64 cublasChpr2_v2_64
438
+ #define cublasZhpr2_64 cublasZhpr2_v2_64
439
+
440
+ /* Blas3 Routines */
441
+
442
+ #define cublasSgemm_64 cublasSgemm_v2_64
443
+ #define cublasDgemm_64 cublasDgemm_v2_64
444
+ #define cublasCgemm_64 cublasCgemm_v2_64
445
+ #define cublasZgemm_64 cublasZgemm_v2_64
446
+
447
+ #define cublasSsyrk_64 cublasSsyrk_v2_64
448
+ #define cublasDsyrk_64 cublasDsyrk_v2_64
449
+ #define cublasCsyrk_64 cublasCsyrk_v2_64
450
+ #define cublasZsyrk_64 cublasZsyrk_v2_64
451
+ #define cublasCherk_64 cublasCherk_v2_64
452
+ #define cublasZherk_64 cublasZherk_v2_64
453
+
454
+ #define cublasSsyr2k_64 cublasSsyr2k_v2_64
455
+ #define cublasDsyr2k_64 cublasDsyr2k_v2_64
456
+ #define cublasCsyr2k_64 cublasCsyr2k_v2_64
457
+ #define cublasZsyr2k_64 cublasZsyr2k_v2_64
458
+ #define cublasCher2k_64 cublasCher2k_v2_64
459
+ #define cublasZher2k_64 cublasZher2k_v2_64
460
+
461
+ #define cublasSsymm_64 cublasSsymm_v2_64
462
+ #define cublasDsymm_64 cublasDsymm_v2_64
463
+ #define cublasCsymm_64 cublasCsymm_v2_64
464
+ #define cublasZsymm_64 cublasZsymm_v2_64
465
+ #define cublasChemm_64 cublasChemm_v2_64
466
+ #define cublasZhemm_64 cublasZhemm_v2_64
467
+
468
+ #define cublasStrsm_64 cublasStrsm_v2_64
469
+ #define cublasDtrsm_64 cublasDtrsm_v2_64
470
+ #define cublasCtrsm_64 cublasCtrsm_v2_64
471
+ #define cublasZtrsm_64 cublasZtrsm_v2_64
472
+
473
+ #define cublasStrmm_64 cublasStrmm_v2_64
474
+ #define cublasDtrmm_64 cublasDtrmm_v2_64
475
+ #define cublasCtrmm_64 cublasCtrmm_v2_64
476
+ #define cublasZtrmm_64 cublasZtrmm_v2_64
477
+
478
+ #endif /* !defined(CUBLAS_V2_H_) */
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/include/nvblas.h ADDED
@@ -0,0 +1,824 @@
+ /*
+ * Copyright 1993-2019 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ #if !defined(NVBLAS_H_)
+ #define NVBLAS_H_
+
+ #include "driver_types.h"
+ #include "cuComplex.h" /* import complex data type */
+
+ #if defined(__cplusplus)
+ extern "C" {
+ #endif
+
+ /* GEMM */
+ void sgemm_(const char* transa, const char* transb, const int* m, const int* n, const int* k, const float* alpha, const float* a, const int* lda, const float* b, const int* ldb, const float* beta, float* c, const int* ldc);
+ void dgemm_(const char* transa, const char* transb, const int* m, const int* n, const int* k, const double* alpha, const double* a, const int* lda, const double* b, const int* ldb, const double* beta, double* c, const int* ldc);
+ void cgemm_(const char* transa, const char* transb, const int* m, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zgemm_(const char* transa, const char* transb, const int* m, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ void sgemm(const char* transa, const char* transb, const int* m, const int* n, const int* k, const float* alpha, const float* a, const int* lda, const float* b, const int* ldb, const float* beta, float* c, const int* ldc);
+ void dgemm(const char* transa, const char* transb, const int* m, const int* n, const int* k, const double* alpha, const double* a, const int* lda, const double* b, const int* ldb, const double* beta, double* c, const int* ldc);
+ void cgemm(const char* transa, const char* transb, const int* m, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zgemm(const char* transa, const char* transb, const int* m, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* SYRK */
+ void ssyrk_(const char* uplo, const char* trans, const int* n, const int* k, const float* alpha, const float* a, const int* lda, const float* beta, float* c, const int* ldc);
+ void dsyrk_(const char* uplo, const char* trans, const int* n, const int* k, const double* alpha, const double* a, const int* lda, const double* beta, double* c, const int* ldc);
+ void csyrk_(const char* uplo, const char* trans, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zsyrk_(const char* uplo, const char* trans, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ void ssyrk(const char* uplo, const char* trans, const int* n, const int* k, const float* alpha, const float* a, const int* lda, const float* beta, float* c, const int* ldc);
+ void dsyrk(const char* uplo, const char* trans, const int* n, const int* k, const double* alpha, const double* a, const int* lda, const double* beta, double* c, const int* ldc);
+ void csyrk(const char* uplo, const char* trans, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zsyrk(const char* uplo, const char* trans, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* HERK */
+ void cherk_(const char* uplo, const char* trans, const int* n, const int* k, const float* alpha, const cuComplex* a, const int* lda, const float* beta, cuComplex* c, const int* ldc);
+ void zherk_(const char* uplo, const char* trans, const int* n, const int* k, const double* alpha, const cuDoubleComplex* a, const int* lda, const double* beta, cuDoubleComplex* c, const int* ldc);
+
+ void cherk(const char* uplo, const char* trans, const int* n, const int* k, const float* alpha, const cuComplex* a, const int* lda, const float* beta, cuComplex* c, const int* ldc);
+ void zherk(const char* uplo, const char* trans, const int* n, const int* k, const double* alpha, const cuDoubleComplex* a, const int* lda, const double* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* TRSM */
+ void strsm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const float* alpha, const float* a, const int* lda, float* b, const int* ldb);
+ void dtrsm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const double* alpha, const double* a, const int* lda, double* b, const int* ldb);
+ void ctrsm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, cuComplex* b, const int* ldb);
+ void ztrsm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, cuDoubleComplex* b, const int* ldb);
+
+ void strsm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const float* alpha, const float* a, const int* lda, float* b, const int* ldb);
+ void dtrsm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const double* alpha, const double* a, const int* lda, double* b, const int* ldb);
+ void ctrsm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, cuComplex* b, const int* ldb);
+ void ztrsm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, cuDoubleComplex* b, const int* ldb);
+
+ /* SYMM */
+ void ssymm_(const char* side, const char* uplo, const int* m, const int* n, const float* alpha, const float* a, const int* lda, const float* b, const int* ldb, const float* beta, float* c, const int* ldc);
+ void dsymm_(const char* side, const char* uplo, const int* m, const int* n, const double* alpha, const double* a, const int* lda, const double* b, const int* ldb, const double* beta, double* c, const int* ldc);
+ void csymm_(const char* side, const char* uplo, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zsymm_(const char* side, const char* uplo, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ void ssymm(const char* side, const char* uplo, const int* m, const int* n, const float* alpha, const float* a, const int* lda, const float* b, const int* ldb, const float* beta, float* c, const int* ldc);
+ void dsymm(const char* side, const char* uplo, const int* m, const int* n, const double* alpha, const double* a, const int* lda, const double* b, const int* ldb, const double* beta, double* c, const int* ldc);
+ void csymm(const char* side, const char* uplo, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zsymm(const char* side, const char* uplo, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* HEMM */
+ void chemm_(const char* side, const char* uplo, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zhemm_(const char* side, const char* uplo, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* HEMM with no underscore */
+ void chemm(const char* side, const char* uplo, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zhemm(const char* side, const char* uplo, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* SYR2K */
+ void ssyr2k_(const char* uplo, const char* trans, const int* n, const int* k, const float* alpha, const float* a, const int* lda, const float* b, const int* ldb, const float* beta, float* c, const int* ldc);
+ void dsyr2k_(const char* uplo, const char* trans, const int* n, const int* k, const double* alpha, const double* a, const int* lda, const double* b, const int* ldb, const double* beta, double* c, const int* ldc);
+ void csyr2k_(const char* uplo, const char* trans, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zsyr2k_(const char* uplo, const char* trans, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* SYR2K with no underscore */
+ void ssyr2k(const char* uplo, const char* trans, const int* n, const int* k, const float* alpha, const float* a, const int* lda, const float* b, const int* ldb, const float* beta, float* c, const int* ldc);
+ void dsyr2k(const char* uplo, const char* trans, const int* n, const int* k, const double* alpha, const double* a, const int* lda, const double* b, const int* ldb, const double* beta, double* c, const int* ldc);
+ void csyr2k(const char* uplo, const char* trans, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const cuComplex* beta, cuComplex* c, const int* ldc);
+ void zsyr2k(const char* uplo, const char* trans, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const cuDoubleComplex* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* HER2K */
+ void cher2k_(const char* uplo, const char* trans, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const float* beta, cuComplex* c, const int* ldc);
+ void zher2k_(const char* uplo, const char* trans, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const double* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* HER2K with no underscore */
+ void cher2k(const char* uplo, const char* trans, const int* n, const int* k, const cuComplex* alpha, const cuComplex* a, const int* lda, const cuComplex* b, const int* ldb, const float* beta, cuComplex* c, const int* ldc);
+ void zher2k(const char* uplo, const char* trans, const int* n, const int* k, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, const cuDoubleComplex* b, const int* ldb, const double* beta, cuDoubleComplex* c, const int* ldc);
+
+ /* TRMM */
+ void strmm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const float* alpha, const float* a, const int* lda, float* b, const int* ldb);
+ void dtrmm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const double* alpha, const double* a, const int* lda, double* b, const int* ldb);
+ void ctrmm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, cuComplex* b, const int* ldb);
+ void ztrmm_(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, cuDoubleComplex* b, const int* ldb);
+
+ void strmm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const float* alpha, const float* a, const int* lda, float* b, const int* ldb);
+ void dtrmm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const double* alpha, const double* a, const int* lda, double* b, const int* ldb);
+ void ctrmm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuComplex* alpha, const cuComplex* a, const int* lda, cuComplex* b, const int* ldb);
+ void ztrmm(const char* side, const char* uplo, const char* transa, const char* diag, const int* m, const int* n, const cuDoubleComplex* alpha, const cuDoubleComplex* a, const int* lda, cuDoubleComplex* b, const int* ldb);
+
+ #if defined(__cplusplus)
+ }
+ #endif /* __cplusplus */
+
+ #endif /* !defined(NVBLAS_H_) */
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/lib/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/lib/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (175 Bytes).
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cublas/lib/libnvblas.so.12 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c2a58dc54154208392301d0fe3d53a120e4c1ebeab9e80ce91fe9948baeadc9
+ size 757496
infer_4_37_2/lib/python3.10/site-packages/nvidia/cuda_runtime/include/cooperative_groups/details/info.h ADDED
@@ -0,0 +1,344 @@
+ /* Copyright 1993-2021 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * The source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+
+
+ #ifndef _CG_INFO_H_
+ #define _CG_INFO_H_
+ /*
+ ** Define: _CG_VERSION
+ */
+ #define _CG_VERSION 1000
+
+ /*
+ ** Define: _CG_ABI_VERSION
+ */
+ #ifndef _CG_ABI_VERSION
+ # define _CG_ABI_VERSION 1
+ #endif
+
+ /*
+ ** Define: _CG_ABI_EXPERIMENTAL
+ ** Desc: If enabled, sets all features enabled (ABI-breaking or experimental)
+ */
+ #if defined(_CG_ABI_EXPERIMENTAL)
+ #endif
+
+ #define _CG_CONCAT_INNER(x, y) x ## y
+ #define _CG_CONCAT_OUTER(x, y) _CG_CONCAT_INNER(x, y)
+ #define _CG_NAMESPACE _CG_CONCAT_OUTER(__v, _CG_ABI_VERSION)
+
+ #define _CG_BEGIN_NAMESPACE \
+ namespace cooperative_groups { namespace _CG_NAMESPACE {
+ #define _CG_END_NAMESPACE \
+ }; using namespace _CG_NAMESPACE; };
80
+
81
+ #if (defined(__cplusplus) && (__cplusplus >= 201103L)) || (defined(_MSC_VER) && (_MSC_VER >= 1900))
82
+ # define _CG_CPP11_FEATURES
83
+ #endif
84
+
85
+ #if !defined(_CG_QUALIFIER)
86
+ # define _CG_QUALIFIER __forceinline__ __device__
87
+ #endif
88
+ #if !defined(_CG_STATIC_QUALIFIER)
89
+ # define _CG_STATIC_QUALIFIER static __forceinline__ __device__
90
+ #endif
91
+ #if !defined(_CG_CONSTEXPR_QUALIFIER)
92
+ # if defined(_CG_CPP11_FEATURES)
93
+ # define _CG_CONSTEXPR_QUALIFIER constexpr __forceinline__ __device__
94
+ # else
95
+ # define _CG_CONSTEXPR_QUALIFIER _CG_QUALIFIER
96
+ # endif
97
+ #endif
98
+ #if !defined(_CG_STATIC_CONSTEXPR_QUALIFIER)
99
+ # if defined(_CG_CPP11_FEATURES)
100
+ # define _CG_STATIC_CONSTEXPR_QUALIFIER static constexpr __forceinline__ __device__
101
+ # else
102
+ # define _CG_STATIC_CONSTEXPR_QUALIFIER _CG_STATIC_QUALIFIER
103
+ # endif
104
+ #endif
105
+
106
+ #if defined(_MSC_VER)
107
+ # define _CG_DEPRECATED __declspec(deprecated)
108
+ #else
109
+ # define _CG_DEPRECATED __attribute__((deprecated))
110
+ #endif
111
+
112
+ #if (__CUDA_ARCH__ >= 600) || !defined(__CUDA_ARCH__)
113
+ # define _CG_HAS_GRID_GROUP
114
+ #endif
115
+ #if (__CUDA_ARCH__ >= 600) || !defined(__CUDA_ARCH__)
116
+ # define _CG_HAS_MULTI_GRID_GROUP
117
+ #endif
118
+ #if (__CUDA_ARCH__ >= 700) || !defined(__CUDA_ARCH__)
119
+ # define _CG_HAS_MATCH_COLLECTIVE
120
+ #endif
121
+
122
+ #if (__CUDA_ARCH__ >= 800) || !defined(__CUDA_ARCH__) && (defined(__NVCC__) || defined(__CUDACC_RTC__))
123
+ # define _CG_HAS_OP_REDUX
124
+ #endif
125
+
126
+ #if ((__CUDA_ARCH__ >= 800) || !defined(__CUDA_ARCH__)) && !defined(_CG_USER_PROVIDED_SHARED_MEMORY)
127
+ # define _CG_HAS_RESERVED_SHARED
128
+ #endif
129
+
130
+ #if ((__CUDA_ARCH__ >= 900) || !defined(__CUDA_ARCH__)) && \
131
+ (defined(__NVCC__) || defined(__CUDACC_RTC__) || defined(_CG_CLUSTER_INTRINSICS_AVAILABLE)) && \
132
+ defined(_CG_CPP11_FEATURES)
133
+ # define _CG_HAS_CLUSTER_GROUP
134
+ #endif
135
+
136
+ #if (__CUDA_ARCH__ >= 900) || !defined(__CUDA_ARCH__)
137
+ # define _CG_HAS_INSTR_ELECT
138
+ #endif
139
+
140
+ // Has __half and __half2
141
+ // Only usable if you include the cuda_fp16.h extension, and
142
+ // _before_ including cooperative_groups.h
143
+ #ifdef __CUDA_FP16_TYPES_EXIST__
144
+ # define _CG_HAS_FP16_COLLECTIVE
145
+ #endif
146
+
147
+ // Include libcu++ where supported.
148
+ #if defined(_CG_CPP11_FEATURES) && !defined(__QNX__) && !defined(__ibmxl__) && \
149
+ (defined(__NVCC__) || defined(__CUDACC_RTC__)) && \
150
+ (defined(__x86_64__) || defined(__aarch64__) || defined(__ppc64__)|| defined(_M_X64) || defined(_M_ARM64)) && \
151
+ (defined(_MSC_VER) || defined(__GNUC__) || defined(__clang__))
152
+ # define _CG_USE_CUDA_STL
153
+ #else
154
+ # define _CG_USE_OWN_TRAITS
155
+ #endif
156
+
157
+ #if defined(_CG_USE_CUDA_STL) && (!defined(__CUDA_ARCH__) || \
158
+ ((!defined(_MSC_VER) && __CUDA_ARCH__ >= 600) || (defined(_MSC_VER) && __CUDA_ARCH__ >= 700)))
159
+ # define _CG_HAS_STL_ATOMICS
160
+ #endif
161
+
162
+ #ifdef _CG_CPP11_FEATURES
163
+ // Use cuda::std:: for type_traits
164
+ # if defined(_CG_USE_CUDA_STL)
165
+ # define _CG_STL_NAMESPACE cuda::std
166
+ # include <cuda/std/type_traits>
167
+ // Use CG's implementation of type traits
168
+ # else
169
+ # define _CG_STL_NAMESPACE cooperative_groups::details::templates
170
+ # endif
171
+ #endif
172
+
173
+ #ifdef _CG_CPP11_FEATURES
174
+ # define _CG_STATIC_CONST_DECL static constexpr
175
+ # define _CG_CONST_DECL constexpr
176
+ #else
177
+ # define _CG_STATIC_CONST_DECL static const
178
+ # define _CG_CONST_DECL const
179
+ #endif
180
+
181
+ #if (defined(_MSC_VER) && !defined(_WIN64)) || defined(__arm__)
182
+ # define _CG_ASM_PTR_CONSTRAINT "r"
183
+ #else
184
+ # define _CG_ASM_PTR_CONSTRAINT "l"
185
+ #endif
186
+
187
+ /*
188
+ ** Define: CG_DEBUG
189
+ ** What: Enables various runtime safety checks
190
+ */
191
+ #if defined(__CUDACC_DEBUG__) && defined(CG_DEBUG) && !defined(NDEBUG)
192
+ # define _CG_DEBUG
193
+ #endif
194
+
195
+ #if defined(_CG_DEBUG)
196
+ # include <assert.h>
197
+ # define _CG_ASSERT(x) assert((x));
198
+ # define _CG_ABORT() assert(0);
199
+ #else
200
+ # define _CG_ASSERT(x)
201
+ # define _CG_ABORT() __trap();
202
+ #endif
203
+
204
+ _CG_BEGIN_NAMESPACE
205
+
206
+ namespace details {
207
+ _CG_STATIC_CONST_DECL unsigned int default_max_block_size = 1024;
208
+
209
+ #if defined(_CG_CPP11_FEATURES) && !defined(_CG_USE_CUDA_STL)
210
+ namespace templates {
211
+
212
+ /**
213
+ * Integral constants
214
+ **/
215
+ template <typename Ty, Ty Val>
216
+ struct integral_constant {
217
+ static constexpr Ty value = Val;
218
+ typedef Ty type;
219
+
220
+ _CG_QUALIFIER constexpr operator type() const noexcept { return value; }
221
+ _CG_QUALIFIER constexpr type operator()() const noexcept { return value; }
222
+ };
223
+
224
+ typedef integral_constant<bool, true> true_type;
225
+ typedef integral_constant<bool, false> false_type;
226
+
227
+ /**
228
+ * CV Qualifiers
229
+ **/
230
+ template <class Ty> struct is_lvalue_reference : public details::templates::false_type {};
231
+ template <class Ty> struct is_lvalue_reference<Ty&> : public details::templates::true_type {};
232
+
233
+ template <class Ty> struct remove_reference {typedef Ty type;};
234
+ template <class Ty> struct remove_reference<Ty&> {typedef Ty type;};
235
+ template <class Ty> struct remove_reference<Ty&&> {typedef Ty type;};
236
+
237
+ template <class Ty>
238
+ using remove_reference_t = typename details::templates::remove_reference<Ty>::type;
239
+
240
+ template <class Ty> struct remove_const {typedef Ty type;};
241
+ template <class Ty> struct remove_const<const Ty> {typedef Ty type;};
242
+
243
+ template <class Ty> struct remove_volatile {typedef Ty type;};
244
+ template <class Ty> struct remove_volatile<volatile Ty> {typedef Ty type;};
245
+
246
+ template <class Ty> struct remove_cv {typedef typename details::templates::remove_volatile<typename details::templates::remove_const<Ty>::type>::type type;};
247
+
248
+ template <class Ty>
249
+ using remove_cv_t = typename details::templates::remove_cv<Ty>::type;
250
+
251
+ template <class Ty>
252
+ _CG_QUALIFIER Ty&& forward(remove_reference_t<Ty> &t) noexcept {
253
+ return static_cast<Ty&&>(t);
254
+ }
255
+
256
+ template <class Ty>
257
+ _CG_QUALIFIER Ty&& forward(remove_reference_t<Ty> &&t) noexcept {
258
+ static_assert(!details::templates::is_lvalue_reference<Ty>::value, "Forwarding an rvalue as an lvalue is not allowed.");
259
+ return static_cast<Ty&&>(t);
260
+ }
261
+
262
+ /**
263
+ * is_integral
264
+ **/
265
+ template <class Ty> struct _is_integral : public details::templates::false_type {};
266
+ template <> struct _is_integral<bool> : public details::templates::true_type {};
267
+ template <> struct _is_integral<char> : public details::templates::true_type {};
268
+ template <> struct _is_integral<unsigned char> : public details::templates::true_type {};
269
+ template <> struct _is_integral<short> : public details::templates::true_type {};
270
+ template <> struct _is_integral<unsigned short> : public details::templates::true_type {};
271
+ template <> struct _is_integral<int> : public details::templates::true_type {};
272
+ template <> struct _is_integral<unsigned int> : public details::templates::true_type {};
273
+ template <> struct _is_integral<long> : public details::templates::true_type {};
274
+ template <> struct _is_integral<long long> : public details::templates::true_type {};
275
+ template <> struct _is_integral<unsigned long> : public details::templates::true_type {};
276
+ template <> struct _is_integral<unsigned long long> : public details::templates::true_type {};
277
+ //Vector type support?
278
+
279
+ template <typename Ty>
280
+ struct is_integral : public details::templates::_is_integral<typename details::templates::remove_cv<Ty>::type> {};
281
+
282
+ /**
283
+ * is_floating_point
284
+ **/
285
+ template <class Ty> struct _is_floating_point : public details::templates::false_type {};
286
+ template <> struct _is_floating_point<float> : public details::templates::true_type {};
287
+ template <> struct _is_floating_point<double> : public details::templates::true_type {};
288
+ template <> struct _is_floating_point<long double> : public details::templates::true_type {};
289
+ # ifdef __CUDA_FP16_TYPES_EXIST__
290
+ template <> struct _is_floating_point<__half> : public details::templates::true_type {};
291
+ template <> struct _is_floating_point<__half2> : public details::templates::true_type {};
292
+ # endif
293
+ //Vector type support?
294
+
295
+ template <typename Ty>
296
+ struct is_floating_point : public details::templates::_is_floating_point<typename details::templates::remove_cv<Ty>::type> {};
297
+
298
+ template <class T>
299
+ struct is_arithmetic : details::templates::integral_constant<
300
+ bool,
301
+ details::templates::is_integral<T>::value ||
302
+ details::templates::is_floating_point<T>::value> {};
303
+
304
+ template <typename Ty, bool = details::templates::is_arithmetic<Ty>::value>
305
+ struct _is_unsigned : details::templates::integral_constant<bool, Ty(0) < Ty(-1)> {};
306
+
307
+ template <typename Ty>
308
+ struct _is_unsigned<Ty,false> : details::templates::false_type {};
309
+
310
+ template <typename Ty>
311
+ struct is_unsigned : _is_unsigned<typename details::templates::remove_cv<Ty>::type> {};
312
+
313
+ template <typename Ty> struct _is_pointer : public details::templates::false_type {};
314
+ template <typename Ty> struct _is_pointer<Ty*> : public details::templates::true_type {};
315
+
316
+ template <typename Ty>
317
+ struct is_pointer : _is_pointer<typename details::templates::remove_cv<Ty>::type> {};
318
+
319
+ /**
320
+ * programmatic type traits
321
+ **/
322
+ template<bool B, class Ty = void>
323
+ struct enable_if {};
324
+
325
+ template<class Ty>
326
+ struct enable_if<true, Ty> { typedef Ty type; };
327
+
328
+ template<bool Cond, typename Ty = void>
329
+ using enable_if_t = typename details::templates::enable_if<Cond, Ty>::type;
330
+
331
+ template<class Ty1, class Ty2>
332
+ struct is_same : details::templates::false_type {};
333
+
334
+ template<class Ty>
335
+ struct is_same<Ty, Ty> : details::templates::true_type {};
336
+
337
+ } // templates
338
+ #endif // _CG_CPP11_FEATURES
339
+
340
+ } // details
341
+ _CG_END_NAMESPACE
342
+
343
+
344
+ #endif // _CG_INFO_H_
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (170 Bytes). View file
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (178 Bytes). View file
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cudalibxt.h ADDED
@@ -0,0 +1,97 @@
+ /* Copyright 2013,2014 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * The source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ /*!
+ * \file cudalibxt.h
+ * \brief Public header file for the NVIDIA library multi-GPU support structures
+ */
+
+ #ifndef _CUDA_LIB_XT_H_
+ #define _CUDA_LIB_XT_H_
+ #include <cuda_runtime.h>
+
+ #define CUDA_XT_DESCRIPTOR_VERSION 0x01000000 // This is added to CUDART_VERSION
+
+ enum cudaXtCopyType_t {
+ LIB_XT_COPY_HOST_TO_DEVICE,
+ LIB_XT_COPY_DEVICE_TO_HOST,
+ LIB_XT_COPY_DEVICE_TO_DEVICE
+ } ;
+ typedef enum cudaXtCopyType_t cudaLibXtCopyType;
+
+ enum libFormat_t {
+ LIB_FORMAT_CUFFT = 0x0,
+ LIB_FORMAT_UNDEFINED = 0x1
+ };
+
+ typedef enum libFormat_t libFormat;
+
+ #define MAX_CUDA_DESCRIPTOR_GPUS 64
+
+ struct cudaXtDesc_t{
+ int version; //descriptor version
+ int nGPUs; //number of GPUs
+ int GPUs[MAX_CUDA_DESCRIPTOR_GPUS]; //array of device IDs
+ void *data[MAX_CUDA_DESCRIPTOR_GPUS]; //array of pointers to data, one per GPU
+ size_t size[MAX_CUDA_DESCRIPTOR_GPUS]; //array of data sizes, one per GPU
+ void *cudaXtState; //opaque CUDA utility structure
+ };
+ typedef struct cudaXtDesc_t cudaXtDesc;
+
+ struct cudaLibXtDesc_t{
+ int version; //descriptor version
+ cudaXtDesc *descriptor; //multi-GPU memory descriptor
+ libFormat library; //which library recognizes the format
+ int subFormat; //library specific enumerator of sub formats
+ void *libDescriptor; //library specific descriptor e.g. FFT transform plan object
+ };
+ typedef struct cudaLibXtDesc_t cudaLibXtDesc;
+
+
+ #endif
+
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cufft.h ADDED
@@ -0,0 +1,334 @@
+ /* Copyright 2005-2021 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * The source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ /*!
+ * \file cufft.h
+ * \brief Public header file for the NVIDIA CUDA FFT library (CUFFT)
+ */
+
+ #ifndef _CUFFT_H_
+ #define _CUFFT_H_
+
+
+ #include "cuComplex.h"
+ #include "driver_types.h"
+ #include "library_types.h"
+
+ #ifndef CUFFTAPI
+ #ifdef _WIN32
+ #define CUFFTAPI __stdcall
+ #elif __GNUC__ >= 4
+ #define CUFFTAPI __attribute__ ((visibility ("default")))
+ #else
+ #define CUFFTAPI
+ #endif
+ #endif
+
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+
+ #define CUFFT_VER_MAJOR 11
+ #define CUFFT_VER_MINOR 2
+ #define CUFFT_VER_PATCH 1
+ #define CUFFT_VER_BUILD 3
+
+ #define CUFFT_VERSION 11201
+
+ // CUFFT API function return values
+ typedef enum cufftResult_t {
+ CUFFT_SUCCESS = 0x0,
+ CUFFT_INVALID_PLAN = 0x1,
+ CUFFT_ALLOC_FAILED = 0x2,
+ CUFFT_INVALID_TYPE = 0x3,
+ CUFFT_INVALID_VALUE = 0x4,
+ CUFFT_INTERNAL_ERROR = 0x5,
+ CUFFT_EXEC_FAILED = 0x6,
+ CUFFT_SETUP_FAILED = 0x7,
+ CUFFT_INVALID_SIZE = 0x8,
+ CUFFT_UNALIGNED_DATA = 0x9,
+ CUFFT_INCOMPLETE_PARAMETER_LIST = 0xA,
+ CUFFT_INVALID_DEVICE = 0xB,
+ CUFFT_PARSE_ERROR = 0xC,
+ CUFFT_NO_WORKSPACE = 0xD,
+ CUFFT_NOT_IMPLEMENTED = 0xE,
+ CUFFT_LICENSE_ERROR = 0x0F,
+ CUFFT_NOT_SUPPORTED = 0x10
+
+ } cufftResult;
+
+ #define MAX_CUFFT_ERROR 0x11
+
+
+ // CUFFT defines and supports the following data types
+
+
+ // cufftReal is a single-precision, floating-point real data type.
+ // cufftDoubleReal is a double-precision, real data type.
+ typedef float cufftReal;
+ typedef double cufftDoubleReal;
+
+ // cufftComplex is a single-precision, floating-point complex data type that
+ // consists of interleaved real and imaginary components.
+ // cufftDoubleComplex is the double-precision equivalent.
+ typedef cuComplex cufftComplex;
+ typedef cuDoubleComplex cufftDoubleComplex;
+
+ // CUFFT transform directions
+ #define CUFFT_FORWARD -1 // Forward FFT
+ #define CUFFT_INVERSE 1 // Inverse FFT
+
+ // CUFFT supports the following transform types
+ typedef enum cufftType_t {
+ CUFFT_R2C = 0x2a, // Real to Complex (interleaved)
+ CUFFT_C2R = 0x2c, // Complex (interleaved) to Real
+ CUFFT_C2C = 0x29, // Complex to Complex, interleaved
+ CUFFT_D2Z = 0x6a, // Double to Double-Complex
+ CUFFT_Z2D = 0x6c, // Double-Complex to Double
+ CUFFT_Z2Z = 0x69 // Double-Complex to Double-Complex
+ } cufftType;
+
+ // CUFFT supports the following data layouts
+ typedef enum cufftCompatibility_t {
+ CUFFT_COMPATIBILITY_FFTW_PADDING = 0x01 // The default value
+ } cufftCompatibility;
+
+ #define CUFFT_COMPATIBILITY_DEFAULT CUFFT_COMPATIBILITY_FFTW_PADDING
+
+ //
+ // structure definition used by the shim between old and new APIs
+ //
+ #define MAX_SHIM_RANK 3
+
+ // cufftHandle is a handle type used to store and access CUFFT plans.
+ typedef int cufftHandle;
+
+
+ cufftResult CUFFTAPI cufftPlan1d(cufftHandle *plan,
+ int nx,
+ cufftType type,
+ int batch);
+
+ cufftResult CUFFTAPI cufftPlan2d(cufftHandle *plan,
+ int nx, int ny,
+ cufftType type);
+
+ cufftResult CUFFTAPI cufftPlan3d(cufftHandle *plan,
+ int nx, int ny, int nz,
+ cufftType type);
+
+ cufftResult CUFFTAPI cufftPlanMany(cufftHandle *plan,
+ int rank,
+ int *n,
+ int *inembed, int istride, int idist,
+ int *onembed, int ostride, int odist,
+ cufftType type,
+ int batch);
+
+ cufftResult CUFFTAPI cufftMakePlan1d(cufftHandle plan,
+ int nx,
+ cufftType type,
+ int batch,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftMakePlan2d(cufftHandle plan,
+ int nx, int ny,
+ cufftType type,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftMakePlan3d(cufftHandle plan,
+ int nx, int ny, int nz,
+ cufftType type,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftMakePlanMany(cufftHandle plan,
+ int rank,
+ int *n,
+ int *inembed, int istride, int idist,
+ int *onembed, int ostride, int odist,
+ cufftType type,
+ int batch,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftMakePlanMany64(cufftHandle plan,
+ int rank,
+ long long int *n,
+ long long int *inembed,
+ long long int istride,
+ long long int idist,
+ long long int *onembed,
+ long long int ostride, long long int odist,
+ cufftType type,
+ long long int batch,
+ size_t * workSize);
+
+ cufftResult CUFFTAPI cufftGetSizeMany64(cufftHandle plan,
+ int rank,
+ long long int *n,
+ long long int *inembed,
+ long long int istride, long long int idist,
+ long long int *onembed,
+ long long int ostride, long long int odist,
+ cufftType type,
+ long long int batch,
+ size_t *workSize);
+
+
+
+
+ cufftResult CUFFTAPI cufftEstimate1d(int nx,
+ cufftType type,
+ int batch,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftEstimate2d(int nx, int ny,
+ cufftType type,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftEstimate3d(int nx, int ny, int nz,
+ cufftType type,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftEstimateMany(int rank,
+ int *n,
+ int *inembed, int istride, int idist,
+ int *onembed, int ostride, int odist,
+ cufftType type,
+ int batch,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftCreate(cufftHandle * handle);
+
+ cufftResult CUFFTAPI cufftGetSize1d(cufftHandle handle,
+ int nx,
+ cufftType type,
+ int batch,
+ size_t *workSize );
+
+ cufftResult CUFFTAPI cufftGetSize2d(cufftHandle handle,
+ int nx, int ny,
+ cufftType type,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftGetSize3d(cufftHandle handle,
+ int nx, int ny, int nz,
+ cufftType type,
+ size_t *workSize);
+
+ cufftResult CUFFTAPI cufftGetSizeMany(cufftHandle handle,
+ int rank, int *n,
+ int *inembed, int istride, int idist,
+ int *onembed, int ostride, int odist,
+ cufftType type, int batch, size_t *workArea);
+
+ cufftResult CUFFTAPI cufftGetSize(cufftHandle handle, size_t *workSize);
+
+ cufftResult CUFFTAPI cufftSetWorkArea(cufftHandle plan, void *workArea);
+
+ cufftResult CUFFTAPI cufftSetAutoAllocation(cufftHandle plan, int autoAllocate);
+
+ cufftResult CUFFTAPI cufftExecC2C(cufftHandle plan,
+ cufftComplex *idata,
+ cufftComplex *odata,
+ int direction);
+
+ cufftResult CUFFTAPI cufftExecR2C(cufftHandle plan,
+ cufftReal *idata,
+ cufftComplex *odata);
+
+ cufftResult CUFFTAPI cufftExecC2R(cufftHandle plan,
+ cufftComplex *idata,
+ cufftReal *odata);
+
+ cufftResult CUFFTAPI cufftExecZ2Z(cufftHandle plan,
+ cufftDoubleComplex *idata,
+ cufftDoubleComplex *odata,
+ int direction);
+
+ cufftResult CUFFTAPI cufftExecD2Z(cufftHandle plan,
+ cufftDoubleReal *idata,
+ cufftDoubleComplex *odata);
+
+ cufftResult CUFFTAPI cufftExecZ2D(cufftHandle plan,
+ cufftDoubleComplex *idata,
+ cufftDoubleReal *odata);
+
+
+ // utility functions
+ cufftResult CUFFTAPI cufftSetStream(cufftHandle plan,
+ cudaStream_t stream);
+
+ cufftResult CUFFTAPI cufftDestroy(cufftHandle plan);
+
+ cufftResult CUFFTAPI cufftGetVersion(int *version);
+
+ cufftResult CUFFTAPI cufftGetProperty(libraryPropertyType type,
+ int *value);
+
+ //
+ // Set/Get PlanProperty APIs configures per-plan behavior
+ //
+ typedef enum cufftProperty_t {
+ NVFFT_PLAN_PROPERTY_INT64_PATIENT_JIT = 0x1
+ } cufftProperty;
+
+ cufftResult CUFFTAPI cufftSetPlanPropertyInt64(cufftHandle plan,
+ cufftProperty property,
+ const long long int inputValueInt);
+
+ cufftResult CUFFTAPI cufftGetPlanPropertyInt64(cufftHandle plan,
+ cufftProperty property,
+ long long int* returnPtrValue);
+
+ cufftResult CUFFTAPI cufftResetPlanProperty(cufftHandle plan, cufftProperty property);
+
+ #ifdef __cplusplus
+ }
+ #endif
+
+ #endif /* _CUFFT_H_ */
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cufftXt.h ADDED
@@ -0,0 +1,259 @@
+
+ /* Copyright 2005-2021 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * The source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ /*!
+ * \file cufftXt.h
+ * \brief Public header file for the NVIDIA CUDA FFT library (CUFFT)
+ */
+
+ #ifndef _CUFFTXT_H_
+ #define _CUFFTXT_H_
+ #include "cudalibxt.h"
+ #include "cufft.h"
+
+
+ #ifndef CUFFTAPI
+ #ifdef _WIN32
+ #define CUFFTAPI __stdcall
+ #else
+ #define CUFFTAPI
+ #endif
+ #endif
+
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+
+ //
+ // cufftXtSubFormat identifies the data layout of
+ // a memory descriptor owned by cufft.
+ // note that multi GPU cufft does not yet support out-of-place transforms
+ //
+
+ typedef enum cufftXtSubFormat_t {
+ CUFFT_XT_FORMAT_INPUT = 0x00, //by default input is in linear order across GPUs
+ CUFFT_XT_FORMAT_OUTPUT = 0x01, //by default output is in scrambled order depending on transform
+ CUFFT_XT_FORMAT_INPLACE = 0x02, //by default inplace is input order, which is linear across GPUs
+ CUFFT_XT_FORMAT_INPLACE_SHUFFLED = 0x03, //shuffled output order after execution of the transform
+ CUFFT_XT_FORMAT_1D_INPUT_SHUFFLED = 0x04, //shuffled input order prior to execution of 1D transforms
+ CUFFT_XT_FORMAT_DISTRIBUTED_INPUT = 0x05,
+ CUFFT_XT_FORMAT_DISTRIBUTED_OUTPUT = 0x06,
+ CUFFT_FORMAT_UNDEFINED = 0x07
+ } cufftXtSubFormat;
+
+ //
+ // cufftXtCopyType specifies the type of copy for cufftXtMemcpy
+ //
+ typedef enum cufftXtCopyType_t {
+ CUFFT_COPY_HOST_TO_DEVICE = 0x00,
+ CUFFT_COPY_DEVICE_TO_HOST = 0x01,
+ CUFFT_COPY_DEVICE_TO_DEVICE = 0x02,
+ CUFFT_COPY_UNDEFINED = 0x03
+ } cufftXtCopyType;
+
+ //
+ // cufftXtQueryType specifies the type of query for cufftXtQueryPlan
+ //
+ typedef enum cufftXtQueryType_t {
+ CUFFT_QUERY_1D_FACTORS = 0x00,
+ CUFFT_QUERY_UNDEFINED = 0x01
+ } cufftXtQueryType;
+
+ typedef struct cufftXt1dFactors_t {
+ long long int size;
+ long long int stringCount;
+ long long int stringLength;
+ long long int substringLength;
+ long long int factor1;
+ long long int factor2;
+ long long int stringMask;
+ long long int substringMask;
+ long long int factor1Mask;
+ long long int factor2Mask;
+ int stringShift;
+ int substringShift;
+ int factor1Shift;
+ int factor2Shift;
+ } cufftXt1dFactors;
+
+ //
+ // cufftXtWorkAreaPolicy specifies policy for cufftXtSetWorkAreaPolicy
+ //
+ typedef enum cufftXtWorkAreaPolicy_t {
+ CUFFT_WORKAREA_MINIMAL = 0, /* maximum reduction */
+ CUFFT_WORKAREA_USER = 1, /* use workSize parameter as limit */
+ CUFFT_WORKAREA_PERFORMANCE = 2, /* default - 1x overhead or more, maximum performance */
+ } cufftXtWorkAreaPolicy;
+
+ // multi-GPU routines
+ cufftResult CUFFTAPI cufftXtSetGPUs(cufftHandle handle, int nGPUs, int *whichGPUs);
+
+ cufftResult CUFFTAPI cufftXtMalloc(cufftHandle plan,
+ cudaLibXtDesc ** descriptor,
+ cufftXtSubFormat format);
+
+ cufftResult CUFFTAPI cufftXtMemcpy(cufftHandle plan,
+ void *dstPointer,
+ void *srcPointer,
+ cufftXtCopyType type);
+
+ cufftResult CUFFTAPI cufftXtFree(cudaLibXtDesc *descriptor);
147
+
148
+ cufftResult CUFFTAPI cufftXtSetWorkArea(cufftHandle plan, void **workArea);
149
+
150
+ cufftResult CUFFTAPI cufftXtExecDescriptorC2C(cufftHandle plan,
151
+ cudaLibXtDesc *input,
152
+ cudaLibXtDesc *output,
153
+ int direction);
154
+
155
+ cufftResult CUFFTAPI cufftXtExecDescriptorR2C(cufftHandle plan,
156
+ cudaLibXtDesc *input,
157
+ cudaLibXtDesc *output);
158
+
159
+ cufftResult CUFFTAPI cufftXtExecDescriptorC2R(cufftHandle plan,
160
+ cudaLibXtDesc *input,
161
+ cudaLibXtDesc *output);
162
+
163
+ cufftResult CUFFTAPI cufftXtExecDescriptorZ2Z(cufftHandle plan,
164
+ cudaLibXtDesc *input,
165
+ cudaLibXtDesc *output,
166
+ int direction);
167
+
168
+ cufftResult CUFFTAPI cufftXtExecDescriptorD2Z(cufftHandle plan,
169
+ cudaLibXtDesc *input,
170
+ cudaLibXtDesc *output);
171
+
172
+ cufftResult CUFFTAPI cufftXtExecDescriptorZ2D(cufftHandle plan,
173
+ cudaLibXtDesc *input,
174
+ cudaLibXtDesc *output);
175
+
176
+ // Utility functions
177
+
178
+ cufftResult CUFFTAPI cufftXtQueryPlan(cufftHandle plan, void *queryStruct, cufftXtQueryType queryType);
179
+
180
+
181
+ // callbacks
182
+
183
+
184
+ typedef enum cufftXtCallbackType_t {
185
+ CUFFT_CB_LD_COMPLEX = 0x0,
186
+ CUFFT_CB_LD_COMPLEX_DOUBLE = 0x1,
187
+ CUFFT_CB_LD_REAL = 0x2,
188
+ CUFFT_CB_LD_REAL_DOUBLE = 0x3,
189
+ CUFFT_CB_ST_COMPLEX = 0x4,
190
+ CUFFT_CB_ST_COMPLEX_DOUBLE = 0x5,
191
+ CUFFT_CB_ST_REAL = 0x6,
192
+ CUFFT_CB_ST_REAL_DOUBLE = 0x7,
193
+ CUFFT_CB_UNDEFINED = 0x8
194
+
195
+ } cufftXtCallbackType;
196
+
197
+ typedef cufftComplex (*cufftCallbackLoadC)(void *dataIn, size_t offset, void *callerInfo, void *sharedPointer);
198
+ typedef cufftDoubleComplex (*cufftCallbackLoadZ)(void *dataIn, size_t offset, void *callerInfo, void *sharedPointer);
199
+ typedef cufftReal (*cufftCallbackLoadR)(void *dataIn, size_t offset, void *callerInfo, void *sharedPointer);
200
+ typedef cufftDoubleReal(*cufftCallbackLoadD)(void *dataIn, size_t offset, void *callerInfo, void *sharedPointer);
201
+
202
+ typedef void (*cufftCallbackStoreC)(void *dataOut, size_t offset, cufftComplex element, void *callerInfo, void *sharedPointer);
203
+ typedef void (*cufftCallbackStoreZ)(void *dataOut, size_t offset, cufftDoubleComplex element, void *callerInfo, void *sharedPointer);
204
+ typedef void (*cufftCallbackStoreR)(void *dataOut, size_t offset, cufftReal element, void *callerInfo, void *sharedPointer);
205
+ typedef void (*cufftCallbackStoreD)(void *dataOut, size_t offset, cufftDoubleReal element, void *callerInfo, void *sharedPointer);
206
+
207
+
208
+ cufftResult CUFFTAPI cufftXtSetCallback(cufftHandle plan, void **callback_routine, cufftXtCallbackType cbType, void **caller_info);
209
+ cufftResult CUFFTAPI cufftXtClearCallback(cufftHandle plan, cufftXtCallbackType cbType);
210
+ cufftResult CUFFTAPI cufftXtSetCallbackSharedSize(cufftHandle plan, cufftXtCallbackType cbType, size_t sharedSize);
211
+
212
+ cufftResult CUFFTAPI cufftXtMakePlanMany(cufftHandle plan,
213
+ int rank,
214
+ long long int *n,
215
+ long long int *inembed,
216
+ long long int istride,
217
+ long long int idist,
218
+ cudaDataType inputtype,
219
+ long long int *onembed,
220
+ long long int ostride,
221
+ long long int odist,
222
+ cudaDataType outputtype,
223
+ long long int batch,
224
+ size_t *workSize,
225
+ cudaDataType executiontype);
226
+
227
+ cufftResult CUFFTAPI cufftXtGetSizeMany(cufftHandle plan,
228
+ int rank,
229
+ long long int *n,
230
+ long long int *inembed,
231
+ long long int istride,
232
+ long long int idist,
233
+ cudaDataType inputtype,
234
+ long long int *onembed,
235
+ long long int ostride,
236
+ long long int odist,
237
+ cudaDataType outputtype,
238
+ long long int batch,
239
+ size_t *workSize,
240
+ cudaDataType executiontype);
241
+
242
+
243
+ cufftResult CUFFTAPI cufftXtExec(cufftHandle plan,
244
+ void *input,
245
+ void *output,
246
+ int direction);
247
+
248
+ cufftResult CUFFTAPI cufftXtExecDescriptor(cufftHandle plan,
249
+ cudaLibXtDesc *input,
250
+ cudaLibXtDesc *output,
251
+ int direction);
252
+
253
+ cufftResult CUFFTAPI cufftXtSetWorkAreaPolicy(cufftHandle plan, cufftXtWorkAreaPolicy policy, size_t *workSize);
254
+
255
+ #ifdef __cplusplus
256
+ }
257
+ #endif
258
+
259
+ #endif
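The `cufftXtExec*` entry points added above follow cuFFT's standard transform semantics. As a host-side sketch (NumPy standing in for the GPU library, purely to illustrate the math): the forward direction uses the negative-exponent convention, the inverse is unnormalized (a forward/inverse round trip scales the data by n), and real-to-complex transforms such as `cufftXtExecDescriptorR2C` keep only the n/2 + 1 non-redundant complex outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

fwd = np.fft.fft(x)            # what a C2C forward transform computes
inv = np.fft.ifft(fwd) * n     # cuFFT's inverse is unnormalized: n * normalized inverse

# Round trip differs from the input by the factor n (cuFFT never normalizes).
assert np.allclose(inv, x * n)

# R2C (R2C/D2Z) stores only the non-redundant half: n//2 + 1 complex values.
r = rng.standard_normal(n)
half = np.fft.rfft(r)
assert half.size == n // 2 + 1
```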
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/include/cufftw.h ADDED
@@ -0,0 +1,465 @@
+
+ /* Copyright 2005-2014 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * The source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. THEY ARE
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ /*!
+ * \file cufftw.h
+ * \brief Public header file for the NVIDIA CUDA FFTW library (CUFFTW)
+ */
+
+ #ifndef _CUFFTW_H_
+ #define _CUFFTW_H_
+
+
+ #include <stdio.h>
+ #include "cufft.h"
+
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+
+ // Transform direction
+ #define FFTW_FORWARD -1
+ #define FFTW_INVERSE 1
+ #define FFTW_BACKWARD 1
+
+ // Planner flags
+ #define FFTW_ESTIMATE 0x01
+ #define FFTW_MEASURE 0x02
+ #define FFTW_PATIENT 0x03
+ #define FFTW_EXHAUSTIVE 0x04
+ #define FFTW_WISDOM_ONLY 0x05
+
+ // Algorithm restriction flags
+ #define FFTW_DESTROY_INPUT 0x08
+ #define FFTW_PRESERVE_INPUT 0x0C
+ #define FFTW_UNALIGNED 0x10
+
+ // CUFFTW defines and supports the following data types
+
+ // note if complex.h has been included we use the C99 complex types
+ #if !defined(FFTW_NO_Complex) && defined(_Complex_I) && defined (complex)
+ typedef double _Complex fftw_complex;
+ typedef float _Complex fftwf_complex;
+ #else
+ typedef double fftw_complex[2];
+ typedef float fftwf_complex[2];
+ #endif
+
+ typedef void *fftw_plan;
+
+ typedef void *fftwf_plan;
+
+ typedef struct {
+ int n;
+ int is;
+ int os;
+ } fftw_iodim;
+
+ typedef fftw_iodim fftwf_iodim;
+
+ typedef struct {
+ ptrdiff_t n;
+ ptrdiff_t is;
+ ptrdiff_t os;
+ } fftw_iodim64;
+
+ typedef fftw_iodim64 fftwf_iodim64;
+
+ // CUFFTW defines and supports the following double precision APIs
+
+ fftw_plan CUFFTAPI fftw_plan_dft_1d(int n,
+ fftw_complex *in,
+ fftw_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_2d(int n0,
+ int n1,
+ fftw_complex *in,
+ fftw_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_3d(int n0,
+ int n1,
+ int n2,
+ fftw_complex *in,
+ fftw_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft(int rank,
+ const int *n,
+ fftw_complex *in,
+ fftw_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_r2c_1d(int n,
+ double *in,
+ fftw_complex *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_r2c_2d(int n0,
+ int n1,
+ double *in,
+ fftw_complex *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_r2c_3d(int n0,
+ int n1,
+ int n2,
+ double *in,
+ fftw_complex *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_r2c(int rank,
+ const int *n,
+ double *in,
+ fftw_complex *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_c2r_1d(int n,
+ fftw_complex *in,
+ double *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_c2r_2d(int n0,
+ int n1,
+ fftw_complex *in,
+ double *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_c2r_3d(int n0,
+ int n1,
+ int n2,
+ fftw_complex *in,
+ double *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_dft_c2r(int rank,
+ const int *n,
+ fftw_complex *in,
+ double *out,
+ unsigned flags);
+
+
+ fftw_plan CUFFTAPI fftw_plan_many_dft(int rank,
+ const int *n,
+ int batch,
+ fftw_complex *in,
+ const int *inembed, int istride, int idist,
+ fftw_complex *out,
+ const int *onembed, int ostride, int odist,
+ int sign, unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_many_dft_r2c(int rank,
+ const int *n,
+ int batch,
+ double *in,
+ const int *inembed, int istride, int idist,
+ fftw_complex *out,
+ const int *onembed, int ostride, int odist,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_many_dft_c2r(int rank,
+ const int *n,
+ int batch,
+ fftw_complex *in,
+ const int *inembed, int istride, int idist,
+ double *out,
+ const int *onembed, int ostride, int odist,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_guru_dft(int rank, const fftw_iodim *dims,
+ int batch_rank, const fftw_iodim *batch_dims,
+ fftw_complex *in, fftw_complex *out,
+ int sign, unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_guru_dft_r2c(int rank, const fftw_iodim *dims,
+ int batch_rank, const fftw_iodim *batch_dims,
+ double *in, fftw_complex *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_guru_dft_c2r(int rank, const fftw_iodim *dims,
+ int batch_rank, const fftw_iodim *batch_dims,
+ fftw_complex *in, double *out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_guru64_dft(int rank, const fftw_iodim64* dims,
+ int batch_rank, const fftw_iodim64* batch_dims,
+ fftw_complex* in, fftw_complex* out,
+ int sign, unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_guru64_dft_r2c(int rank, const fftw_iodim64* dims,
+ int batch_rank, const fftw_iodim64* batch_dims,
+ double* in, fftw_complex* out,
+ unsigned flags);
+
+ fftw_plan CUFFTAPI fftw_plan_guru64_dft_c2r(int rank, const fftw_iodim64* dims,
+ int batch_rank, const fftw_iodim64* batch_dims,
+ fftw_complex* in, double* out,
+ unsigned flags);
+
+ void CUFFTAPI fftw_execute(const fftw_plan plan);
+
+ void CUFFTAPI fftw_execute_dft(const fftw_plan plan,
+ fftw_complex *idata,
+ fftw_complex *odata);
+
+ void CUFFTAPI fftw_execute_dft_r2c(const fftw_plan plan,
+ double *idata,
+ fftw_complex *odata);
+
+ void CUFFTAPI fftw_execute_dft_c2r(const fftw_plan plan,
+ fftw_complex *idata,
+ double *odata);
+
+ // CUFFTW defines and supports the following single precision APIs
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_1d(int n,
+ fftwf_complex *in,
+ fftwf_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_2d(int n0,
+ int n1,
+ fftwf_complex *in,
+ fftwf_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_3d(int n0,
+ int n1,
+ int n2,
+ fftwf_complex *in,
+ fftwf_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft(int rank,
+ const int *n,
+ fftwf_complex *in,
+ fftwf_complex *out,
+ int sign,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_r2c_1d(int n,
+ float *in,
+ fftwf_complex *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_r2c_2d(int n0,
+ int n1,
+ float *in,
+ fftwf_complex *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_r2c_3d(int n0,
+ int n1,
+ int n2,
+ float *in,
+ fftwf_complex *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_r2c(int rank,
+ const int *n,
+ float *in,
+ fftwf_complex *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_c2r_1d(int n,
+ fftwf_complex *in,
+ float *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_c2r_2d(int n0,
+ int n1,
+ fftwf_complex *in,
+ float *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_c2r_3d(int n0,
+ int n1,
+ int n2,
+ fftwf_complex *in,
+ float *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_dft_c2r(int rank,
+ const int *n,
+ fftwf_complex *in,
+ float *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_many_dft(int rank,
+ const int *n,
+ int batch,
+ fftwf_complex *in,
+ const int *inembed, int istride, int idist,
+ fftwf_complex *out,
+ const int *onembed, int ostride, int odist,
+ int sign, unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_many_dft_r2c(int rank,
+ const int *n,
+ int batch,
+ float *in,
+ const int *inembed, int istride, int idist,
+ fftwf_complex *out,
+ const int *onembed, int ostride, int odist,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_many_dft_c2r(int rank,
+ const int *n,
+ int batch,
+ fftwf_complex *in,
+ const int *inembed, int istride, int idist,
+ float *out,
+ const int *onembed, int ostride, int odist,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_guru_dft(int rank, const fftwf_iodim *dims,
+ int batch_rank, const fftwf_iodim *batch_dims,
+ fftwf_complex *in, fftwf_complex *out,
+ int sign, unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_guru_dft_r2c(int rank, const fftwf_iodim *dims,
+ int batch_rank, const fftwf_iodim *batch_dims,
+ float *in, fftwf_complex *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_guru_dft_c2r(int rank, const fftwf_iodim *dims,
+ int batch_rank, const fftwf_iodim *batch_dims,
+ fftwf_complex *in, float *out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_guru64_dft(int rank, const fftwf_iodim64* dims,
+ int batch_rank, const fftwf_iodim64* batch_dims,
+ fftwf_complex* in, fftwf_complex* out,
+ int sign, unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_guru64_dft_r2c(int rank, const fftwf_iodim64* dims,
+ int batch_rank, const fftwf_iodim64* batch_dims,
+ float* in, fftwf_complex* out,
+ unsigned flags);
+
+ fftwf_plan CUFFTAPI fftwf_plan_guru64_dft_c2r(int rank, const fftwf_iodim64* dims,
+ int batch_rank, const fftwf_iodim64* batch_dims,
+ fftwf_complex* in, float* out,
+ unsigned flags);
+
+ void CUFFTAPI fftwf_execute(const fftw_plan plan);
+
+ void CUFFTAPI fftwf_execute_dft(const fftwf_plan plan,
+ fftwf_complex *idata,
+ fftwf_complex *odata);
+
+ void CUFFTAPI fftwf_execute_dft_r2c(const fftwf_plan plan,
+ float *idata,
+ fftwf_complex *odata);
+
+ void CUFFTAPI fftwf_execute_dft_c2r(const fftwf_plan plan,
+ fftwf_complex *idata,
+ float *odata);
+
+ #ifdef _WIN32
+ #define _CUFFTAPI(T) T CUFFTAPI
+ #else
+ #define _CUFFTAPI(T) CUFFTAPI T
+ #endif
+
+ // CUFFTW defines and supports the following support APIs
+
+ _CUFFTAPI(void *) fftw_malloc(size_t n);
+
+ _CUFFTAPI(void *) fftwf_malloc(size_t n);
+
+ void CUFFTAPI fftw_free(void *pointer);
+
+ void CUFFTAPI fftwf_free(void *pointer);
+
+ void CUFFTAPI fftw_export_wisdom_to_file(FILE * output_file);
+
+ void CUFFTAPI fftwf_export_wisdom_to_file(FILE * output_file);
+
+ int CUFFTAPI fftw_import_wisdom_from_file(FILE * input_file);
+
+ int CUFFTAPI fftwf_import_wisdom_from_file(FILE * input_file);
+
+ void CUFFTAPI fftw_print_plan(const fftw_plan plan);
+
+ void CUFFTAPI fftwf_print_plan(const fftwf_plan plan);
+
+ void CUFFTAPI fftw_set_timelimit(double seconds);
+
+ void CUFFTAPI fftwf_set_timelimit(double seconds);
+
+ double CUFFTAPI fftw_cost(const fftw_plan plan);
+
+ double CUFFTAPI fftwf_cost(const fftw_plan plan);
+
+ void CUFFTAPI fftw_flops(const fftw_plan plan, double *add, double *mul, double *fma);
+
+ void CUFFTAPI fftwf_flops(const fftw_plan plan, double *add, double *mul, double *fma);
+
+ void CUFFTAPI fftw_destroy_plan(fftw_plan plan);
+
+ void CUFFTAPI fftwf_destroy_plan(fftwf_plan plan);
+
+ void CUFFTAPI fftw_cleanup(void);
+
+ void CUFFTAPI fftwf_cleanup(void);
+
+ #ifdef __cplusplus
+ }
+ #endif
+
+ #endif /* _CUFFTW_H_ */
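The direction macros in this header encode FFTW's sign convention: `FFTW_FORWARD` (-1) and `FFTW_BACKWARD` (+1) select the sign of the exponent in exp(sign * 2πi * kn/N), and, as in FFTW, neither direction is normalized. A quick host-side sketch of that convention (NumPy as a reference, not the cuFFTW API itself):

```python
import numpy as np

def dft(x, sign):
    """Naive O(n^2) reference DFT with an explicit exponent sign."""
    n = len(x)
    k = np.arange(n)
    w = np.exp(sign * 2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.arange(4, dtype=complex)

# FFTW_FORWARD (-1) matches numpy.fft.fft's negative-exponent convention.
assert np.allclose(dft(x, -1), np.fft.fft(x))

# FFTW_BACKWARD (+1) is the unnormalized inverse: n * numpy.fft.ifft.
assert np.allclose(dft(x, +1), len(x) * np.fft.ifft(x))
```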
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/lib/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/lib/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (174 Bytes).
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cufft/lib/libcufftw.so.11 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2307a5acfccc9b40f989384038218cfead564cd43633701d30c893047e744f44
+ size 974888
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (173 Bytes).
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverDn.h ADDED
The diff for this file is too large to render.
 
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverMg.h ADDED
@@ -0,0 +1,318 @@
+ /*
+ * Copyright 2019 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * The Licensed Deliverables contained herein are PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and are being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and are provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ #if !defined(CUSOLVERMG_H_)
+ #define CUSOLVERMG_H_
+
+ #include <stdint.h>
+ #include "cusolverDn.h"
+
+ #if defined(__cplusplus)
+ extern "C" {
+ #endif /* __cplusplus */
+
+ struct cusolverMgContext;
+ typedef struct cusolverMgContext *cusolverMgHandle_t;
+
+ /**
+ * \brief This enum decides how 1D device Ids (or process ranks) get mapped to
+ * a 2D grid.
+ */
+ typedef enum {
+
+ CUDALIBMG_GRID_MAPPING_ROW_MAJOR = 1,
+ CUDALIBMG_GRID_MAPPING_COL_MAJOR = 0
+
+ } cusolverMgGridMapping_t;
+
+ /** \brief Opaque structure of the distributed grid */
+ typedef void *cudaLibMgGrid_t;
+ /** \brief Opaque structure of the distributed matrix descriptor */
+ typedef void *cudaLibMgMatrixDesc_t;
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgCreate(cusolverMgHandle_t *handle);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgDestroy(cusolverMgHandle_t handle);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgDeviceSelect(
+ cusolverMgHandle_t handle,
+ int nbDevices,
+ int deviceId[]);
+
+ /**
+ * \brief Allocates resources related to the shared memory device grid.
+ * \param[out] grid the opaque data structure that holds the grid
+ * \param[in] numRowDevices number of devices in the row
+ * \param[in] numColDevices number of devices in the column
+ * \param[in] deviceId This array of size numRowDevices * numColDevices stores the
+ * device-ids of the 2D grid; each entry must correspond to a valid
+ * GPU or to -1 (denoting CPU).
+ * \param[in] mapping whether the 2D grid is in row/column major
+ * \returns the status code
+ */
+ cusolverStatus_t CUSOLVERAPI cusolverMgCreateDeviceGrid(
+ cudaLibMgGrid_t * grid,
+ int32_t numRowDevices,
+ int32_t numColDevices,
+ const int32_t deviceId[],
+ cusolverMgGridMapping_t mapping);
+
+ /**
+ * \brief Releases the allocated resources related to the distributed grid.
+ * \param[in] grid the opaque data structure that holds the distributed grid
+ * \returns the status code
+ */
+ cusolverStatus_t CUSOLVERAPI cusolverMgDestroyGrid(cudaLibMgGrid_t grid);
+
+ /**
+ * \brief Allocates resources related to the distributed matrix descriptor.
+ * \param[out] desc the opaque data structure that holds the descriptor
+ * \param[in] numRows number of total rows
+ * \param[in] numCols number of total columns
+ * \param[in] rowBlockSize row block size
+ * \param[in] colBlockSize column block size
+ * \param[in] dataType the data type of each element in cudaDataType
+ * \param[in] grid the opaque data structure of the distributed grid
+ * \returns the status code
+ */
+ cusolverStatus_t CUSOLVERAPI cusolverMgCreateMatrixDesc(
+ cudaLibMgMatrixDesc_t *desc,
+ int64_t numRows,
+ int64_t numCols,
+ int64_t rowBlockSize,
+ int64_t colBlockSize,
+ cudaDataType dataType,
+ const cudaLibMgGrid_t grid);
+
+ /**
+ * \brief Releases the allocated resources related to the distributed matrix
+ * descriptor.
+ * \param[in] desc the opaque data structure that holds the descriptor
+ * \returns the status code
+ */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverMgDestroyMatrixDesc(cudaLibMgMatrixDesc_t desc);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgSyevd_bufferSize(
+ cusolverMgHandle_t handle,
+ cusolverEigMode_t jobz,
+ cublasFillMode_t uplo,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ void * W,
+ cudaDataType dataTypeW,
+ cudaDataType computeType,
+ int64_t * lwork);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgSyevd(
+ cusolverMgHandle_t handle,
+ cusolverEigMode_t jobz,
+ cublasFillMode_t uplo,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ void * W,
+ cudaDataType dataTypeW,
+ cudaDataType computeType,
+ void * array_d_work[],
+ int64_t lwork,
+ int * info);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgGetrf_bufferSize(
+ cusolverMgHandle_t handle,
+ int M,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ int * array_d_IPIV[],
+ cudaDataType computeType,
+ int64_t * lwork);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgGetrf(
+ cusolverMgHandle_t handle,
+ int M,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ int * array_d_IPIV[],
+ cudaDataType computeType,
+ void * array_d_work[],
+ int64_t lwork,
+ int * info);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgGetrs_bufferSize(
+ cusolverMgHandle_t handle,
+ cublasOperation_t TRANS,
+ int N,
+ int NRHS,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ int * array_d_IPIV[],
+ void * array_d_B[],
+ int IB,
+ int JB,
+ cudaLibMgMatrixDesc_t descrB,
+ cudaDataType computeType,
+ int64_t * lwork);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgGetrs(
+ cusolverMgHandle_t handle,
+ cublasOperation_t TRANS,
+ int N,
+ int NRHS,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ int * array_d_IPIV[],
+ void * array_d_B[],
+ int IB,
+ int JB,
+ cudaLibMgMatrixDesc_t descrB,
+ cudaDataType computeType,
+ void * array_d_work[],
+ int64_t lwork,
+ int * info);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgPotrf_bufferSize(
+ cusolverMgHandle_t handle,
+ cublasFillMode_t uplo,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ cudaDataType computeType,
+ int64_t * lwork);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgPotrf(
+ cusolverMgHandle_t handle,
+ cublasFillMode_t uplo,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ cudaDataType computeType,
+ void * array_d_work[],
+ int64_t lwork,
+ int * h_info);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgPotrs_bufferSize(
+ cusolverMgHandle_t handle,
+ cublasFillMode_t uplo,
+ int n,
+ int nrhs,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ void * array_d_B[],
+ int IB,
+ int JB,
+ cudaLibMgMatrixDesc_t descrB,
+ cudaDataType computeType,
+ int64_t * lwork);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgPotrs(
+ cusolverMgHandle_t handle,
+ cublasFillMode_t uplo,
+ int n,
+ int nrhs,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ void * array_d_B[],
+ int IB,
+ int JB,
+ cudaLibMgMatrixDesc_t descrB,
+ cudaDataType computeType,
+ void * array_d_work[],
+ int64_t lwork,
+ int * h_info);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgPotri_bufferSize(
+ cusolverMgHandle_t handle,
+ cublasFillMode_t uplo,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ cudaDataType computeType,
+ int64_t * lwork);
+
+ cusolverStatus_t CUSOLVERAPI cusolverMgPotri(
+ cusolverMgHandle_t handle,
+ cublasFillMode_t uplo,
+ int N,
+ void * array_d_A[],
+ int IA,
+ int JA,
+ cudaLibMgMatrixDesc_t descrA,
+ cudaDataType computeType,
+ void * array_d_work[],
+ int64_t lwork,
+ int * h_info);
+
+ #if defined(__cplusplus)
+ }
+ #endif /* __cplusplus */
+
+ #endif // CUSOLVERMG_H_
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolverRf.h ADDED
@@ -0,0 +1,339 @@
+ /*
+ * Copyright 1993-2014 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ #if !defined(CUSOLVERRF_H_)
+ #define CUSOLVERRF_H_
+
+ #include "driver_types.h"
+ #include "cuComplex.h"
+ #include "cusolver_common.h"
+
+ #if defined(__cplusplus)
+ extern "C" {
+ #endif /* __cplusplus */
+
+ /* CUSOLVERRF mode */
+ typedef enum {
+ CUSOLVERRF_RESET_VALUES_FAST_MODE_OFF = 0, // default
+ CUSOLVERRF_RESET_VALUES_FAST_MODE_ON = 1
+ } cusolverRfResetValuesFastMode_t;
+
+ /* CUSOLVERRF matrix format */
+ typedef enum {
+ CUSOLVERRF_MATRIX_FORMAT_CSR = 0, // default
+ CUSOLVERRF_MATRIX_FORMAT_CSC = 1
+ } cusolverRfMatrixFormat_t;
+
+ /* CUSOLVERRF unit diagonal */
+ typedef enum {
+ CUSOLVERRF_UNIT_DIAGONAL_STORED_L = 0, // default
+ CUSOLVERRF_UNIT_DIAGONAL_STORED_U = 1,
+ CUSOLVERRF_UNIT_DIAGONAL_ASSUMED_L = 2,
+ CUSOLVERRF_UNIT_DIAGONAL_ASSUMED_U = 3
+ } cusolverRfUnitDiagonal_t;
+
+ /* CUSOLVERRF factorization algorithm */
+ typedef enum {
+ CUSOLVERRF_FACTORIZATION_ALG0 = 0, // default
+ CUSOLVERRF_FACTORIZATION_ALG1 = 1,
+ CUSOLVERRF_FACTORIZATION_ALG2 = 2,
+ } cusolverRfFactorization_t;
+
+ /* CUSOLVERRF triangular solve algorithm */
+ typedef enum {
+ CUSOLVERRF_TRIANGULAR_SOLVE_ALG1 = 1, // default
+ CUSOLVERRF_TRIANGULAR_SOLVE_ALG2 = 2,
+ CUSOLVERRF_TRIANGULAR_SOLVE_ALG3 = 3
+ } cusolverRfTriangularSolve_t;
+
+ /* CUSOLVERRF numeric boost report */
+ typedef enum {
+ CUSOLVERRF_NUMERIC_BOOST_NOT_USED = 0, // default
+ CUSOLVERRF_NUMERIC_BOOST_USED = 1
+ } cusolverRfNumericBoostReport_t;
+
+ /* Opaque structure holding CUSOLVERRF library common */
+ struct cusolverRfCommon;
+ typedef struct cusolverRfCommon* cusolverRfHandle_t;
+
+ /* CUSOLVERRF create (allocate memory) and destroy (free memory) in the handle
+ */
+ cusolverStatus_t CUSOLVERAPI cusolverRfCreate(cusolverRfHandle_t* handle);
+ cusolverStatus_t CUSOLVERAPI cusolverRfDestroy(cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF set and get input format */
+ cusolverStatus_t CUSOLVERAPI cusolverRfGetMatrixFormat(
+ cusolverRfHandle_t handle,
+ cusolverRfMatrixFormat_t* format,
+ cusolverRfUnitDiagonal_t* diag);
+
+ cusolverStatus_t CUSOLVERAPI cusolverRfSetMatrixFormat(
+ cusolverRfHandle_t handle,
+ cusolverRfMatrixFormat_t format,
+ cusolverRfUnitDiagonal_t diag);
+
+ /* CUSOLVERRF set and get numeric properties */
+ cusolverStatus_t CUSOLVERAPI cusolverRfSetNumericProperties(
+ cusolverRfHandle_t handle,
+ double zero,
+ double boost);
+
+ cusolverStatus_t CUSOLVERAPI cusolverRfGetNumericProperties(
+ cusolverRfHandle_t handle,
+ double* zero,
+ double* boost);
+
+ cusolverStatus_t CUSOLVERAPI cusolverRfGetNumericBoostReport(
+ cusolverRfHandle_t handle,
+ cusolverRfNumericBoostReport_t* report);
+
+ /* CUSOLVERRF choose the triangular solve algorithm */
+ cusolverStatus_t CUSOLVERAPI cusolverRfSetAlgs(
+ cusolverRfHandle_t handle,
+ cusolverRfFactorization_t factAlg,
+ cusolverRfTriangularSolve_t solveAlg);
+
+ cusolverStatus_t CUSOLVERAPI cusolverRfGetAlgs(
+ cusolverRfHandle_t handle,
+ cusolverRfFactorization_t* factAlg,
+ cusolverRfTriangularSolve_t* solveAlg);
+
+ /* CUSOLVERRF set and get fast mode */
+ cusolverStatus_t CUSOLVERAPI cusolverRfGetResetValuesFastMode(
+ cusolverRfHandle_t handle,
+ cusolverRfResetValuesFastMode_t* fastMode);
+
+ cusolverStatus_t CUSOLVERAPI cusolverRfSetResetValuesFastMode(
+ cusolverRfHandle_t handle,
+ cusolverRfResetValuesFastMode_t fastMode);
+
+ /*** Non-Batched Routines ***/
+ /* CUSOLVERRF setup of internal structures from host or device memory */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfSetupHost(/* Input (in the host memory) */
+ int n,
+ int nnzA,
+ int* h_csrRowPtrA,
+ int* h_csrColIndA,
+ double* h_csrValA,
+ int nnzL,
+ int* h_csrRowPtrL,
+ int* h_csrColIndL,
+ double* h_csrValL,
+ int nnzU,
+ int* h_csrRowPtrU,
+ int* h_csrColIndU,
+ double* h_csrValU,
+ int* h_P,
+ int* h_Q,
+ /* Output */
+ cusolverRfHandle_t handle);
+
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfSetupDevice(/* Input (in the device memory) */
+ int n,
+ int nnzA,
+ int* csrRowPtrA,
+ int* csrColIndA,
+ double* csrValA,
+ int nnzL,
+ int* csrRowPtrL,
+ int* csrColIndL,
+ double* csrValL,
+ int nnzU,
+ int* csrRowPtrU,
+ int* csrColIndU,
+ double* csrValU,
+ int* P,
+ int* Q,
+ /* Output */
+ cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF update the matrix values (assuming the reordering, pivoting
+ and consequently the sparsity pattern of L and U did not change),
+ and zero out the remaining values. */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfResetValues(/* Input (in the device memory) */
+ int n,
+ int nnzA,
+ int* csrRowPtrA,
+ int* csrColIndA,
+ double* csrValA,
+ int* P,
+ int* Q,
+ /* Output */
+ cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF analysis (for parallelism) */
+ cusolverStatus_t CUSOLVERAPI cusolverRfAnalyze(cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF re-factorization (for parallelism) */
+ cusolverStatus_t CUSOLVERAPI cusolverRfRefactor(cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF extraction: Get L & U packed into a single matrix M */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfAccessBundledFactorsDevice(/* Input */
+ cusolverRfHandle_t handle,
+ /* Output (in the host memory) */
+ int* nnzM,
+ /* Output (in the device memory) */
+ int** Mp,
+ int** Mi,
+ double** Mx);
+
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfExtractBundledFactorsHost(/* Input */
+ cusolverRfHandle_t handle,
+ /* Output (in the host memory) */
+ int* h_nnzM,
+ int** h_Mp,
+ int** h_Mi,
+ double** h_Mx);
+
+ /* CUSOLVERRF extraction: Get L & U individually */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfExtractSplitFactorsHost(/* Input */
+ cusolverRfHandle_t handle,
+ /* Output (in the host memory) */
+ int* h_nnzL,
+ int** h_csrRowPtrL,
+ int** h_csrColIndL,
+ double** h_csrValL,
+ int* h_nnzU,
+ int** h_csrRowPtrU,
+ int** h_csrColIndU,
+ double** h_csrValU);
+
+ /* CUSOLVERRF (forward and backward triangular) solves */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfSolve(/* Input (in the device memory) */
+ cusolverRfHandle_t handle,
+ int* P,
+ int* Q,
+ int nrhs, // only nrhs=1 is supported
+ double* Temp, // of size ldt*nrhs (ldt>=n)
+ int ldt,
+ /* Input/Output (in the device memory) */
+ double* XF,
+ /* Input */
+ int ldxf);
+
+ /*** Batched Routines ***/
+ /* CUSOLVERRF-batch setup of internal structures from host */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfBatchSetupHost(/* Input (in the host memory)*/
+ int batchSize,
+ int n,
+ int nnzA,
+ int* h_csrRowPtrA,
+ int* h_csrColIndA,
+ double* h_csrValA_array[],
+ int nnzL,
+ int* h_csrRowPtrL,
+ int* h_csrColIndL,
+ double* h_csrValL,
+ int nnzU,
+ int* h_csrRowPtrU,
+ int* h_csrColIndU,
+ double* h_csrValU,
+ int* h_P,
+ int* h_Q,
+ /* Output (in the device memory) */
+ cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF-batch update the matrix values (assuming the reordering,
+ pivoting and consequently the sparsity pattern of L and U did not change),
+ and zero out the remaining values. */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfBatchResetValues(/* Input (in the device memory) */
+ int batchSize,
+ int n,
+ int nnzA,
+ int* csrRowPtrA,
+ int* csrColIndA,
+ double* csrValA_array[],
+ int* P,
+ int* Q,
+ /* Output */
+ cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF-batch analysis (for parallelism) */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfBatchAnalyze(cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF-batch re-factorization (for parallelism) */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfBatchRefactor(cusolverRfHandle_t handle);
+
+ /* CUSOLVERRF-batch (forward and backward triangular) solves */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfBatchSolve(/* Input (in the device memory) */
+ cusolverRfHandle_t handle,
+ int* P,
+ int* Q,
+ int nrhs, // only nrhs=1 is supported
+ double* Temp, // of size 2*batchSize*(n*nrhs)
+ int ldt, // only ldt=n is supported
+ /* Input/Output (in the device memory) */
+ double* XF_array[],
+ /* Input */
+ int ldxf);
+
+ /* CUSOLVERRF-batch obtain the position of zero pivot */
+ cusolverStatus_t CUSOLVERAPI
+ cusolverRfBatchZeroPivot(/* Input */
+ cusolverRfHandle_t handle,
+ /* Output (in the host memory) */
+ int* position);
+
+ #if defined(__cplusplus)
+ }
+ #endif /* __cplusplus */
+
+ #endif /* CUSOLVERRF_H_ */
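The setup and reset routines above all describe sparse matrices in CSR form (the `csrRowPtr`/`csrColInd`/`csrVal` triples, with `nnzA` nonzeros). A minimal, library-independent sketch of that layout, in plain Python (the array names mirror the header's parameters; `csr_to_dense` is an illustrative helper, not part of cuSOLVER):

```python
# CSR encoding of the 3x3 matrix
# [[10,  0,  2],
#  [ 0,  5,  0],
#  [ 1,  0,  7]]
csrRowPtr = [0, 2, 3, 5]     # row i occupies entries rowPtr[i]:rowPtr[i+1]
csrColInd = [0, 2, 1, 0, 2]  # column index of each stored value
csrVal = [10.0, 2.0, 5.0, 1.0, 7.0]
nnz = csrRowPtr[-1]          # number of nonzeros, the value passed as nnzA

def csr_to_dense(rowPtr, colInd, val, n):
    # Expand a CSR triple back into a dense row-major matrix.
    dense = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(rowPtr[i], rowPtr[i + 1]):
            dense[i][colInd[k]] = val[k]
    return dense
```

The same convention applies to the L and U factors (`csrRowPtrL`/`csrRowPtrU`, etc.) handed to `cusolverRfSetupHost`.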
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/include/cusolver_common.h ADDED
@@ -0,0 +1,261 @@
+ /*
+ * Copyright 2014 NVIDIA Corporation. All rights reserved.
+ *
+ * NOTICE TO LICENSEE:
+ *
+ * This source code and/or documentation ("Licensed Deliverables") are
+ * subject to NVIDIA intellectual property rights under U.S. and
+ * international Copyright laws.
+ *
+ * These Licensed Deliverables contained herein is PROPRIETARY and
+ * CONFIDENTIAL to NVIDIA and is being provided under the terms and
+ * conditions of a form of NVIDIA software license agreement by and
+ * between NVIDIA and Licensee ("License Agreement") or electronically
+ * accepted by Licensee. Notwithstanding any terms or conditions to
+ * the contrary in the License Agreement, reproduction or disclosure
+ * of the Licensed Deliverables to any third party without the express
+ * written consent of NVIDIA is prohibited.
+ *
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
+ * SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE. IT IS
+ * PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
+ * NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
+ * DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
+ * NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
+ * NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
+ * LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
+ * DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+ * WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+ * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THESE LICENSED DELIVERABLES.
+ *
+ * U.S. Government End Users. These Licensed Deliverables are a
+ * "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
+ * 1995), consisting of "commercial computer software" and "commercial
+ * computer software documentation" as such terms are used in 48
+ * C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
+ * only as a commercial end item. Consistent with 48 C.F.R.12.212 and
+ * 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
+ * U.S. Government End Users acquire the Licensed Deliverables with
+ * only those rights set forth herein.
+ *
+ * Any use of the Licensed Deliverables in individual and commercial
+ * software must include, in the user documentation and internal
+ * comments to the code, the above Disclaimer and U.S. Government End
+ * Users Notice.
+ */
+
+ #if !defined(CUSOLVER_COMMON_H_)
+ #define CUSOLVER_COMMON_H_
+
+ #include "library_types.h"
+
+ #ifndef CUSOLVERAPI
+ #ifdef _WIN32
+ #define CUSOLVERAPI __stdcall
+ #else
+ #define CUSOLVERAPI
+ #endif
+ #endif
+
+ #if defined(_MSC_VER)
+ typedef __int64 int64_t;
+ #else
+ #include <inttypes.h>
+ #endif
+
+ typedef int cusolver_int_t;
+
+ #define CUSOLVER_VER_MAJOR 11
+ #define CUSOLVER_VER_MINOR 6
+ #define CUSOLVER_VER_PATCH 1
+ #define CUSOLVER_VER_BUILD 9
+ #define CUSOLVER_VERSION \
+ (CUSOLVER_VER_MAJOR * 1000 + CUSOLVER_VER_MINOR * 100 + CUSOLVER_VER_PATCH)
+
+ //------------------------------------------------------------------------------
+
+ #if !defined(_MSC_VER)
+ #define CUSOLVER_CPP_VERSION __cplusplus
+ #elif _MSC_FULL_VER >= 190024210 // Visual Studio 2015 Update 3
+ #define CUSOLVER_CPP_VERSION _MSVC_LANG
+ #else
+ #define CUSOLVER_CPP_VERSION 0
+ #endif
+
+ //------------------------------------------------------------------------------
+
+ #if !defined(DISABLE_CUSOLVER_DEPRECATED)
+
+ #if CUSOLVER_CPP_VERSION >= 201402L
+
+ #define CUSOLVER_DEPRECATED(new_func) \
+ [[deprecated("please use " #new_func " instead")]]
+
+ #elif defined(_MSC_VER)
+
+ #define CUSOLVER_DEPRECATED(new_func) \
+ __declspec(deprecated("please use " #new_func " instead"))
+
+ #elif defined(__INTEL_COMPILER) || defined(__clang__) || \
+ (defined(__GNUC__) && \
+ (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5)))
+
+ #define CUSOLVER_DEPRECATED(new_func) \
+ __attribute__((deprecated("please use " #new_func " instead")))
+
+ #elif defined(__GNUC__) || defined(__xlc__)
+
+ #define CUSOLVER_DEPRECATED(new_func) __attribute__((deprecated))
+
+ #else
+
+ #define CUSOLVER_DEPRECATED(new_func)
+
+ #endif // defined(__cplusplus) && __cplusplus >= 201402L
+ //------------------------------------------------------------------------------
+
+ #if CUSOLVER_CPP_VERSION >= 201703L
+
+ #define CUSOLVER_DEPRECATED_ENUM(new_enum) \
+ [[deprecated("please use " #new_enum " instead")]]
+
+ #elif defined(__clang__) || \
+ (defined(__GNUC__) && __GNUC__ >= 6 && !defined(__PGI))
+
+ #define CUSOLVER_DEPRECATED_ENUM(new_enum) \
+ __attribute__((deprecated("please use " #new_enum " instead")))
+
+ #else
+
+ #define CUSOLVER_DEPRECATED_ENUM(new_enum)
+
+ #endif // defined(__cplusplus) && __cplusplus >= 201402L
+
+ #else // defined(DISABLE_CUSOLVER_DEPRECATED)
+
+ #define CUSOLVER_DEPRECATED(new_func)
+ #define CUSOLVER_DEPRECATED_ENUM(new_enum)
+
+ #endif // !defined(DISABLE_CUSOLVER_DEPRECATED)
+
+ #undef CUSOLVER_CPP_VERSION
+
+ #if defined(__cplusplus)
+ extern "C" {
+ #endif /* __cplusplus */
+
+ typedef enum {
+ CUSOLVER_STATUS_SUCCESS = 0,
+ CUSOLVER_STATUS_NOT_INITIALIZED = 1,
+ CUSOLVER_STATUS_ALLOC_FAILED = 2,
+ CUSOLVER_STATUS_INVALID_VALUE = 3,
+ CUSOLVER_STATUS_ARCH_MISMATCH = 4,
+ CUSOLVER_STATUS_MAPPING_ERROR = 5,
+ CUSOLVER_STATUS_EXECUTION_FAILED = 6,
+ CUSOLVER_STATUS_INTERNAL_ERROR = 7,
+ CUSOLVER_STATUS_MATRIX_TYPE_NOT_SUPPORTED = 8,
+ CUSOLVER_STATUS_NOT_SUPPORTED = 9,
+ CUSOLVER_STATUS_ZERO_PIVOT = 10,
+ CUSOLVER_STATUS_INVALID_LICENSE = 11,
+ CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED = 12,
+ CUSOLVER_STATUS_IRS_PARAMS_INVALID = 13,
+ CUSOLVER_STATUS_IRS_PARAMS_INVALID_PREC = 14,
+ CUSOLVER_STATUS_IRS_PARAMS_INVALID_REFINE = 15,
+ CUSOLVER_STATUS_IRS_PARAMS_INVALID_MAXITER = 16,
+ CUSOLVER_STATUS_IRS_INTERNAL_ERROR = 20,
+ CUSOLVER_STATUS_IRS_NOT_SUPPORTED = 21,
+ CUSOLVER_STATUS_IRS_OUT_OF_RANGE = 22,
+ CUSOLVER_STATUS_IRS_NRHS_NOT_SUPPORTED_FOR_REFINE_GMRES = 23,
+ CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED = 25,
+ CUSOLVER_STATUS_IRS_INFOS_NOT_DESTROYED = 26,
+ CUSOLVER_STATUS_IRS_MATRIX_SINGULAR = 30,
+ CUSOLVER_STATUS_INVALID_WORKSPACE = 31
+ } cusolverStatus_t;
+
+ typedef enum {
+ CUSOLVER_EIG_TYPE_1 = 1,
+ CUSOLVER_EIG_TYPE_2 = 2,
+ CUSOLVER_EIG_TYPE_3 = 3
+ } cusolverEigType_t;
+
+ typedef enum {
+ CUSOLVER_EIG_MODE_NOVECTOR = 0,
+ CUSOLVER_EIG_MODE_VECTOR = 1
+ } cusolverEigMode_t;
+
+ typedef enum {
+ CUSOLVER_EIG_RANGE_ALL = 1001,
+ CUSOLVER_EIG_RANGE_I = 1002,
+ CUSOLVER_EIG_RANGE_V = 1003,
+ } cusolverEigRange_t;
+
+ typedef enum {
+ CUSOLVER_INF_NORM = 104,
+ CUSOLVER_MAX_NORM = 105,
+ CUSOLVER_ONE_NORM = 106,
+ CUSOLVER_FRO_NORM = 107,
+ } cusolverNorm_t;
+
+ typedef enum {
+ CUSOLVER_IRS_REFINE_NOT_SET = 1100,
+ CUSOLVER_IRS_REFINE_NONE = 1101,
+ CUSOLVER_IRS_REFINE_CLASSICAL = 1102,
+ CUSOLVER_IRS_REFINE_CLASSICAL_GMRES = 1103,
+ CUSOLVER_IRS_REFINE_GMRES = 1104,
+ CUSOLVER_IRS_REFINE_GMRES_GMRES = 1105,
+ CUSOLVER_IRS_REFINE_GMRES_NOPCOND = 1106,
+
+ CUSOLVER_PREC_DD = 1150,
+ CUSOLVER_PREC_SS = 1151,
+ CUSOLVER_PREC_SHT = 1152,
+
+ } cusolverIRSRefinement_t;
+
+ typedef enum {
+ CUSOLVER_R_8I = 1201,
+ CUSOLVER_R_8U = 1202,
+ CUSOLVER_R_64F = 1203,
+ CUSOLVER_R_32F = 1204,
+ CUSOLVER_R_16F = 1205,
+ CUSOLVER_R_16BF = 1206,
+ CUSOLVER_R_TF32 = 1207,
+ CUSOLVER_R_AP = 1208,
+ CUSOLVER_C_8I = 1211,
+ CUSOLVER_C_8U = 1212,
+ CUSOLVER_C_64F = 1213,
+ CUSOLVER_C_32F = 1214,
+ CUSOLVER_C_16F = 1215,
+ CUSOLVER_C_16BF = 1216,
+ CUSOLVER_C_TF32 = 1217,
+ CUSOLVER_C_AP = 1218,
+ } cusolverPrecType_t;
+
+ typedef enum {
+ CUSOLVER_ALG_0 = 0, /* default algorithm */
+ CUSOLVER_ALG_1 = 1,
+ CUSOLVER_ALG_2 = 2
+ } cusolverAlgMode_t;
+
+ typedef enum {
+ CUBLAS_STOREV_COLUMNWISE = 0,
+ CUBLAS_STOREV_ROWWISE = 1
+ } cusolverStorevMode_t;
+
+ typedef enum {
+ CUBLAS_DIRECT_FORWARD = 0,
+ CUBLAS_DIRECT_BACKWARD = 1
+ } cusolverDirectMode_t;
+
+ cusolverStatus_t CUSOLVERAPI
+ cusolverGetProperty(libraryPropertyType type, int *value);
+
+ cusolverStatus_t CUSOLVERAPI cusolverGetVersion(int *version);
+
+ #if defined(__cplusplus)
+ }
+ #endif /* __cplusplus */
+
+ #endif // CUSOLVER_COMMON_H_
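The `CUSOLVER_VERSION` macro above packs the major/minor/patch components into a single integer (`major * 1000 + minor * 100 + patch`), which is also the value `cusolverGetVersion` reports. The arithmetic can be checked in plain Python (the `decode` helper is illustrative, not part of the header):

```python
MAJOR, MINOR, PATCH = 11, 6, 1  # values of the CUSOLVER_VER_* macros above

def encode(major, minor, patch):
    # Mirrors the CUSOLVER_VERSION macro.
    return major * 1000 + minor * 100 + patch

def decode(version):
    # Inverse of encode, valid while minor < 10 and patch < 100.
    major, rest = divmod(version, 1000)
    minor, patch = divmod(rest, 100)
    return major, minor, patch

print(encode(MAJOR, MINOR, PATCH))  # 11601
```

Note that `CUSOLVER_VER_BUILD` is not part of the packed value.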
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/lib/__init__.py ADDED
File without changes
infer_4_37_2/lib/python3.10/site-packages/nvidia/cusolver/lib/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (177 Bytes).
 
janus/lib/python3.10/site-packages/sympy/combinatorics/prufer.py ADDED
@@ -0,0 +1,435 @@
+ from sympy.core import Basic
+ from sympy.core.containers import Tuple
+ from sympy.tensor.array import Array
+ from sympy.core.sympify import _sympify
+ from sympy.utilities.iterables import flatten, iterable
+ from sympy.utilities.misc import as_int
+
+ from collections import defaultdict
+
+
+ class Prufer(Basic):
+ """
+ The Prufer correspondence is an algorithm that describes the
+ bijection between labeled trees and the Prufer code. A Prufer
+ code of a labeled tree is unique up to isomorphism and has
+ a length of n - 2.
+
+ Prufer sequences were first used by Heinz Prufer to give a
+ proof of Cayley's formula.
+
+ References
+ ==========
+
+ .. [1] https://mathworld.wolfram.com/LabeledTree.html
+
+ """
+ _prufer_repr = None
+ _tree_repr = None
+ _nodes = None
+ _rank = None
+
+ @property
+ def prufer_repr(self):
+ """Returns Prufer sequence for the Prufer object.
+
+ This sequence is found by removing the highest numbered vertex,
+ recording the node it was attached to, and continuing until only
+ two vertices remain. The Prufer sequence is the list of recorded nodes.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> Prufer([[0, 3], [1, 3], [2, 3], [3, 4], [4, 5]]).prufer_repr
+ [3, 3, 3, 4]
+ >>> Prufer([1, 0, 0]).prufer_repr
+ [1, 0, 0]
+
+ See Also
+ ========
+
+ to_prufer
+
+ """
+ if self._prufer_repr is None:
+ self._prufer_repr = self.to_prufer(self._tree_repr[:], self.nodes)
+ return self._prufer_repr
+
+ @property
+ def tree_repr(self):
+ """Returns the tree representation of the Prufer object.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> Prufer([[0, 3], [1, 3], [2, 3], [3, 4], [4, 5]]).tree_repr
+ [[0, 3], [1, 3], [2, 3], [3, 4], [4, 5]]
+ >>> Prufer([1, 0, 0]).tree_repr
+ [[1, 2], [0, 1], [0, 3], [0, 4]]
+
+ See Also
+ ========
+
+ to_tree
+
+ """
+ if self._tree_repr is None:
+ self._tree_repr = self.to_tree(self._prufer_repr[:])
+ return self._tree_repr
+
+ @property
+ def nodes(self):
+ """Returns the number of nodes in the tree.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> Prufer([[0, 3], [1, 3], [2, 3], [3, 4], [4, 5]]).nodes
+ 6
+ >>> Prufer([1, 0, 0]).nodes
+ 5
+
+ """
+ return self._nodes
+
+ @property
+ def rank(self):
+ """Returns the rank of the Prufer sequence.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> p = Prufer([[0, 3], [1, 3], [2, 3], [3, 4], [4, 5]])
+ >>> p.rank
+ 778
+ >>> p.next(1).rank
+ 779
+ >>> p.prev().rank
+ 777
+
+ See Also
+ ========
+
+ prufer_rank, next, prev, size
+
+ """
+ if self._rank is None:
+ self._rank = self.prufer_rank()
+ return self._rank
+
+ @property
+ def size(self):
+ """Return the number of possible trees of this Prufer object.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> Prufer([0]*4).size == Prufer([6]*4).size == 1296
+ True
+
+ See Also
+ ========
+
+ prufer_rank, rank, next, prev
+
+ """
+ return self.prev(self.rank).prev().rank + 1
+
+ @staticmethod
+ def to_prufer(tree, n):
+ """Return the Prufer sequence for a tree given as a list of edges where
+ ``n`` is the number of nodes in the tree.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> a = Prufer([[0, 1], [0, 2], [0, 3]])
+ >>> a.prufer_repr
+ [0, 0]
+ >>> Prufer.to_prufer([[0, 1], [0, 2], [0, 3]], 4)
+ [0, 0]
+
+ See Also
+ ========
+ prufer_repr: returns Prufer sequence of a Prufer object.
+
+ """
+ d = defaultdict(int)
+ L = []
+ for edge in tree:
+ # Increment the value of the corresponding
+ # node in the degree list as we encounter an
+ # edge involving it.
+ d[edge[0]] += 1
+ d[edge[1]] += 1
+ for i in range(n - 2):
+ # find the smallest leaf
+ for x in range(n):
+ if d[x] == 1:
+ break
+ # find the node it was connected to
+ y = None
+ for edge in tree:
+ if x == edge[0]:
+ y = edge[1]
+ elif x == edge[1]:
+ y = edge[0]
+ if y is not None:
+ break
+ # record and update
+ L.append(y)
+ for j in (x, y):
+ d[j] -= 1
+ if not d[j]:
+ d.pop(j)
+ tree.remove(edge)
+ return L
+
+ @staticmethod
+ def to_tree(prufer):
+ """Return the tree (as a list of edges) of the given Prufer sequence.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> a = Prufer([0, 2], 4)
+ >>> a.tree_repr
+ [[0, 1], [0, 2], [2, 3]]
+ >>> Prufer.to_tree([0, 2])
+ [[0, 1], [0, 2], [2, 3]]
+
+ References
+ ==========
+
+ .. [1] https://hamberg.no/erlend/posts/2010-11-06-prufer-sequence-compact-tree-representation.html
+
+ See Also
+ ========
+ tree_repr: returns tree representation of a Prufer object.
+
+ """
+ tree = []
+ last = []
+ n = len(prufer) + 2
+ d = defaultdict(lambda: 1)
+ for p in prufer:
+ d[p] += 1
+ for i in prufer:
+ for j in range(n):
+ # find the smallest leaf (degree = 1)
+ if d[j] == 1:
+ break
+ # (i, j) is the new edge that we append to the tree
+ # and remove from the degree dictionary
+ d[i] -= 1
+ d[j] -= 1
+ tree.append(sorted([i, j]))
+ last = [i for i in range(n) if d[i] == 1] or [0, 1]
+ tree.append(last)
+
+ return tree
+
+ @staticmethod
+ def edges(*runs):
+ """Return a list of edges and the number of nodes from the given runs
+ that connect nodes in an integer-labelled tree.
+
+ All node numbers will be shifted so that the minimum node is 0. It is
+ not a problem if edges are repeated in the runs; only unique edges are
+ returned. There is no assumption made about what the range of the node
+ labels should be, but all nodes from the smallest through the largest
+ must be present.
+
+ Examples
+ ========
+
+ >>> from sympy.combinatorics.prufer import Prufer
+ >>> Prufer.edges([1, 2, 3], [2, 4, 5]) # a T
+ ([[0, 1], [1, 2], [1, 3], [3, 4]], 5)
+
+ Duplicate edges are removed:
+
+ >>> Prufer.edges([0, 1, 2, 3], [1, 4, 5], [1, 4, 6]) # a K
+ ([[0, 1], [1, 2], [1, 4], [2, 3], [4, 5], [4, 6]], 7)
+
+ """
+ e = set()
+ nmin = runs[0][0]
+ for r in runs:
+ for i in range(len(r) - 1):
+ a, b = r[i: i + 2]
+ if b < a:
+ a, b = b, a
+ e.add((a, b))
+ rv = []
+ got = set()
+ nmin = nmax = None
+ for ei in e:
+ got.update(ei)
+ nmin = min(ei[0], nmin) if nmin is not None else ei[0]
+ nmax = max(ei[1], nmax) if nmax is not None else ei[1]
+ rv.append(list(ei))
+ missing = set(range(nmin, nmax + 1)) - got
+ if missing:
+ missing = [i + nmin for i in missing]
+ if len(missing) == 1:
+ msg = 'Node %s is missing.' % missing.pop()
+ else:
+ msg = 'Nodes %s are missing.' % sorted(missing)
+ raise ValueError(msg)
+ if nmin != 0:
+ for i, ei in enumerate(rv):
+ rv[i] = [n - nmin for n in ei]
+ nmax -= nmin
+ return sorted(rv), nmax + 1
+
293
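The run-to-edge normalization can be condensed into a few lines. This sketch (the `runs_to_edges` name is mine) covers only the happy path of `edges`: consecutive pairs in each run become undirected edges, duplicates collapse via a set, and labels are shifted so the smallest node is 0. It deliberately omits the missing-node check that the diffed method performs.

```python
def runs_to_edges(*runs):
    # Consecutive pairs in each run, stored in sorted (undirected) form.
    e = {tuple(sorted(p)) for r in runs for p in zip(r, r[1:])}
    lo = min(min(p) for p in e)          # smallest label, shifted to 0
    return sorted([a - lo, b - lo] for a, b in e)

print(runs_to_edges([1, 2, 3], [2, 4, 5]))  # -> [[0, 1], [1, 2], [1, 3], [3, 4]]
```

This matches the edge list in the first `Prufer.edges` doctest above (the doctest additionally returns the node count, here 5).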
+    def prufer_rank(self):
+        """Computes the rank of a Prufer sequence.
+
+        Examples
+        ========
+
+        >>> from sympy.combinatorics.prufer import Prufer
+        >>> a = Prufer([[0, 1], [0, 2], [0, 3]])
+        >>> a.prufer_rank()
+        0
+
+        See Also
+        ========
+
+        rank, next, prev, size
+
+        """
+        r = 0
+        p = 1
+        for i in range(self.nodes - 3, -1, -1):
+            r += p*self.prufer_repr[i]
+            p *= self.nodes
+        return r
+
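The rank is simply the sequence read as a base-`n` number, with the last entry as the least-significant digit. A standalone version of the same accumulation (the function name is mine), with the worked example `[1, 2, 1]` on 5 nodes: `1*5**2 + 2*5 + 1 = 36`, which is the rank the `prev` doctest below reports for that tree.

```python
def prufer_rank(seq, n):
    # Base-n positional value of the sequence; seq has length n - 2.
    r, p = 0, 1
    for i in range(n - 3, -1, -1):
        r += p * seq[i]   # last entry is the least-significant digit
        p *= n
    return r

print(prufer_rank([1, 2, 1], 5))  # 1*25 + 2*5 + 1 -> 36
```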
+    @classmethod
+    def unrank(self, rank, n):
+        """Finds the unranked Prufer sequence.
+
+        Examples
+        ========
+
+        >>> from sympy.combinatorics.prufer import Prufer
+        >>> Prufer.unrank(0, 4)
+        Prufer([0, 0])
+
+        """
+        n, rank = as_int(n), as_int(rank)
+        L = defaultdict(int)
+        for i in range(n - 3, -1, -1):
+            L[i] = rank % n
+            rank = (rank - L[i])//n
+        return Prufer([L[i] for i in range(len(L))])
+
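Unranking inverts the base-`n` ranking: peel digits off from the least-significant end. A standalone sketch (the function name is mine) that round-trips with the rank example above:

```python
def prufer_unrank(rank, n):
    # Decode rank into an (n - 2)-digit base-n sequence,
    # filling from the least-significant (last) position.
    seq = [0] * (n - 2)
    for i in range(n - 3, -1, -1):
        seq[i] = rank % n
        rank //= n
    return seq

print(prufer_unrank(36, 5))  # -> [1, 2, 1]
```

Rank 0 always unranks to the all-zeros sequence, i.e. the star centred at node 0, consistent with `Prufer.unrank(0, 4)` returning `Prufer([0, 0])`.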
+    def __new__(cls, *args, **kw_args):
+        """The constructor for the Prufer object.
+
+        Examples
+        ========
+
+        >>> from sympy.combinatorics.prufer import Prufer
+
+        A Prufer object can be constructed from a list of edges:
+
+        >>> a = Prufer([[0, 1], [0, 2], [0, 3]])
+        >>> a.prufer_repr
+        [0, 0]
+
+        If the number of nodes is given, no checking of the nodes will
+        be performed; it will be assumed that nodes 0 through n - 1 are
+        present:
+
+        >>> Prufer([[0, 1], [0, 2], [0, 3]], 4)
+        Prufer([[0, 1], [0, 2], [0, 3]], 4)
+
+        A Prufer object can be constructed from a Prufer sequence:
+
+        >>> b = Prufer([1, 3])
+        >>> b.tree_repr
+        [[0, 1], [1, 3], [2, 3]]
+
+        """
+        arg0 = Array(args[0]) if args[0] else Tuple()
+        args = (arg0,) + tuple(_sympify(arg) for arg in args[1:])
+        ret_obj = Basic.__new__(cls, *args, **kw_args)
+        args = [list(args[0])]
+        if args[0] and iterable(args[0][0]):
+            if not args[0][0]:
+                raise ValueError(
+                    'Prufer expects at least one edge in the tree.')
+            if len(args) > 1:
+                nnodes = args[1]
+            else:
+                nodes = set(flatten(args[0]))
+                nnodes = max(nodes) + 1
+                if nnodes != len(nodes):
+                    missing = set(range(nnodes)) - nodes
+                    if len(missing) == 1:
+                        msg = 'Node %s is missing.' % missing.pop()
+                    else:
+                        msg = 'Nodes %s are missing.' % sorted(missing)
+                    raise ValueError(msg)
+            ret_obj._tree_repr = [list(i) for i in args[0]]
+            ret_obj._nodes = nnodes
+        else:
+            ret_obj._prufer_repr = args[0]
+            ret_obj._nodes = len(ret_obj._prufer_repr) + 2
+        return ret_obj
+
+    def next(self, delta=1):
+        """Generates the Prufer sequence that is delta beyond the current one.
+
+        Examples
+        ========
+
+        >>> from sympy.combinatorics.prufer import Prufer
+        >>> a = Prufer([[0, 1], [0, 2], [0, 3]])
+        >>> b = a.next(1) # == a.next()
+        >>> b.tree_repr
+        [[0, 2], [0, 1], [1, 3]]
+        >>> b.rank
+        1
+
+        See Also
+        ========
+
+        prufer_rank, rank, prev, size
+
+        """
+        return Prufer.unrank(self.rank + delta, self.nodes)
+
+    def prev(self, delta=1):
+        """Generates the Prufer sequence that is -delta before the current one.
+
+        Examples
+        ========
+
+        >>> from sympy.combinatorics.prufer import Prufer
+        >>> a = Prufer([[0, 1], [1, 2], [2, 3], [1, 4]])
+        >>> a.rank
+        36
+        >>> b = a.prev()
+        >>> b
+        Prufer([1, 2, 0])
+        >>> b.rank
+        35
+
+        See Also
+        ========
+
+        prufer_rank, rank, next, size
+
+        """
+        return Prufer.unrank(self.rank - delta, self.nodes)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/_print_helpers.cpython-310.pyc ADDED — Binary file (2.32 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/add.cpython-310.pyc ADDED — Binary file (35.8 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/alphabets.cpython-310.pyc ADDED — Binary file (321 Bytes)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/assumptions.cpython-310.pyc ADDED — Binary file (18.7 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/assumptions_generated.cpython-310.pyc ADDED — Binary file (12.6 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/backend.cpython-310.pyc ADDED — Binary file (3.92 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/basic.cpython-310.pyc ADDED — Binary file (69.9 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/cache.cpython-310.pyc ADDED — Binary file (6.09 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/compatibility.cpython-310.pyc ADDED — Binary file (1.28 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/containers.cpython-310.pyc ADDED — Binary file (14.5 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/core.cpython-310.pyc ADDED — Binary file (1.04 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/coreerrors.cpython-310.pyc ADDED — Binary file (649 Bytes)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/facts.cpython-310.pyc ADDED — Binary file (16.7 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/intfunc.cpython-310.pyc ADDED — Binary file (11.6 kB)
janus/lib/python3.10/site-packages/sympy/core/__pycache__/kind.cpython-310.pyc ADDED — Binary file (11.7 kB)