cyd0806 committed (verified)
Commit 954dcd4 · 1 Parent(s): de25fc3

Upload apex-master/csrc/mlp_cuda.cu with huggingface_hub

Files changed (1)
  1. apex-master/csrc/mlp_cuda.cu +1678 -0
apex-master/csrc/mlp_cuda.cu ADDED
@@ -0,0 +1,1678 @@
1
+ #include <ATen/ATen.h>
2
+ #include <ATen/cuda/CUDAContext.h>
3
+ #include <assert.h>
4
+ #include <stdio.h>
5
+ #include <stdlib.h>
6
+ #include <string.h>
7
+ #include <torch/torch.h>
8
+
9
+ /* Includes, cuda */
10
+ #include <cublas_v2.h>
11
+ #include <cuda_runtime.h>
12
+
13
+ #if defined(CUBLAS_VERSION) && CUBLAS_VERSION >= 11000
14
+ // includes cublaslt
15
+ #include <cublasLt.h>
16
+ #endif
17
+ // constants for fused bias+relu kernel
18
+ #define BIAS_RELU_FW_NTHREADS 128 // forward number of thread per block
19
+ #define BIAS_RELU_BW_NTHREADS_X 32 // backward number of thread in feature dim
20
+ #define BIAS_RELU_BW_NTHREADS_Y 16 // backward number of thread in batch dim
21
+ #define BIAS_RELU_RED_PER_THREAD 16 // backward minimal reduction length per thread
22
+
23
+ // move to a header later on
24
+ #define ILP 4
25
+ template<typename T>
26
+ __host__ __device__ __forceinline__ bool is_aligned(T* p){
27
+ return ((uint64_t)p) % (ILP*sizeof(T)) == 0;
28
+ }
29
+
30
+ template<typename T>
31
+ __device__ __forceinline__ void load_store(T* dst, T* src, int dst_offset, int src_offset){
32
+ typedef typename std::aligned_storage<ILP*sizeof(T), ILP*alignof(T)>::type LT;
33
+ ((LT*)dst)[dst_offset] = ((LT*)src)[src_offset];
34
+ }
35
+ template<typename T>
36
+ __device__ __forceinline__ void load_store(T* dst, volatile T* src, int dst_offset, int src_offset){
37
+ typedef typename std::aligned_storage<ILP*sizeof(T), ILP*alignof(T)>::type LT;
38
+ ((LT*)dst)[dst_offset] = ((LT*)src)[src_offset];
39
+ }
40
+ template<typename T>
41
+ __device__ __forceinline__ void load_store(volatile T* dst, T* src, int dst_offset, int src_offset){
42
+ typedef typename std::aligned_storage<ILP*sizeof(T), ILP*alignof(T)>::type LT;
43
+ ((LT*)dst)[dst_offset] = ((LT*)src)[src_offset];
44
+ }
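+ // Note: load_store copies ILP consecutive elements of T through a single aligned_storage
+ // transaction, e.g. one 16-byte access for float (ILP*sizeof(float)) and one 8-byte access
+ // for at::Half. is_aligned() checks exactly that alignment, which is why the vectorized
+ // kernel paths below require it for their input/output pointers and need features % ILP == 0.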
45
+
46
+ // Keep ReLU in float only. When using half, cast to float before calling.
47
+ __device__ __inline__ float relu(float a) {
48
+ float retf = max(a, 0.f);
49
+ return (retf);
50
+ }
51
+
52
+ // Keep Sigmoid in float only. When using half, cast to float before calling.
53
+ __device__ __inline__ float sigmoid(float a) {
54
+ float retf = 1.f / (1.f + expf(-a));
55
+ return (retf);
56
+ }
57
+
58
+ // FP64 Wrapper around cublas GEMMEx
59
+ cublasStatus_t mlp_gemm(
60
+ cublasHandle_t handle,
61
+ cublasOperation_t transa,
62
+ cublasOperation_t transb,
63
+ int m,
64
+ int n,
65
+ int k,
66
+ float* alpha,
67
+ const double* A,
68
+ int lda,
69
+ const double* B,
70
+ int ldb,
71
+ const float* beta,
72
+ double* C,
73
+ int ldc) {
74
+ return cublasGemmEx(
75
+ handle,
76
+ transa,
77
+ transb,
78
+ m,
79
+ n,
80
+ k,
81
+ alpha,
82
+ A,
83
+ CUDA_R_64F,
84
+ lda,
85
+ B,
86
+ CUDA_R_64F,
87
+ ldb,
88
+ beta,
89
+ C,
90
+ CUDA_R_64F,
91
+ ldc,
92
+ CUDA_R_64F,
93
+ CUBLAS_GEMM_DEFAULT);
94
+ }
95
+
96
+ // FP32 Wrapper around cublas GEMMEx
97
+ cublasStatus_t mlp_gemm(
98
+ cublasHandle_t handle,
99
+ cublasOperation_t transa,
100
+ cublasOperation_t transb,
101
+ int m,
102
+ int n,
103
+ int k,
104
+ float* alpha,
105
+ const float* A,
106
+ int lda,
107
+ const float* B,
108
+ int ldb,
109
+ const float* beta,
110
+ float* C,
111
+ int ldc) {
112
+ return cublasGemmEx(
113
+ handle,
114
+ transa,
115
+ transb,
116
+ m,
117
+ n,
118
+ k,
119
+ alpha,
120
+ A,
121
+ CUDA_R_32F,
122
+ lda,
123
+ B,
124
+ CUDA_R_32F,
125
+ ldb,
126
+ beta,
127
+ C,
128
+ CUDA_R_32F,
129
+ ldc,
130
+ CUDA_R_32F,
131
+ CUBLAS_GEMM_DEFAULT);
132
+ }
133
+
134
+ // FP16 Tensor core wrapper around cublas GEMMEx
135
+ cublasStatus_t mlp_gemm(
136
+ cublasHandle_t handle,
137
+ cublasOperation_t transa,
138
+ cublasOperation_t transb,
139
+ int m,
140
+ int n,
141
+ int k,
142
+ float* alpha,
143
+ const at::Half* A,
144
+ int lda,
145
+ const at::Half* B,
146
+ int ldb,
147
+ float* beta,
148
+ at::Half* C,
149
+ int ldc) {
150
+ return cublasGemmEx(
151
+ handle,
152
+ transa,
153
+ transb,
154
+ m,
155
+ n,
156
+ k,
157
+ alpha,
158
+ A,
159
+ CUDA_R_16F,
160
+ lda,
161
+ B,
162
+ CUDA_R_16F,
163
+ ldb,
164
+ beta,
165
+ C,
166
+ CUDA_R_16F,
167
+ ldc,
168
+ CUDA_R_32F,
169
+ CUBLAS_GEMM_DEFAULT_TENSOR_OP);
170
+ }
171
+ #if defined(CUBLAS_VERSION) && CUBLAS_VERSION >= 11000
172
+ int mlp_gemm_lt(
173
+ cublasLtHandle_t ltHandle,
174
+ cublasOperation_t transa,
175
+ cublasOperation_t transb,
176
+ int m,
177
+ int n,
178
+ int k,
179
+ float *alpha, /* host pointer */
180
+ const at::Half* A,
181
+ int lda,
182
+ const at::Half* B,
183
+ int ldb,
184
+ float *beta, /* host pointer */
185
+ at::Half* C,
186
+ int ldc,
187
+ void *workspace,
188
+ size_t workspaceSize,
189
+ cudaStream_t stream,
190
+ bool use_bias,
191
+ bool use_relu,
192
+ const void* bias) {
193
+ cublasStatus_t status = CUBLAS_STATUS_SUCCESS;
194
+
195
+ cublasLtMatmulDescOpaque_t operationDesc = {};
196
+ cublasLtMatrixLayoutOpaque_t Adesc = {}, Bdesc = {}, Cdesc = {};
197
+ cublasLtMatmulPreferenceOpaque_t preference = {};
198
+
199
+ int returnedResults = 0;
200
+ cublasLtMatmulHeuristicResult_t heuristicResult = {};
201
+ cublasLtEpilogue_t epilogue = CUBLASLT_EPILOGUE_DEFAULT;
202
+
203
+ // Create operation descriptor; see cublasLtMatmulDescAttributes_t
204
+ // for details about defaults; here we just set the transforms for
205
+ // A and B.
206
+ status = cublasLtMatmulDescInit(&operationDesc, CUBLAS_COMPUTE_32F, CUDA_R_32F);
207
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
208
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_TRANSA, &transa, sizeof(transa));
209
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
210
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_TRANSB, &transb, sizeof(transb));
211
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
212
+
213
+ if (use_bias) {
214
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_BIAS_POINTER, &bias, sizeof(bias));
215
+ if (status != CUBLAS_STATUS_SUCCESS) {
216
+ goto CLEANUP;
217
+ }
218
+ if (use_relu) {
219
+ epilogue = CUBLASLT_EPILOGUE_RELU_BIAS;
220
+ } else {
221
+ epilogue = CUBLASLT_EPILOGUE_BIAS;
222
+ }
223
+ } else {
224
+ if (use_relu) {
225
+ epilogue = CUBLASLT_EPILOGUE_RELU;
226
+ }
227
+ }
228
+
229
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_EPILOGUE, &epilogue, sizeof(epilogue));
230
+ if (status != CUBLAS_STATUS_SUCCESS) {
231
+ goto CLEANUP;
232
+ }
233
+
234
+ // Create matrix descriptors. Not setting any extra attributes.
235
+ status = cublasLtMatrixLayoutInit(
236
+ &Adesc, CUDA_R_16F, transa == CUBLAS_OP_N ? m : k, transa == CUBLAS_OP_N ? k : m, lda);
237
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
238
+ status = cublasLtMatrixLayoutInit(
239
+ &Bdesc, CUDA_R_16F, transb == CUBLAS_OP_N ? k : n, transb == CUBLAS_OP_N ? n : k, ldb);
240
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
241
+ status = cublasLtMatrixLayoutInit(&Cdesc, CUDA_R_16F, m, n, ldc);
242
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
243
+
244
+ // Create preference handle; In general, extra attributes can be
245
+ // used here to disable tensor ops or to make sure algo selected
246
+ // will work with badly aligned A, B, C. However, for simplicity
247
+ // here we assume A,B,C are always well aligned (e.g., directly
248
+ // come from cudaMalloc)
249
+ status = cublasLtMatmulPreferenceInit(&preference);
250
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
251
+ status = cublasLtMatmulPreferenceSetAttribute(
252
+ &preference, CUBLASLT_MATMUL_PREF_MAX_WORKSPACE_BYTES, &workspaceSize, sizeof(workspaceSize));
253
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
254
+
255
+ // We just need the best available heuristic to try and run matmul.
256
+ // There is no guarantee that this will work. For example, if A is
257
+ // badly aligned, you can request more (e.g. 32) algos and try to
258
+ // run them one by one until something works.
259
+ status = cublasLtMatmulAlgoGetHeuristic(
260
+ ltHandle, &operationDesc, &Adesc, &Bdesc, &Cdesc, &Cdesc, &preference, 1, &heuristicResult, &returnedResults);
261
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
262
+
263
+ if (returnedResults == 0) {
264
+ status = CUBLAS_STATUS_NOT_SUPPORTED;
265
+ goto CLEANUP;
266
+ }
267
+ status = cublasLtMatmul(ltHandle,
268
+ &operationDesc,
269
+ alpha,
270
+ A,
271
+ &Adesc,
272
+ B,
273
+ &Bdesc,
274
+ beta,
275
+ C,
276
+ &Cdesc,
277
+ C,
278
+ &Cdesc,
279
+ &heuristicResult.algo,
280
+ workspace,
281
+ workspaceSize,
282
+ stream);
283
+
284
+ CLEANUP:
285
+ // Descriptors are no longer needed as all GPU work was already
286
+ // enqueued.
287
+ return status == CUBLAS_STATUS_SUCCESS ? 0 : 1;
288
+ }
289
+
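+ // The cublasLt epilogue path is implemented only for FP16 (above) and FP32 (below); the
+ // FP64 overload that follows always returns 1 ("not handled"), so mlp_fp falls back to the
+ // plain cublasGemmEx wrapper plus the separate bias/activation kernels for double inputs.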
290
+ int mlp_gemm_lt(
291
+ cublasLtHandle_t ltHandle,
292
+ cublasOperation_t transa,
293
+ cublasOperation_t transb,
294
+ int m,
295
+ int n,
296
+ int k,
297
+ float *alpha, /* host pointer */
298
+ const double* A,
299
+ int lda,
300
+ const double* B,
301
+ int ldb,
302
+ float *beta, /* host pointer */
303
+ double* C,
304
+ int ldc,
305
+ void *workspace,
306
+ size_t workspaceSize,
307
+ cudaStream_t stream,
308
+ bool use_bias,
309
+ bool use_relu,
310
+ const void* bias) {
311
+ return 1;
312
+ }
313
+
314
+ int mlp_gemm_lt(
315
+ cublasLtHandle_t ltHandle,
316
+ cublasOperation_t transa,
317
+ cublasOperation_t transb,
318
+ int m,
319
+ int n,
320
+ int k,
321
+ float *alpha, /* host pointer */
322
+ const float *A,
323
+ int lda,
324
+ const float *B,
325
+ int ldb,
326
+ float *beta, /* host pointer */
327
+ float *C,
328
+ int ldc,
329
+ void *workspace,
330
+ size_t workspaceSize,
331
+ cudaStream_t stream,
332
+ bool use_bias,
333
+ bool use_relu,
334
+ const void* bias) {
335
+ cublasStatus_t status = CUBLAS_STATUS_SUCCESS;
336
+
337
+ cublasLtMatmulDescOpaque_t operationDesc = {};
338
+ cublasLtMatrixLayoutOpaque_t Adesc = {}, Bdesc = {}, Cdesc = {};
339
+ cublasLtMatmulPreferenceOpaque_t preference = {};
340
+
341
+ int returnedResults = 0;
342
+ cublasLtMatmulHeuristicResult_t heuristicResult = {};
343
+ cublasLtEpilogue_t epilogue = CUBLASLT_EPILOGUE_DEFAULT;
344
+
345
+ // Create operation descriptor; see cublasLtMatmulDescAttributes_t
346
+ // for details about defaults; here we just set the transforms for
347
+ // A and B.
348
+ status = cublasLtMatmulDescInit(&operationDesc, CUBLAS_COMPUTE_32F, CUDA_R_32F);
349
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
350
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_TRANSA, &transa, sizeof(transa));
351
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
352
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_TRANSB, &transb, sizeof(transb));
353
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
354
+
355
+ if (use_bias) {
356
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_BIAS_POINTER, &bias, sizeof(bias));
357
+ if (status != CUBLAS_STATUS_SUCCESS) {
358
+ goto CLEANUP;
359
+ }
360
+ if (use_relu) {
361
+ epilogue = CUBLASLT_EPILOGUE_RELU_BIAS;
362
+ } else {
363
+ epilogue = CUBLASLT_EPILOGUE_BIAS;
364
+ }
365
+ } else {
366
+ if (use_relu) {
367
+ epilogue = CUBLASLT_EPILOGUE_RELU;
368
+ }
369
+ }
370
+
371
+ status = cublasLtMatmulDescSetAttribute(&operationDesc, CUBLASLT_MATMUL_DESC_EPILOGUE, &epilogue, sizeof(epilogue));
372
+ if (status != CUBLAS_STATUS_SUCCESS) {
373
+ goto CLEANUP;
374
+ }
375
+
376
+ // Create matrix descriptors. Not setting any extra attributes.
377
+ status = cublasLtMatrixLayoutInit(
378
+ &Adesc, CUDA_R_32F, transa == CUBLAS_OP_N ? m : k, transa == CUBLAS_OP_N ? k : m, lda);
379
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
380
+ status = cublasLtMatrixLayoutInit(
381
+ &Bdesc, CUDA_R_32F, transb == CUBLAS_OP_N ? k : n, transb == CUBLAS_OP_N ? n : k, ldb);
382
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
383
+ status = cublasLtMatrixLayoutInit(&Cdesc, CUDA_R_32F, m, n, ldc);
384
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
385
+
386
+ // Create preference handle; In general, extra attributes can be
387
+ // used here to disable tensor ops or to make sure algo selected
388
+ // will work with badly aligned A, B, C. However, for simplicity
389
+ // here we assume A,B,C are always well aligned (e.g., directly
390
+ // come from cudaMalloc)
391
+ status = cublasLtMatmulPreferenceInit(&preference);
392
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
393
+ status = cublasLtMatmulPreferenceSetAttribute(
394
+ &preference, CUBLASLT_MATMUL_PREF_MAX_WORKSPACE_BYTES, &workspaceSize, sizeof(workspaceSize));
395
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
396
+
397
+ // We just need the best available heuristic to try and run matmul.
398
+ // There is no guarantee that this will work. For example, if A is
399
+ // badly aligned, you can request more (e.g. 32) algos and try to
400
+ // run them one by one until something works.
401
+ status = cublasLtMatmulAlgoGetHeuristic(
402
+ ltHandle, &operationDesc, &Adesc, &Bdesc, &Cdesc, &Cdesc, &preference, 1, &heuristicResult, &returnedResults);
403
+ if (status != CUBLAS_STATUS_SUCCESS) goto CLEANUP;
404
+
405
+ if (returnedResults == 0) {
406
+ status = CUBLAS_STATUS_NOT_SUPPORTED;
407
+ goto CLEANUP;
408
+ }
409
+
410
+ status = cublasLtMatmul(ltHandle,
411
+ &operationDesc,
412
+ alpha,
413
+ A,
414
+ &Adesc,
415
+ B,
416
+ &Bdesc,
417
+ beta,
418
+ C,
419
+ &Cdesc,
420
+ C,
421
+ &Cdesc,
422
+ &heuristicResult.algo,
423
+ workspace,
424
+ workspaceSize,
425
+ stream);
426
+
427
+ CLEANUP:
428
+ // Descriptors are no longer needed as all GPU work was already
429
+ // enqueued.
430
+ return status == CUBLAS_STATUS_SUCCESS ? 0 : 1;
431
+ }
432
+ #endif
433
+
434
+ // Bias ADD. Assume input X is [features x batch size], column major.
435
+ // Bias is a single 'features'-long vector, broadcast implicitly across the batch.
436
+ template <typename T>
437
+ __global__ void biasAdd_fprop(T *X, T *b, uint batch_size, uint features) {
438
+ T r_x[ILP];
439
+ T r_b[ILP];
440
+ if(is_aligned(X) && is_aligned(b) && features % ILP ==0) {
441
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
442
+ for (; tid*ILP < features * batch_size; tid += blockDim.x * gridDim.x) {
443
+ int row = tid % (features / ILP);
444
+ load_store(r_x, X, 0 , tid);
445
+ load_store(r_b, b, 0 , row);
446
+ #pragma unroll
447
+ for(int ii = 0; ii < ILP; ii++) {
448
+ float bias_sum = static_cast<float>(r_x[ii]) + static_cast<float>(r_b[ii]);
449
+ r_x[ii] = bias_sum;
450
+ }
451
+ load_store(X, r_x, tid , 0);
452
+ }
453
+ } else {
454
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
455
+ for (; tid < features * batch_size; tid += ILP * blockDim.x * gridDim.x) {
456
+ #pragma unroll
457
+ for(int ii = 0; ii < ILP; ii++) {
458
+ int idx = tid + ii * blockDim.x * gridDim.x;
459
+ if(idx < features * batch_size) {
460
+ int row = tid % features;
461
+ r_x[ii] = X[idx];
462
+ r_b[ii] = b[row];
463
+ }
464
+ }
465
+ #pragma unroll
466
+ for(int ii = 0; ii < ILP; ii++) {
467
+ float bias_sum = static_cast<float>(r_x[ii]) + static_cast<float>(r_b[ii]);
468
+ r_x[ii] = bias_sum;
469
+ }
470
+ #pragma unroll
471
+ for(int ii = 0; ii < ILP; ii++) {
472
+ int idx = tid + ii * blockDim.x * gridDim.x;
473
+ if(idx < features * batch_size) {
474
+ X[idx] = r_x[ii];
475
+ }
476
+ }
477
+ }
478
+ }
479
+ }
480
+
481
+ // Bias ADD + ReLU. Assume input X is [features x batch size], column major.
482
+ // Supports fused ReLU activation. Safe to call in-place.
483
+ template <typename T>
484
+ __global__ void biasAddRelu_fprop(T *X, T *b, uint batch_size, uint features) {
485
+ T r_x[ILP];
486
+ T r_b[ILP];
487
+ if(is_aligned(X) && is_aligned(b) && features % ILP ==0) {
488
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
489
+ for (; tid*ILP < features * batch_size; tid += blockDim.x * gridDim.x) {
490
+ int row = tid % (features / ILP);
491
+ load_store(r_x, X, 0 , tid);
492
+ load_store(r_b, b, 0 , row);
493
+ #pragma unroll
494
+ for(int ii = 0; ii < ILP; ii++) {
495
+ float bias_sum = static_cast<float>(r_x[ii]) + static_cast<float>(r_b[ii]);
496
+ r_x[ii] = relu(bias_sum);
497
+ }
498
+ load_store(X, r_x, tid , 0);
499
+ }
500
+ } else {
501
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
502
+ for (; tid < features * batch_size; tid += ILP * blockDim.x * gridDim.x) {
503
+ #pragma unroll
504
+ for(int ii = 0; ii < ILP; ii++) {
505
+ int idx = tid + ii * blockDim.x * gridDim.x;
506
+ if(idx < features * batch_size) {
507
+ int row = tid % features;
508
+ r_x[ii] = X[idx];
509
+ r_b[ii] = b[row];
510
+ }
511
+ }
512
+ #pragma unroll
513
+ for(int ii = 0; ii < ILP; ii++) {
514
+ float bias_sum = static_cast<float>(r_x[ii]) + static_cast<float>(r_b[ii]);
515
+ r_x[ii] = relu(bias_sum);
516
+ }
517
+ #pragma unroll
518
+ for(int ii = 0; ii < ILP; ii++) {
519
+ int idx = tid + ii * blockDim.x * gridDim.x;
520
+ if(idx < features * batch_size) {
521
+ X[idx] = r_x[ii];
522
+ }
523
+ }
524
+ }
525
+ }
526
+ }
527
+
528
+ // ReLU. Assume input X is [features x batch size], column major.
529
+ // Safe to call in-place.
530
+ template <typename T>
531
+ __global__ void Relu_fprop(T *X, uint batch_size, uint features) {
532
+ T r_x[ILP];
533
+ if(is_aligned(X) && features % ILP ==0) {
534
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
535
+ for (; tid*ILP < features * batch_size; tid += blockDim.x * gridDim.x) {
536
+ load_store(r_x, X, 0 , tid);
537
+ #pragma unroll
538
+ for(int ii = 0; ii < ILP; ii++) {
539
+ r_x[ii] = relu(static_cast<float>(r_x[ii]));
540
+ }
541
+ load_store(X, r_x, tid , 0);
542
+ }
543
+ } else {
544
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
545
+ for (; tid < features * batch_size; tid += ILP * blockDim.x * gridDim.x) {
546
+ #pragma unroll
547
+ for(int ii = 0; ii < ILP; ii++) {
548
+ int idx = tid + ii * blockDim.x * gridDim.x;
549
+ if(idx < features * batch_size) {
550
+ r_x[ii] = X[idx];
551
+ }
552
+ }
553
+ #pragma unroll
554
+ for(int ii = 0; ii < ILP; ii++) {
555
+ r_x[ii] = relu(static_cast<float>(r_x[ii]));
556
+ }
557
+ #pragma unroll
558
+ for(int ii = 0; ii < ILP; ii++) {
559
+ int idx = tid + ii * blockDim.x * gridDim.x;
560
+ if(idx < features * batch_size) {
561
+ X[idx] = r_x[ii];
562
+ }
563
+ }
564
+ }
565
+ }
566
+ }
567
+
568
+ // Sigmoid. Assume input X is [features x batch size], column major.
569
+ // Safe to call in-place.
570
+ template <typename T>
571
+ __global__ void Sigmoid_fprop(T *X, uint batch_size, uint features) {
572
+ T r_x[ILP];
573
+ if(is_aligned(X) && features % ILP ==0) {
574
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
575
+ for (; tid*ILP < features * batch_size; tid += blockDim.x * gridDim.x) {
576
+ load_store(r_x, X, 0 , tid);
577
+ #pragma unroll
578
+ for(int ii = 0; ii < ILP; ii++) {
579
+ r_x[ii] = sigmoid(static_cast<float>(r_x[ii]));
580
+ }
581
+ load_store(X, r_x, tid , 0);
582
+ }
583
+ } else {
584
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
585
+ for (; tid < features * batch_size; tid += ILP * blockDim.x * gridDim.x) {
586
+ #pragma unroll
587
+ for(int ii = 0; ii < ILP; ii++) {
588
+ int idx = tid + ii * blockDim.x * gridDim.x;
589
+ if(idx < features * batch_size) {
590
+ r_x[ii] = X[idx];
591
+ }
592
+ }
593
+ #pragma unroll
594
+ for(int ii = 0; ii < ILP; ii++) {
595
+ r_x[ii] = sigmoid(static_cast<float>(r_x[ii]));
596
+ }
597
+ #pragma unroll
598
+ for(int ii = 0; ii < ILP; ii++) {
599
+ int idx = tid + ii * blockDim.x * gridDim.x;
600
+ if(idx < features * batch_size) {
601
+ X[idx] = r_x[ii];
602
+ }
603
+ }
604
+ }
605
+ }
606
+ }
607
+
608
+ // ReLU backward. Assumes Y and dY are [features x batch size], column major.
609
+ // Safe to call in-place (dX may alias dY).
610
+ template <typename T>
611
+ __global__ void Relu_bprop(T *dY, T *Y, uint batch_size, uint features, T *dX) {
612
+ T r_dy[ILP];
613
+ T r_y[ILP];
614
+ if(is_aligned(dY) &&
615
+ is_aligned(Y) &&
616
+ is_aligned(dX) &&
617
+ features % ILP ==0) {
618
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
619
+ for (; tid*ILP < features * batch_size; tid += blockDim.x * gridDim.x) {
620
+ load_store(r_dy, dY, 0 , tid);
621
+ load_store(r_y, Y, 0 , tid);
622
+ #pragma unroll
623
+ for(int ii=0;ii<ILP;ii++){
624
+ if ((float)r_y[ii] <= 0.f)
625
+ r_dy[ii] = 0;
626
+ }
627
+ load_store(dX, r_dy, tid, 0);
628
+ }
629
+ } else {
630
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
631
+ for (; tid < features * batch_size; tid += ILP * blockDim.x * gridDim.x) {
632
+ #pragma unroll
633
+ for(int ii = 0; ii < ILP; ii++) {
634
+ int idx = tid + ii * blockDim.x * gridDim.x;
635
+ if(idx < features * batch_size) {
636
+ r_dy[ii] = dY[idx];
637
+ r_y[ii] = Y[idx];
638
+ }
639
+ }
640
+ #pragma unroll
641
+ for(int ii = 0; ii < ILP; ii++) {
642
+ if ((float)r_y[ii] <= 0.f)
643
+ r_dy[ii] = 0;
644
+ }
645
+ #pragma unroll
646
+ for(int ii = 0; ii < ILP; ii++) {
647
+ int idx = tid + ii * blockDim.x * gridDim.x;
648
+ if(idx < features * batch_size) {
649
+ dX[idx] = r_dy[ii];
650
+ }
651
+ }
652
+ }
653
+ }
654
+ }
655
+
656
+ // Sigmoid backward: uses dsigma/dx = sigma * (1 - sigma), so only the saved output Y and dY are needed.
657
+ // Assumes [features x batch size], column major. Safe to call in-place (dX may alias dY).
658
+ template <typename T>
659
+ __global__ void Sigmoid_bprop(T *dY, T *Y, uint batch_size, uint features, T *dX) {
660
+ T r_dy[ILP];
661
+ T r_y[ILP];
662
+ if(is_aligned(dY) &&
663
+ is_aligned(Y) &&
664
+ is_aligned(dX) &&
665
+ features % ILP ==0) {
666
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
667
+ for (; tid*ILP < features * batch_size; tid += blockDim.x * gridDim.x) {
668
+ load_store(r_dy, dY, 0 , tid);
669
+ load_store(r_y, Y, 0 , tid);
670
+ #pragma unroll
671
+ for(int ii=0;ii<ILP;ii++){
672
+ float grad_out = r_dy[ii];
673
+ float out = r_y[ii];
674
+ float grad_i = out * ( 1.f - out) * grad_out;
675
+ r_dy[ii] = grad_i;
676
+ }
677
+ load_store(dX, r_dy, tid, 0);
678
+ }
679
+ } else {
680
+ int tid = blockIdx.x * blockDim.x + threadIdx.x;
681
+ for (; tid < features * batch_size; tid += ILP * blockDim.x * gridDim.x) {
682
+ #pragma unroll
683
+ for(int ii = 0; ii < ILP; ii++) {
684
+ int idx = tid + ii * blockDim.x * gridDim.x;
685
+ if(idx < features * batch_size) {
686
+ r_dy[ii] = dY[idx];
687
+ r_y[ii] = Y[idx];
688
+ }
689
+ }
690
+ #pragma unroll
691
+ for(int ii = 0; ii < ILP; ii++) {
692
+ float grad_out = r_dy[ii];
693
+ float out = r_y[ii];
694
+ float grad_i = out * ( 1.f - out) * grad_out;
695
+ r_dy[ii] = grad_i;
696
+ }
697
+ #pragma unroll
698
+ for(int ii = 0; ii < ILP; ii++) {
699
+ int idx = tid + ii * blockDim.x * gridDim.x;
700
+ if(idx < features * batch_size) {
701
+ dX[idx] = r_dy[ii];
702
+ }
703
+ }
704
+ }
705
+ }
706
+ }
707
+
708
+ // Compute grid size for pointwise backward kernel.
709
+ // block_x/y is the total number of elements handled per block, not the number of threads
710
+ void get_biasAddRelu_bprop_grid_size(
711
+ int yfeat,
712
+ int batch_size,
713
+ int block_x,
714
+ int block_y,
715
+ int* grid_x,
716
+ int* grid_y) {
717
+
718
+ *grid_x = (yfeat + block_x - 1) / block_x;
719
+ // Get number of SMs for efficient reduction.
720
+ int num_SMs = at::cuda::getCurrentDeviceProperties()->multiProcessorCount;
721
+ // can switch to occupancy calculation. use 4 below now for sm_70
722
+ int max_blocks_y = (num_SMs * 4+(*grid_x)-1) / (*grid_x);
723
+ // block_y should be from minimal work per thread
724
+ int nRedSplits = (batch_size + block_y - 1) / block_y;
725
+ // Rather than launching more blocks than are useful, increase the per-thread
726
+ // reduction length; the kernel adapts to the actual work, so cap the launch at max_blocks_y.
727
+ *grid_y = std::min(nRedSplits, max_blocks_y);
728
+ return;
729
+ }
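+ // Illustrative sizing (assuming an 80-SM device such as V100): for yfeat = 1024,
+ // batch_size = 65536, block_x = 32 and block_y = 16 * 16 = 256 elements,
+ //   grid_x       = (1024 + 31) / 32        = 32
+ //   max_blocks_y = (80 * 4 + 32 - 1) / 32  = 10
+ //   nRedSplits   = (65536 + 255) / 256     = 256
+ //   grid_y       = min(256, 10)            = 10
+ // so each of the 10 y-blocks reduces roughly 6554 rows of the batch.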
730
+
731
+ // Addition done deterministically via a 2-pass approach. Each CTA writes out partial
732
+ // sum, and the last CTA in grid Y dimension accumulates partials serially and writes to result.
733
+ template <typename T, int UNROLL_FACTOR>
734
+ __global__ void biasAdd_bprop(
735
+ T* dY,
736
+ int features,
737
+ int batch_size,
738
+ volatile float* intermediate,
739
+ int* semaphores,
740
+ T* db) {
741
+ // The feature that this thread is responsible for
742
+ int f = blockIdx.x * blockDim.x + threadIdx.x;
743
+
744
+ // Compute the span this thread is responsible for
745
+ // For this block
746
+ int b_chunkSize = (batch_size + gridDim.y - 1) / gridDim.y;
747
+ int b_nStart = blockIdx.y * b_chunkSize;
748
+ int b_nSpan = min(batch_size, b_nStart + b_chunkSize) - b_nStart;
749
+ // For this thread
750
+ int chunkSize = (b_chunkSize + blockDim.y - 1) / blockDim.y;
751
+ int nStart = threadIdx.y * chunkSize + b_nStart;
752
+ int nSpan = min(b_nStart + b_nSpan, nStart + chunkSize) - nStart;
753
+
754
+ volatile float* out = intermediate + blockIdx.y * features;
755
+
756
+ // Flag to trigger last reduction.
757
+ __shared__ bool isLastBlock;
758
+ // we know block size for now
759
+ __shared__ float smem[BIAS_RELU_BW_NTHREADS_X*BIAS_RELU_BW_NTHREADS_Y];
760
+
761
+ // Accumulate db in FP32 always
762
+ float db_local = 0;
763
+ if (f < features) {
764
+ int nidx = 0;
765
+ // Handle non-multiple of UNROLL_FACTOR residue
766
+ for (; nidx < nSpan % UNROLL_FACTOR; nidx++) {
767
+ int64_t row, col, flat_idx;
768
+ row = f;
769
+ col = nStart + nidx;
770
+ flat_idx = col * features + row;
771
+ db_local += (float)dY[flat_idx];
772
+ }
773
+
774
+ // Handle meat of work
775
+ for (; (nidx + UNROLL_FACTOR - 1) < nSpan; nidx += UNROLL_FACTOR) {
776
+ int64_t row, col, flat_idx;
777
+ row = f;
778
+ col = nStart + nidx;
779
+ flat_idx = col * features + row;
780
+ #pragma unroll 4
781
+ for (int u = 0; u < UNROLL_FACTOR; u++) {
782
+ db_local += (float)dY[flat_idx];
783
+ flat_idx += features;
784
+ }
785
+ }
786
+
787
+ // naive block reduction on y-dim
788
+ int linear_idx = threadIdx.y * blockDim.x + threadIdx.x;
789
+ smem[linear_idx] = db_local;
790
+ }
791
+ __syncthreads();
792
+ if (f < features) {
793
+ if(threadIdx.y == 0) {
794
+ for(int yidx = 1; yidx < blockDim.y; yidx++){
795
+ db_local += smem[yidx * blockDim.x + threadIdx.x];
796
+ }
797
+
798
+ // block result is in db_local now for all threadIdx.y == 0
799
+ // Write out partial result
800
+ out[f] = db_local;
801
+ }
802
+ }
803
+ __threadfence();
804
+ __syncthreads();
805
+
806
+ // Increment semaphore and check if this is the last CTA in the grid_y dimension.
807
+ // Only thread (0,0) calls this
808
+ if (threadIdx.x == 0 && threadIdx.y == 0 && f < features) {
809
+ unsigned int sum_idx;
810
+ sum_idx = atomicAdd(&(semaphores[blockIdx.x]), 1);
811
+ isLastBlock = (sum_idx == (gridDim.y - 1));
812
+ }
813
+ __syncthreads();
814
+
815
+ db_local = 0;
816
+ // No block reduction here; only threads with threadIdx.y == 0 perform the grid reduction
817
+ if (isLastBlock && f < features) {
818
+ if(threadIdx.y == 0) {
819
+ for (int n = 0; n < gridDim.y; n++) {
820
+ int row, col;
821
+ row = f;
822
+ col = n;
823
+ db_local += (float)(intermediate[col * features + row]);
824
+ }
825
+ db[f] = (T)db_local;
826
+ }
827
+ }
828
+ }
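+ // Why the two-pass scheme above is deterministic: each block writes its partial sums to
+ // `intermediate` (one row of `features` floats per blockIdx.y), __threadfence() makes those
+ // writes visible device-wide before the semaphore is incremented, and the single block that
+ // observes the last increment sums the partials in a fixed 0..gridDim.y-1 order. No
+ // floating-point atomics are involved, so the result is bit-reproducible for a given launch.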
829
+
830
+ // Addition done deterministically via a 2-pass approach. Each CTA writes out partial
831
+ // sum, and the last CTA in grid Y dimension accumulates partials serially and writes to result.
832
+ template <typename T, int UNROLL_FACTOR>
833
+ __global__ void biasAddRelu_bprop(
834
+ T* Y,
835
+ T* dY,
836
+ int features,
837
+ int batch_size,
838
+ T* dX,
839
+ volatile float* intermediate,
840
+ int* semaphores,
841
+ T* db) {
842
+ // The feature that this thread is responsible for
843
+ int f = blockIdx.x * blockDim.x + threadIdx.x;
844
+
845
+ // Compute the span this thread is responsible for
846
+ // For this block
847
+ int b_chunkSize = (batch_size + gridDim.y - 1) / gridDim.y;
848
+ int b_nStart = blockIdx.y * b_chunkSize;
849
+ int b_nSpan = min(batch_size, b_nStart + b_chunkSize) - b_nStart;
850
+ // For this thread
851
+ int chunkSize = (b_chunkSize + blockDim.y - 1) / blockDim.y;
852
+ int nStart = threadIdx.y * chunkSize + b_nStart;
853
+ int nSpan = min(b_nStart + b_nSpan, nStart + chunkSize) - nStart;
854
+
855
+ volatile float* out = intermediate + blockIdx.y * features;
856
+
857
+ // Flag to trigger last reduction.
858
+ __shared__ bool isLastBlock;
859
+ // we know block size for now
860
+ __shared__ float smem[BIAS_RELU_BW_NTHREADS_X*BIAS_RELU_BW_NTHREADS_Y];
861
+
862
+ // Accumulate db in FP32 always
863
+ float db_local = 0;
864
+ if (f < features) {
865
+ int nidx = 0;
866
+ // Handle non-multiple of UNROLL_FACTOR residue
867
+ for (; nidx < nSpan % UNROLL_FACTOR; nidx++) {
868
+ int row, col, flat_idx;
869
+ row = f;
870
+ col = nStart + nidx;
871
+ flat_idx = col * features + row;
872
+ T y_val = Y[flat_idx];
873
+ T dy_val = dY[flat_idx];
874
+ T dx_val;
875
+ if ((float)y_val > 0.f)
876
+ dx_val = dy_val;
877
+ else
878
+ dx_val = 0;
879
+ dX[flat_idx] = dx_val;
880
+ db_local += (float)dx_val;
881
+ }
882
+
883
+ // Handle meat of work
884
+ for (; (nidx + UNROLL_FACTOR - 1) < nSpan; nidx += UNROLL_FACTOR) {
885
+ int row, col, flat_idx;
886
+ row = f;
887
+ col = nStart + nidx;
888
+ flat_idx = col * features + row;
889
+ #pragma unroll 4
890
+ for (int u = 0; u < UNROLL_FACTOR; u++) {
891
+ T y_val = Y[flat_idx];
892
+ T dy_val = dY[flat_idx];
893
+ T dx_val;
894
+ if ((float)y_val > 0.f)
895
+ dx_val = dy_val;
896
+ else
897
+ dx_val = 0;
898
+ dX[flat_idx] = dx_val;
899
+ db_local += (float)dx_val;
900
+ flat_idx += features;
901
+ }
902
+ }
903
+
904
+ // naive block reduction on y-dim
905
+ int linear_idx = threadIdx.y * blockDim.x + threadIdx.x;
906
+ smem[linear_idx] = db_local;
907
+ }
908
+ __syncthreads();
909
+ if (f < features) {
910
+ if(threadIdx.y == 0) {
911
+ for(int yidx = 1; yidx < blockDim.y; yidx++){
912
+ db_local += smem[yidx * blockDim.x + threadIdx.x];
913
+ }
914
+
915
+ // block result is in db_local now for all threadIdx.y == 0
916
+ // Write out partial result
917
+ out[f] = db_local;
918
+ }
919
+ }
920
+ __threadfence();
921
+ __syncthreads();
922
+
923
+ // Increment semaphore and check if this is the last CTA in the grid_y dimension.
924
+ // Only thread (0,0) calls this
925
+ if (threadIdx.x == 0 && threadIdx.y == 0 && f < features) {
926
+ unsigned int sum_idx;
927
+ sum_idx = atomicAdd(&(semaphores[blockIdx.x]), 1);
928
+ isLastBlock = (sum_idx == (gridDim.y - 1));
929
+ }
930
+ __syncthreads();
931
+
932
+ db_local = 0;
933
+ // No block reduction here; only threads with threadIdx.y == 0 perform the grid reduction
934
+ if (isLastBlock && f < features) {
935
+ if(threadIdx.y == 0) {
936
+ for (int n = 0; n < gridDim.y; n++) {
937
+ int row, col;
938
+ row = f;
939
+ col = n;
940
+ db_local += (float)(intermediate[col * features + row]);
941
+ }
942
+ db[f] = (T)db_local;
943
+ }
944
+ }
945
+ }
946
+
947
+ // Addition done deterministically via a 2-pass approach. Each CTA writes out partial
948
+ // sum, and the last CTA in grid Y dimension accumulates partials serially and writes to result.
949
+ template <typename T, int UNROLL_FACTOR>
950
+ __global__ void biasAddRelu_bprop_aligned(
951
+ T* Y,
952
+ T* dY,
953
+ int features,
954
+ int batch_size,
955
+ T* dX,
956
+ volatile float* intermediate,
957
+ int* semaphores,
958
+ T* db) {
959
+ // The feature that this thread is responsible for
960
+ int f = blockIdx.x * blockDim.x + threadIdx.x;
961
+
962
+ // Compute the span this thread is responsible for
963
+ // For this block
964
+ int b_chunkSize = (batch_size + gridDim.y - 1) / gridDim.y;
965
+ int b_nStart = blockIdx.y * b_chunkSize;
966
+ int b_nSpan = min(batch_size, b_nStart + b_chunkSize) - b_nStart;
967
+ // For this thread
968
+ int chunkSize = (b_chunkSize + blockDim.y - 1) / blockDim.y;
969
+ int nStart = threadIdx.y * chunkSize + b_nStart;
970
+ int nSpan = min(b_nStart + b_nSpan, nStart + chunkSize) - nStart;
971
+
972
+ volatile float* out = intermediate + blockIdx.y * features;
973
+
974
+ // Flag to trigger last reduction.
975
+ __shared__ bool isLastBlock;
976
+
977
+ // Accumulate db in FP32 always
978
+ float db_local[ILP];
979
+ T r_y[ILP];
980
+ T r_dy[ILP];
981
+ #pragma unroll
982
+ for(int ii=0;ii<ILP;ii++){
983
+ db_local[ii] = 0.f;
984
+ }
985
+
986
+ // f always <= features in this case
987
+ //if (f < features) {
988
+ int nidx = 0;
989
+
990
+ // Handle non-multiple of UNROLL_FACTOR residue
991
+ for (; nidx < nSpan % UNROLL_FACTOR; nidx++) {
992
+ int row, col, flat_idx;
993
+ row = f;
994
+ col = nStart + nidx;
995
+ flat_idx = col * features / ILP + row;
996
+
997
+ load_store(r_y, Y, 0, flat_idx);
998
+ load_store(r_dy, dY, 0, flat_idx);
999
+ #pragma unroll
1000
+ for(int ii=0;ii<ILP;ii++){
1001
+ if ((float)r_y[ii] <= 0.f)
1002
+ r_dy[ii] = 0;
1003
+ db_local[ii] += (float)r_dy[ii];
1004
+ }
1005
+ load_store(dX, r_dy, flat_idx, 0);
1006
+ }
1007
+
1008
+ // Handle meat of work
1009
+ for (; (nidx + UNROLL_FACTOR - 1) < nSpan; nidx += UNROLL_FACTOR) {
1010
+ int row, col, flat_idx;
1011
+ row = f;
1012
+ col = nStart + nidx;
1013
+ flat_idx = col * features / ILP + row; // total threads in x == features/ILP
1014
+ #pragma unroll
1015
+ for (int u = 0; u < UNROLL_FACTOR; u++) {
1016
+ load_store(r_y, Y, 0, flat_idx);
1017
+ load_store(r_dy, dY, 0, flat_idx);
1018
+ #pragma unroll
1019
+ for(int ii=0;ii<ILP;ii++){
1020
+ if ((float)r_y[ii] <= 0.f)
1021
+ r_dy[ii] = 0;
1022
+ db_local[ii] += (float)r_dy[ii];
1023
+ }
1024
+ load_store(dX, r_dy, flat_idx, 0);
1025
+ flat_idx += features/ILP;
1026
+ }
1027
+ }
1028
+
1029
+ // we know block size for now
1030
+ __shared__ float smem[BIAS_RELU_BW_NTHREADS_X*BIAS_RELU_BW_NTHREADS_Y*ILP];
1031
+ // naive block reduction on y-dim
1032
+ int linear_idx = threadIdx.y * blockDim.x + threadIdx.x;
1033
+ float* smem_out = smem + ILP * linear_idx;
1034
+ #pragma unroll
1035
+ for(int ii=0;ii<ILP;ii++){
1036
+ smem_out[ii] = db_local[ii]; // reuse local dy buffer
1037
+ }
1038
+ __syncthreads();
1039
+ if(threadIdx.y == 0) {
1040
+ for(int yidx = 1; yidx < blockDim.y; yidx++){
1041
+ float* smem_in = smem + ILP * (yidx * blockDim.x + threadIdx.x);
1042
+ #pragma unroll
1043
+ for(int ii=0;ii<ILP;ii++){
1044
+ db_local[ii] += smem_in[ii]; // reuse local dy buffer
1045
+ }
1046
+ }
1047
+
1048
+ // block result is in db_local now for all threadIdx.y == 0
1049
+ if(gridDim.y == 1) {
1050
+ #pragma unroll
1051
+ for(int ii=0;ii<ILP;ii++){
1052
+ r_dy[ii] = db_local[ii]; // reuse local dy buffer
1053
+ }
1054
+ load_store(db, r_dy, f, 0);
1055
+ return;
1056
+ }
1057
+
1058
+ // Write out partial result
1059
+ load_store(out, db_local, f, 0);
1060
+ }
1061
+ __threadfence();
1062
+ __syncthreads();
1063
+
1064
+ // Increment semaphore and check if this is the last CTA in the grid_y dimension.
1065
+ // Only thread (0,0) calls this
1066
+ if (threadIdx.x == 0 && threadIdx.y == 0) {
1067
+ unsigned int sum_idx;
1068
+ sum_idx = atomicAdd(&(semaphores[blockIdx.x]), 1);
1069
+ isLastBlock = (sum_idx == (gridDim.y - 1));
1070
+ }
1071
+ __syncthreads();
1072
+
1073
+ #pragma unroll
1074
+ for(int ii=0;ii<ILP;ii++){
1075
+ db_local[ii] = 0.f;
1076
+ }
1077
+ float r_db[ILP];
1078
+
1079
+ // No block reduction here; only threads with threadIdx.y == 0 perform the grid reduction
1080
+ if (isLastBlock) {
1081
+ if(threadIdx.y == 0){
1082
+ for (int n = 0; n < gridDim.y; n++) {
1083
+ int row, col;
1084
+ row = f;
1085
+ col = n;
1086
+ load_store(r_db, intermediate, 0, col * features / ILP + row);
1087
+ #pragma unroll
1088
+ for(int ii=0;ii<ILP;ii++){
1089
+ db_local[ii] += r_db[ii];
1090
+ }
1091
+ }
1092
+ #pragma unroll
1093
+ for(int ii=0;ii<ILP;ii++){
1094
+ r_dy[ii] = db_local[ii]; // reuse local dy buffer
1095
+ }
1096
+ load_store(db, r_dy, f, 0);
1097
+ }
1098
+ }
1099
+ }
1100
+
1101
+ // Lists where the num_layers-1 intermediate Y buffers start in reserved space on fprop, starting
1102
+ // at offset 0. The last Y value is, of course, stored in the user-provided output buffer.
1103
+ void get_y_offsets(
1104
+ int batch_size,
1105
+ int num_layers,
1106
+ const int* output_features,
1107
+ int* y_start_offsets) {
1108
+ y_start_offsets[0] = 0;
1109
+ for (int i = 1; i < num_layers; i++) {
1110
+ y_start_offsets[i] = y_start_offsets[i - 1] + batch_size * output_features[i - 1];
1111
+ }
1112
+ }
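+ // Example: with batch_size = 32 and output_features = {128, 64, 10}, the offsets are
+ //   y_start_offsets = {0, 0 + 32*128, 4096 + 32*64} = {0, 4096, 6144}
+ // (in elements, not bytes); the final 32*10 outputs go to the user's Y buffer.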
1113
+
1114
+ // Returns the reserved space (in elements) needed for the MLP
1115
+ size_t get_mlp_reserved_space(int64_t batch_size, int num_layers, const int* output_features) {
1116
+ size_t res_space = 0;
1117
+ // Need to store output of every intermediate MLP - size equal to output_features[i] * batch_size
1118
+ // for all 'i' in [0, num_layers-1)
1119
+ for (int l = 0; l < num_layers; l++) {
1120
+ res_space += output_features[l] * batch_size;
1121
+ }
1122
+ return res_space;
1123
+ }
1124
+
1125
+ // Returns the size of all fprop activations combined
1126
+ size_t get_all_activations_size(int64_t batch_size, int num_layers, const int* output_features) {
1127
+ size_t acts_size = 0;
1128
+ for (int l = 0; l < num_layers; l++) {
1129
+ acts_size += output_features[l] * batch_size;
1130
+ }
1131
+ return acts_size;
1132
+ }
1133
+
1134
+ #if 0
1135
+ // Returns the work space (in elements) needed for the MLP bprop.
1136
+ size_t get_mlp_bp_workspace (int batch_size, int num_layers, const int* output_features) {
1137
+ /*
1138
+ Workspace is partitioned as
1139
+ DY_GEMMs : DX_GEMMs
1140
+ */
1141
+ size_t work_space = 0;
1142
+
1143
+ // Store each intermediate dY explicitly. Need 2 dYs per MLP layer (one for o/p
1144
+ // of biasReLU_bp and one for o/p of dgrad GEMM).
1145
+ work_space += 2*get_all_activations_size(batch_size, num_layers, output_features);
1146
+
1147
+ return work_space;
1148
+ }
1149
+ #endif
1150
+
1151
+ // Scratch space needed for reductions, in number of elements
1152
+ size_t get_reduction_scratch_space(int batch_size, int num_layers, const int* output_features) {
1153
+ size_t max_scratch_space = 0;
1154
+ // Loop over all layers to see which one needs the max scratch space
1155
+ for (int l = 0; l < num_layers; l++) {
1156
+ // need to find max(aligned, not_aligned)
1157
+ int tmp, res0, res1;
1158
+
1159
+ int block_x = BIAS_RELU_BW_NTHREADS_X;
1160
+ int block_y = BIAS_RELU_RED_PER_THREAD * BIAS_RELU_BW_NTHREADS_Y;
1161
+ get_biasAddRelu_bprop_grid_size(
1162
+ output_features[l], batch_size, block_x, block_y, &tmp, &res0);
1163
+
1164
+ block_x = ILP * BIAS_RELU_BW_NTHREADS_X;
1165
+ get_biasAddRelu_bprop_grid_size(
1166
+ output_features[l], batch_size, block_x, block_y, &tmp, &res1);
1167
+
1168
+ max_scratch_space = std::max(max_scratch_space, (size_t)(output_features[l] * res0));
1169
+ max_scratch_space = std::max(max_scratch_space, (size_t)(output_features[l] * res1));
1170
+ }
1171
+
1172
+ return max_scratch_space;
1173
+ }
1174
+
1175
+ // Buffer for semaphores
1176
+ size_t get_semaphores_size(int num_layers, const int* output_features) {
1177
+ // Upper bound on semaphores is one per feature for the layer
1178
+ // with the most features.
1179
+ int max_features = 0;
1180
+ for (int l = 0; l < num_layers; l++) {
1181
+ max_features = std::max(max_features, output_features[l]);
1182
+ }
1183
+ return (size_t)max_features;
1184
+ }
1185
+
1186
+ // Returns the work space (in elements) needed for the MLP bprop.
1187
+ template <typename T>
1188
+ size_t get_mlp_bp_workspace_in_bytes(int batch_size, int num_layers, const int* output_features) {
1189
+ size_t work_space = 0;
1190
+
1191
+ // Store each intermediate dY explicitly. Need 2 dYs per MLP layer (one for o/p
1192
+ // of biasReLU_bp and one for o/p of dgrad GEMM).
1193
+ work_space += 2 * get_all_activations_size(batch_size, num_layers, output_features) * sizeof(T);
1194
+ work_space +=
1195
+ get_reduction_scratch_space(batch_size, num_layers, output_features) * sizeof(float);
1196
+ work_space += get_semaphores_size(num_layers, output_features) * sizeof(int);
1197
+
1198
+ return work_space;
1199
+ }
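+ // Rough example (hypothetical sizes): batch_size = 32, output_features = {128, 64, 10},
+ // T = float. All activations together are 32 * (128 + 64 + 10) = 6464 elements, so the
+ // two dY/dX staging areas take 2 * 6464 * 4 = 51712 bytes; on top of that come the
+ // device-dependent reduction scratch (the per-layer maximum of output_features[l] * grid_y
+ // floats) and max_features = 128 ints of semaphores.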
1200
+
1201
+ // Returns pointers to each segment of the workspace
1202
+ template <typename T>
1203
+ void partition_mlp_bp_workspace(
1204
+ int batch_size,
1205
+ int num_layers,
1206
+ const int* output_features,
1207
+ void* work_space,
1208
+ T** dy_gemms,
1209
+ T** dx_gemms,
1210
+ float** db_scratch,
1211
+ int** semaphores) {
1212
+ /*
1213
+ Workspace is partitioned as
1214
+ DY_GEMMs : DX_GEMMs : DB_SCRATCH : SEMAPHORES
1215
+ */
1216
+ // Start address where dy_gemm tensors are stored
1217
+ *dy_gemms = reinterpret_cast<T*>(work_space);
1218
+ // Start address where dx_gemm tensors are stored
1219
+ *dx_gemms = *dy_gemms + get_all_activations_size(batch_size, num_layers, output_features);
1220
+ // Start address where db intermediate tensors are stored
1221
+ *db_scratch = reinterpret_cast<float*>(
1222
+ *dx_gemms + get_all_activations_size(batch_size, num_layers, output_features));
1223
+ // Start address of semaphores
1224
+ *semaphores = reinterpret_cast<int*>(
1225
+ *db_scratch + get_reduction_scratch_space(batch_size, num_layers, output_features));
1226
+
1227
+ return;
1228
+ }
1229
+
1230
+ // Does a simple MLP fprop (GEMM+bias+ReLU).
1231
+ // Can handle num_layers number of layers, each with its own shape. Output of layer i is assumed
1232
+ // to be input of layer i+1. output_features, WPtr and BPtr are arrays of length num_layers, and
1233
+ // must be in the same order i.e. WPtr[i] and BPtr[i] are respectively the weight and bias of layer
1234
+ // 'i'.
1235
+ template <typename T>
1236
+ int mlp_fp(
1237
+ T* X,
1238
+ int input_features,
1239
+ int batch_size,
1240
+ T** WPtr,
1241
+ int num_layers,
1242
+ int* output_features,
1243
+ T** BPtr,
1244
+ T* Y,
1245
+ T* reserved_space,
1246
+ int use_bias,
1247
+ int activation,
1248
+ void* lt_workspace) {
1249
+ T *weight, *input, *output, *bias;
1250
+ T *reserved_space_x, *reserved_space_y;
1251
+ reserved_space_x = NULL;
1252
+ reserved_space_y = reserved_space;
1253
+
1254
+ // Get cublas handle from Pytorch
1255
+ cublasHandle_t handle = at::cuda::getCurrentCUDABlasHandle();
1256
+ // Get the stream from cublas handle to reuse for biasReLU kernel.
1257
+ cudaStream_t stream;
1258
+ cublasGetStream(handle, &stream);
1259
+
1260
+ for (int layer = 0; layer < num_layers; layer++) {
1261
+ weight = WPtr[layer];
1262
+ input = (layer == 0) ? X : reserved_space_x;
1263
+ output = (layer == num_layers - 1) ? Y : reserved_space_y;
1264
+ if (use_bias) {
1265
+ bias = BPtr[layer];
1266
+ }
1267
+ int ifeat = (layer == 0) ? input_features : output_features[layer - 1];
1268
+ int ofeat = output_features[layer];
1269
+
1270
+ float one = 1.f;
1271
+ float zero = 0.f;
1272
+
1273
+ // try with cublaslt first for supported case with valid handle
1274
+ int cublaslt_status = 1;
1275
+ #if defined(CUBLAS_VERSION) && CUBLAS_VERSION >= 11000
1276
+ if(activation < 1){
1277
+ cublaslt_status = mlp_gemm_lt(
1278
+ //ltHandle,
1279
+ (cublasLtHandle_t)handle,
1280
+ CUBLAS_OP_T,
1281
+ CUBLAS_OP_N,
1282
+ ofeat,
1283
+ batch_size,
1284
+ ifeat,
1285
+ &one,
1286
+ weight,
1287
+ ifeat,
1288
+ input,
1289
+ ifeat,
1290
+ &zero,
1291
+ output,
1292
+ ofeat,
1293
+ lt_workspace,
1294
+ 1 << 22,
1295
+ stream,
1296
+ use_bias == 1,
1297
+ activation == 1,
1298
+ bias);
1299
+ }
1300
+ #endif
1301
+
1302
+ // if cublaslt failed or not executed, fallback to cublas
1303
+ if (cublaslt_status != 0) {
1304
+ cublasStatus_t cublas_status;
1305
+ // Call GEMM: fprop is Y = W'X
1306
+ cublas_status = mlp_gemm(
1307
+ handle,
1308
+ CUBLAS_OP_T,
1309
+ CUBLAS_OP_N,
1310
+ ofeat,
1311
+ batch_size,
1312
+ ifeat,
1313
+ &one,
1314
+ weight,
1315
+ ifeat,
1316
+ input,
1317
+ ifeat,
1318
+ &zero,
1319
+ output,
1320
+ ofeat);
1321
+
1322
+ if (cublas_status != CUBLAS_STATUS_SUCCESS) {
1323
+ printf("GEMM fprop failed with %d\n", cublas_status);
1324
+ return 1;
1325
+ }
1326
+
1327
+ const uint &input_size = ofeat;
1328
+ int num_blocks = 0;
1329
+ int num_SMs = at::cuda::getCurrentDeviceProperties()->multiProcessorCount;
1330
+ // Call biasReLU
1331
+ if(use_bias == 1) {
1332
+ if (activation == 0) { // no activation
1333
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, biasAdd_fprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1334
+ biasAdd_fprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(output, bias, batch_size, input_size);
1335
+ } else if (activation == 1) { // relu
1336
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, biasAddRelu_fprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1337
+ biasAddRelu_fprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(output, bias, batch_size, input_size);
1338
+ } else if (activation == 2) { // sigmoid
1339
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, biasAdd_fprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1340
+ biasAdd_fprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(output, bias, batch_size, input_size);
1341
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, Sigmoid_fprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1342
+ Sigmoid_fprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(output, batch_size, input_size);
1343
+ }
1344
+ } else {
1345
+ // don't need to do anything in case of no activation and no bias
1346
+ if (activation == 1) { // relu
1347
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, Relu_fprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1348
+ Relu_fprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(output, batch_size, input_size);
1349
+ } else if (activation == 2) { // sigmoid
1350
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, Sigmoid_fprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1351
+ Sigmoid_fprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(output, batch_size, input_size);
1352
+ }
1353
+ }
1354
+ }
1355
+ // Set current output as next layer input
1356
+ reserved_space_x = reserved_space_y;
1357
+ // Set next layer output
1358
+ reserved_space_y += ofeat * batch_size;
1359
+ }
1360
+
1361
+ return 0;
1362
+ }
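+ // Minimal host-side usage sketch for mlp_fp (illustrative only; names such as x_dev,
+ // w0_dev and lt_ws_dev are hypothetical device pointers, and the real caller is the torch
+ // extension wrapper, which builds these arrays from tensors):
+ //
+ //   const int num_layers = 2;
+ //   int output_features[num_layers] = {64, 10};   // per-layer output widths
+ //   float* WPtr[num_layers] = {w0_dev, w1_dev};   // device weight pointers
+ //   float* BPtr[num_layers] = {b0_dev, b1_dev};   // device bias pointers
+ //   // reserved_dev must hold get_mlp_reserved_space(...) elements
+ //   int rc = mlp_fp<float>(x_dev, in_features, batch, WPtr, num_layers,
+ //                          output_features, BPtr, y_dev, reserved_dev,
+ //                          /*use_bias=*/1, /*activation=*/1, lt_ws_dev);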
1363
+
1364
+ // Does a simple MLP bprop (GEMM+bias+ReLU).
1365
+ // Needs reserved space to come back exactly as it was populated in fprop.
1366
+ // Does dgrad and wgrad sequentially.
1367
+ template <typename T>
1368
+ int mlp_bp(
1369
+ T* X,
1370
+ T* Y,
1371
+ int input_features,
1372
+ int batch_size,
1373
+ T** WPtr,
1374
+ int num_layers,
1375
+ int* output_features,
1376
+ T* dY,
1377
+ T* reserved_space,
1378
+ T* work_space,
1379
+ T* dX,
1380
+ T** dwPtr,
1381
+ T** dbPtr,
1382
+ bool requires_grad,
1383
+ int use_bias,
1384
+ int activation) {
1385
+ T* weight;
1386
+ T *dweight, *dx, *dy, *dbias;
1387
+ T *x, *y;
1388
+
1389
+ // Where the dx of the biasReLU (== dy of gemm) is stored. Can be thrown away
1390
+ // after bp call.
1391
+ T* dy_gemm_base;
1392
+ // Where the dx after GEMM is stored.
1393
+ T* dx_gemm_base;
1394
+ // Where partial reduction results are stored.
1395
+ float* db_scratch;
1396
+ // Semaphores for reduction.
1397
+ int* semaphores;
1398
+
1399
+ partition_mlp_bp_workspace<T>(
1400
+ batch_size,
1401
+ num_layers,
1402
+ output_features,
1403
+ work_space,
1404
+ &dy_gemm_base,
1405
+ &dx_gemm_base,
1406
+ &db_scratch,
1407
+ &semaphores);
1408
+
1409
+ size_t semaphore_size = get_semaphores_size(num_layers, output_features) * sizeof(int);
1410
+
1411
+ // Get cublas handle from Pytorch
1412
+ cublasHandle_t handle = at::cuda::getCurrentCUDABlasHandle();
1413
+ // Get the stream from cublas handle to reuse for biasReLU kernel.
1414
+ cudaStream_t stream;
1415
+ cublasGetStream(handle, &stream);
1416
+
1417
+ int* y_offsets = (int*)malloc(num_layers * sizeof(int));
1418
+ get_y_offsets(batch_size, num_layers, output_features, y_offsets);
1419
+
1420
+ for (int layer = num_layers - 1; layer >= 0; layer--) {
1421
+ weight = WPtr[layer];
1422
+ dweight = dwPtr[layer];
1423
+
1424
+ // x is read from reserved space
1425
+ x = (layer == 0) ? X : reserved_space + y_offsets[layer - 1];
1426
+ // dx is written in workspace for all but layer==0
1427
+ dx = (layer == 0) ? dX : dx_gemm_base + y_offsets[layer - 1];
1428
+
1429
+ // y is read from reserved space
1430
+ y = (layer == num_layers - 1) ? Y : reserved_space + y_offsets[layer];
1431
+ // dx from layer+1
1432
+ dy = (layer == num_layers - 1) ? dY : dx_gemm_base + y_offsets[layer];
1433
+ // dy_gemm is written to and read immediately
1434
+ T* dy_gemm = dy_gemm_base + y_offsets[layer];
1435
+
1436
+ dbias = dbPtr[layer];
1437
+ int xfeat = (layer == 0) ? input_features : output_features[layer - 1];
1438
+ int yfeat = output_features[layer];
1439
+
1440
+ float one = 1.f;
1441
+ float zero = 0.f;
1442
+
1443
+ if (use_bias == 1) {
1444
+ if (activation == 0) { // no activation
1445
+ // bgrad
1446
+ dim3 block(BIAS_RELU_BW_NTHREADS_X, BIAS_RELU_BW_NTHREADS_Y);
1447
+ int grid_x, grid_y;
1448
+ cudaMemsetAsync(semaphores, 0, semaphore_size, stream);
1449
+
1450
+ int block_x = BIAS_RELU_BW_NTHREADS_X;
1451
+ int block_y = BIAS_RELU_RED_PER_THREAD * BIAS_RELU_BW_NTHREADS_Y;
1452
+ get_biasAddRelu_bprop_grid_size(yfeat, batch_size, block_x, block_y, &grid_x, &grid_y);
1453
+ dim3 grid(grid_x, grid_y);
1454
+ biasAdd_bprop<T, 4><<<grid, block, 0, stream>>>(
1455
+ dy, yfeat, batch_size, db_scratch, semaphores, dbias);
1456
+ // no separate activation bprop needed; point dy_gemm directly at dy
1457
+ dy_gemm = dy;
1458
+ } else if (activation == 1) { // relu
1459
+ dim3 block(BIAS_RELU_BW_NTHREADS_X, BIAS_RELU_BW_NTHREADS_Y);
1460
+ int grid_x, grid_y;
1461
+ cudaMemsetAsync(semaphores, 0, semaphore_size, stream);
1462
+
1463
+ if(yfeat % (ILP * BIAS_RELU_BW_NTHREADS_X) == 0 &&
1464
+ is_aligned(y) &&
1465
+ is_aligned(dy) &&
1466
+ is_aligned(dy_gemm) &&
1467
+ is_aligned(dbias)){
1468
+ int block_x = ILP * BIAS_RELU_BW_NTHREADS_X;
1469
+ int block_y = BIAS_RELU_RED_PER_THREAD * BIAS_RELU_BW_NTHREADS_Y;
1470
+ get_biasAddRelu_bprop_grid_size(yfeat, batch_size, block_x, block_y, &grid_x, &grid_y);
1471
+ dim3 grid(grid_x, grid_y);
1472
+ biasAddRelu_bprop_aligned<T, 4><<<grid, block, 0, stream>>>(
1473
+ y, dy, yfeat, batch_size, dy_gemm, db_scratch, semaphores, dbias);
1474
+ } else {
1475
+ int block_x = BIAS_RELU_BW_NTHREADS_X;
1476
+ int block_y = BIAS_RELU_RED_PER_THREAD * BIAS_RELU_BW_NTHREADS_Y;
1477
+ get_biasAddRelu_bprop_grid_size(yfeat, batch_size, block_x, block_y, &grid_x, &grid_y);
1478
+ dim3 grid(grid_x, grid_y);
1479
+ biasAddRelu_bprop<T, 4><<<grid, block, 0, stream>>>(
1480
+ y, dy, yfeat, batch_size, dy_gemm, db_scratch, semaphores, dbias);
1481
+ }
1482
+ } else if (activation == 2) { // sigmoid
1483
+ // activation backward
1484
+ int num_blocks = 0;
1485
+ int num_SMs = at::cuda::getCurrentDeviceProperties()->multiProcessorCount;
1486
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, Sigmoid_bprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1487
+ Sigmoid_bprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(dy, y, batch_size, yfeat, dy_gemm);
1488
+
1489
+ // bgrad, from dy_gemm
1490
+ dim3 block(BIAS_RELU_BW_NTHREADS_X, BIAS_RELU_BW_NTHREADS_Y);
1491
+ int grid_x, grid_y;
1492
+ cudaMemsetAsync(semaphores, 0, semaphore_size, stream);
1493
+
1494
+ int block_x = BIAS_RELU_BW_NTHREADS_X;
1495
+ int block_y = BIAS_RELU_RED_PER_THREAD * BIAS_RELU_BW_NTHREADS_Y;
1496
+ get_biasAddRelu_bprop_grid_size(yfeat, batch_size, block_x, block_y, &grid_x, &grid_y);
1497
+ dim3 grid(grid_x, grid_y);
1498
+ biasAdd_bprop<T, 4><<<grid, block, 0, stream>>>(
1499
+ dy_gemm, yfeat, batch_size, db_scratch, semaphores, dbias);
1500
+ }
1501
+ } else { // no bias below
1502
+ if (activation == 0) {
1503
+ // no separate activation bprop needed; point dy_gemm directly at dy
1504
+ dy_gemm = dy;
1505
+ } else if (activation == 1) { // relu
1506
+ int num_blocks = 0;
1507
+ int num_SMs = at::cuda::getCurrentDeviceProperties()->multiProcessorCount;
1508
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, Relu_bprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1509
+ Relu_bprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(dy, y, batch_size, yfeat, dy_gemm);
1510
+ } else if (activation == 2) { // sigmoid
1511
+ int num_blocks = 0;
1512
+ int num_SMs = at::cuda::getCurrentDeviceProperties()->multiProcessorCount;
1513
+ cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks, Sigmoid_bprop<T>, BIAS_RELU_FW_NTHREADS, 0);
1514
+ Sigmoid_bprop<<<num_SMs*num_blocks, BIAS_RELU_FW_NTHREADS, 0, stream>>>(dy, y, batch_size, yfeat, dy_gemm);
1515
+ }
1516
+ }
1517
+
1518
+ cublasStatus_t cublas_status;
1519
+ // Call GEMM dgrad
1520
+ if (layer > 0 || requires_grad == 1) {
1521
+ cublas_status = mlp_gemm(
1522
+ handle,
1523
+ CUBLAS_OP_N,
1524
+ CUBLAS_OP_N,
1525
+ xfeat,
1526
+ batch_size,
1527
+ yfeat,
1528
+ &one,
1529
+ weight,
1530
+ xfeat,
1531
+ dy_gemm,
1532
+ yfeat,
1533
+ &zero,
1534
+ dx,
1535
+ xfeat);
1536
+
1537
+ if (cublas_status != CUBLAS_STATUS_SUCCESS) {
1538
+ printf("GEMM dgrad failed with %d\n", cublas_status);
1539
+ return 1;
1540
+ }
1541
+ }
1542
+
1543
+ // Call GEMM wgrad
1544
+ cublas_status = mlp_gemm(
1545
+ handle,
1546
+ CUBLAS_OP_N,
1547
+ CUBLAS_OP_T,
1548
+ xfeat,
1549
+ yfeat,
1550
+ batch_size,
1551
+ &one,
1552
+ x,
1553
+ xfeat,
1554
+ dy_gemm,
1555
+ yfeat,
1556
+ &zero,
1557
+ dweight,
1558
+ xfeat);
1559
+
1560
+ if (cublas_status != CUBLAS_STATUS_SUCCESS) {
1561
+ printf("GEMM wgrad failed with %d\n", cublas_status);
1562
+ return 1;
1563
+ }
1564
+ }
1565
+
1566
+ return 0;
1567
+ }
1568
+
1569
+ // Instantiate for floating point types
1570
+ template int mlp_fp<float>(
1571
+ float* X,
1572
+ int input_features,
1573
+ int batch_size,
1574
+ float** WPtr,
1575
+ int num_layers,
1576
+ int* output_features,
1577
+ float** BPtr,
1578
+ float* Y,
1579
+ float* reserved_space,
1580
+ int use_bias,
1581
+ int activation,
1582
+ void* lt_workspace);
1583
+
1584
+ template int mlp_bp<float>(
1585
+ float* X,
1586
+ float* Y,
1587
+ int input_features,
1588
+ int batch_size,
1589
+ float** WPtr,
1590
+ int num_layers,
1591
+ int* output_features,
1592
+ float* dY,
1593
+ float* reserved_space,
1594
+ float* work_space,
1595
+ float* dX,
1596
+ float** dwPtr,
1597
+ float** dbPtr,
1598
+ bool requires_grad,
1599
+ int use_bias,
1600
+ int activation);
1601
+
1602
+ template int mlp_fp<at::Half>(
1603
+ at::Half* X,
1604
+ int input_features,
1605
+ int batch_size,
1606
+ at::Half** WPtr,
1607
+ int num_layers,
1608
+ int* output_features,
1609
+ at::Half** BPtr,
1610
+ at::Half* Y,
1611
+ at::Half* reserved_space,
1612
+ int use_bias,
1613
+ int activation,
1614
+ void* lt_workspace);
1615
+
1616
+ template int mlp_bp<at::Half>(
1617
+ at::Half* X,
1618
+ at::Half* Y,
1619
+ int input_features,
1620
+ int batch_size,
1621
+ at::Half** WPtr,
1622
+ int num_layers,
1623
+ int* output_features,
1624
+ at::Half* dY,
1625
+ at::Half* reserved_space,
1626
+ at::Half* work_space,
1627
+ at::Half* dX,
1628
+ at::Half** dwPtr,
1629
+ at::Half** dbPtr,
1630
+ bool requires_grad,
1631
+ int use_bias,
1632
+ int activation);
1633
+
1634
+ template int mlp_fp<double>(
1635
+ double* X,
1636
+ int input_features,
1637
+ int batch_size,
1638
+ double** WPtr,
1639
+ int num_layers,
1640
+ int* output_features,
1641
+ double** BPtr,
1642
+ double* Y,
1643
+ double* reserved_space,
1644
+ int use_bias,
1645
+ int activation,
1646
+ void* lt_workspace);
1647
+
1648
+ template int mlp_bp<double>(
1649
+ double* X,
1650
+ double* Y,
1651
+ int input_features,
1652
+ int batch_size,
1653
+ double** WPtr,
1654
+ int num_layers,
1655
+ int* output_features,
1656
+ double* dY,
1657
+ double* reserved_space,
1658
+ double* work_space,
1659
+ double* dX,
1660
+ double** dwPtr,
1661
+ double** dbPtr,
1662
+ bool requires_grad,
1663
+ int use_bias,
1664
+ int activation);
1665
+
1666
+ template size_t get_mlp_bp_workspace_in_bytes<float>(
1667
+ int batch_size,
1668
+ int num_layers,
1669
+ const int* output_features);
1670
+ template size_t get_mlp_bp_workspace_in_bytes<at::Half>(
1671
+ int batch_size,
1672
+ int num_layers,
1673
+ const int* output_features);
1674
+ template size_t get_mlp_bp_workspace_in_bytes<double>(
1675
+ int batch_size,
1676
+ int num_layers,
1677
+ const int* output_features);
1678
+