| id (int64) | title (string) | body (string) | created_at (string) | user (string) | body_length (int64) | has_bug (int64) |
|---|---|---|---|---|---|---|
3,651,243,760
|
[XLA:CPU][XTile] Add first experimental integration of tiled emitter.
|
[XLA:CPU][XTile] Add first experimental integration of tiled emitter.
Can be enabled with `XLA_FLAGS="--xla_backend_extra_options=xla_cpu_enable_tiled_emitter"` (!warning! may not work as expected for now)
|
2025-11-21T11:12:37Z
|
user_450
| 207
| 0
|
3,651,236,174
|
[XTile] Enable passing fusions without gpu backend config.
|
[XTile] Enable passing fusions without gpu backend config.
This will enable us to emit cpu fusions.
|
2025-11-21T11:10:24Z
|
user_450
| 101
| 0
|
3,651,217,644
|
PR #34173: [ROCm][XLA:GPU] Rename warp to shmem_group in PackedTranspose
|
PR #34173: [ROCm][XLA:GPU] Rename warp to shmem_group in PackedTranspose
Imported from GitHub PR https://github.com/openxla/xla/pull/34173
Rename `warp` to `shmem_group` in `PackedTranspose`.
Also calculate their count as `kNumThreadsPerBlock / kNumShmemBanks` to avoid inconsistency when manually specified.
This change is NFC for any GPU in upstream. However, it fixes a performance regression in downstream for AMD GPUs caused by inconsistency between `shmem_group size`, `kNumThreadsPerBlock` and `kNumShmemBanks`. It ended up in a situation downstream where half of the launched threads per block were not utilized at all.
Update packed transpose tests to verify correct thread utilization.
Copybara import of the project:
--
390f1a7283327449f6319de6ada81b61d006b916 by Aleksei Nurmukhametov <anurmukh@amd.com>:
[XLA:GPU] Rename warp to shmem_group in PackedTranspose
Also calculate their count as kNumThreadsPerBlock / kNumShmemBanks to
avoid inconsistency when manually specified.
This change is NFC for any GPU in upstream. However, it fixes a
performance regression in downstream for AMD GPUs caused by
inconsistency between shmem_group size, kNumThreadsPerBlock and
kNumShmemBanks. It ended up in a situation downstream where half of the
launched threads per block were not utilized at all.
Update packed transpose tests to verify correct thread utilization.
Merging this change closes #34173
FUTURE_COPYBARA_INTEGRATE_REVIEW=https://github.com/openxla/xla/pull/34173 from ROCm:anurmukh/fix-packed-transpose-threads 390f1a7283327449f6319de6ada81b61d006b916
|
2025-11-21T11:04:48Z
|
user_450
| 1,580
| 0
|
3,650,988,939
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T09:56:54Z
|
user_450
| 22
| 0
|
3,650,981,405
|
[XTile] Add compatible_with_portable rules to enable CPU linking.
|
[XTile] Add compatible_with_portable rules to enable CPU linking.
|
2025-11-21T09:54:47Z
|
user_450
| 66
| 0
|
3,650,978,604
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T09:54:06Z
|
user_450
| 22
| 0
|
3,650,962,306
|
DepthwiseConv2dNativeBackpropInput causes CHECK failed in tensor_format.h on CPU
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
TF 2.21.0
### Custom code
Yes
### OS platform and distribution
Windows 11 x86_64
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Running the following valid-looking raw op crashes the Python interpreter with a C++ CHECK failed:
F tensorflow/core/util/tensor_format.h:428]
Check failed: index >= 0 && index < num_total_dims
Invalid index from the dimension: 3, 0, C
On Windows this terminates the process with:
Process finished with exit code -1073740791 (0xC0000409)
This is a native crash, not a Python exception.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_sizes = tf.constant([1, 2, 2, 1], dtype=tf.int32)
filter = tf.constant([1.0], dtype=tf.float32)
out_backprop = tf.constant([1.0], dtype=tf.float32)
tf.raw_ops.DepthwiseConv2dNativeBackpropInput(input_sizes=input_sizes, filter=filter, out_backprop=out_backprop, strides=[1, 1], padding='SAME')
```
### Relevant log output
```shell
E:\AI\miniconda\envs\tf-nightly\python.exe E:\daimajianyan\pythonProject\fl\t3.py
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1763717376.604100 25544 port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1763717383.208960 25544 port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
I0000 00:00:1763717384.579648 25544 cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
F0000 00:00:1763717384.600228 25544 tensor_format.h:428] Check failed: index >= 0 && index < num_total_dims Invalid index from the dimension: 3, 0, C
*** Check failure stack trace: ***
@ 00007FFF19CD2F52 (unknown)
@ 00007FFEEEE37EEE (unknown)
@ 00007FFEF14E6F95 (unknown)
@ 00007FFEF14EF24F (unknown)
@ 00007FFF1986BDAD (unknown)
@ 00007FFF190952A7 (unknown)
@ 00007FFF16953D4B (unknown)
@ 00007FFF1695F0F3 (unknown)
@ 00007FFF1697E4AA (unknown)
@ 00007FFF16980D1E (unknown)
@ 00007FFF190B4417 (unknown)
@ 00007FFF1695756D (unknown)
@ 00007FFF169566C5 (unknown)
@ 00007FFF1695B1FB (unknown)
@ 00007FFF1695B2CD (unknown)
@ 00007FFF1693E7A6 (unknown)
@ 00007FFF169472DF (unknown)
@ 00007FFF1CC4CCA6 (unknown)
@ 00007FFF1CBF141C (unknown)
@ 00007FFF1CBFCC88 (unknown)
@ 00007FFF1D217B0B (unknown)
@ 00007FFF1CBF0B0F (unknown)
@ 00007FFF1CBEDEB4 (unknown)
@ 00007FFF1CBF1E9D (unknown)
@ 00007FFF1C8618ED (unknown)
@ 00007FFF1CC47E9E (unknown)
@ 00007FFF1A351DAF (unknown)
@ 00007FFF1A30E933 (unknown)
@ 00007FF8125D593D (unknown)
@ 00007FF8125D5880 (unknown)
@ 00007FF8125AAE44 (unknown)
@ 00007FF801F5142D (unknown)
@ 00007FF801F0C2C8 (unknown)
@ 00007FF802020E52 (unknown)
@ 00007FF80201D19D (unknown)
@ 00007FF80201F52F (unknown)
@ 00007FF801F0C67E (unknown)
@ 00007FF801F0C3E1 (unknown)
@ 00007FF802021052 (unknown)
@ 00007FF80201BFB0 (unknown)
@ 00007FF80201F52F (unknown)
@ 00007FF801F0C67E (unknown)
@ 00007FF8020188E1 (unknown)
@ 00007FF802020E52 (unknown)
@ 00007FF80201D20E (unknown)
@ 00007FF80201F52F (unknown)
@ 00007FF8020926B1 (unknown)
@ 00007FF802092798 (unknown)
@ 00007FF802092398 (unknown)
@ 00007FF8020901BB (unknown)
@ 00007FF801E8BA8A (unknown)
@ 00007FF801E8C701 (unknown)
@ 00007FF801E8D453 (unknown)
@ 00007FF801E8D4C6 (unknown)
@ 00007FF72BF11490 (unknown)
@ 00007FF8C363E8D7 (unknown)
@ 00007FF8C4CEC53C (unknown)
Process finished with exit code -1073740791 (0xC0000409)
```
|
2025-11-21T09:49:43Z
|
user_242
| 4,692
| 1
|
3,650,245,007
|
Refactor CreateOutputLeafTpuBuffer to call DefineBuffer instead. Because of
|
Refactor CreateOutputLeafTpuBuffer to call DefineBuffer instead. Because of
error buffers this needs to take memory_space.
|
2025-11-21T05:13:28Z
|
user_450
| 123
| 0
|
3,650,176,765
|
Fix some c++ readability issues in latency hiding scheduler
|
Fix some c++ readability issues in latency hiding scheduler
- Optimized logging by using bsl::StringAppend
- Refactored HloGraphNode class, added GetMutableInstr(), removed const_cast
|
2025-11-21T04:37:33Z
|
user_450
| 185
| 0
|
3,650,153,529
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T04:23:28Z
|
user_450
| 22
| 0
|
3,650,142,341
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T04:16:51Z
|
user_450
| 22
| 0
|
3,650,139,801
|
CUDA device context initialization failure in tf.raw_ops.GatherV2 with invalid axis parameter (axis=9 for 2D tensor)
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.17 (based on traceback)
CUDA Environment: GPU-enabled system
Total GPU Memory: 51.04 GB (51041271808 bytes)
Python Version: 3.10
Current Behavior
Error Description
The application crashes during CUDA device context initialization when calling tf.raw_ops.GatherV2 with an invalid axis parameter (axis=9 for a 2D tensor). The failure occurs before any gather operation, during GPU context acquisition.
Error Log
RuntimeError: Bad StatusOr access: INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 0: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 51041271808
Steps to Reproduce
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
resource = tf.Variable([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
indices = tf.constant([0, 1], dtype=tf.int32)
dtype = tf.int32
batch_dims = 0
output = tf.raw_ops.GatherV2(params=resource, indices=indices, axis=9, batch_dims=batch_dims)
```
### Relevant log output
```shell
```
|
2025-11-21T04:15:18Z
|
user_393
| 1,507
| 1
|
3,650,127,765
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T04:08:17Z
|
user_450
| 22
| 0
|
3,650,124,743
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T04:06:47Z
|
user_450
| 22
| 0
|
3,650,119,383
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T04:03:46Z
|
user_450
| 22
| 0
|
3,650,110,298
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T03:58:20Z
|
user_450
| 22
| 0
|
3,650,100,448
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T03:52:12Z
|
user_450
| 22
| 0
|
3,650,095,417
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T03:50:02Z
|
user_450
| 22
| 0
|
3,650,095,407
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T03:50:02Z
|
user_450
| 22
| 0
|
3,650,088,986
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T03:46:50Z
|
user_450
| 22
| 0
|
3,650,084,478
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T03:44:52Z
|
user_450
| 22
| 0
|
3,650,082,592
|
Automated Code Change
|
Automated Code Change
|
2025-11-21T03:44:08Z
|
user_450
| 22
| 0
|
3,650,061,340
|
Integrate LLVM at llvm/llvm-project@423bdb2bf257
|
Integrate LLVM at llvm/llvm-project@423bdb2bf257
Updates LLVM usage to match
[423bdb2bf257](https://github.com/llvm/llvm-project/commit/423bdb2bf257)
|
2025-11-21T03:34:29Z
|
user_450
| 151
| 0
|
3,650,045,252
|
Updating the Preloaded Executables Store to handle IFRT IR executables
|
Updating the Preloaded Executables Store to handle IFRT IR executables
Add tests and refactor the testing to make the individual tests simpler and more obvious.
|
2025-11-21T03:25:22Z
|
user_450
| 162
| 0
|
3,650,021,967
|
CUDA invalid resource handle and memory copy failure in tf.nn.conv3d operation with large tensor dimensions
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (based on error patterns)
CUDA Environment: GPU-enabled system
Error Type: CUDA resource handle corruption
Python Version: 3.x
Current Behavior
Error Description
The application crashes with a core dump when executing 3D convolution operations with large tensor dimensions. The failure occurs during GPU-to-CPU memory copy operation with invalid resource handles.
Error Log tf2.10
2025-11-21 11:07:08.127020: E tensorflow/stream_executor/stream.cc:320] Error recording event in stream: Error recording CUDA event: CUDA_ERROR_INVALID_HANDLE: invalid resource handle; not marking stream as bad, as the Event object may be at fault. Monitor for further errors.
2025-11-21 11:07:08.127072: F tensorflow/core/common_runtime/gpu/gpu_util.cc:303] GPU->CPU Memcpy failed
Aborted (core dumped)
### tf2.17
RuntimeError: Bad StatusOr access: INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 0: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 51041271808
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_data = tf.random.uniform([10, 10, 10, 10, 10], minval=0, maxval=1, dtype=tf.float32)
filters_data = tf.random.uniform([3, 3, 3, 3, 5], minval=0, maxval=1, dtype=tf.float32)
strides_data = [1, 2, 2, 2, 1]
padding_data = 'SAME'
data_format_data = 'NDHWC'
dilations_data = [1, 1, 1, 1, 1]
name_data = 'conv3d_op'
output = tf.nn.conv3d(input=input_data, filters=filters_data, strides=strides_data, padding=padding_data, data_format=data_format_data, dilations=dilations_data, name=name_data)
```
### Relevant log output
```shell
```
|
2025-11-21T03:13:45Z
|
user_393
| 2,155
| 1
|
3,650,016,107
|
CUDA device context initialization failure in tf.signal.rfft2d with special characters in operation name parameter
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (with oneDNN optimization)
CUDA Environment: GPU-enabled system
Total GPU Memory: 11.5 GB (11539054592 bytes)
Python Version: 3.x
Current Behavior
Error Description
The application crashes during CUDA device context initialization when calling tf.signal.rfft2d with a name parameter containing special characters (;touch tf.signal.rfft2d_Qrfft2d.txt). The failure occurs before any FFT computation, during the GPU context acquisition phase.
Error Log tf-2.10
2025-11-21 11:06:45.363931: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 0: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 11539054592
Aborted (core dumped)
### tf2.17
RuntimeError: Bad StatusOr access: INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 0: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 51041271808
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_data = tf.random.uniform(shape=[10, 10], minval=0, maxval=1, dtype=tf.float32)
fft_length = tf.constant([10, 10], dtype=tf.int32)
result = tf.signal.rfft2d(input_tensor=input_data, fft_length=fft_length, name='txt')
```
### Relevant log output
```shell
```
|
2025-11-21T03:10:10Z
|
user_393
| 1,885
| 1
|
3,650,002,904
|
Integrate LLVM at llvm/llvm-project@fbc093588f65
|
Integrate LLVM at llvm/llvm-project@fbc093588f65
Updates LLVM usage to match
[fbc093588f65](https://github.com/llvm/llvm-project/commit/fbc093588f65)
|
2025-11-21T03:02:26Z
|
user_450
| 151
| 0
|
3,650,001,887
|
CUDA device context initialization failure during IFFT operations with extreme axis parameter values
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (with oneDNN optimization)
CUDA Environment: GPU-enabled system
Total GPU Memory: 11.5 GB (11539054592 bytes)
Python Version: 3.x
Current Behavior
Error Description
The application crashes during CUDA device context initialization when performing inverse FFT operations with extreme axis parameter values (36028797018963968). The failure occurs before the actual FFT computation, during GPU context acquisition phase.
Error Log
2025-11-21 10:59:22.617965: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 0: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 11539054592
Aborted (core dumped)
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_data = tf.constant([[1 + 2.0j, 3 + 4.0j], [5 + 6.0j, 7 + 8.0j]],
dtype=tf.complex64)
fft_length = tf.constant([2, 2], dtype=tf.int32)
axes = tf.constant([36028797018963968, 36028797018963968], dtype=tf.int32)
result = tf.signal.ifft2d(input_data, name='ifftnd_op')
```
### Relevant log output
```shell
```
|
2025-11-21T03:01:53Z
|
user_393
| 1,654
| 1
|
3,649,995,989
|
CUDA invalid resource handle errors and memory corruption in lu_solve and sparse operations with extreme tensor values
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
2025-11-21 02:56:17.456814: I tensorflow/core/util/cuda_solvers.cc:178] Creating GpuSolver handles for stream 0x27254a0
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
F0000 00:00:1763693777.496048 1463725 cuda_solvers.cc:114] Check failed: cusolverDnCreate(&cusolver_dn_handle) == CUSOLVER_STATUS_SUCCESS Failed to create cuSolverDN instance.
*** Check failure stack trace: ***
Aborted (core dumped) # try > 3times
### Standalone code to reproduce the issue
```shell
2025-11-21 10:52:26.283899: E tensorflow/stream_executor/cuda/cuda_driver.cc:1157] failed to enqueue async memcpy from device to host: CUDA_ERROR_INVALID_HANDLE: invalid resource handle; host dst: 0xc10b7a40; GPU src: 0x7fadf7e00000; size: 16=0x10
2025-11-21 10:52:26.283927: E tensorflow/stream_executor/stream.cc:320] Error recording event in stream: Error recording CUDA event: CUDA_ERROR_INVALID_HANDLE: invalid resource handle; not marking stream as bad, as the Event object may be at fault. Monitor for further errors.
2025-11-21 10:52:26.284020: F tensorflow/core/common_runtime/gpu/gpu_util.cc:303] GPU->CPU Memcpy failed
Aborted (core dumped)
(FreeFuzz) zhourongkui@supermicro:~/work$ cat /home/zhourongkui/work/2024/DLF_LLM/src1120/mutated_codes_tensorflow/tensorflow.linalg.lu_solve/2_1.py /home/zhourongkui/work/2024/
DLF_LLM/src1120/mutated_codes_tensorflow/tensorflow.raw_ops.SparseDenseCwiseMul/1_0.py
import tensorflow as tf
lower_upper = tf.constant([[3.0, 1.0], [0.0, 2.0]], dtype=tf.float32)
perm = tf.constant([0, 0], dtype=tf.int32)
rhs = tf.constant([[1.0, 0.0], [0.0, 1.0]], dtype=tf.float32)
solution = tf.linalg.lu_solve(lower_upper, perm, rhs, validate_args=True,
name='solve_linear_eqn')
```
### Relevant log output
```shell
```
|
2025-11-21T02:57:53Z
|
user_393
| 2,224
| 1
|
3,649,991,052
|
Allow setting sub-allocator visitors from `xla::GpuAllocatorConfig`
|
Allow setting sub-allocator visitors from `xla::GpuAllocatorConfig`
|
2025-11-21T02:54:26Z
|
user_450
| 68
| 0
|
3,649,977,234
|
CUDA memory corruption and invalid handle errors in tf.map_fn when using parallel_iterations in eager execution mode
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf2.10
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (based on log structure)
CUDA Version: Unknown (from driver logs)
GPU Environment: Multi-GPU system
Python Version: 3.x
Current Behavior
Error Description
The application crashes with a core dump when executing tf.map_fn with the parallel_iterations parameter in eager execution mode. The failure sequence involves:
GPU memory allocation failure (despite small request size)
Stream synchronization issues
Invalid CUDA handle errors during memory copy
Fatal GPU->CPU memcpy failure
Error Log Sequence
2025-11-21 10:41:37.494986: I tensorflow/stream_executor/cuda/cuda_driver.cc:733] failed to allocate 2.2K (2304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
WARNING:tensorflow:Setting parallel_iterations > 1 has no effect when executing eagerly. Consider calling map_fn with tf.function to execute fn in parallel.
2025-11-21 10:41:38.458887: I tensorflow/stream_executor/stream.cc:1035] [stream=0xae19b10,impl=0x3b680520] did not wait for [stream=0xbf17eea0,impl=0x3b6804f0]
2025-11-21 10:41:38.458941: E tensorflow/stream_executor/cuda/cuda_driver.cc:1157] failed to enqueue async memcpy from device to host: CUDA_ERROR_INVALID_HANDLE: invalid resource handle; host dst: 0xbfb6e980; GPU src: 0x7fec61c00200; size: 1=0x1
2025-11-21 10:41:38.458967: E tensorflow/stream_executor/stream.cc:320] Error recording event in stream: Error recording CUDA event: CUDA_ERROR_INVALID_HANDLE: invalid resource handle; not marking stream as bad, as the Event object may be at fault. Monitor for further errors.
2025-11-21 10:41:38.459028: F tensorflow/core/common_runtime/gpu/gpu_util.cc:303] GPU->CPU Memcpy failed
Aborted (core dumped)
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_data = tf.constant([1, 2, 3, 4, 5])
def square(x):
return x * x
result = tf.map_fn(fn=square, elems=input_data, dtype=tf.int32, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name='map_fn_example', fn_output_signature=tf.int32)
print(result)
```
### Relevant log output
```shell
```
|
2025-11-21T02:44:40Z
|
user_393
| 2,535
| 1
|
3,649,970,119
|
CUDA device context initialization failure when processing tensors with extreme integer values in reduction operations
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (with oneDNN optimization)
CUDA Environment: Multi-GPU system
GPU Memory: 11.5GB total (11539054592 bytes)
Python Version: 3.x
Current Behavior
Error Description
The application crashes during CUDA device context initialization when performing reduction operations on tensors containing extreme integer values (36028797018963968). The failure occurs before the actual reduction operation, during GPU context acquisition.
Error Log
2025-11-21 10:36:47.525650: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 0: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 11539054592
Aborted (core dumped)
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_tensor = tf.constant([36028797018963968, 36028797018963968])
result = tf.reduce_min(input_tensor, axis=1, keepdims=True, name='reduce_min_op')
```
### Relevant log output
```shell
```
|
2025-11-21T02:39:53Z
|
user_393
| 1,536
| 1
|
3,649,952,764
|
GPU memory exhaustion and cuFFT batched plan failure when using extreme fft_length values in FFT operations
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf2.10
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (based on log structure)
CUDA Version: Unknown (from CUDA driver logs)
GPU Environment: Multi-GPU system
Python Version: 3.x
Current Behavior
Error Description
The application crashes with a core dump when performing FFT operations with extremely large fft_length values (36028797018963968). The failure sequence involves:
GPU memory allocation failure due to excessive memory request
cuFFT batched plan initialization failure
Fatal error in CUDA FFT component
Error Log Sequence
2025-11-21 10:27:15.599582: I tensorflow/stream_executor/cuda/cuda_driver.cc:733] failed to allocate 6.31M (6617856 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2025-11-21 10:27:15.630055: E tensorflow/stream_executor/cuda/cuda_fft.cc:225] failed to make cuFFT batched plan:5
2025-11-21 10:27:15.630112: E tensorflow/stream_executor/cuda/cuda_fft.cc:430] Initialize Params: rank: 2 elem_count: 2 input_embed: 2 input_stride: 1 input_distance: 4 output_embed: 2 output_stride: 1 output_distance: 4 batch_count: 1
2025-11-21 10:27:15.630128: F tensorflow/stream_executor/cuda/cuda_fft.cc:439] failed to initialize batched cufft plan with customized allocator: Failed to make cuFFT batched plan.
Aborted (core dumped)
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
input_data = tf.constant([[1.0 + 1.0j, 2.0 + 2.0j], [3.0 + 3.0j, 4.0 + 4.0j]], dtype=tf.complex64)
fft_length = tf.constant([36028797018963968, 36028797018963968], dtype=tf.int32)
axes = tf.constant([0, 1], dtype=tf.int32)
result = tf.signal.fft2d(input_data)
```
### Relevant log output
```shell
```
|
2025-11-21T02:30:52Z
|
user_393
| 2,085
| 1
|
3,649,945,538
|
GPU memory corruption and invalid CUDA handle when using extreme negative summarize value in tf.debugging.assert_less
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (based on log structure)
CUDA Version: Unknown (from CUDA driver logs)
GPU Environment: Multi-GPU system with NVIDIA cards
Python Version: 3.x
Current Behavior
Error Description
The application crashes with a core dump when executing tf.debugging.assert_less with an extremely large negative summarize parameter (-36028797018963968). The failure sequence involves:
GPU memory allocation failure
Asynchronous memory copy errors with invalid handles
GPU utility layer fatal error
Error Log Sequence
2025-11-21 10:22:02.116225: I tensorflow/stream_executor/cuda/cuda_driver.cc:733] failed to allocate 2.2K (2304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2025-11-21 10:22:02.839022: I tensorflow/stream_executor/stream.cc:1035] [stream=0xc02540e0,impl=0xbb3c970] did not wait for [stream=0xc06ef1b0,impl=0xbb3b720]
2025-11-21 10:22:02.839103: E tensorflow/stream_executor/cuda/cuda_driver.cc:1157] failed to enqueue async memcpy from device to host: CUDA_ERROR_INVALID_HANDLE: invalid resource handle; host dst: 0x1bc7040; GPU src: 0x7fc847c00200; size: 1=0x1
2025-11-21 10:22:02.839141: E tensorflow/stream_executor/stream.cc:320] Error recording event in stream: Error recording CUDA event: CUDA_ERROR_INVALID_HANDLE: invalid resource handle; not marking stream as bad, as the Event object may be at fault. Monitor for further errors.
2025-11-21 10:22:02.839225: F tensorflow/core/common_runtime/gpu/gpu_util.cc:303] GPU->CPU Memcpy failed
Aborted (core dumped)
### Standalone code to reproduce the issue
```shell
### code
import tensorflow as tf
x = tf.constant([1, 2, 3], dtype=tf.float32)
y = tf.constant([4, 5, 6], dtype=tf.float32)
tf.debugging.assert_less(x, y, message='x should be less than y', summarize=-36028797018963968, name='assert_less_check')
```
### Relevant log output
```shell
```
|
2025-11-21T02:26:07Z
|
user_393
| 2,310
| 1
|
3,649,911,516
|
cuSolverDN initialization failure causes core dump in tf.raw_ops.Lu operation on multi-GPU system
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Version
TensorFlow Version: 2.x (based on log timestamps)
CUDA Version: Unknown (needs verification)
GPU Drivers: NVIDIA, compute capability 7.5 & 8.6
Python Version: 3.x
Current Behavior
Error Description
The application crashes with a core dump when attempting to perform LU decomposition using tf.raw_ops.Lu on a multi-GPU system. The failure occurs during cuSolverDN library initialization.
Error Log
2025-11-21 09:11:37.370928: I tensorflow/core/util/cuda_solvers.cc:179] Creating GpuSolver handles for stream 0xbfa4870
2025-11-21 09:11:37.551073: F tensorflow/core/util/cuda_solvers.cc:114] Check failed: cusolverDnCreate(&cusolver_dn_handle) == CUSOLVER_STATUS_SUCCESS Failed to create cuSolverDN instance.
Aborted (core dumped)
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
# Simple LU decomposition that triggers the issue
input_data = tf.constant([[4.0, 3.0], [6.0, 3.0]], dtype=tf.float32)
lu, p = tf.raw_ops.Lu(input=input_data, output_idx_type=tf.int32, name='txt')
```
### Relevant log output
```shell
This issue is not consistently reproducible and exhibits intermittent behavior. The core dump occurs probabilistically rather than deterministically.
```
|
2025-11-21T02:11:44Z
|
user_393
| 1,660
| 1
|
3,649,876,719
|
Integrate LLVM at llvm/llvm-project@88055b3a56c6
|
Integrate LLVM at llvm/llvm-project@88055b3a56c6
Updates LLVM usage to match
[88055b3a56c6](https://github.com/llvm/llvm-project/commit/88055b3a56c6)
|
2025-11-21T01:57:17Z
|
user_450
| 151
| 0
|
3,649,825,601
|
argument removal without building prototype
|
argument removal without building prototype
|
2025-11-21T01:26:31Z
|
user_450
| 44
| 0
|
3,649,812,056
|
BlockLSTMGrad CHECK failure on CPU
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.20.0
### Custom code
Yes
### OS platform and distribution
Kali Linux (kali-rolling)
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Calling tf.raw_ops.BlockLSTMGrad with minimal valid-rank tensors causes a fatal C++ CHECK failure:
Check failed: d < dims() (2 vs. 2)
zsh: IOT instruction python p1.py
This is a process-terminating abort() from within TensorFlow's C++ runtime.
There is no Python exception, meaning TensorFlow attempts to index a TensorShape dimension out of bounds before any validation occurs.
This crash happens:
✔ In stable TensorFlow 2.20.0
✔ In tf-nightly (latest)
✔ On CPU-only hardware
✔ With a minimal reproducible example
✔ Without any Graph/XLA/CUDA involvement
✔ Inside the BlockLSTMGradOp C++ kernel
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
seq_len_max = tf.constant(1, dtype=tf.int64)
x = tf.constant([[[1.0]]], dtype=tf.float32)
cs_prev = tf.constant([[1.0]], dtype=tf.float32)
h_prev = tf.constant([[1.0]], dtype=tf.float32)
w = tf.constant([[1.0]], dtype=tf.float32)
wci = tf.constant([1.0], dtype=tf.float32)
wcf = tf.constant([1.0], dtype=tf.float32)
wco = tf.constant([1.0], dtype=tf.float32)
b = tf.constant([1.0], dtype=tf.float32)
i = tf.constant([[1.0]], dtype=tf.float32)
cs = tf.constant([[1.0]], dtype=tf.float32)
f = tf.constant([[1.0]], dtype=tf.float32)
o = tf.constant([[1.0]], dtype=tf.float32)
ci = tf.constant([[1.0]], dtype=tf.float32)
co = tf.constant([[1.0]], dtype=tf.float32)
h = tf.constant([[1.0]], dtype=tf.float32)
cs_grad = tf.constant([[1.0]], dtype=tf.float32)
h_grad = tf.constant([[1.0]], dtype=tf.float32)
use_peephole = True
result = tf.raw_ops.BlockLSTMGrad(
seq_len_max=seq_len_max, x=x, cs_prev=cs_prev, h_prev=h_prev,
w=w, wci=wci, wcf=wcf, wco=wco, b=b,
i=i, cs=cs, f=f, o=o, ci=ci, co=co, h=h,
cs_grad=cs_grad, h_grad=h_grad, use_peephole=use_peephole
)
print(result)
```
### Relevant log output
```shell
2025-11-21 09:13:27.695138: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
2025-11-21 09:13:27.743584: F tensorflow/core/framework/tensor_shape.cc:359] Check failed: d < dims() (2 vs. 2)
zsh: IOT instruction python p1.py
```
|
2025-11-21T01:21:33Z
|
user_242
| 2,607
| 1
|
3,649,785,695
|
TensorShape CHECK failure in nightly & stable
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.20.0
### Custom code
Yes
### OS platform and distribution
Kali Linux (kali-rolling)
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Running tf.raw_ops.Unbatch() with very simple 1-D test inputs causes a native crash inside TensorFlow's C++ runtime.
Both stable TensorFlow (2.20.0) and tf-nightly abort execution with:
F tensorflow/core/framework/tensor_shape.cc:360] Check failed: d < dims() (1 vs. 1)
zsh: IOT instruction python t1.py
This is a C++ CHECK failure → abort() → SIGABRT and terminates the Python interpreter.
There is no Python-level exception, which indicates a bug in TensorFlow’s internal shape handling.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
batched = tf.constant([1], dtype=tf.int32)
batch_index = tf.constant([0], dtype=tf.int64)
result = tf.raw_ops.Unbatch(
batched_tensor=batched,
batch_index=batch_index,
id=tf.constant(0, dtype=tf.int64),
timeout_micros=0
)
print(result)
```
### Relevant log output
```shell
2025-11-21 08:58:53.256574: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
2025-11-21 08:58:53.269244: F tensorflow/core/framework/tensor_shape.cc:359] Check failed: d < dims() (1 vs. 1)
zsh: IOT instruction python t1.py
```
|
2025-11-21T01:11:05Z
|
user_242
| 1,660
| 1
|
3,649,708,236
|
CollectiveGatherV2 TensorShape CHECK failure
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.20.0
### Custom code
Yes
### OS platform and distribution
Kali Linux (kali-rolling) Linux kali 6.11.2-amd64
### Mobile device
_No response_
### Python version
Python 3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Executing tf.raw_ops.CollectiveGatherV2 on a CPU-only VMware virtual machine causes a native crash inside TensorFlow.
Before the crash, TensorFlow prints a cuInit(303) message (expected for a CPU-only machine), and then immediately hits a fatal C++ CHECK failure inside tensor_shape.cc:
2025-11-21 08:11:23.906652: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
2025-11-21 08:11:23.925656: F tensorflow/core/framework/tensor_shape.cc:587] Check failed: d < dims() (0 vs. 0)
zsh: IOT instruction python q2.py
There is no Python exception.
The entire Python process aborts with an IOT instruction (SIGABRT) due to the failed CHECK.
This indicates a bug in the TensorShape handling logic inside CollectiveGatherV2.
For scalar inputs (shape=[]), the kernel attempts to access dimension 0, producing the illegal condition 0 < 0.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
tf.raw_ops.CollectiveGatherV2(
input=tf.constant(1.0, dtype=tf.float32),
group_size=tf.constant(1, dtype=tf.int32),
group_key=tf.constant(0, dtype=tf.int32),
instance_key=tf.constant(0, dtype=tf.int32),
ordering_token=[]
)
```
### Relevant log output
```shell
2025-11-21 08:11:23.906652: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
2025-11-21 08:11:23.925656: F tensorflow/core/framework/tensor_shape.cc:587] Check failed: d < dims() (0 vs. 0)
zsh: IOT instruction python q2.py
```
|
2025-11-21T00:36:22Z
|
user_242
| 2,123
| 1
|
3,649,627,184
|
Remove unused deprecated absl::testing usages in service and stream_executor sub-folders.
|
Remove unused deprecated absl::testing usages in service and stream_executor sub-folders.
New namespace is absl_testing::
|
2025-11-21T00:00:28Z
|
user_450
| 122
| 0
|
3,649,460,684
|
Implement `CreateErrorBuffer` in pjrt c api
|
Implement `CreateErrorBuffer` in pjrt c api
|
2025-11-20T22:53:12Z
|
user_450
| 44
| 0
|
3,649,451,700
|
Refactor HloDCE to use a setter for removing dead entry parameters.
|
Refactor HloDCE to use a setter for removing dead entry parameters.
Removes `remove_dead_parameters_from_entry_computation` from the HloDCE constructor in favor of an explicit setter. This option is risky in production, so moving it ensures a safe default and prevents accidental enablement via constructor arguments.
|
2025-11-20T22:50:41Z
|
user_450
| 319
| 0
|
3,649,422,753
|
[ReplicaGroupV3][Partitioner][Utilities] cleanup iota functions for creating V2 replica groups and add test for untested function.
|
[ReplicaGroupV3][Partitioner][Utilities] cleanup iota functions for creating V2 replica groups and add test for untested function.
|
2025-11-20T22:42:43Z
|
user_450
| 131
| 0
|
3,649,410,328
|
Implement memory_space_by_kind for PjRtCApiDevice.
|
Implement memory_space_by_kind for PjRtCApiDevice.
|
2025-11-20T22:39:02Z
|
user_450
| 51
| 0
|
3,649,386,980
|
speed up xla_device_test 5x by reusing session
|
speed up xla_device_test 5x by reusing session
|
2025-11-20T22:32:11Z
|
user_450
| 47
| 0
|
3,649,373,927
|
Allow HloDCE to remove dead parameters from the entry computation.
|
Allow HloDCE to remove dead parameters from the entry computation.
This change introduces a new option `remove_dead_parameters_from_entry_computation` to `HloDCE`. When this option is enabled, HloDCE can remove parameters from the entry computation if they are dead. This is generally not allowed as it breaks the contract with the frontend but is useful for tests.
|
2025-11-20T22:28:36Z
|
user_450
| 367
| 0
|
3,649,256,835
|
Add support for JPEG XL in TensorFlow DecodeImage.
|
Add support for JPEG XL in TensorFlow DecodeImage.
|
2025-11-20T21:55:15Z
|
user_450
| 51
| 0
|
3,649,058,725
|
Update Maven package name in error messages from TF Lite to LiteRT.
|
Update Maven package name in error messages from TF Lite to LiteRT.
|
2025-11-20T20:59:25Z
|
user_450
| 68
| 0
|
3,648,762,483
|
Fix typo in `reduce_scatter_decomposer_test`.
|
Fix typo in `reduce_scatter_decomposer_test`.
|
2025-11-20T19:32:27Z
|
user_450
| 46
| 0
|
Dataset Card for GitHub Issues - TensorFlow
Dataset Details
Dataset Description
This dataset contains 50 open issues collected from the public TensorFlow GitHub repository. Each record includes the issue ID, title, body text, creation date, anonymized user ID, body length, and a flag indicating whether the issue mentions a bug. The dataset has been structured for analysis and learning purposes.
- Curated by: Lin Shi
- Language(s) (NLP): English
- License: Creative Commons Zero v1.0 Universal (CC0 1.0)
Dataset Sources
- Repository: https://github.com/tensorflow/tensorflow
Uses
Direct Use
This dataset can be used for text analysis, summarization, or bug detection exercises.
Out-of-Scope Use
Not intended for production software bug tracking or any commercial purpose. User information has been anonymized.
Dataset Structure
id: int64, unique identifier for each issue
title: string, issue title
body: string, issue content
created_at: string, creation date
user: string, anonymized user ID
body_length: int64, number of characters in the body
has_bug: int64, 1 if the body mentions 'bug', otherwise 0
Split: train, 50 examples
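As a minimal sketch, the split can be loaded and inspected with the datasets library; the repository id below is a placeholder, since the card does not state the actual hub path.
```python
from datasets import load_dataset

# Placeholder repository id -- substitute the real hub path of this dataset.
ds = load_dataset("lin-shi/github-issues-tensorflow", split="train")

print(ds.features)  # id, title, body, created_at, user, body_length, has_bug
print(len(ds))      # 50

# Quick bug-detection-style query: titles of issues whose body mentions 'bug'.
bug_reports = ds.filter(lambda row: row["has_bug"] == 1)
for title in bug_reports["title"][:5]:
    print(title)
```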
Dataset Creation
Curation Rationale
This dataset was created to provide a small, structured sample of GitHub issues for learning and experimentation in text analysis and bug detection.
Source Data
Collected via the GitHub API using the requests library. The data was filtered and structured in a pandas DataFrame.
Usernames were anonymized for privacy.
Data Collection and Processing
The latest 50 open issues were retrieved from the TensorFlow GitHub repository.
Each issue's ID, title, body, creation date, and username were extracted.
Usernames were anonymized using a hashing method to protect privacy.
Additional derived fields include body_length and has_bug.
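The snippet below is a hedged sketch of how this collection step could look; the endpoint parameters, the hashing scheme for anonymization, and the exact column handling are assumptions consistent with the description above, not the curator's actual script.
```python
import hashlib
import pandas as pd
import requests

# Retrieve the latest 50 open issues from the TensorFlow repository (assumed parameters).
resp = requests.get(
    "https://api.github.com/repos/tensorflow/tensorflow/issues",
    params={"state": "open", "per_page": 50},
)
resp.raise_for_status()

records = [
    {
        "id": issue["id"],
        "title": issue["title"],
        "body": issue["body"] or "",
        "created_at": issue["created_at"],
        # Anonymize the username with a hash; the 'user_<n>' scheme is an assumption.
        "user": "user_" + str(int(hashlib.sha256(issue["user"]["login"].encode()).hexdigest(), 16) % 1000),
    }
    for issue in resp.json()
]

df = pd.DataFrame(records)
```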
Who are the source data producers?
The source data producers are the contributors of the TensorFlow repository on GitHub. No personal information beyond publicly available usernames (which were anonymized) is included.
Annotation process
No manual annotation was performed for this dataset.
The only derived labels are the programmatically generated fields body_length and has_bug, which were computed automatically using simple text-processing rules.
No annotation tools or human annotators were involved.
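For illustration, the two derived fields can be reproduced with rules like the following; this continues from the collection sketch above, and the exact matching (for example, case handling) used by the curator is an assumption.
```python
# `df` is the pandas DataFrame built in the collection sketch above.
df["body_length"] = df["body"].str.len()
# has_bug: 1 if the body mentions 'bug', otherwise 0 (case-insensitive match assumed).
df["has_bug"] = df["body"].str.contains("bug", case=False).astype(int)
```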
Who are the annotators?
There were no human annotators.
All derived fields were generated automatically through Python code written by the dataset curator (Lin Shi).
Personal and Sensitive Information
All usernames have been anonymized, and no sensitive or private information is included.
The dataset only contains publicly available GitHub issue text.
It is intended solely for educational use as part of a TAFE coursework assignment.
Bias, Risks, and Limitations
The dataset only contains 50 open issues from one repository, so it is not representative of all GitHub projects or issue types. Derived fields like has_bug are simplistic and may not fully capture actual bugs.
Recommendations
Users should be aware that this dataset is for educational purposes only and should not be used for production bug tracking or commercial analysis.
Citation
No formal citation is available. Please cite the TensorFlow GitHub repository if needed.
Dataset Card Contact
For questions about this dataset, please contact:
- Name: Lin Shi
- Purpose: Educational use only (TAFE coursework)