Heap-buffer-overflow WRITE in TFLite ExpandDims kernel via crafted .tflite model

Summary

A crafted .tflite model triggers a heap-buffer-overflow WRITE of 136 bytes in the TFLite ExpandDims kernel's Eval() function (tensorflow/lite/kernels/expand_dims.cc:123). The ExpandDims kernel calls memcpy to copy tensor data from input to output, but the output buffer was allocated as only 128 bytes by SimpleMemoryArena, while the memcpy copies 136 bytes (8 bytes past the end). The kernel trusts that the arena-allocated output buffer is correctly sized for the copy, but a crafted model causes a mismatch between the allocated output size and the actual data size being copied.

The crafted model passes TFLite's FlatBuffers Verifier (VerifyAndBuildFromBuffer), meaning this is a bug in TFLite's kernel logic, not the model parser. The crash occurs during Invoke().

  • CWE-787: Out-of-bounds Write
  • Severity: Critical
  • Impact: Heap corruption via controlled WRITE -- potential arbitrary code execution

Affected Software

| Component       | Version        |
|-----------------|----------------|
| TensorFlow Lite | 2.20.0 (latest) |

Likely all versions containing the ExpandDims kernel implementation are affected.

Root Cause

The Allocation vs. Write Mismatch

The ExpandDims kernel reshapes an input tensor by inserting a new dimension of size 1. During inference:

  1. SimpleMemoryArena::Commit() allocates a 128-byte region for the output tensor via AlignedAlloc(). This allocation size is computed during AllocateTensors() based on the arena planner's analysis of tensor lifetimes and sizes.

  2. expand_dims::Eval() at expand_dims.cc:123 calls memcpy to copy the input tensor data directly to the output tensor buffer. Because ExpandDims is a purely shape-changing operation, this is expected to be a simple data passthrough.

  3. The memcpy copies 136 bytes into the 128-byte output buffer, writing 8 bytes past the end of the allocation.

The root cause is that the crafted model creates a mismatch between the arena-planned output buffer size (128 bytes) and the actual input tensor data size that Eval() copies (136 bytes). The ExpandDims kernel does not validate that the output buffer is large enough before performing the memcpy.
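To make the broken invariant concrete, here is a small numpy sketch. The specific shapes are hypothetical (the PoC's actual shapes are not given in this report) but are chosen to reproduce the observed 136-vs-128 byte mismatch:

```python
import numpy as np

# ExpandDims only inserts a size-1 dimension, so the element count -- and
# therefore the byte size -- of input and output must always match:
x = np.zeros((17, 2), dtype=np.float32)   # 17 * 2 * 4 = 136 bytes
y = np.expand_dims(x, 0)                  # shape (1, 17, 2), still 136 bytes
assert y.nbytes == x.nbytes

# Hypothetical illustration of the crafted mismatch: an output tensor whose
# claimed shape yields only 128 bytes cannot hold the 136-byte input.
claimed_output_bytes = 1 * 16 * 2 * 4     # 128 bytes
assert claimed_output_bytes < x.nbytes    # 8 bytes short -- the overflow
```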

Vulnerable Code Path

tflite::impl::Interpreter::Invoke()
  → Subgraph::Invoke()
    → Subgraph::InvokeImpl()
      → expand_dims::Eval()          [expand_dims.cc:123]
        → memcpy()
          → __asan_memcpy             [WRITE of 136 bytes into 128-byte region]

Why the Verifier Does Not Help

The FlatBuffers Verifier validates the structural integrity of the .tflite file format (offsets, sizes, field types). It does not validate semantic correctness of tensor shapes or the relationship between input and output sizes for individual kernels. The ExpandDims kernel trusts that the arena has allocated a correctly-sized output buffer, but a crafted model violates this assumption.

ASAN Output

==93777==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x5100000000c0 at pc 0x61e30d65c2a8 bp 0x7fff20c0bdb0 sp 0x7fff20c0b570
WRITE of size 136 at 0x5100000000c0 thread T0
    #0 0x61e30d65c2a7 in __asan_memcpy
    #1 0x61e30e182e7d in tflite::ops::builtin::expand_dims::Eval(TfLiteContext*, TfLiteNode*) tensorflow/lite/kernels/expand_dims.cc:123:5
    #2 0x61e30d7dfe84 in tflite::Subgraph::InvokeImpl() tensorflow/lite/core/subgraph.cc:1726:18
    #3 0x61e30d7dcce0 in tflite::Subgraph::Invoke() tensorflow/lite/core/subgraph.cc:1619:17
    #4 0x61e30d7209d6 in tflite::impl::Interpreter::Invoke() tensorflow/lite/core/interpreter.cc:243:3
    #5 0x61e30d69f65e in main reproduce.cpp:25

0x5100000000c0 is located 0 bytes after 128-byte region [0x510000000040,0x5100000000c0)
allocated by thread T0 here:
    #0 0x61e30d65ec26 in aligned_alloc
    #1 0x61e30eebfd7d in (anonymous namespace)::AlignedAlloc(unsigned long, unsigned long) tensorflow/lite/simple_memory_arena.cc:111:31
    #2 0x61e30eebfd7d in (anonymous namespace)::AlignedRealloc(tflite::PointerAlignedPointerPair const&, unsigned long, unsigned long, unsigned long) tensorflow/lite/simple_memory_arena.cc:137:7
    #3 0x61e30eebfd7d in tflite::ResizableAlignedBuffer::Resize(unsigned long) tensorflow/lite/simple_memory_arena.cc:166:21
    #4 0x61e30eec6846 in tflite::SimpleMemoryArena::Commit(bool*) tensorflow/lite/simple_memory_arena.cc:291:43
    #5 0x61e30ee5fc07 in tflite::ArenaPlanner::Commit(bool*) tensorflow/lite/arena_planner.cc:433:3
    #6 0x61e30ee588bf in tflite::ArenaPlanner::ExecuteAllocations(int, int) tensorflow/lite/arena_planner.cc:367:3
    #7 0x61e30d7c1074 in tflite::Subgraph::PrepareOpsAndTensors() tensorflow/lite/core/subgraph.cc:1570:3
    #8 0x61e30d7b9c64 in tflite::Subgraph::AllocateTensors() tensorflow/lite/core/subgraph.cc:1013:3

SUMMARY: AddressSanitizer: heap-buffer-overflow in __asan_memcpy

Key facts:

  • A 136-byte WRITE into a 128-byte allocation -- the copy runs 8 bytes past the end
  • The faulting address is 0 bytes after the allocated region, i.e. the first out-of-bounds byte immediately follows the buffer
  • The buffer was allocated by SimpleMemoryArena::Commit() via AlignedAlloc() -- standard TFLite tensor allocation
  • The overwrite occurs in expand_dims::Eval() via memcpy in the ExpandDims kernel
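The two key numbers can be pulled out of the ASAN report mechanically; a small sketch whose regexes target the exact report format shown above:

```python
import re

# Excerpt of the ASAN report above
report = """\
WRITE of size 136 at 0x5100000000c0 thread T0
0x5100000000c0 is located 0 bytes after 128-byte region [0x510000000040,0x5100000000c0)
"""

write_size = int(re.search(r"WRITE of size (\d+)", report).group(1))
region_size = int(re.search(r"(\d+)-byte region", report).group(1))
overflow = write_size - region_size
print(write_size, region_size, overflow)   # 136 128 8
```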

Attack Scenario

An attacker crafts a .tflite file with an ExpandDims operator whose tensor shape metadata causes the arena to allocate an undersized output buffer relative to the data actually copied during inference. The malicious file:

  1. Passes the FlatBuffers Verifier (VerifyAndBuildFromBuffer) -- the model is structurally valid
  2. Builds and resolves successfully via InterpreterBuilder
  3. Allocates tensors without error via AllocateTensors()
  4. Corrupts the heap during Invoke() with a 136-byte out-of-bounds write

Affected applications:

  • Any mobile inference application loading untrusted .tflite models
  • TFLite Serving and model evaluation pipelines
  • Edge ML deployments (Android, iOS, embedded devices)
  • Model registries and validation services
  • Jupyter/Colab notebooks loading shared TFLite models

Severity justification: This is a heap-buffer-overflow WRITE, not a read. Although only the final 8 bytes of the 136-byte copy land out of bounds, the write is attacker-influenced and lands directly on whatever the allocator places after the output tensor. ExpandDims is a commonly used TFLite operation that appears in many real-world models for shape manipulation, making the attack surface wide. Heap corruption of this kind enables:

  • Overwriting heap metadata for arbitrary write primitives
  • Overwriting adjacent allocations (function pointers, vtables, tensors)
  • Heap grooming scenarios that place a sensitive allocation immediately after the output tensor so the out-of-bounds bytes corrupt it
  • Potential remote code execution in applications that load untrusted models

Proof of Concept

PoC File

poc.tflite (608 bytes) -- a structurally valid TFLite model containing an ExpandDims operator with crafted tensor shape metadata.

Reproduction via Python API

import tensorflow as tf

# Load the crafted model -- passes FlatBuffers Verifier
interpreter = tf.lite.Interpreter(model_path="poc.tflite")

# Allocate tensors -- succeeds without error
interpreter.allocate_tensors()

# This triggers the heap-buffer-overflow WRITE
interpreter.invoke()  # CRASH: heap corruption / SEGV / SIGABRT
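Because the overflow corrupts the heap of the hosting process, running the four lines above directly will take down the Python interpreter (or a notebook kernel). A hedged harness sketch that confines the crash to a child process; run_poc.py is a hypothetical wrapper script containing the reproduction above:

```python
import subprocess
import sys

def runs_cleanly(cmd):
    """Run cmd in a child process; True iff it exits with status 0."""
    return subprocess.run(cmd, capture_output=True).returncode == 0

# Hypothetical usage -- the PoC is expected to crash the child, not us:
# assert not runs_cleanly([sys.executable, "run_poc.py", "poc.tflite"])
```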

Reproduction via C++ (ASAN build)

# Step 1: Clone TensorFlow
git clone --depth 1 --branch v2.20.0 https://github.com/tensorflow/tensorflow.git tf-src

# Step 2: Build TFLite with ASAN
mkdir build-asan && cd build-asan
cmake ../tf-src/tensorflow/lite \
  -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
  -DCMAKE_C_FLAGS="-fsanitize=address -fno-omit-frame-pointer -g" \
  -DCMAKE_CXX_FLAGS="-fsanitize=address -fno-omit-frame-pointer -g" \
  -DTFLITE_ENABLE_XNNPACK=ON -DTFLITE_ENABLE_GPU=OFF \
  -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build . -j$(nproc)
cd ..

# Step 3: Compile reproducer
cat > reproduce.cpp << 'EOF'
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <vector>
#include "tensorflow/lite/model_builder.h"
#include "tensorflow/lite/interpreter_builder.h"
#include "tensorflow/lite/kernels/register.h"

int main(int argc, char** argv) {
    if (argc != 2) { fprintf(stderr, "Usage: %s <model.tflite>\n", argv[0]); return 1; }

    // Read model into buffer
    std::ifstream f(argv[1], std::ios::binary);
    std::vector<char> buf((std::istreambuf_iterator<char>(f)), {});

    // Use VerifyAndBuildFromBuffer -- the model passes the verifier
    auto model = tflite::FlatBufferModel::VerifyAndBuildFromBuffer(buf.data(), buf.size());
    if (!model) { fprintf(stderr, "VerifyAndBuildFromBuffer failed\n"); return 0; }

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter) { fprintf(stderr, "InterpreterBuilder failed\n"); return 0; }

    // AllocateTensors succeeds -- arena allocates the undersized output buffer
    interpreter->AllocateTensors();

    // Invoke triggers the heap-buffer-overflow WRITE in expand_dims::Eval()
    interpreter->Invoke();
    return 0;
}
EOF

# Step 4: Build and run
clang++ -fsanitize=address -g -O1 \
  -I tf-src -I build-asan/flatbuffers/include \
  reproduce.cpp \
  -Wl,--start-group build-asan/libtensorflow-lite.a \
  $(find build-asan -name "*.a" ! -name "libtensorflow-lite.a" | sort) \
  -Wl,--end-group -Wl,--allow-multiple-definition \
  -lpthread -ldl -lm -o reproduce_tflite

./reproduce_tflite poc.tflite
# Expected: AddressSanitizer: heap-buffer-overflow WRITE

Suggested Fix

Option A: Validate output tensor size before memcpy in Eval

File: tensorflow/lite/kernels/expand_dims.cc

Add a bounds check before the memcpy in Eval() to verify that the output buffer is large enough to hold the input data:

TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input;
  TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, kInput, &input));
  TfLiteTensor* output;
  TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, kOutput, &output));

  // Validate output buffer can hold input data before copying
  if (output->bytes < input->bytes) {
    TF_LITE_KERNEL_LOG(context,
        "ExpandDims: output buffer too small (%zu) for input data (%zu)",
        output->bytes, input->bytes);
    return kTfLiteError;
  }

  memcpy(output->data.raw, input->data.raw, input->bytes);
  return kTfLiteOk;
}

Option B: Ensure arena allocation matches input tensor size during Prepare

Audit expand_dims::Prepare() to ensure that the output tensor size computed for the arena planner exactly matches the input tensor size. Since ExpandDims only changes shape (not data), the output must always be the same byte size as the input.
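The invariant Option B must enforce can be sketched as a reference model of the shape computation (a hedged illustration, not the actual Prepare() code):

```python
import math

def expand_dims_output_shape(input_shape, axis):
    """Insert a size-1 dimension at `axis` (negative axes count from the end)."""
    ndim = len(input_shape)
    if axis < 0:
        axis += ndim + 1
    if not 0 <= axis <= ndim:
        raise ValueError("axis out of range")
    return input_shape[:axis] + [1] + input_shape[axis:]

# The element count -- and hence the byte size -- never changes:
shape = [17, 2]
out = expand_dims_output_shape(shape, 0)   # [1, 17, 2]
assert math.prod(out) == math.prod(shape)
```

If Prepare() derives the output shape this way from the input tensor (rather than trusting shape metadata in the model), the arena can never plan an undersized buffer for ExpandDims.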

Option C: Use ResizeTensor to force correct output allocation

In Prepare(), explicitly call context->ResizeTensor(context, output, ...) with dimensions derived from the input tensor, ensuring the arena allocates the correct size regardless of what the crafted model metadata claims.

Timeline

| Date       | Event |
|------------|-------|
| 2026-02-27 | Vulnerability discovered via AFL++ fuzzing (fuzz_tflite_invoke_v2 harness) |
| 2026-02-27 | Root cause analyzed, PoC minimized and verified |
| 2026-02-27 | Submission package prepared |
