
TensorRT .engine PoC: inference-time crash via serialized BatchedNMSDynamic_TRT state

Summary

This repository contains a proof-of-concept TensorRT .engine file that deserializes successfully but crashes the host process during normal inference.

  • baseline.engine loads and executes successfully.
  • poc.engine also loads successfully and creates an execution context, but crashes the process during inference because the serialized BatchedNMSDynamic_TRT plugin field topK was patched to 0.

The model interface, input tensors, and host-side API usage are identical in both cases.
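The kind of byte-level patch described above can be illustrated on a synthetic buffer. This is a minimal sketch only: the offset, the little-endian int32 encoding, and the surrounding layout are all assumptions for illustration, not the actual serialized format of BatchedNMSDynamic_TRT state inside a TensorRT engine.

```python
import struct

def patch_int32_field(blob: bytes, offset: int, new_value: int) -> bytes:
    """Overwrite a little-endian int32 at `offset` in a serialized blob.

    Illustrative only: the real offset and encoding of the topK plugin
    field inside a .engine file are engine-version specific and are
    assumed here, not taken from the TensorRT format documentation.
    """
    return blob[:offset] + struct.pack("<i", new_value) + blob[offset + 4:]

# Synthetic "serialized plugin state": pretend topK=1000 lives at offset 8.
blob = b"HDR\x00PLG\x00" + struct.pack("<i", 1000) + b"\x00tail"
patched = patch_int32_field(blob, 8, 0)  # set topK to 0, as in poc.engine
assert struct.unpack_from("<i", patched, 8)[0] == 0
```

Because the rest of the blob is untouched, such a patch survives any checksum-free deserialization path while still changing the value the plugin later uses at inference time.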

Files

  • baseline.engine: clean reference engine
  • poc.engine: malicious PoC engine
  • manifest.json: SHA-256 hashes for both engines
  • verification-rerun.json: verification output from a fresh rerun
  • verify_bounty_poc.py: reproduction script
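Before running either engine, the files can be checked against manifest.json. A sketch, assuming a simple {filename: hex-digest} layout for the manifest; the repository's actual schema may differ:

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: str) -> dict:
    """Compare each listed file's hash to its manifest entry.

    Assumes manifest.json maps filenames to hex SHA-256 digests.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    return {name: sha256_file(name) == digest
            for name, digest in manifest.items()}
```

A result of {"baseline.engine": True, "poc.engine": True} confirms the local copies match the published hashes before reproduction.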

Reproduction

```shell
python verify_bounty_poc.py \
  --baseline baseline.engine \
  --candidate poc.engine \
  --mode crash \
  --output-json verification-local.json
```

Expected Result

  • baseline.engine: success
  • poc.engine: process exits with SIGSEGV / return code -11
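The -11 return code follows Python's subprocess convention: a child killed by signal N reports returncode -N. A small helper for classifying the outcome of a run; the commented invocation reuses the script and flags from the Reproduction section and requires the engine files to be present:

```python
import signal

def classify_returncode(code: int) -> str:
    """Map a subprocess return code to a human-readable outcome."""
    if code == 0:
        return "success"
    if code < 0:
        try:
            # Negative codes mean death by signal, e.g. -11 -> SIGSEGV.
            return f"killed by {signal.Signals(-code).name}"
        except ValueError:
            return f"killed by signal {-code}"
    return f"exited with status {code}"

# Example (requires the engine files from this repository):
# import subprocess, sys
# result = subprocess.run([sys.executable, "verify_bounty_poc.py",
#                          "--baseline", "baseline.engine",
#                          "--candidate", "poc.engine", "--mode", "crash"])
# print(classify_returncode(result.returncode))

assert classify_returncode(-11) == "killed by SIGSEGV"
assert classify_returncode(0) == "success"
```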

Impact

A malicious TensorRT .engine file can embed crafted built-in plugin metadata that passes deserialization and context creation, then crashes the host process during inference. Any service that loads untrusted .engine files is therefore exposed to denial of service even if it validates that the engine loads successfully.
