Keras Native (.keras) Scanner Bypass PoC
Vulnerability
A malicious .keras file achieves arbitrary code execution (ACE) when loaded via
keras.models.load_model(..., safe_mode=False) or after calling
keras.config.enable_unsafe_deserialization(), while bypassing modelscan 0.8.8.
Scanner Bypass Details
modelscan's KerasLambdaDetectScan only checks for class_name == "Lambda" in the
top-level config.layers list. It does NOT detect:
- __lambda__ configs embedded in other layer parameters (e.g., a Dense layer's activation)
- Lambda layers nested inside container models (Sequential/Functional)
This PoC hides a __lambda__ ACE payload inside a Dense layer's activation field.
The scanner sees class_name: "Dense" and passes it.
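The flawed check can be sketched as follows. This is illustrative pseudocode of the behavior described above, not modelscan's actual source, and the JSON fragment is a simplified stand-in for a real .keras config:

```python
import json

def naive_lambda_scan(config_json: str) -> list[str]:
    """Mimics a top-level-only check: inspect class_name of each layer
    in config.layers and nothing else."""
    config = json.loads(config_json)
    findings = []
    for layer in config.get("config", {}).get("layers", []):
        if layer.get("class_name") == "Lambda":
            findings.append("Lambda layer found")
    return findings

# A Dense layer whose activation field smuggles a serialized lambda is
# invisible to this check, because class_name at the top level is "Dense".
dense_with_payload = json.dumps({
    "config": {"layers": [{
        "class_name": "Dense",
        "config": {"activation": {"class_name": "__lambda__",
                                  "config": "..."}},
    }]}
})
print(naive_lambda_scan(dense_with_payload))  # → []
```

The scan reports nothing, even though the serialized lambda sits one level deeper in the same config.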
Reproduction
```python
import keras
import numpy as np

# Method 1: explicit safe_mode=False
model = keras.models.load_model("malicious_model.keras", safe_mode=False)

# Method 2: enable_unsafe_deserialization (commonly used)
keras.config.enable_unsafe_deserialization()
model = keras.models.load_model("malicious_model.keras")

# Trigger ACE on inference (creates /tmp/keras_ace_proof.txt)
model.predict(np.random.randn(1, 10))
```
Scanner Results
```shell
modelscan -p malicious_model.keras
# No issues found!
```
Impact
Users who trust modelscan to detect malicious Keras models are vulnerable.
The enable_unsafe_deserialization() call is extremely common in ML pipelines
and is recommended by Keras's own error messages.
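A scan that actually covered this case would have to walk the entire config tree rather than only the top-level layer list. A minimal sketch of such a recursive check (a hypothetical helper, not part of modelscan):

```python
# Flag any node, at any depth, whose class_name indicates a serialized
# lambda, including payloads nested inside layer parameters.
SUSPICIOUS = {"Lambda", "__lambda__"}

def deep_lambda_scan(node) -> bool:
    if isinstance(node, dict):
        if node.get("class_name") in SUSPICIOUS:
            return True
        return any(deep_lambda_scan(v) for v in node.values())
    if isinstance(node, list):
        return any(deep_lambda_scan(v) for v in node)
    return False

# The same Dense-with-hidden-activation payload that the top-level
# check misses is caught here.
payload = {"config": {"layers": [{"class_name": "Dense",
    "config": {"activation": {"class_name": "__lambda__"}}}]}}
print(deep_lambda_scan(payload))  # → True
```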
Versions
- Keras 3.13.2
- modelscan 0.8.8
- Python 3.12