
This repository contains a security proof-of-concept demonstrating arbitrary code execution via a crafted .keras model file. Access is restricted to authorized security researchers. By requesting access you acknowledge this is for responsible disclosure purposes only.


PoC: Keras RWKV Tokenizer eval() — safe_mode=True Bypass

Severity: Critical
CWE: CWE-502 (Deserialization of Untrusted Data)
Affected: Keras >= 3.0 with keras-hub >= 0.26.0

Vulnerability Summary

A malicious .keras model file achieves arbitrary code execution when loaded with keras.saving.load_model(), even with safe_mode=True (the default and the only built-in safety mechanism).

The attack exploits an eval() call inside the RWKVTokenizer class from keras-hub, which is one of four packages allowlisted by Keras during deserialization. The vocabulary data stored in config.json is passed directly to Python's eval(), enabling arbitrary code execution during model loading.
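The core hazard can be shown in isolation. The snippet below is a generic illustration of eval() on attacker-controlled strings, not the keras-hub code path itself; the entry strings are made up for demonstration:

```python
# A tokenizer vocabulary entry is expected to be a harmless literal,
# but eval() will happily execute any Python expression it is given.

expected_entry = "b'hello'"                     # what a benign vocab line looks like
malicious_entry = "__import__('os').getpid()"   # any expression with side effects

print(eval(expected_entry))    # b'hello' -- the intended parse
print(eval(malicious_entry))   # arbitrary code runs during "parsing"
```

This is exactly the pattern the tokenizer applies to every vocabulary entry in the model config, so execution happens as a side effect of deserialization.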

Attack Chain

  1. Attacker crafts a .keras ZIP containing config.json with RWKVTokenizer class
  2. Vocabulary entries contain Python code instead of token data
  3. Victim loads model: keras.saving.load_model("model.keras") (all defaults)
  4. Keras resolves RWKVTokenizer via allowlist (keras-hub is trusted)
  5. RWKVTokenizer.__init__() calls eval() on each vocabulary entry
  6. Arbitrary code executes before any error or warning

Why safe_mode=True Does Not Help

  • No __lambda__ appears in the config (the construct safe_mode blocks)
  • keras_hub is in the allowlist, so the class passes safety checks
  • The vulnerability is in the constructor of an allowlisted class
  • Keras validates which classes are loaded, not what those classes do with config data

Reproduction

# Install dependencies
pip install keras keras-hub

# Generate malicious .keras file
python3 poc_generator.py

# Verify code execution with safe_mode=True
python3 poc_verify.py

Or run the self-contained script:

bash reproduce.sh

Expected Output

[+] Created malicious .keras file: poc_rwkv_ace.keras
[+] Payload: writes 'RWKV_SAFE_MODE_BYPASS_ACE' to /tmp/rwkv_keras_ace_proof.txt

Load error (expected): Expected a model.weights.h5 or model.weights.npz file.

>>> ARBITRARY CODE EXECUTION CONFIRMED <<<
>>> Marker file content: RWKV_SAFE_MODE_BYPASS_ACE
>>> safe_mode=True did NOT prevent execution

The model load raises an error afterward (missing weights file), but the malicious code has already executed during config deserialization, before any weights loading occurs. The PoC writes a marker file to /tmp/ to prove execution.

Files

  • poc_generator.py: Generates the malicious poc_rwkv_ace.keras file
  • poc_verify.py: Loads the file with safe_mode=True and checks for code execution
  • reproduce.sh: One-command reproduction script
  • SUBMISSION.md: Full vulnerability writeup

Root Cause

eval() calls at keras_hub/src/models/rwkv7/rwkv7_tokenizer.py lines 117 and 275 execute arbitrary Python code from attacker-controlled vocabulary data in the model config.

Suggested Fix

Immediate: Replace eval() with ast.literal_eval() in RWKVTokenizerBase.__init__() and RWKVTokenizer.set_vocabulary().
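A minimal sketch of why this fix works: ast.literal_eval() parses Python literals only and raises on any expression with side effects. This is illustrative, not the actual keras-hub patch:

```python
import ast

benign = "b'\\x00token'"                      # a literal vocab entry parses fine
malicious = "__import__('os').getpid()"       # an expression, not a literal

print(ast.literal_eval(benign))               # b'\x00token'

try:
    ast.literal_eval(malicious)               # rejected before anything executes
except ValueError as exc:
    print("rejected:", exc)
```

The swap is drop-in for legitimate vocabularies, since valid token entries are plain literals.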

Architectural: Audit all classes in the four allowlisted packages (keras, keras_hub, keras_cv, keras_nlp) for dangerous calls (eval, exec, open, subprocess, os.system) in constructors and from_config().
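Such an audit can be partially mechanized with a static scan for dangerous call names. The following is a hedged sketch of the idea (function name and flagged set are illustrative, not an official tool), not a substitute for manual review:

```python
import ast

# Call names worth flagging in allowlisted deserialization paths; extend as needed.
DANGEROUS = {"eval", "exec", "open", "system"}

def find_dangerous_calls(source: str, filename: str = "<src>"):
    """Return (lineno, name) for each call to a flagged bare name or attribute."""
    hits = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else (
                fn.attr if isinstance(fn, ast.Attribute) else None)
            if name in DANGEROUS:
                hits.append((node.lineno, name))
    return hits

sample = "def load(vocab):\n    return [eval(x) for x in vocab]\n"
print(find_dangerous_calls(sample))  # [(2, 'eval')]
```

Run over every from_config() and constructor in the allowlisted packages, this surfaces exactly the pattern exploited here; a human must still judge which hits take attacker-controlled input.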

Tested Versions

  • Keras 3.13.2
  • keras-hub 0.26.0
  • Python 3.12