How to use Moriz82/parseq-keras-lfi-poc with Keras:

```python
# Available backend options are: "jax", "torch", "tensorflow".
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras

model = keras.saving.load_model("hf://Moriz82/parseq-keras-lfi-poc")
```
# huntr MFV PoC: keras-hub PARSeqTokenizer arbitrary file read (LFI) via vocabulary path

**Do not load `evil.keras` outside an isolated VM / container.**
Loading this `.keras` file via `keras.saving.load_model()` with the default
`safe_mode=True` causes `keras_hub.models.PARSeqTokenizer.set_vocabulary` to
open and read an attacker-chosen file from the victim filesystem and embed
its contents into `tokenizer.vocabulary`.
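The flaw boils down to `set_vocabulary` treating any string it receives as a filesystem path. A minimal, self-contained sketch of that pattern (a toy stand-in for illustration, not the actual keras-hub code):

```python
# Toy stand-in for the vulnerable sink in parseq_tokenizer.py (names and
# structure simplified; this is not the real library code).
class ToyPARSeqTokenizer:
    def __init__(self, vocabulary=None):
        self.vocabulary = None
        if vocabulary is not None:
            self.set_vocabulary(vocabulary)

    def set_vocabulary(self, vocabulary):
        # The LFI: a string coming from the deserialized config is treated
        # as a path and opened directly, with no in_safe_mode() gate.
        if isinstance(vocabulary, str):
            with open(vocabulary, "r") as f:
                self.vocabulary = f.read()
        else:
            self.vocabulary = vocabulary
```

During `load_model()`, the tokenizer is reconstructed from attacker-controlled config values, so a planted `vocabulary="/etc/passwd"` turns into a read of that file on the victim machine.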
## Vuln summary

- Repo affected: keras-team/keras-hub
- File: `keras_hub/src/models/parseq/parseq_tokenizer.py`
- Sink: `PARSeqTokenizer.set_vocabulary`, lines 103-106 → `open(vocabulary, "r")` on an attacker-controlled string from the deserialized config
- Reachability: `PARSeqTokenizer` is a `KerasSaveable` subclass under `keras_hub`, reachable via the post-CVE-2025-1550 module allowlist in `keras.src.saving.serialization_lib._retrieve_class_or_fn`
- Bypasses: `safe_mode=True` (the default) → no `in_safe_mode()` check (other tokenizers, like `BytePairTokenizer.set_vocabulary`, do check)
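For contrast, the gate that the checking tokenizers apply can be sketched roughly as a standalone function (hypothetical names; the real keras-hub implementation differs in structure):

```python
# Hypothetical sketch of the safe_mode gate pattern that BytePairTokenizer-
# style tokenizers apply before reading a vocabulary path (names invented).
def set_vocabulary_guarded(tokenizer, vocabulary, in_safe_mode):
    if isinstance(vocabulary, str):
        if in_safe_mode():
            # Refuse to touch the filesystem during safe deserialization.
            raise ValueError(
                "Cannot load a vocabulary from a file path while "
                "safe_mode=True; pass the vocabulary contents directly."
            )
        with open(vocabulary, "r") as f:
            vocabulary = f.read().splitlines()
    tokenizer.vocabulary = vocabulary
```

`PARSeqTokenizer.set_vocabulary` performs the `open()` without any such check, which is what makes the default-settings load exploitable.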
## Payload behavior

The bundled `evil.keras` sets `vocabulary="/etc/passwd"`. On load, the file
contents land in `tokenizer.vocabulary` as a string. Exfiltration paths:

- the victim re-saves the model via `model.save(...)` → the file content is embedded in the new archive
- the victim hosts an inference endpoint and the captured characters affect tokenizer output → side-channel oracle

`load_model()` returns silently, with no warning and no exception.
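The re-save path can be illustrated with a toy config round-trip (hedged sketch; `ToySavedTokenizer` and its `get_config` are illustrative stand-ins, not keras-hub classes):

```python
import json

# Hedged sketch: once the sink has stuffed file contents into
# tokenizer.vocabulary, any later save serializes that string into the
# new archive's config, so the stolen bytes travel with the model.
class ToySavedTokenizer:
    def __init__(self):
        self.vocabulary = None

    def get_config(self):
        # vocabulary (now holding the victim's file contents) is emitted
        # as ordinary config data.
        return {"vocabulary": self.vocabulary}

tok = ToySavedTokenizer()
tok.vocabulary = "root:x:0:0:root:/root:/bin/bash"  # stand-in for stolen /etc/passwd data
archive_config = json.dumps(tok.get_config())
# archive_config now carries the captured file contents verbatim.
```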
## File

`evil.keras` (614 bytes)
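Because a `.keras` file is a zip archive containing a `config.json`, a suspicious model can be triaged without deserializing it. A small helper along these lines (`extract_vocabulary_paths` is an illustrative name, not a library API) lists any string-valued `vocabulary` entries in the config:

```python
import json
import zipfile

def extract_vocabulary_paths(keras_path):
    """Return string-valued "vocabulary" config entries from a .keras
    archive, without ever calling load_model() on it."""
    with zipfile.ZipFile(keras_path) as zf:
        config = json.loads(zf.read("config.json"))

    found = []

    def walk(node):
        if isinstance(node, dict):
            value = node.get("vocabulary")
            if isinstance(value, str):
                found.append(value)  # a path-like string here is a red flag
            for child in node.values():
                walk(child)
        elif isinstance(node, list):
            for child in node:
                walk(child)

    walk(config)
    return found
```

Running this over the bundled `evil.keras` should surface the planted `/etc/passwd` string.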