
huntr MFV PoC: keras-hub PARSeqTokenizer arbitrary file read (LFI) via vocabulary path

Do not load evil.keras outside an isolated VM / container.

Loading this .keras file via keras.saving.load_model() with the default safe_mode=True causes keras_hub.models.PARSeqTokenizer.set_vocabulary to open and read an attacker-chosen file from the victim's filesystem and embed its contents into tokenizer.vocabulary.

Vuln summary

  • Repo affected: keras-team/keras-hub
  • File: keras_hub/src/models/parseq/parseq_tokenizer.py
  • Sink: PARSeqTokenizer.set_vocabulary, lines 103-106: open(vocabulary, "r") on an attacker-controlled string from the deserialized config
  • Reachability: PARSeqTokenizer is a KerasSaveable subclass under keras_hub, reachable via the post-CVE-2025-1550 module allowlist in keras.src.saving.serialization_lib._retrieve_class_or_fn
  • Bypasses: safe_mode=True (the default) is not checked; set_vocabulary has no in_safe_mode() guard (other tokenizers such as BytePairTokenizer.set_vocabulary do check)
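The sink follows the pattern below. This is a minimal, self-contained sketch of the vulnerable logic, not the actual keras_hub source; the class name and the temp file are stand-ins for illustration:

```python
import os
import tempfile

class VulnerableTokenizer:
    """Stand-in for the PARSeqTokenizer sink: a string `vocabulary`
    is treated as a file path and read unconditionally, with no
    safe-mode gate on the deserialization path."""

    def set_vocabulary(self, vocabulary):
        if isinstance(vocabulary, str):
            # The LFI sink: this path comes straight from the
            # attacker-controlled serialized model config.
            with open(vocabulary, "r") as f:
                self.vocabulary = f.read()
        else:
            self.vocabulary = vocabulary

# Demonstration with a harmless temp file standing in for /etc/passwd.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("root:x:0:0:root:/root:/bin/bash\n")
    path = f.name

tok = VulnerableTokenizer()
tok.set_vocabulary(path)  # path arrived via the untrusted config
print(tok.vocabulary)     # file contents now live on the tokenizer
os.remove(path)
```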

Payload behavior

The bundled evil.keras sets vocabulary="/etc/passwd". On load, the file contents land in tokenizer.vocabulary as a string. Exfiltration paths:

  • the victim re-saves the model via model.save(...) → the file content is embedded in the new archive
  • the victim hosts an inference endpoint and the captured characters affect tokenizer output → side-channel oracle

load_model() returns silently with no warning and no exception.
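Because the load is silent, triage has to happen before load_model() runs. A .keras file is a standard zip archive containing a config.json, so one defensive option is to scan that config for path-like vocabulary strings without deserializing anything. This is a heuristic sketch (the key name "vocabulary" and the path check are assumptions about what to flag), demonstrated against a tiny stand-in archive:

```python
import io
import json
import zipfile

def suspicious_vocab_paths(keras_archive):
    """Return string `vocabulary` values in a .keras archive's
    config.json that look like filesystem paths, without loading
    the model itself."""
    hits = []
    with zipfile.ZipFile(keras_archive) as zf:
        config = json.loads(zf.read("config.json"))

    def walk(node):
        if isinstance(node, dict):
            vocab = node.get("vocabulary")
            if isinstance(vocab, str) and ("/" in vocab or "\\" in vocab):
                hits.append(vocab)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(config)
    return hits

# Build a stand-in archive carrying an attacker-style config.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps(
        {"config": {"tokenizer": {"config": {"vocabulary": "/etc/passwd"}}}}))
buf.seek(0)
print(suspicious_vocab_paths(buf))  # → ['/etc/passwd']
```

A clean model whose vocabulary is serialized as a token list (rather than a path string) produces no hits under this check.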

File

  • evil.keras (614 bytes)