Fix: Force safetensors loading to bypass CVE-2025-32434 (CRITICAL SECURITY)
ROOT CAUSE:
- CVE-2025-32434: Critical RCE vulnerability in PyTorch < 2.6 (CVSS 9.3/10)
- Transformers library (4.48+) blocks loading .bin files with torch < 2.6
- Current torch version: 2.1.0-2.3.0 (below minimum safe version 2.6.0)
- CLIP model downloads BOTH pytorch_model.bin and model.safetensors
- Default behavior tries to load pytorch_model.bin → Security check fails
ERROR MESSAGE:
"Due to a serious vulnerability issue in `torch.load`, even with
`weights_only=True`, we now require users to upgrade torch to at least
v2.6 in order to use the function. This version restriction does not
apply when loading files with safetensors."
SOLUTION:
- Add use_safetensors=True parameter to force safetensors loading
- Bypasses torch version requirement (safetensors format is secure)
- Uses already-downloaded model.safetensors (605MB, no re-download)
- No torch upgrade needed (avoids breaking other dependencies)
SECURITY BENEFITS:
- Safetensors files contain no pickled objects, so loading them cannot execute arbitrary code
- Eliminates CVE-2025-32434 RCE attack vector
- Future-proof (safetensors is HuggingFace recommended format)
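The safety claim above follows from the file format itself: a safetensors file is an 8-byte little-endian header length, a JSON header, and raw tensor bytes. A stdlib-only sketch (not the safetensors library, just a hand-built file in the documented layout) shows that parsing it is pure data handling, with no deserialization step that could run code:

```python
import json, struct, tempfile, os

def write_safetensors(path, name, dtype, shape, raw):
    # Header is plain JSON: tensor name -> dtype/shape/byte offsets.
    header = json.dumps(
        {name: {"dtype": dtype, "shape": shape, "data_offsets": [0, len(raw)]}}
    ).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header)))  # u64 LE header length
        f.write(header)
        f.write(raw)  # raw little-endian tensor bytes, nothing executable

def read_header(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))  # json.loads parses data, never runs code

path = os.path.join(tempfile.mkdtemp(), "demo.safetensors")
raw = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)  # four float32 values
write_safetensors(path, "weight", "F32", [2, 2], raw)
print(read_header(path))
# → {'weight': {'dtype': 'F32', 'shape': [2, 2], 'data_offsets': [0, 16]}}
```

Contrast with pytorch_model.bin, which is a pickle archive: unpickling can invoke arbitrary callables, which is exactly the attack surface CVE-2025-32434 exploits.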
CHANGES:
- code/clip_retrieval.py lines 107-117:
- CLIPModel.from_pretrained(..., use_safetensors=True)
- CLIPProcessor.from_pretrained(..., use_safetensors=True)
EXPECTED BEHAVIOR:
- CLIP model loads from cached model.safetensors
- No security errors
- Text-to-LEGO feature functional
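The expected behavior hinges on weight-file selection. A hypothetical sketch of that selection logic (simplified; not the actual transformers implementation) makes the failure modes explicit: with use_safetensors=True the loader must pick model.safetensors and can never fall back to the .bin file that trips the torch < 2.6 check:

```python
def pick_weights(available, use_safetensors=False):
    """Pick a weight file from a repo's available files (simplified sketch)."""
    if "model.safetensors" in available:
        return "model.safetensors"  # preferred: no pickle, no torch-version gate
    if use_safetensors:
        # Forcing safetensors with none available is a hard error, not a fallback.
        raise FileNotFoundError("use_safetensors=True but no .safetensors file found")
    return "pytorch_model.bin"  # legacy pickle path; blocked under torch < 2.6

print(pick_weights(["pytorch_model.bin", "model.safetensors"], use_safetensors=True))
# → model.safetensors
```

Since the CLIP repo ships both files and model.safetensors is already cached, the forced path resolves locally with no re-download.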
References:
- CVE: https://nvd.nist.gov/vuln/detail/CVE-2025-32434
- Safetensors: https://huggingface.co/docs/safetensors/
- code/clip_retrieval.py +4 -2

@@ -106,12 +106,14 @@ class CLIPRetriever:
         try:
             self.model = CLIPModel.from_pretrained(
                 self.model_name,
-                cache_dir=HF_CACHE_DIR
+                cache_dir=HF_CACHE_DIR,
+                use_safetensors=True  # Force safetensors to bypass CVE-2025-32434
             ).to(self.device)

             self.processor = CLIPProcessor.from_pretrained(
                 self.model_name,
-                cache_dir=HF_CACHE_DIR
+                cache_dir=HF_CACHE_DIR,
+                use_safetensors=True  # Force safetensors to bypass CVE-2025-32434
             )

             self.model.eval()