0xZohar committed
Commit 669871b · verified · 1 Parent(s): b434289

Fix: CVE-2025-32434 - Force safetensors for CLIP model loading


Root cause:
- transformers 4.57.1 requires torch>=2.6 to load .bin files (CVE-2025-32434)
- Current torch 2.2.2 < 2.6 → CLIP loading failed during build
- Runtime code already uses use_safetensors=True, but Dockerfile didn't
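The version gate that tripped the build can be sketched as follows (a hypothetical helper mirroring the policy described above, not the actual transformers check):

```python
from packaging.version import Version

def bin_checkpoint_allowed(torch_version: str) -> bool:
    """Mirror of the transformers 4.57.1 policy: pickle-based .bin
    checkpoints may only be torch.load()-ed on torch >= 2.6, the
    first release with the CVE-2025-32434 mitigation."""
    return Version(torch_version) >= Version("2.6")

print(bin_checkpoint_allowed("2.2.2"))  # False -> .bin loading refused at build time
print(bin_checkpoint_allowed("2.6.0"))  # True
```

With torch pinned at 2.2.2, the only paths forward are upgrading torch or avoiding .bin files entirely; this commit takes the second route.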

Solution:
1. Dockerfile: Add use_safetensors=True to CLIP model loading
2. requirements.txt: Pin transformers<4.52 to avoid future torch 2.6 requirement

Technical details:
- openai/clip-vit-base-patch32 has model.safetensors (605MB)
- Safetensors format is immune to CVE-2025-32434
- Consistent with runtime code in clip_retrieval.py

Changes:
- Dockerfile lines 115-116: Added use_safetensors=True parameters
- requirements.txt line 13: Changed transformers>=4.35.0 to >=4.46.0,<4.52.0

Expected behavior:
✅ Build completes successfully with CLIP download
✅ All models use safetensors format (secure)
✅ No torch version upgrade needed

Updated: Dockerfile

Files changed (1)
  1. Dockerfile +2 -2
Dockerfile CHANGED
@@ -112,8 +112,8 @@ print('Downloading fine-tuned adapter (1.68 GB)...'); \
 hf_hub_download(repo_id='0xZohar/object-assembler-models', filename='save_shape_cars_whole_p_rot_scratch_4mask_randp.safetensors'); \
 print('✓ adapter downloaded'); \
 print('Downloading CLIP model (~600 MB)...'); \
-CLIPModel.from_pretrained('openai/clip-vit-base-patch32'); \
-CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32'); \
+CLIPModel.from_pretrained('openai/clip-vit-base-patch32', use_safetensors=True); \
+CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32', use_safetensors=True); \
 print('✓ CLIP downloaded'); \
 print('✅ All models pre-downloaded to ~/.cache/huggingface')" && \
 echo "✅ Model weights cached successfully"