Marlin Lee and Claude Sonnet 4.6 committed
Commit 035a542 · 1 Parent(s): 66e61ed

Fix DynaDiff VD weights, xformers, and brain thumbnail path resolution


- Pre-bake Versatile Diffusion weights into the image via snapshot_download
to avoid ~3 GB runtime download from HF Hub
- Add xformers to pip deps (models.py calls enable_xformers unconditionally)
- Resolve brain image_paths against --brain-thumbnails even when stored as
absolute paths that don't exist on the current machine (scratch paths)
- Bump CACHEBUST to 9 to force a clean rebuild

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Files changed (2)
  1. Dockerfile +17 -1
  2. scripts/explorer_app.py +6 -3
Dockerfile CHANGED

@@ -1,7 +1,7 @@
 FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
 
 # Cache-bust: increment to force a full rebuild
-ARG CACHEBUST=8
+ARG CACHEBUST=9
 RUN echo "Build $CACHEBUST"
 
 # System deps
@@ -36,6 +36,7 @@ RUN pip install --no-cache-dir \
     scikit-image==0.24.0 \
     dreamsim==0.2.0 \
     wandb \
+    xformers \
     "git+https://github.com/openai/CLIP.git"
 
 # ── Runtime ENV: CUDA path for deepspeed inference + disable op compilation ──
@@ -56,6 +57,21 @@ COPY dynadiff/diffusers /app/dynadiff/diffusers
 RUN pip install --no-cache-dir -e /app/dynadiff/diffusers
 RUN pip install --no-cache-dir transformers==4.41.2
 
+# ── Pre-download Versatile Diffusion weights ─────────────────────────────────
+# Baked into the image so cold-starts don't download ~3 GB from HF Hub.
+# Placed after patched-diffusers install and before COPY statements
+# (which change often) to maximise Docker layer cache reuse.
+RUN python -c "\
+from huggingface_hub import snapshot_download; \
+print('Pre-downloading shi-labs/versatile-diffusion ...'); \
+snapshot_download( \
+    'shi-labs/versatile-diffusion', \
+    local_dir='/app/dynadiff/versatile_diffusion', \
+    local_dir_use_symlinks=False, \
+); \
+print('Versatile Diffusion saved to /app/dynadiff/versatile_diffusion')\
+"
+
 # ── DynaDiff model code ──────────────────────────────────────────────────────
 COPY dynadiff/model /app/dynadiff/model
 COPY dynadiff/config /app/dynadiff/config
scripts/explorer_app.py CHANGED

@@ -391,10 +391,13 @@ def _load_brain_dataset_dict(path, label, thumb_dir):
         print(f"[Brain] WARNING: Failed to load NSD dataset: {err}")
         return None
 
-    # Resolve image_paths: prepend thumb_dir when paths are stored as basenames.
+    # Resolve image_paths: prepend thumb_dir when paths are stored as basenames,
+    # or when stored as absolute paths that don't exist on this machine.
     raw_paths = bd.get('image_paths', [])
-    if raw_paths and thumb_dir and not os.path.isabs(raw_paths[0]):
-        bd_paths = [os.path.join(thumb_dir, p) for p in raw_paths]
+    if raw_paths and thumb_dir and (
+        not os.path.isabs(raw_paths[0]) or not os.path.exists(raw_paths[0])
+    ):
+        bd_paths = [os.path.join(thumb_dir, os.path.basename(p)) for p in raw_paths]
     else:
         bd_paths = raw_paths
 
403