sofieneb committed on
Commit 702aa4a · 1 Parent(s): b57fd77

Update README

Files changed (1)
  1. README.md +25 -12
README.md CHANGED
@@ -8,7 +8,7 @@ language:
 
 ## Model Summary
 
- **HistAug** is a lightweight transformer-based generator for **controllable latent-space augmentations** in the feature space of the [CONCH foundation model](https://www.nature.com/articles/s41591-024-02856-4). Instead of applying costly image-space augmentations on millions of WSI patches, HistAug operates **directly on patch embeddings** extracted from a given foundation model(here CONCH). By conditioning on explicit transformation parameters (e.g., hue shift, erosion, HED color transform), HistAug generates realistic augmented embeddings while preserving semantic content. In practice, the CONCH variant of HistAug can reconstruct the corresponding ground-truth augmented embeddings with an average cosine similarity of **about 90%** at **10X, 20X, and 40X magnification**.
 
 This enables training of Multiple Instance Learning (MIL) models with:
 - ⚡ **Fast augmentation**
@@ -194,39 +194,52 @@ for bag_features, label in loader: # bag_features: (num_patches, embed_dim)
 
 ## Offline usage (HPC clusters without internet)
 
- If your cluster doesn’t allow internet access on compute nodes, you have two ways to use HistAug offline:
 
- 1. **Rely on the cache**: pre-download the model on the front-end/login node (with internet), so it’s cached locally, then run jobs offline:
 
 ```bash
 # On the front-end/login node (with internet):
 python -c "from transformers import AutoModel; AutoModel.from_pretrained('sofieneb/histaug-conch', trust_remote_code=True)"
 
- # On your compute job (no internet):
- export HF_HUB_OFFLINE=1
- export TRANSFORMERS_OFFLINE=1
- ````
 
- This prevents unnecessary network calls and ensures `transformers` loads HistAug from the local cache.
 
- 2. **Explicitly download with `hf download`**: instead of relying on the cache variables above, you can download the model files manually on the front-end/login node (with internet) and always point to the local folder:
 
 ```bash
 # On the front-end/login node (with internet):
 hf download sofieneb/histaug-conch --local-dir ./histaug-conch
 ```
 
- Then load the model from that directory in your script:
 
 ```python
 from transformers import AutoModel
-
 cross_transformer = AutoModel.from_pretrained(
 "./histaug-conch", # local path instead of hub ID
 trust_remote_code=True,
- local_files_only=True
 )
 ```
 
 ---
 ## Citation
 If our work contributes to your research, or if you incorporate part of this code, please consider citing our paper:
 
 
 ## Model Summary
 
+ **HistAug** is a lightweight transformer-based generator for **controllable latent-space augmentations** in the feature space of the [CONCH foundation model](https://www.nature.com/articles/s41591-024-02856-4). Instead of applying costly image-space augmentations on millions of WSI patches, HistAug operates **directly on patch embeddings** extracted from a given foundation model (here, CONCH). By conditioning on explicit transformation parameters (e.g., hue shift, erosion, HED color transform), HistAug generates realistic augmented embeddings while preserving semantic content. In practice, the CONCH variant of HistAug can reconstruct the corresponding ground-truth augmented embeddings with an average cosine similarity of **about 93%** at **10X, 20X, and 40X magnification**.
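As an illustration of the metric quoted above, the reported figure is the cosine similarity between ground-truth augmented embeddings and HistAug's reconstructions, computed per patch and averaged. A minimal sketch with random stand-in tensors (shapes and variable names are hypothetical, not the model's API):

```python
import torch
import torch.nn.functional as F

# Stand-ins for (num_patches, embed_dim) matrices; in practice these would be
# the ground-truth augmented embeddings and HistAug's predicted embeddings.
gt_augmented = torch.randn(1024, 512)
predicted = torch.randn(1024, 512)

# Cosine similarity per patch, then averaged over the bag
mean_cos = F.cosine_similarity(predicted, gt_augmented, dim=-1).mean().item()
print(f"mean cosine similarity: {mean_cos:.3f}")
```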
 
 This enables training of Multiple Instance Learning (MIL) models with:
 - ⚡ **Fast augmentation**
 
 ## Offline usage (HPC clusters without internet)
 
+ If compute nodes don’t have internet, **always** run jobs with the offline flags to **prevent unnecessary network calls** and force local loads:
 
+ ```bash
+ # On your compute job (no internet):
+ export HF_HUB_OFFLINE=1
+ export TRANSFORMERS_OFFLINE=1
+ ```
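If exporting variables in the job script is inconvenient, the same flags can be set from Python; they must be set before `transformers` or `huggingface_hub` is imported, since both read them when first imported. A minimal sketch:

```python
import os

# Enable offline mode before importing transformers / huggingface_hub;
# both libraries pick these variables up at import time.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```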
+
+ Prepare the model **in advance** on a front-end/login node (with internet), then choose **either** approach below.
+
+ ### Option 1: Warm the cache (simplest)
 
 ```bash
 # On the front-end/login node (with internet):
 python -c "from transformers import AutoModel; AutoModel.from_pretrained('sofieneb/histaug-conch', trust_remote_code=True)"
+ ```
 
+ Then in your offline job/script:
 
+ ```python
+ from transformers import AutoModel
+ model = AutoModel.from_pretrained(
+ "sofieneb/histaug-conch",
+ trust_remote_code=True,
+ local_files_only=True, # uses local cache only
+ )
+ ```
 
+ ### Option 2: Download to a local folder with `hf download`
 
 ```bash
 # On the front-end/login node (with internet):
 hf download sofieneb/histaug-conch --local-dir ./histaug-conch
 ```
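If the `hf` CLI is unavailable on the login node, the same files can be fetched from Python with `huggingface_hub.snapshot_download`. A sketch, assuming `huggingface_hub` is installed (`fetch_histaug` is a hypothetical helper name):

```python
from huggingface_hub import snapshot_download

def fetch_histaug(local_dir: str = "./histaug-conch") -> str:
    """Download the full repo to a local folder (run where internet is available)."""
    return snapshot_download(repo_id="sofieneb/histaug-conch", local_dir=local_dir)

# On the front-end/login node (with internet):
# local_path = fetch_histaug()
```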
 
+ Then in your offline job/script:
 
 ```python
 from transformers import AutoModel
 cross_transformer = AutoModel.from_pretrained(
 "./histaug-conch", # local path instead of hub ID
 trust_remote_code=True,
+ local_files_only=True, # uses local files only
 )
 ```
+
 ---
 ## Citation
 If our work contributes to your research, or if you incorporate part of this code, please consider citing our paper: