Without the consent of Xiangya Hospital of Central South University or its Pathology Department, the use of the raw data is prohibited. Should any data be used for commercial purposes, we will hold the user legally accountable.

We support the following 21 pre-trained foundation models for extracting feature representations from WSIs. Please contact us by email before use (strongly recommended).

### 🔨 1. **Installation**:

- Create an environment: `conda create -n "trident" python=3.10`, and activate it: `conda activate trident`.
- Clone the repository: `git clone https://github.com/mahmoodlab/trident.git && cd trident`.
- Install locally: `pip install -e .`.

Additional packages may be required to load some pretrained models; follow the error messages for instructions.

### 🔨 2. **Running Trident**:

**Already familiar with WSI processing?** Perform segmentation, patching, and UNI feature extraction on a directory of WSIs with:

```bash
python run_batch_of_slides.py --task all --wsi_dir ./wsis --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256
```

**Feeling cautious?** Run this command to perform all processing steps for a **single** slide:

```bash
python run_single_slide.py --slide_path ./wsis/xxxx.svs --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256
```

**Or follow the step-by-step instructions:**

**Step 1: Tissue Segmentation:** Segments tissue vs. background for each WSI in a directory.
- **Command**:
  ```bash
  python run_batch_of_slides.py --task seg --wsi_dir ./wsis --job_dir ./trident_processed --gpu 0 --segmenter hest
  ```
- `--task seg`: Specifies that you want to do tissue segmentation.
- `--wsi_dir ./wsis`: Path to the directory with your WSIs.
- `--job_dir ./trident_processed`: Output directory for processed results.
- `--gpu 0`: Uses the GPU with index 0.
- `--segmenter`: Segmentation model. Defaults to `hest`. Switch to `grandqc` for fast H&E segmentation. Add the option `--remove_artifacts` for additional artifact cleanup.
- **Outputs**:
  - WSI thumbnails in `./trident_processed/thumbnails`.
  - WSI thumbnails with tissue contours in `./trident_processed/contours`.
  - GeoJSON files containing tissue contours in `./trident_processed/contours_geojson`. These can be opened in [QuPath](https://qupath.github.io/) for editing and quality control, if necessary.

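The contour files are plain JSON, so they can be inspected with the standard library alone. A minimal sketch, under the assumption that each file is a GeoJSON `FeatureCollection` of `Polygon` features (verify against your own outputs); it builds a tiny in-memory example instead of reading a real file:

```python
import json

# Synthetic stand-in for a file from ./trident_processed/contours_geojson/.
# The FeatureCollection/Polygon layout is an assumption; check your files.
geojson_text = json.dumps({
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {
            "type": "Polygon",
            # One ring of (x, y) pixel coordinates, closed on itself.
            "coordinates": [[[0, 0], [100, 0], [100, 80], [0, 80], [0, 0]]],
        },
        "properties": {},
    }],
})

contours = json.loads(geojson_text)
n_regions = len(contours["features"])
ring = contours["features"][0]["geometry"]["coordinates"][0]
print(n_regions, len(ring))  # 1 tissue region with a 5-point ring
```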
**Step 2: Tissue Patching:** Extracts patch coordinates from the segmented tissue regions at a given magnification.
- **Command**:
  ```bash
  python run_batch_of_slides.py --task coords --wsi_dir ./wsis --job_dir ./trident_processed --mag 20 --patch_size 256 --overlap 0
  ```
- `--task coords`: Specifies that you want to do patching.
- `--wsi_dir ./wsis`: Path to the directory with your WSIs.
- `--job_dir ./trident_processed`: Output directory for processed results.
- `--mag 20`: Extracts patches at 20x magnification.
- `--patch_size 256`: Each patch is 256x256 pixels.
- `--overlap 0`: Patches overlap by 0 pixels. The overlap is **always** an absolute number of pixels (e.g., `--overlap 128` for 50% overlap between 256x256 patches).
- **Outputs**:
  - Patch coordinates as h5 files in `./trident_processed/20x_256px/patches`.
  - WSI thumbnails annotated with patch borders in `./trident_processed/20x_256px/visualization`.

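Because `--overlap` always takes an absolute pixel count, a tiny helper can translate a desired fractional overlap into the flag value. A minimal sketch (the helper name is ours, not part of Trident):

```python
# Hypothetical helper: convert a fractional overlap into the absolute
# pixel value expected by --overlap. Not part of the Trident CLI.
def overlap_pixels(patch_size: int, fraction: float) -> int:
    """Return the absolute overlap in pixels for a given patch size."""
    if not 0.0 <= fraction < 1.0:
        raise ValueError("fraction must be in [0, 1)")
    return int(patch_size * fraction)

print(overlap_pixels(256, 0.5))  # 128, i.e. pass --overlap 128
```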
**Step 3a: Patch Feature Extraction:** Extracts features from tissue patches using a specified patch encoder.
- **Command**:
  ```bash
  python run_batch_of_slides.py --task feat --wsi_dir ./wsis --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256
  ```
- `--task feat`: Specifies that you want to do feature extraction.
- `--wsi_dir ./wsis`: Path to the directory with your WSIs.
- `--job_dir ./trident_processed`: Output directory for processed results.
- `--patch_encoder uni_v1`: Uses the `UNI` patch encoder. See below for the list of supported models.
- `--mag 20`: Features are extracted from patches at 20x magnification.
- `--patch_size 256`: Patches are 256x256 pixels in size.
- **Outputs**:
  - Features are saved as h5 files in `./trident_processed/20x_256px/features_uni_v1` (shape: `(n_patches, feature_dim)`).

Trident supports 21 patch encoders, loaded via a patch-level [`encoder_factory`](https://github.com/mahmoodlab/trident/blob/main/trident/patch_encoder_models/load.py#L14). Models requiring specific installations will return error messages with additional instructions. Gated models on HuggingFace require access requests.

| Patch Encoder | Embedding Dim | Args | Link |
|-----------------------|---------------:|------------------------------------------------------------------|------|
| **UNI** | 1024 | `--patch_encoder uni_v1 --patch_size 256 --mag 20` | [MahmoodLab/UNI](https://huggingface.co/MahmoodLab/UNI) |
| … | … | … | … |
| **CTransPath-CHIEF** | 768 | `--patch_encoder ctranspath --patch_size 256 --mag 10` | — |
| **ResNet50** | 1024 | `--patch_encoder resnet50 --patch_size 256 --mag 20` | — |

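The saved patch features can be read back with `h5py`. A minimal, self-contained sketch: it first writes a synthetic file with the layout described above, then loads it. The dataset key `"features"` is an assumption — inspect `f.keys()` on your own files:

```python
import numpy as np
import h5py

def load_features(path: str) -> np.ndarray:
    """Load a (n_patches, feature_dim) feature matrix from an h5 file.
    The dataset key "features" is an assumption; check f.keys()."""
    with h5py.File(path, "r") as f:
        return f["features"][:]

# Synthetic stand-in for ./trident_processed/20x_256px/features_uni_v1/slide.h5
with h5py.File("demo_features.h5", "w") as f:
    f.create_dataset(
        "features",
        data=np.random.rand(500, 1024).astype(np.float32),  # 500 patches, UNI dim
    )

feats = load_features("demo_features.h5")
print(feats.shape)  # (500, 1024), i.e. (n_patches, feature_dim)
```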
**Step 3b: Slide Feature Extraction:** Extracts a slide-level embedding using a slide encoder; the matching patch embeddings are extracted automatically.
- **Command**:
  ```bash
  python run_batch_of_slides.py --task feat --wsi_dir ./wsis --job_dir ./trident_processed --slide_encoder titan --mag 20 --patch_size 512
  ```
- `--task feat`: Specifies that you want to do feature extraction.
- `--wsi_dir ./wsis`: Path to the directory with your WSIs.
- `--job_dir ./trident_processed`: Output directory for processed results.
- `--slide_encoder titan`: Uses the `Titan` slide encoder. See below for supported models.
- `--mag 20`: Features are extracted from patches at 20x magnification.
- `--patch_size 512`: Patches are 512x512 pixels in size.
- **Outputs**:
  - Features are saved as h5 files in `./trident_processed/20x_512px/slide_features_titan` (shape: `(feature_dim)`).

Trident supports 5 slide encoders (plus **Threads**, coming soon), loaded via a slide-level [`encoder_factory`](https://github.com/mahmoodlab/trident/blob/main/trident/slide_encoder_models/load.py#L14). Models requiring specific installations will return error messages with additional instructions. Gated models on HuggingFace require access requests.

| Slide Encoder | Patch Encoder | Args | Link |
|---------------|----------------|------|------|
| **Threads** | conch_v15 | `--slide_encoder threads --patch_size 512 --mag 20` | *(Coming Soon!)* |
| **Titan** | conch_v15 | `--slide_encoder titan --patch_size 512 --mag 20` | [MahmoodLab/TITAN](https://huggingface.co/MahmoodLab/TITAN) |
| **PRISM** | virchow | `--slide_encoder prism --patch_size 224 --mag 20` | [paige-ai/Prism](https://huggingface.co/paige-ai/Prism) |
| **CHIEF** | ctranspath | `--slide_encoder chief --patch_size 256 --mag 10` | [CHIEF](https://github.com/hms-dbmi/CHIEF) |
| **GigaPath** | gigapath | `--slide_encoder gigapath --patch_size 256 --mag 20` | [prov-gigapath](https://huggingface.co/prov-gigapath/prov-gigapath) |
| **Madeleine** | conch_v1 | `--slide_encoder madeleine --patch_size 256 --mag 10` | [MahmoodLab/madeleine](https://huggingface.co/MahmoodLab/madeleine) |

> [!NOTE]
> If your task includes multiple slides per patient, you can generate patient-level embeddings by (1) processing each slide independently and averaging the slide embeddings (late fusion), or (2) pooling all patches together and processing them as a single "pseudo-slide" (early fusion). For an implementation of both fusion strategies, please check out our sister repository [Patho-Bench](https://github.com/mahmoodlab/Patho-Bench).

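The two fusion strategies can be sketched with plain NumPy. The mean-pooling "slide encoder" and the embedding shapes below are illustrative stand-ins, not Trident's actual models:

```python
import numpy as np

# Illustrative patch embeddings for two slides of the same patient;
# shapes are assumptions, not Trident outputs.
rng = np.random.default_rng(0)
slide_a_patches = rng.random((300, 1024))  # slide A: 300 patches
slide_b_patches = rng.random((450, 1024))  # slide B: 450 patches

def slide_embedding(patches: np.ndarray) -> np.ndarray:
    """Stand-in slide encoder: mean-pool the patch embeddings."""
    return patches.mean(axis=0)

# Late fusion: encode each slide, then average the slide embeddings.
late = np.mean([slide_embedding(slide_a_patches),
                slide_embedding(slide_b_patches)], axis=0)

# Early fusion: pool all patches into one "pseudo-slide", encode once.
early = slide_embedding(np.concatenate([slide_a_patches, slide_b_patches]))

print(late.shape, early.shape)  # both (1024,)
```

Note that the two strategies generally disagree when slides contribute different patch counts: late fusion weights each slide equally, while early fusion weights each patch equally.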
Here, we provide the feature representations of WSIs extracted by CTransPath. Link: [https://pan.baidu.com/s/1zpt7D\_XNgqZpLnUyOmtkgA?pwd=8yn6](https://pan.baidu.com/s/1zpt7D_XNgqZpLnUyOmtkgA?pwd=8yn6) (password: 8yn6).