---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- spatial-transcriptomics
- histology
- pathology
task_categories:
- image-classification
- feature-extraction
- image-segmentation
size_categories:
- 100B<n<1T
---

# Model Card for HEST-1k

<img src="fig1a.jpg" alt="Overview of the HEST-1k dataset" style="width: 38%;" align="right"/>

#### What is HEST-1k?

- A collection of <b>1,255</b> spatial transcriptomic profiles, each linked and aligned to a Whole Slide Image (pixel size < 1.15 µm/px) and to metadata.
- HEST-1k was assembled from 131 public and internal cohorts encompassing:
  - 26 organs
  - 2 species (<i>Homo sapiens</i> and <i>Mus musculus</i>)
  - 367 cancer samples from 25 cancer types.

HEST-1k processing enabled the identification of <b>1.5 million</b> expression/morphology pairs and <b>76 million</b> nuclei.

### Updates

- **6.01.26**: 27 new high-quality Visium HD samples added to HEST (v1.2.0)!

- **21.10.24**: HEST has been accepted to NeurIPS 2024 as a Spotlight! We will be in Vancouver from Dec 10th to 15th. Send us a message if you want to learn more about HEST (gjaume@bwh.harvard.edu).

- **23.09.24**: 121 new samples released, including 27 Xenium and 7 Visium HD! We also make the aligned Xenium transcripts and the aligned DAPI-segmented cells/nuclei public.

- **30.08.24**: HEST-Benchmark results updated. Includes H-Optimus-0, Virchow 2, Virchow, and GigaPath. New COAD task based on 4 Xenium samples. The HuggingFace benchmark data has been updated.

- **28.08.24**: New set of helpers for batch-effect visualization and correction. Tutorial [here](https://github.com/mahmoodlab/HEST/blob/main/tutorials/5-Batch-effect-visualization.ipynb).

## Instructions for Setting Up a HuggingFace Account and Token

### 1. Create an Account on HuggingFace
Follow the instructions provided on the [HuggingFace sign-up page](https://huggingface.co/join).

### 2. Accept the HEST Terms of Use

1. On this page, click `Request access` (access will be granted automatically).
2. At this stage, you can already inspect the data manually by navigating to the `Files and versions` tab.

### 3. Create a Hugging Face Token

1. **Go to Settings:** Navigate to your profile settings by clicking on your profile picture in the top-right corner and selecting `Settings` from the dropdown menu.

2. **Access Tokens:** In the settings menu, find and click on `Access tokens`.

3. **Create New Token:**
   - Click on `New token`.
   - Set the token name (e.g., `hest`).
   - Set the access level to `Write`.
   - Click on `Create`.

4. **Copy Token:** After the token is created, copy it to your clipboard. You will need this token for authentication.

### 4. Logging in

1. Install `huggingface-hub`:

```shell
pip install huggingface-hub
```

2. Log in from Python:

```python
from huggingface_hub import login

login(token="YOUR HUGGINGFACE TOKEN")
```

## Download the entire HEST-1k dataset:

```python
import os
import zipfile

from huggingface_hub import snapshot_download
from tqdm import tqdm


def download_hest(patterns, local_dir):
    repo_id = 'MahmoodLab/hest'
    snapshot_download(repo_id=repo_id, allow_patterns=patterns, repo_type="dataset", local_dir=local_dir)

    # Unzip the CellViT nuclei segmentations if they were downloaded
    seg_dir = os.path.join(local_dir, 'cellvit_seg')
    if os.path.exists(seg_dir):
        print('Unzipping CellViT segmentation...')
        for filename in tqdm([s for s in os.listdir(seg_dir) if s.endswith('.zip')]):
            path_zip = os.path.join(seg_dir, filename)

            with zipfile.ZipFile(path_zip, 'r') as zip_ref:
                zip_ref.extractall(seg_dir)


local_dir = 'hest_data'  # HEST will be downloaded to this folder

# Note that the full dataset is around 1TB of data
download_hest('*', local_dir)
```

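Since the full dataset is around 1 TB, it can help to check available disk space before calling `download_hest('*', ...)`. A minimal sketch using only the standard library (the 1 TB figure is taken from the note above):

```python
import shutil

REQUIRED_BYTES = 1_000_000_000_000  # ~1 TB, per the note above

# Check free space on the drive that will hold hest_data
total, used, free = shutil.disk_usage('.')
print(f"Free space: {free / 1e12:.2f} TB")

if free < REQUIRED_BYTES:
    print("Warning: not enough free space for the full HEST-1k download")
```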
## Download a subset of HEST-1k:

```python
local_dir = 'hest_data'  # HEST will be downloaded to this folder

ids_to_query = ['TENX96', 'TENX99']  # list of ids to query

list_patterns = [f"*{id}[_.]**" for id in ids_to_query]
download_hest(list_patterns, local_dir)  # see method definition above
```

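The `[_.]` in the pattern requires the id to be followed by an underscore or a dot, so a shorter id cannot accidentally match files belonging to a longer one. A quick offline check with `fnmatch`, which has the same glob semantics as `allow_patterns` (file names are illustrative):

```python
from fnmatch import fnmatch

pattern = "*TENX96[_.]**"  # id must be followed by '_' or '.'

print(fnmatch("st/TENX96.h5ad", pattern))    # True
print(fnmatch("wsis/TENX96.tif", pattern))   # True
print(fnmatch("st/TENX960.h5ad", pattern))   # False: a different id
```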
#### Query HEST by organ, technology, oncotree code...

```python
import pandas as pd

local_dir = 'hest_data'  # HEST will be downloaded to this folder

meta_df = pd.read_csv("hf://datasets/MahmoodLab/hest/HEST_v1_2_1.csv")

# Filter the dataframe by organ, oncotree code...
meta_df = meta_df[meta_df['oncotree_code'] == 'IDC']
meta_df = meta_df[meta_df['organ'] == 'Breast']

ids_to_query = meta_df['id'].values

list_patterns = [f"*{id}[_.]**" for id in ids_to_query]
download_hest(list_patterns, local_dir)  # see method definition above
```

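The same filtering works for any column in the metadata CSV. An offline sketch with a toy frame (values are illustrative; only the `id`, `organ`, and `oncotree_code` columns used above are assumed):

```python
import pandas as pd

# Toy stand-in for HEST_v1_2_1.csv with illustrative rows
meta_df = pd.DataFrame({
    'id': ['TENX95', 'TENX99', 'NCBI783'],
    'organ': ['Breast', 'Breast', 'Lung'],
    'oncotree_code': ['IDC', 'IDC', 'LUAD'],
})

# Combine several filters in one boolean mask
mask = (meta_df['organ'] == 'Breast') & (meta_df['oncotree_code'] == 'IDC')
ids_to_query = meta_df.loc[mask, 'id'].tolist()
print(ids_to_query)  # ['TENX95', 'TENX99']
```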
## Loading the data with the Python library `hest`

Once downloaded, you can easily iterate through the dataset:
```python
from hest import iter_hest

for st in iter_hest('../hest_data', id_list=['TENX95']):
    print(st)
```

Please visit the [GitHub repo](https://github.com/mahmoodlab/hest) and the [documentation](https://hest.readthedocs.io/en/latest/) for more information about the `hest` library API.

## Data organization

For each sample:

- `wsis/`: H&E-stained Whole Slide Images in pyramidal Generic TIFF (or pyramidal Generic BigTIFF if >4.1GB)
- `st/`: spatial transcriptomics expressions in a scanpy `.h5ad` object
- `metadata/`: metadata
- `spatial_plots/`: overlay of the WSI with the ST spots
- `thumbnails/`: downscaled version of the WSI
- `tissue_seg/`: tissue segmentation masks:
    - `{id}_mask.jpg`: downscaled or full-resolution greyscale tissue mask
    - `{id}_mask.pkl`: tissue/holes contours in a pickle file
    - `{id}_vis.jpg`: visualization of the tissue mask on the downscaled WSI
- `pixel_size_vis/`: visualization of the pixel size
- `patches/`: 224x224 px H&E patches (0.5 µm/px) extracted around ST spots, stored in a `.h5` object optimized for deep learning. Each patch is matched to the corresponding ST profile (see `st/`) with a barcode.
- `patches_vis/`: visualization of the mask and patches on a downscaled WSI
- `cellvit_seg/`: CellViT nuclei segmentation

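The patch `.h5` files can be opened with `h5py`. The sketch below assumes datasets named `img` and `barcode` (inspect `f.keys()` on a real file to confirm the layout) and writes a tiny toy file first so it runs standalone:

```python
import h5py
import numpy as np

# Write a toy file mimicking the assumed layout: N patches + N barcodes
with h5py.File('toy_patches.h5', 'w') as f:
    f.create_dataset('img', data=np.zeros((2, 224, 224, 3), dtype=np.uint8))
    f.create_dataset('barcode', data=np.array([b'AAACGG-1', b'AAACGT-1']))

# Read patches and their matching barcodes back
with h5py.File('toy_patches.h5', 'r') as f:
    print(sorted(f.keys()))   # ['barcode', 'img']
    imgs = f['img'][:]        # (N, 224, 224, 3) uint8 array
    barcodes = f['barcode'][:]

print(imgs.shape)  # (2, 224, 224, 3)
```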
For each Xenium sample:

- `transcripts/`: individual transcripts aligned to the H&E image; read with `pandas.read_parquet`; aligned pixel coordinates are in the `['he_x', 'he_y']` columns
- `xenium_seg/`: Xenium segmentation on DAPI, aligned to the H&E image

### How to cite:

```
@article{jaume2024hest,
    author = {Jaume, Guillaume and Doucet, Paul and Song, Andrew H. and Lu, Ming Y. and Almagro-Perez, Cristina and Wagner, Sophia J. and Vaidya, Anurag J. and Chen, Richard J. and Williamson, Drew F. K. and Kim, Ahrong and Mahmood, Faisal},
    title = {{HEST-1k: A Dataset for Spatial Transcriptomics and Histology Image Analysis}},
    journal = {arXiv},
    year = {2024},
    month = jun,
    eprint = {2406.16192},
    url = {https://arxiv.org/abs/2406.16192v1}
}
```

### Contact:
- <b>Guillaume Jaume</b>, Harvard Medical School, Boston, Mahmood Lab (`gjaume@bwh.harvard.edu`)
- <b>Paul Doucet</b>, Harvard Medical School, Boston, Mahmood Lab (`pdoucet@bwh.harvard.edu`)

<i>The dataset is distributed under the Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0 Deed).</i>