Foundation Models for Biomedical Imaging

A curated list of foundation models and large-scale pre-training papers for biomedical imaging and multi-omics.


Papers

2026

  • [Nature Medicine] A multimodal sleep foundation model for disease prediction. [Paper]
    • Rahul Thapa, Magnus Ruud Kjaer, Bryan He, ..., James Zou.
    • Summary: A multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations.
    • Computing hardware: Pretraining was performed on 432,000 hours of sleep data using an NVIDIA A100 GPU.

2025

  • [Nature Medicine] A multimodal whole-slide foundation model for pathology. [Paper]

    • Tong Ding, Sophia J. Wagner, Andrew H. Song, ..., Faisal Mahmood.
    • Summary: A multimodal whole-slide foundation model pretrained using 335,645 whole-slide images via visual SSL and vision-language alignment.
    • Computing hardware: 8× 80GB NVIDIA A100 GPUs (multi-node DDP).
  • [Nature Methods] A visual–omics foundation model to bridge histopathology with spatial transcriptomics. [Paper]

    • Weiqing Chen, Pengzhi Zhang, Tu N. Tran, ..., Guangyu Wang.
    • Summary: OmiCLIP, a visual–omics foundation model linking H&E images and transcriptomics using tissue patches from Visium data.
    • Computing hardware: Trained for 20 epochs using one NVIDIA A100 80-GB GPU.
  • [Nature] Towards multimodal foundation models in molecular cell biology. [Paper]

    • Haotian Cui, Alejandro Tejada-Lapuerta, Maria Brbić, ..., Bo Wang.
    • Summary: Developing multimodal foundation models pretrained on diverse omics datasets, including genomics, transcriptomics, and spatial profiling.
  • [Nature Medicine] Large language model-based biological age prediction in large-scale populations. [Paper]

    • Yanjun Li, Qi Huang, Jin Jiang, ..., Qian Di.
    • Summary: A framework leveraging LLMs to estimate individual overall and organ-specific aging using health examination reports.
  • [Nature Biomedical Engineering] A data-efficient strategy for building high-performing medical foundation models. [Paper]

    • Yuqi Sun, Weimin Tan, Zhuoyao Gu, ..., Bo Yan.
    • Summary: Synthetic data generated via conditioning with disease labels can be leveraged for building high-performing medical foundation models.
  • [arXiv] Decipher-MR: A Vision-Language Foundation Model for 3D MRI Representations. [Paper]

    • Zhijian Yang, Noel DSouza, Istvan Megyeri, ..., Erhan Bas.
    • Summary: MRI-specific vision-language foundation model for 3D representations.

2024

  • [Nature Methods] A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities. [Paper]

    • Theodore Zhao, Yu Gu, Jianwei Yang, ..., Sheng Wang.
    • Summary: BiomedParse, a foundation model that can jointly conduct segmentation, detection, and recognition across nine imaging modalities.
    • Computing hardware: 16× NVIDIA A100-SXM4-40GB (58h training).
  • [Nature Medicine] Towards a general-purpose foundation model for computational pathology. [Paper]

    • Richard J. Chen, Tong Ding, Ming Y. Lu, ..., Faisal Mahmood.
    • Summary: UNI, a general-purpose self-supervised model for pathology, pretrained using over 100 million images.
    • Computing hardware: 4 nodes × 8 NVIDIA A100 80GB GPUs.
  • [Nature Medicine] A visual-language foundation model for computational pathology. [Paper]

    • Ming Y. Lu, Bowen Chen, Drew F. K. Williamson, ..., Faisal Mahmood.
    • Summary: CONCH, a visual-language foundation model developed using over 1.17 million histopathology image–caption pairs.
    • Computing hardware: 8× NVIDIA A100 80-GB GPUs.
  • [Nature Methods] scGPT: toward building a foundation model for single-cell multi-omics using generative AI. [Paper]

    • Haotian Cui, Chloe Wang, Hassaan Maan, ..., Bo Wang.
    • Summary: Foundation model for single-cell biology constructed via generative MAE pre-training.
  • [NEJM AI] A Multimodal Biomedical Foundation Model Trained from Fifteen Million Image–Text Pairs. [Paper]

    • Sheng Zhang, Yanbo Xu, Naoto Usuyama, ..., Hoifung Poon.
    • Summary: CLIP-style pre-training using the PMC-15M dataset.
  • [Technical Report] MedImageInsight: An Open-Source Embedding Model for General Domain Medical Imaging. [Paper]

    • Noel C. F. Codella, Ying Jin, Shrey Jain, ..., Mu Wei.
    • Summary: Multi-modality CLIP training across diverse domains including X-ray, CT, MRI, and histopathology.

2023

  • [Biomedical Imaging] BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. [Paper]
    • Sheng Zhang, Yanbo Xu, Naoto Usuyama, ..., Jaspreet Bagga.
    • Summary: Proposes PMC-15M, a fifteen-million-pair dataset for biomedical image–text contrastive learning.
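
Several of the models above (BiomedCLIP, CONCH, OmiCLIP, MedImageInsight) share a CLIP-style image–text contrastive pre-training objective: paired image and text embeddings are pulled together while mismatched pairs in the batch are pushed apart. The sketch below is illustrative only; the function name, shapes, and temperature value are assumptions, not details taken from any of the listed papers.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    img_emb, txt_emb: (batch, dim) arrays where row i of each is a matched pair.
    Illustrative sketch; not the exact recipe of any paper listed above.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix

    def xent(l):
        # Cross-entropy with targets on the diagonal (the matched pairs)
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In practice the papers above differ mainly in what produces the embeddings (whole-slide encoders, 3D MRI encoders, transcriptomics encoders) and in how pairs are mined, while the symmetric contrastive objective stays broadly in this form.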
