
# BiomedCLIP MRI + Clinical Text Classifier

This model fine-tunes BiomedCLIP (PubMedBERT text encoder, ViT-B/16 image encoder) for three-way Alzheimer’s disease classification from 3D MRI volumes and synthetic clinical text.


## 🧩 Model Details

  • Backbone: BiomedCLIP (image + text encoders)
  • Input MRI: 3D NIfTI → reduced to 3 mid-slices (axial, coronal, sagittal) → stacked into RGB
  • Input Text: Synthetic patient note (tokenized with PubMedBERT)
  • Fusion: Concatenate image & text embeddings
  • Head: MLP (Linear → ReLU → Dropout → Linear) → 3-way classification
  • Labels:
    • CN – Cognitively Normal
    • MCI – Mild Cognitive Impairment
    • Dementia
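
The MRI reduction step above (3D volume → three mid-slices stacked as RGB) can be sketched with NumPy. The `mid_slices_to_rgb` name, the 224×224 target size, the min-max normalisation, and the RAS-like axis ordering are illustrative assumptions; in practice the volume would first be loaded with `nibabel`:

```python
import numpy as np

def mid_slices_to_rgb(volume: np.ndarray, size: int = 224) -> np.ndarray:
    """Reduce a 3D MRI volume to its axial, coronal, and sagittal
    mid-slices and stack them as the three channels of one RGB-like image.
    Illustrative sketch: size, normalisation, and orientation are assumptions."""
    x, y, z = volume.shape
    slices = [
        volume[:, :, z // 2],  # axial mid-slice (assumes RAS-like orientation)
        volume[:, y // 2, :],  # coronal mid-slice
        volume[x // 2, :, :],  # sagittal mid-slice
    ]

    def resize(img: np.ndarray) -> np.ndarray:
        # crude nearest-neighbour resize, just for the sketch
        rows = np.linspace(0, img.shape[0] - 1, size).astype(int)
        cols = np.linspace(0, img.shape[1] - 1, size).astype(int)
        return img[np.ix_(rows, cols)]

    channels = []
    for s in slices:
        s = s.astype(np.float32)
        s = (s - s.min()) / (s.max() - s.min() + 1e-8)  # min-max to [0, 1]
        channels.append(resize(s))
    return np.stack(channels, axis=-1)  # shape: (size, size, 3)
```

The stacked output can then be passed through the usual BiomedCLIP image transforms as if it were an RGB photograph.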

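The fusion step and classification head described above can be sketched in PyTorch. The `FusionHead` class name, hidden width, and dropout rate are illustrative assumptions; BiomedCLIP's projected embeddings are commonly 512-d each, but check your checkpoint's config:

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate image and text embeddings, then MLP -> 3 class logits.
    Dimensions and dropout rate are illustrative assumptions."""
    def __init__(self, img_dim=512, txt_dim=512, hidden=256, n_classes=3, p_drop=0.3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),  # Linear
            nn.ReLU(),                             # ReLU
            nn.Dropout(p_drop),                    # Dropout
            nn.Linear(hidden, n_classes),          # Linear -> CN/MCI/Dementia logits
        )

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([img_emb, txt_emb], dim=-1)  # concatenate embeddings
        return self.mlp(fused)
```

A softmax over the returned logits gives the `[CN, MCI, Dementia]` probabilities.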
## 🚀 Usage

### Install

```bash
pip install open_clip_torch nibabel torch torchvision
```


### Load the Pretrained Model

```python
import torch

from model import BiomedClipClassifier, predict_from_paths

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load from this repo (assumes pytorch_model.bin + config.json were pushed here)
model = BiomedClipClassifier.from_pretrained(".", device=device)

# Example inference
pred, probs = predict_from_paths(
    model,
    "/path/to/sample_brain.nii.gz",
    "Patient shows mild memory impairment and hippocampal atrophy.",
    device=device,
)

print("Prediction:", pred)
print("Probabilities:", probs)  # [CN, MCI, Dementia]
```
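
To turn the returned probability vector into a human-readable label, a minimal helper might look like the sketch below. The `decode_prediction` name is hypothetical; the label order follows the `[CN, MCI, Dementia]` comment above:

```python
LABELS = ["CN", "MCI", "Dementia"]  # order assumed from the probability comment

def decode_prediction(probs):
    """Return (label, confidence) for a [CN, MCI, Dementia] probability vector."""
    idx = max(range(len(probs)), key=lambda i: probs[i])
    return LABELS[idx], probs[idx]

# e.g. decode_prediction([0.1, 0.2, 0.7]) -> ("Dementia", 0.7)
```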

### Run Inference

```bash
python inference.py --weights . --mri /path/to/sample.nii.gz --text "Patient shows memory issues"
```