---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: feature-extraction
tags:
- medical
- cardiovascular
- ecg-image
- ecg-text-representation-learning
- ecg-foundation-model
- pytorch
---
|
|
<div align="center" style="font-size: 1.5em;">
<strong>Learning ECG Image Representations via Dual Physiological-Aware Alignments</strong>
</div>
|
|
|
|
<div align="center">
<a href="https://arxiv.org/pdf/2604.01526" style="display:inline-block;">
<img src="https://img.shields.io/badge/arxiv-Paper-red?style=for-the-badge">
</a>
<a href="/" style="display:inline-block;">
<img src="https://img.shields.io/badge/Code-Github-blue?style=for-the-badge">
</a>
<a href="https://huggingface.co/Manhph2211/ECG-Scan" style="display:inline-block;">
<img src="https://img.shields.io/badge/Checkpoint-%F0%9F%A4%97%20Hugging%20Face-White?style=for-the-badge">
</a>
</div>
|
|
|
|
|
|
## Quickstart
|
|
```python
from transformers import AutoModel, CLIPImageProcessor
from PIL import Image
import torch

# Load the ECG-Scan encoder (custom modeling code, hence trust_remote_code=True)
model = AutoModel.from_pretrained("Manhph2211/ECG-Scan", trust_remote_code=True)
model.eval()

# ECG images are preprocessed with the CLIP ViT-L/14 (336px) image processor
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
img = Image.open("ecg.png").convert("RGB")
pixel_values = processor(images=img, return_tensors="pt")["pixel_values"]

# Extract the ECG image embedding
with torch.no_grad():
    out = model(pixel_values).embeddings
```
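Since the model emits one fixed-size embedding per ECG image, downstream use such as retrieval reduces to similarity search over those vectors. A minimal sketch, assuming random stand-in tensors and an illustrative embedding width of 1024 (real vectors would come from `model(pixel_values).embeddings` above):

```python
import torch
import torch.nn.functional as F

# Stand-ins for model outputs: in practice each row would be the
# .embeddings vector of one ECG image. The width (1024) is an
# assumption for illustration, not the model's documented size.
query = torch.randn(1, 1024)
gallery = torch.randn(8, 1024)

# L2-normalize so the dot product equals cosine similarity
q = F.normalize(query, dim=-1)
g = F.normalize(gallery, dim=-1)

scores = q @ g.T                     # shape (1, 8), each value in [-1, 1]
best = scores.argmax(dim=-1).item()  # index of the most similar gallery ECG
```

The same pattern scales to large galleries by stacking embeddings into a single matrix and taking the top-k scores.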
|
|
## Citation
|
|
```bibtex
@article{pham2026learning,
  title={Learning ECG Image Representations via Dual Physiological-Aware Alignments},
  author={Pham, Hung Manh and Tang, Jialu and Saeed, Aaqib and Ma, Dong and Zhu, Bin and Zhou, Pan},
  journal={arXiv preprint arXiv:2604.01526},
  year={2026}
}
```
|
|