---
license: apache-2.0
tags:
  - vit
  - ecg
  - self-supervised
  - contrastive-learning
---

# ViT-MAE ECG Encoder

This repository contains a Vision Transformer (ViT-MAE) encoder pretrained on ECG signals using masked autoencoding. It relies on a forked and modified version of the Hugging Face `transformers` library.
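Masked autoencoding pretrains the encoder by hiding a large random fraction of input patches and training the model to reconstruct them; only the visible patches are fed through the encoder. A minimal sketch of the per-sample random-masking step in NumPy, mirroring the shuffle-and-keep strategy used by MAE (function and variable names here are illustrative, not from this repository):

```python
import numpy as np

def random_masking(patches: np.ndarray, mask_ratio: float = 0.75, seed: int = 0):
    """Keep a random subset of patches; return kept patches and a binary mask.

    patches: (num_patches, patch_dim) array of flattened signal patches.
    """
    rng = np.random.default_rng(seed)
    num_patches = patches.shape[0]
    len_keep = int(num_patches * (1 - mask_ratio))

    # Shuffle patch indices via random noise; keep the first len_keep.
    noise = rng.random(num_patches)
    ids_shuffle = np.argsort(noise)
    ids_keep = ids_shuffle[:len_keep]

    # Binary mask: 0 = visible to the encoder, 1 = masked (to be reconstructed).
    mask = np.ones(num_patches)
    mask[ids_keep] = 0
    return patches[ids_keep], mask

# 16 patches of dimension 8: at mask_ratio=0.75 the encoder sees only 4.
visible, mask = random_masking(np.random.randn(16, 8))
print(visible.shape, int(mask.sum()))  # (4, 8) 12
```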

## Model Files

- `model.safetensors`: model weights (~77 MB) in safetensors format.
- `config.json`: model architecture and configuration.

## Usage

To use this model, first install the forked version of `transformers` (below), which modifies Hugging Face's original ViTMAE implementation.

```bash
git clone git@github.com:Alsalivan/ecgcmr.git
cd ecgcmr/external/transformers
pip install -e .
```

### Load model

```python
from transformers import ViTMAEModel

model = ViTMAEModel.from_pretrained("alsalivan/vitmae_ecg")
```
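Once loaded, the encoder expects image-shaped inputs of size `(batch, num_channels, image_size, image_size)`; the actual values come from this repo's `config.json`. Until you have the checkpoint available, the same API can be exercised offline with a small randomly initialized model. A sketch, where every config value is an illustrative assumption rather than the checkpoint's real setting:

```python
import torch
from transformers import ViTMAEConfig, ViTMAEModel

# Toy config for illustration only -- the real values live in config.json.
config = ViTMAEConfig(
    image_size=32,        # assumed toy size, not the checkpoint's
    patch_size=8,
    num_channels=1,       # assuming a single-channel ECG representation
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    mask_ratio=0.75,      # MAE default: 75% of patches are masked
)
model = ViTMAEModel(config)  # randomly initialized stand-in
model.eval()

pixel_values = torch.randn(1, config.num_channels,
                           config.image_size, config.image_size)
with torch.no_grad():
    outputs = model(pixel_values)

# The encoder only sees the unmasked patches plus a CLS token:
# 16 patches total, 4 kept at mask_ratio=0.75, +1 CLS -> (1, 5, 64)
print(outputs.last_hidden_state.shape)
```

With the real checkpoint, replace the toy model with the `from_pretrained` call above and size `pixel_values` according to `config.json`.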