---
license: apache-2.0
tags:
- vit
- ecg
- self-supervised
- contrastive-learning
---
# ViT-MAE ECG Encoder
This repository contains a Vision Transformer (ViT-MAE) model pretrained on ECG signals using masked autoencoding. It is a forked and modified version of the Hugging Face `transformers` library.
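To illustrate the masked-autoencoding idea the encoder was pretrained with, here is a minimal NumPy sketch of MAE-style random masking. The shapes and the 0.75 mask ratio are the ViT-MAE defaults; the patching of an ECG trace into tokens is an assumption for illustration, not this repository's exact preprocessing.

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """MAE-style random masking: keep a random subset of patch tokens.

    patches: (num_patches, dim) array of patch embeddings.
    Returns the kept patches, their indices, and a binary mask
    (1 = masked out, 0 = kept).
    """
    rng = np.random.default_rng(seed)
    num_patches = patches.shape[0]
    len_keep = int(num_patches * (1 - mask_ratio))

    # Assign random noise to each patch, sort, and keep the first len_keep.
    noise = rng.random(num_patches)
    ids_shuffle = np.argsort(noise)
    ids_keep = ids_shuffle[:len_keep]

    mask = np.ones(num_patches)
    mask[ids_keep] = 0
    return patches[ids_keep], ids_keep, mask

# 16 toy "patches" of dimension 8, e.g. windows cut from an ECG trace.
patches = np.arange(16 * 8, dtype=float).reshape(16, 8)
kept, ids, mask = random_masking(patches)
print(kept.shape)       # (4, 8): only 25% of patches reach the encoder
print(int(mask.sum()))  # 12 patches are masked out
```

During pretraining, only the kept tokens pass through the encoder; a lightweight decoder then reconstructs the masked patches, which is what makes MAE pretraining cheap relative to full-sequence objectives.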
## Model Files
- `model.safetensors`: Model weights (~77MB) saved in `safetensors` format.
- `config.json`: Model architecture and configuration.
## Usage
To use this model, first install the forked version of `transformers` (see below), which modifies the original Hugging Face ViTMAE implementation.
```bash
git clone git@github.com:Alsalivan/ecgcmr.git
cd ecgcmr/external/transformers
pip install -e .
```
## Load model
```python
from transformers import ViTMAEModel
model = ViTMAEModel.from_pretrained("alsalivan/vitmae_ecg")
```
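Assuming the fork preserves the upstream ViTMAE API, you can smoke-test the pipeline without downloading any weights by instantiating a small, randomly initialized model from a config. The tiny dimensions below are arbitrary; the real architecture lives in `config.json`.

```python
import torch
from transformers import ViTMAEConfig, ViTMAEModel

# Deliberately tiny config for a quick shape check.
config = ViTMAEConfig(
    image_size=32, patch_size=8, num_channels=3,
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=64,
    decoder_hidden_size=32, decoder_num_hidden_layers=1,
    decoder_num_attention_heads=2, decoder_intermediate_size=64,
)
model = ViTMAEModel(config)
model.eval()

with torch.no_grad():
    out = model(pixel_values=torch.randn(1, 3, 32, 32))

# With the default mask_ratio of 0.75, 4 of the 16 patches are kept,
# plus the CLS token: sequence length 5, hidden size 32.
print(out.last_hidden_state.shape)  # torch.Size([1, 5, 32])
```

The same `out.last_hidden_state` access works after loading the pretrained checkpoint with `from_pretrained`, at which point the hidden size and patch count come from the shipped `config.json`.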