alsalivan committed · verified
Commit 23fcd14 · 1 Parent(s): d032e73

Update README.md

Files changed (1): README.md (+32 −3)
README.md CHANGED
@@ -1,3 +1,32 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ license: apache-2.0
+ tags:
+ - vit
+ - ecg
+ - self-supervised
+ - contrastive-learning
+ ---
+
+ # ViT-MAE ECG Encoder
+
+ This repository contains a Vision Transformer (ViT-MAE) model pretrained on ECG signals using masked autoencoding. It is built on a forked and modified version of the Hugging Face `transformers` library.
+
+ ## Model Files
+
+ - `model.safetensors`: Model weights (~77 MB) in `safetensors` format.
+ - `config.json`: Model architecture and configuration.
+
+ ## Usage
+
+ To use this model, first install the forked version of `transformers` (see below), which includes modifications for ECG input handling.
+
+ ```bash
+ git clone git@github.com:Alsalivan/ecgcmr.git
+ cd ecgcmr
+ pip install -e .
+ ```
+
+ ```python
+ from transformers import ViTMAEModel
+
+ # Load the model from the Hugging Face Hub
+ model = ViTMAEModel.from_pretrained("alsalivan/vitmae-ecg")
+ ```
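The usage snippet in the diff stops at loading the checkpoint. A minimal offline sketch of a forward pass is below; it builds a randomly initialized model from the default `ViTMAEConfig` rather than downloading the checkpoint, and it uses the standard image-style `pixel_values` interface from upstream `transformers` — the fork's actual ECG input format is not documented in the README, so treat the input shape here as a placeholder assumption.

```python
import torch
from transformers import ViTMAEConfig, ViTMAEModel

# Randomly initialized model from the default config (no download needed;
# the real weights live at "alsalivan/vitmae-ecg" on the Hub).
config = ViTMAEConfig()  # defaults: image_size=224, patch_size=16, mask_ratio=0.75
model = ViTMAEModel(config)
model.eval()

# Placeholder input in the upstream image format; the ECG fork may expect
# a different shape (e.g. lead x time), which this sketch does not cover.
dummy = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    out = model(pixel_values=dummy)

# The encoder returns hidden states only for the unmasked patches plus the
# [CLS] token: int(num_patches * (1 - mask_ratio)) + 1 tokens of width
# hidden_size.
print(out.last_hidden_state.shape)
```

Because ViT-MAE masks patches inside the encoder, the sequence length of `last_hidden_state` is much shorter than the full patch grid; for feature extraction on ECGs the fork presumably sets `mask_ratio` to 0 or pools these tokens, but that detail is not stated in the README.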