nielsr (HF Staff) committed
Commit 20337d9 · verified · 1 Parent(s): 3195846

Update license, add pipeline tag, code link and usage example


This PR improves the model card by:
- Correcting the license from Apache 2.0 to MIT, as stated in the project's GitHub repository.
- Adding the `pipeline_tag: video-feature-extraction` to help users discover this model for relevant tasks.
- Adding an explicit link to the GitHub repository in the model card content.
- Adding a sample usage example for pretraining the model.

Please review and merge if these changes look good.

Files changed (1): README.md (+43 -11)
README.md CHANGED

````diff
@@ -1,24 +1,26 @@
 ---
-license: apache-2.0
-language: en
-tags:
-- self-supervised-learning
-- echocardiography
-- medical-imaging
-- video-representation
 datasets:
 - EchoDynamic
 - RVENet
 - EchoNet-Pediatric-LVH
+language: en
 library_name: pytorch
+license: mit
+tags:
+- self-supervised-learning
+- echocardiography
+- medical-imaging
+- video-representation
 model_index: deep-learning
 paper: https://arxiv.org/pdf/2506.11777
+pipeline_tag: video-feature-extraction
 ---
 
 # 🫀 DISCOVR — Self-Supervised Echocardiography Representations
 
 **Paper:** *Self-Supervised Learning of Echocardiographic Video Representations via Online Cluster Distillation* — NeurIPS 2025
 📄 [arXiv:2506.11777](https://arxiv.org/pdf/2506.11777)
+**Code:** [https://github.com/mdivyanshu97/DISCOVR](https://github.com/mdivyanshu97/DISCOVR)
 
 ---
@@ -49,6 +51,30 @@ It learns both fine-grained anatomical semantics and global temporal dynamics, s
 
 ---
 
+## Sample Usage
+
+To pretrain the model on echocardiographic videos:
+
+```bash
+python -m torch.distributed.launch --nproc_per_node=NUM_GPUS \
+    scripts/run_mae_pretraining.py \
+    --data_path /path/to/echo_videos \
+    --data_path_csv /path/to/train.csv \
+    --data_path_val /path/to/val.csv \
+    --data_path_test /path/to/test.csv \
+    --mask_type multi_local \
+    --loss_func SIGMA \
+    --model pretrain_videomae_base_patch16_224 \
+    --batch_size 48 \
+    --num_frames 64 \
+    --opt adamw \
+    --opt_betas 0.9 0.95 \
+    --warmup_epochs 40 \
+    --epochs 400
+```
+
+---
+
 ## 🔖 Quick Facts
 - **Repo:** `Div97/DISCOVR_ADULT_PEDIATRIC_MODEL`
 - **Model family:** DISCOVR checkpoints (199 → 799)
@@ -67,7 +93,13 @@ If you use DISCOVR in your work, please cite:
 ```bibtex
 @article{mishra2025self,
 title={Self-supervised Learning of Echocardiographic Video Representations via Online Cluster Distillation},
 author={Mishra, Divyanshu and Salehi, Mohammadreza and Saha, Pramit and Patey, Olga and Papageorghiou, Aris T and Asano, Yuki M and Noble, J Alison},
 journal={arXiv preprint arXiv:2506.11777},
 year={2025}
 }
+```
+
+---
+
+## License
+This project is licensed under the MIT License.
````
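As background for reviewers: the sample command added above trains a `pretrain_videomae_base_patch16_224` backbone on 64-frame clips. A minimal, illustrative PyTorch sketch (not the official DISCOVR code; the class name and hyperparameters are assumptions inferred from the model id: patch size 16, tubelet size 2, embed dim 768, 224×224 input) of how a VideoMAE-style tubelet embedding tokenizes such a clip:

```python
import torch
import torch.nn as nn

class TubeletEmbed(nn.Module):
    """Hypothetical VideoMAE-style patch embedding for video clips."""

    def __init__(self, in_chans=3, embed_dim=768, patch_size=16, tubelet=2):
        super().__init__()
        # A 3D conv whose kernel equals its stride splits the clip into
        # non-overlapping 2x16x16 tubelets and projects each to one token.
        self.proj = nn.Conv3d(
            in_chans, embed_dim,
            kernel_size=(tubelet, patch_size, patch_size),
            stride=(tubelet, patch_size, patch_size),
        )

    def forward(self, x):                        # x: (B, C, T, H, W)
        x = self.proj(x)                         # (B, D, T/2, H/16, W/16)
        return x.flatten(2).transpose(1, 2)      # (B, N, D) token sequence

# One 64-frame 224x224 clip, as in the pretraining command above.
video = torch.zeros(1, 3, 64, 224, 224)
tokens = TubeletEmbed()(video)
print(tokens.shape)  # (1, 6272, 768): 32 * 14 * 14 tubelet tokens
```

Under these assumptions, masking for pretraining (e.g. the `--mask_type multi_local` option) would operate over this 6272-token sequence; the actual tokenizer lives in the linked GitHub repository.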