---
license: other
license_name: physionet-credentialed-health-data-license-150
license_link: https://physionet.org/content/mimic-iv-echo/view-license/0.1/
extra_gated_heading: "Access to MIMIC-IV-Echo V-JEPA2 Embeddings"
extra_gated_description: >
  This dataset contains metadata derived from MIMIC-IV-Echo, which is a credentialed dataset on PhysioNet.
  To access this dataset, you must have an active PhysioNet credentialed account with a signed
  Data Use Agreement (DUA) for MIMIC-IV-Echo.
extra_gated_fields:
  PhysioNet username: text
  I have an active credentialed account on PhysioNet: checkbox
  I have signed the MIMIC-IV-Echo Data Use Agreement: checkbox
  I agree to not redistribute this data: checkbox
  Affiliation: text
tags:
  - medical-imaging
  - echocardiography
  - embeddings
  - mimic-iv-echo
  - v-jepa2
  - video-embeddings
  - self-supervised-learning
---

# MIMIC-IV-Echo V-JEPA2 Embeddings

Pre-computed video embeddings for [MIMIC-IV-Echo](https://physionet.org/content/mimic-iv-echo/) echocardiography videos, extracted using [V-JEPA2](https://github.com/facebookresearch/jepa) (Meta's self-supervised video encoder).

> **Access requirement:** This dataset includes metadata from MIMIC-IV-Echo (subject IDs, study IDs, timestamps, clinical note references). You must have [PhysioNet credentialed access](https://physionet.org/settings/credentialing/) and a signed DUA for [MIMIC-IV-Echo](https://physionet.org/content/mimic-iv-echo/) before requesting access.

## Dataset

| | |
|---|---|
| **Videos** | 525,328 echocardiography clips |
| **Subjects** | ~4,800 patients |
| **Studies** | ~7,200 echo studies |
| **Embedding model** | V-JEPA2 ViT-L (300M params) |
| **Embedding dim** | 1024 (float32) |
| **Format** | Sharded Parquet (10 shards, ~2.1 GB total) |
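
As a sanity check on the stated size, the raw embedding payload alone accounts for most of it: 525,328 vectors × 1024 float32 values × 4 bytes ≈ 2.15 GB, consistent with the ~2.1 GB compressed shard total.

```python
# Back-of-the-envelope size check for the embedding payload.
n_videos = 525_328        # rows in the dataset
dim = 1024                # V-JEPA2 ViT-L embedding dimension
bytes_per_float32 = 4

raw_bytes = n_videos * dim * bytes_per_float32
print(f"{raw_bytes / 1e9:.2f} GB")  # 2.15 GB
```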

## Structure

```
mimic-iv-echo-jepa-embeddings/
└── jepa-l-embeddings/
    ├── train-00000-of-00010.parquet   (p10, 51K rows, 202 MB)
    ├── train-00001-of-00010.parquet   (p11, 52K rows, 205 MB)
    ├── ...
    └── train-00009-of-00010.parquet   (p19, 53K rows, 208 MB)
```

Each shard corresponds to one MIMIC-IV patient folder (p10-p19).

## Columns

| Column | Type | Source | Description |
|--------|------|--------|-------------|
| `subject_id` | int64 | echo-record-list.csv | MIMIC patient ID |
| `study_id` | int64 | echo-record-list.csv | Echo study ID |
| `dicom_id` | str | filename | Original DICOM identifier (e.g. `94106955_0001`) |
| `file_path` | str | embedding key | Relative path to the source MP4 |
| `acquisition_datetime` | str | echo-record-list.csv | Per-video acquisition timestamp |
| `study_datetime` | str | echo-study-list.csv | Per-study timestamp |
| `note_id` | str | echo-study-list.csv | Clinical note reference (nullable) |
| `note_seq` | str | echo-study-list.csv | Note sequence number (nullable) |
| `note_charttime` | str | echo-study-list.csv | Note chart time (nullable) |
| `embedding` | list[float32] | V-JEPA2 ViT-L | 1024-dim video embedding |

Metadata is joined from two MIMIC-IV-Echo CSV files so that each row is self-contained:

- **echo-record-list.csv** (525K rows) — per-video: subject, study, acquisition time
- **echo-study-list.csv** (7K rows) — per-study: study time, clinical notes
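
The join above can be sketched with pandas. The rows below are synthetic stand-ins for the real CSVs (illustrative column subset only):

```python
import pandas as pd

# Synthetic stand-ins for the two MIMIC-IV-Echo metadata files.
records = pd.DataFrame({  # echo-record-list.csv: one row per video
    "subject_id": [10002221, 10002221],
    "study_id": [91234567, 91234567],
    "dicom_id": ["94106955_0001", "94106955_0002"],
    "acquisition_datetime": ["2150-01-01 10:00:00", "2150-01-01 10:02:00"],
})
studies = pd.DataFrame({  # echo-study-list.csv: one row per study
    "subject_id": [10002221],
    "study_id": [91234567],
    "study_datetime": ["2150-01-01 09:55:00"],
    "note_id": ["EC-0001"],
})

# Left join keeps every video row; study-level fields repeat across a study's videos.
merged = pd.merge(records, studies, on=["subject_id", "study_id"], how="left")
print(merged.shape)  # (2, 6)
```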

## Usage

```python
from datasets import load_dataset

ds = load_dataset("MITCriticalData/mimic-iv-echo-jepa-embeddings")
print(ds["train"][0])  # {'subject_id': 10002221, 'embedding': [...], ...}
```
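
Once loaded, the `embedding` column can be stacked into a NumPy matrix for retrieval or clustering. A minimal sketch, with random 1024-dim vectors standing in for real rows:

```python
import numpy as np

# Synthetic stand-ins for the `embedding` column of a few rows.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 1024)).astype(np.float32)

# L2-normalise so dot products become cosine similarities.
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

query = emb[0]
sims = emb @ query                   # cosine similarity of every clip to the query clip
nearest = int(np.argsort(-sims)[1])  # most similar clip other than the query itself
print(nearest, float(sims[0]))       # sims[0] is ~1.0 (query vs itself)
```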

Or read the Parquet shards directly with pyarrow:

```python
import pyarrow.parquet as pq

table = pq.read_table("jepa-l-embeddings/")  # reads all shards in the directory
print(table.num_rows)  # 525328
print(table.schema)
```

## Extraction

Embeddings were extracted using the pipeline described in [readme-embeddings.md](https://github.com/sebasmos/EchoJEPA-VE/blob/main/readme-embeddings.md).

| Step | Command |
|------|---------|
| Extract (SLURM) | `sbatch extract_slurm.sh vitl` |
| Merge .pt files | `python merge_embeddings.py --model vitl` |
| Convert to Parquet | `python to_parquet.py --model vitl` |

Config: L40S GPU, batch size 256, 8 DataLoader workers, ~30 min per patient folder.

## Citation

If you use this dataset, please cite both the original MIMIC-IV-Echo dataset and V-JEPA2:

```bibtex
@misc{mimic-iv-echo,
  title = {MIMIC-IV-Echo: A Large-Scale Echocardiography Dataset},
  note  = {PhysioNet, https://physionet.org/content/mimic-iv-echo/}
}

@article{bardes2025vjepa2,
  title  = {Revisiting Feature Prediction for Learning Visual Representations from Video},
  author = {Bardes, Adrien and Garrido, Quentin and Ponce, Jean and Chen, Xinlei and Rabbat, Michael and LeCun, Yann and Assran, Mahmoud and Ballas, Nicolas},
  year   = {2025}
}
```