
# EEG-DINO: Learning EEG Foundation Models via Hierarchical Self-Distillation


We propose EEG-DINO, a foundation model for EEG encoding built on a hierarchical self-distillation framework. Through multi-view semantic alignment, the model extracts multi-level semantic features from EEG data, capturing a broad range of semantic information and increasing robustness to the noise and variance inherent in complex EEG signals. Moreover, to account for the heterogeneous spatial-temporal dependencies unique to EEG, we design a channel-aware sampling mechanism and a decoupled positional encoding scheme that handle the spatial and temporal dimensions independently, enabling the model to capture the intricate structural characteristics of EEG signals. We pre-train EEG-DINO on a large-scale EEG corpus spanning over 9,000 hours, and the model consistently achieves state-of-the-art performance on multiple downstream tasks. These results demonstrate the effectiveness of our self-distillation framework for EEG encoding.
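To make the decoupled positional encoding idea concrete, the sketch below shows one common way to encode channel (spatial) and time-patch (temporal) positions with independent learnable tables. This is purely illustrative and not the actual EEG-DINO implementation; the class name, shapes, and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class DecoupledPositionalEncoding(nn.Module):
    """Illustrative sketch: independent embeddings for channel and time positions."""

    def __init__(self, num_channels: int, num_time_patches: int, dim: int):
        super().__init__()
        # Two independent tables instead of one flattened channel-x-time table.
        self.channel_embed = nn.Embedding(num_channels, dim)
        self.time_embed = nn.Embedding(num_time_patches, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time_patches, dim) patch tokens
        b, c, t, d = x.shape
        ch_pos = self.channel_embed(torch.arange(c, device=x.device))  # (c, dim)
        tm_pos = self.time_embed(torch.arange(t, device=x.device))    # (t, dim)
        # Broadcast each table along the axis it does not encode.
        return x + ch_pos[None, :, None, :] + tm_pos[None, None, :, :]

tokens = torch.randn(2, 19, 30, 200)  # e.g. 19 EEG channels, 30 time patches
pe = DecoupledPositionalEncoding(num_channels=19, num_time_patches=30, dim=200)
out = pe(tokens)  # same shape, positions added independently per axis
```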

## Pre-trained Models

| Model | Params |
|---|---|
| EEG-DINO-Small | 4.6M |
| EEG-DINO-Medium | 33M |
| EEG-DINO-Large | 201M |

## Usage

```bash
CUDA_VISIBLE_DEVICES=0 python /path/to/run_finetuning.py
```

The default settings are for EEG-DINO-Small. To use the Medium or Large variant, change the embedding module imported in /path/to/models/eeg_encoder.py:

```python
from models.embedding_small import PatchEmbedding
```

and change the default settings in /path/to/run_finetuning.py:

```python
    parser.add_argument('--feature_size', default=200, type=int)
    parser.add_argument('--num_layers', default=12, type=int)
    parser.add_argument('--dim_feedforward', default=512, type=int)
```

These correspond to feature_size / num_layers / dim_feedforward: use 512/16/1024 for Medium and 1024/24/2048 for Large.
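Since these hyperparameters are exposed as argparse flags, they can also be overridden on the command line instead of editing the defaults, e.g. for EEG-DINO-Medium:

```bash
CUDA_VISIBLE_DEVICES=0 python /path/to/run_finetuning.py \
    --feature_size 512 --num_layers 16 --dim_feedforward 1024
```

Remember to also switch the embedding module import as described above.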

## Evaluation Results

We evaluate the performance of EEG-DINO on multiple downstream tasks, including TUEV, SEED-V, and TUAB, under both linear probing and full-parameter fine-tuning. The results consistently demonstrate the effectiveness of our model.
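Here, linear probing means the pre-trained encoder is frozen and only a linear classification head is trained, while full-parameter fine-tuning updates the entire network. A minimal sketch of the distinction, assuming the encoder returns a (batch, feature_size) representation (`attach_head` is a hypothetical helper, not this repo's API):

```python
import torch.nn as nn

def attach_head(encoder: nn.Module, feature_size: int, num_classes: int,
                linear_probe: bool) -> nn.Module:
    """Attach a linear classification head to a pre-trained encoder."""
    if linear_probe:
        # Linear probing: freeze the encoder, train only the head.
        for p in encoder.parameters():
            p.requires_grad = False
    # Full-parameter fine-tuning: leave requires_grad=True everywhere.
    return nn.Sequential(encoder, nn.Linear(feature_size, num_classes))
```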

### TUEV

Linear Probing:

| Model | Params | Balanced Acc. | Cohen's Kappa | Weighted F1 |
|---|---|---|---|---|
| BIOT | 3.2M | 0.3327 | 0.3835 | 0.6792 |
| LaBraM-Base | 5.8M | 0.3461 | 0.3968 | 0.6974 |
| CBraMod | 4.0M | 0.3246 | 0.3884 | 0.6889 |
| EEG-DINO-Small | 4.6M | 0.5482 | 0.5673 | 0.7861 |
| EEG-DINO-Medium | 33M | 0.5880 | 0.6180 | 0.8111 |
| EEG-DINO-Large | 201M | 0.6054 | 0.6419 | 0.8214 |

Full-parameter fine-tuning:

| Model | Params | Balanced Acc. | Cohen's Kappa | Weighted F1 |
|---|---|---|---|---|
| BIOT | 3.2M | 0.5281 | 0.5273 | 0.7492 |
| LaBraM-Base | 5.8M | 0.6409 | 0.6637 | 0.8312 |
| CBraMod | 4.0M | 0.5942 | 0.5818 | 0.7817 |
| EEG-DINO-Small | 4.6M | 0.6516 | 0.6654 | 0.8356 |
| EEG-DINO-Medium | 33M | 0.6611 | 0.6739 | 0.8357 |
| EEG-DINO-Large | 201M | 0.6679 | 0.6809 | 0.8398 |

### SEED-V

Linear Probing:

| Model | Params | Balanced Acc. | Cohen's Kappa | Weighted F1 |
|---|---|---|---|---|
| BIOT | 3.2M | 0.2461 | 0.0798 | 0.2489 |
| LaBraM-Base | 5.8M | 0.2521 | 0.0854 | 0.2543 |
| CBraMod | 4.0M | 0.2536 | 0.0842 | 0.2568 |
| EEG-DINO-Small | 4.6M | 0.2981 | 0.1273 | 0.3035 |
| EEG-DINO-Medium | 33M | 0.3365 | 0.1707 | 0.3426 |
| EEG-DINO-Large | 201M | 0.3579 | 0.1984 | 0.3652 |

Full-parameter fine-tuning:

| Model | Params | Balanced Acc. | Cohen's Kappa | Weighted F1 |
|---|---|---|---|---|
| BIOT | 3.2M | 0.3837 | 0.2261 | 0.3856 |
| LaBraM-Base | 5.8M | 0.3976 | 0.2386 | 0.3974 |
| CBraMod | 4.0M | 0.3899 | 0.2414 | 0.3977 |
| EEG-DINO-Small | 4.6M | 0.4063 | 0.2564 | 0.4092 |
| EEG-DINO-Medium | 33M | 0.4138 | 0.2727 | 0.4234 |
| EEG-DINO-Large | 201M | 0.4177 | 0.2801 | 0.4315 |

### TUAB

Linear Probing:

| Model | Params | Balanced Acc. | AUC-PR | AUROC |
|---|---|---|---|---|
| BIOT | 3.2M | 0.7308 | 0.7849 | 0.8013 |
| LaBraM-Base | 5.8M | 0.7457 | 0.8081 | 0.8115 |
| CBraMod | 4.0M | 0.6785 | 0.7721 | 0.7826 |
| EEG-DINO-Small | 4.6M | 0.7841 | 0.8666 | 0.8706 |
| EEG-DINO-Medium | 33M | 0.7915 | 0.8680 | 0.8763 |
| EEG-DINO-Large | 201M | 0.7963 | 0.8701 | 0.8814 |

Full-parameter fine-tuning:

| Model | Params | Balanced Acc. | AUC-PR | AUROC |
|---|---|---|---|---|
| BIOT | 3.2M | 0.7959 | 0.8792 | 0.8815 |
| LaBraM-Base | 5.8M | 0.8140 | 0.8965 | 0.9022 |
| CBraMod | 4.0M | 0.8091 | 0.8906 | 0.8831 |
| EEG-DINO-Small | 4.6M | 0.8137 | 0.8906 | 0.8981 |
| EEG-DINO-Medium | 33M | 0.8155 | 0.8963 | 0.9018 |
| EEG-DINO-Large | 201M | 0.8207 | 0.9012 | 0.9100 |
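The card does not specify the evaluation code, but the reported metrics match the standard scikit-learn definitions; a sketch of how they might be computed, with dummy arrays for illustration:

```python
import numpy as np
from sklearn.metrics import (average_precision_score, balanced_accuracy_score,
                             cohen_kappa_score, f1_score, roc_auc_score)

# Multi-class metrics as reported for TUEV and SEED-V (dummy labels).
y_true = np.array([0, 1, 2, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0])
print(balanced_accuracy_score(y_true, y_pred))       # Balanced Acc.
print(cohen_kappa_score(y_true, y_pred))             # Cohen's Kappa
print(f1_score(y_true, y_pred, average='weighted'))  # Weighted F1

# Binary metrics as reported for TUAB, from positive-class scores.
y_bin = np.array([0, 1, 1, 0])
y_score = np.array([0.2, 0.8, 0.6, 0.4])
print(roc_auc_score(y_bin, y_score))            # AUROC
print(average_precision_score(y_bin, y_score))  # AUC-PR
```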