---
license: mit
tags:
- sleep-staging
- wav2sleep
- polysomnography
- time-series
- pytorch
library_name: wav2sleep
pipeline_tag: other
---
# wav2sleep

Cardio-respiratory sleep staging (4-class: Wake, Light, Deep, REM).
## Model Description
This is a wav2sleep model for automatic sleep stage classification from cardio-respiratory signals (ECG, PPG, respiratory). wav2sleep is a unified multi-modal deep learning approach that can process various combinations of physiological signals for sleep staging.
- **Paper:** [wav2sleep: A Unified Multi-Modal Approach to Sleep Stage Classification](https://arxiv.org/abs/2411.04644)
- **Repository:** GitHub
- **Conference:** ML4H 2024
## Model Details
| Property | Value |
|---|---|
| Input Signals | ECG, PPG, ABD, THX |
| Output Classes | 4 |
| Architecture | Non-causal (bidirectional) |
### Signal Specifications
| Signal | Samples per 30s epoch |
|---|---|
| ECG, PPG | 1,024 |
| ABD, THX | 256 |
| EOG-L, EOG-R | 4,096 |
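The table above implies effective per-channel rates of roughly 34.1 Hz for ECG/PPG and 8.5 Hz for ABD/THX. As a minimal sketch of preparing inputs, the snippet below resamples a raw 30-second segment onto the expected epoch length using NumPy linear interpolation; the actual preprocessing in the repository may use a different resampling method, so treat this as illustrative only.

```python
import numpy as np

# Epoch lengths from the table above (samples per 30 s epoch).
TARGET_SAMPLES = {"ECG": 1024, "PPG": 1024, "ABD": 256, "THX": 256}

def to_epoch(signal: np.ndarray, name: str) -> np.ndarray:
    """Linearly interpolate one 30 s window of a raw signal onto the
    sample grid expected for that channel."""
    n_target = TARGET_SAMPLES[name]
    x_old = np.linspace(0.0, 1.0, num=len(signal), endpoint=False)
    x_new = np.linspace(0.0, 1.0, num=n_target, endpoint=False)
    return np.interp(x_new, x_old, signal)

# Example: a 30 s ECG segment recorded at 256 Hz (7,680 samples).
raw_ecg = np.random.randn(256 * 30)
epoch = to_epoch(raw_ecg, "ECG")
print(epoch.shape)  # (1024,)
```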
## Usage
```python
from wav2sleep import load_model

# Load model from the Hugging Face Hub
model = load_model("hf://joncarter/wav2sleep")

# Or load from a local checkpoint
model = load_model("/path/to/checkpoint")
```
For inference on new data:

```python
from wav2sleep import load_model, predict_on_folder

model = load_model("hf://joncarter/wav2sleep")
predict_on_folder(
    input_folder="/path/to/edf_files",
    output_folder="/path/to/predictions",
    model=model,
)
```
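Predictions over a night can then be summarised as time per stage. The integer-to-stage mapping below (0=Wake, 1=Light, 2=Deep, 3=REM) is an assumption for illustration; check the repository for the canonical label order.

```python
import numpy as np

# Hypothetical label order -- verify against the wav2sleep repository.
STAGES = {0: "Wake", 1: "Light", 2: "Deep", 3: "REM"}

def summarise(pred: np.ndarray) -> dict:
    """Per-stage time in minutes, assuming one prediction per 30 s epoch."""
    return {name: 0.5 * int((pred == idx).sum()) for idx, name in STAGES.items()}

preds = np.array([0, 0, 1, 1, 1, 2, 2, 3])  # e.g. 8 epochs = 4 minutes
print(summarise(preds))  # {'Wake': 1.0, 'Light': 1.5, 'Deep': 1.0, 'REM': 0.5}
```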
## Training Data
The model was trained on polysomnography data from multiple publicly available datasets managed by the National Sleep Research Resource (NSRR).
## Citation

```bibtex
@misc{carter2024wav2sleep,
      title={wav2sleep: A Unified Multi-Modal Approach to Sleep Stage Classification from Physiological Signals},
      author={Jonathan F. Carter and Lionel Tarassenko},
      year={2024},
      eprint={2411.04644},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
}
```
## License

MIT