---
license: apache-2.0
---
# SilentWear: An Ultra-Low Power Wearable Interface for EMG-Based Silent Speech Recognition
This repository provides a multi-session surface electromyography (EMG) dataset for vocalized and silent speech recognition, recorded using a wearable neckband interface.
The dataset is designed to support research in:
- EMG-based speech decoding
- Human–machine interaction (HMI)
- Assistive communication technologies
- Ultra-low-power wearable AI systems
The data were collected using SilentWear, an unobtrusive, ultra-low-power EMG neckband designed for silent and vocalized speech detection.
# Dataset Description
The dataset includes recordings from:
- 4 subjects (3 male, 1 female)
- Vocalized and silent speech conditions
- 8 HMI commands (`up`, `down`, `left`, `right`, `start`, `stop`, `forward`, `backward`), plus a rest (no-speech) class
- 3 recording days per subject
- Multiple sessions, collected over the 3 days, each containing:
  - 5 vocalized batches
  - 5 silent batches
- 20 repetitions of each word, plus rest, in each batch
This structure enables evaluation under multi-day conditions, supporting research on robustness to electrode repositioning and inter-session variability.
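As a quick sanity check on the counts above, the structure can be tallied programmatically. The sketch below assumes one session per recording day (as the `sess_1`–`sess_3` file naming suggests); the helper name is ours, not part of the dataset tooling.

```python
def trials_per_word(sessions=3, batches_per_condition=5, reps_per_batch=20):
    """Number of trials of one word, for one condition and one subject,
    implied by the dataset structure described above."""
    return sessions * batches_per_condition * reps_per_batch

# 3 sessions x 5 batches x 20 repetitions = 300 trials per word, per condition
print(trials_per_word())  # 300
```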
Further details on the data collection methodology are available at:
https://arxiv.org/placeholder
# Repository Organization
The repository contains two subfolders:
### 1️⃣ `data_raw_and_filt`
This folder contains full-length EMG recordings for each subject, condition, session, and batch.
Each file:
- Contains raw EMG signals
- Contains filtered EMG signals (4th-order high-pass at 20 Hz + 50 Hz notch)
- Is stored in `.h5` format
- Uses the HDF5 key `"emg"`
Directory structure example:

```
data_raw_and_filt/
├── S01/
│   ├── silent/
│   │   ├── sess_1_batch_1.h5
│   │   ├── ...
│   │   └── sess_3_batch_5.h5
│   └── vocalized/
│       ├── sess_1_batch_1.h5
│       ├── ...
│       └── sess_3_batch_5.h5
├── S02/
├── S03/
└── S04/
```
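Given this layout, all recordings can be enumerated with a small helper. This is a sketch, not part of the official tooling; it assumes the `root/Sxx/{silent,vocalized}/sess_<s>_batch_<b>.h5` naming shown above.

```python
from pathlib import Path

def list_recordings(root="data_raw_and_filt"):
    """Yield (subject, condition, session, batch, path) for every .h5 file
    under `root`, assuming the directory layout shown above."""
    for path in sorted(Path(root).glob("S*/*/sess_*_batch_*.h5")):
        subject, condition = path.parts[-3], path.parts[-2]
        # stem is e.g. "sess_1_batch_5" -> ["sess", "1", "batch", "5"]
        _, sess, _, batch = path.stem.split("_")
        yield subject, condition, int(sess), int(batch), path

for subject, condition, sess, batch, path in list_recordings():
    print(subject, condition, sess, batch, path)
```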
### Example: Loading a File

```python
import pandas as pd

df = pd.read_hdf("data_raw_and_filt/S01/silent/sess_1_batch_1.h5", key="emg")
df.head()
```
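Once loaded, the signal and label columns can be separated for further processing. The helper below is a sketch of ours, assuming the column naming used in these files (`Ch_0`–`Ch_15`, `Ch_0_filt`–`Ch_15_filt`, `Label_int`):

```python
import pandas as pd

def split_columns(df):
    """Split a loaded recording into raw channels, filtered channels,
    and integer labels, assuming the column names listed in this card."""
    raw = df[[f"Ch_{i}" for i in range(16)]]
    filt = df[[f"Ch_{i}_filt" for i in range(16)]]
    labels = df["Label_int"]
    return raw, filt, labels
```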
### File Content Structure (`data_raw_and_filt`)

Each `.h5` file contains:

| Content | Columns | Description |
|---|---|---|
| Raw EMG | `Ch_0`–`Ch_15` | Raw data |
| Filtered EMG | `Ch_0_filt`–`Ch_15_filt` | High-pass + notch filtered data |
| Labels | `Label_int`, `Label_str` | Integer and string labels |
| Session metadata | `session_id` | Recording session identifier |
| Batch metadata | `batch_id` | Batch identifier within session |
### 2️⃣ `wins_and_features`

This folder contains:
- Non-overlapping windowed segments
- Raw and filtered signals
- Extracted time-frequency features
These files can be directly used for model training or benchmarking.
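For reference, non-overlapping windowing of the kind used here can be sketched as follows. This is an illustration, not the repository's implementation; the window length is a free parameter, not the value used to produce `wins_and_features`.

```python
import numpy as np

def window_nonoverlap(signal, win_len):
    """Split a (samples, channels) array into non-overlapping windows of
    win_len samples. Trailing samples that do not fill a window are dropped.
    Returns an array of shape (num_windows, win_len, channels)."""
    n = (signal.shape[0] // win_len) * win_len
    return signal[:n].reshape(-1, win_len, signal.shape[1])
```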
---
# Code and Usage
The dataset is designed to be used in conjunction with the SilentWear repository:
https://github.com/pulp-bio/silent_wear
Please refer to the repository `README.md` for:
- Data loading utilities
- Preprocessing pipelines
- Training scripts
- Evaluation scripts
The SilentWear repository generates the files contained in the `wins_and_features` folder; these files are then used for model training.
Alternatively, you may directly use the `data_raw_and_filt` folder to:
- Build custom dataloaders
- Train your own architectures
- Benchmark novel EMG decoding methods
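As a starting point for a custom dataloader, a minimal minibatch iterator over windowed data might look like the sketch below. This is an illustrative example under our own naming, not code from the SilentWear repository:

```python
import numpy as np

def iterate_minibatches(windows, labels, batch_size, rng=None):
    """Shuffle and yield (x, y) minibatches.

    `windows`: array of shape (num_windows, win_len, channels)
    `labels`:  array of shape (num_windows,)
    """
    rng = rng or np.random.default_rng(0)
    order = rng.permutation(len(windows))
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        yield windows[idx], labels[idx]
```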
---
# Contributing
We aim to promote standardized evaluation and fair comparison across models.
We strongly encourage contributions of trained models and evaluation results to:
https://github.com/pulp-bio/silent_wear
Please refer to the repository README for submission guidelines.
---
# Citation
If you use this dataset, please cite:
```bibtex
@online{spacone_silentwear_26,
  author = {Spacone, Giusy and Frey, Sebastian and Pollo, Giovanni and Burrello, Alessio and Pagliari, J. Daniele and Kartsch, Victor and Cossettini, Andrea and Benini, Luca},
  title  = {SilentWear: An Ultra-Low Power Wearable Interface for EMG-Based Silent Speech Recognition},
  year   = {2026},
  url    = {https://arxiv.org/placeholder}
}
```