Reduced WLASL Dataset
This dataset is a task-specific reduced version of the WLASL dataset, constructed for American Sign Language (ASL) recognition experiments.
Contents
videos/
Video clips organized by gloss label.
metadata.csv
Per-sample metadata including:
- file path
- gloss label
- fps (after normalization, if applied)
- video resolution
- normalized bounding box coordinates
gloss_map.json
Mapping from gloss labels to integer class IDs.
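As a convenience, the class-ID mapping can be loaded and inverted in a few lines. This is an illustrative sketch, assuming gloss_map.json is a flat JSON object of the form {"gloss": id, ...}; the helper name is hypothetical.

```python
import json

def load_gloss_map(path):
    """Return (gloss -> id) and the inverse (id -> gloss) mappings.

    Assumes gloss_map.json is a flat object, e.g. {"before": 0, "cool": 1}.
    """
    with open(path, "r", encoding="utf-8") as f:
        gloss_to_id = json.load(f)
    id_to_gloss = {v: k for k, v in gloss_to_id.items()}
    return gloss_to_id, id_to_gloss
```

The inverse mapping is handy when decoding model predictions back into gloss labels.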
Dataset Construction
The dataset was generated from the original WLASL dataset using a custom CLI preprocessing script with the following criteria:
- Glosses restricted to a selected subset
- Samples without bounding boxes excluded
- Videos optionally normalized to a fixed FPS
- Original videos preserved (no cropping applied)
- Bounding boxes stored as metadata for downstream processing
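The selection criteria above amount to a filter over the per-gloss entries of the WLASL index. The sketch below is not the actual preprocessing script; it assumes the original WLASL JSON layout (a list of entries with "gloss" and "instances" fields, where each instance may carry a "bbox"), and the function name is hypothetical.

```python
def select_samples(wlasl_index, keep_glosses):
    """Keep only instances of the chosen glosses that carry a bounding box.

    Assumes wlasl_index is a list of {"gloss": ..., "instances": [...]}
    entries in the style of the original WLASL index JSON.
    """
    keep = set(keep_glosses)
    selected = []
    for entry in wlasl_index:
        if entry["gloss"] not in keep:
            continue
        for inst in entry["instances"]:
            # Samples without bounding boxes are excluded.
            if inst.get("bbox"):
                selected.append((entry["gloss"], inst))
    return selected
```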
Command Used
The dataset was generated using the following command:
python .\src\sign_language_model\scripts\build_reduced_wlasl.py --wlasl-root .\data\WLASL\ --output-root .\data\wlasl_reduced --glosses before,cool,thin,go,drink,help,computer,cousin,who,bowling,trade,bed,accident,tall,thanksgiving,candy,short,pizza,man,no,wait,good,bad,son,like,doctor,now,find,you,thank you,please,hospital,bathroom,me,i --target-fps 24 --dry-run
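Downstream code can consume the resulting metadata.csv with the standard library alone. A minimal reader sketch follows; the column names in the test are illustrative assumptions based on the fields listed under Contents, not a guaranteed schema.

```python
import csv

def read_metadata(path):
    """Yield one dict per sample row of metadata.csv.

    Column names come from the file's header row, so this makes no
    assumption about the exact schema.
    """
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield row
```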
Notes
- Bounding boxes are stored in normalized coordinates (0-1) with top-left origin.
- FPS normalization (if applied) uses frame dropping (no interpolation).
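Both conventions in the notes can be sketched concretely. The helpers below are illustrative, assuming (x1, y1, x2, y2) box layout and integer frame counts; they are not part of the preprocessing script.

```python
def bbox_to_pixels(bbox, width, height):
    """Convert a normalized (x1, y1, x2, y2) box, top-left origin,
    to integer pixel coordinates for a frame of the given size."""
    x1, y1, x2, y2 = bbox
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

def frames_to_keep(n_frames, src_fps, target_fps):
    """Indices of source frames kept when downsampling src_fps -> target_fps
    by dropping frames (no interpolation), matching the note above."""
    step = src_fps / target_fps
    kept, t = [], 0.0
    while round(t) < n_frames:
        kept.append(round(t))
        t += step
    return kept
```

For example, downsampling one second of 30 fps video to 24 fps keeps 24 of the 30 frames and simply drops the rest.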