# SignTalk-GH + GSL Dictionary (Sampled) — Dataset Card for Hugging Face

Dataset generated by merging the SignTalk-GH corpus with the GSL dictionary (Zenodo OpenPose videos) and sampling it down to a balanced subset. Intended for downstream preprocessing (`preprocessing.ipynb`) and Text2Sign pipelines.
## Summary

- Source corpora: SignTalk-GH (Videos + `Metadata.xlsx`) and GSL OpenPose data.
- Integration: GSL concepts normalized (tokens split on `_OR_` / `_AND_`, hyphens preserved), de-duplicated against existing SignTalk sentences, and copied into the SignTalk video layout.
- Sampling: stratified by category with per-sentence variant caps; target size ~1,500 videos (or ~2,500 if `DOUBLE_SAMPLE=True`).
- Artifacts: `sample_dataset/videos/*.mp4`, `sample_dataset/sampled_metadata.csv`, `sample_dataset/dataset_summary.csv`.
- Optional zip produced in Colab: `SignTalk-GH_Sampled.zip`.
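The concept-normalization rule above (split on `_OR_` / `_AND_`, keep hyphens) can be sketched as follows. This is an illustration, not the notebook's exact code; `normalize_concept` is a hypothetical helper name.

```python
import re

def normalize_concept(raw: str) -> list[str]:
    """Split a GSL concept name into sentence tokens.

    Splits on the _OR_ / _AND_ connectors, preserves hyphens,
    and turns remaining underscores into spaces.
    """
    # Split alternatives joined by _OR_ / _AND_ into separate entries
    parts = re.split(r"_(?:OR|AND)_", raw)
    # Replace remaining underscores with spaces; hyphens stay intact
    return [p.replace("_", " ").strip() for p in parts if p.strip()]

print(normalize_concept("GO_OR_LEAVE"))            # ['GO', 'LEAVE']
print(normalize_concept("WELL-BEING_AND_HEALTH"))  # ['WELL-BEING', 'HEALTH']
```

Each resulting token becomes a candidate sentence that is then de-duplicated against the existing SignTalk sentences.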
## Data Fields (`sampled_metadata.csv`)

- `video_file` (str): file name of the mp4 in `videos/` (e.g., `123A.mp4`).
- `sentence_id` (int): unique sentence identifier from SignTalk; GSL additions start after the max existing ID.
- `sentence` (str): normalized sentence/gloss text.
- `category` (str): original SignTalk category, or `GSL Dictionary` for imported entries.
- `variant` (str): variant token inferred from the filename suffix; `BASE` if none.
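The relationship between `video_file`, `sentence_id`, and `variant` can be checked with a small regex. A sketch under the filename convention described above; the notebook's actual parsing may differ.

```python
import re

# Filenames look like "<sentence_id><optional variant letter>.mp4",
# e.g. "123A.mp4"; a file with no trailing letter is the BASE variant.
PATTERN = re.compile(r"^(\d+)([A-Z]?)\.mp4$")

def parse_video_file(name: str) -> tuple[int, str]:
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"unexpected filename: {name}")
    sentence_id = int(m.group(1))
    variant = m.group(2) or "BASE"
    return sentence_id, variant

print(parse_video_file("123A.mp4"))  # (123, 'A')
print(parse_video_file("45.mp4"))    # (45, 'BASE')
```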
## Directory Structure

```
sample_dataset/
├── videos/                 # mp4 clips
├── sampled_metadata.csv    # tabular metadata
└── dataset_summary.csv     # quick stats
```
## Creation Steps (notebook key points)

- Download sources: SignTalk-GH zip (Kaggle) + GSL OpenPose zip (Zenodo).
- Parse GSL folders, normalize concept tokens, and copy variants into SignTalk `Videos/` with new IDs.
- Map videos to sentences via filename patterns (e.g., `1A.mp4`).
- Sample stratified by category with per-sentence variant bounds (`MIN_VARIANTS_PER_SENTENCE`, `MAX_VARIANTS_PER_SENTENCE`).
- Copy sampled videos into `sample_dataset/videos/` and write `sampled_metadata.csv` + `dataset_summary.csv`.
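The stratified-sampling step can be sketched like this, assuming metadata rows are dicts with `category`, `sentence_id`, and `video_file` keys. The constants mirror the notebook's knobs, but `stratified_sample` itself is an illustrative reconstruction, not the notebook's code.

```python
import random
from collections import defaultdict

# Knobs mirroring the notebook's sampling parameters (values illustrative)
MIN_VARIANTS_PER_SENTENCE = 1
MAX_VARIANTS_PER_SENTENCE = 3
TARGET_SAMPLE_COUNT = 1500

def stratified_sample(rows, target=TARGET_SAMPLE_COUNT, seed=42):
    """Sample rows evenly across categories, capping variants per sentence."""
    rng = random.Random(seed)
    # Group rows by category, then by sentence within each category
    by_cat = defaultdict(lambda: defaultdict(list))
    for r in rows:
        by_cat[r["category"]][r["sentence_id"]].append(r)
    per_cat = max(1, target // len(by_cat))  # even per-category budget
    sampled = []
    for sentences in by_cat.values():
        cat_rows = []
        for variants in sentences.values():
            # Skip sentences below the minimum, cap the rest at the maximum
            if len(variants) < MIN_VARIANTS_PER_SENTENCE:
                continue
            k = min(len(variants), MAX_VARIANTS_PER_SENTENCE)
            cat_rows.extend(rng.sample(variants, k))
        rng.shuffle(cat_rows)
        sampled.extend(cat_rows[:per_cat])
    return sampled
```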
## Example Row

```
video_file:  "184C.mp4"
sentence_id: 184
sentence:    "Go To Hospital"
category:    "Healthcare"
variant:     "C"
```
## Loading with `datasets.load_dataset` (after pushing to HF Hub)

```python
from datasets import load_dataset

ds = load_dataset("zahemen9900/signtalk-gh-gsl", split="train")
print(ds)
print(ds[0])  # columns: video_file, sentence_id, sentence, category, variant
```
If storing videos as files in the repo, set `keep_in_memory=False` and ensure `videos/` paths are relative. For large media, prefer an external storage bucket and store URLs in `video_file`.
## Recommended Splits

- Stratify by `sentence_id` when creating train/val/test splits (e.g., 80/10/10) to avoid leakage across variants of the same sentence.
- When generating splits, keep all variants of a sentence in the same split.
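The leakage-free rule above amounts to splitting on unique `sentence_id` values rather than on individual rows. A minimal sketch (the helper name `group_split` is illustrative):

```python
import random
from collections import defaultdict

def group_split(rows, ratios=(0.8, 0.1, 0.1), seed=42):
    """Assign whole sentences (all their variants) to train/val/test."""
    sentence_ids = sorted({r["sentence_id"] for r in rows})
    rng = random.Random(seed)
    rng.shuffle(sentence_ids)
    n_train = int(ratios[0] * len(sentence_ids))
    n_val = int(ratios[1] * len(sentence_ids))
    # Decide each sentence's split once, so variants can never straddle splits
    split_of = {}
    for i, sid in enumerate(sentence_ids):
        if i < n_train:
            split_of[sid] = "train"
        elif i < n_train + n_val:
            split_of[sid] = "val"
        else:
            split_of[sid] = "test"
    splits = defaultdict(list)
    for r in rows:
        splits[split_of[r["sentence_id"]]].append(r)
    return dict(splits)
```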
## Licensing

- Respect the original SignTalk-GH and GSL licenses. If unsure, mark the dataset as "research-only" and require users to acknowledge the source licenses.
## Known Caveats

- Some GSL variants may be skipped if the variant count exceeds the alphabet length (A–Z).
- `unmapped_videos.csv` may exist if filenames don't match the expected pattern; review it before release.
- Frame rates and resolutions are not normalized at this stage; preprocessing handles normalization.
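A simple pre-release audit for the `unmapped_videos.csv` caveat, assuming clips follow the `<id><optional letter>.mp4` convention. Illustrative code, not what the notebook actually runs.

```python
import csv
import re
from pathlib import Path

# Expected clip pattern: digits plus an optional variant letter, e.g. "184C.mp4"
CLIP_PATTERN = re.compile(r"^\d+[A-Z]?\.mp4$")

def find_unmapped(video_dir):
    """Return mp4 filenames in video_dir that don't match the clip pattern."""
    return sorted(
        p.name for p in Path(video_dir).glob("*.mp4")
        if not CLIP_PATTERN.match(p.name)
    )

def write_unmapped(video_dir, out_csv="unmapped_videos.csv"):
    """Write the mismatched filenames to a CSV for manual review."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["video_file"])
        for name in find_unmapped(video_dir):
            writer.writerow([name])
```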
## Provenance & Reproducibility

- Notebook: `testing-gsl-datasets.ipynb` (a copy of the notebook is included in this repository)
- Random seed: 42
- Sampling knobs: `TARGET_SAMPLE_COUNT`, `MIN_VARIANTS_PER_SENTENCE`, `MAX_VARIANTS_PER_SENTENCE`, `DOUBLE_SAMPLE`
## Changelog

- v1.0: Initial sampled release (~11k clips) with GSL integration.