STS-Mocap-v1
Dataset Summary
The STS-Mocap-v1 dataset comprises sentence-by-sentence translations from spoken Swedish to Swedish Sign Language (STS). The Swedish sentences come from the simplified-Swedish news website 8sidor.se.
The total duration of this dataset is 4.1 hours. The face data associated with this dataset will be published later, in spring/summer 2026.
Supported Tasks
- [text-to-3d]: If you align the sentence-level annotations to frames, this dataset can be used to generate 3D motion from input sentences (see the slicing sketch after this list).
- [sign-language-generation]: The authors of this dataset have shown that OT-CFM-style models such as Matcha-TTS can be trained on this dataset both unconditionally and conditioned on the corresponding 2D poses (see our paper in the Citation section for more details).
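As a starting point for such alignment, here is a minimal Python sketch that slices a motion sequence into a sentence-level clip. It assumes you have per-sentence start/end timestamps and an already-parsed frame list; `frames`, `start_s`, and `end_s` are hypothetical names and are not shipped in bvh.zip.

```python
# Minimal sketch: slice a motion sequence to a sentence-level clip.
# Assumes you have per-sentence (start_s, end_s) timestamps from your
# own alignment; neither the timestamps nor `frames` come with bvh.zip.

def sentence_clip(frames, start_s, end_s, fps=120.0):
    """Return the frames covering one sentence.

    frames : list of per-frame channel values (one entry per frame)
    start_s, end_s : sentence boundaries in seconds
    fps : frame rate of the body data (120 fps in this dataset)
    """
    first = int(round(start_s * fps))
    last = int(round(end_s * fps))
    return frames[first:last]

# Example: a sentence spanning 2.5 s to 6.0 s of a recording
# clip = sentence_clip(frames, 2.5, 6.0)
```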
Dataset Structure
bvh.zip is a collection of all the .bvh files in this dataset.
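Since .bvh files are plain text, a short sketch like the one below can read the motion header (frame count and frame time) without any third-party library; `bvh_motion_info` is a hypothetical helper, not part of this dataset.

```python
# Minimal sketch: read frame count and frame time from a .bvh file.
# BVH files are plain text; the MOTION section starts with
# "Frames:" and "Frame Time:" lines, followed by one line per frame.

def bvh_motion_info(path):
    n_frames, frame_time = None, None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("Frames:"):
                n_frames = int(line.split(":")[1])
            elif line.startswith("Frame Time:"):
                frame_time = float(line.split(":")[1])
                break  # motion data follows; the header is done
    return n_frames, frame_time

# n, dt = bvh_motion_info("some_take.bvh")
# print(f"{n} frames at {1.0 / dt:.0f} fps")
```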
Data Fields
| Field | Type | Description |
|---|---|---|
| [field_name] | [type] | [Description] |
| [field_name] | [type] | [Description] |
Data Splits
| Split | Size | Description |
|---|---|---|
| train | [N sequences] | [Description] |
| test | [N sequences] | [Description] |
Dataset Creation Process
The dataset was recorded in a motion-capture suit with OptiTrack cameras, Manus gloves, and iPhone Live Link MetaHuman Animator. The body motion data is recorded at 120 fps. The hand data was also recorded at 120 fps, but due to network issues the effective frame rate ended up lower, so we recommend downsampling it to 60 fps when working with it (a sketch follows below).
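As a sketch of that downsampling, assuming you have parsed the hand channels and their per-frame timestamps from a .bvh file (`times` and `channels` are hypothetical names), interpolating onto a regular 60 fps grid handles a lowered or uneven source rate more gracefully than simply dropping every other frame:

```python
import numpy as np

# Minimal sketch: resample hand data onto a regular 60 fps grid.
# `times` (seconds, one per frame) and `channels` (n_frames x n_channels
# rotation values) are assumed to be parsed from the BVH file.
# Note: linearly interpolating Euler angles is only a rough
# approximation near +/-180 degree wrap-arounds.

def resample_60fps(times: np.ndarray, channels: np.ndarray) -> np.ndarray:
    grid = np.arange(times[0], times[-1], 1.0 / 60.0)  # 60 fps timestamps
    out = np.empty((len(grid), channels.shape[1]))
    for c in range(channels.shape[1]):
        out[:, c] = np.interp(grid, times, channels[:, c])
    return out

# hand_60 = resample_60fps(times, channels)
```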
The dataset was processed in the Motive software: we solved the recordings and then manually corrected swapped markers and occlusions where needed.
The data can be retargeted to a MetaHuman avatar; see the tutorial on our GitHub: metahuman-render-howto.md. This is the recommended pipeline because the face data is processed to the MetaHuman Animator face control rig and is most compatible with it. We also provide videos of the face recordings, which can be used to extract ARKit blendshapes if needed.
Considerations for Using the Data
Bias, Risks, and Limitations
- The dataset features a single signer and therefore does not capture demographic or inter-signer variation.
Citation
If you use this dataset, please cite:
@inproceedings{stsmocapv1klezovich,
author = {Klezovich, Anna and Mesch, Johanna and Henter, Gustav Eje and Beskow, Jonas},
title = {How much Data is Enough Data? A New Motion Capture Corpus for Probabilistic Sign Language Generation},
booktitle = {Proceedings of the [Nth] International Conference on Language Resources and Evaluation (LREC 2026)},
year = {2026},
pages = {[pages]},
address = {[location]},
url = {[URL]},
doi = {[DOI]}
}
⚠️ License
This dataset is licensed under CC BY-NC-SA 4.0 with additional restrictions. Use of this dataset is subject to the LICENSE_DATA_ADDENDUM.md, which prohibits re-identification of individuals and requires GDPR compliance. All derivative works, synthetic data, and model weights trained on this data must carry the same license and addendum.
Non-commercial use only. ShareAlike required. Re-identification prohibited.
Contact
Anna Klezovich (KTH Royal Institute of Technology, Speech, Music and Hearing Department, Stockholm, Sweden)
GitHub for raising issues: STS-mocap-dataset-v1-sample