## Dataset Summary

The STS Mocap v1 dataset comprises sentence-by-sentence translations from spoken Swedish to Swedish Sign Language (STS). The Swedish sentences come from the simplified-Swedish news website 8sidor.se.

The total duration of the dataset is 4.1 hours. The face data associated with this dataset is planned for publication in spring/summer 2026.

## Supported Tasks

## Dataset Structure

* `bvh.zip` is a collection of all the .bvh files in this dataset.
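Each .bvh file is plain text: a skeleton `HIERARCHY` section followed by a `MOTION` section that declares the frame count and frame time. As a minimal sketch (the helper name and the inline sample are illustrative, not part of the dataset; the archive itself can be opened with Python's standard `zipfile` module), the motion header can be read like this:

```python
import re

def read_motion_header(bvh_text: str):
    """Pull frame count and frame time from the MOTION section of a BVH file."""
    frames = int(re.search(r"Frames:\s*(\d+)", bvh_text).group(1))
    frame_time = float(re.search(r"Frame Time:\s*([\d.]+)", bvh_text).group(1))
    return frames, frame_time

# Illustrative BVH fragment; real files also contain a HIERARCHY section.
sample = "MOTION\nFrames: 480\nFrame Time: 0.008333\n"
frames, frame_time = read_motion_header(sample)
print(frames, round(1 / frame_time))  # 480 frames at ~120 fps
```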

### Data Fields

| Field | Type | Description |
| --- | --- | --- |

## Dataset Creation Process

The dataset was recorded in a motion capture suit with OptiTrack cameras, Manus gloves, and the iPhone LiveLink Metahuman Animator. The body motion data was recorded at 120 fps. The hand data was also captured at 120 fps, but due to network issues its effective frame rate is lower, so we recommend downsampling it to 60 fps when working with it.
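Since 60 fps divides the 120 fps capture rate evenly, the downsampling can be done by simple frame decimation. A minimal sketch (array shapes are illustrative; if the capture rate drifted because of the network issues mentioned above, resampling to a uniform timeline may be preferable to plain decimation):

```python
import numpy as np

def downsample(frames: np.ndarray, src_fps: int = 120, dst_fps: int = 60) -> np.ndarray:
    """Keep every (src_fps // dst_fps)-th frame; assumes an integer fps ratio."""
    if src_fps % dst_fps != 0:
        raise ValueError("source fps must be an integer multiple of target fps")
    return frames[:: src_fps // dst_fps]

hands = np.zeros((480, 60))  # 480 frames x 60 hypothetical hand channels
print(downsample(hands).shape)  # (240, 60)
```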

The dataset was processed in Motive software. We solved the recordings and then manually edited mixed-up markers and occlusions where needed.

The data can be retargeted to a Metahuman avatar; see the tutorial on our GitHub: [metahuman-render-howto.md](https://github.com/Pandaklez/STS-mocap-dataset-v1-sample/blob/main/metahuman-render-howto.md). This is the recommended pipeline, because the face data is processed to the Metahuman Animator face control rig and is most compatible with it. We also provide videos of the face recordings, which can be used to extract ARKit blendshapes if needed.

## Considerations for Using the Data