Commit 9c6ddbf
Parent(s): e479720

Polish dataset card and switch dataset license to CC BY-NC 4.0

Files changed:
- README.md +186 -131
- metadata.json +1 -1
README.md
CHANGED
@@ -1,5 +1,5 @@
---
-license:
language:
- ase
- bfi
@@ -43,109 +43,171 @@ size_categories:
pretty_name: "SignVerse-2M: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages"
---
-It converts ~2 million publicly available sign language video clips covering **25+ sign languages** into unified [DWPose](https://github.com/IDEA-Research/DWPose) keypoint sequences, providing a standardized interface directly compatible with modern pose-driven generation and recognition pipelines.
-
-> [Project Page](https://signerx.github.io/SignVerse-2M) · [GitHub / Benchmark](https://github.com/SignerX/SignVerse-2M)
-
----
-
-| Compat. w/ modern generation | Direct | Requires re-processing |
-| Style-agnostic | Background / clothing removed | Mixed in |
-
-##
-
-|---|---|
-| Videos | ~39,000 |
-| Clips / segments | ~2,000,000 |
-| Sign languages | 25+ |
-| Pose backend | DWPose (RTMPose-based) |
-| Keypoints per frame | 18 body + 21×2 hands + 68 face = 128 total |
-| Frame rate | 24 FPS |
-| Annotation | Automatic (no manual keypoint labeling) |
-| Supervision signal | Auto-structured subtitles (segment-level + document-level) |
-| Release format | Per-video `.tar` shards containing `poses.npz` + caption JSON |
-
----
-| `asf` | Australian Sign Language (Auslan) | `ngt` | Sign Language of the Netherlands (NGT) |
-| `jsl` | Japanese Sign Language (JSL) | `kvk` | Korean Sign Language (KSL) |
-| `csl` | Chinese Sign Language (CSL) | `bzs` | Brazilian Sign Language (Libras) |
-| `lsm` | Mexican Sign Language (LSM) | `pjm` | Polish Sign Language (PJM) |
-
-```
-Sign_DWPose_NPZ_XXXXXX.tar
-└── {video_id}/
-    ├── poses.npz            # DWPose keypoints for all frames
-    ├── caption.json         # Structured subtitles + English supervision
-    └── {video_id}.complete
-```
```python
-
-"video_id":
-"fps":
-"num_frames":
-"frame_ids":
-"width":
-"height":
-"frames": [
{
"num_people": int,
-"frame_id":
-"width":
-"height":
"person_0": {
-"body":
-"face":
-"left_hand":
"right_hand": float[21, 3],
},
-#
},
...
]
}
```

-Keypoint coordinates are in

-### `caption.json`

```json
{

@@ -154,40 +216,41 @@ Keypoint coordinates are in **pixel space** (not normalized). Confidence scores
"title": "...",
"duration_s": 312.4,
"segments": [
-{"start": 0.0, "end": 4.2, "text": "
-...
],
-"document_text": "
"english_source": "native"
}
```

----

-## Quick start

-##

```python
-import

-with tarfile.open("Sign_DWPose_NPZ_000001.tar") as tar:
-    tar.extractall("./

-body_kps = frames[0]["person_0"]["body"]  # shape (18, 3) → (x, y, score)

-with open("extracted/{video_id}/caption.json") as f:
caption = json.load(f)
print(caption["segments"][0]["text"])
```

-##

```bash
python scripts/visualize_dwpose_npz.py \

@@ -196,7 +259,7 @@ python scripts/visualize_dwpose_npz.py \
--out viz/
```

-### Reproduce the

```bash
# Single machine

@@ -206,59 +269,52 @@ bash reproduce_independently.sh
bash reproduce_independently_slurm.sh
```

-> generated DWPose sequence → pose-space SL translator → spoken text → BLEU / ROUGE vs. original input

-A **SignDW Transformer** baseline (40M and 1.2B parameter variants) is provided. Benchmark code and model weights are at [github.com/SignerX/SignVerse-2M](https://github.com/SignerX/SignVerse-2M).

----

-- **Fine-grained hand detail.** The 21-keypoint hand model does not fully capture handshape distinctions needed for lexical discrimination.
-- **Non-manual features.** Facial expressions and mouth patterns carry linguistic meaning in many sign languages; the 68-point face model is a partial proxy.
-- **Language imbalance.** The corpus follows a long-tail distribution; ASL and a few other languages dominate total hours.
-- **Subtitle quality.** Captions are automatically structured from platform exports; mistranslations and alignment errors propagate into the supervision signal.
-- **Primary-signer assumption.** The pipeline indexes `person_0` as the primary signer; multi-signer frames may be misattributed.
-- **No manual annotation.** No human-verified keypoints or signer identity metadata are included.

-- Multilingual sign language generation (text → pose → video)
-- Pose-space sign language recognition and translation research
-- Cross-lingual transfer and adaptation studies
-- Compatibility and benchmarking with modern pose-driven video generation models

-- Definitive linguistic completeness claims for any specific sign language

## Citation

```bibtex
@inproceedings{fang2026signverse2m,
title = {{SignVerse-2M}: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages},

@@ -269,9 +325,8 @@ A **SignDW Transformer** baseline (40M and 1.2B parameter variants) is provided.
}
```

----

## License

---
license: cc-by-nc-4.0
language:
- ase
- bfi

pretty_name: "SignVerse-2M: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages"
---

<div align="center" style="position: relative; width: 100%; max-width: 1400px; margin: 0 auto 24px auto;">
  <img
    src="https://signerx.github.io/SignVerse-2M/static/images/background_gallery_big.gif"
    alt="SignVerse-2M cover"
    style="width: 100%; display: block; border-radius: 18px;"
  />
  <div
    style="
      position: absolute;
      inset: 0;
      display: flex;
      align-items: center;
      justify-content: center;
      background: linear-gradient(to bottom, rgba(0,0,0,0.18), rgba(0,0,0,0.28));
      border-radius: 18px;
    "
  >
    <div
      style="
        color: white;
        font-size: 3.4rem;
        font-weight: 800;
        letter-spacing: 0.04em;
        text-shadow: 0 4px 20px rgba(0,0,0,0.45);
      "
    >
      SignVerse-2M
    </div>
  </div>
</div>

## SignVerse-2M: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages

Links: [[Paper]]() | [[Dataset]](https://huggingface.co/datasets/SignerX/SignVerse-2M/tree/main/dataset) | [[Project Page]](https://signerx.github.io/SignVerse-2M)

**SignVerse-2M** is a large-scale multilingual pose-native dataset for sign language research. The dataset reorganizes publicly available sign language videos into a unified DWPose-based representation and releases the result as approximately **2 million clips** from **39,196 videos** covering **25+ sign languages**. Rather than distributing raw RGB video, SignVerse-2M provides per-frame body, hand, and face keypoints together with structured subtitle supervision, making the corpus directly usable for pose-conditioned sign language generation, recognition, and translation research.

## Overview

Existing large-scale sign language resources are typically organized as video-text corpora. That format is appropriate for RGB-based recognition or translation, but it is not the most natural interface for modern pose-driven generation pipelines, which increasingly operate on standardized human keypoint controls such as DWPose. SignVerse-2M addresses this mismatch by converting multilingual public sign language videos into a common pose space.

The release is intended to support research questions such as:

- multilingual sign language generation in pose space
- pose-based sign language recognition and translation
- cross-lingual transfer across heterogeneous sign language sources
- benchmarking of sign language motion representations under open-world conditions

## Key Characteristics

| Property | Value |
| --- | --- |
| Dataset name | SignVerse-2M |
| Core representation | DWPose keypoint sequences |
| Videos | 39,196 |
| Clips / subtitle segments | Approximately 2 million |
| Sign languages | 25+ |
| Frame rate | 24 FPS |
| Per-frame keypoints | 18 body + 21 left hand + 21 right hand + 68 face = 128 |
| Source type | Public multilingual sign language videos |
| Raw RGB frames released | No |
| Released supervision | Structured subtitle text and document-level text |
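
The per-frame keypoint budget in the table can be made concrete as index ranges into one concatenated `(128, 3)` array. The part sizes below follow the 18 + 21 + 21 + 68 layout stated above; the concatenation order itself is a hypothetical choice for this sketch (the released files store each part under its own key):

```python
# Illustrative index layout for one person's 128 keypoints per frame.
# Part sizes come from the dataset card; the ordering is an assumption.
PART_SIZES = {"body": 18, "left_hand": 21, "right_hand": 21, "face": 68}

def part_slices(sizes):
    """Return a contiguous slice for each part, in insertion order."""
    slices, start = {}, 0
    for name, size in sizes.items():
        slices[name] = slice(start, start + size)
        start += size
    return slices, start

SLICES, TOTAL = part_slices(PART_SIZES)
print(TOTAL)           # 128 keypoints per frame in total
print(SLICES["face"])  # slice(60, 128)
```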

## Why A Pose-Native Release

SignVerse-2M should not be understood as merely a larger multilingual video-text corpus. Its main contribution is the release of a unified pose-native interface for sign language research.

Compared with raw-video releases, the pose-native representation offers three practical advantages:

1. It reduces nuisance variation from background, clothing, and appearance, allowing models to focus more directly on motion.
2. It aligns naturally with contemporary pose-conditioned generation pipelines that already consume DWPose-like controls.
3. It provides a common representation for multilingual benchmarking, making comparisons across methods more interpretable.

## Data Source And Processing

The corpus is built from publicly available multilingual sign language videos, including resources inherited from large public sign language collections such as YouTube-SL-25 and related open web sources. Each video is processed through a unified pipeline that:

1. retrieves metadata and available subtitles,
2. structures subtitle tracks into segment-level and document-level text,
3. decodes the video at 24 FPS,
4. applies DWPose to extract body, hand, and face keypoints frame by frame,
5. packages the outputs into per-video artifacts for public release.

No manual keypoint annotation is provided. The keypoints and subtitles are produced automatically through the preprocessing pipeline.
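
Step 2 above, turning timed subtitle cues into segment-level records plus one document-level string, can be sketched as follows. The helper is hypothetical (not the released pipeline code); the segment format mirrors the `caption.json` schema on this card:

```python
# Hypothetical sketch of subtitle structuring: timed cues in,
# segment-level records plus one document-level string out.
def structure_subtitles(raw_segments):
    segments = [
        {"start": float(s), "end": float(e), "text": t.strip()}
        for (s, e, t) in raw_segments
        if t.strip()  # drop empty cues
    ]
    document_text = " ".join(seg["text"] for seg in segments)
    return {"segments": segments, "document_text": document_text}

doc = structure_subtitles([(0.0, 4.2, "Hello, "), (4.2, 7.9, "welcome back.")])
print(doc["document_text"])  # Hello, welcome back.
```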

## Languages

The corpus covers more than 25 sign languages. Major language codes in the current release include:

| Code | Language | Code | Language |
| --- | --- | --- | --- |
| `ase` | American Sign Language | `lsf` | French Sign Language |
| `bfi` | British Sign Language | `lse` | Spanish Sign Language |
| `gsg` | German Sign Language | `lis` | Italian Sign Language |
| `sgd` | Swiss German Sign Language | `lgp` | Portuguese Sign Language |
| `asf` | Australian Sign Language | `ngt` | Sign Language of the Netherlands |
| `jsl` | Japanese Sign Language | `kvk` | Korean Sign Language |
| `csl` | Chinese Sign Language | `bzs` | Brazilian Sign Language |
| `lsm` | Mexican Sign Language | `pjm` | Polish Sign Language |

The language distribution is long-tailed rather than balanced. High-resource languages account for a disproportionate share of the total data volume.

## Repository Structure

The public release is organized around `.tar` shards stored under `dataset/`. Each shard contains per-video directories:

```text
dataset/
  Sign_DWPose_NPZ_000001.tar
  Sign_DWPose_NPZ_000002.tar
  ...
```

Within each shard:

```text
{video_id}/
  poses.npz
  caption.json
  {video_id}.complete
```

The main files are:

- `poses.npz`: per-video DWPose payload with frame-wise keypoints
- `caption.json`: structured subtitle and supervision metadata
- `.complete`: completion marker produced by the processing pipeline
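
The shard layout above can be enumerated with nothing but the standard library. The sketch below fabricates a one-video shard in a temporary directory (the video id `demo01` and its contents are made up) and then groups member paths by their top-level directory, which is enough to list per-video records without extracting anything:

```python
import io
import json
import tarfile
import tempfile
from pathlib import Path

# Build a miniature shard that mimics the documented layout:
# {video_id}/poses.npz, {video_id}/caption.json, {video_id}/{video_id}.complete
tmp = Path(tempfile.mkdtemp())
shard = tmp / "Sign_DWPose_NPZ_000001.tar"
with tarfile.open(shard, "w") as tar:
    for name, payload in [
        ("demo01/poses.npz", b"\x00"),  # placeholder bytes, not a real npz
        ("demo01/caption.json", json.dumps({"title": "demo"}).encode()),
        ("demo01/demo01.complete", b""),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Enumerate per-video directories without extracting the shard.
with tarfile.open(shard) as tar:
    videos = sorted({m.name.split("/", 1)[0] for m in tar.getmembers()})
print(videos)  # ['demo01']
```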

## Data Schema

### `poses.npz`

Each `poses.npz` file stores a person-centric per-frame representation. A simplified schema is shown below:

```python
{
    "video_id": str,
    "fps": float,
    "num_frames": int,
    "frame_ids": int[T],
    "width": int,
    "height": int,
    "frames": [
        {
            "num_people": int,
            "frame_id": int,
            "width": int,
            "height": int,
            "person_0": {
                "body": float[18, 3],
                "face": float[68, 3],
                "left_hand": float[21, 3],
                "right_hand": float[21, 3],
            },
            # optional additional people:
            # "person_1": { ... }
        },
        ...
    ]
}
```

Keypoint coordinates are stored in pixel space as `(x, y, score)`, where confidence scores lie in `[0, 1]`.
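
Because coordinates are in pixel space, models that expect resolution-independent inputs need a normalization step using the per-frame `width` and `height`. A minimal sketch, with plain lists standing in for the stored arrays and a hypothetical 1280x720 frame:

```python
# Normalize (x, y, score) keypoints from pixel space to [0, 1] coordinates.
# Scores are already in [0, 1] and pass through unchanged.
def normalize_keypoints(kps, width, height):
    return [[x / width, y / height, score] for x, y, score in kps]

# Two toy body keypoints from a hypothetical 1280x720 frame.
body = [[640.0, 360.0, 0.98], [320.0, 180.0, 0.75]]
norm = normalize_keypoints(body, width=1280, height=720)
print(norm[0])  # [0.5, 0.5, 0.98]
```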

### `caption.json`

```json
{
  "title": "...",
  "duration_s": 312.4,
  "segments": [
    { "start": 0.0, "end": 4.2, "text": "..." }
  ],
  "document_text": "...",
  "english_source": "native"
}
```

The field `english_source` records whether the English supervision is native or automatically selected from an available translated subtitle track.
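
Since poses are decoded at a fixed 24 FPS, a caption segment maps to a contiguous range of frame indices. A sketch of that alignment (the half-open rounding convention here is a hypothetical choice, not something the release specifies):

```python
FPS = 24  # fixed decode rate stated on this card

def segment_frame_range(segment, fps=FPS):
    """Map a caption segment to a half-open [first, last) frame index range."""
    first = int(segment["start"] * fps)
    last = int(segment["end"] * fps)
    return first, last

seg = {"start": 0.0, "end": 4.2, "text": "..."}
first, last = segment_frame_range(seg)
print(first, last)  # 0 100
```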

## Loading Example

```python
import json
import tarfile

import numpy as np

with tarfile.open("dataset/Sign_DWPose_NPZ_000001.tar") as tar:
    tar.extractall("./tmp_signverse")

npz = np.load("./tmp_signverse/{video_id}/poses.npz", allow_pickle=True)
frames = npz["frames"].tolist()
body = frames[0]["person_0"]["body"]

with open("./tmp_signverse/{video_id}/caption.json", "r", encoding="utf-8") as f:
    caption = json.load(f)

print(body.shape)
print(caption["segments"][0]["text"])
```
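
Extracting a whole shard is unnecessary when only one file is needed; `tarfile.extractfile` can read a single member in place. The sketch below first fabricates a tiny stand-in shard so it runs anywhere (the `demo01` video id and caption contents are made up), then reads one `caption.json` directly from the archive:

```python
import io
import json
import tarfile
import tempfile
from pathlib import Path

# Fabricate a minimal shard so the selective read below is self-contained.
shard = Path(tempfile.mkdtemp()) / "Sign_DWPose_NPZ_000001.tar"
caption_bytes = json.dumps(
    {"segments": [{"start": 0.0, "end": 4.2, "text": "hello"}]}
).encode()
with tarfile.open(shard, "w") as tar:
    info = tarfile.TarInfo("demo01/caption.json")
    info.size = len(caption_bytes)
    tar.addfile(info, io.BytesIO(caption_bytes))

# Read one member without extracting the archive to disk.
with tarfile.open(shard) as tar:
    with tar.extractfile("demo01/caption.json") as f:
        caption = json.load(f)
print(caption["segments"][0]["text"])  # hello
```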

## Visualization And Reproduction

The repository includes scripts for inspecting the released pose files and for reproducing the processing pipeline.

### Visualize one pose file

```bash
python scripts/visualize_dwpose_npz.py \
    --out viz/
```

### Reproduce the pipeline

```bash
# Single machine
bash reproduce_independently.sh

bash reproduce_independently_slurm.sh
```

The pipeline is organized into acquisition, subtitle structuring, pose extraction, and upload/publication stages.

## Benchmark Setting

The accompanying paper introduces a multilingual **text-to-pose** benchmark for sign language generation. A generated DWPose sequence is evaluated through back-translation into spoken text, and standard text metrics such as BLEU and ROUGE are reported against the source input. The benchmark repository also provides a **SignDW Transformer** baseline in both small and large model configurations.

For model code and experimental setup, refer to the benchmark repository:

- [https://github.com/SignerX/SignVerse-2M](https://github.com/SignerX/SignVerse-2M)
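
The back-translation loop compares the text recovered from a generated pose sequence against the original input. As a toy stand-in for the scoring step (an illustrative clipped unigram precision, the 1-gram building block of BLEU, not the paper's metric implementation):

```python
from collections import Counter

def unigram_precision(hypothesis, reference):
    """Clipped unigram precision between two whitespace-tokenized strings."""
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp:
        return 0.0
    overlap = Counter(hyp) & Counter(ref)  # clip counts by the reference
    return sum(overlap.values()) / len(hyp)

source = "the weather is nice today"          # original input text
back_translated = "the weather is good today" # text recovered from poses
print(unigram_precision(back_translated, source))  # 0.8
```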

## Intended Use

The release is intended for research use, including:

- sign language generation from text via pose space
- pose-based sign language translation and recognition
- cross-lingual transfer, adaptation, and benchmarking
- comparison of pose-native motion representations under open-world distributions

The release is not intended for:

- safety-critical interpretation in medical, legal, or emergency settings
- re-identification of individual signers
- claims of full linguistic coverage for any specific sign language

## Limitations

Users should account for the following limitations when interpreting results:

- Keypoints are extracted automatically and may be noisy under fast motion, occlusion, or multi-person scenes.
- Fine-grained handshape distinctions are only partially captured by the released 21-keypoint hand representation.
- Non-manual linguistic signals such as facial expression and mouthing are only partially represented by 68 face landmarks.
- Subtitle timing and translations are automatically processed and may contain alignment or semantic errors.
- The corpus is language-imbalanced and inherits the long-tail distribution of public web video sources.
- `person_0` is treated as the primary signer, which may be imperfect in multi-signer videos.

## Responsible Use

SignVerse-2M is derived from publicly posted sign language videos. This repository does **not** redistribute raw RGB videos; it releases pose keypoints and structured subtitle text only. Even so, pose sequences may still carry information that can contribute to signer identification when combined with external metadata. Users should treat the corpus as human-subject-derived data and use it responsibly.

The data distribution is also shaped by what is publicly available online. Educational or interpreter-style content may be overrepresented, while conversational, regional, or community-specific signing practices may be underrepresented.

## Citation

If you use SignVerse-2M in academic work, please cite:

```bibtex
@inproceedings{fang2026signverse2m,
  title     = {{SignVerse-2M}: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages},
  author    = {Fang, Sen and Zhong, Hongbin and Zhang, Yanxin and Metaxas, Dimitris N.},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2026},
  note      = {Evaluations \& Datasets Track}
}
```

## License

The released dataset annotations, pose keypoints, and accompanying metadata are distributed under **CC BY-NC 4.0**.

Source videos are **not** redistributed in this repository and remain subject to the original platform terms and the rights of their respective creators.
metadata.json
CHANGED

@@ -40,7 +40,7 @@
"conformsTo": "http://mlcommons.org/croissant/1.0",
"description": "SignVerse-2M is a large-scale multilingual pose-native dataset for sign language research. It converts approximately two million publicly available sign language video clips covering 25+ sign languages into unified DWPose keypoint sequences (18 body + 21x2 hand + 68 face keypoints per frame at 24 FPS), providing a standardized data interface directly compatible with modern pose-driven generation and recognition pipelines.",
"citeAs": "@inproceedings{fang2026signverse2m, title={{SignVerse-2M}: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages}, author={Fang, Sen and Zhong, Hongbin and Zhang, Yanxin and Metaxas, Dimitris N.}, booktitle={Advances in Neural Information Processing Systems (NeurIPS)}, year={2026}, note={Evaluations \\& Datasets Track}}",
-"license": "https://
+"license": "https://creativecommons.org/licenses/by-nc/4.0/",
"url": "https://huggingface.co/datasets/SignerX/SignVerse-2M",
"version": "1.0.0",
"keywords": [