---
license: cc-by-nc-4.0
language:
  - ase
  - bfi
  - gsg
  - sgd
  - fsl
  - lsf
  - lse
  - lis
  - lgp
  - ngt
  - asf
  - jsl
  - kvk
  - csl
  - aed
  - tsm
  - pjm
  - rsl
  - swl
  - dsl
  - fse
  - nsl
  - lsc
  - lsm
  - bzs
task_categories:
  - other
tags:
  - sign-language
  - pose-estimation
  - dwpose
  - multilingual
  - keypoint
  - video-understanding
  - sign-language-generation
  - sign-language-recognition
  - pose-native
size_categories:
  - 1M<n<10M
pretty_name: 'SignVerse-2M: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages'
---

SignVerse-2M: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages

Links: [Paper] | [Data Files] | [Project Page]

SignVerse-2M is a large-scale multilingual pose-native dataset for sign language research. The dataset reorganizes publicly available sign language videos into a unified DWPose-based representation and releases the result as approximately 2 million clips from 39,196 videos covering 25+ sign languages. Rather than distributing raw RGB video, SignVerse-2M provides per-frame body, hand, and face keypoints together with structured subtitle supervision, making the corpus directly usable for pose-conditioned sign language generation, recognition, and translation research.

Overview

Existing large-scale sign language resources are typically organized as video-text corpora. That format is appropriate for RGB-based recognition or translation, but it is not the most natural interface for modern pose-driven generation pipelines, which increasingly operate on standardized human keypoint controls such as DWPose. SignVerse-2M addresses this mismatch by converting multilingual public sign language videos into a common pose space.

The release is intended to support research questions such as:

  • multilingual sign language generation in pose space
  • pose-based sign language recognition and translation
  • cross-lingual transfer across heterogeneous sign language sources
  • benchmarking of sign language motion representations under open-world conditions

Key Characteristics

| Property | Value |
| --- | --- |
| Dataset name | SignVerse-2M |
| Core representation | DWPose keypoint sequences |
| Videos | 39,196 |
| Clips / subtitle segments | Approximately 2 million |
| Sign languages | 25+ |
| Frame rate | 24 FPS |
| Per-frame keypoints | 18 body + 21 left hand + 21 right hand + 68 face = 128 |
| Source type | Public multilingual sign language videos |
| Raw RGB frames released | No |
| Released supervision | Structured subtitle text and document-level text |
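
The 128-point layout maps directly onto a flat per-frame array. The sketch below shows one way to build it, assuming the per-person arrays follow the poses.npz schema described under Data Schema; the concatenation order simply mirrors the table above and is not mandated by the release.

```python
import numpy as np

def flatten_person(person: dict) -> np.ndarray:
    """Stack one person's keypoints into a (128, 3) array of (x, y, score).

    Layout: 18 body + 21 left hand + 21 right hand + 68 face = 128 points.
    """
    parts = (person["body"], person["left_hand"], person["right_hand"], person["face"])
    return np.concatenate([np.asarray(p, dtype=np.float32) for p in parts], axis=0)
```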

Why A Pose-Native Release

SignVerse-2M should not be understood as merely a larger multilingual video-text corpus. Its main contribution is the release of a unified pose-native interface for sign language research.

Compared with raw-video releases, the pose-native representation offers three practical advantages:

  1. It reduces nuisance variation from background, clothing, and appearance, allowing models to focus more directly on motion.
  2. It aligns naturally with contemporary pose-conditioned generation pipelines that already consume DWPose-like controls.
  3. It provides a common representation for multilingual benchmarking, making comparisons across methods more interpretable.

Data Source And Processing

The corpus is built from publicly available multilingual sign language videos, including resources inherited from large public sign language collections such as YouTube-SL-25 and related open web sources. Each video is processed through a unified pipeline that:

  1. retrieves metadata and available subtitles,
  2. structures subtitle tracks into segment-level and document-level text,
  3. decodes the video at 24 FPS,
  4. applies DWPose to extract body, hand, and face keypoints frame by frame,
  5. packages the outputs into per-video artifacts for public release.

No manual keypoint annotation is provided. The keypoints and subtitles are produced automatically through the preprocessing pipeline.
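
For orientation, the sketch below emulates the packaging step (step 5 above) against the schema described later in this card; the exact keys and compression used by the official pipeline may differ.

```python
import json
import numpy as np

def package_video(out_dir: str, video_id: str, fps: float, width: int,
                  height: int, frames: list, caption: dict) -> None:
    """Write poses.npz, caption.json, and the .complete marker for one video."""
    np.savez_compressed(
        f"{out_dir}/poses.npz",
        video_id=video_id,
        fps=fps,
        num_frames=len(frames),
        frame_ids=np.arange(len(frames)),
        width=width,
        height=height,
        # Per-frame dicts go in as an object array; loading needs allow_pickle=True.
        frames=np.asarray(frames, dtype=object),
    )
    with open(f"{out_dir}/caption.json", "w", encoding="utf-8") as f:
        json.dump(caption, f, ensure_ascii=False)
    open(f"{out_dir}/{video_id}.complete", "w").close()
```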

Languages

The corpus covers more than 25 sign languages. Major language codes in the current release include:

| Code | Language | Code | Language |
| --- | --- | --- | --- |
| ase | American Sign Language | lsf | French Sign Language |
| bfi | British Sign Language | lse | Spanish Sign Language |
| gsg | German Sign Language | lis | Italian Sign Language |
| sgd | Swiss German Sign Language | lgp | Portuguese Sign Language |
| asf | Australian Sign Language | ngt | Sign Language of the Netherlands |
| jsl | Japanese Sign Language | kvk | Korean Sign Language |
| csl | Chinese Sign Language | bzs | Brazilian Sign Language |
| lsm | Mexican Sign Language | pjm | Polish Sign Language |

The language distribution is long-tailed rather than balanced. High-resource languages account for a disproportionate share of the total data volume.
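
One way to inspect this skew is to count the sign_language field of each caption.json directly from the shards. A sketch, assuming the shard layout described under Repository Structure below:

```python
import glob
import json
import tarfile
from collections import Counter

counts = Counter()
for shard in sorted(glob.glob("dataset/Sign_DWPose_NPZ_*.tar")):
    with tarfile.open(shard) as tar:
        for member in tar.getmembers():
            if member.name.endswith("caption.json"):
                counts[json.load(tar.extractfile(member))["sign_language"]] += 1

print(counts.most_common(10))  # the head languages dominate the tail
```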

Repository Structure

The public release is organized around .tar shards stored under dataset/. Each shard contains per-video directories:

```
dataset/
  Sign_DWPose_NPZ_000001.tar
  Sign_DWPose_NPZ_000002.tar
  ...
```

Within each shard:

```
{video_id}/
  poses.npz
  caption.json
  {video_id}.complete
```

The main files are:

  • poses.npz: per-video DWPose payload with frame-wise keypoints
  • caption.json: structured subtitle and supervision metadata
  • .complete: completion marker produced by the processing pipeline
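
The .complete marker can be used to filter a shard to fully processed videos before extracting anything. A minimal sketch, assuming members are stored under {video_id}/ as shown above:

```python
import tarfile

def completed_video_ids(shard_path: str) -> list[str]:
    """List the video ids in a shard whose .complete marker is present."""
    with tarfile.open(shard_path) as tar:
        names = tar.getnames()
    return sorted({n.split("/")[0] for n in names if n.endswith(".complete")})
```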

Data Schema

poses.npz

Each poses.npz file stores a person-centric per-frame representation. A simplified schema is shown below:

```
{
    "video_id": str,
    "fps": float,
    "num_frames": int,
    "frame_ids": int[T],
    "width": int,
    "height": int,
    "frames": [
        {
            "num_people": int,
            "frame_id": int,
            "width": int,
            "height": int,
            "person_0": {
                "body": float[18, 3],
                "face": float[68, 3],
                "left_hand": float[21, 3],
                "right_hand": float[21, 3],
            },
            # optional additional people:
            # "person_1": { ... }
        },
        ...
    ]
}
```

Keypoint coordinates are stored in pixel space as (x, y, score), where confidence scores lie in [0, 1].
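
Pixel-space coordinates usually need normalizing before model training. A sketch of one common convention (the 0.3 confidence threshold is illustrative, not part of the release):

```python
import numpy as np

def normalize_keypoints(points: np.ndarray, width: int, height: int,
                        min_score: float = 0.3) -> np.ndarray:
    """Map (N, 3) pixel-space (x, y, score) keypoints into [0, 1] coordinates.

    Points scoring below `min_score` are set to NaN so downstream code can mask them.
    """
    out = np.asarray(points, dtype=np.float32).copy()
    out[:, 0] /= width
    out[:, 1] /= height
    out[out[:, 2] < min_score, :2] = np.nan
    return out
```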

caption.json

```json
{
  "video_id": "...",
  "sign_language": "ase",
  "title": "...",
  "duration_s": 312.4,
  "segments": [
    { "start": 0.0, "end": 4.2, "text": "..." }
  ],
  "document_text": "...",
  "english_source": "native"
}
```

The field english_source records whether the English supervision is native or automatically selected from an available translated subtitle track.
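
Segments align to pose frames through the stored fps. A small helper, assuming the fields shown above (the half-open frame range is a convention choice, not mandated by the release):

```python
def segment_frame_range(segment: dict, fps: float, num_frames: int) -> range:
    """Map a segment's [start, end) seconds onto pose frame indices."""
    first = int(segment["start"] * fps)
    last = min(int(segment["end"] * fps), num_frames)
    return range(first, last)

# e.g. frames[r.start:r.stop] with r = segment_frame_range(seg, 24.0, num_frames)
```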

Loading Example

```python
import json
import tarfile
import numpy as np

# Extract one shard into a working directory.
with tarfile.open("dataset/Sign_DWPose_NPZ_000001.tar") as tar:
    tar.extractall("./tmp_signverse")

video_id = "..."  # one of the per-video directories inside the shard

# frames is stored as an object array, so allow_pickle is required.
npz = np.load(f"./tmp_signverse/{video_id}/poses.npz", allow_pickle=True)
frames = npz["frames"].tolist()
body = frames[0]["person_0"]["body"]  # (18, 3) array of (x, y, score)

with open(f"./tmp_signverse/{video_id}/caption.json", "r", encoding="utf-8") as f:
    caption = json.load(f)

print(body.shape)
print(caption["segments"][0]["text"])
```

Visualization And Reproduction

The repository includes scripts for inspecting the released pose files and for reproducing the processing pipeline.

Visualize one pose file

```bash
python scripts/visualize_dwpose_npz.py \
    --npz extracted/{video_id}/poses.npz \
    --style openpose \
    --out viz/
```

Reproduce the pipeline

```bash
# Single machine
bash reproduce_independently.sh

# SLURM cluster
bash reproduce_independently_slurm.sh
```

The pipeline is organized into acquisition, subtitle structuring, pose extraction, and upload/publication stages.

Benchmark Setting

The accompanying paper introduces a multilingual text-to-pose benchmark for sign language generation. A generated DWPose sequence is evaluated through back-translation into spoken text, and standard text metrics such as BLEU and ROUGE are reported against the source input. The benchmark repository also provides a SignDW Transformer baseline in both small and large model configurations.
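
For orientation only, the scoring side of such a back-translation loop might look like the sketch below, using the sacrebleu and rouge_score packages; the actual metrics and back-translation model are defined by the benchmark repository.

```python
import sacrebleu
from rouge_score import rouge_scorer

def score_back_translations(hypotheses: list[str], references: list[str]) -> dict:
    """Score back-translated text against the source inputs with BLEU and ROUGE-L."""
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = sum(
        scorer.score(ref, hyp)["rougeL"].fmeasure
        for ref, hyp in zip(references, hypotheses)
    ) / len(hypotheses)
    return {"bleu": bleu.score, "rougeL": rouge_l}
```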

For model code and the experimental setup, refer to the benchmark repository.

Intended Use

The release is intended for research use, including:

  • sign language generation from text via pose space
  • pose-based sign language translation and recognition
  • cross-lingual transfer, adaptation, and benchmarking
  • comparison of pose-native motion representations under open-world distributions

The release is not intended for:

  • safety-critical interpretation in medical, legal, or emergency settings
  • re-identification of individual signers
  • claims of full linguistic coverage for any specific sign language

Responsible Use

SignVerse-2M is derived from publicly posted sign language videos. This repository does not redistribute raw RGB videos; it releases pose keypoints and structured subtitle text only. Even so, pose sequences may still carry information that can contribute to signer identification when combined with external metadata. Users should treat the corpus as human-subject-derived data and use it responsibly.

The data distribution is also shaped by what is publicly available online. Educational or interpreter-style content may be overrepresented, while conversational, regional, or community-specific signing practices may be underrepresented.

Citation

If you use SignVerse-2M in academic work, please cite:

```bibtex
@misc{fang2026signverse2mtwomillionclipposenativeuniverse,
  title={SignVerse-2M: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages},
  author={Sen Fang and Hongbin Zhong and Yanxin Zhang and Dimitris N. Metaxas},
  year={2026},
  eprint={2605.01720},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2605.01720},
}
```

License

The released dataset annotations, pose keypoints, and accompanying metadata are distributed under CC BY-NC 4.0.

Source videos are not redistributed in this repository and remain subject to the original platform terms and the rights of their respective creators.