---
tags:
  - sign-language
  - how2sign
  - features
---

# How2Sign — Extracted Features

Pre-computed features from the How2Sign dataset.

## Layout

Each modality is split into train / test / val and packed into ~3 GB tar shards.

| Modality | Content | Approx. size |
|---|---|---|
| `depth_rendered` | rendered depth-map JPGs per clip | ~39 GB |
| `poses_rendered` | rendered pose-skeleton JPGs per clip | ~43 GB |
| `poses` | raw pose `.npy` per clip | ~18 GB |
| `optical_flow` | optical-flow `.npy` per clip | ~3 GB |
| `optical_flow_rendered` | rendered optical-flow JPGs per clip (stride 2) | ~11 GB |
| `processed_english_translations` | translations CSV per split | ~6 MB |

Inside each tar, paths are relative to the split, e.g.:

```
<clip_name>/00000.jpg     # rendered modalities
<clip_name>.npy           # npy modalities
```
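To inspect a shard's contents without extracting it, the standard-library `tarfile` module can list its members. A minimal sketch; the shard below is built in memory purely to mirror the layout above, whereas a real shard would be opened directly with `tarfile.open(path_to_shard)`:

```python
import io
import tarfile

# Build a tiny in-memory tar mimicking the layout described above.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("some_clip/00000.jpg")
    payload = b"fake jpg bytes"
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# List member paths; they are relative to the split, as shown above.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    members = tar.getnames()

print(members)  # ['some_clip/00000.jpg']
```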

## Notes on .npy files

Each poses `.npy` file is a 0-d object array wrapping a Python object (typically a dict). Load it with:

```python
import numpy as np

# allow_pickle=True is required because the array wraps a Python object
data = np.load("clip.npy", allow_pickle=True).item()
```
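As a sanity check, this is the round trip NumPy performs for such files. A sketch with synthetic data; the dict keys and array shape here are made up, and real files contain the actual pose data:

```python
import numpy as np

# Saving a plain dict produces exactly this kind of 0-d object array.
# The keys and shape below are placeholders, not the dataset's schema.
fake_poses = {"keypoints": np.zeros((10, 137, 2)), "fps": 24}
np.save("demo_clip.npy", fake_poses)

loaded = np.load("demo_clip.npy", allow_pickle=True)
print(loaded.shape)   # (), a 0-d array
data = loaded.item()  # unwrap back to the dict
```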

## Downloading

Everything:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Alexeus17071/How2Sign_with_features",
    repo_type="dataset",
    local_dir="./how2sign_features",
)
```

Just one modality/split:

```python
snapshot_download(
    repo_id="Alexeus17071/How2Sign_with_features",
    repo_type="dataset",
    local_dir="./how2sign_features",
    allow_patterns=["poses/train/*"],
)
```

Extract:

```shell
mkdir -p extracted/poses/train
for f in how2sign_features/poses/train/*.tar; do
    tar -xf "$f" -C extracted/poses/train/
done
```
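The same extraction loop in Python, which may be handier on platforms without a POSIX shell. A sketch using only the standard library, with the same paths as the shell example:

```python
import glob
import pathlib
import tarfile

# Destination mirroring the shell example above.
dst = pathlib.Path("extracted/poses/train")
dst.mkdir(parents=True, exist_ok=True)

# Extract every downloaded shard; member paths inside each tar
# are already relative to the split, so no stripping is needed.
for shard in sorted(glob.glob("how2sign_features/poses/train/*.tar")):
    with tarfile.open(shard) as tar:
        tar.extractall(dst)
```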