---
tags:
- sign-language
- how2sign
- features
---

# How2Sign — Extracted Features

Pre-computed features from the [How2Sign](https://huggingface.co/datasets/aipieces/How2Sign) dataset.

## Layout

Each modality is split by `train` / `test` / `val` and packed into ~3 GB tar shards.

| Modality                         | Content                                        | Approx. size |
|----------------------------------|------------------------------------------------|--------------|
| `depth_rendered`                 | rendered depth-map JPGs per clip               | ~39 GB       |
| `poses_rendered`                 | rendered pose-skeleton JPGs per clip           | ~43 GB       |
| `poses`                          | raw pose `.npy` per clip                       | ~18 GB       |
| `optical_flow`                   | raw optical-flow `.npy` per clip               | ~3 GB        |
| `optical_flow_rendered`          | rendered optical-flow JPGs per clip (stride 2) | ~11 GB       |
| `processed_english_translations` | translation CSVs per split                     | ~6 MB        |

Inside each tar, paths are relative to the split, e.g.:

```
/00000.jpg   # rendered modalities
.npy         # npy modalities
```

## Notes on `.npy` files

Each pose `.npy` file is a 0-d object array wrapping a Python object (typically a dict). Load it with:

```python
import numpy as np

data = np.load("clip.npy", allow_pickle=True).item()
```

## Downloading

Everything:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Alexeus17071/How2Sign_with_features",
    repo_type="dataset",
    local_dir="./how2sign_features",
)
```

Just one modality/split:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Alexeus17071/How2Sign_with_features",
    repo_type="dataset",
    local_dir="./how2sign_features",
    allow_patterns=["poses/train/*"],
)
```

Extract:

```bash
mkdir -p extracted/poses/train
for f in how2sign_features/poses/train/*.tar; do
  tar -xf "$f" -C extracted/poses/train/
done
```
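
After extraction, a quick sanity check is to walk the extracted pose directory and inspect what each clip's wrapped object contains. The sketch below is a minimal example, not official tooling for this dataset; it assumes the `extracted/poses/train/` layout produced by the commands above and makes no assumptions about the dict keys, so it simply prints them.

```python
from pathlib import Path

import numpy as np

# Assumed layout from the extraction step above (adjust if you extracted elsewhere).
pose_dir = Path("extracted/poses/train")

for npy_path in sorted(pose_dir.glob("**/*.npy")):
    # Each file is a 0-d object array; .item() unwraps the underlying Python object.
    data = np.load(npy_path, allow_pickle=True).item()
    if isinstance(data, dict):
        print(npy_path.name, sorted(data.keys()))
    else:
        print(npy_path.name, type(data))
```

The same pattern applies to the `optical_flow` shards, which are also per-clip `.npy` files.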