---
pretty_name: How2Sign Holistic
language: en
license:
- mit
tags:
- sign-language
- asl
- mediapipe
- holistic
- pose-landmarks
- hand-landmarks
- face-landmarks
- gesture-recognition
- sequence-modeling
- time-series
- computer-vision
- deep-learning
source_datasets:
- Duarte_CVPR2021/How2Sign
task_categories:
- feature-extraction
- translation
task_ids:
- pose-estimation
- conversational
citation:
- "@inproceedings{Duarte_CVPR2021, title={{How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language}}, author={Duarte, Amanda and Palaskar, Shruti and Ventura, Lucas and Ghadiyaram, Deepti and DeHaan, Kenneth and Metze, Florian and Torres, Jordi and Giro-i-Nieto, Xavier}, booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2021}}"
- "@misc{MediaPipe, title={MediaPipe}, author={Google Inc.}, year={2020}, url={https://mediapipe.dev/}}"
---

# How2Sign Holistic

### Mediapipe Holistic Landmark Features Extracted from the How2Sign ASL Dataset

## Overview

**How2Sign Holistic** is a curated dataset providing frame-level Mediapipe Holistic landmarks extracted from the full How2Sign American Sign Language corpus. Each sentence-level video clip has pose, face, and hand landmark sequences stored as `.npy` files.

This dataset is designed to support research in:

- ASL recognition and translation
- Pose-based sign generation
- Sequence and time-series modeling
- Gesture understanding
- Multiview motion analysis

## Base Directory

**`how2sign_holistic_features/`** is the root folder containing all splits and metadata.

## Sources

The original data comes from the **How2Sign dataset** (Duarte et al., CVPR 2021), a large-scale multimodal American Sign Language dataset sourced from YouTube videos.

## Collection Methodology

- Sentence-level clips were extracted from the original videos following the How2Sign protocol.
- Frame-level landmarks were extracted using **Google Mediapipe Holistic** (pose, face, hands).
- Each clip is saved as a `.npy` file, with separate frontal and side views.
- Metadata CSVs map clips to sentences, start/end timestamps, and video identifiers.
- The CSVs are tab-separated and can be opened with pandas: `pd.read_csv('filename.csv', sep='\t')` (see the loading example below).

## Dataset Structure

```
how2sign_holistic_features/
│
├── metadata/                      # Original How2Sign metadata (CSV files)
│   ├── how2sign_realigned_train.csv
│   ├── how2sign_realigned_val.csv
│   ├── how2sign_realigned_test.csv
│   ├── how2sign_train.csv
│   ├── how2sign_val.csv
│   └── how2sign_test.csv
│
├── train/                         # Training split .npy files
│   ├── frontal/
│   │   ├── _front_holistic.npy
│   │   └── ...
│   └── side/
│       ├── _side_holistic.npy
│       └── ...
│
├── val/                           # Validation split
│   ├── frontal/
│   └── side/
│
└── test/                          # Test split
    ├── frontal/
    └── side/
```

### Notes

- `.npy` files contain **frame-level Mediapipe Holistic landmarks**.
- Frontal and side views are synchronized.
- Filenames follow the pattern `VIDEO_NAME_START-END-rgb_front_holistic.npy` (frontal view) and `VIDEO_NAME_START-END-rgb_side_holistic.npy` (side view).
- Metadata CSVs map clips to video ID, sentence, start/end timestamps, and How2Sign identifiers.
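## Example Usage

The metadata CSVs are tab-separated despite their `.csv` extension. Below is a minimal loading sketch, assuming the layout above; the column names mentioned in the comments (e.g. `SENTENCE_NAME`, `SENTENCE`) are assumptions based on the public How2Sign metadata format and should be verified against the files themselves.

```python
from pathlib import Path

import pandas as pd

ROOT = Path("how2sign_holistic_features")

# Tab-separated, as noted under Collection Methodology.
train_meta = pd.read_csv(ROOT / "metadata" / "how2sign_realigned_train.csv", sep="\t")

# Inspect the columns before relying on specific names; SENTENCE_NAME and
# SENTENCE are assumptions based on the public How2Sign metadata.
print(train_meta.columns.tolist())
print(train_meta.head())
```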
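Each clip is a plain NumPy array. The sketch below loads the synchronized frontal and side sequences of one clip; the clip stem is a hypothetical placeholder, and the per-frame packing of pose, face, and hand landmarks is not specified above, so check the array shapes before building on them.

```python
from pathlib import Path

import numpy as np

ROOT = Path("how2sign_holistic_features")

# Hypothetical clip stem; real stems come from the metadata CSVs and the
# VIDEO_NAME_START-END filename pattern described in the Notes.
clip = "VIDEO_NAME_START-END"

front = np.load(ROOT / "train" / "frontal" / f"{clip}-rgb_front_holistic.npy")
side = np.load(ROOT / "train" / "side" / f"{clip}-rgb_side_holistic.npy")

# The two views are synchronized, so frame counts should agree.
assert front.shape[0] == side.shape[0]
print(front.shape, side.shape)
```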
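For reference, this is roughly the kind of extraction pipeline described under Collection Methodology, written against the legacy `mediapipe.solutions` Holistic API. Flattening each frame into a single `(33 + 468 + 21 + 21) * 3` vector is an illustrative assumption, not necessarily the layout used in this dataset's `.npy` files.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic


def to_vector(landmark_list, n_points):
    """Flatten a Mediapipe landmark list to (n_points * 3,); zeros if undetected."""
    if landmark_list is None:
        return np.zeros(n_points * 3, dtype=np.float32)
    return np.array(
        [[lm.x, lm.y, lm.z] for lm in landmark_list.landmark], dtype=np.float32
    ).ravel()


def extract_clip(video_path: str) -> np.ndarray:
    """Run Mediapipe Holistic over every frame of a sentence-level clip."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(static_image_mode=False) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Mediapipe expects RGB; OpenCV decodes frames as BGR.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(np.concatenate([
                to_vector(results.pose_landmarks, 33),        # body pose
                to_vector(results.face_landmarks, 468),       # face mesh
                to_vector(results.left_hand_landmarks, 21),   # left hand
                to_vector(results.right_hand_landmarks, 21),  # right hand
            ]))
    cap.release()
    return np.stack(frames)  # (num_frames, (33 + 468 + 21 + 21) * 3)
```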
## Citation

If you use this dataset, please cite:

Duarte, A., Palaskar, S., Ventura, L., Ghadiyaram, D., DeHaan, K., Metze, F., Torres, J., & Giro-i-Nieto, X.
**"How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language."**
_Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021._

## Recommended Tags

`ASL`, `Sign Language`, `Mediapipe`, `Holistic`, `Pose Landmarks`, `Hand Landmarks`, `Face Landmarks`, `Keypoints`, `Motion Capture`, `Time Series`, `Gesture Recognition`, `Computer Vision`, `Deep Learning`, `Sequence Modeling`