---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
- ase
tags:
- sign-language
- ASL
- american-sign-language
- gesture-recognition
pretty_name: PopSign Images
size_categories:
- 100K<n<1M
configs:
- config_name: game
  data_files:
  - split: train
    path: data/game/train-*.parquet
  - split: validation
    path: data/game/validation-*.parquet
  - split: test
    path: data/game/test-*.parquet
- config_name: non-game
  data_files:
  - split: train
    path: data/non-game/train-*.parquet
  - split: validation
    path: data/non-game/validation-*.parquet
  - split: test
    path: data/non-game/test-*.parquet
---

# PopSign Images Dataset

This dataset contains frame sequences extracted from PopSign ASL (American Sign Language) video clips, organized for sign language recognition tasks.

## Dataset Description

The PopSign dataset consists of short video clips of isolated ASL signs. This version provides pre-extracted image frames from each video clip, suitable for training image-based or video-based models for sign language recognition.

### Subsets

The dataset contains two subsets:

- **game**: Signs collected in a gamified data collection environment
- **non-game**: Signs collected in a standard recording environment

### Splits

Each subset contains three splits:

- **train**: Training data
- **validation**: Validation data
- **test**: Test data

## Dataset Structure

### Features

| Column   | Type        | Description                                                       |
|----------|-------------|-------------------------------------------------------------------|
| `file`   | string      | Original video file path                                          |
| `start`  | float32     | Start time of the sign segment (seconds)                          |
| `end`    | float32     | End time of the sign segment (seconds)                            |
| `text`   | string      | The English gloss/label for the sign                              |
| `images` | list[Image] | Sequence of frames extracted from the video at 256x256 resolution |

### Frame Extraction

Frames are extracted at approximately 5 FPS from each video clip. The start and end times of each sign segment are determined by a cascading approach:

1. **Pose-based segmentation**: A heuristic detects when the signer's wrist is above their elbow, indicating active signing. This provides more accurate boundaries than model-based segmentation.
2. **EAF segmentation fallback**: If the pose-based method indicates signing throughout the entire video (the hands never rest), segmentation falls back to the automatic sign boundaries from EAF files.
3. **Full video duration**: If neither method provides a boundary, the entire video duration is used.

All frames are 256x256 pixels.

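The cascade above can be sketched as follows. This is a minimal illustration, not the pipeline's actual code: `signing_boundaries`, its argument names, and the per-frame coordinate arrays are hypothetical, and the real pipeline reads wrist and elbow positions from MediaPipe pose landmarks.

```python
import numpy as np

def signing_boundaries(wrist_y, elbow_y, fps, duration, eaf_segment=None):
    """Pick a (start, end) window in seconds using the cascade above.

    wrist_y / elbow_y are per-frame y coordinates in image space,
    where a SMALLER y means higher in the frame.
    eaf_segment is an optional (start, end) pair from EAF segmentation.
    """
    active = wrist_y < elbow_y          # 1. wrist above elbow -> actively signing
    if active.all():                    # 2. hands never rest -> EAF fallback
        return eaf_segment if eaf_segment is not None else (0.0, duration)
    if not active.any():                # 3. no boundary found -> full video
        return (0.0, duration)
    idx = np.flatnonzero(active)        # first/last actively-signing frames
    return (idx[0] / fps, (idx[-1] + 1) / fps)
```

For example, with 6 frames at 2 FPS where only frames 2–4 have the wrist above the elbow, the sketch returns the window (1.0, 2.5) seconds.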
## Usage

```python
from datasets import load_dataset

# Load the game subset
game_dataset = load_dataset("sign/popsign-images", "game")

# Load the non-game subset
non_game_dataset = load_dataset("sign/popsign-images", "non-game")

# Access a sample
sample = game_dataset["train"][0]
print(f"Sign: {sample['text']}")
print(f"Duration: {sample['end'] - sample['start']:.2f}s")
print(f"Number of frames: {len(sample['images'])}")

# Display the first frame
sample["images"][0].show()
```

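Each entry in the `images` column decodes to a PIL image, so a sample's frame sequence can be stacked into a single array for model input. A minimal sketch (the helper `frames_to_array` and the synthetic frames are illustrative assumptions, not part of the dataset API):

```python
import numpy as np
from PIL import Image

def frames_to_array(images):
    """Stack decoded PIL frames into one (T, H, W, C) uint8 array."""
    return np.stack([np.asarray(img.convert("RGB")) for img in images])

# Synthetic 256x256 frames standing in for sample["images"]
frames = [Image.new("RGB", (256, 256)) for _ in range(4)]
video = frames_to_array(frames)
print(video.shape)  # (4, 256, 256, 3)
```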
## Data Processing

The videos were processed using the following pipeline:

1. **Video Preprocessing**: Original videos are center-cropped to a square and rescaled to 256x256 pixels:
   ```bash
   ffmpeg -y -hide_banner -i input.mp4 \
     -vf "crop='min(iw\,ih)':'min(iw\,ih)':(iw-min(iw\,ih))/2:(ih-min(iw\,ih))/2,scale=256:256:flags=lanczos" \
     -c:v libx264 -preset ultrafast -crf 23 -an -movflags +faststart \
     output.mp4
   ```

2. **Pose Estimation**: MediaPipe pose estimation is applied:
   ```bash
   video_to_pose --format mediapipe -i video.mp4 -o video.pose \
     --additional-config="model_complexity=2,smooth_landmarks=false,refine_face_landmarks=true"
   ```

3. **Sign Boundary Detection**: A cascading approach identifies sign boundaries:
   - **Primary**: A pose-based heuristic detects frames where the wrist is above the elbow (indicating active signing)
   - **Fallback**: If the hands are raised throughout the video, automatic EAF segmentation is used:
     ```bash
     pose_to_segments --pose="video.pose" --elan="video.eaf" --video="video.mp4"
     ```

4. **Frame Extraction**: Frames are extracted from the identified sign segment at 5 FPS.

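The 5 FPS sampling in step 4 amounts to a timestamp schedule over the detected segment. A minimal sketch (the function `sample_timestamps` is illustrative, not the pipeline's actual code):

```python
import math

def sample_timestamps(start, end, fps=5.0):
    """Timestamps (in seconds) at which frames are pulled from [start, end)."""
    n = math.ceil((end - start) * fps)  # number of frames in the window
    return [start + i / fps for i in range(n)]

print(sample_timestamps(0.0, 1.0))  # [0.0, 0.2, 0.4, 0.6, 0.8]
```

Computing each timestamp as `start + i / fps` (rather than accumulating a floating-point step) keeps the frame count exact at segment boundaries.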
## Citation

If you use this dataset, please cite the original PopSign dataset:

```bibtex
@inproceedings{Starner2023PopSignAV,
  title={PopSign ASL v1.0: An Isolated American Sign Language Dataset Collected via Smartphones},
  author={Thad Starner and Sean Forbes and Matthew So and David Martin and Rohit Sridhar and Gururaj Deshpande and Sam S. Sepah and Sahir Shahryar and Khushi Bhardwaj and Tyler Kwok and Daksh Sehgal and Saad Hassan and Bill Neubauer and Sofia Anandi Vempala and Alec Tan and Jocelyn Heath and Unnathi Kumar and Priyanka Mosur and Tavenner Hall and Rajandeep Singh and Christopher Cui and Glenn Cameron and Sohier Dane and Garrett Tanzer},
  booktitle={Neural Information Processing Systems},
  year={2023},
  url={https://api.semanticscholar.org/CorpusID:268030720}
}
```

## License

This dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.