---
license: cc-by-4.0
task_categories:
  - visual-question-answering
language:
  - en
  - ase
tags:
  - sign-language
  - ASL
  - american-sign-language
  - gesture-recognition
pretty_name: PopSign Images
size_categories:
  - 100K<n<1M
configs:
  - config_name: game
    data_files:
      - split: train
        path: data/game/train-*.parquet
      - split: validation
        path: data/game/validation-*.parquet
      - split: test
        path: data/game/test-*.parquet
  - config_name: non-game
    data_files:
      - split: train
        path: data/non-game/train-*.parquet
      - split: validation
        path: data/non-game/validation-*.parquet
      - split: test
        path: data/non-game/test-*.parquet
---

# PopSign Images Dataset

This dataset contains frame sequences extracted from PopSign ASL (American Sign Language) video clips, organized for sign language recognition tasks.

## Dataset Description

The PopSign dataset consists of short video clips of isolated ASL signs. This version provides pre-extracted image frames from each video clip, suitable for training image-based or video-based models for sign language recognition.

### Subsets

The dataset contains two subsets:

- **game**: Signs collected in a gamified data collection environment
- **non-game**: Signs collected in a standard recording environment

### Splits

Each subset contains three splits:
- **train**: Training data
- **validation**: Validation data
- **test**: Test data

## Dataset Structure

### Features

| Column | Type | Description |
|--------|------|-------------|
| `file` | string | Original video file path |
| `start` | float32 | Start time of the sign segment (seconds) |
| `end` | float32 | End time of the sign segment (seconds) |
| `text` | string | The English gloss/label for the sign |
| `images` | list[Image] | Sequence of frames extracted from the video at 256x256 resolution |

### Frame Extraction

Frames are extracted at approximately 5 FPS from each video clip. The start and end times are determined using a cascading approach:

1. **Pose-based segmentation**: A heuristic detects frames where the signer's wrist is above the elbow, indicating active signing. This provides more accurate boundaries than model-based segmentation.
2. **EAF segmentation fallback**: If the pose-based method indicates signing throughout the entire video (the hands never rest), the pipeline falls back to automatic sign segmentation from EAF files.
3. **Full video duration**: If neither method yields a boundary, the entire video duration is used.

All frames are 256x256 pixels.
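
The wrist-above-elbow heuristic can be sketched as a pure function over per-frame landmark coordinates (an illustrative simplification, not the dataset's actual implementation; in image coordinates y grows downward, so "above" means a smaller y value):

```python
def signing_segment(wrist_y, elbow_y, fps=30.0):
    """Return (start, end) in seconds of the span where the wrist is above
    the elbow, or None when no usable boundary is found.

    Image y grows downward, so "wrist above elbow" means wrist_y < elbow_y.
    """
    active = [w < e for w, e in zip(wrist_y, elbow_y)]
    if all(active):
        return None  # hands never rest: fall back to EAF segmentation
    if not any(active):
        return None  # no signing detected: fall back to the full video duration
    first = active.index(True)
    last = len(active) - 1 - active[::-1].index(True)
    return first / fps, (last + 1) / fps
```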

## Usage

```python
from datasets import load_dataset

# Load the game subset
game_dataset = load_dataset("sign/popsign-images", "game")

# Load the non-game subset
non_game_dataset = load_dataset("sign/popsign-images", "non-game")

# Access a sample
sample = game_dataset["train"][0]
print(f"Sign: {sample['text']}")
print(f"Duration: {sample['end'] - sample['start']:.2f}s")
print(f"Number of frames: {len(sample['images'])}")

# Display first frame
sample['images'][0].show()
```
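
To feed a clip into an image- or video-based model, the decoded PIL frames can be stacked into a fixed-length array. A minimal sketch assuming NumPy; the helper name and the uniform-subsampling strategy are illustrative, not part of the dataset:

```python
import numpy as np

def frames_to_array(images, num_frames=16):
    """Stack a clip's PIL frames into a (num_frames, 256, 256, 3) uint8
    array, picking indices evenly across the clip (frames repeat when
    the clip is shorter than num_frames)."""
    idx = np.linspace(0, len(images) - 1, num_frames).round().astype(int)
    return np.stack([np.asarray(images[i].convert("RGB")) for i in idx])
```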

## Data Processing

The videos were processed using the following pipeline:

1. **Video Preprocessing**: Original videos are center-cropped to a square and rescaled to 256x256 pixels:
   ```bash
   ffmpeg -y -hide_banner -i input.mp4 \
     -vf "crop='min(iw\,ih)':'min(iw\,ih)':(iw-min(iw\,ih))/2:(ih-min(iw\,ih))/2,scale=256:256:flags=lanczos" \
     -c:v libx264 -preset ultrafast -crf 23 -an -movflags +faststart \
     output.mp4
   ```

2. **Pose Estimation**: MediaPipe pose estimation is applied:
   ```bash
   video_to_pose --format mediapipe -i video.mp4 -o video.pose \
     --additional-config="model_complexity=2,smooth_landmarks=false,refine_face_landmarks=true"
   ```

3. **Sign Boundary Detection**: A cascading approach identifies sign boundaries:
   - **Primary**: Pose-based heuristic detects frames where the wrist is above the elbow (indicating active signing)
   - **Fallback**: If the hands are raised throughout the video, the pipeline falls back to automatic EAF segmentation:
     ```bash
     pose_to_segments --pose="video.pose" --elan="video.eaf" --video="video.mp4"
     ```

4. **Frame Extraction**: Frames are extracted from the identified sign segment at 5 FPS.
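
Step 4 boils down to choosing which source frames fall inside the identified segment at the target rate. A hypothetical helper (assuming a constant-frame-rate video; the function name is illustrative):

```python
import math

def sample_indices(start, end, video_fps, target_fps=5.0):
    """Source-frame indices covering [start, end) seconds, sampled at
    target_fps from a constant-frame-rate video."""
    n = max(1, math.ceil((end - start) * target_fps))  # samples in the segment
    step = 1.0 / target_fps
    return [int(round((start + k * step) * video_fps)) for k in range(n)]
```

For a 30 FPS source video, sampling at 5 FPS keeps every sixth frame of the segment.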

## Citation

If you use this dataset, please cite the original PopSign dataset:

```bibtex
@inproceedings{Starner2023PopSignAV,
  title={PopSign ASL v1.0: An Isolated American Sign Language Dataset Collected via Smartphones},
  author={Thad Starner and Sean Forbes and Matthew So and David Martin and Rohit Sridhar and Gururaj Deshpande and Sam S. Sepah and Sahir Shahryar and Khushi Bhardwaj and Tyler Kwok and Daksh Sehgal and Saad Hassan and Bill Neubauer and Sofia Anandi Vempala and Alec Tan and Jocelyn Heath and Unnathi Kumar and Priyanka Mosur and Tavenner Hall and Rajandeep Singh and Christopher Cui and Glenn Cameron and Sohier Dane and Garrett Tanzer},
  booktitle={Neural Information Processing Systems},
  year={2023},
  url={https://api.semanticscholar.org/CorpusID:268030720}
}
```

## License

This dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.