```python
from datasets import load_dataset

# Random split with temporal segments (default)
dataset = load_dataset("simplexsigil2/wanfall", "random")
print(f"Train: {len(dataset['train'])} segments")  # 15,344 segments

# Random split with frame-wise labels (81 per video)
dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)
print(f"Train: {len(dataset['train'])} videos")  # 9,600 videos

# Cross-demographic evaluation
cross_age = load_dataset("simplexsigil2/wanfall", "cross_age")

# Access example (segment-level config; frame-wise samples have no start/end)
dataset = load_dataset("simplexsigil2/wanfall", "random")
example = dataset['train'][0]
print(f"Video: {example['path']}")
print(f"Activity: {example['label']} ({example['start']:.2f}s - {example['end']:.2f}s)")
print(f"Demographics: {example['age_group']}, {example['race_ethnicity_omb']}")
```

## Activity Classes
## Usage

The dataset provides flexible loading options depending on your use case. The key distinction is between **segment-level** and **video-level** samples.

### Loading Modes Overview

| Mode | Sample Unit | Has start/end? | Has frame_labels? | Random Split Train Size |
|------|-------------|----------------|-------------------|------------------------|
| **Temporal Segments** | Segment | ✅ Yes | ❌ No | 15,344 segments (9,600 videos) |
| **Frame-Wise Labels** | Video | ❌ No | ✅ Yes (81 labels) | 9,600 videos |
### 1. Temporal Segments (Default)

Load temporal segment annotations where **each sample is a segment** with start/end times. Multiple segments can come from the same video.

```python
dataset = load_dataset("simplexsigil2/wanfall", "random")

# Each example is a SEGMENT (not a video)
example = dataset['train'][0]
print(example['path'])       # "fall/fall_ch_001"
print(example['label'])      # 1 (activity class ID)
print(example['start'])      # 0.0 (start time in seconds)
print(example['end'])        # 1.006 (end time in seconds)
print(example['age_group'])  # Demographic metadata

# Dataset contains multiple segments per video
print(f"Total segments in train: {len(dataset['train'])}")     # 15,344
print(f"Unique videos: {len(set(dataset['train']['path']))}")  # 9,600
```

**Use case:** Training models on activity classification where you want to extract and process only the relevant video segment for each activity.
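To go from a segment's timestamps to concrete frame indices, multiply by the clip frame rate. A minimal sketch, assuming the 16 fps / 81-frame clip format described under Video Data (the helper name is illustrative, not part of the dataset API):

```python
FPS = 16         # WanFall clips are 81 frames at 16 fps
NUM_FRAMES = 81

def segment_to_frame_range(start_s: float, end_s: float) -> tuple[int, int]:
    """Map a segment's [start_s, end_s) in seconds to a half-open frame range."""
    first = int(round(start_s * FPS))
    last = min(int(round(end_s * FPS)), NUM_FRAMES)
    return first, last

# A segment annotated 0.0s - 1.006s covers roughly the first 16 frames
print(segment_to_frame_range(0.0, 1.006))   # (0, 16)
print(segment_to_frame_range(0.0, 5.0625))  # (0, 81)
```

Rounding at the boundaries is a choice; flooring the start and ceiling the end is equally defensible if you prefer inclusive coverage.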
### 2. Frame-Wise Labels

Load dense frame-level labels where **each sample is a video** with 81 frame labels. Each video appears exactly once.

```python
dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)

# Each example is a VIDEO (not a segment)
example = dataset['train'][0]
print(example['path'])               # "fall/fall_ch_001"
print(example['frame_labels'])       # [1, 1, 1, ..., 11, 11] (81 labels)
print(len(example['frame_labels']))  # 81 frames
print(example['age_group'])          # Demographic metadata included

# Dataset contains one sample per video
print(f"Total videos in train: {len(dataset['train'])}")  # 9,600 videos
```

**Use case:** Training sequence models (e.g., temporal action segmentation) that process entire videos and predict frame-level labels.

**Key features:**
- Works with all split configs: add `framewise=True` to any split
- Efficient: 348 KB compressed archive, automatically cached
- Complete metadata: all demographic attributes included
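Conversely, the 81 `frame_labels` can be collapsed back into contiguous segments with a run-length pass. A pure-Python sketch (no dataset access needed; the example label sequence is made up):

```python
def frames_to_segments(frame_labels, fps=16):
    """Collapse per-frame labels into (label, start_s, end_s) runs."""
    segments = []
    run_start = 0
    for i in range(1, len(frame_labels) + 1):
        # Close the current run at a label change or at the end of the clip
        if i == len(frame_labels) or frame_labels[i] != frame_labels[run_start]:
            segments.append((frame_labels[run_start], run_start / fps, i / fps))
            run_start = i
    return segments

# Made-up clip: 16 frames of class 1 ("fall"), then 65 frames of another class
labels = [1] * 16 + [11] * 65
print(frames_to_segments(labels))  # [(1, 0.0, 1.0), (11, 1.0, 5.0625)]
```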
### 3. Additional Configurations

```python
# All segments without train/val/test splits
dataset = load_dataset("simplexsigil2/wanfall", "labels")  # 19,228 segments

# Video metadata only (no labels)
dataset = load_dataset("simplexsigil2/wanfall", "metadata")  # 12,000 videos

# Paths only (minimal memory footprint)
dataset = load_dataset("simplexsigil2/wanfall", "random", paths_only=True)
```
### Practical Examples

#### Label Conversion

Labels are stored as integers (0-15) but can be converted to strings:

```python
dataset = load_dataset("simplexsigil2/wanfall", "random")
label_feature = dataset['train'].features['label']

# Convert integer to string
label_name = label_feature.int2str(1)  # "fall"

# Convert string to integer
label_id = label_feature.str2int("walk")  # 0

# Access all label names
all_labels = label_feature.names  # ['walk', 'fall', 'fallen', ...]
```
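Together with `int2str`, this makes it easy to tally a split's class balance. A sketch using `collections.Counter` (the inline `labels` list and name map are stand-ins; with the real dataset you would use `dataset['train']['label']` and `label_feature.int2str`):

```python
from collections import Counter

# Stand-ins for dataset['train']['label'] and label_feature.int2str
labels = [0, 1, 1, 2, 0, 1]
int2str = {0: "walk", 1: "fall", 2: "fallen"}.get

# Print classes from most to least frequent
for label_id, count in Counter(labels).most_common():
    print(f"{int2str(label_id)}: {count}")
```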
#### Filter by Demographics

```python
dataset = load_dataset("simplexsigil2/wanfall", "labels")
segments = dataset['train']

elderly_falls = [
    ex for ex in segments
    if ex['age_group'] == 'elderly_65_plus' and ex['label'] == 1
]
print(f"Found {len(elderly_falls)} elderly fall segments")

# Filter by multiple demographics
indoor_male_falls = [
    ex for ex in segments
    if ex['environment_category'] == 'indoor'
    and ex['gender_presentation'] == 'male'
    and ex['label'] == 1
]
```
#### Cross-Demographic Evaluation

```python
# Train on young adults, test on children and elderly
cross_age = load_dataset("simplexsigil2/wanfall", "cross_age", framewise=True)

# Train contains only: young_adults_18_34, middle_aged_35_64
# (use .select to iterate examples; dataset[:5] returns a dict of columns)
for example in cross_age['train'].select(range(5)):
    print(f"Train video: {example['path']}, age: {example['age_group']}")

# Test contains: children_5_12, toddlers_1_4, elderly_65_plus
for example in cross_age['test'].select(range(5)):
    print(f"Test video: {example['path']}, age: {example['age_group']}")
```
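A quick sanity check that a cross-demographic split is actually disjoint, sketched on stand-in lists (with the real dataset, `cross_age['train']['age_group']` returns the column as a list):

```python
# Stand-ins for cross_age['train']['age_group'] / cross_age['test']['age_group']
train_ages = ["young_adults_18_34", "middle_aged_35_64", "young_adults_18_34"]
test_ages = ["children_5_12", "toddlers_1_4", "elderly_65_plus"]

# Any shared age group would mean demographic leakage between splits
overlap = set(train_ages) & set(test_ages)
assert not overlap, f"Demographic leakage between splits: {overlap}"
print("Train and test age groups are disjoint")
```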
#### Training Loop Example

```python
from datasets import load_dataset
import torch

# Load dataset with frame-wise labels
dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)

num_epochs = 10  # example value
for epoch in range(num_epochs):
    for example in dataset['train']:
        video_path = example['path']
        frame_labels = torch.tensor(example['frame_labels'])  # (81,)

        # Load video frames (user must implement)
        # frames = load_video(video_root / f"{video_path}.mp4")  # (81, H, W, 3)

        # Forward pass
        # outputs = model(frames)
        # loss = criterion(outputs, frame_labels)
        # loss.backward()
```
## Annotation Guidelines

### Temporal Precision

Annotations use sub-second accuracy with decimal timestamps (e.g., `start: 0.0, end: 1.006`). Most frames in videos are labeled, with minimal gaps between activities.

### Activity Sequences

Videos contain natural transitions between activities. Common sequences include:

```
walk → fall → fallen → stand_up
walk → sit_down → sitting → stand_up
walk → lie_down → lying → stand_up
standing → squat_down → squatting → stand_up
```

Not all transitions include static states. For example, a person might `stand_up` immediately after falling without a `fallen` state.
### Motion Types

**Dynamic Actions** (transitions and movements):
- Labeled from the **first frame** where the motion begins
- End when the person reaches a **resting state** or begins a new action
- If one motion is followed by another, the transition occurs at the first frame showing movement not explained by the previous action

**Static States** (stationary postures):
- Begin when the person **comes to rest** in that posture
- Continue until the next motion begins
- Example for `sitting`: it does not start when the body touches the chair, but when the body loses its tension and settles into the seated position

### Label Boundaries

- **Dynamic → Dynamic**: transition at the first frame of the new motion
- **Dynamic → Static**: the static state begins when movement stops and the body settles
- **Static → Dynamic**: the dynamic action begins at the first frame of movement
## Demographic Distribution

Rich demographic and scene metadata enables bias analysis and cross-demographic evaluation:

- Camera angles: 4 elevations × 4 azimuths × 2 distances
- Shot types: Static wide and medium-wide
## Video Data

**Videos are NOT included in this repository.** This dataset contains only annotations and metadata.

### Video Specifications

- **Duration:** 5.0625 seconds per clip
- **Frame count:** 81 frames
- **Frame rate:** 16 fps
- **Format:** MP4 (H.264)
- **Resolution:** Variable (synthetic generation)
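These numbers are internally consistent: duration equals frame count divided by frame rate. As a quick check:

```python
# 81 frames at 16 fps reproduces the stated clip duration
duration_s = 81 / 16
print(duration_s)  # 5.0625
```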
### Accessing Videos

Videos will be released at a later point in time. Information about access will be provided here when available.

When videos become available, they should be organized with the following structure:

```
video_root/
├── fall/
│   ├── fall_ch_001.mp4
│   ├── fall_ch_002.mp4
│   └── ...
├── fallen/
│   ├── fallen_ch_001.mp4
│   └── ...
└── ...
```

The `path` field in the CSV corresponds to the relative path without the `.mp4` extension (e.g., `"fall/fall_ch_001"` → `video_root/fall/fall_ch_001.mp4`).
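Resolving an annotation's `path` to a local video file is then a single join. A sketch with `pathlib` (`video_root` stands for wherever you place the downloaded videos):

```python
from pathlib import Path

video_root = Path("video_root")  # wherever the videos are downloaded

def resolve_video(path_field: str) -> Path:
    """Map a `path` value like 'fall/fall_ch_001' to its .mp4 file."""
    return video_root / f"{path_field}.mp4"

print(resolve_video("fall/fall_ch_001"))  # e.g. video_root/fall/fall_ch_001.mp4
```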
## License