simplexsigil2 committed (verified)
Commit dfcf1e5 · 1 Parent(s): d10c20f

Upload folder using huggingface_hub
.gitignore CHANGED
@@ -1,3 +1,5 @@
- export_via_to_csv.py
- create_splits.py
  CLAUDE.md
+ create_demographic_plots.py
+ create_splits.py
+ export_via_to_csv.py
+ extract_jsonl_metadata.py
README.md CHANGED
@@ -19,6 +19,11 @@ configs:
  default: true
  description: "Temporal segment labels for all videos. Load splits to get train/val/test paths."

+ - config_name: metadata
+   data_files:
+     - videos/metadata.csv
+   description: "Video level metadata."
+
  - config_name: random
    data_files:
      - split: train
@@ -35,6 +40,8 @@ configs:

  This repository contains temporal segment annotations for WanFall, a synthetic activity recognition dataset focused on fall detection and related activities of daily living.

+ **This dataset is currently under development and subject to change!**
+
  ## Overview

  WanFall is a large-scale synthetic dataset designed for activity recognition research, with emphasis on fall detection and posture transitions. The dataset features computer-generated videos of human actors performing various activities in controlled virtual environments.
@@ -50,7 +57,7 @@ WanFall is a large-scale synthetic dataset designed for activity recognition res

  - **Total videos**: 12,000
  - **Total temporal segments**: 19,228
- - **Annotation format**: Temporal segmentation (start/end timestamps)
+ - **Annotation format**: Temporal segmentation (start/end timestamps) with rich metadata
  - **Video duration**: 5.0625 seconds per clip
  - **Frame count**: 81 frames per video
  - **Frame rate**: 16 fps
@@ -58,6 +65,7 @@ WanFall is a large-scale synthetic dataset designed for activity recognition res
    - Train: 9,600 videos
    - Validation: 1,200 videos
    - Test: 1,200 videos
+ - **Metadata fields**: 12 demographic and scene attributes per video

  ## Activity Categories

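A quick consistency check on the clip specification in the hunk above: 81 frames at 16 fps span exactly the stated 5.0625 seconds per clip.

```python
# 81 frames sampled at 16 fps cover 81 / 16 = 5.0625 seconds,
# matching the per-clip duration stated in the README.
frames_per_video = 81
fps = 16
assert frames_per_video / fps == 5.0625
```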
 
@@ -81,35 +89,39 @@ The dataset includes **16 activity classes** organized into dynamic actions and
  - **14. crawl** - Crawling movement on hands and knees
  - **15. jump** - Jumping action

- ## Structure
-
- The repository is organized as follows:
-
- - `labels/` - CSV files containing temporal segment annotations
-   - `wanfall.csv` - All temporal segments for the dataset
-   - `label2id.csv` - Mapping of activity names to integer IDs
- - `splits/` - Train/validation/test split definitions
-   - `train.csv` - Training set video paths (80%)
-   - `val.csv` - Validation set video paths (10%)
-   - `test.csv` - Test set video paths (10%)
-
  ### Label Format

- The `labels/wanfall.csv` file follows this format:
+ The `labels/wanfall.csv` file contains temporal segments with rich metadata:

- ```
- path,label,start,end,subject,cam,dataset
+ ```csv
+ path,label,start,end,subject,cam,dataset,age_group,gender_presentation,monk_skin_tone,race_ethnicity_omb,bmi_band,height_band,environment_category,camera_shot,speed,camera_elevation,camera_azimuth,camera_distance
  ```

- Where:
+ **Core Fields:**
  - `path`: Relative path to the video (without .mp4 extension, e.g., "fall/fall_ch_001")
- - `label`: Class ID (0-15) corresponding to one of the 16 activity classes
+ - `label`: Activity class ID (0-15)
  - `start`: Start time of the segment in seconds
  - `end`: End time of the segment in seconds
- - `subject`: Subject ID (`-1` for synthetic data without subject tracking)
- - `cam`: Camera view ID (`-1` for single view/no camera variation)
+ - `subject`: Subject ID (`-1` for synthetic data)
+ - `cam`: Camera view ID (`-1` for single view)
  - `dataset`: Dataset name (`wanfall`)

+ **Demographic Metadata:**
+ - `age_group`: One of 6 age categories (toddlers_1_4, children_5_12, teenagers_13_17, young_adults_18_34, middle_aged_35_64, elderly_65_plus)
+ - `gender_presentation`: Visual gender presentation (male, female)
+ - `monk_skin_tone`: Monk Skin Tone scale (mst1-mst10, representing diverse skin tones)
+ - `race_ethnicity_omb`: OMB ethnicity categories (white, black, asian, hispanic_latino, aian, nhpi, mena)
+ - `bmi_band`: Body type (underweight, normal, overweight, obese)
+ - `height_band`: Height category (short, avg, tall)
+
+ **Scene Metadata:**
+ - `environment_category`: Scene location (indoor, outdoor)
+ - `camera_shot`: Shot composition (static_wide, static_medium_wide)
+ - `speed`: Frame rate (24fps_rt, 25fps_rt, 30fps_rt, std_rt)
+ - `camera_elevation`: Camera height (eye, low, high, top)
+ - `camera_azimuth`: Camera angle (front, rear, left, right)
+ - `camera_distance`: Camera distance (medium, far)
+
  ### Split Format

  Split files in the `splits/` directory list the video paths included in each partition:
@@ -121,9 +133,7 @@ fall/fall_ch_002
  ...
  ```

- ## Usage Examples
-
- ### Load Default Split
+ ## Usage Example

  ```python
  from datasets import load_dataset
@@ -132,18 +142,23 @@ import pandas as pd
  # Load the datasets
  print("Loading WanFall dataset...")

- # Load labels (all temporal segments) - default config
- labels = load_dataset("YOUR_USERNAME/wanfall")["train"]
+ # Note: All segment labels are in the "train" split when loaded from the labels config,
+ # but we join them with the actual train/val/test splits afterwards.
+ labels = load_dataset("simplexsigil2/wanfall", "labels")["train"]

- # Load random train/val/test splits
- random_split = load_dataset("YOUR_USERNAME/wanfall", "random")
+ # Load the random 80/10/10 split
+ random_split = load_dataset("simplexsigil2/wanfall", "random")

- # Convert to pandas DataFrames
+ # Load video metadata (optional, for demographic filtering)
+ video_metadata = pd.read_csv("videos/metadata.csv")
+ print(f"Video metadata shape: {video_metadata.shape}")
+
+ # Convert labels to DataFrame
  labels_df = pd.DataFrame(labels)
  print(f"Labels dataframe shape: {labels_df.shape}")
  print(f"Total temporal segments: {len(labels_df)}")

- # Process each split
+ # Process each split (train, validation, test)
  for split_name, split_data in random_split.items():
      # Convert to DataFrame
      split_df = pd.DataFrame(split_data)
@@ -152,419 +167,39 @@ for split_name, split_data in random_split.items():
      merged_df = pd.merge(split_df, labels_df, on="path", how="left")

      # Print statistics
-     print(f"\n{split_name} split:")
-     print(f"  Videos: {len(split_df)}")
-     print(f"  Temporal segments: {len(merged_df)}")
-     print(f"  Unique labels: {merged_df['label'].nunique()}")
+     print(f"\n{split_name} split: {len(split_df)} videos, {len(merged_df)} temporal segments")
+
+     # Print examples
+     if not merged_df.empty:
+         print(f"\n  {split_name.upper()} EXAMPLES:")
+         random_samples = merged_df.sample(min(3, len(merged_df)))
+         for i, (_, row) in enumerate(random_samples.iterrows()):
+             print(f"    Example {i+1}:")
+             print(f"      Path: {row['path']}")
+             print(f"      Label: {row['label']} (segment {row['start']:.2f}s - {row['end']:.2f}s)")
+             print(f"      Age: {row['age_group']}, Gender: {row['gender_presentation']}")
+             print(f"      Ethnicity: {row['race_ethnicity_omb']}, Environment: {row['environment_category']}")
+             print()
+
+ # Example: Filter by demographics
+ elderly_falls = labels_df[
+     (labels_df['age_group'] == 'elderly_65_plus') &
+     (labels_df['label'] == 1)  # fall = label 1
+ ]
+ print(f"\nElderly fall segments: {len(elderly_falls)} ({elderly_falls['path'].nunique()} unique videos)")
  ```

- ### Analyze Label Distribution
+ ### Label Mapping

  ```python
- from datasets import load_dataset
- import pandas as pd
-
- # Load labels (default config)
- labels = load_dataset("YOUR_USERNAME/wanfall")["train"]
- labels_df = pd.DataFrame(labels)
-
- # Load label names
- label_map = {
+ LABEL_MAP = {
      0: 'walk', 1: 'fall', 2: 'fallen', 3: 'sit_down',
      4: 'sitting', 5: 'lie_down', 6: 'lying', 7: 'stand_up',
      8: 'standing', 9: 'other', 10: 'kneel_down', 11: 'kneeling',
      12: 'squat_down', 13: 'squatting', 14: 'crawl', 15: 'jump'
  }
-
- # Add label names
- labels_df['label_name'] = labels_df['label'].map(label_map)
-
- # Segment-level distribution
- print("Temporal Segment Distribution:")
- segment_counts = labels_df['label_name'].value_counts().sort_index()
- for label_name, count in segment_counts.items():
-     print(f" {label_name:15s}: {count:5d} segments")
-
- # Video-level distribution (primary activity from path)
- labels_df['primary_activity'] = labels_df['path'].str.split('/').str[0]
- print("\nVideo Distribution by Primary Activity:")
- video_counts = labels_df['primary_activity'].value_counts()
- for activity, count in video_counts.items():
-     print(f" {activity:15s}: {count:5d} segments")
- ```
-
- ### Iterate Over Split
-
- ```python
- from datasets import load_dataset
- import pandas as pd
-
- # Load data
- labels = load_dataset("YOUR_USERNAME/wanfall")["train"]  # default config
- labels_df = pd.DataFrame(labels)
-
- splits = load_dataset("YOUR_USERNAME/wanfall", "random")
- train_df = pd.DataFrame(splits["train"])
-
- # Merge to get train labels
- train_labels = pd.merge(train_df, labels_df, on="path", how="left")
-
- print(f"Training set: {len(train_labels)} temporal segments")
-
- # Iterate over videos
- for video_path in train_df['path'][:5]:
-     # Get all segments for this video
-     video_segments = train_labels[train_labels['path'] == video_path]
-
-     print(f"\n{video_path}:")
-     print(f" Segments: {len(video_segments)}")
-
-     for _, seg in video_segments.iterrows():
-         duration = seg['end'] - seg['start']
-         print(f" {seg['start']:.3f}s - {seg['end']:.3f}s ({duration:.3f}s): "
-               f"label {seg['label']}")
- ```
-
- ### PyTorch Dataset Integration
-
- ```python
- from datasets import load_dataset
- import pandas as pd
- import torch
- from torch.utils.data import Dataset, DataLoader
- from pathlib import Path
- import cv2
- import numpy as np
-
-
- class WanFallDataset(Dataset):
-     """
-     PyTorch Dataset for WanFall activity recognition.
-
-     This dataset provides both temporal segments and video paths for loading.
-     """
-
-     def __init__(
-         self,
-         split='train',
-         video_root=None,
-         transform=None,
-         target_transform=None,
-         return_segments=True,
-         fps=16,
-         num_frames=81
-     ):
-         """
-         Args:
-             split: One of 'train', 'validation', 'test'
-             video_root: Root directory containing video files (e.g., /path/to/wanfall/videos)
-             transform: Optional transform to apply to video frames
-             target_transform: Optional transform to apply to labels
-             return_segments: If True, returns all temporal segments. If False, returns one sample per video.
-             fps: Frame rate of videos (default: 16)
-             num_frames: Number of frames per video (default: 81)
-         """
-         super().__init__()
-
-         # Load labels (all temporal segments)
-         labels_ds = load_dataset("simplexsigil2/wanfall")
-         self.labels_df = pd.DataFrame(labels_ds["train"])
-
-         # Load split
-         split_ds = load_dataset("simplexsigil2/wanfall", "random")
-         split_df = pd.DataFrame(split_ds[split])
-
-         # Merge to get labeled segments for this split
-         self.data = pd.merge(split_df, self.labels_df, on="path", how="left")
-
-         # If not returning segments, keep only one row per video
-         if not return_segments:
-             self.data = self.data.groupby('path').first().reset_index()
-
-         self.video_root = Path(video_root) if video_root else None
-         self.transform = transform
-         self.target_transform = target_transform
-         self.return_segments = return_segments
-         self.fps = fps
-         self.num_frames = num_frames
-
-     def __len__(self):
-         return len(self.data)
-
-     def __getitem__(self, idx):
-         row = self.data.iloc[idx]
-
-         # Get video path
-         video_path = row['path']
-         if self.video_root is not None:
-             video_path = self.video_root / f"{video_path}.mp4"
-
-         # Load video frames (if video_root is provided)
-         frames = None
-         if self.video_root is not None and Path(video_path).exists():
-             frames = self._load_video(video_path)
-             if self.transform is not None:
-                 frames = self.transform(frames)
-
-         # Get label information
-         label = int(row['label'])
-         start_time = float(row['start'])
-         end_time = float(row['end'])
-
-         # Convert timestamps to frame indices
-         start_frame = int(start_time * self.fps)
-         end_frame = int(end_time * self.fps)
-
-         if self.target_transform is not None:
-             label = self.target_transform(label)
-
-         # Return data
-         sample = {
-             'video_path': row['path'],
-             'label': label,
-             'start_time': start_time,
-             'end_time': end_time,
-             'start_frame': start_frame,
-             'end_frame': end_frame,
-         }
-
-         if frames is not None:
-             sample['frames'] = frames
-
-         return sample
-
-     def _load_video(self, video_path):
-         """Load video frames using OpenCV."""
-         cap = cv2.VideoCapture(str(video_path))
-         frames = []
-
-         while True:
-             ret, frame = cap.read()
-             if not ret:
-                 break
-             # Convert BGR to RGB
-             frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-             frames.append(frame)
-
-         cap.release()
-
-         # Convert to numpy array (T, H, W, C)
-         frames = np.array(frames)
-
-         return frames
-
-
- # Example usage
- def get_dataloaders(video_root, batch_size=32, num_workers=4):
-     """Create PyTorch DataLoaders for train/val/test splits."""
-
-     # Optional: Define transforms
-     from torchvision import transforms
-
-     transform = transforms.Compose([
-         transforms.Lambda(lambda x: torch.from_numpy(x).float()),
-         transforms.Lambda(lambda x: x.permute(0, 3, 1, 2)),  # (T, H, W, C) -> (T, C, H, W)
-         transforms.Lambda(lambda x: x / 255.0),  # Normalize to [0, 1]
-     ])
-
-     # Create datasets
-     train_dataset = WanFallDataset(
-         split='train',
-         video_root=video_root,
-         transform=transform,
-         return_segments=True
-     )
-
-     val_dataset = WanFallDataset(
-         split='validation',
-         video_root=video_root,
-         transform=transform,
-         return_segments=True
-     )
-
-     test_dataset = WanFallDataset(
-         split='test',
-         video_root=video_root,
-         transform=transform,
-         return_segments=True
-     )
-
-     # Create dataloaders
-     train_loader = DataLoader(
-         train_dataset,
-         batch_size=batch_size,
-         shuffle=True,
-         num_workers=num_workers,
-         pin_memory=True
-     )
-
-     val_loader = DataLoader(
-         val_dataset,
-         batch_size=batch_size,
-         shuffle=False,
-         num_workers=num_workers,
-         pin_memory=True
-     )
-
-     test_loader = DataLoader(
-         test_dataset,
-         batch_size=batch_size,
-         shuffle=False,
-         num_workers=num_workers,
-         pin_memory=True
-     )
-
-     return train_loader, val_loader, test_loader
-
-
- # Example training loop snippet
- if __name__ == "__main__":
-     video_root = Path("/path/to/wanfall/videos")
-
-     train_loader, val_loader, test_loader = get_dataloaders(
-         video_root=video_root,
-         batch_size=16,
-         num_workers=4
-     )
-
-     print(f"Train batches: {len(train_loader)}")
-     print(f"Val batches: {len(val_loader)}")
-     print(f"Test batches: {len(test_loader)}")
-
-     # Inspect first batch
-     for batch in train_loader:
-         print("\nBatch keys:", batch.keys())
-         if 'frames' in batch:
-             print(f"Frames shape: {batch['frames'].shape}")
-         print(f"Labels shape: {batch['label'].shape}")
-         print(f"Label range: [{batch['label'].min()}, {batch['label'].max()}]")
-         break
- ```
-
- ### Converting Temporal Segments to Frame-Level Labels
-
- If you need frame-level labels for dense prediction tasks:
-
- ```python
- import numpy as np
-
-
- def temporal_segments_to_frames(segments_df, fps=16, num_frames=81):
-     """
-     Convert temporal segments to frame-level labels.
-
-     Args:
-         segments_df: DataFrame with 'start', 'end', 'label' columns for one video
-         fps: Frame rate (default: 16)
-         num_frames: Number of frames per video (default: 81)
-
-     Returns:
-         Array of shape (num_frames,) with label for each frame
-     """
-     # Initialize with -1 (unlabeled)
-     frame_labels = np.full(num_frames, -1, dtype=np.int32)
-
-     # Sort segments by start time
-     segments_df = segments_df.sort_values('start')
-
-     for _, seg in segments_df.iterrows():
-         start_frame = int(seg['start'] * fps)
-         end_frame = min(int(seg['end'] * fps), num_frames - 1)
-
-         # Assign label to frames
-         frame_labels[start_frame:end_frame + 1] = seg['label']
-
-     return frame_labels
-
-
- # Example usage with PyTorch Dataset
- class WanFallFrameLevelDataset(Dataset):
-     """PyTorch Dataset with frame-level labels."""
-
-     def __init__(self, split='train', video_root=None, transform=None):
-         super().__init__()
-
-         # Load labels and split
-         labels_ds = load_dataset("simplexsigil2/wanfall")
-         self.labels_df = pd.DataFrame(labels_ds["train"])
-
-         split_ds = load_dataset("simplexsigil2/wanfall", "random")
-         split_df = pd.DataFrame(split_ds[split])
-
-         # Get unique videos in this split
-         self.video_paths = split_df['path'].tolist()
-         self.video_root = Path(video_root) if video_root else None
-         self.transform = transform
-
-     def __len__(self):
-         return len(self.video_paths)
-
-     def __getitem__(self, idx):
-         video_path = self.video_paths[idx]
-
-         # Load video frames
-         frames = None
-         if self.video_root is not None:
-             full_path = self.video_root / f"{video_path}.mp4"
-             if full_path.exists():
-                 frames = self._load_video(full_path)
-                 if self.transform is not None:
-                     frames = self.transform(frames)
-
-         # Get all segments for this video and convert to frame labels
-         video_segments = self.labels_df[self.labels_df['path'] == video_path]
-         frame_labels = temporal_segments_to_frames(video_segments)
-
-         return {
-             'video_path': video_path,
-             'frames': frames,
-             'labels': torch.from_numpy(frame_labels),  # Shape: (81,)
-         }
-
-     def _load_video(self, video_path):
-         """Load video frames."""
-         cap = cv2.VideoCapture(str(video_path))
-         frames = []
-         while True:
-             ret, frame = cap.read()
-             if not ret:
-                 break
-             frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-             frames.append(frame)
-         cap.release()
-         return np.array(frames)
  ```

- ### Best Practices
-
- **1. Temporal Segment vs Frame-Level:**
- - Use temporal segments directly for action localization and detection tasks
- - Convert temporal segments to frame-level labels for dense prediction tasks (see example above)
- - The dataset provides temporal segments; use the conversion function for frame-level labels
-
- **2. Handling Multiple Segments per Video:**
- - Set `return_segments=True` to get all temporal segments (one sample per segment)
- - Set `return_segments=False` to get one sample per video (useful for video-level classification)
-
- **3. Data Loading:**
- - Videos are stored separately and not included in this HuggingFace dataset
- - Provide `video_root` path where videos are stored with structure: `{video_root}/{path}.mp4`
- - Example: `{video_root}/fall/fall_ch_001.mp4`
-
- **4. Memory Efficiency:**
- - Load videos on-demand in `__getitem__` rather than pre-loading
- - Use `num_workers > 0` in DataLoader for parallel loading
- - Consider using video decoding libraries like `decord` or `torchvision.io` for faster loading
-
- **5. Temporal Sampling:**
- - For long videos or limited memory, sample frames instead of loading all 81 frames
- - Use uniform sampling, random sampling, or segment-focused sampling based on task
-
- **6. Label Handling:**
- - Labels are integers 0-15 for the 16 activity classes
- - `-1` indicates unlabeled frames (when converting to frame-level labels)
- - Consider class balancing or weighted sampling for imbalanced classes
-
  ## Technical Properties

  ### Video Specifications
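
The `LABEL_MAP` added in the hunk above pairs naturally with the labels config; a minimal sketch for attaching readable class names to the segment table, using the same `load_dataset` calls as the README's usage example:

```python
import pandas as pd
from datasets import load_dataset

# Integer-to-name mapping as listed in the README's "Label Mapping" section.
LABEL_MAP = {
    0: 'walk', 1: 'fall', 2: 'fallen', 3: 'sit_down',
    4: 'sitting', 5: 'lie_down', 6: 'lying', 7: 'stand_up',
    8: 'standing', 9: 'other', 10: 'kneel_down', 11: 'kneeling',
    12: 'squat_down', 13: 'squatting', 14: 'crawl', 15: 'jump'
}

# All temporal segments are exposed under "train" in the labels config.
labels_df = pd.DataFrame(load_dataset("simplexsigil2/wanfall", "labels")["train"])

# Attach readable names and show the per-class segment distribution.
labels_df['label_name'] = labels_df['label'].map(LABEL_MAP)
print(labels_df['label_name'].value_counts())
```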
@@ -601,31 +236,26 @@ Videos often contain natural sequences of activities:

  Not all transitions include static states (e.g., a person might stand_up immediately after falling without a `fallen` state).

- ## Future Extensions
-
- This dataset is designed to support additional metadata and splits:
- - **Demographics**: Age groups, ethnicity (to be added)
- - **Cross-demographic splits**: Train on one demographic, test on another
- - **Scenario variations**: Different environments, lighting, occlusions
-
- ## Citation
-
- If you use WanFall in your research, please cite:
-
- ```bibtex
- @misc{wanfall2025,
-     title={WanFall: A Synthetic Activity Recognition Dataset},
-     author={TODO},
-     year={2025},
- }
- ```
-
- ## License
-
- The annotations and split definitions in this repository are released under [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/).
-
- The video data is synthetic and must be obtained separately from the original source.
-
- ## Contact
-
- For questions about the dataset, please contact [TODO].
+ ## Demographic Diversity
+
+ The dataset includes rich demographic and scene metadata for every video, enabling bias analysis and cross-demographic evaluation.
+ However, while age, gender, and ethnicity are generated fairly consistently, all attributes were only specified in the generation prompts, so model biases mean the resulting videos can deviate from them.
+
+ ### Overview
+
+ ![Demographic Overview](figures/demographic_overview.png)
+
+ ### Scene Variations
+
+ Beyond demographic diversity, the dataset includes:
+ - **Environment**: Indoor and outdoor settings
+ - **Camera Angles**: Multiple elevations (eye, low, high, top), azimuths (front, rear, left, right), and distances
+ - **Camera Shots**: Static wide and medium-wide compositions
+ - **Frame Rates**: Various speeds (24fps, 25fps, 30fps, standard real-time)
+
+ ## License
+
+ The annotations and split definitions in this repository are released under [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/).
+
+ The video data is synthetic and must be obtained separately from the original source; more information will follow.
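
The demographic fields added above are what make the cross-demographic evaluation mentioned in the new README section possible; a minimal sketch of such a protocol (an illustrative grouping, not an official split of the dataset):

```python
import pandas as pd
from datasets import load_dataset

labels_df = pd.DataFrame(load_dataset("simplexsigil2/wanfall", "labels")["train"])

# How fall segments (label 1) distribute over age groups, counted per unique video.
falls = labels_df[labels_df['label'] == 1]
print(falls.groupby('age_group')['path'].nunique())

# Hypothetical cross-demographic setup: train on young adults, evaluate on elderly.
train_pool = labels_df[labels_df['age_group'] == 'young_adults_18_34']
eval_pool = labels_df[labels_df['age_group'] == 'elderly_65_plus']
print(f"train videos: {train_pool['path'].nunique()}, eval videos: {eval_pool['path'].nunique()}")
```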
 
figures/age_distribution.png ADDED

Git LFS Details

  • SHA256: 14b72bb01c590a436de4a6d5b49976e9b1de139c989677292d7c10d20e3149b2
  • Pointer size: 131 Bytes
  • Size of remote file: 126 kB
figures/bmi_distribution.png ADDED

Git LFS Details

  • SHA256: e1031c16375c4ed5c3fe7fad8e711a3b02ef8533a2030aba9d22de4d365a3419
  • Pointer size: 131 Bytes
  • Size of remote file: 158 kB
figures/demographic_overview.png ADDED

Git LFS Details

  • SHA256: fe318f13c035954cdb197729dcc5c0d6f36f3abf4767b05d668eb71f5f312c3e
  • Pointer size: 131 Bytes
  • Size of remote file: 416 kB
figures/ethnicity_distribution.png ADDED

Git LFS Details

  • SHA256: 75c6740e36b52199fe9a44702db15e843d4cf963a9cea8c64d0905f250344a6e
  • Pointer size: 131 Bytes
  • Size of remote file: 171 kB
figures/gender_distribution.png ADDED

Git LFS Details

  • SHA256: 74bc11f938665b5f4de083fe298bac7f3349fac82d078130eab6085f494b1440
  • Pointer size: 131 Bytes
  • Size of remote file: 106 kB
figures/height_distribution.png ADDED

Git LFS Details

  • SHA256: cd0c07789c276acc7bbe4da21469e2612a077080697129c523c16d6a5d41ca5a
  • Pointer size: 131 Bytes
  • Size of remote file: 113 kB
labels/wanfall.csv CHANGED
The diff for this file is too large to render. See raw diff
 
videos/metadata.csv ADDED
The diff for this file is too large to render. See raw diff
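
The new `videos/metadata.csv` is too large to render in the diff; a minimal sketch for inspecting it locally (column names are assumed to match the schema documented in the README's Label Format section and should be verified against the raw file):

```python
import pandas as pd

# Per-video metadata added in this commit.
meta = pd.read_csv("videos/metadata.csv")
print(meta.shape)
print(meta.columns.tolist())

# Assumed columns from the README schema, e.g. environment and age-group counts.
print(meta['environment_category'].value_counts())
print(meta['age_group'].value_counts())
```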