---
license: cc-by-nc-4.0
task_categories:
  - video-classification
language:
  - en
tags:
  - synthetic
  - activity-recognition
  - fall-detection
pretty_name: 'WanFall: A Synthetic Activity Recognition Dataset'
size_categories:
  - 10K<n<100K
configs:
  - config_name: labels
    data_files:
      - labels/wanfall.csv
    default: true
    description: >-
      Temporal segment labels for all videos. Load splits to get train/val/test
      paths.
  - config_name: random
    data_files:
      - split: train
        path: splits/train.csv
      - split: validation
        path: splits/val.csv
      - split: test
        path: splits/test.csv
    description: Random 80/10/10 train/val/test split (seed 42)
---


# WanFall: A Synthetic Activity Recognition Dataset

This repository contains temporal segment annotations for WanFall, a synthetic activity recognition dataset focused on fall detection and related activities of daily living.

## Overview

WanFall is a large-scale synthetic dataset designed for activity recognition research, with emphasis on fall detection and posture transitions. The dataset features computer-generated videos of human actors performing various activities in controlled virtual environments.

**Key Features:**

- ~12,000 video clips with dense temporal annotations
- 16 activity classes covering falls, posture transitions, and static states
- 5.0625 seconds per video clip (81 frames @ 16 fps)
- Synthetic generation enabling diverse scenarios and controlled variation
- Dense temporal segmentation with frame-level precision

## Dataset Statistics

- Total videos: 12,000
- Total temporal segments: 19,228
- Annotation format: temporal segmentation (start/end timestamps)
- Video duration: 5.0625 seconds per clip (81 frames / 16 fps)
- Frame count: 81 frames per video
- Frame rate: 16 fps
- Default split: 80/10/10 train/val/test (seed 42)
  - Train: 9,600 videos
  - Validation: 1,200 videos
  - Test: 1,200 videos
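
These counts can be verified directly from the default `labels` config; a minimal sketch:

```python
from datasets import load_dataset
import pandas as pd

# Load the default "labels" config and check the headline statistics.
labels_df = pd.DataFrame(load_dataset("simplexsigil2/wanfall")["train"])
print(len(labels_df))               # total temporal segments: 19,228
print(labels_df["path"].nunique())  # total videos: 12,000
```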

## Activity Categories

The dataset includes 16 activity classes organized into dynamic actions and static states:

### Class List

- 0. `walk` - Walking movement, including jogging and running
- 1. `fall` - Falling down (from any previous state), beginning at the moment control is lost and ending when the person reaches a resting state or the activity changes
- 2. `fallen` - Person in a fallen state (on the ground after a fall)
- 3. `sit_down` - Transitioning from standing to sitting
- 4. `sitting` - Stationary sitting posture
- 5. `lie_down` - Intentionally lying down (not falling)
- 6. `lying` - Stationary lying posture (after an intentional lie_down)
- 7. `stand_up` - Getting up from a fallen or lying state into a sitting or standing position (not limited to standing)
- 8. `standing` - Stationary standing posture
- 9. `other` - Actions not fitting the above categories
- 10. `kneel_down` - Transitioning to a kneeling position
- 11. `kneeling` - Stationary kneeling posture
- 12. `squat_down` - Transitioning to a squatting position
- 13. `squatting` - Stationary squatting posture
- 14. `crawl` - Crawling movement on hands and knees
- 15. `jump` - Jumping action

## Structure

The repository is organized as follows:

- `labels/` - CSV files containing temporal segment annotations
  - `wanfall.csv` - All temporal segments for the dataset
  - `label2id.csv` - Mapping of activity names to integer IDs
- `splits/` - Train/validation/test split definitions
  - `train.csv` - Training set video paths (80%)
  - `val.csv` - Validation set video paths (10%)
  - `test.csv` - Test set video paths (10%)
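
The `label2id.csv` mapping can be fetched with `huggingface_hub`; a minimal sketch, assuming the CSV has `label` and `id` columns (check the file, as the exact header may differ):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the mapping file from the dataset repo and build a dict.
csv_path = hf_hub_download(
    repo_id="simplexsigil2/wanfall",
    filename="labels/label2id.csv",
    repo_type="dataset",
)
label2id = pd.read_csv(csv_path).set_index("label")["id"].to_dict()
print(label2id)  # e.g. {"walk": 0, "fall": 1, ...}
```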

## Label Format

The `labels/wanfall.csv` file follows this format:

```csv
path,label,start,end,subject,cam,dataset
```

Where:

- `path`: Relative path to the video (without the `.mp4` extension, e.g., `fall/fall_ch_001`)
- `label`: Class ID (0-15) corresponding to one of the 16 activity classes
- `start`: Start time of the segment in seconds
- `end`: End time of the segment in seconds
- `subject`: Subject ID (-1 for synthetic data without subject tracking)
- `cam`: Camera view ID (-1 for single view/no camera variation)
- `dataset`: Dataset name (`wanfall`)
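
For illustration, a row for the video `fall/fall_ch_001` might look like this (the timestamps are invented for the example):

```csv
fall/fall_ch_001,1,1.250,2.875,-1,-1,wanfall
```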

## Split Format

Split files in the `splits/` directory list the video paths included in each partition:

```csv
path
fall/fall_ch_001
fall/fall_ch_002
...
```

## Usage Examples

### Load Default Split

```python
from datasets import load_dataset
import pandas as pd

# Load the datasets
print("Loading WanFall dataset...")

# Load labels (all temporal segments) - default config
labels = load_dataset("YOUR_USERNAME/wanfall")["train"]

# Load random train/val/test splits
random_split = load_dataset("YOUR_USERNAME/wanfall", "random")

# Convert to pandas DataFrames
labels_df = pd.DataFrame(labels)
print(f"Labels dataframe shape: {labels_df.shape}")
print(f"Total temporal segments: {len(labels_df)}")

# Process each split
for split_name, split_data in random_split.items():
    # Convert to DataFrame
    split_df = pd.DataFrame(split_data)

    # Join with labels on 'path'
    merged_df = pd.merge(split_df, labels_df, on="path", how="left")

    # Print statistics
    print(f"\n{split_name} split:")
    print(f"  Videos: {len(split_df)}")
    print(f"  Temporal segments: {len(merged_df)}")
    print(f"  Unique labels: {merged_df['label'].nunique()}")

### Analyze Label Distribution

```python
from datasets import load_dataset
import pandas as pd

# Load labels (default config)
labels = load_dataset("YOUR_USERNAME/wanfall")["train"]
labels_df = pd.DataFrame(labels)

# Load label names
label_map = {
    0: 'walk', 1: 'fall', 2: 'fallen', 3: 'sit_down',
    4: 'sitting', 5: 'lie_down', 6: 'lying', 7: 'stand_up',
    8: 'standing', 9: 'other', 10: 'kneel_down', 11: 'kneeling',
    12: 'squat_down', 13: 'squatting', 14: 'crawl', 15: 'jump'
}

# Add label names
labels_df['label_name'] = labels_df['label'].map(label_map)

# Segment-level distribution
print("Temporal Segment Distribution:")
segment_counts = labels_df['label_name'].value_counts().sort_index()
for label_name, count in segment_counts.items():
    print(f"  {label_name:15s}: {count:5d} segments")

# Video-level distribution (primary activity from the path prefix)
labels_df['primary_activity'] = labels_df['path'].str.split('/').str[0]
print("\nVideo Distribution by Primary Activity:")
# Count each video once, not once per segment
video_counts = labels_df.drop_duplicates('path')['primary_activity'].value_counts()
for activity, count in video_counts.items():
    print(f"  {activity:15s}: {count:5d} videos")
```

### Iterate Over Split

```python
from datasets import load_dataset
import pandas as pd

# Load data
labels = load_dataset("YOUR_USERNAME/wanfall")["train"]  # default config
labels_df = pd.DataFrame(labels)

splits = load_dataset("YOUR_USERNAME/wanfall", "random")
train_df = pd.DataFrame(splits["train"])

# Merge to get train labels
train_labels = pd.merge(train_df, labels_df, on="path", how="left")

print(f"Training set: {len(train_labels)} temporal segments")

# Iterate over videos
for video_path in train_df['path'][:5]:
    # Get all segments for this video
    video_segments = train_labels[train_labels['path'] == video_path]

    print(f"\n{video_path}:")
    print(f"  Segments: {len(video_segments)}")

    for _, seg in video_segments.iterrows():
        duration = seg['end'] - seg['start']
        print(f"    {seg['start']:.3f}s - {seg['end']:.3f}s ({duration:.3f}s): "
              f"label {seg['label']}")

### PyTorch Dataset Integration

```python
from datasets import load_dataset
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
from pathlib import Path
import cv2
import numpy as np


class WanFallDataset(Dataset):
    """
    PyTorch Dataset for WanFall activity recognition.

    This dataset provides both temporal segments and video paths for loading.
    """

    def __init__(
        self,
        split='train',
        video_root=None,
        transform=None,
        target_transform=None,
        return_segments=True,
        fps=16,
        num_frames=81
    ):
        """
        Args:
            split: One of 'train', 'validation', 'test'
            video_root: Root directory containing video files (e.g., /path/to/wanfall/videos)
            transform: Optional transform to apply to video frames
            target_transform: Optional transform to apply to labels
            return_segments: If True, returns all temporal segments. If False, returns one sample per video.
            fps: Frame rate of videos (default: 16)
            num_frames: Number of frames per video (default: 81)
        """
        super().__init__()

        # Load labels (all temporal segments)
        labels_ds = load_dataset("simplexsigil2/wanfall")
        self.labels_df = pd.DataFrame(labels_ds["train"])

        # Load split
        split_ds = load_dataset("simplexsigil2/wanfall", "random")
        split_df = pd.DataFrame(split_ds[split])

        # Merge to get labeled segments for this split
        self.data = pd.merge(split_df, self.labels_df, on="path", how="left")

        # If not returning segments, keep only one row per video
        if not return_segments:
            self.data = self.data.groupby('path').first().reset_index()

        self.video_root = Path(video_root) if video_root else None
        self.transform = transform
        self.target_transform = target_transform
        self.return_segments = return_segments
        self.fps = fps
        self.num_frames = num_frames

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data.iloc[idx]

        # Get video path
        video_path = row['path']
        if self.video_root is not None:
            video_path = self.video_root / f"{video_path}.mp4"

        # Load video frames (if video_root is provided)
        frames = None
        if self.video_root is not None and Path(video_path).exists():
            frames = self._load_video(video_path)
            if self.transform is not None:
                frames = self.transform(frames)

        # Get label information
        label = int(row['label'])
        start_time = float(row['start'])
        end_time = float(row['end'])

        # Convert timestamps to frame indices (clamp the end to the clip length)
        start_frame = int(start_time * self.fps)
        end_frame = min(int(end_time * self.fps), self.num_frames - 1)

        if self.target_transform is not None:
            label = self.target_transform(label)

        # Return data
        sample = {
            'video_path': row['path'],
            'label': label,
            'start_time': start_time,
            'end_time': end_time,
            'start_frame': start_frame,
            'end_frame': end_frame,
        }

        if frames is not None:
            sample['frames'] = frames

        return sample

    def _load_video(self, video_path):
        """Load video frames using OpenCV."""
        cap = cv2.VideoCapture(str(video_path))
        frames = []

        while True:
            ret, frame = cap.read()
            if not ret:
                break
            # Convert BGR to RGB
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(frame)

        cap.release()

        # Convert to numpy array (T, H, W, C)
        frames = np.array(frames)

        return frames


# Example usage
def get_dataloaders(video_root, batch_size=32, num_workers=4):
    """Create PyTorch DataLoaders for train/val/test splits."""

    # Optional: Define transforms
    from torchvision import transforms

    transform = transforms.Compose([
        transforms.Lambda(lambda x: torch.from_numpy(x).float()),
        transforms.Lambda(lambda x: x.permute(0, 3, 1, 2)),  # (T, H, W, C) -> (T, C, H, W)
        transforms.Lambda(lambda x: x / 255.0),  # Normalize to [0, 1]
    ])

    # Create datasets
    train_dataset = WanFallDataset(
        split='train',
        video_root=video_root,
        transform=transform,
        return_segments=True
    )

    val_dataset = WanFallDataset(
        split='validation',
        video_root=video_root,
        transform=transform,
        return_segments=True
    )

    test_dataset = WanFallDataset(
        split='test',
        video_root=video_root,
        transform=transform,
        return_segments=True
    )

    # Create dataloaders
    train_loader = DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=num_workers,
        pin_memory=True
    )

    val_loader = DataLoader(
        val_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=num_workers,
        pin_memory=True
    )

    test_loader = DataLoader(
        test_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=num_workers,
        pin_memory=True
    )

    return train_loader, val_loader, test_loader


# Example training loop snippet
if __name__ == "__main__":
    video_root = Path("/path/to/wanfall/videos")

    train_loader, val_loader, test_loader = get_dataloaders(
        video_root=video_root,
        batch_size=16,
        num_workers=4
    )

    print(f"Train batches: {len(train_loader)}")
    print(f"Val batches: {len(val_loader)}")
    print(f"Test batches: {len(test_loader)}")

    # Inspect first batch
    for batch in train_loader:
        print("\nBatch keys:", batch.keys())
        if 'frames' in batch:
            print(f"Frames shape: {batch['frames'].shape}")
        print(f"Labels shape: {batch['label'].shape}")
        print(f"Label range: [{batch['label'].min()}, {batch['label'].max()}]")
        break
```
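
Since clip resolution varies (see Technical Properties below), the default collate function will fail when batching raw frames of different spatial sizes; resizing in the transform is one simple workaround. A minimal sketch, with 224x224 as an arbitrary choice:

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Resize every clip to a fixed spatial size so clips from videos with
# different resolutions can be stacked into one batch (224x224 is arbitrary).
resize_transform = transforms.Compose([
    transforms.Lambda(lambda x: torch.from_numpy(x).float()),    # (T, H, W, C)
    transforms.Lambda(lambda x: x.permute(0, 3, 1, 2) / 255.0),  # (T, C, H, W) in [0, 1]
    transforms.Lambda(lambda x: F.interpolate(
        x, size=(224, 224), mode="bilinear", align_corners=False)),
])
```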

### Converting Temporal Segments to Frame-Level Labels

If you need frame-level labels for dense prediction tasks:

```python
# Imports for both the conversion function and the dataset class below.
import numpy as np
import pandas as pd
import torch
import cv2
from pathlib import Path
from datasets import load_dataset
from torch.utils.data import Dataset


def temporal_segments_to_frames(segments_df, fps=16, num_frames=81):
    """
    Convert temporal segments to frame-level labels.

    Args:
        segments_df: DataFrame with 'start', 'end', 'label' columns for one video
        fps: Frame rate (default: 16)
        num_frames: Number of frames per video (default: 81)

    Returns:
        Array of shape (num_frames,) with label for each frame
    """
    # Initialize with -1 (unlabeled)
    frame_labels = np.full(num_frames, -1, dtype=np.int32)

    # Sort segments by start time
    segments_df = segments_df.sort_values('start')

    for _, seg in segments_df.iterrows():
        start_frame = int(seg['start'] * fps)
        end_frame = min(int(seg['end'] * fps), num_frames - 1)

        # Assign label to frames
        frame_labels[start_frame:end_frame + 1] = seg['label']

    return frame_labels


# Example usage with PyTorch Dataset
class WanFallFrameLevelDataset(Dataset):
    """PyTorch Dataset with frame-level labels."""

    def __init__(self, split='train', video_root=None, transform=None):
        super().__init__()

        # Load labels and split
        labels_ds = load_dataset("simplexsigil2/wanfall")
        self.labels_df = pd.DataFrame(labels_ds["train"])

        split_ds = load_dataset("simplexsigil2/wanfall", "random")
        split_df = pd.DataFrame(split_ds[split])

        # Get unique videos in this split
        self.video_paths = split_df['path'].tolist()
        self.video_root = Path(video_root) if video_root else None
        self.transform = transform

    def __len__(self):
        return len(self.video_paths)

    def __getitem__(self, idx):
        video_path = self.video_paths[idx]

        # Load video frames
        frames = None
        if self.video_root is not None:
            full_path = self.video_root / f"{video_path}.mp4"
            if full_path.exists():
                frames = self._load_video(full_path)
                if self.transform is not None:
                    frames = self.transform(frames)

        # Get all segments for this video and convert to frame labels
        video_segments = self.labels_df[self.labels_df['path'] == video_path]
        frame_labels = temporal_segments_to_frames(video_segments)

        return {
            'video_path': video_path,
            'frames': frames,
            'labels': torch.from_numpy(frame_labels),  # Shape: (81,)
        }

    def _load_video(self, video_path):
        """Load video frames."""
        cap = cv2.VideoCapture(str(video_path))
        frames = []
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(frame)
        cap.release()
        return np.array(frames)
```
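
A quick check of the conversion with toy segments (values invented for the example), using `temporal_segments_to_frames` from the block above:

```python
import pandas as pd

# Two segments covering one 81-frame clip: a walk followed by a fall.
toy = pd.DataFrame({
    "start": [0.0, 2.5],
    "end":   [2.5, 5.0625],
    "label": [0, 1],
})
frame_labels = temporal_segments_to_frames(toy)
print(frame_labels[:3], frame_labels[-3:])  # [0 0 0] [1 1 1]
```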

## Best Practices

**1. Temporal segments vs. frame-level labels:**

- Use temporal segments directly for action localization and detection tasks
- The dataset provides temporal segments only; for dense prediction tasks, convert them to frame-level labels with the conversion function shown above

**2. Handling multiple segments per video:**

- Set `return_segments=True` to get all temporal segments (one sample per segment)
- Set `return_segments=False` to get one sample per video (useful for video-level classification)

**3. Data loading:**

- Videos are stored separately and are not included in this HuggingFace dataset
- Provide a `video_root` path where the videos are stored with the structure `{video_root}/{path}.mp4`
- Example: `{video_root}/fall/fall_ch_001.mp4`

**4. Memory efficiency:**

- Load videos on demand in `__getitem__` rather than pre-loading
- Use `num_workers > 0` in the DataLoader for parallel loading
- Consider video decoding libraries such as decord or `torchvision.io` for faster loading (see the sketch below)
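
For instance, a hedged sketch of a decord-based loader that could replace the OpenCV `_load_video` above (`pip install decord`; the function name is our own):

```python
import numpy as np
from decord import VideoReader

def load_video_decord(video_path) -> np.ndarray:
    """Decode all frames of an MP4 with decord; returns (T, H, W, C) RGB uint8."""
    vr = VideoReader(str(video_path))
    return vr.get_batch(list(range(len(vr)))).asnumpy()
```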

**5. Temporal sampling:**

- For limited memory or faster training, sample a subset of frames instead of loading all 81
- Use uniform, random, or segment-focused sampling depending on the task (see the index-sampling sketch below)
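
A small index-sampling helper (segment-based uniform sampling with optional jitter; the function name and defaults are our own):

```python
import numpy as np

def sample_frame_indices(num_frames=81, num_samples=16, jitter=False):
    """Pick num_samples frame indices from a clip of num_frames frames."""
    # Split the clip into num_samples equal segments.
    edges = np.linspace(0, num_frames, num_samples + 1)
    if jitter:
        # Random index inside each segment (simple training-time augmentation).
        return np.array([np.random.randint(int(a), max(int(a) + 1, int(b)))
                         for a, b in zip(edges[:-1], edges[1:])])
    # Deterministic: the center of each segment (useful for evaluation).
    return ((edges[:-1] + edges[1:]) / 2).astype(int)

print(sample_frame_indices())  # 16 evenly spaced indices across 81 frames
```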

**6. Label handling:**

- Labels are integers 0-15 for the 16 activity classes
- -1 indicates unlabeled frames (when converting to frame-level labels)
- Consider class balancing or weighted sampling for imbalanced classes (a weighted-sampler sketch follows)
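
A sketch of inverse-frequency weighted sampling, assuming `train_labels` is the merged train DataFrame from the "Iterate Over Split" example and that all 16 classes occur in the split:

```python
import torch
from torch.utils.data import WeightedRandomSampler

# Inverse-frequency weight per class, then one weight per training sample.
counts = train_labels["label"].value_counts().sort_index()  # indexed 0..15
class_weights = 1.0 / counts.to_numpy()
sample_weights = class_weights[train_labels["label"].to_numpy()]

sampler = WeightedRandomSampler(
    weights=torch.as_tensor(sample_weights, dtype=torch.double),
    num_samples=len(sample_weights),
    replacement=True,
)
# Pass sampler=sampler (and drop shuffle=True) when building the train DataLoader.
```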

## Technical Properties

### Video Specifications

- Resolution: variable (synthetic generation)
- Duration: 5.0625 seconds (consistent across all videos)
- Frame count: 81 frames
- Frame rate: 16 fps
- Format: MP4 (videos are not included in this dataset and must be obtained separately)

### Annotation Properties

- Temporal precision: sub-second (timestamps with decimal precision)
- Coverage: most frames are labeled, with some gaps
- Overlap handling: segments are annotated chronologically
- Activity sequences: natural transitions (e.g., walk → fall → fallen → stand_up)

### Motion Types

Activities are classified into two main motion types:

**Dynamic motions** (e.g., walk, fall, stand_up):

- Labeled from the first frame where the motion begins
- End when the person reaches a resting state

**Static states** (e.g., fallen, sitting, lying):

- Begin when the person comes to rest in that posture
- Continue until the next motion begins

### Label Sequences

Videos often contain natural sequences of activities:

- Fall sequence: walk → fall → fallen → stand_up
- Sit sequence: walk → sit_down → sitting → stand_up
- Lie sequence: walk → lie_down → lying → stand_up

Not all transitions include static states (e.g., a person might stand_up immediately after falling, without an intervening fallen state).
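
These sequences can be inspected directly from the annotations; a small sketch, reusing `labels_df` with the `label_name` column from the "Analyze Label Distribution" example:

```python
# Concatenate each video's segment labels in temporal order.
sequences = (labels_df.sort_values(["path", "start"])
                      .groupby("path")["label_name"]
                      .agg(" -> ".join))
print(sequences.head())
```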

## Future Extensions

This dataset is designed to support additional metadata and splits:

- Demographics: age groups, ethnicity (to be added)
- Cross-demographic splits: train on one demographic, test on another
- Scenario variations: different environments, lighting, occlusions

## Citation

If you use WanFall in your research, please cite:

```bibtex
@misc{wanfall2025,
  title={WanFall: A Synthetic Activity Recognition Dataset},
  author={TODO},
  year={2025},
}
```

## License

The annotations and split definitions in this repository are released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

The video data is synthetic and must be obtained separately from the original source.

## Contact

For questions about the dataset, please contact [TODO].