---
annotations_creators:
  - expert-generated
language_creators:
  - found
language:
  - ja
license: cc-by-nc-4.0
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - other
task_ids:
  - pose-estimation
  - grasping
  - task-planning
paperswithcode_id: null
pretty_name: FieldData Outdoor Labor Motion Dataset
tags:
  - physical-ai
  - embodied-ai
  - motion-capture
  - manipulation
  - outdoor-robotics
  - painting
  - graffiti-removal
  - construction
  - human-robot-learning
---

# Dataset Card: FieldData Outdoor Labor Motion Dataset (FOLMD)

## Dataset Description

### Dataset Summary

FieldData Outdoor Labor Motion Dataset (FOLMD) is a multi-modal motion dataset of professional outdoor workers performing real-world manual labor tasks in urban environments in Japan.

This dataset is designed to support the development of physical AI foundation models and embodied robotic systems targeting outdoor maintenance and construction tasks. All data was collected from professional workers at actual job sites — not from staged laboratory environments.

The v1.0 release covers a complete two-phase urban fence restoration task:

  1. Graffiti removal using chemical solvent and rag wiping
  2. Repainting with black paint using brushes at multiple heights and postures

Data modalities include egocentric RGB video, depth maps (Intel RealSense D435i), full-body IMU (8-point, Mbientlab MetaMotionS), glove-based grasp force (Tekscan Grip System), and annotated 3D skeleton sequences (SMPL format).


### Why This Dataset?

| Challenge for Robots | What This Dataset Provides |
| --- | --- |
| Vertical surface contact control | Sustained brush pressure on metal slat fence (9–25 N) |
| Low-posture precision work | Deep crouch / kneeling brushwork at ground level |
| Boundary-aware manipulation | Painting within 10 mm of masking tape edges |
| Multi-posture task transitions | Standing → crouching → kneeling within a single session |
| Visual quality assessment | Workers pausing to inspect coverage, detecting missed spots |
| Multi-agent coordination | 3–4 workers dividing fence height and working in parallel |

These tasks involve sustained contact with irregular vertical surfaces, fine motor control under physical load, and situated decision-making — all critical challenges for next-generation manipulation models.


### Supported Tasks

  • Imitation Learning (IL): Action-annotated demonstrations suitable for behavior cloning and inverse reinforcement learning
  • Vision-Language-Action (VLA): Paired egocentric video and action labels for multimodal training
  • Pose Estimation: Full-body SMPL skeleton data across 6 posture categories
  • Action Segmentation: 7-class taxonomy with fine-grained sub-actions and temporal boundaries
  • Force Control Learning: Grasp pressure profiles for 5 grip types across different tools and surfaces
  • Contact-Rich Manipulation: Depth + force data for learning compliant surface following

## Data Collection

### Sensor Setup

| Modality | Device | Specs | Sample Rate |
| --- | --- | --- | --- |
| Egocentric RGB | GoPro Hero 12 | 1920×1080 | 60 fps |
| Depth + RGB | Intel RealSense D435i | 848×480 | 30 fps |
| Full-body IMU (×8) | Mbientlab MetaMotionS | Accel + Gyro + Euler | 200 Hz |
| Grasp force glove | Tekscan Grip System | 24 cells, right hand | 100 Hz |
| 3D skeleton | Computed from IMU | SMPL format | 30 fps |
| External cameras | GoPro Hero 12 (×3) | 1920×1080 | 60 fps |

**Temporal synchronization:** All sensors are synchronized via a GPS-PPS hardware sync signal; timestamp accuracy is ±0.8 ms.
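Because all streams share one clock, a multi-rate stream can be aligned to another by nearest timestamp. A minimal stdlib sketch (the 200 Hz / 30 fps rates come from the sensor table; the helper name is illustrative, not part of the dataset tooling):

```python
from bisect import bisect_left

def nearest_sample(timestamps, t):
    """Return the index of the sample whose timestamp is closest to t.
    Assumes `timestamps` is sorted ascending (shared synchronized clock)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is closer in time
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Align 10 s of 200 Hz IMU samples to 30 fps video frame times
imu_ts = [k / 200.0 for k in range(2000)]
frame_ts = [k / 30.0 for k in range(300)]
aligned = [nearest_sample(imu_ts, t) for t in frame_ts]
```

With a shared hardware clock the residual alignment error is at most half the IMU sample period (2.5 ms) plus the ±0.8 ms timestamp accuracy.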

### Sensor Placement

```
Head:     GoPro (front) + RealSense (side) + IMU
R Wrist:  IMU
L Wrist:  IMU
R Elbow:  IMU
L Elbow:  IMU
Waist:    IMU
R Knee:   IMU
L Knee:   IMU
R Hand:   Force glove (Tekscan)
```

### Collection Protocol

  • Workers are professional outdoor maintenance staff with 2–10 years of experience
  • Written informed consent obtained from all participants
  • Data collected at real urban job sites (Tokyo metropolitan area, Japan)
  • Session begins with 5-minute sensor calibration and zeroing
  • Workers perform tasks naturally without scripted movements
  • All sessions include workers of mixed skill levels (novice / intermediate / experienced)

## Dataset Structure

```
folmd-v1.0/
├── sessions/
│   └── session_20210327_001/
│       ├── raw/
│       │   ├── gopro_head.mp4            # Egocentric RGB (60 fps)
│       │   ├── realsense_rgb.mp4         # RealSense RGB (30 fps)
│       │   ├── realsense_depth.bag       # Depth stream (ROS bag)
│       │   ├── imu_r_wrist.csv           # Right wrist IMU
│       │   ├── imu_l_wrist.csv           # Left wrist IMU
│       │   ├── imu_r_elbow.csv           # Right elbow IMU
│       │   ├── imu_l_elbow.csv           # Left elbow IMU
│       │   ├── imu_waist.csv             # Waist IMU
│       │   ├── imu_r_knee.csv            # Right knee IMU
│       │   ├── imu_l_knee.csv            # Left knee IMU
│       │   ├── imu_head.csv              # Head IMU
│       │   └── force_glove_right.csv     # Right hand force (24 cells)
│       ├── processed/
│       │   ├── skeleton_smpl/
│       │   │   └── frame_*.json          # Per-keyframe SMPL skeleton
│       │   ├── depth_pointcloud/
│       │   │   └── frame_*.pcd           # Per-keyframe 3D point cloud
│       │   └── action_segments.json      # Full annotation (Tier 2)
│       └── metadata/
│           ├── session_info.json
│           ├── sensor_calibration.json
│           └── quality_report.pdf
├── dataset_card.md                       # This file
├── annotation_guidelines.pdf             # Labeling rulebook (EN)
└── sample/
    └── 30min_annotated_sample/           # Free sample — see Licensing
```

## Annotations

### Action Taxonomy (7 Primary Labels)

| Action Label | Description | Avg. Duration (s) |
| --- | --- | --- |
| `solvent_application` | Applying chemical solvent to dissolve graffiti using a rag | 45–180 |
| `scrub_with_rag` | Physical scrubbing to remove dissolved paint | 10–60 |
| `paint_application` | Applying paint with brush/roller at mid-to-upper height | 30–300 |
| `detail_paint_application` | Precision brushwork at edges, slat gaps, base boundaries | 10–120 |
| `visual_inspection` | Standing back to scan the surface for quality assessment | 5–30 |
| `material_preparation` | Handling tools, mixing paint, preparing supplies | 5–60 |
| `masking` | Applying or adjusting masking tape | 30–180 |

### Posture Labels (6 Categories)

```
standing_upright          — full height, minimal trunk lean
standing_slight_lean      — 10–20° trunk forward flexion
crouching_moderate        — knee flexion 60–90°
crouching_deep            — knee flexion > 90°
kneeling_single_knee      — one knee on ground
kneeling_double_knee      — both knees on ground
```
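The crouching labels above are defined purely by knee flexion angle, so that part of the taxonomy can be sketched directly. This is a hypothetical helper, not the dataset's labeling pipeline; standing and kneeling labels additionally depend on trunk lean and ground contact, which this sketch does not model:

```python
def crouch_category(knee_flexion_deg: float) -> str:
    """Map a knee flexion angle to the crouching labels above
    (60–90° moderate, >90° deep). Angles below 60° could be standing
    or kneeling, which need extra cues, so they are reported as "other"."""
    if knee_flexion_deg > 90:
        return "crouching_deep"
    if knee_flexion_deg >= 60:
        return "crouching_moderate"
    return "other"
```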

### Force Labels (Qualitative)

```
minimal    < 5 N    (passive hold / inspection)
low        5–10 N   (precision brush / fine detail)
medium     10–20 N  (normal painting stroke)
high       20–30 N  (scrubbing / removal force)
very_high  > 30 N   (heavy scrub on resistant surface)
```
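A force reading can be binned into these labels with a simple threshold cascade. A minimal sketch; note the published ranges share endpoints (5, 10, 20 N), and assigning each interior boundary to the higher bin is our choice, not something the card specifies:

```python
def force_label(total_force_n: float) -> str:
    """Map a total grasp force (N) to the qualitative bins above.
    Interior boundary values (5, 10, 20) go to the higher bin."""
    if total_force_n < 5:
        return "minimal"
    if total_force_n < 10:
        return "low"
    if total_force_n < 20:
        return "medium"
    if total_force_n <= 30:
        return "high"
    return "very_high"
```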

### Annotation Layers

| Layer | Content | Completeness |
| --- | --- | --- |
| Layer 1 | Action segmentation (start/end time + label) | 100% |
| Layer 2 | Object + surface + environment labels | 100% |
| Layer 3 | Skill quality label (expert/intermediate/novice) | 100% |
| Layer 4 | Failure cases and recovery actions | 100% |

Inter-annotator agreement (Cohen's κ): 0.94


## Data Fields

### `action_segments.json`

| Field | Type | Description |
| --- | --- | --- |
| `session_id` | string | Unique session identifier |
| `frame_id` | string | Keyframe identifier |
| `timestamp_sec` | float | Time from session start (seconds) |
| `phase` | string | `graffiti_removal` or `repaint_application` |
| `action_label` | string | Primary action (see taxonomy above) |
| `sub_action` | string | Fine-grained sub-action description |
| `worker_id` | string | Worker identifier (W01–W04) |
| `body_posture` | string | Posture category |
| `working_height` | string | Vertical zone of the fence being worked on |
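These fields can be consumed with plain `json`. The excerpt below is hypothetical: field names follow the table above, but the example values and the list-of-objects layout are illustrative assumptions, not the official schema:

```python
import json

# Hypothetical two-segment excerpt of action_segments.json
raw = """[
  {"session_id": "session_20210327_001", "frame_id": "f000120",
   "timestamp_sec": 4.0, "phase": "graffiti_removal",
   "action_label": "scrub_with_rag", "sub_action": "circular_scrub",
   "worker_id": "W01", "body_posture": "standing_slight_lean",
   "working_height": "mid"},
  {"session_id": "session_20210327_001", "frame_id": "f000450",
   "timestamp_sec": 15.0, "phase": "repaint_application",
   "action_label": "paint_application", "sub_action": "vertical_stroke",
   "worker_id": "W02", "body_posture": "standing_upright",
   "working_height": "upper"}
]"""

segments = json.loads(raw)
# e.g. select all painting segments performed by one worker
painting = [
    s for s in segments
    if s["action_label"] == "paint_application" and s["worker_id"] == "W02"
]
```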

### `imu_*.csv`

| Field | Type | Unit | Description |
| --- | --- | --- | --- |
| `timestamp_sec` | float | s | Synchronized timestamp |
| `accel_x/y/z_g` | float | g | Linear acceleration |
| `gyro_x/y/z_dps` | float | °/s | Angular velocity |
| `euler_roll/pitch/yaw_deg` | float | ° | Fused orientation estimate |
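A stdlib sketch of parsing one of these files. The two rows below are fabricated for illustration; the column names follow the field table above (accelerations in g, which most pipelines convert to m/s²):

```python
import csv
import io

# Hypothetical two-row excerpt of an imu_*.csv file (values are made up)
sample = """timestamp_sec,accel_x_g,accel_y_g,accel_z_g,gyro_x_dps,gyro_y_dps,gyro_z_dps,euler_roll_deg,euler_pitch_deg,euler_yaw_deg
0.000,0.01,-0.02,0.99,0.5,-0.3,0.1,1.2,-0.8,45.0
0.005,0.02,-0.01,1.00,0.4,-0.2,0.2,1.3,-0.7,45.1
"""

# Parse every field as a float
rows = [{k: float(v) for k, v in r.items()} for r in csv.DictReader(io.StringIO(sample))]

# Convert acceleration from g to m/s^2 for downstream use
G = 9.80665
accel_z_ms2 = [r["accel_z_g"] * G for r in rows]
```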

### `force_glove_right.csv`

| Field | Type | Unit | Description |
| --- | --- | --- | --- |
| `timestamp_sec` | float | s | Synchronized timestamp |
| `cell_{name}_N` | float | N | Force at each of the 24 sensor cells |
| `total_force_N` | float | N | Sum across all active cells |
| `grip_type` | string | – | Inferred grip classification |
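The `total_force_N` column is described as a sum across active cells, which can be reproduced from the per-cell values. A minimal sketch, assuming "active" means a non-zero reading (the card does not define an activation threshold):

```python
def total_force(cell_forces_n):
    """Sum per-cell forces over active (non-zero) cells, matching the
    total_force_N definition above ("sum across all active cells")."""
    return sum(f for f in cell_forces_n if f > 0)
```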

### `realsense_depth.bag` (ROS bag)

| Topic | Format | Description |
| --- | --- | --- |
| `/camera/depth/image_rect_raw` | `sensor_msgs/Image` | 16-bit depth (mm) |
| `/camera/color/image_raw` | `sensor_msgs/Image` | RGB aligned to depth |
| `/camera/depth/color/points` | `sensor_msgs/PointCloud2` | Fused RGB-D point cloud |
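Once the 16-bit depth values are extracted from the bag, conversion to metric units is a simple scale. A sketch assuming the conventional RealSense encoding stated above (millimetres, with 0 meaning no valid return):

```python
def depth_mm_to_m(raw_row):
    """Convert one row of 16-bit depth values (mm, per the
    /camera/depth/image_rect_raw topic above) to metres.
    Zero readings indicate no valid return and become None."""
    return [None if d == 0 else d / 1000.0 for d in raw_row]
```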

## Data Splits

| Split | Sessions | Hours | Notes |
| --- | --- | --- | --- |
| Sample (free) | 1 | 0.5 | Available without license agreement |
| Full v1.0 | 10 | 50 | Commercial license required |
| Planned v2.0 | 50+ | 300+ | Additional tasks: garbage collection, weeding, building cleaning |

## Dataset Creation

### Motivation

Physical AI and embodied robot models are rapidly advancing, but publicly available training data for outdoor real-world manipulation tasks remains extremely scarce. Existing motion datasets focus primarily on laboratory manipulation, household tasks, or driving — leaving a large gap in construction, maintenance, and urban service work.

This dataset addresses that gap by collecting data from professional Japanese workers performing skilled outdoor maintenance tasks. Japan's workforce is recognized globally for precision, safety discipline, and consistent execution — making it an ideal source of high-quality demonstration data.

### Collection Methodology

  • Task protocols defined in cooperation with professional site supervisors
  • Sessions conducted at real operational job sites
  • Multiple skill levels included to capture expert vs. novice motion differences
  • Both "successful" and "recovery from failure" motion captured and labeled
  • All data cleaned, synchronized, and formatted to be pipeline-ready

### Known Limitations

  • All data collected in Tokyo metropolitan area — urban environment bias
  • Nighttime and rainy-weather sessions represent < 15% of current dataset
  • Force glove covers right hand only
  • Skeleton estimation accuracy degrades at extreme joint angles (> 110°)

## Licensing Information

This dataset is released under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).

  • ✅ Free to use for academic and non-commercial research
  • ✅ Free 30-minute sample available without registration
  • ✅ Redistribution permitted with attribution
  • ❌ Commercial use requires a separate commercial license

For commercial licensing inquiries, please contact: contact@fielddata.jp


## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{fielddata2025folmd,
  author    = {FieldData Japan},
  title     = {FieldData Outdoor Labor Motion Dataset (FOLMD) v1.0},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/fielddata-jp/folmd},
  note      = {Multi-modal motion dataset for physical AI training — outdoor maintenance tasks, Tokyo, Japan}
}
```

## Contact

FieldData Japan

We welcome collaboration inquiries from robotics researchers and companies interested in:

  • Custom data collection for specific task domains
  • Expanding the dataset to new outdoor task categories
  • Joint research and co-authorship opportunities