# Dataset Card: FieldData Outdoor Labor Motion Dataset (FOLMD)
## Dataset Summary
FieldData Outdoor Labor Motion Dataset (FOLMD) is a multi-modal motion dataset of professional outdoor workers performing real-world manual labor tasks in urban environments in Japan.
This dataset is designed to support the development of physical AI foundation models and embodied robotic systems targeting outdoor maintenance and construction tasks. All data was collected from professional workers at actual job sites, not in staged laboratory environments.
The v1.0 release covers a complete two-phase urban fence restoration task:
- Graffiti removal using chemical solvent and rag wiping
- Repainting with black paint using brushes at multiple heights and postures
Data modalities include egocentric RGB video, depth maps (Intel RealSense D435i), full-body IMU (8-point, Mbientlab MetaMotionS), glove-based grasp force (Tekscan Grip System), and annotated 3D skeleton sequences (SMPL format).
## Why This Dataset?
| Challenge for Robots | What This Dataset Provides |
|---|---|
| Vertical surface contact control | Sustained brush pressure on a metal slat fence (9–25 N) |
| Low-posture precision work | Deep crouch / kneeling brushwork at ground level |
| Boundary-aware manipulation | Painting within 10 mm of masking tape edges |
| Multi-posture task transitions | Standing → crouching → kneeling within a single session |
| Visual quality assessment | Workers pausing to inspect coverage, detecting missed spots |
| Multi-agent coordination | 3–4 workers dividing the fence by height and working in parallel |
These tasks involve sustained contact with irregular vertical surfaces, fine motor control under physical load, and situated decision-making, all of which are critical challenges for next-generation manipulation models.
## Supported Tasks
- Imitation Learning (IL): Action-annotated demonstrations suitable for behavior cloning and inverse reinforcement learning
- Vision-Language-Action (VLA): Paired egocentric video and action labels for multimodal training
- Pose Estimation: Full-body SMPL skeleton data across 6 posture categories
- Action Segmentation: 7-class taxonomy with fine-grained sub-actions and temporal boundaries
- Force Control Learning: Grasp pressure profiles for 5 grip types across different tools and surfaces
- Contact-Rich Manipulation: Depth + force data for learning compliant surface following
## Data Collection
### Sensor Setup
| Modality | Device | Specs | Sample Rate |
|---|---|---|---|
| Egocentric RGB | GoPro Hero 12 | 1920×1080 | 60 fps |
| Depth + RGB | Intel RealSense D435i | 848×480 | 30 fps |
| Full-body IMU (×8) | Mbientlab MetaMotionS | Accel + Gyro + Euler | 200 Hz |
| Grasp Force Glove | Tekscan Grip System | 24 cells, right hand | 100 Hz |
| 3D Skeleton | Computed from IMU | SMPL format | 30 fps |
| External cameras | GoPro Hero 12 (×3) | 1920×1080 | 60 fps |
Temporal synchronization: All sensors are synchronized via a GPS-PPS hardware sync signal. Timestamp accuracy: ±0.8 ms.
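Because the modalities run at different rates (200 Hz IMU, 100 Hz force, 30/60 fps video), downstream pipelines typically resample every channel onto a shared clock. A minimal sketch, assuming each stream exposes a per-sample `timestamp_sec` column as documented below (the function name `resample_to_timeline` is illustrative, not part of the dataset tooling):

```python
import numpy as np

def resample_to_timeline(t_src, values, t_target):
    """Linearly interpolate one sensor channel onto a shared timeline.

    t_src:    source timestamps in seconds (e.g. 200 Hz IMU samples)
    values:   channel samples aligned with t_src
    t_target: target timestamps (e.g. a 30 fps keyframe clock)
    """
    return np.interp(t_target, t_src, values)

# Example: align a 200 Hz IMU channel to a 30 fps frame clock.
t_imu = np.arange(0.0, 1.0, 1.0 / 200.0)    # 200 Hz timestamps
accel = np.sin(2 * np.pi * t_imu)           # stand-in accel channel
t_frames = np.arange(0.0, 1.0, 1.0 / 30.0)  # 30 fps timestamps
accel_30fps = resample_to_timeline(t_imu, accel, t_frames)
```

Linear interpolation is sufficient here because the hardware sync keeps all timestamps within ±0.8 ms of each other.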
### Sensor Placement
- Head: GoPro (front) + RealSense (side) + IMU
- R Wrist: IMU
- L Wrist: IMU
- R Elbow: IMU
- L Elbow: IMU
- Waist: IMU
- R Knee: IMU
- L Knee: IMU
- R Hand: Force glove (Tekscan)
### Collection Protocol
- Workers are professional outdoor maintenance staff with 2–10 years of experience
- Written informed consent obtained from all participants
- Data collected at real urban job sites (Tokyo metropolitan area, Japan)
- Session begins with 5-minute sensor calibration and zeroing
- Workers perform tasks naturally without scripted movements
- All sessions include workers of mixed skill levels (novice / intermediate / experienced)
## Dataset Structure
```
folmd-v1.0/
├── sessions/
│   └── session_20210327_001/
│       ├── raw/
│       │   ├── gopro_head.mp4          # Egocentric RGB (60 fps)
│       │   ├── realsense_rgb.mp4       # RealSense RGB (30 fps)
│       │   ├── realsense_depth.bag     # Depth stream (ROS bag)
│       │   ├── imu_r_wrist.csv         # Right wrist IMU
│       │   ├── imu_l_wrist.csv         # Left wrist IMU
│       │   ├── imu_r_elbow.csv         # Right elbow IMU
│       │   ├── imu_l_elbow.csv         # Left elbow IMU
│       │   ├── imu_waist.csv           # Waist IMU
│       │   ├── imu_r_knee.csv          # Right knee IMU
│       │   ├── imu_l_knee.csv          # Left knee IMU
│       │   ├── imu_head.csv            # Head IMU
│       │   └── force_glove_right.csv   # Right hand force (24 cells)
│       ├── processed/
│       │   ├── skeleton_smpl/
│       │   │   └── frame_*.json        # Per-keyframe SMPL skeleton
│       │   ├── depth_pointcloud/
│       │   │   └── frame_*.pcd         # Per-keyframe 3D point cloud
│       │   └── action_segments.json    # Full annotation (Tier 2)
│       └── metadata/
│           ├── session_info.json
│           ├── sensor_calibration.json
│           └── quality_report.pdf
├── dataset_card.md                     # This file
├── annotation_guidelines.pdf           # Labeling rulebook (EN)
└── sample/
    └── 30min_annotated_sample/         # Free sample (see Licensing)
```
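The layout above can be traversed with nothing but the standard library. A minimal sketch, assuming the documented paths; the helper names `list_sessions` and `load_action_segments` are illustrative, not shipped with the dataset:

```python
import json
from pathlib import Path

def list_sessions(root):
    """Yield session directories under <root>/sessions/, sorted by name."""
    yield from sorted((Path(root) / "sessions").glob("session_*"))

def load_action_segments(session_dir):
    """Load the action annotations for one session.

    Follows the documented layout:
    <session_dir>/processed/action_segments.json
    """
    path = Path(session_dir) / "processed" / "action_segments.json"
    with open(path) as f:
        return json.load(f)
```

The same pattern extends to the raw IMU CSVs and the per-keyframe SMPL and point-cloud files.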
## Annotations
### Action Taxonomy (7 Primary Labels)
| Action Label | Description | Avg Duration (sec) |
|---|---|---|
| solvent_application | Applying chemical solvent to dissolve graffiti using a rag | 45–180 |
| scrub_with_rag | Physical scrubbing to remove dissolved paint | 10–60 |
| paint_application | Applying paint with brush/roller at mid-to-upper height | 30–300 |
| detail_paint_application | Precision brushwork at edges, slat gaps, base boundaries | 10–120 |
| visual_inspection | Standing back to scan the surface for quality assessment | 5–30 |
| material_preparation | Handling tools, mixing paint, preparing supplies | 5–60 |
| masking | Applying or adjusting masking tape | 30–180 |
### Posture Labels (6 Categories)
- standing_upright – full height, minimal trunk lean
- standing_slight_lean – 10–20° trunk forward flexion
- crouching_moderate – knee flexion 60–90°
- crouching_deep – knee flexion > 90°
- kneeling_single_knee – one knee on ground
- kneeling_double_knee – both knees on ground
### Force Labels (Qualitative)
- minimal – < 5 N (passive hold / inspection)
- low – 5–10 N (precision brush / fine detail)
- medium – 10–20 N (normal painting stroke)
- high – 20–30 N (scrubbing / removal force)
- very_high – > 30 N (heavy scrub on resistant surface)
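These qualitative bands can be recomputed from the force-glove stream. A minimal sketch; the exact boundary handling (half-open intervals, lower bound inclusive) is an assumption, since the card only states the ranges, and `force_label` is an illustrative helper name:

```python
def force_label(total_force_n: float) -> str:
    """Map a total grasp force in newtons to the card's qualitative label.

    Boundary convention (assumed, not specified by the card):
    each band includes its lower bound, e.g. 10 N -> "medium".
    """
    if total_force_n < 5:
        return "minimal"
    if total_force_n < 10:
        return "low"
    if total_force_n < 20:
        return "medium"
    if total_force_n <= 30:
        return "high"
    return "very_high"
```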
### Annotation Layers
| Layer | Content | Completeness |
|---|---|---|
| Layer 1 | Action segmentation (start/end time + label) | 100% |
| Layer 2 | Object + surface + environment labels | 100% |
| Layer 3 | Skill quality label (expert/intermediate/novice) | 100% |
| Layer 4 | Failure cases and recovery actions | 100% |
Inter-annotator agreement (Cohen's κ): 0.94
## Data Fields
### action_segments.json
| Field | Type | Description |
|---|---|---|
| session_id | string | Unique session identifier |
| frame_id | string | Keyframe identifier |
| timestamp_sec | float | Time from session start (seconds) |
| phase | string | graffiti_removal or repaint_application |
| action_label | string | Primary action (see taxonomy above) |
| sub_action | string | Fine-grained sub-action description |
| worker_id | string | Worker identifier (W01–W04) |
| body_posture | string | Posture category |
| working_height | string | Vertical zone of the fence being worked on |
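Records with these fields are straightforward to filter for training subsets. A minimal sketch, assuming the annotation file parses to a list of per-segment dicts with the fields above (`segments_by_action` is an illustrative helper name):

```python
def segments_by_action(segments, action_label, phase=None):
    """Filter annotation records by action_label and, optionally, phase.

    `segments` is the list of records loaded from action_segments.json;
    each record carries the documented fields.
    """
    out = []
    for seg in segments:
        if seg["action_label"] != action_label:
            continue
        if phase is not None and seg["phase"] != phase:
            continue
        out.append(seg)
    return out
```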
### imu_*.csv
| Field | Type | Unit | Description |
|---|---|---|---|
| timestamp_sec | float | s | Synchronized timestamp |
| accel_x/y/z_g | float | g | Linear acceleration |
| gyro_x/y/z_dps | float | °/s | Angular velocity |
| euler_roll/pitch/yaw_deg | float | ° | Fused orientation estimate |
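Since the IMU channels are stored in g and °/s, most robotics stacks will want SI units first. A minimal conversion sketch (the helper names are illustrative, and standard gravity 9.80665 m/s² is assumed as the g reference):

```python
import math

G = 9.80665  # standard gravity in m/s^2, assumed conversion reference

def accel_g_to_ms2(a_g: float) -> float:
    """Convert an accel_*_g sample from g to m/s^2."""
    return a_g * G

def gyro_dps_to_rads(w_dps: float) -> float:
    """Convert a gyro_*_dps sample from deg/s to rad/s."""
    return math.radians(w_dps)
```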
### force_glove_right.csv
| Field | Type | Unit | Description |
|---|---|---|---|
| timestamp_sec | float | s | Synchronized timestamp |
| cell_{name}_N | float | N | Force at each of the 24 sensor cells |
| total_force_N | float | N | Sum across all active cells |
| grip_type | string | – | Inferred grip classification |
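The total_force_N column can be cross-checked against the per-cell columns. A minimal sketch, assuming a CSV row is parsed into a dict of column name to float and that cell columns follow the documented cell_{name}_N pattern (`recompute_total_force` is an illustrative helper name):

```python
def recompute_total_force(row: dict) -> float:
    """Sum the per-cell forces in one force-glove row.

    `row` maps CSV column names to float values; only columns matching
    the cell_{name}_N pattern are summed, so timestamp_sec and
    total_force_N are ignored.
    """
    return sum(
        v for k, v in row.items()
        if k.startswith("cell_") and k.endswith("_N")
    )
```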
### realsense_depth.bag (ROS bag)
| Topic | Format | Description |
|---|---|---|
| /camera/depth/image_rect_raw | sensor_msgs/Image | 16-bit depth (mm) |
| /camera/color/image_raw | sensor_msgs/Image | RGB aligned to depth |
| /camera/depth/color/points | sensor_msgs/PointCloud2 | Fused RGB-D point cloud |
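Reading the bag itself requires ROS tooling (e.g. the rosbag package); once a depth frame is extracted as a 16-bit array, converting to metres is a one-liner. A minimal sketch with `depth_mm_to_m` as an illustrative helper name; treating zero pixels as "no depth" follows the common RealSense convention and is an assumption here:

```python
import numpy as np

def depth_mm_to_m(depth_raw: np.ndarray) -> np.ndarray:
    """Convert a 16-bit depth image in millimetres to float metres.

    Zero-valued pixels are treated as missing depth and mapped to NaN
    (an assumed convention; the card does not specify invalid-pixel
    encoding).
    """
    depth_m = depth_raw.astype(np.float64) / 1000.0
    depth_m[depth_raw == 0] = np.nan
    return depth_m
```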
## Data Splits
| Split | Sessions | Hours | Notes |
|---|---|---|---|
| Sample (free) | 1 | 0.5 | Available without license agreement |
| Full v1.0 | 10 | 50 | Commercial license required |
| Planned v2.0 | 50+ | 300+ | Additional tasks: garbage collection, weeding, building cleaning |
## Dataset Creation
### Motivation
Physical AI and embodied robot models are rapidly advancing, but publicly available training data for outdoor real-world manipulation tasks remains extremely scarce. Existing motion datasets focus primarily on laboratory manipulation, household tasks, or driving, leaving a large gap in construction, maintenance, and urban service work.
This dataset addresses that gap by collecting data from professional Japanese workers performing skilled outdoor maintenance tasks. Japan's workforce is recognized globally for precision, safety discipline, and consistent execution, making it an ideal source of high-quality demonstration data.
### Collection Methodology
- Task protocols defined in cooperation with professional site supervisors
- Sessions conducted at real operational job sites
- Multiple skill levels included to capture expert vs. novice motion differences
- Both "successful" and "recovery from failure" motion captured and labeled
- All data cleaned, synchronized, and formatted to be pipeline-ready
### Known Limitations
- All data was collected in the Tokyo metropolitan area, giving the dataset an urban-environment bias
- Nighttime and rainy-weather sessions represent < 15% of the current dataset
- The force glove covers the right hand only
- Skeleton estimation accuracy degrades at extreme joint angles (> 110°)
## Licensing Information
This dataset is released under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
- Free to use for academic and non-commercial research
- A free 30-minute sample is available without registration
- Redistribution is permitted with attribution
- Commercial use requires a separate commercial license
For commercial licensing inquiries, please contact: contact@fielddata.jp
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{fielddata2025folmd,
  author    = {FieldData Japan},
  title     = {FieldData Outdoor Labor Motion Dataset (FOLMD) v1.0},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/fielddata-jp/folmd},
  note      = {Multi-modal motion dataset for physical AI training: outdoor maintenance tasks, Tokyo, Japan}
}
```
## Contact
FieldData Japan
- Email: contact@fielddata.jp
- HuggingFace: fielddata-jp
- X (Twitter): @fielddata_jp
We welcome collaboration inquiries from robotics researchers and companies interested in:
- Custom data collection for specific task domains
- Expanding the dataset to new outdoor task categories
- Joint research and co-authorship opportunities