---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ja
license: cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
---

### Posture Labels

```
kneeling_single_knee — one knee on ground
kneeling_double_knee — both knees on ground
```

### Force Labels (Qualitative)

```
minimal     < 5 N     (passive hold / inspection)
low         5–10 N    (precision brush / fine detail)
medium      10–20 N   (normal painting stroke)
high        20–30 N   (scrubbing / removal force)
very_high   > 30 N    (heavy scrub on resistant surface)
```

### Annotation Layers

| Layer | Content | Completeness |
|---|---|---|
| Layer 1 | Action segmentation (start/end time + label) | 100% |
| Layer 2 | Object + surface + environment labels | 100% |
| Layer 3 | Skill quality label (expert/intermediate/novice) | 100% |
| Layer 4 | Failure cases and recovery actions | 100% |

**Inter-annotator agreement (Cohen's κ):** 0.94

---

## Data Fields

### `action_segments.json`

| Field | Type | Description |
|---|---|---|
| `session_id` | string | Unique session identifier |
| `frame_id` | string | Keyframe identifier |
| `timestamp_sec` | float | Time from session start (seconds) |
| `phase` | string | `graffiti_removal` or `repaint_application` |
| `action_label` | string | Primary action (see taxonomy above) |
| `sub_action` | string | Fine-grained sub-action description |
| `worker_id` | string | Worker identifier (W01–W04) |
| `body_posture` | string | Posture category |
| `working_height` | string | Vertical zone of the fence being worked on |

### `imu_*.csv`

| Field | Type | Unit | Description |
|---|---|---|---|
| `timestamp_sec` | float | s | Synchronized timestamp |
| `accel_x/y/z_g` | float | g | Linear acceleration |
| `gyro_x/y/z_dps` | float | °/s | Angular velocity |
| `euler_roll/pitch/yaw_deg` | float | ° | Fused orientation estimate |

### `force_glove_right.csv`

| Field | Type | Unit | Description |
|---|---|---|---|
| `timestamp_sec` | float | s | Synchronized timestamp |
| `cell_{name}_N` | float | N | Force at each of 24 sensor cells |
| `total_force_N` | float | N | Sum across all active cells |
| `grip_type` | string | — | Inferred grip classification |

### `realsense_depth.bag` (ROS bag)

| Topic | Format | Description |
|---|---|---|
| `/camera/depth/image_rect_raw` | sensor_msgs/Image | 16-bit depth (mm) |
| `/camera/color/image_raw` | sensor_msgs/Image | RGB aligned to depth |
| `/camera/depth/color/points` | sensor_msgs/PointCloud2 | Fused RGBD point cloud |

---

## Data Splits

| Split | Sessions | Hours | Notes |
|---|---|---|---|
| Sample (free) | 1 | 0.5 | Available without license agreement |
| Full v1.0 | 10 | 50 | Commercial license required |
| Planned v2.0 | 50+ | 300+ | Additional tasks: garbage collection, weeding, building cleaning |

---

## Dataset Creation

### Motivation

Physical AI and embodied robot models are rapidly advancing, but publicly available training data for **outdoor real-world manipulation tasks** remains extremely scarce. Existing motion datasets focus primarily on laboratory manipulation, household tasks, or driving, leaving a large gap in construction, maintenance, and urban service work.

This dataset addresses that gap by collecting data from professional Japanese workers performing skilled outdoor maintenance tasks. Japan's workforce is recognized globally for precision, safety discipline, and consistent execution, making it an ideal source of high-quality demonstration data.

### Collection Methodology

- Task protocols defined in cooperation with professional site supervisors
- Sessions conducted at real operational job sites
- Multiple skill levels included to capture expert vs. novice motion differences
- Both "successful" and "recovery from failure" motions captured and labeled
- All data cleaned, synchronized, and formatted to be pipeline-ready

### Known Limitations

- All data collected in the Tokyo metropolitan area (urban environment bias)
- Nighttime and rainy-weather sessions represent < 15% of the current dataset
- Force glove covers the right hand only
- Skeleton estimation accuracy degrades at extreme joint angles (> 110°)

---

## Licensing Information

This dataset is released under **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)**.

- ✅ Free to use for academic and non-commercial research
- ✅ Free 30-minute sample available without registration
- ✅ Redistribution permitted with attribution
- ❌ Commercial use requires a separate commercial license

For commercial licensing inquiries, please contact: **contact@fielddata.jp**

---

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{fielddata2025folmd,
  author    = {FieldData Japan},
  title     = {FieldData Outdoor Labor Motion Dataset (FOLMD) v1.0},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/fielddata-jp/folmd},
  note      = {Multi-modal motion dataset for physical AI training — outdoor maintenance tasks, Tokyo, Japan}
}
```

---

## Contact

**FieldData Japan**

- Email: contact@fielddata.jp
- HuggingFace: [fielddata-jp](https://huggingface.co/fielddata-jp)
- X (Twitter): [@fielddata_jp](https://twitter.com/fielddata_jp)

We welcome collaboration inquiries from robotics researchers and companies interested in:

- Custom data collection for specific task domains
- Expanding the dataset to new outdoor task categories
- Joint research and co-authorship opportunities
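All streams share the synchronized `timestamp_sec` axis described in the field tables above, so keyframe annotations from `action_segments.json` can be joined to raw sensor rows with a nearest-timestamp merge. A minimal pandas sketch using synthetic stand-in records (the action labels and the 100 Hz IMU rate here are placeholders, not values taken from the dataset):

```python
import pandas as pd

# Stand-in keyframe annotations following the action_segments.json schema
# (labels are placeholders, not the official action taxonomy).
segments = pd.DataFrame({
    "timestamp_sec": [0.5, 2.0, 3.5],
    "phase": ["graffiti_removal"] * 3,
    "action_label": ["segment_a", "segment_b", "segment_c"],
})

# Stand-in IMU stream following the imu_*.csv schema (100 Hz assumed).
n = 400
imu = pd.DataFrame({
    "timestamp_sec": [i * 0.01 for i in range(n)],
    "accel_x_g": [0.0] * n,
    "accel_y_g": [0.0] * n,
    "accel_z_g": [1.0] * n,  # gravity along z while standing still
})

# Attach the nearest-in-time IMU sample to each annotated keyframe.
synced = pd.merge_asof(
    segments.sort_values("timestamp_sec"),
    imu.sort_values("timestamp_sec"),
    on="timestamp_sec",
    direction="nearest",
)
print(synced[["timestamp_sec", "action_label", "accel_z_g"]])
```

The same pattern extends to the force-glove CSV, since it shares the `timestamp_sec` column.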
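The qualitative force bands above map directly onto `total_force_N` from the glove stream. A small helper illustrating that mapping; the card does not state on which side the 5/10/20/30 N boundaries fall, so half-open-at-the-lower-edge intervals are assumed here:

```python
def force_label(total_force_n: float) -> str:
    """Map a total_force_N reading (newtons) onto the qualitative force bands.

    Boundary convention (assumed): each band includes its lower edge.
    """
    if total_force_n < 5:
        return "minimal"      # passive hold / inspection
    if total_force_n < 10:
        return "low"          # precision brush / fine detail
    if total_force_n < 20:
        return "medium"       # normal painting stroke
    if total_force_n <= 30:
        return "high"         # scrubbing / removal force
    return "very_high"        # heavy scrub on resistant surface

print(force_label(12.5))  # -> medium
```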
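The `/camera/depth/image_rect_raw` topic stores 16-bit depth in millimeters, so deprojecting a frame to a metric point cloud needs only a millimeter-to-meter scale and pinhole intrinsics. A NumPy sketch; `FX`, `FY`, `CX`, `CY` below are placeholder values, not the dataset's calibration, which should be read from the RealSense camera itself:

```python
import numpy as np

# Placeholder pinhole intrinsics (NOT the dataset's real calibration).
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def depth_to_points(depth_mm: np.ndarray) -> np.ndarray:
    """Deproject a 16-bit depth image (mm, 0 = no return) to (N, 3) XYZ meters."""
    v, u = np.nonzero(depth_mm)                    # pixel coords of valid returns
    z = depth_mm[v, u].astype(np.float64) * 1e-3   # mm -> m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)

depth = np.zeros((480, 640), dtype=np.uint16)
depth[240, 320] = 1500                 # a single 1.5 m return at the principal point
points = depth_to_points(depth)        # -> one point at (0.0, 0.0, 1.5)
```

For most uses the pre-fused `/camera/depth/color/points` topic can be consumed directly instead of deprojecting by hand.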