---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
tags:
- motion
- human-motion
- motion-generation
- 3d_pose
- hand_pose
- body_pose
- language
- esk
pretty_name: ESK-MotionGen
---
# 🍳 EPFL‑Smart‑Kitchen: Motion Generation Assets
## 📚 Introduction
This folder accompanies motion generation experiments on EPFL‑Smart‑Kitchen‑30 (ESK‑30). The goal is to generate realistic human motion sequences (e.g., body and hands in 3D) optionally conditioned on text, activity context, or past motion. The assets here focus on kinematic inputs/outputs (body, hands, gaze) from ESK‑30 to enable training, fine‑tuning, and evaluation of motion generators.
- Source dataset: ESK‑30 with densely annotated activities and synchronized 3D kinematics
- Modalities typically used: body pose, hand pose, and gaze
- Typical tasks: unconditional/conditional motion synthesis, continuation, and in‑betweening
For the full dataset description and licensing, please refer to the paper (cited below) and the upstream ESK‑30 repository.
## 📦 What’s in this folder
This directory contains split archives for motion data and (optionally) pretrained checkpoints:
- `motion_data.z01`, `motion_data.z02`, …: multipart archive with motion training/eval data
- `evaluators.zip`: archive with evaluator models used for benchmarking
- `checkpoints.z01`, … (optional): multipart archive with example pretrained weights
First reconstruct and unzip these archives locally; you should then see the following layout:
```text
ESK_motion_generation
├── motion_data
│   ├── new_joint_vecs
│   └── holo_images
├── evaluators
│   ├── Comp_v6_KLD005
│   ├── Decomp_SP001_SM001_H512
│   └── text_mot_match_[TEXT_TYPE]
├── checkpoints
│   ├── fullbody_[TOKENIZER]
│   ├── fullbody_[BASELINE]
│   ├── fullbody_image_[TOKENIZER]
│   └── fullbody_image_[BASELINE]
└── README.md
```
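As a quick sanity check after extraction, the per-sequence motion features in `motion_data/new_joint_vecs` can be loaded as below. This is a minimal sketch that assumes the files are NumPy `.npy` arrays of shape `(num_frames, feat_dim)`, following the HumanML3D-style `new_joint_vecs` convention; verify the exact format against the extracted files.

```python
from pathlib import Path

import numpy as np


def load_motion(path: Path) -> np.ndarray:
    """Load one per-sequence motion feature file (assumed .npy).

    Assumption: each file holds a 2-D array with one feature vector
    per frame, i.e. shape (num_frames, feat_dim).
    """
    motion = np.load(path)
    assert motion.ndim == 2, "expected one feature vector per frame"
    return motion.astype(np.float32)


# Example usage (paths are illustrative):
# for f in sorted(Path("motion_data/new_joint_vecs").glob("*.npy")):
#     motion = load_motion(f)
```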
## 🔓 Extract the archives (Linux)
```bash
# Reconstruct and extract motion data
cat motion_data.z* > motion_data.zip
unzip motion_data.zip -d motion_data

# Extract evaluators
unzip evaluators.zip -d evaluators

# (Optional) Reconstruct and extract pretrained checkpoints
cat checkpoints.z* > checkpoints.zip
unzip checkpoints.zip -d checkpoints
```
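On systems without `cat` (e.g. Windows), the same byte-level reconstruction can be done with a short Python helper. This is a sketch under the assumption that the parts are named `<prefix>.z01`, `<prefix>.z02`, … as above; `reconstruct` is a hypothetical helper name, not part of the dataset tooling.

```python
from pathlib import Path


def reconstruct(prefix: str, out: str, workdir: str = ".") -> None:
    """Concatenate split-archive parts (prefix.z01, prefix.z02, ...) in order.

    The glob pattern matches numbered parts only, so an already
    reconstructed <prefix>.zip in the same directory is not re-read.
    """
    parts = sorted(Path(workdir).glob(f"{prefix}.z[0-9]*"))
    if not parts:
        raise FileNotFoundError(f"no parts matching {prefix}.z* in {workdir}")
    with open(Path(workdir) / out, "wb") as dst:
        for part in parts:
            dst.write(part.read_bytes())


# Example usage:
# reconstruct("motion_data", "motion_data.zip")
```

The reconstructed `motion_data.zip` can then be extracted with any standard zip tool.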
## 🌟 Citations
Please cite our work:
```bibtex
@misc{bonnetto2025epflsmartkitchen,
  title={EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models},
  author={Andy Bonnetto and Haozhe Qi and Franklin Leong and Matea Tashkovska and Mahdi Rad and Solaiman Shokur and Friedhelm Hummel and Silvestro Micera and Marc Pollefeys and Alexander Mathis},
  year={2025},
  eprint={2506.01608},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.01608},
}
```
## ❤️ Acknowledgments

This work was funded by EPFL, Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We thank the Brain Mind Institute for hardware support and the Neuro‑X Institute for services.