---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
tags:
- behavior
- motion
- human
- egocentric
- exocentric
- 3d_pose
- hand_pose
- body_pose
- language
- esk
- action_recognition
- classification
pretty_name: ESK-ActionRec
---

[Paper](https://arxiv.org/abs/2506.01608) | [GitHub](https://github.com/amathislab/EPFL-Smart-Kitchen)

# 🍳 EPFL-Smart-Kitchen: Action recognition benchmark

## Introduction

Given a trimmed clip (or a specified temporal window within an untrimmed video), action recognition requires the model to predict a single action label for that segment. Building on the same EPFL-Smart-Kitchen-30 data and modalities as our segmentation benchmark, we provide a recognition-oriented setup to compare inputs from 3D body pose, hand pose, and eye gaze, as well as deep visual features (e.g., VideoMAE).

Action recognition with kinematic inputs is lightweight and fast, while still capturing rich information about the task. We therefore offer recognition-ready annotations and splits that pair these modalities with clip-level labels. This makes it easy to evaluate the effect of different inputs (body vs. hand vs. gaze vs. video) on standard classification metrics.

## Content

This repository contains materials for performing action recognition on EPFL-Smart-Kitchen-30. You can find the original dataset on [Zenodo](https://zenodo.org/records/15535461).

Current contents of this folder:

```
ESK_action_recognition
├── benchmark_data.z01
├── benchmark_data.z02
├── benchmark_data.z03
├── benchmark_data.z04
├── benchmark_data.z05
├── benchmark_data.z06
├── benchmark_data.zip    # main part (open this to extract)
├── checkpoints.z01
├── checkpoints.z02
├── checkpoints.z03
├── checkpoints.z04
├── checkpoints.z05
├── checkpoints.zip       # main part (open this to extract)
└── README.md
```

How to unzip (Linux):

1) Ensure all parts are in the same directory (as above).
2) Use either `unzip` or 7-Zip, starting from the `.zip` file (not the `.z01`).

- With `unzip` (preinstalled on many systems):
```bash
unzip benchmark_data.zip
unzip checkpoints.zip
```

- With 7-Zip (if you prefer):
```bash
# install if needed (Debian/Ubuntu)
sudo apt-get update && sudo apt-get install -y p7zip-full

7z x benchmark_data.zip
7z x checkpoints.zip
```

Notes:

- Don't try to extract the `.z01` files directly; always open the corresponding `.zip` file.
- If extraction fails, verify that all parts are fully downloaded and present.

After extracting the files, you will get the following folders:

```
ESK_action_recognition
├── Benchmark_data
│   └── [PARTICIPANT_ID]/[SESSION_ID]
├── Annotations
│   └── [SPLIT].csv
├── Hand_videos
│   └── [SPLIT]
├── pose_data
│   └── [PARTICIPANT_ID]/[SESSION_ID]
├── checkpoints
│   ├── [INPUT_TYPE]_experiment
│   └── [INPUT_TYPE]_nopretrain_experiment
└── README.md
```
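
For orientation, here is a minimal Python sketch of how one might list the participants and sessions in the extracted `pose_data` folder (the root path is an assumption; adapt it to wherever you extracted the archives):

```python
from pathlib import Path

# Assumed location of the extracted benchmark (adjust as needed).
root = Path("ESK_action_recognition")

# pose_data is organized as [PARTICIPANT_ID]/[SESSION_ID].
for participant_dir in sorted((root / "pose_data").iterdir()):
    if not participant_dir.is_dir():
        continue
    for session_dir in sorted(participant_dir.iterdir()):
        print(participant_dir.name, session_dir.name)
```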
|
|
|
|
|
## Data format

Clip-level labels are provided in CSV form for straightforward use with classification training code. A minimal `activity_labels.csv` contains the following columns:

- `clip_id`: unique identifier for the clip
- `participant_id`: e.g., YH20XX
- `session_id`: e.g., YYYY_MM_DD_hh_mm_ss
- `start_frame`, `end_frame`: temporal window within the session (omitted if clips are pre-trimmed files)
- `label_id`: integer-coded action class
- `label_name`: human-readable action name

Splits are text/CSV files listing `clip_id`s or clip paths for train/val/test.
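
As a minimal sketch of how these files could be combined for training, assuming the labels CSV and split files live in the `Annotations` folder and that split files contain a `clip_id` column (names and paths are assumptions; adapt them to the actual layout):

```python
import pandas as pd

# Clip-level labels with the columns described above (assumed path).
labels = pd.read_csv("Annotations/activity_labels.csv")

# A split file listing the clips used for training (assumed name/format).
train_split = pd.read_csv("Annotations/train.csv")

# Keep only the labelled clips that belong to the training split.
train_labels = labels[labels["clip_id"].isin(train_split["clip_id"])]
print(train_labels[["clip_id", "label_id", "label_name"]].head())
```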
|
|
|
|
|
|
|
|
## Usage

Training and evaluation code is available in the [GitHub repository](https://github.com/amathislab/EPFL-Smart-Kitchen). Please see the Action Recognition section for:

- dataset splits and loading utilities for clip-level labels
- using 3D kinematic features (body/hand/eyes) and/or VideoMAE features
- training scripts for classification models (e.g., MLP/Transformer over features)
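
The scripts in the repository cover the full pipeline; as a rough illustration of the kind of model meant by "MLP over features", here is a self-contained PyTorch sketch with placeholder dimensions (feature size and number of classes depend on the chosen modality and label set):

```python
import torch
import torch.nn as nn

FEATURE_DIM = 512   # placeholder: size of the pooled clip-level feature vector
NUM_CLASSES = 20    # placeholder: number of action classes in the label set

class ActionMLP(nn.Module):
    """Small MLP classifier over clip-level feature vectors
    (e.g., temporally pooled 3D kinematics or VideoMAE features)."""

    def __init__(self, in_dim: int, n_classes: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = ActionMLP(FEATURE_DIM, NUM_CLASSES)
features = torch.randn(8, FEATURE_DIM)            # a batch of clip-level features
targets = torch.randint(0, NUM_CLASSES, (8,))     # integer-coded labels (label_id)
loss = nn.CrossEntropyLoss()(model(features), targets)
loss.backward()
```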
|
|
|
|
|
|
|
|
## Citations

Please cite our work!

```
@misc{bonnetto2025epflsmartkitchen,
      title={EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models},
      author={Andy Bonnetto and Haozhe Qi and Franklin Leong and Matea Tashkovska and Mahdi Rad and Solaiman Shokur and Friedhelm Hummel and Silvestro Micera and Marc Pollefeys and Alexander Mathis},
      year={2025},
      eprint={2506.01608},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.01608},
}
```
|
|
|
|
|
## ❤️ Acknowledgments

Our work was funded by EPFL, the Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.