Paper | GitHub

🍳 EPFL-Smart-Kitchen: Action segmentation benchmark

📚 Introduction

Given an untrimmed video, action segmentation requires the model to predict one or multiple action classes for every frame. Given the absence of popular (and comprehensive) action segmentation benchmarks based on 3D pose, we built an action segmentation benchmark that compares the impact of different input data (body, hands, eyes, video features). One might expect that actions such as moving through the kitchen are better predicted from the body, while motions like cutting require hand pose keypoints. We therefore used combinations of body pose, hand poses, and eye gaze to form the input, as they are computationally more efficient than deep visual features. We additionally compared the performance when using video features from VideoMAE as input. For this, we use DLC2Action, a Python toolbox that standardizes action segmentation experiments.

📋 Content

This dataset contains data from the EPFL-Smart-Kitchen-30 dataset formatted for performing action segmentation with DLC2Action. You can find the original dataset on Zenodo.

The dataset is organized as follows:

ESK_action_segmentation
├── D2A_converted_label_activity
│   ├── [PARTICIPANT_ID]_[SESSION_ID]_labels.pickle
│   └── ...
├── D2A_converted_label_nouns
│   ├── [PARTICIPANT_ID]_[SESSION_ID]_labels.pickle
│   └── ...
├── D2A_converted_label_verbs
│   ├── [PARTICIPANT_ID]_[SESSION_ID]_labels.pickle
│   └── ...
├── D2A_converted_pose_norm
│   ├── [PARTICIPANT_ID]_[SESSION_ID]_[MODALITY].h5
│   └── ...
└── README.md

Where:

  • PARTICIPANT_ID: The participant identifier, in the format YH20XX
  • SESSION_ID: The date and time when the data was collected, in the format "YYYY_MM_DD_hh_mm_ss"
  • MODALITY: The modalities included in the file (body_hand_eyes_norm, only_videomae, holo_hand_pose_norm). holo_hand corresponds to the 3D hand pose estimation from the HoloLens 2 device. A small inspection sketch follows this list.
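
The sketch below is one quick way to inspect a pose file. The path is illustrative (one file from D2A_converted_pose_norm), and since the internal HDF5 layout is not documented in this card, the script simply walks the file with h5py and prints every group and dataset it finds.

import h5py

# Illustrative path: one pose file from D2A_converted_pose_norm (adjust to your local copy).
pose_file = "D2A_converted_pose_norm/YH2002_2023_12_04_10_15_23_body_hand_eye_pose_norm.h5"

def show(name, obj):
    # Print dataset shapes and dtypes; groups are printed by name only.
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    else:
        print(f"{name}/")

with h5py.File(pose_file, "r") as h5:
    h5.visititems(show)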

📊 Data format

The labels are given as a tuple with the following structure: (Information dictionary, List of label names, List of individuals, Annotations)

Information dictionary (dict): {"datetime": "%y-%m-%d %t", "video_file": [video_name.mp4]}

List of label names (list): List with the names of the labels, in the same order as in the annotations.

List of individuals (list): List with the names of the individuals, in the same order as in the annotations.

Annotations (list(list(numpy array))): The annotations are organized as time segments for each individual and for each label. The time segments are numpy arrays (dtype = float64). Each segment is composed of 3 elements: [start_frame, end_frame, confusion]. Confusion ranges from 0 (absolutely confident that a given action is happening) to 1 (not confident that a given action is happening).
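
For concreteness, here is a minimal sketch that loads one label file and unpacks this tuple. The file path is illustrative, and the nesting of the annotations (outer list per individual, inner list per label) is an assumption based on the description above.

import pickle

# Illustrative path: one label file from D2A_converted_label_verbs (adjust to your local copy).
label_file = "D2A_converted_label_verbs/YH2002_2023_12_04_10_15_23_labels.pickle"

with open(label_file, "rb") as f:
    info, label_names, individuals, annotations = pickle.load(f)

print(info)          # information dictionary (datetime, video file)
print(label_names)   # label names, same order as in the annotations
print(individuals)   # individual names, same order as in the annotations

# Assumed nesting: annotations[individual][label] is a list of float64 numpy arrays,
# each holding [start_frame, end_frame, confusion].
for i, individual in enumerate(individuals):
    for j, label in enumerate(label_names):
        for start, end, confusion in annotations[i][j]:
            print(f"{individual} | {label}: frames {start:.0f}-{end:.0f}, confusion {confusion:.2f}")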

🌈 Usage

The training and evaluation code is available in the GitHub repository; in particular, please visit the related section. The repository contains details on the dataset splits, on loading the dataset with DLC2Action, and on the other benchmarks.
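
If you prefer to fetch the files directly from the Hub before loading them with DLC2Action, a minimal sketch with huggingface_hub is shown below; the allow_patterns values are illustrative and can be omitted to download the full repository.

from huggingface_hub import snapshot_download

# Download (part of) the dataset repository; returns the path of the local snapshot.
local_dir = snapshot_download(
    repo_id="amathislab/ESK_action_segmentation",
    repo_type="dataset",
    allow_patterns=["D2A_converted_label_verbs/*", "D2A_converted_pose_norm/*"],  # illustrative subset
)
print(local_dir)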

Evaluation results

🌟 Citations

Please cite our work!

@misc{bonnetto2025epflsmartkitchen,
      title={EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models}, 
      author={Andy Bonnetto and Haozhe Qi and Franklin Leong and Matea Tashkovska and Mahdi Rad and Solaiman Shokur and Friedhelm Hummel and Silvestro Micera and Marc Pollefeys and Alexander Mathis},
      year={2025},
      eprint={2506.01608},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.01608}, 
}
@article{kozlova2025DLC2Action,
    author = {Kozlova, Elizaveta and Bonnetto, Andy and Mathis, Alexander},
    title = {DLC2Action: A Deep Learning-based Toolbox for Automated Behavior Segmentation},
    elocation-id = {2025.09.27.678941},
    year = {2025},
    doi = {10.1101/2025.09.27.678941},
    publisher = {Cold Spring Harbor Laboratory},
    URL = {https://www.biorxiv.org/content/early/2025/09/28/2025.09.27.678941},
    eprint = {https://www.biorxiv.org/content/early/2025/09/28/2025.09.27.678941.full.pdf},
    journal = {bioRxiv}
}

❤️ Acknowledgments

Our work was funded by EPFL, Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.
