AndyBonnetto committed
Commit 26f4609 · verified · 1 Parent(s): 20ba10f

Remove task categories

Files changed (1): README.md (+106, −108)

README.md CHANGED
@@ -7,2 +6,0 @@
- task_categories:
- - action-segmentation

---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
tags:
- behavior
- motion
- human
- egocentric
- exocentric
- 3d_pose
- hand_pose
- body_pose
- language
- esk
- dlc2action
pretty_name: ESK-ActionSeg
---

[Paper](https://arxiv.org/abs/2506.01608) | [GitHub](https://github.com/amathislab/EPFL-Smart-Kitchen)

# 🍳 EPFL-Smart-Kitchen: Action segmentation benchmark

## 📚 Introduction
Given an untrimmed video, action segmentation requires a model to predict one or more action classes for every frame. Given the absence of popular (and comprehensive) action segmentation benchmarks based on 3D pose, we built an action segmentation benchmark that compares the impact of different input data (body, hands, eyes, video features). One might expect that actions such as moving through the kitchen are better predicted from body pose, while motions like cutting require hand pose keypoints. Therefore, we used combinations of body pose, hand poses, and eye gaze to form the input, as they are computationally more efficient than deep visual features. We additionally compared the performance when using video features from VideoMAE as input. For this, we use DLC2Action, a Python toolbox that standardizes action segmentation experiments.

![](media/illustration.png)

## 📋 Content
This dataset contains data from the EPFL-Smart-Kitchen-30 dataset, formatted for performing action segmentation with DLC2Action. You can find the original dataset on [Zenodo](https://zenodo.org/records/15535461).

The dataset is organized as follows:
```
ESK_action_segmentation
├── D2A_converted_label_activity
│   ├── [PARTICIPANT_ID]_[SESSION_ID]_labels.pickle
│   └── ...
├── D2A_converted_label_nouns
│   ├── [PARTICIPANT_ID]_[SESSION_ID]_labels.pickle
│   └── ...
├── D2A_converted_label_verbs
│   ├── [PARTICIPANT_ID]_[SESSION_ID]_labels.pickle
│   └── ...
├── D2A_converted_label_pose_norm
│   ├── ...
│   └── [PARTICIPANT_ID]_[SESSION_ID]_[MODALITY].h5
└── README.md
```

Where:
* **PARTICIPANT_ID**: the participant identifier, in the format YH20XX
* **SESSION_ID**: the date when the data was collected, in the format "YYYY_MM_DD_hh_mm_ss"
* **MODALITY**: the modalities included in the file (`body_hand_eyes_norm`, `only_videomae`, `holo_hand_pose_norm`). `holo_hand` corresponds to the 3D pose estimation of the hands from the HoloLens 2 device (see the inspection sketch below).
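
The `[MODALITY].h5` files are HDF5 files. As a minimal inspection sketch with `h5py` (the file name below is hypothetical, and no assumptions are made about the internal group or dataset names, which are simply enumerated):

```
import h5py

# Hypothetical file name following the naming pattern above; point this at a real file.
path = "D2A_converted_label_pose_norm/YH2001_2022_01_01_12_00_00_body_hand_eyes_norm.h5"

with h5py.File(path, "r") as f:
    # Walk the file and print every dataset with its shape and dtype,
    # since the exact internal layout is best discovered by inspection.
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    f.visititems(show)
```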

## 📊 Data format
The labels are given as a tuple with the following structure:
(Information dictionary, List of label names, List of individuals, Annotations)

`Information dictionary (dict)`: {"datetime": "%y-%m-%d %t", "video_file": "[video_name.mp4]"}

`List of label names (list)`: list with the names of the labels, in the same order as in the annotations.

`List of individuals (list)`: list with the names of the individuals, in the same order as in the annotations.

`Annotations (list(list(numpy array)))`: the annotations are organized as time segments for each individual and for each label. The time segments are numpy arrays (dtype = float64). Each segment is composed of 3 elements: [start_frame, end_frame, confusion]. Confusion ranges from 0 (absolutely confident that a given action is happening) to 1 (not confident that a given action is happening).
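
As a minimal sketch of reading one of these label files (the file name is hypothetical, the outer/inner ordering of the annotation lists is an assumption since the description above does not pin it down, and the 0.5 confusion threshold is an arbitrary example choice):

```
import pickle
import numpy as np

# Hypothetical file name following the naming pattern above; point this at a real file.
path = "D2A_converted_label_verbs/YH2001_2022_01_01_12_00_00_labels.pickle"

with open(path, "rb") as f:
    info, label_names, individuals, annotations = pickle.load(f)

print(info, label_names, individuals)

# Illustrative conversion of segments into dense per-frame labels, assuming
# annotations[individual][label] ordering (verify against the actual files).
per_label = annotations[0]  # segments for the first individual
n_frames = int(max((seg[1] for segs in per_label for seg in segs), default=0))
dense = np.zeros((len(label_names), n_frames), dtype=bool)
for label_idx, segs in enumerate(per_label):
    for start, end, confusion in segs:
        if confusion < 0.5:  # keep only fairly confident segments
            dense[label_idx, int(start):int(end)] = True
```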

## 🌈 Usage
The training and evaluation code is available in the [GitHub repository](https://github.com/amathislab/EPFL-Smart-Kitchen); in particular, please visit the related section. The repository contains details on the dataset split, on loading the dataset with DLC2Action, and on the other benchmarks.
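
For orientation only, a rough sketch of what a DLC2Action project setup might look like; the class, keyword, and method names below are assumptions based on DLC2Action's general quickstart pattern, and the correct data/annotation type strings and paths must be taken from the EPFL-Smart-Kitchen repository and the DLC2Action documentation:

```
# Sketch only: Project, its keyword arguments, and run_episode are assumed
# from DLC2Action's general usage pattern -- verify against the DLC2Action
# documentation and the EPFL-Smart-Kitchen repository before use.
from dlc2action.project import Project

project = Project(
    "esk_action_segmentation",  # hypothetical project name
    data_type="...",            # fill in the loader matching the .h5 pose files
    annotation_type="...",      # fill in the loader matching the label pickles
    data_path="ESK_action_segmentation/D2A_converted_label_pose_norm",
    annotation_path="ESK_action_segmentation/D2A_converted_label_verbs",
)
project.run_episode("baseline")  # hypothetical episode name
```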

## 📈 Evaluation results

![](media/results.png)

## 🌟 Citations

Please cite our work!
```
@misc{bonnetto2025epflsmartkitchen,
  title={EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models},
  author={Andy Bonnetto and Haozhe Qi and Franklin Leong and Matea Tashkovska and Mahdi Rad and Solaiman Shokur and Friedhelm Hummel and Silvestro Micera and Marc Pollefeys and Alexander Mathis},
  year={2025},
  eprint={2506.01608},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.01608},
}
```

```
@article{kozlova2025DLC2Action,
  author = {Kozlova, Elizaveta and Bonnetto, Andy and Mathis, Alexander},
  title = {DLC2Action: A Deep Learning-based Toolbox for Automated Behavior Segmentation},
  elocation-id = {2025.09.27.678941},
  year = {2025},
  doi = {10.1101/2025.09.27.678941},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2025/09/28/2025.09.27.678941},
  eprint = {https://www.biorxiv.org/content/early/2025/09/28/2025.09.27.678941.full.pdf},
  journal = {bioRxiv}
}
```

## ❤️ Acknowledgments
Our work was funded by EPFL, a Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.