![MMHA-28 Overview](drawing.png)
# MMHAR-28: Human Action Recognition Across RGB, Depth, Thermal, and Event Modalities
## Dataset Summary
MMHAR-28 is a multimodal dataset for human action recognition, designed for action classification across four sensing modalities:
- RGB
- Depth
- Thermal
- Event
The MMHAR-28 dataset contains 28 human action classes collected in two sessions. Session 1 focuses on single-person sports and exercise actions, while Session 2 focuses on two-person interaction activities.
## Dataset Structure
The dataset is organized into predefined splits:
- `train`
- `val`
- `test`
Samples are stored by modality. Typical modality folders include:
- `rgb_images`
- `depth_images`
- `thermal`
- `event-streams`
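As a minimal sketch, samples can be enumerated by walking a split directory and keeping only the modality folders named above. The helper name and the exact nesting between the split directory and the modality folders (session/subject levels) are our assumptions, not part of any official dataset tooling:

```python
from pathlib import Path

# Modality folder names as listed in the dataset structure.
MODALITY_DIRS = {"rgb_images", "depth_images", "thermal", "event-streams"}

def list_samples(root, split):
    """Collect (modality, path) pairs for all modality folders
    found anywhere under root/split. Assumes the nesting between
    the split and the modality folders can vary."""
    split_dir = Path(root) / split
    samples = []
    for path in sorted(split_dir.rglob("*")):
        if path.is_dir() and path.name in MODALITY_DIRS:
            samples.append((path.name, str(path)))
    return samples
```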
## Data Instances
Each instance corresponds to one action sample and one label from the 28 action classes.
Example annotation format:
```text
path/to/sample,label
```
Example paths:
```text
data/train/session_1/sub_18/d_rgb/28/rgb_images,13
data/train/session_1/sub_7/d_rgb/26/depth_images,12
data/train/session_1/sub_33/thermal/9_1_0,8
data/train/session_1/sub_55/event-streams/15,7
```
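A minimal sketch of parsing one such annotation line into a sample path and an integer class label. The function name is ours for illustration; splitting on the last comma keeps any commas inside the path intact:

```python
def parse_annotation(line):
    """Split a 'path,label' annotation line into (path, label),
    where label is the numeric index of one of the 28 classes."""
    path, label = line.rstrip().rsplit(",", 1)
    return path, int(label)
```

For example, `parse_annotation("data/train/session_1/sub_55/event-streams/15,7")` yields the path string and the label `7`.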
## Data Splits
The dataset provides predefined splits for:
- training
- validation
- testing
## Citation
If you use the MMHAR-28 dataset in your research, please cite our paper:
```bibtex
@ARTICLE{11447325,
  author={Rakhimzhanova, Tomiris and Kuzdeuov, Askat and Muratov, Artur and Varol, Huseyin Atakan},
  journal={IEEE Transactions on Biometrics, Behavior, and Identity Science},
  title={MMHAR-28: Human Action Recognition Across RGB, Thermal, Depth, and Event Modalities},
  year={2026},
  volume={},
  number={},
  pages={1-1},
  keywords={Videos;Cameras;Event detection;Thermal sensors;Sensors;Web sites;Video on demand;Three-dimensional displays;Software;Lighting;Human action recognition (HAR);multimodal learning;RGB;depth;thermal;event-based camera;multimodal dataset;video classification;deep learning},
  doi={10.1109/TBIOM.2026.3675639}
}
```