![MMHAR-28 Overview](drawing.png)

# MMHAR-28: Human Action Recognition Across RGB, Depth, Thermal, and Event Modalities

## Dataset Summary

MMHAR-28 is a multimodal human action recognition dataset designed for action classification across four sensing modalities:

- RGB
- Depth
- Thermal
- Event

The dataset contains 28 human action classes collected in two sessions: Session 1 covers single-person sports and exercise actions, while Session 2 covers two-person interaction activities.


## Dataset Structure

The dataset is organized into predefined splits:

- `train`
- `val`
- `test`

Samples are stored by modality. Typical modality folders include:

- `rgb_images`
- `depth_images`
- `thermal`
- `event-streams`
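As a quick sanity check after downloading, the layout above can be scanned with a short script. This is a sketch under assumptions: the root path, the `data/` prefix, and the exact nesting are illustrative, and only the split and modality folder names come from the list above.

```python
from pathlib import Path

# Split and modality folder names as documented in the dataset card.
SPLITS = ["train", "val", "test"]
MODALITIES = ["rgb_images", "depth_images", "thermal", "event-streams"]

def count_modality_dirs(root):
    """Count directories named after each modality under each split.

    The root path and nesting depth are assumptions; rglob searches
    all session/subject subdirectories regardless of depth.
    """
    root = Path(root)
    counts = {}
    for split in SPLITS:
        split_dir = root / split
        for modality in MODALITIES:
            n = 0
            if split_dir.is_dir():
                n = sum(1 for p in split_dir.rglob(modality) if p.is_dir())
            counts[(split, modality)] = n
    return counts
```

Running this over an extracted copy gives a per-split, per-modality directory count, which makes missing or misnamed folders easy to spot.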

## Data Instances

Each instance corresponds to one action sample and one label from the 28 action classes.

Example annotation format:

```text
path/to/sample,label
```

Example paths:

```text
data/train/session_1/sub_18/d_rgb/28/rgb_images,13
data/train/session_1/sub_7/d_rgb/26/depth_images,12
data/train/session_1/sub_33/thermal/9_1_0,8
data/train/session_1/sub_55/event-streams/15,7
```
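Annotation lines in this `path,label` format can be parsed with the standard `csv` module. A minimal sketch, assuming one sample per line and an integer class label as in the examples above (the function name is illustrative):

```python
import csv
from pathlib import Path

def load_annotations(annotation_file):
    """Parse a `path,label` annotation file into (Path, int) pairs."""
    samples = []
    with open(annotation_file, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 2:
                continue  # skip blank or malformed lines
            path, label = row
            samples.append((Path(path), int(label)))
    return samples
```

The resulting list of `(path, label)` pairs can then be grouped by modality or fed directly into a dataset loader.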

## Data Splits

The dataset provides predefined splits for:

- training
- validation
- testing

## Citation

If you use the MMHAR-28 dataset in your research, please cite our paper:

```bibtex
@ARTICLE{11447325,
  author={Rakhimzhanova, Tomiris and Kuzdeuov, Askat and Muratov, Artur and Varol, Huseyin Atakan},
  journal={IEEE Transactions on Biometrics, Behavior, and Identity Science}, 
  title={MMHAR-28: Human Action Recognition Across RGB, Thermal, Depth, and Event Modalities}, 
  year={2026},
  volume={},
  number={},
  pages={1-1},
  keywords={Videos;Cameras;Event detection;Thermal sensors;Sensors;Web sites;Video on demand;Three-dimensional displays;Software;Lighting;Human action recognition (HAR);multimodal learning;RGB;depth;thermal;event-based camera;multimodal dataset;video classification;deep learning},
  doi={10.1109/TBIOM.2026.3675639}}

```