# CUHK-S: A Privacy-Preserving Multimodal Dataset for Human Action Recognition
## Dataset Description
CUHK-S is a privacy-preserving subset of the CUHK-X dataset, a large-scale multimodal benchmark for Human Action Recognition (HAR), Understanding (HAU), and Reasoning (HARn). CUHK-X was accepted at MobiSys 2026.
Compared to the full CUHK-X dataset, CUHK-S:
- Removes all RGB video to prevent facial identification
- Downscales all visual modalities to 320 × 240
- Selects 18 out of 30 participants while preserving full action coverage (40 categories)
## Dataset Summary
| Attribute | Value |
|---|---|
| Participants | 18 (selected from 30 in CUHK-X) |
| Action Categories | 40 |
| Modalities | 6 (Depth, IR, Thermal, IMU, Radar, Skeleton) |
| Visual Resolution | 320 × 240 |
| Total Size | ~146 GB (18 zip files, one per participant) |
## Modalities
| Modality | Format | Description |
|---|---|---|
| Depth | PNG (color) | Colorized depth maps from Vzense NYX 650 |
| IR | PNG | Infrared images, robust to lighting changes |
| Thermal | PNG | Heat signature from thermal camera |
| IMU | CSV | 5-sensor accelerometer/gyroscope/magnetometer |
| Radar | CSV | mmWave radar point cloud (TI Radar) |
| Skeleton | JSON/CSV | 3D joint positions from pose estimation |
Note: RGB video is intentionally excluded from CUHK-S to protect participant privacy.
## Dataset Structure
Each participant's data is packaged as a zip file: `CUHK-S_userN-userN.zip`
```
CUHK-S/
├── HAR/                       # Human Action Recognition task
│   └── data/
│       ├── Depth_Color/       # Colorized depth frames (.png)
│       ├── IR/                # Infrared frames (.png)
│       ├── Thermal/           # Thermal imaging frames (.png)
│       ├── Skeleton/          # Skeleton pose data
│       │   └── {action}/{user}/{session}/
│       │       ├── predictions/      # Keypoint JSON (.json) + overlay images (.jpg)
│       │       └── visualizations/
│       ├── IMU/               # IMU sensor data (CSV)
│       │   └── {action}/{user}/{session}/
│       │       ├── up(LA+RA+C).csv   # Upper-body IMU (Left Arm + Right Arm + Chest)
│       │       └── down(LL+RL).csv   # Lower-body IMU (Left Leg + Right Leg)
│       └── Radar/             # mmWave radar data (CSV)
│           └── {action}/{user}/{session}/
│               └── radar_output_T{timestamp}.csv
│
├── HAU/                       # Human Action Understanding task
│   └── data/
│       ├── Depth/             # Visual modality clips as .mp4 video
│       ├── IR/
│       └── Thermal/
│           └── {user}/{session}/
│               └── {Modality}.mp4
│
├── HARn/                      # Human Action next-step Reasoning task
│   └── data/
│       ├── Depth/             # Video clips as .mp4
│       └── IR/
│           └── {action}/{user}/{session}/
│               └── Depth.mp4
│
└── source_data/               # Raw source frames (with timestamps)
    └── data/
        ├── Depth_Color/       # Timestamped raw frames (.png)
        ├── IR/
        ├── Thermal/
        ├── Skeleton/
        ├── IMU/
        └── Radar/
            └── {user}/{session}/
                └── {Modality}_{timestamp}_{frameId}.png
```
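Given this layout, paths to individual recordings can be assembled programmatically. A minimal sketch, assuming the archive is extracted to a `CUHK-S/` root directory (the helper name is hypothetical; only the directory layout above is from the dataset):

```python
from pathlib import Path

# Hypothetical helper: build the HAR-task IMU file paths for one recording,
# following the directory layout documented above.
def har_imu_paths(root: str, action: str, user: str, session: str):
    base = Path(root) / "HAR" / "data" / "IMU" / action / user / session
    # Upper-body (Left Arm + Right Arm + Chest) and lower-body (Left Leg + Right Leg) CSVs
    return base / "up(LA+RA+C).csv", base / "down(LL+RL).csv"

up, down = har_imu_paths("CUHK-S", "10_Stir_drinks", "user1", "2-1-1")
print(up.as_posix())
# CUHK-S/HAR/data/IMU/10_Stir_drinks/user1/2-1-1/up(LA+RA+C).csv
```

The same pattern extends to the Radar, Skeleton, and visual modality subtrees by swapping the modality directory.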
Path naming convention:
| Level | Meaning | Example |
|---|---|---|
| `{action}` | Action category with numeric prefix | `10_Stir_drinks` |
| `{user}` | Participant ID | `user1` |
| `{session}` | Scene–Environment–Trial index | `2-1-1` (Scene 2, Indoor, Trial 1) |
- HAR: Single, well-defined actions organized by action category, for traditional classification tasks
- HAU: Sequential action clips organized by user/session, for temporal and contextual understanding
- HARn: Sequential action clips organized by action/user/session, for next-action reasoning
- source_data: Original raw frames with full timestamps, before any task-level processing
## IMU Sensor Layout
Five IMU sensors are placed on the body:
| Sensor | Position | Channels (per sensor) |
|---|---|---|
| WTLA | Left Arm | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTC | Chest | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTRA | Right Arm | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTRL | Right Leg | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
| WTLL | Left Leg | Acc(X/Y/Z), Gyro(X/Y/Z), Mag(X/Y/Z) |
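With three upper-body sensors (LA, RA, C) and nine channels each, an `up(LA+RA+C).csv` row carries 27 data columns. The exact header names are not documented here, so the ones below are assumptions for illustration; a stdlib-only reading sketch:

```python
import csv
import io

# Sketch: read rows from an upper-body IMU CSV into dicts keyed by header.
# Header names below (e.g. "WTLA_AccX") are ASSUMED, not confirmed by the
# dataset card; inspect a real up(LA+RA+C).csv before relying on them.
def read_imu_rows(fileobj):
    return list(csv.DictReader(fileobj))

# Synthetic two-row example standing in for a (truncated) up(LA+RA+C).csv
sample = io.StringIO(
    "WTLA_AccX,WTLA_AccY,WTLA_AccZ\n"
    "0.01,-0.98,0.12\n"
    "0.02,-0.97,0.11\n"
)
rows = read_imu_rows(sample)
print(len(rows), rows[0]["WTLA_AccX"])  # 2 0.01
```

`csv.DictReader` returns string values, so numeric channels should be cast to `float` before use.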
## Benchmarks & Tasks
| Task | Type | Metrics |
|---|---|---|
| Action Recognition | Classification | Accuracy, F1, Precision, Recall |
| Action Selection | Multiple Choice | Accuracy |
| Action Captioning | Generation | BLEU, METEOR |
| Emotion Analysis | Classification | Accuracy |
| Sequential Reordering | Ordering | Accuracy |
| Next Action Reasoning | Reasoning | Accuracy |
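The classification metrics listed above follow their standard definitions; a dependency-free sketch of accuracy and macro-averaged F1 (the example labels are illustrative, not from the dataset):

```python
# Accuracy: fraction of exact label matches.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Macro F1: per-class F1 averaged over all classes with equal weight,
# using the identity F1 = 2*TP / (2*TP + FP + FN).
def macro_f1(y_true, y_pred):
    scores = []
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)

y_true = ["walk", "walk", "stir", "stir"]
y_pred = ["walk", "stir", "stir", "stir"]
print(accuracy(y_true, y_pred))  # 0.75
```

Libraries such as scikit-learn provide equivalent implementations (`accuracy_score`, `f1_score(average="macro")`) for production use.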
## Citation
If you use CUHK-S in your research, please cite:
```bibtex
@inproceedings{jiang2026cuhkx,
  title={CUHK-X: A Large-Scale Multimodal Dataset and Benchmark for Human Action Recognition, Understanding and Reasoning},
  author={Jiang, Siyang and others},
  booktitle={Proceedings of ACM MobiSys},
  year={2026}
}
```
## Ethics & Privacy
We obtained approval from an Institutional Review Board (IRB) to conduct this study and collect data from human subjects.
Privacy measures in CUHK-S:
- No RGB video is included to prevent facial identification
- All visual modalities are downscaled to 320 × 240
- Participants are identified only by numeric IDs (e.g., user1, user2)
- No personally identifiable information is linked to individual records
- IMU, Radar, and Skeleton modalities do not capture visual appearance
## License
Code is released under the MIT License. The dataset is available for non-commercial research under a Data Use Agreement (DUA) and is not redistributable. Our derived annotations/splits are released under CC BY 4.0.
Note: This dataset is designed for research and educational purposes. Please ensure compliance with your institution's ethics guidelines when using human activity data.
## Contact
- Email: syjiang [AT] ie.cuhk.edu.hk
- Project Page: https://siyang-jiang.github.io/CUHK-X/
- Lab: CUHK AIoT Lab