---
dataset_info:
- config_name: full
  features:
  - name: image
    dtype: image
  - name: actuated_angle
    struct:
    - name: '0'
      dtype: int32
    - name: '1'
      dtype: int32
  splits:
  - name: train
    num_bytes: 5561715.0
    num_examples: 5
  download_size: 5564574
  dataset_size: 5561715.0
- config_name: small
  features:
  - name: image
    dtype: image
  - name: actuated_angle
    struct:
    - name: '0'
      dtype: int32
    - name: '1'
      dtype: int32
  splits:
  - name: train
    num_bytes: 2219188.0
    num_examples: 2
  download_size: 2221784
  dataset_size: 2219188.0
configs:
- config_name: full
  data_files:
  - split: train
    path: full/train-*
- config_name: small
  data_files:
  - split: train
    path: small/train-*
  default: true
---

# 🧠 Open Humanoid Actuated Face Dataset

<p align="center">
  <img src="https://huggingface.co/datasets/iamirulofficial/Test2/resolve/main/imgesFace.png" alt="Sample Face Image" width="360"/>
</p>

## Dataset Summary

The **Open Humanoid Actuated Face Dataset** is designed for researchers working on facial‑actuation control, robotics, reinforcement learning, and human–computer interaction.

* **Origin** – collected during a reinforcement‑learning (RL) training loop whose objective was to reproduce human facial expressions.
* **Platform** – a modified **i2Head InMoov** humanoid head with a silicone skin.
* **Control** – **16 actuators** driving facial features and eyeballs.
* **Pairing** – each example contains the raw RGB image **and** the exact actuator angles that produced it.

---

## Dataset Structure

| Field            | Type     | Description                                                   |
|------------------|----------|---------------------------------------------------------------|
| `image`          | `Image`  | RGB capture of the humanoid face (resolution **[FILL_RES]**). |
| `actuated_angle` | `struct` | 16 integer servo angles, keyed `"0"` through `"15"`.          |
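Because the angles are stored as a struct keyed by string indices rather than as a list, downstream code typically wants them as an ordered vector. A minimal sketch (the sample record below is illustrative, not taken from the dataset):

```python
# Convert an `actuated_angle` struct (string-keyed dict) into an
# ordered 16-element vector, index 0 .. 15.
def angles_to_vector(actuated_angle: dict) -> list:
    return [actuated_angle[str(i)] for i in range(16)]

# Illustrative example record (values are made up):
example = {str(i): 90 for i in range(16)}
example["14"] = 40  # e.g. jaw opened

vec = angles_to_vector(example)
print(vec)  # 16 ints in index order; position 14 is 40
```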

### Actuator Index Reference

| Idx | Actuator                 | Idx | Actuator             |
|:---:|--------------------------|:---:|----------------------|
| 00  | Cheek – Left             | 08  | Eyelid Upper – Right |
| 01  | Cheek – Right            | 09  | Eyelid Lower – Right |
| 02  | Eyeball Sideways – Left  | 10  | Forehead – Right     |
| 03  | Eyeball Up/Down – Left   | 11  | Forehead – Left      |
| 04  | Eyelid Upper – Left      | 12  | Upper Nose           |
| 05  | Eyelid Lower – Left      | 13  | Eyebrow – Right      |
| 06  | Eyeball Up/Down – Right  | 14  | Jaw                  |
| 07  | Eyeball Sideways – Right | 15  | Eyebrow – Left       |
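For programmatic use, the index table above can be captured as a lookup table (names are ASCII transcriptions of the table; the constant and helper names are our own, not part of the dataset):

```python
# Actuator index -> name, transcribed from the reference table above.
ACTUATOR_NAMES = {
    0: "Cheek - Left",             8: "Eyelid Upper - Right",
    1: "Cheek - Right",            9: "Eyelid Lower - Right",
    2: "Eyeball Sideways - Left", 10: "Forehead - Right",
    3: "Eyeball Up/Down - Left",  11: "Forehead - Left",
    4: "Eyelid Upper - Left",     12: "Upper Nose",
    5: "Eyelid Lower - Left",     13: "Eyebrow - Right",
    6: "Eyeball Up/Down - Right", 14: "Jaw",
    7: "Eyeball Sideways - Right", 15: "Eyebrow - Left",
}

def describe(angles: dict) -> dict:
    """Map a string-keyed `actuated_angle` struct to named angles."""
    return {ACTUATOR_NAMES[int(k)]: v for k, v in angles.items()}

print(describe({"14": 40}))  # {'Jaw': 40}
```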

---

### Actuator Mapping Images *(placeholders)*

| Full‑Face Map | Eye‑Only Map |
|:-------------:|:------------:|
| <img src="https://huggingface.co/datasets/iamirulofficial/Test2/resolve/main/Screenshot%202025-05-02%20at%204.04.28%E2%80%AFPM.png" width="50%"/> | <img src="https://huggingface.co/datasets/iamirulofficial/Test2/resolve/main/Screenshot%202025-05-02%20at%204.03.56%E2%80%AFPM.png" width="50%"/> |

---

## Dataset Statistics

| Split             | Samples          | Size     |
|-------------------|------------------|----------|
| **Train (full)**  | **[FILL_TOTAL]** | ≈ 105 GB |
| **Train (small)** | 2                | ≈ 2.2 MB |

*(Numbers shown are for the preview release – update as you add data.)*

---

## Usage Example

```python
from datasets import load_dataset, Image

# Load the small subset
ds = load_dataset("iamirulofficial/OpenHumnoidDataset", name="small", split="train")
ds = ds.cast_column("image", Image())  # ensure image bytes decode to PIL.Image

img = ds[0]["image"]              # PIL.Image
angles = ds[0]["actuated_angle"]  # {'0': 90, '1': 20, ...}
img.show()
print(angles)
```

> **Tip**
> For the full corpus use `name="full"` (may require `streaming=True` once the dataset grows).

---

## Data Collection & RL Setup

A detailed description of the RL pipeline, reward design, and actuator hardware will appear in our upcoming paper (in preparation, 2025). Briefly:

1. **Vision module** extracts target expression keypoints from live human video.
2. **Policy network** predicts 16 actuator set‑points.
3. **Real‑time reward** computes expression similarity + smoothness penalties.
4. Images & angle vectors are logged every *N* steps, forming this dataset.
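The loop above can be sketched as follows. Every function here (`extract_keypoints`, `policy`, `reward`, `capture_image`) is a stub standing in for the real vision model, policy network, and head hardware, not actual project code:

```python
# Hypothetical sketch of the RL logging loop described above.

def extract_keypoints(frame):   # 1. vision module (stub)
    return [0.0] * 68

def policy(keypoints):          # 2. policy network (stub): 16 set-points
    return {str(i): 90 for i in range(16)}

def reward(keypoints, angles):  # 3. similarity + smoothness (stub)
    return 0.0

def capture_image(step):        # camera read-back (stub)
    return f"frame_{step}"

LOG_EVERY_N = 2
dataset_log = []

for step in range(6):           # stand-in for the RL training loop
    kp = extract_keypoints(frame=None)
    angles = policy(kp)
    r = reward(kp, angles)
    if step % LOG_EVERY_N == 0:  # 4. log image + angle vector every N steps
        dataset_log.append({"image": capture_image(step),
                            "actuated_angle": angles})

print(len(dataset_log))  # 3 logged examples from 6 steps with N = 2
```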

---

## License

Released under the **MIT License** – free for commercial and non‑commercial use.

---

## Citation

```bibtex
@misc{amirul2025openhumanoidface,
  title  = {Open Humanoid Actuated Face Dataset},
  author = {Amirul et al.},
  year   = {2025},
  url    = {https://huggingface.co/datasets/iamirulofficial/OpenHumnoidDataset}
}
```

---

## Acknowledgements

Big thanks to the **i2Head InMoov** community and everyone who helped engineer the silicone skin and actuator stack.