
🧠 Open Humanoid Actuated Face Dataset

Sample Face Image

Dataset Summary

The Open Humanoid Actuated Face Dataset is designed for researchers working on
facial‑actuation control, robotics, reinforcement learning, and human–computer interaction.

  • Origin – collected during a reinforcement‑learning (RL) training loop whose objective was to reproduce human facial expressions.
  • Platform – a modified i2Head InMoov humanoid head with a silicone skin.
  • Control – 16 actuators driving facial features and eyeballs.
  • Pairing – each example contains the raw RGB image and the exact actuator angles that produced it.

Dataset Structure

Field Type Description
image Image RGB capture of the humanoid face (resolution [FILL_RES]).
actuated_angle dict Mapping from actuator index ("0" through "15") to its integer angle in degrees.

Actuator Index Reference

Idx Actuator
00 Cheek – Left
01 Cheek – Right
02 Eyeball Sideways – Left
03 Eyeball Up/Down – Left
04 Eyelid Upper – Left
05 Eyelid Lower – Left
06 Eyeball Up/Down – Right
07 Eyeball Sideways – Right
08 Eyelid Upper – Right
09 Eyelid Lower – Right
10 Forehead – Right
11 Forehead – Left
12 Upper Nose
13 Eyebrow – Right
14 Jaw
15 Eyebrow – Left
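For convenience, the index table above can be encoded as a lookup list. A minimal sketch — the list and helper below are illustrative, not part of the dataset API:

```python
# Actuator names in index order, transcribed from the table above.
ACTUATOR_NAMES = [
    "Cheek – Left",             # 00
    "Cheek – Right",            # 01
    "Eyeball Sideways – Left",  # 02
    "Eyeball Up/Down – Left",   # 03
    "Eyelid Upper – Left",      # 04
    "Eyelid Lower – Left",      # 05
    "Eyeball Up/Down – Right",  # 06
    "Eyeball Sideways – Right", # 07
    "Eyelid Upper – Right",     # 08
    "Eyelid Lower – Right",     # 09
    "Forehead – Right",         # 10
    "Forehead – Left",          # 11
    "Upper Nose",               # 12
    "Eyebrow – Right",          # 13
    "Jaw",                      # 14
    "Eyebrow – Left",           # 15
]

def describe_angles(actuated_angle: dict) -> dict:
    """Convert an index-keyed angle dict to a name-keyed dict."""
    return {ACTUATOR_NAMES[int(k)]: v for k, v in actuated_angle.items()}

describe_angles({"0": 90, "1": 20})
# {'Cheek – Left': 90, 'Cheek – Right': 20}
```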

Actuator Mapping Images (placeholders)

Full‑Face Map Eye‑Only Map



Dataset Statistics

Split Samples Size
Train (full) [FILL_TOTAL] ≈ 105 GB
Train (small) 2 ≈ 2 GB

(Numbers are for the preview release and will be updated as data is added.)


Usage Example

from datasets import load_dataset, Image

# load the small subset
ds = load_dataset("iamirulofficial/OpenHumnoidDataset", name="small", split="train")
ds = ds.cast_column("image", Image())   # decode image bytes ➜ PIL.Image

img = ds[0]["image"]
angles = ds[0]["actuated_angle"]        # {'0': 90, '1': 20, ...}
img.show()
print(angles)

Tip  For the full corpus use name="full" (may require streaming=True once the dataset grows).
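For model input, the actuated_angle dict can be flattened into a fixed-order vector. A minimal sketch assuming NumPy; the helper name is ours, not part of the dataset API:

```python
import numpy as np

def angles_to_vector(actuated_angle: dict, n_actuators: int = 16) -> np.ndarray:
    """Turn the index-keyed angle dict into a fixed-order float vector.

    Indices absent from the dict (if any) default to 0.
    """
    vec = np.zeros(n_actuators, dtype=np.float32)
    for idx, angle in actuated_angle.items():
        vec[int(idx)] = angle
    return vec

vec = angles_to_vector({"0": 90, "1": 20})  # shape (16,), first entries 90.0, 20.0
```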


Data Collection & RL Setup

A detailed description of the RL pipeline, reward design, and actuator hardware will appear in our upcoming paper (in preparation, 2025). Briefly:

  1. Vision module extracts target expression keypoints from live human video.
  2. Policy network predicts 16 actuator set‑points.
  3. Real‑time reward computes expression similarity + smoothness penalties.
  4. Images & angle vectors are logged every N steps, forming this dataset.
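The logging step (4 above) can be sketched as follows. This is a self-contained stand-in, not the authors' pipeline: the policy is a random stub, the image is a placeholder, and the logging interval is assumed since N is not stated.

```python
import random

N_ACTUATORS = 16
LOG_EVERY_N = 50  # assumed value for the unstated "every N steps"

def fake_policy(_target):
    # Stand-in for the policy network: 16 random set-points in [0, 180] degrees.
    return [random.randint(0, 180) for _ in range(N_ACTUATORS)]

def run(steps=200):
    log = []
    for step in range(steps):
        angles = fake_policy(None)              # 2. predict 16 set-points
        # 3. reward (expression similarity - smoothness penalty) would go here
        if step % LOG_EVERY_N == 0:             # 4. periodic logging
            log.append({
                "image": None,                  # placeholder for the RGB frame
                "actuated_angle": {str(i): a for i, a in enumerate(angles)},
            })
    return log

print(len(run()))  # 200 steps / every 50 -> 4 logged examples
```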

License

Released under the MIT License – free for commercial and non‑commercial use.


Citation

@misc{amirul2025openhumanoidface,
  title   = {Open Humanoid Actuated Face Dataset},
  author  = {Amirul et al.},
  year    = {2025},
  url     = {https://huggingface.co/datasets/iamirulofficial/OpenHumnoidDataset}
}

Acknowledgements

Big thanks to the i2Head InMoov community and everyone who helped engineer the silicone skin and actuator stack.
