---
language:
  - en
license: apache-2.0
size_categories:
  - 100K<n<1M
task_categories:
  - text-to-image
tags:
  - code
  - human-object-interaction
---

# Customized Human Object Interaction Image Generation

Paper | arXiv | Code

By Zhu Xu, Zhaowen Wang, Yuxin Peng, Yang Liu*

Accepted by ACM-MM 2025

## Abstract

Compositional Customized Image Generation aims to customize multiple target concepts within generated content, and has gained attention for its wide applications. Existing approaches mainly concentrate on preserving the appearance of target entities, while neglecting fine-grained interaction control among them. To equip models with such interaction control capability, we focus on the human-object interaction scenario and propose the task of Customized Human Object Interaction Image Generation (CHOI), which simultaneously requires identity preservation for the target human and object and semantic control of the interaction between them. Two primary challenges exist for CHOI: (1) simultaneous identity preservation and interaction control require the model to decompose the human and object into self-contained identity features and pose-oriented interaction features, while current HOI image datasets fail to provide ideal samples for such feature-decomposed learning; (2) an inappropriate spatial configuration between human and object may lead to the loss of the desired interaction semantics. To tackle these challenges, we first process a large-scale dataset in which each sample contains the same human-object pair in different interactive poses. We then design a two-stage model, Interact-Custom, which first explicitly models the spatial configuration by generating a foreground mask depicting the interaction behavior, and then, guided by this mask, generates the target human and object interacting while preserving their identity features. Furthermore, if users provide a background image and the union location where the target human and object should appear, Interact-Custom can optionally condition on them, offering high content controllability. Extensive experiments on our tailored metrics for the CHOI task demonstrate the effectiveness of our approach.

## Dataset Structure

```
-data
   |--train
        |--image
        |--video
        |--video_2
   |--test
   |--annos
   |--hico_det_clip_instance
```

## Data Preparation

Our data is hosted on Hugging Face. Download it and arrange it as follows:

```
-data
   |--train
        |--image
        |--video
        |--video_2
   |--test
   |--annos
   |--hico_det_clip_instance
```
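After downloading, a small script can verify that everything landed in the right place. This is a minimal sanity-check sketch, not part of the repository; the expected subdirectories simply mirror the tree above:

```python
from pathlib import Path

# Expected subdirectories under the data root, mirroring the layout above.
EXPECTED = [
    "train/image",
    "train/video",
    "train/video_2",
    "test",
    "annos",
    "hico_det_clip_instance",
]


def missing_dirs(root="data"):
    """Return the expected subdirectories that are missing under root."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).is_dir()]


if __name__ == "__main__":
    gaps = missing_dirs()
    print("layout OK" if not gaps else f"missing: {gaps}")
```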

## Sample Usage

Our pre-trained models for iamg (mask generation) and mgig (HOI image generation) are stored on Hugging Face. Download them and place them at ./ckpts in your project directory.

### 1. Mask Generation (iamg)

To generate masks, first create the virtual environment from ./iamg/environment.yml and activate it. Then run the demo with:

```shell
cd ./iamg
python main_demo.py --hoi_category 'a person is riding a bicycle' --demo_sample ./demo_data/1.jpg --position "[0.3,0.8,0.3,0.8]"
# demo_sample and position specify the background image and the union location of the human-object pair.
```
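The --position flag takes four normalized coordinates in [0, 1]. As an illustration of how such an argument could be mapped to pixel coordinates, here is a minimal sketch; the `[x1, x2, y1, y2]` ordering and the helper name are assumptions for illustration, not the repository's actual parsing code:

```python
import ast


def position_to_pixels(arg, width, height):
    # Parse a string like "[0.3,0.8,0.3,0.8]" into four floats.
    # Assumed ordering: [x1, x2, y1, y2], each normalized to [0, 1].
    x1, x2, y1, y2 = ast.literal_eval(arg)
    # Scale to pixel coordinates of a width x height background image.
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))
```

For a 512x512 background, `position_to_pixels("[0.3,0.8,0.3,0.8]", 512, 512)` yields the pixel box `(154, 154, 410, 410)`.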

### 2. HOI Image Generation (mgig)

For HOI image generation, first set up the virtual environment by installing the required packages:

```shell
pip install -r ./mgig/requirements.txt
pip install git+https://github.com/cocodataset/panopticapi.git
pip install pycocotools -i https://pypi.douban.com/simple
pip install lvis
```

After setting up and activating the environment, generate images with the demo sample:

```shell
cd ./mgig
python run_inference_demo.py
```

## BibTeX

```bibtex
@inproceedings{xu2025interactcustom,
  title={Interact-Custom: Customized Human Object Interaction Image Generation},
  author={Xu, Zhu and Wang, Zhaowen and Peng, Yuxin and Liu, Yang},
  booktitle={ACM-MM},
  year={2025}
}
```