Improve dataset card: Add links, tags, sample usage, and citation
#1
by nielsr (HF Staff) - opened

README.md CHANGED
---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- text-to-image
tags:
- code
- human-object-interaction
---
# Customized Human Object Interaction Image Generation

[Paper](https://huggingface.co/papers/2508.19575) | [arXiv](https://arxiv.org/abs/2508.19575) | <a href="https://sites.google.com/view/interact-custom"><img src="https://img.shields.io/badge/ℋ-homepage-3875B7.svg?labelColor=21438A&style=plastic" height="25px"></a> | [Code](https://github.com/XZPKU/Inter-custom.git)

<div align="left">
<img src="https://github.com/XZPKU/Inter-custom/blob/main/asset/method.png?raw=true" width="70%" />
</div>
> By Zhu Xu, Zhaowen Wang, Yuxin Peng, Yang Liu*
>
> Accepted by ACM-MM 2025

## Abstract
Compositional Customized Image Generation aims to customize multiple target concepts within the generated content, and has gained attention for its wide applications. Existing approaches mainly concentrate on preserving the target entities' appearance, while neglecting fine-grained interaction control among them. To equip the model with such interaction control capability, we focus on the human-object interaction scenario and propose the task of Customized Human Object Interaction Image Generation (CHOI), which simultaneously requires identity preservation for the target human and object and semantic control of the interaction between them. Two primary challenges exist for CHOI: (1) simultaneous identity preservation and interaction control require the model to decompose the human and object into self-contained identity features and pose-oriented interaction features, while current HOI image datasets fail to provide ideal samples for such feature-decomposed learning; (2) an inappropriate spatial configuration between human and object may lead to a lack of the desired interaction semantics. To tackle these challenges, we first process a large-scale dataset in which each sample contains the same human-object pair in different interactive poses. We then design a two-stage model, Interact-Custom, which first explicitly models the spatial configuration by generating a foreground mask depicting the interaction behavior; then, under the guidance of this mask, it generates the target human and object interacting while preserving their identity features. Furthermore, if users provide a background image and the union location where the target human and object should appear, Interact-Custom can optionally incorporate them, offering high content controllability. Extensive experiments on our tailored metrics for the CHOI task demonstrate the effectiveness of our approach.
## Data Preparation
Our data is stored at [Hugging Face](https://huggingface.co/sdcdavcsvc/Interact-Custom). Download it and place the data like this:
```
-data
|--train
|--image
|--video
|--video_2
|--test
|--annos
|--hico_det_clip_instance
```
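After downloading, a quick check that the layout matches the tree above can save debugging later. A minimal sketch, assuming `data` is the root folder and that `image`, `video`, and `video_2` sit under `train` (the rendered tree loses its indentation, so the nesting is an assumption):

```python
from pathlib import Path

# Sub-directories expected from the tree above; placing image/video/video_2
# under train/ is an assumption, since the rendered tree loses indentation.
EXPECTED = [
    "train/image",
    "train/video",
    "train/video_2",
    "test",
    "annos",
    "hico_det_clip_instance",
]

def missing_dirs(root="data"):
    """Return the expected sub-directories that are absent under `root`."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).is_dir()]
```

If `missing_dirs()` returns a non-empty list, re-check the download before running training or inference.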
## Sample Usage

Our pre-trained models for `iamg` (mask generation) and `mgig` (HOI image generation) are stored on [Hugging Face](https://huggingface.co/sdcdavcsvc/Interact-custom). Download them and place them under `./ckpts` in your project directory.

### 1. Mask Generation (iamg)

To generate masks, first set up the virtual environment from `./iamg/environment.yml` and activate it.
Then run the demo with:
```bash
cd ./iamg
python main_demo.py --hoi_category 'a person is riding a bicycle' --demo_sample ./demo_data/1.jpg --position '[0.3,0.8,0.3,0.8]'
# --demo_sample and --position specify the background image and the union location of the human-object pair.
```
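The four numbers passed to `--position` are normalized coordinates of the union box in which the human-object pair should appear. A minimal sketch of mapping them to pixel coordinates, assuming an `[x1, x2, y1, y2]` ordering in `[0, 1]` (the actual convention is defined by how `main_demo.py` parses the argument):

```python
def to_pixel_box(position, width, height):
    """Map a normalized union box to integer pixel coordinates.

    The [x1, x2, y1, y2] ordering is an assumption; check how
    main_demo.py parses --position for the real convention.
    """
    x1, x2, y1, y2 = position
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))
```

For the demo call above, `to_pixel_box([0.3, 0.8, 0.3, 0.8], width, height)` places the box between 30% and 80% of each image dimension.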
### 2. HOI Image Generation (mgig)

For HOI image generation, first set up the virtual environment by installing the required packages:
```bash
pip install -r ./mgig/requirements.txt
pip install git+https://github.com/cocodataset/panopticapi.git
pip install pycocotools -i https://pypi.douban.com/simple
pip install lvis
```
After setting up and activating the environment, generate images with the demo sample:
```bash
cd ./mgig
python run_inference_demo.py
```
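The two stages above can be chained from a single driver script. A sketch, using only the commands and working directories shown in the demos (pass `run=True` only once both environments and the `./ckpts` checkpoints are in place):

```python
import subprocess

def run_pipeline(hoi_category, demo_sample, position, run=False):
    """Build (and optionally execute) the two-stage demo commands above."""
    stage1 = ["python", "main_demo.py",          # mask generation (iamg)
              "--hoi_category", hoi_category,
              "--demo_sample", demo_sample,
              "--position", position]
    stage2 = ["python", "run_inference_demo.py"]  # HOI image generation (mgig)
    if run:
        subprocess.run(stage1, cwd="./iamg", check=True)
        subprocess.run(stage2, cwd="./mgig", check=True)
    return stage1, stage2
```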
## Bibtex
```bibtex
@article{xu2025interactcustom,
  title={Interact-Custom: Customized Human Object Interaction Image Generation},
  author={Xu, Zhu and Wang, Zhaowen and Peng, Yuxin and Liu, Yang},
  journal={ACM-MM},
  year={2025}
}
```