---
license: apache-2.0
pipeline_tag: text-to-image
library_name: diffusers
---
# SceneDesigner: Controllable Multi-Object Image Generation with 9-DoF Pose Manipulation

This repository contains the model presented in the paper *SceneDesigner: Controllable Multi-Object Image Generation with 9-DoF Pose Manipulation*.

Project Page: https://henghuiding.com/SceneDesigner/
Code: https://github.com/FudanCVL/SceneDesigner

## Abstract
Controllable image generation has attracted increasing attention in recent years, enabling users to manipulate visual content such as identity and style. However, achieving simultaneous control over the 9D poses (location, size, and orientation) of multiple objects remains an open challenge. Despite recent progress, existing methods often suffer from limited controllability and degraded quality, falling short of comprehensive multi-object 9D pose control. To address these limitations, we propose SceneDesigner, a method for accurate and flexible multi-object 9-DoF pose manipulation. SceneDesigner incorporates a branched network into the pre-trained base model and leverages a new representation, the CNOCS map, which encodes 9D pose information from the camera view. This representation exhibits strong geometric interpretation properties, leading to more efficient and stable training. To support training, we construct a new dataset, ObjectPose9D, which aggregates images from diverse sources along with 9D pose annotations. To further address data imbalance issues, particularly performance degradation on low-frequency poses, we introduce a two-stage training strategy with reinforcement learning, where the second stage fine-tunes the model using a reward-based objective on rebalanced data. At inference time, we propose Disentangled Object Sampling, a technique that mitigates insufficient object generation and concept confusion in complex multi-object scenes. Moreover, by integrating user-specific personalization weights, SceneDesigner enables customized pose control for reference subjects. Extensive qualitative and quantitative experiments demonstrate that SceneDesigner significantly outperforms existing approaches in both controllability and quality. Code is publicly available at https://github.com/FudanCVL/SceneDesigner.
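For intuition, the nine degrees of freedom in a "9D pose" decompose into 3 for location, 3 for size, and 3 for orientation, as stated in the abstract. The small sketch below is a hypothetical illustration of that decomposition only; it is not SceneDesigner's actual pose representation, which is the CNOCS map described in the paper.

```python
from dataclasses import dataclass, astuple


@dataclass
class Pose9D:
    """Illustrative 9-DoF object pose (hypothetical helper, not the paper's API)."""
    # Location: object centre in camera coordinates
    x: float
    y: float
    z: float
    # Size: per-axis extents of the object's bounding box
    sx: float
    sy: float
    sz: float
    # Orientation: Euler angles in radians
    roll: float
    pitch: float
    yaw: float

    def dof(self) -> int:
        # Total degrees of freedom = number of scalar fields
        return len(astuple(self))


pose = Pose9D(0.0, 0.0, 2.5, 1.0, 1.0, 1.0, 0.0, 0.0, 0.5)
print(pose.dof())  # → 9
```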
## ⚙️ Quick Start

### 1. Installation
Install the Python environment (uv is recommended):

```shell
uv sync
```

Or alternatively:

```shell
pip install -r requirements.txt
```

Install the Blender environment:

```shell
cd render
python install.py
```

If the automatic installation script fails, you can install manually:

- First download Blender and extract it to the `./render` directory
- Then locate Blender's bundled Python and install the Python dependencies for Blender, for example:

```shell
cd render
blender-4.2.8-linux-x64/4.2/python/bin/python3.11 -m pip install -r blender_requirements.txt
```
### 2. Download Checkpoints

- Download the SceneDesigner weights to the `checkpoints` directory
- Download the Stable Diffusion 3.5 base model weights to the `checkpoints` directory
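The downloads above can be scripted with the `huggingface-cli` tool from `huggingface_hub`. This is a sketch under assumptions: the card does not state the SceneDesigner repo ID, so `<org>/SceneDesigner` below is a placeholder to replace, and the Stable Diffusion 3.5 variant (e.g. `stabilityai/stable-diffusion-3.5-large`) should match what the code expects.

```shell
# Hypothetical sketch — repo IDs are placeholders, not confirmed by this card.
pip install -U "huggingface_hub[cli]"

# SceneDesigner weights (replace <org>/SceneDesigner with the real repo ID)
huggingface-cli download <org>/SceneDesigner --local-dir checkpoints

# Stable Diffusion 3.5 base model (gated: accept the license on the Hub first)
huggingface-cli download stabilityai/stable-diffusion-3.5-large \
    --local-dir checkpoints/stable-diffusion-3.5
```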
### 3. Run Demo

Launch the Gradio app:

```shell
python app.py \
    --blender_path render/blender/blender \
    --device cuda:0 \
    --port 7861
```
- Adjust the 9D pose of the cube in the **Cube Controls** panel
- Enter text prompts in the **Generation Config** panel and click the **Generate Images** button to create images
## ✒️ Citation

If you find our work useful for your research and applications, please kindly cite using this BibTeX:
```bibtex
@inproceedings{SceneDesigner,
  title={SceneDesigner: Controllable Multi-Object Image Generation with 9-DoF Pose Manipulation},
  author={Qin, Zhenyuan and Shuai, Xincheng and Ding, Henghui},
  booktitle={NeurIPS},
  year={2025}
}
```
