# ControlNet for Mobile UI Layout Generation
This repository contains a ControlNet model fine-tuned to generate stylized mobile user interface (UI) screens from wireframe layouts and structured text prompts.
The model was trained on a fully synthetic dataset of mobile UI layouts, allowing precise control over spatial structure and design parameters.
## Model Overview
- Architecture: ControlNet + Stable Diffusion 1.5
- Conditioning:
  - Wireframe image (layout constraints)
  - Text prompt (design parameters)
- Resolution: 512 × 512
- Training data: procedurally generated synthetic UI layouts
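The wireframe conditioning image is a block diagram of the target screen layout. As a minimal sketch (the exact wireframe drawing conventions used during training are not documented here, so box styles and coordinates below are assumptions), such an image can be produced with Pillow:

```python
from PIL import Image, ImageDraw

# Hypothetical wireframe: a white 512x512 canvas with outlined boxes
# marking layout regions (top bar, hero area, list items, bottom nav).
# Coordinates are illustrative, not the training convention.
wireframe = Image.new("RGB", (512, 512), "white")
draw = ImageDraw.Draw(wireframe)

boxes = [
    (0, 0, 511, 56),      # top bar
    (16, 72, 495, 260),   # hero carousel
    (16, 280, 495, 340),  # list item 1
    (16, 356, 495, 416),  # list item 2
    (0, 456, 511, 511),   # bottom navigation
]
for box in boxes:
    draw.rectangle(box, outline="black", width=3)

wireframe.save("wireframe.png")
```

The saved `wireframe.png` can then be passed to the pipeline as the conditioning image, as shown in the usage example.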
## Usage
This model is designed to be used with the Stable Diffusion ControlNet pipeline.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Load the ControlNet conditioning model
controlnet = ControlNetModel.from_pretrained(
    "louis-gs/controlnet-mobile-ui-layout",
    torch_dtype=torch.float16,
)

# Load the Stable Diffusion + ControlNet pipeline
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda")

# Load the conditioning image (wireframe)
conditioning_image = Image.open("wireframe.png").convert("RGB")

# Structured prompt (same format as used during training)
prompt = (
    "a light mobile app product screen UI, "
    "low density, primary color palette p5, "
    "rounded corners radius 8, "
    "topbar with_search, bottom navigation 3, "
    "tabs 2, hero carousel, "
    "1 cards, 6 list items, "
    "cta none, 1 badges, 2 sections, "
    "header_block true, price_tag true"
)

# Generate the image
image = pipe(
    prompt=prompt,
    image=conditioning_image,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("result.png")
```
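Because the model expects prompts in the fixed training format, it can help to assemble them programmatically rather than by hand. The helper below is a hypothetical convenience (not part of the repository); the field names and ordering are taken from the example prompt above:

```python
def build_prompt(theme="light", screen="product screen", density="low",
                 palette="p5", radius=8, topbar="with_search", bottom_nav=3,
                 tabs=2, hero="carousel", cards=1, list_items=6,
                 cta="none", badges=1, sections=2,
                 header_block=True, price_tag=True):
    """Assemble a structured prompt matching the example format (assumed field order)."""
    return (
        f"a {theme} mobile app {screen} UI, "
        f"{density} density, primary color palette {palette}, "
        f"rounded corners radius {radius}, "
        f"topbar {topbar}, bottom navigation {bottom_nav}, "
        f"tabs {tabs}, hero {hero}, "
        f"{cards} cards, {list_items} list items, "
        f"cta {cta}, {badges} badges, {sections} sections, "
        f"header_block {str(header_block).lower()}, price_tag {str(price_tag).lower()}"
    )

# With defaults, this reproduces the example prompt above
prompt = build_prompt()
```

Varying individual parameters (e.g. `build_prompt(tabs=4, cta="primary")`) then yields prompts in the same structure for batch generation.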
## Model tree for utbm-ai54-l/controlnet-mobile-ui-layout

Base model: runwayml/stable-diffusion-v1-5