---
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- graphic-design
- design-generation
- layout-planning
- qwen3
base_model: Qwen/Qwen3-8B
---

# DesignAsCode Semantic Planner

This is the Semantic Planner for the [DesignAsCode](https://github.com/liuziyuan1109/design-as-code) pipeline. Given a natural-language design request, it generates a structured design plan, including layout reasoning, layer grouping, image generation prompts, and text element specifications.

## Model Details

| | |
|---|---|
| **Base Model** | Qwen3-8B |
| **Fine-tuning** | Supervised Fine-Tuning (SFT) |
| **Size** | 16 GB (fp16) |
| **Context Window** | 8,192 tokens |

## Training Data

Trained on ~10k examples sampled from the [DesignAsCode Training Data](https://huggingface.co/datasets/Tony1109/DesignAsCode-training-data), which contains 19,479 design samples distilled from the [Crello](https://huggingface.co/datasets/cyberagent/crello) dataset using GPT-4o and GPT-o3. No additional data was used.

### Training Format

- **Input:** `prompt` — natural-language design request
- **Output:** `layout_thought` + `grouping` + `image_generator` + `generate_text`

See the [training data repo](https://huggingface.co/datasets/Tony1109/DesignAsCode-training-data) for field details.
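A single training record can be sketched as follows. The field names match the format above; the concrete values are invented for illustration and are not drawn from the dataset:

```python
# Illustrative SFT record: `prompt` is the input, the four tagged
# fields are concatenated to form the target completion.
# All values below are fabricated examples.
record = {
    "prompt": "Design a minimalist poster for a spring jazz concert.",
    "layout_thought": "<layout_thought>Place the headline in the upper third...</layout_thought>",
    "grouping": '<grouping>[{"label": "headline", "layers": [0, 1]}]</grouping>',
    "image_generator": '<image_generator>[{"layer": 2, "prompt": "watercolor saxophone"}]</image_generator>',
    "generate_text": '<generate_text>[{"text": "Spring Jazz Night", "font": "Montserrat", "size": 72}]</generate_text>',
}

# The supervision target is the concatenation of the four output fields.
target = "\n".join(
    record[k] for k in ("layout_thought", "grouping", "image_generator", "generate_text")
)
```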

## Training Configuration

| | |
|---|---|
| **Batch Size** | 1 |
| **Gradient Accumulation** | 2 |
| **Learning Rate** | 5e-5 (AdamW) |
| **Epochs** | 2 |
| **Max Sequence Length** | 8,192 tokens |
| **Precision** | bfloat16 |
| **Loss** | Completion-only (only on generated tokens) |
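Completion-only loss is typically implemented as label masking: prompt tokens are assigned the label `-100` (the ignore index used by PyTorch's cross-entropy), so gradients flow only through the generated plan. A minimal sketch with placeholder token ids:

```python
IGNORE_INDEX = -100  # ignored by PyTorch's cross-entropy loss

def build_labels(prompt_ids, completion_ids):
    """Mask prompt tokens so the loss covers only the completion."""
    input_ids = list(prompt_ids) + list(completion_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(completion_ids)
    return input_ids, labels

# Placeholder token ids for illustration.
input_ids, labels = build_labels([101, 102, 103], [201, 202])
# input_ids: [101, 102, 103, 201, 202]
# labels:    [-100, -100, -100, 201, 202]
```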

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_path = "Tony1109/DesignAsCode-planner"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto"
)
```

For full pipeline usage (plan → implement → reflection), see the [project repo](https://github.com/liuziyuan1109/design-as-code) and [QUICKSTART.md](https://github.com/liuziyuan1109/design-as-code/blob/main/QUICKSTART.md).

## Outputs

The model generates semi-structured text with XML tags:

- `<layout_thought>...</layout_thought>` — detailed layout reasoning
- `<grouping>...</grouping>` — JSON array grouping related layers with thematic labels
- `<image_generator>...</image_generator>` — JSON array of per-layer image generation prompts
- `<generate_text>...</generate_text>` — JSON array of text element specifications (font, size, alignment, etc.)
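Because the four sections are delimited by fixed XML tags, they can be extracted from the raw generation with a simple regex pass. A sketch (the tag names are exactly those listed above; the sample output string is fabricated):

```python
import json
import re

def parse_plan(text):
    """Extract the four tagged sections from the planner's raw output."""
    sections = {}
    for tag in ("layout_thought", "grouping", "image_generator", "generate_text"):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        sections[tag] = m.group(1).strip() if m else None
    return sections

# Minimal fabricated output for illustration.
raw = (
    "<layout_thought>Headline top, hero image center.</layout_thought>"
    '<grouping>[{"label": "headline", "layers": [0]}]</grouping>'
    '<image_generator>[{"layer": 1, "prompt": "abstract gradient"}]</image_generator>'
    '<generate_text>[{"text": "Sale", "font": "Inter", "size": 48}]</generate_text>'
)
plan = parse_plan(raw)
grouping = json.loads(plan["grouping"])  # JSON sections parse with the stdlib
```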

## Ethical Considerations

- Designs should be reviewed by humans before production use.
- May reflect biases present in the training data.
- Generated content should be checked for copyright compliance.

## Citation

```bibtex
@article{liu2025designascode,
  title     = {DesignAsCode: Bridging Structural Editability and 
               Visual Fidelity in Graphic Design Generation},
  author    = {Liu, Ziyuan and Sun, Shizhao and Huang, Danqing 
               and Shi, Yingdong and Zhang, Meisheng and Li, Ji 
               and Yu, Jingsong and Bian, Jiang},
  journal   = {arXiv preprint},
  year      = {2025}
}
```