Devio committed
Commit 1d65120 · verified · 1 Parent(s): 6d4c738

Update README.md

Files changed (1):
  1. README.md +141 -11

README.md CHANGED
@@ -1,33 +1,163 @@
  ---
  license: apache-2.0
- pipeline_tag: text-to-image
  library_name: diffusers
  ---

- ![NucleusMoE-Image](demo.png)

- NucleusMoE-Image is a 17B flow-based text-to-image generation MoE model with only 2B active parameters. Technical report coming soon...

- ## Usage

  ```python
  import torch
  from diffusers import DiffusionPipeline

- pipe = DiffusionPipeline.from_pretrained("NucleusAI/NucleusMoE-Image", torch_dtype=torch.bfloat16)
  pipe.to("cuda")

  config = TextKVCacheConfig()
  pipe.transformer.enable_cache(config)

- prompt = "Vintage-style poster for Artificial Analysis depicting a retro-futuristic cityscape dominated by AI. Towering structures shaped like neural networks loom over sleek flying cars. In the foreground, a stylized robot extends a hand towards the viewer. Bold, sans-serif typography at the top reads 'Welcome to the AI Revolution' with 'Artificial Analysis' prominently displayed at the bottom in a retro chrome effect. The color palette consists of deep blues, vibrant oranges, and metallic silvers, reminiscent of 1950s sci-fi illustrations."

  image = pipe(
-     prompt,
-     height=1024,
-     width=1024,
-     guidance_scale=4.0,
      num_inference_steps=50,
  ).images[0]
- image.save("nucleus_image_demo.png")
  ```

---
license: apache-2.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- moe
- sparse-moe
- diffusion
- text-to-image
- image-generation
---

<p align="center">
<img src="assets/logo/OpsAI_Logo.png" width="200"/>
</p>
<p align="center">
<a href="https://github.com/WithNucleusAI/Nucleus-Image"><b>GitHub</b></a>&nbsp;&nbsp; | &nbsp;&nbsp;<a href="https://huggingface.co/NucleusAI/NucleusMoE-Image">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;<a href="">Tech Report</a>
</p>

<p align="center">
<img src="assets/collage/Collage-1-Top.jpeg" width="1600"/>
<img src="assets/collage/Collage-1-Bottom.jpeg" width="1600"/>
</p>

## Introduction

**Nucleus-Image** is a text-to-image generation model built on a sparse mixture-of-experts (MoE) diffusion transformer architecture. It scales to **17B total parameters** across 64 routed experts per layer while activating only **~2B parameters** per forward pass, establishing a new Pareto frontier in quality versus efficiency. Nucleus-Image matches or exceeds leading models, including Qwen-Image, GPT Image 1, Seedream 3.0, and Imagen4, on GenEval, DPG-Bench, and OneIG-Bench. This is a **base model** released without any post-training optimization (no DPO, no reinforcement learning, no human preference tuning); all reported results reflect pre-training performance only. We release the full model weights, training code, and dataset, making Nucleus-Image the first fully open-source MoE diffusion model at this quality tier.

## Key Features

- **Sparse MoE efficiency**: 17B total capacity with only ~2B active parameters per forward pass, enabling high-quality generation at a fraction of the inference cost of dense models
- **Expert-Choice Routing**: Guarantees balanced expert utilization without auxiliary load-balancing losses, with a decoupled routing design that separates timestep-aware assignment from timestep-conditioned computation
- **Base model, no post-training**: All benchmark results are from pre-training alone, without DPO, reinforcement learning, or human preference tuning
- **Multi-aspect-ratio support**: Trained with aspect-ratio bucketing from the outset at every resolution stage, supporting a range of output dimensions
- **Text KV caching via diffusers**: Text tokens are excluded from the transformer backbone entirely, and their KV projections are cached across all denoising steps. This caching is natively integrated into the `diffusers` pipeline; simply enable it with `TextKVCacheConfig` for an automatic speedup with no code changes to the inference loop
- **Progressive resolution training**: Three-stage curriculum (256 → 512 → 1024) with progressive sparsification of expert capacity
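The text KV caching idea can be sketched in isolation (an illustrative toy, not the `diffusers` internals; `project_text_kv` and `denoise_step` are hypothetical stand-ins): because the prompt is fixed for the whole sampling run, its key/value projections can be computed once and reused at every denoising step.

```python
# Schematic sketch of text KV caching across denoising steps.
# All names here (project_text_kv, denoise_step) are illustrative
# placeholders, not actual diffusers internals.

calls = {"kv_projections": 0}

def project_text_kv(text_embeddings):
    """Stand-in for the text K/V projection; counts how often it runs."""
    calls["kv_projections"] += 1
    keys = [x * 0.5 for x in text_embeddings]
    values = [x * 2.0 for x in text_embeddings]
    return keys, values

def denoise_step(latent, text_kv):
    # A real step would run the full transformer; here we just mix values in.
    _, values = text_kv
    return latent + sum(values) * 1e-3

text_embeddings = [0.1, 0.2, 0.3]
latent = 0.0

# Without caching, every step would redo the projection (50 calls).
# With caching, the projection runs once and is reused for all steps:
cached_kv = project_text_kv(text_embeddings)
for _ in range(50):
    latent = denoise_step(latent, cached_kv)

print(calls["kv_projections"])  # 1 projection for 50 denoising steps
```

The real pipeline applies the same idea to the text tokens' KV projections inside each transformer block, which is why enabling the cache requires no change to the sampling loop itself.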
## Model Specifications

| Specification | Value |
|---|---|
| Total parameters | 17B |
| Active parameters | ~2B |
| Architecture | Sparse MoE Diffusion Transformer |
| Layers | 32 |
| Hidden dimension | 2048 |
| Attention heads (Q / KV) | 16 / 4 (GQA) |
| Experts per MoE layer | 64 routed + 1 shared |
| Expert hidden dimension | 1344 |
| Text encoder | Qwen3-VL-8B-Instruct |
| Image tokenizer | Qwen-Image VAE (16ch) |
| Training data | 700M images, 1.5B caption pairs |
| Training curriculum | Progressive resolution (256 → 512 → 1024) |
| Total training steps | 1.7M |
 
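The 17B headline figure can be sanity-checked from the table above (a back-of-envelope sketch; the three-matrix SwiGLU-style expert FFN is an assumption, and attention, embeddings, and the shared expert are excluded from the tally):

```python
# Rough estimate of routed-expert parameters from the spec table.
# Assumes each expert is a SwiGLU-style FFN with three weight matrices
# (gate, up, down), each hidden_dim x expert_hidden_dim. Attention,
# embeddings, and the shared expert are deliberately left out.
layers = 32
experts_per_layer = 64
hidden_dim = 2048
expert_hidden_dim = 1344

params_per_expert = 3 * hidden_dim * expert_hidden_dim
routed_expert_params = layers * experts_per_layer * params_per_expert
print(f"{routed_expert_params / 1e9:.1f}B")  # prints "16.9B"
```

Under these assumptions the routed experts alone account for roughly 16.9B parameters, i.e. nearly all of the 17B total, which is consistent with only ~2B being active when a small number of experts fire per token.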
## Benchmark Results

![Overall Performance](assets/Overall-Performance.png)

Nucleus-Image achieves state-of-the-art or near state-of-the-art results on all three benchmarks despite activating only ~2B of its 17B parameters per forward pass. All results are from the base model at 1024x1024, 50 inference steps, CFG scale 8.0.

| Benchmark | Score | Highlights |
|---|---|---|
| **GenEval** | **0.87** | Matches Qwen-Image; leads all models on spatial position (0.85) |
| **DPG-Bench** | **88.79** | #1 overall; leads in entity (93.08), attribute (92.20), and other (93.62) |
| **OneIG-Bench** | **0.522** | Surpasses Imagen4 (0.515) and Recraft V3 (0.502); strong style (0.430) |

## Quick Start

Install the latest version of diffusers:

```
pip install git+https://github.com/huggingface/diffusers
```

Generate images with Nucleus-Image:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers import TextKVCacheConfig

model_name = "NucleusAI/NucleusMoE-Image"

pipe = DiffusionPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Enable Text KV caching across denoising steps (integrated into diffusers)
config = TextKVCacheConfig()
pipe.transformer.enable_cache(config)

# Supported aspect ratios
aspect_ratios = {
    "1:1": (1024, 1024),
    "16:9": (1344, 768),
    "9:16": (768, 1344),
    "4:3": (1184, 896),
    "3:4": (896, 1184),
    "3:2": (1248, 832),
    "2:3": (832, 1248),
}

prompt = "A weathered lighthouse on a rocky coastline at golden hour, waves crashing against the rocks below, seagulls circling overhead, dramatic clouds painted in shades of amber and violet"
width, height = aspect_ratios["16:9"]

image = pipe(
    prompt=prompt,
    width=width,
    height=height,
    num_inference_steps=50,
    guidance_scale=8.0,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]

image.save("nucleus_output.png")
```
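The supported buckets follow a consistent pattern (an inference from the listed values, not a documented guarantee): each dimension is a multiple of 32, and the pixel count stays within a few percent of the 1024x1024 budget. A quick check:

```python
# Sanity-check the listed aspect-ratio buckets: every dimension is a
# multiple of 32 and the total pixel count stays near the 1024x1024 budget.
aspect_ratios = {
    "1:1": (1024, 1024),
    "16:9": (1344, 768),
    "9:16": (768, 1344),
    "4:3": (1184, 896),
    "3:4": (896, 1184),
    "3:2": (1248, 832),
    "2:3": (832, 1248),
}

budget = 1024 * 1024
for name, (w, h) in aspect_ratios.items():
    assert w % 32 == 0 and h % 32 == 0, name
    # Every listed bucket lands within ~3% of the square pixel budget
    assert abs(w * h - budget) / budget < 0.03, name
print("all buckets pass")
```

If you need a custom size, staying close to this pattern (dimensions divisible by 32, roughly constant area) is the safest bet, since those are the shapes the model saw during bucketed training.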
 
## Showcases

### Portraits & People

Nucleus-Image generations of human subjects and portraits, spanning diverse cultures, ages, and artistic styles, from expressive character studies to fine-grained close-ups with intricate skin texture and detail.

<p align="center">
<img src="assets/collage/Collage-1-Top.jpeg" width="1600"/>
<img src="assets/collage/Collage-1-Bottom.jpeg" width="1600"/>
</p>

### Fantasy, Surrealism & Nature

Nucleus-Image generations spanning fantasy, surrealism, animation, and the natural world.

<p align="center">
<img src="assets/collage/Collage-2-Top.jpeg" width="1600"/>
<img src="assets/collage/Collage-2-Bottom.jpeg" width="1600"/>
</p>

### Commercial & Everyday Imagery

Nucleus-Image generations across product photography, architecture, typography, food, and world culture, demonstrating versatility in commercial, conceptual, and everyday imagery.

<p align="center">
<img src="assets/collage/Collage-3-Top.jpeg" width="1600"/>
<img src="assets/collage/Collage-3-Bottom.jpeg" width="1600"/>
</p>

## License

Nucleus-Image is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).

## Citation

```bibtex
@misc{nucleusimage2026,
  title={Nucleus-Image: Sparse MoE for Image Generation},
  author={Nucleus AI Team},
  year={2026},
  eprint={XXXX.XXXXX},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}
```