Paper: Less is More: Recursive Reasoning with Tiny Networks (arXiv:2510.04871)
LuminaRS: a ~90M-parameter image generation model for art and illustration that runs on mobile-class devices (2-4GB VRAM).
| Problem | Current Solutions | LuminaRS |
|---|---|---|
| Heavy models (6-12GB VRAM) | SDXL, Flux | ~90M params, <500MB |
| Can't run on mobile | Quantized SD (quality loss) | Designed small from scratch |
| Poor prompt adherence | SD 1.5 | TRM-style recursive reasoning |
| No art specialization | General photo models | Art-focused training stages |
| Unstable training | Diffusion (score matching) | Flow matching (stable ODE) |
Inspired by Tiny Recursive Models (the paper above), which beat LLMs 200x larger with only 7M params.
```python
# TRM-style recursion: the same shared-weight denoiser is applied T times
for _ in range(T):
    z = z + unet(z, text, t)  # each pass refines the latent z
```
Effective depth = T x L layers without T x the parameter count: e.g., T=6 recursive passes over an L=8-block core act like a 48-layer network while storing only 8 blocks of weights.
Each block: depthwise 7x7 conv, adaptive LayerNorm conditioned on the timestep, multi-query cross-attention (MQA) over the text embeddings, and a GELU MLP; see the sketch below.
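A minimal PyTorch sketch of one such block. The names (`dim`, `t_dim`, `txt_dim`), shapes, and call signature are illustrative assumptions, not the repo's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    """Depthwise 7x7 conv -> AdaLN (time) -> MQA cross-attn (text) -> GELU MLP (illustrative)."""
    def __init__(self, dim, t_dim, txt_dim, heads=8):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)   # depthwise 7x7
        self.ada = nn.Linear(t_dim, 2 * dim)                          # timestep -> scale, shift
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)       # affine comes from AdaLN
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(txt_dim, 2 * (dim // heads))              # MQA: one K/V shared by all heads
        self.proj = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.heads = heads

    def forward(self, x, t_emb, text):  # x: (B,C,H,W), t_emb: (B,t_dim), text: (B,N,txt_dim)
        x = x + self.dwconv(x)                                        # local spatial mixing
        B, C, H, W = x.shape
        h = x.flatten(2).transpose(1, 2)                              # (B, HW, C) tokens
        scale, shift = self.ada(t_emb).chunk(2, dim=-1)
        h = self.norm(h) * (1 + scale[:, None]) + shift[:, None]      # adaptive LayerNorm on time
        q = self.q(h).view(B, -1, self.heads, C // self.heads).transpose(1, 2)
        k, v = self.kv(text).chunk(2, dim=-1)                         # each (B, N, head_dim)
        k = k[:, None].expand(-1, self.heads, -1, -1)                 # broadcast the single K head
        v = v[:, None].expand(-1, self.heads, -1, -1)
        attn = F.scaled_dot_product_attention(q, k, v)                # cross-attention to text
        h = h + self.proj(attn.transpose(1, 2).reshape(B, -1, C))
        h = h + self.mlp(h)                                           # GELU MLP
        return h.transpose(1, 2).reshape(B, C, H, W)
```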
| Stage | Trained Params | Learning Rate |
|---|---|---|
| 1 | All denoiser params | 1e-4 |
| 2 | Cross-attention only | 1e-5 |
| 3 | All params, joint | 1e-6 |
The VAE and CLIP text encoder stay frozen in every stage; a sketch of a stage-2 training step follows.
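To make these choices concrete, here is a sketch of one stage-2 step: only cross-attention weights get gradients, and the loss is the standard flow-matching (rectified-flow) velocity regression. The parameter name pattern `cross_attn` and the denoiser call signature are assumptions, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

# Stage 2: freeze everything except cross-attention (assumed name pattern "cross_attn")
for name, p in model.named_parameters():
    p.requires_grad = "cross_attn" in name
opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-5)

def train_step(x1, text_emb):
    # Flow matching: interpolate noise -> data and regress the constant velocity x1 - x0.
    x0 = torch.randn_like(x1)                      # noise endpoint of the flow
    t = torch.rand(x1.shape[0], device=x1.device)  # uniform time in [0, 1]
    tb = t[:, None, None, None]
    xt = (1 - tb) * x0 + tb * x1                   # straight-line interpolation
    v_pred = model(xt, text_emb, t)                # denoiser predicts the velocity field
    loss = F.mse_loss(v_pred, x1 - x0)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```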
| Component | Params |
|---|---|
| Encoder | ~35M |
| Bottleneck | ~15M |
| Decoder | ~35M |
| Embeddings | ~5M |
| Total trainable | ~90M |
| VAE (frozen) | ~83M |
| CLIP (frozen) | ~303M |
| Inference VRAM (batch=1) | ~1.5-2GB |
```python
from luminars.model import LuminaRS
from luminars.config import LuminaRSConfig
from luminars.sampler import sample_flow

cfg = LuminaRSConfig()
model = LuminaRS(cfg)

# text_emb: prompt embeddings from the (frozen) CLIP text encoder
latents = sample_flow(model, text_emb, (1, 16, 32, 32), 12)  # 12 flow steps -> (1, 16, 32, 32) latents
```
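The internals of `sample_flow` aren't shown in the card; a flow-matching sampler of this kind is typically a plain Euler ODE integrator over the learned velocity field. A minimal sketch, assuming the denoiser is called as `model(z, text_emb, t)`:

```python
import torch

def euler_flow_sample(model, text_emb, shape, steps):
    # Integrate dz/dt = v_theta(z, text, t) from t=0 (noise) to t=1 (image latents).
    z = torch.randn(shape)                  # start at the noise endpoint
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt)
        z = z + dt * model(z, text_emb, t)  # one Euler step along the predicted velocity
    return z
```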
MIT License