---
license: creativeml-openrail-m
datasets:
- Voxel51/rico
pipeline_tag: unconditional-image-generation
tags:
- diffusion
- unet
- image-generation
- ui-design
- tensorflow
- mobile-ui
---

# Forma-1

`Forma-1` is a diffusion model trained on 36,536 mobile UI screenshots from the RICO dataset. Give it random noise and it will denoise it into something that looks like a mobile app screen.

It's the model behind DiffuseUI, a project I'm building to explore generative AI applied to interface design.

---

## Details

| Setting | Value |
|---|---|
| **Architecture** | U-Net |
| **Framework** | TensorFlow |
| **Image Size** | 64x64 |
| **Timesteps** | 1000 |
| **Noise Schedule** | Linear |
| **Epochs** | 200 |
| **Batch Size** | 64 |
| **Learning Rate** | 1e-4 |
| **Loss** | MSE |
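The linear noise schedule from the table can be sketched in a few lines. The endpoint values `1e-4` and `0.02` are the common DDPM defaults and an assumption here, not values confirmed from the Forma-1 training code:

```python
import numpy as np

# Linear beta schedule over 1000 timesteps. The 1e-4..0.02 endpoints
# are the usual DDPM defaults, assumed rather than confirmed.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # fraction of signal retained after t steps

# Nearly all signal at t=0, essentially pure noise by t=999.
print(alpha_bars[0], alpha_bars[-1])
```

`alpha_bars` is what decides how much of the original screenshot survives at each timestep, so it's the quantity worth plotting if you want to sanity-check a schedule.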
---

## Training Data

Trained on the `RICO` dataset: 36,536 UI screenshots across 27 app categories. Images were resized to 64x64 and normalized to [-1, 1] before training.
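The normalization step above is a simple rescale from uint8 pixel values. A minimal sketch, assuming the standard `x / 127.5 - 1` mapping (the resize to 64x64 would happen before this and is not shown):

```python
import numpy as np

def to_model_range(img_uint8):
    """Map a uint8 image in [0, 255] to float32 in [-1, 1].

    Assumed normalization; resizing to 64x64 (e.g. via tf.image.resize)
    would happen before this step.
    """
    return img_uint8.astype(np.float32) / 127.5 - 1.0

# A stand-in for one resized RICO screenshot.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
x = to_model_range(img)
```

Mapping to [-1, 1] matches the range of the Gaussian noise the model works with, which is why diffusion pipelines almost always use it instead of [0, 1].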
---

## How It Works

Standard DDPM setup. The forward process adds Gaussian noise to real UI screenshots across 1000 steps until they're pure static. The U-Net learns to predict that noise at each step. At generation time you start from pure static and denoise 1000 times; a new UI screen comes out the other end.
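The forward and reverse processes above can be sketched with NumPy. The U-Net itself is stubbed out here, and the schedule endpoints are the usual DDPM defaults, assumed rather than taken from the training code:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)  # assumed DDPM defaults
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_noise(x0, t):
    """q(x_t | x_0): mix the clean image with Gaussian noise at step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps  # the U-Net is trained to predict eps from (xt, t)

def reverse_step(xt, t, eps_pred):
    """One DDPM denoising step given the model's noise prediction."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

# Sampling: start from pure static and apply reverse_step for
# t = T-1 down to 0, with eps_pred coming from the trained U-Net.
x = rng.standard_normal((64, 64, 3))
```

Training only ever needs `forward_noise` (noise an image, ask the U-Net to recover `eps`, take an MSE loss); the 1000-step `reverse_step` loop is what runs at generation time.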
---

## Limitations

- 64x64 resolution: outputs are small
- Unconditional: no control over what category of UI gets generated
- Android only: trained exclusively on Android screenshots
- 200 epochs on this dataset size produces recognizable but rough outputs

---

## About

Built by Ricardo Flores as part of DiffuseUI.

[GitHub](https://github.com/imrichie) · [DiffuseUI](#)