---
license: mit
tags:
- tiny-stable-diffusion
- diffusion
- image-generation
library_name: pytorch
---
# tiny-sd-models
This is a **diffusion** model trained with [tiny-stable-diffusion](https://github.com/your-username/tiny-stable-diffusion).
## Model Description
This is a Diffusion Transformer (DiT/MMDiT) trained for text-to-image generation in latent space.
### Architecture
- **Type**: DiT or MMDiT (Multi-Modal Diffusion Transformer)
- **Conditioning**: CLIP text embeddings
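To illustrate the conditioning mechanism above, here is a minimal single-head cross-attention step in plain Python: image latent tokens act as queries over CLIP text-token embeddings (keys/values). This is a didactic sketch only; the learned Q/K/V projections, multiple heads, and the actual dimensions used by this model are omitted.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(image_tokens, text_tokens):
    """Each image token attends over the text tokens
    (queries = image, keys/values = text).
    Learned projection matrices are intentionally left out."""
    d = len(text_tokens[0])
    out = []
    for q in image_tokens:
        # Scaled dot-product scores against every text token
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in text_tokens]
        weights = softmax(scores)
        # Output is a convex combination of the text-token values
        out.append([sum(w * v[j] for w, v in zip(weights, text_tokens))
                    for j in range(d)])
    return out

image_tokens = [[1.0, 0.0], [0.0, 1.0]]   # 2 latent tokens, dim 2 (toy sizes)
text_tokens = [[1.0, 1.0], [0.5, -0.5]]   # 2 text-token embeddings
conditioned = cross_attention(image_tokens, text_tokens)
```

In the real MMDiT block this attention runs per head with learned projections, but the flow of information, text embeddings steering the image latents, is the same.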
## Usage
```python
import torch

from src.models.dit import create_model  # adjust to the appropriate model import

# Load the checkpoint on CPU
checkpoint = torch.load("model.pt", map_location="cpu")

# Create the model and load the trained weights
model = create_model(...)  # use the config stored in the checkpoint
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()
```
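Once loaded, the model is used as the noise predictor inside a reverse-diffusion loop. The update rule can be sketched in plain Python with a deterministic DDIM-style step (eta = 0); the schedule, the stand-in noise predictor, and the single scalar "latent" are illustrative assumptions, not this repository's actual code.

```python
import math

def alpha_bar(t, T=1000):
    """Cosine-style cumulative noise schedule (illustrative only)."""
    return math.cos((t / T) * math.pi / 2) ** 2

def predict_noise(latent, t):
    """Stand-in for the DiT forward pass (which would also take the
    CLIP text embedding); here it just echoes the latent."""
    return latent

def ddim_step(latent, t, t_prev, T=1000):
    a_t, a_prev = alpha_bar(t, T), alpha_bar(t_prev, T)
    eps = predict_noise(latent, t)
    # Estimate the clean latent, then re-diffuse it to the earlier timestep
    x0 = (latent - math.sqrt(1 - a_t) * eps) / math.sqrt(a_t)
    return math.sqrt(a_prev) * x0 + math.sqrt(1 - a_prev) * eps

latent = 1.0  # a single latent value stands in for the full latent tensor
for t in range(1000, 0, -100):
    latent = ddim_step(latent, t, max(t - 100, 0))
```

In practice the loop runs over latent tensors, the predicted `x0` is decoded by the VAE into pixel space at the end, and the sampler may add noise back in (eta > 0) for stochastic sampling.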
## License
MIT License