SDXL FP16 LoRAs Collection
A curated collection of Low-Rank Adaptation (LoRA) models for Stable Diffusion XL in FP16 precision format. LoRAs enable efficient fine-tuning and style transfer with minimal storage requirements compared to full model fine-tunes.
Model Description
This repository contains LoRA adapters for Stable Diffusion XL (SDXL) that can be applied on top of the base SDXL model to achieve specific artistic styles, concepts, or improvements. LoRAs use low-rank matrix decomposition to efficiently capture style and concept information in small files (typically 10-200 MB vs 6+ GB for full models).
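The storage savings from low-rank decomposition can be sketched with a quick parameter count. The layer size and rank below are illustrative, not measured from any file in this repository:

```python
# Rough parameter-count comparison for one projection layer, illustrating
# why LoRA files are small. Numbers are illustrative assumptions.
d_in, d_out = 1280, 1280   # assumed projection size (SDXL-scale)
rank = 16                  # a common LoRA rank

full_params = d_in * d_out                 # full fine-tune: whole weight matrix
lora_params = rank * d_in + d_out * rank   # LoRA: two low-rank factors A and B

ratio = full_params / lora_params
print(f"full: {full_params:,}  lora: {lora_params:,}  ~{ratio:.0f}x smaller")
# full: 1,638,400  lora: 40,960  ~40x smaller
```

Summed over every adapted layer, this is why a LoRA lands in the tens of megabytes while a full checkpoint is several gigabytes.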
Key Features
- FP16 Precision: Half-precision floating point for a balance between quality and efficiency
- Small File Sizes: LoRAs are 30-100x smaller than full model checkpoints
- Modular Design: Mix and match multiple LoRAs for combined effects
- Base Model Compatible: Works with SDXL base 1.0 and derived models
- SafeTensors Format: Secure, fast loading with metadata support
Repository Contents
sdxl-fp16-loras/
├── README.md # This file
└── loras/
└── sdxl/ # SDXL LoRA files (to be added)
Current Status
Repository Status: Empty / Ready for population
This repository is structured and ready to receive SDXL LoRA files. LoRA models should be placed in the loras/sdxl/ directory.
Expected File Types:
- `.safetensors` - Primary LoRA format (recommended)
- `.pt`/`.pth` - PyTorch format (legacy)
Typical LoRA Sizes: 10 MB - 200 MB per file
Hardware Requirements
For LoRA Storage
- Disk Space: 10-200 MB per LoRA model
- Format: SafeTensors or PyTorch format
For Running SDXL + LoRAs
- VRAM Requirements:
  - Minimum: 8 GB (with optimizations)
  - Recommended: 12 GB (comfortable generation)
  - Optimal: 16+ GB (multiple LoRAs, higher resolutions)
- RAM: 16 GB minimum, 32 GB recommended
- Disk Space: 6-7 GB for SDXL base model + LoRA sizes
- GPU: NVIDIA GPU with CUDA support recommended
Performance Notes
- Each LoRA adds minimal VRAM overhead (~50-100 MB)
- Multiple LoRAs can be stacked (typically 2-5 simultaneously)
- LoRA strength can be adjusted (typically 0.5-1.0 range)
Usage Examples
Using with Diffusers Library
```python
from diffusers import DiffusionPipeline
import torch

# Load base SDXL model
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

# Load LoRA weights
pipe.load_lora_weights("E:/huggingface/sdxl-fp16-loras/loras/sdxl/example_lora.safetensors")

# Generate image with LoRA applied
prompt = "a beautiful landscape with mountains and a lake"
image = pipe(
    prompt=prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength (0.0-1.0)
).images[0]
image.save("output.png")
```
Loading Multiple LoRAs
```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Load multiple LoRAs with different strengths
pipe.load_lora_weights(
    "E:/huggingface/sdxl-fp16-loras/loras/sdxl/style_lora.safetensors",
    adapter_name="style",
)
pipe.load_lora_weights(
    "E:/huggingface/sdxl-fp16-loras/loras/sdxl/detail_lora.safetensors",
    adapter_name="detail",
)

# Set adapter weights
pipe.set_adapters(["style", "detail"], adapter_weights=[0.8, 0.6])

# Generate with combined LoRAs
image = pipe(prompt="your prompt here").images[0]
```
Using with ComfyUI
1. Copy LoRA files to ComfyUI's `models/loras/` directory
2. In your ComfyUI workflow:
   - Add a "Load LoRA" node
   - Connect it to your model loader
   - Select the LoRA file from the dropdown
   - Adjust the strength (typically 0.5-1.0)
   - Connect it to your generation workflow
Using with Automatic1111 WebUI
1. Copy LoRA files to the `stable-diffusion-webui/models/Lora/` directory
2. Restart the WebUI or click "Refresh" in the LoRA section
3. In the prompt, use `<lora:filename:strength>`
   - Example: `beautiful landscape <lora:style_lora:0.8>`
4. Adjust the strength value (0.0-1.5 typical range)
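The `<lora:filename:strength>` tag syntax can be illustrated with a small parser. `parse_lora_tags` is a hypothetical helper for demonstration, not part of the WebUI:

```python
import re

# Extract <lora:name:strength> tags from an Automatic1111-style prompt.
# The tag syntax follows the WebUI convention; this parser is illustrative only.
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def parse_lora_tags(prompt: str):
    """Return (clean_prompt, [(name, strength), ...])."""
    tags = [(m.group(1), float(m.group(2))) for m in LORA_TAG.finditer(prompt)]
    clean = LORA_TAG.sub("", prompt).strip()
    return clean, tags

clean, tags = parse_lora_tags("beautiful landscape <lora:style_lora:0.8>")
print(clean, tags)  # beautiful landscape [('style_lora', 0.8)]
```

The WebUI strips the tag from the prompt text itself and applies the LoRA at the given strength, which is what the clean/tags split mirrors here.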
Model Specifications
Format Details
- Precision: FP16 (16-bit floating point)
- File Format: SafeTensors (recommended) or PyTorch
- Base Architecture: Stable Diffusion XL (SDXL)
- Compatible Models: SDXL base 1.0, SDXL refiner, SDXL derivatives
LoRA Architecture
- Rank: Typically 4-128 (higher = more capacity, larger files)
- Target Modules: Cross-attention layers, transformer blocks
- Training Method: Low-Rank Adaptation (LoRA) fine-tuning
- Compatibility: Cross-compatible with other SDXL tools and frameworks
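The low-rank update described above can be sketched numerically: the adapted weight is the frozen base weight plus a scaled product of the two small trained factors. Shapes and the scale value below are illustrative:

```python
import numpy as np

# Minimal sketch of the LoRA update: W_eff = W + scale * (B @ A),
# where A (r x d_in) and B (d_out x r) are the small trained factors.
# Dimensions here are toy values, not real SDXL layer sizes.
rng = np.random.default_rng(0)
d_in, d_out, r, scale = 64, 64, 4, 0.8

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA "down" factor
B = rng.standard_normal((d_out, r))      # LoRA "up" factor

delta = B @ A                            # rank-r update
W_eff = W + scale * delta

print(np.linalg.matrix_rank(delta))      # rank is at most r (= 4 here)
```

The `scale` factor is what LoRA "strength" controls at inference time: 0.0 leaves the base weights untouched, 1.0 applies the full trained update.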
Performance Tips and Optimization
Memory Optimization
- Use FP16 precision to reduce VRAM usage
- Enable `torch.compile()` for faster inference (PyTorch 2.0+)
- Use `enable_model_cpu_offload()` for low-VRAM systems
- Lower the LoRA strength if generation quality is affected
Quality Optimization
- LoRA Strength: Start at 0.8 and adjust based on results
  - Too high (>1.2): May cause artifacts or overfitting
  - Too low (<0.4): Minimal LoRA effect
- Multiple LoRAs: Keep total strength below 3.0 to avoid conflicts
- Inference Steps: 25-35 steps recommended for quality
- Guidance Scale: 7-9 for balanced creativity and adherence
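The stacking guideline above can be expressed as a small sanity check. `check_lora_budget` is a hypothetical helper written for this README, not a diffusers API:

```python
# Hypothetical helper applying the guideline above: flag stacked LoRAs
# whose combined strength exceeds a soft budget (3.0 by default).
def check_lora_budget(weights, budget=3.0):
    """Return (total, within_budget) for a list of adapter strengths."""
    total = sum(weights)
    return total, total <= budget

total, ok = check_lora_budget([1.0, 0.5])  # two stacked LoRAs, within budget
```

A check like this is easy to run before calling `set_adapters` with a long list of adapter weights.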
Best Practices
- Test LoRAs individually before combining
- Use descriptive filenames for easy identification
- Keep LoRAs organized by style/purpose
- Document LoRA trigger words and recommended settings
- Back up working LoRA combinations
Adding LoRAs to This Repository
When adding new LoRA files:
1. Place files in the `loras/sdxl/` directory
2. Use descriptive names: `style_name_v1.safetensors`
3. Document metadata: include trigger words and training info
4. Update the README: add a file listing with sizes and descriptions
5. Verify the format: ensure SafeTensors format for safety
License Information
Repository License: OpenRAIL++
This repository follows the OpenRAIL++ license, which is the standard license for Stable Diffusion XL models. Individual LoRA files may have additional licensing terms specified by their creators.
Usage Terms
- Commercial Use: Allowed under OpenRAIL++ terms
- Redistribution: Allowed with attribution
- Modifications: Allowed with attribution
- Restrictions: See OpenRAIL++ for prohibited use cases
Important: Always verify the license of individual LoRA models before use, especially for commercial applications. Some LoRAs may have additional restrictions or requirements.
Citation
If using SDXL and LoRA models in research or publications:
```bibtex
@misc{sdxl2023,
  title={SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis},
  author={Podell, Dustin and English, Zion and Lacey, Kyle and Blattmann, Andreas and Dockhorn, Tim and Müller, Jonas and Penna, Joe and Rombach, Robin},
  year={2023}
}

@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Wang, Lu and Chen, Weizhu},
  journal={arXiv preprint arXiv:2106.09685},
  year={2021}
}
```
Related Resources
Official Documentation
- SDXL Model: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
- Diffusers Library: https://huggingface.co/docs/diffusers/
- LoRA Training Guide: https://huggingface.co/docs/diffusers/training/lora
Community Resources
- Civitai: Community LoRA sharing platform
- Hugging Face Hub: Official model repository
- SDXL Discord: Community support and discussion
Tools and Frameworks
- ComfyUI: Node-based UI for SDXL workflows
- Automatic1111: Popular web UI for Stable Diffusion
- Fooocus: Simplified interface focused on quality
Changelog
Version 1.4 (2025-10-28)
- Updated README version to v1.4
- Verified repository structure and Hugging Face metadata compliance
- Confirmed all YAML frontmatter requirements met
- Current status: Empty repository ready for LoRA population
Version 1.3 (2025-10-14)
- Updated README version to v1.3
- Verified repository structure and status
- Confirmed empty state ready for LoRA population
Version 1.2 (2025-10-13)
- Enhanced usage examples and documentation
- Added multiple LoRA loading examples
- Expanded performance optimization section
Version 1.0 (2025-10-13)
- Initial repository structure created
- README documentation established
- Ready for LoRA file population
- Comprehensive usage examples and specifications documented
Repository Maintained By: Local Collection
Last Updated: 2025-10-28
Status: Active - Ready for LoRA additions