# Model Card: Flux LoRA - cowl (Z-Image De-Turbo)
This is a LoRA model trained on a curated dataset of high-quality 1024x1024 images using the Ostris AI Toolkit.

Architecture: Specifically optimized for Z-Image De-Turbo (De-Distilled).
## Training Settings
| Parameter | Value |
|---|---|
| Trigger Word | cowl neck |
| Model Architecture | Z-Image De-Turbo (De-Distilled) |
| Batch Size | 2 |
| Rank (Dimension) | 48 |
| Precision | float8 (Transformer & Text Encoder) |
| Save Frequency | Every 200 steps |
| Total Training Steps | 4000 |
## Prompting Strategy & "Mix & Match"
This LoRA was trained using granular, tag-based descriptive captions. This approach "de-couples" specific attributes, allowing you to freely combine features from across the dataset.
- Modular Control: You are not limited to the training images. You can mix attributes, for example taking a `silk texture` from one style, a `deep drape` from another, and setting the color to `emerald green`.
- Tag-Based Precision: Since every detail was tagged, the model performs best when you describe the specific materials (satin, jersey, wool), finishes, and colors you want to see.
Example prompt:

```
cowl neck, emerald green color, satin material, deep draped folds, elegant aesthetic, studio lighting
```
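Because the captions are granular and tag-based, prompts are just comma-separated tag lists, which makes them easy to assemble programmatically. A minimal sketch (the `build_prompt` helper is hypothetical, not part of any toolkit):

```python
# Hypothetical helper: compose a prompt from the trigger word plus
# freely mixed attribute tags (color, material, drape, lighting, ...).
TRIGGER = "cowl neck"

def build_prompt(color: str, material: str, *extra_tags: str) -> str:
    """Join the trigger word and attribute tags into a comma-separated prompt."""
    tags = [TRIGGER, f"{color} color", f"{material} material", *extra_tags]
    return ", ".join(tags)

prompt = build_prompt("emerald green", "satin", "deep draped folds", "studio lighting")
print(prompt)
# cowl neck, emerald green color, satin material, deep draped folds, studio lighting
```

Swapping any argument (e.g. `"wool"` for `"satin"`) exercises the "mix and match" behavior described above.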
## Usage
- Inference: Use settings compatible with Z-Image Turbo/De-Turbo.
- LoRA Strength: 0.6-1.0 (start at 0.8).
- Precision: Trained in float8. Ensure your environment (ComfyUI, Diffusers, etc.) supports this for optimal results.
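The strength setting scales the low-rank update before it is added to each frozen base weight, i.e. `W' = W + s * (B @ A)`. A toy, framework-free sketch of that effect (rank-1, 2x2 matrices chosen purely for illustration):

```python
# Toy illustration of LoRA strength: the adapter delta (B @ A) is scaled
# by the strength s before being added to the frozen base weight W.
def apply_lora(W, A, B, strength):
    """Return W + strength * (B @ A), with matrices as lists of lists."""
    rank = len(A)  # number of rows of A == columns of B
    out = [row[:] for row in W]
    for i in range(len(W)):
        for j in range(len(W[0])):
            delta = sum(B[i][k] * A[k][j] for k in range(rank))
            out[i][j] += strength * delta
    return out

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
A = [[0.5, 0.5]]               # rank-1 down-projection (r x d_in)
B = [[1.0], [1.0]]             # rank-1 up-projection (d_out x r)

print(apply_lora(W, A, B, 0.8))  # approximately [[1.4, 0.4], [0.4, 1.4]]
print(apply_lora(W, A, B, 0.0))  # strength 0 leaves W unchanged
```

Raising the strength toward 1.0 amplifies the adapter's contribution; at 0.0 the base model is untouched, which is why a mid-range start like 0.8 is the suggested default.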
## Training Infrastructure
- Toolkit: Ostris AI Toolkit
- Dataset Size: [insert count] images (1024x1024)
- Training Method: LoRA
Model card generated for the cowl series.
## Model Tree

- Model: Balledk/cowl-ZIT-v1.0
- Base model: black-forest-labs/FLUX.1-dev