---
license: creativeml-openrail-m
base_model: black-forest-labs/FLUX.1-dev
tags:
  - lora
  - flux
  - diffusers
  - ai-toolkit
  - z-image-turbo
datasets:
  - cowl1
  - cowl2
instance_prompt: cowl neck
---

# Model Card: Flux LoRA - cowl (Z-Image De-Turbo)

This is a LoRA model trained on a curated dataset of high-quality 1024x1024 images using the Ostris AI Toolkit.

**Architecture:** Specifically optimized for Z-Image De-Turbo (De-Distilled).

## Training Settings

| Parameter | Value |
| --- | --- |
| Trigger Word | `cowl neck` |
| Model Architecture | Z-Image De-Turbo (De-Distilled) |
| Batch Size | 2 |
| Rank (Dimension) | 48 |
| Precision | float8 (Transformer & Text Encoder) |
| Save Frequency | Every 200 steps |
| Total Training Steps | 4000 |

## Prompting Strategy & "Mix & Match"

This LoRA was trained with granular, tag-based descriptive captions. This approach decouples specific attributes, allowing you to freely combine features from across the dataset.

- **Modular Control:** You are not limited to the training images. You can mix attributes, for example taking a silk texture from one style, a deep drape from another, and setting the color to emerald green.
- **Tag-Based Precision:** Since every detail was tagged, the model performs best when you describe the specific materials (satin, jersey, wool), finishes, and colors you want to see.

**Example Prompt:**

```
cowl neck, emerald green color, satin material, deep draped folds, elegant aesthetic, studio lighting
```
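Because prompts are simply comma-joined tags, attribute mixing is easy to script. A minimal sketch in Python (the `build_prompt` helper is illustrative, not part of any toolkit):

```python
def build_prompt(*attributes: str, trigger: str = "cowl neck") -> str:
    """Compose a prompt by joining the trigger word with attribute tags."""
    return ", ".join([trigger, *attributes])

# Mix attributes freely: material, color, drape, aesthetic, lighting.
prompt = build_prompt(
    "emerald green color", "satin material",
    "deep draped folds", "elegant aesthetic", "studio lighting",
)
# → "cowl neck, emerald green color, satin material, deep draped folds, elegant aesthetic, studio lighting"
```

Swapping any tag (e.g. `"satin material"` for `"wool material"`) changes only that attribute while the rest of the prompt stays intact.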

## Usage

- **Inference:** Use settings compatible with Z-Image Turbo/De-Turbo.
- **LoRA Strength:** 0.6–1.0 (start at 0.8).
- **Precision:** Trained in float8. Ensure your environment (ComfyUI, Diffusers, etc.) supports this for optimal results.
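For Diffusers users, applying the LoRA on top of the FLUX.1-dev base model might look like the sketch below. The `lora_path` argument and the `joint_attention_kwargs` scale are assumptions to adapt to your setup; this is a generic FLUX LoRA recipe, not a verified configuration for the De-Turbo checkpoint.

```python
LORA_SCALE = 0.8  # recommended starting strength (usable range 0.6-1.0)

def run_inference(prompt: str, lora_path: str):
    """Sketch: load FLUX.1-dev, apply the cowl LoRA, and generate one image.

    lora_path is a placeholder -- point it at the Hub repo id or a local
    .safetensors file containing this LoRA's weights.
    """
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(lora_path)
    pipe.to("cuda")  # or pipe.enable_model_cpu_offload() on low-VRAM setups

    image = pipe(
        prompt,
        height=1024,
        width=1024,  # the dataset resolution
        joint_attention_kwargs={"scale": LORA_SCALE},  # LoRA strength
    ).images[0]
    return image
```

Lowering `LORA_SCALE` toward 0.6 weakens the cowl-neck effect if it overpowers the rest of the prompt.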

## Training Infrastructure

- **Toolkit:** Ostris AI Toolkit
- **Dataset Size:** [insert count] images (1024x1024)
- **Training Method:** LoRA

Model card generated for the cowl series.