
---
license: other
base_model:
- black-forest-labs/FLUX.1-Krea-dev
tags:
- comfyui
- flux
- text-to-image
- krea
- workflow
- diffusion
pipeline_tag: text-to-image
---

# FLUX.1-Krea-dev ComfyUI Workflow

## Model Description

FLUX.1-Krea-dev is a text-to-image diffusion model developed by Black Forest Labs in collaboration with Krea, tuned for a more natural, photographic aesthetic. This workflow provides a complete ComfyUI pipeline for generating high-quality 1024x1024 images with the FLUX.1 architecture.

## Workflow Features

- **Dual Text Encoder Support**: Utilizes both CLIP-L and T5-XXL encoders for enhanced prompt understanding
- **FP8 Option**: FP8-quantized checkpoints can be substituted for lower VRAM use (the files linked below are the standard-precision releases)
- **High-Resolution Output**: Native 1024x1024 image generation
- **Advanced Sampling**: UniPC sampler with SGM uniform scheduling
- **Conditioning Control**: Flexible positive/negative prompt handling

## Quick Start

### Prerequisites

- ComfyUI installed
- 8GB+ VRAM recommended
- Required model files (see below)

### Installation

1. Download the workflow JSON file: `flux1_krea_dev.json`
2. Place it in your ComfyUI workflows directory
3. Download required models using the links below

### Required Models

Download and place these files in your ComfyUI models directory:

```
ComfyUI/models/
├── diffusion_models/
│   └── flux1-krea-dev.safetensors
├── text_encoders/
│   ├── clip_l.safetensors
│   └── t5xxl_fp16.safetensors
└── vae/
    └── ae.safetensors
```


### Model Links

- **Diffusion Model**: [flux1-krea-dev.safetensors](https://huggingface.co/Comfy-Org/FLUX.1-Krea-dev_ComfyUI/resolve/main/split_files/diffusion_models/flux1-krea-dev.safetensors)
- **Text Encoders**: 
  - [clip_l.safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors)
  - [t5xxl_fp16.safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors)
- **VAE**: [ae.safetensors](https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/resolve/main/split_files/vae/ae.safetensors)
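If you prefer scripting the setup, the four files above can be fetched directly from their `resolve` URLs. This is a minimal sketch using only the standard library; adjust `models_root` if your ComfyUI install lives elsewhere:

```python
import os
import urllib.request

# Direct download URLs from the "Model Links" section, mapped to their
# target subfolders under ComfyUI/models/
MODEL_FILES = {
    "diffusion_models/flux1-krea-dev.safetensors":
        "https://huggingface.co/Comfy-Org/FLUX.1-Krea-dev_ComfyUI/resolve/main/split_files/diffusion_models/flux1-krea-dev.safetensors",
    "text_encoders/clip_l.safetensors":
        "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors",
    "text_encoders/t5xxl_fp16.safetensors":
        "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors",
    "vae/ae.safetensors":
        "https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/resolve/main/split_files/vae/ae.safetensors",
}

def download_all(models_root: str = "ComfyUI/models") -> None:
    """Download any missing checkpoint files into the ComfyUI models tree."""
    for rel_path, url in MODEL_FILES.items():
        dest = os.path.join(models_root, rel_path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if not os.path.exists(dest):  # skip files already present
            print(f"Fetching {rel_path} ...")
            urllib.request.urlretrieve(url, dest)
```

Call `download_all()` to start the transfer; note the diffusion model alone is several gigabytes.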

## Usage

1. Load the `flux1_krea_dev.json` workflow in ComfyUI
2. Enter your prompt in the CLIPTextEncode node
3. Adjust image dimensions in EmptySD3LatentImage (default: 1024x1024)
4. Configure sampling parameters in KSampler:
   - Steps: 20
   - CFG: 1.0
   - Sampler: uni_pc
   - Scheduler: sgm_uniform
5. Execute the workflow
6. Results are saved under ComfyUI's `output/` directory with the filename prefix `flux_krea/flux_krea`
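The steps above can also be driven headlessly through ComfyUI's local HTTP API. This sketch assumes the server is running on its default port and that the workflow was exported in API format (via ComfyUI's "Save (API Format)" option) to a hypothetical `workflow_api.json`:

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"  # default ComfyUI server endpoint

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow graph in the envelope /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(path: str = "workflow_api.json") -> None:
    """POST the workflow to the running ComfyUI server for execution."""
    with open(path) as f:
        workflow = json.load(f)
    req = urllib.request.Request(
        COMFYUI_URL,
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # the server replies with a prompt_id on success
```

Edit the prompt text or sampler settings in the JSON before calling `queue_workflow()` to batch variations without touching the UI.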

## Workflow Structure

The workflow is organized into three main sections:

### Step 1 - Model Loading
- UNETLoader: Flux.1-Krea-dev diffusion model
- DualCLIPLoader: Text encoding pipeline
- VAELoader: Autoencoder for image decoding

### Step 2 - Image Configuration
- EmptySD3LatentImage: Set output dimensions and batch size

### Step 3 - Prompt & Generation
- CLIPTextEncode: Process text prompts
- KSampler: Diffusion process
- ConditioningZeroOut: Negative prompt handling
- VAEDecode: Convert latents to images
- SaveImage: Output results
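In API-format JSON, the three sections above reduce to a small node graph. The fragment below is illustrative only: the node IDs and wiring are assumptions, not the shipped workflow file, but the `class_type` names and default parameters match this card:

```python
# Illustrative API-format graph. Inputs written as ["node_id", output_index]
# are links to another node's output; node IDs here are assumed.
WORKFLOW_SKETCH = {
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "flux1-krea-dev.safetensors",
                     "weight_dtype": "default"}},
    "2": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "clip_l.safetensors",
                     "clip_name2": "t5xxl_fp16.safetensors", "type": "flux"}},
    "3": {"class_type": "VAELoader", "inputs": {"vae_name": "ae.safetensors"}},
    "4": {"class_type": "EmptySD3LatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a scenic mountain lake at dawn",  # your prompt
                     "clip": ["2", 0]}},
    "6": {"class_type": "ConditioningZeroOut",  # zeroed negative conditioning
          "inputs": {"conditioning": ["5", 0]}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["5", 0],
                     "negative": ["6", 0], "latent_image": ["4", 0],
                     "seed": 0, "steps": 20, "cfg": 1.0,
                     "sampler_name": "uni_pc", "scheduler": "sgm_uniform",
                     "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["3", 0]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0],
                     "filename_prefix": "flux_krea/flux_krea"}},
}
```

Reading the graph top to bottom mirrors the three steps: loaders (1–3), latent configuration (4), then prompting, sampling, decoding, and saving (5–9).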

## Parameters

**KSampler Defaults:**
- Seed: Random
- Steps: 20
- CFG Scale: 1.0
- Sampler: uni_pc
- Scheduler: sgm_uniform
- Denoise: 1.0

## Output Example

Here's a demonstration of the workflow in action:

<video controls width="100%">
    <source src="output.mp4" type="video/mp4">
    Your browser does not support the video tag.
</video>

## Technical Details

**Model Architecture:**
- Base Model: FLUX.1-Krea-dev
- Resolution: 1024x1024
- Precision: FP16 as linked (FP8-quantized variants available)
- Text Encoders: CLIP-L + T5-XXL

**Performance:**
- Recommended VRAM: 8GB+
- Generation Time: ~30-60 seconds (depending on hardware)
- Batch Size: 1 (configurable)

## Limitations

- Requires significant VRAM for optimal performance
- Limited to 1024x1024 resolution in this configuration
- May require prompt engineering for best results
- FP8 quantization may affect some image details

## License

Please check the original model repositories for specific license terms:
- [FLUX.1-Krea-dev](https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev/)
- [Text Encoders](https://huggingface.co/comfyanonymous/flux_text_encoders)

## Citation

If you use this workflow in your research, please cite the original FLUX models:

```bibtex
@misc{flux2025,
  title={FLUX: A Large-Scale Generative Model for Visual Content Creation},
  author={Black Forest Labs},
  year={2025},
  url={https://huggingface.co/black-forest-labs/FLUX.1-dev}
}
```

## Community & Support


This workflow card was generated for the FLUX.1-Krea-dev ComfyUI implementation.

