---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
- RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic
tags:
- flux2
- text-encoder
- fp8
- mistral
---

# FLUX.2 Text Encoder (FP8)

Combined repository for FLUX.2 text encoding with FP8 quantization, requiring ~24 GB of VRAM instead of ~48 GB.

## Components

| Component | Source |
|-----------|--------|
| FP8 model weights | [RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic](https://huggingface.co/RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic) |
| Tokenizer/processor | [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) |

## Usage

```python
from transformers import AutoProcessor, Mistral3ForConditionalGeneration

model = Mistral3ForConditionalGeneration.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    local_files_only=True,  # set to False to download from the Hub
)
processor = AutoProcessor.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    use_fast=False,
)
```

## Purpose

This repo exists to simplify FLUX.2 deployment by combining all necessary text-encoder components into a single download. The encoder is used to extract intermediate hidden states (layers 10, 20, and 30) for image-generation conditioning.
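As a sketch of the layer-selection step, the helper below picks out the intermediate hidden states named above and combines them. The concatenation scheme and the helper name are assumptions for illustration, not FLUX.2's actual conditioning pipeline.

```python
import torch


def extract_conditioning(hidden_states, layers=(10, 20, 30)):
    # hidden_states: tuple of (batch, seq_len, hidden_dim) tensors as
    # returned by transformers models with output_hidden_states=True --
    # the embedding output first, then one entry per transformer layer.
    selected = [hidden_states[i] for i in layers]
    # Assumed scheme: concatenate the chosen layers along the feature axis.
    return torch.cat(selected, dim=-1)
```

Against the real encoder, you would call the model with `output_hidden_states=True` and pass `outputs.hidden_states` to this helper.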

## Attribution

- FP8 quantization by [RedHatAI](https://huggingface.co/RedHatAI) using llm-compressor
- Original model by [Mistral AI](https://huggingface.co/mistralai)