---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
- RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic
tags:
- flux2
- text-encoder
- fp8
- mistral
---
# FLUX.2 Text Encoder (FP8)
A combined repository for FLUX.2 text encoding with FP8 quantization, requiring ~24 GB of VRAM instead of ~48 GB.
## Components
| Component | Source |
|-----------|--------|
| FP8 Model Weights | [RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic](https://huggingface.co/RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic) |
| Tokenizer/Processor | [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) |
## Usage
```python
from transformers import AutoProcessor, Mistral3ForConditionalGeneration

model = Mistral3ForConditionalGeneration.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    local_files_only=True,  # set to False to download from the Hub
)
processor = AutoProcessor.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    use_fast=False,
)
```
## Purpose
This repo exists to simplify FLUX.2 deployment by combining all necessary text encoder
components into a single download. It is used to extract intermediate hidden states
(layers 10/20/30) for image-generation conditioning.
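As a rough illustration of that layer-selection step, the sketch below picks entries out of the `hidden_states` tuple that a `transformers` model returns when called with `output_hidden_states=True` (index 0 is the embedding output, index *i* is the output of transformer layer *i*). The helper name and the string stand-ins are hypothetical, chosen for illustration; the real values are tensors from a forward pass of the model above.

```python
# Sketch, assuming `hidden_states` is the tuple produced by a transformers
# forward pass with output_hidden_states=True (index 0 = embedding output,
# index i = output of transformer layer i).
def select_conditioning_states(hidden_states, layers=(10, 20, 30)):
    # Hypothetical helper: gather the intermediate states used as
    # conditioning for image generation.
    return [hidden_states[i] for i in layers]

# Stand-in demo with strings in place of tensors (embeddings + 40 layers):
fake_hidden_states = tuple(f"layer_{i}" for i in range(41))
selected = select_conditioning_states(fake_hidden_states)
print(selected)  # ['layer_10', 'layer_20', 'layer_30']
```

In a real pipeline, `selected` would hold the three tensors passed on as conditioning inputs.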
## Attribution
- FP8 quantization by [RedHatAI](https://huggingface.co/RedHatAI) using llm-compressor
- Original model by [Mistral AI](https://huggingface.co/mistralai)