InsecureErasure/flan-t5-xxl-encoder-only-GGUF

Quantized GGUF versions of Google's FLAN-T5 XXL (encoder only)

This repo contains additional GGUF quantizations of Google's FLAN-T5 XXL (encoder only) that are not available in silveroxides/flan-t5-xxl-encoder-only-GGUF, produced using llama.cpp.

These are aimed at users with limited hardware: they require less memory and storage than higher-precision quantization formats. Other repositories on Hugging Face include the decoder layers, resulting in much larger files. If you are generating text embeddings for Chroma or FLUX models (as I am), you only need the encoder layers.
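For a rough sense of the savings, a GGUF file is approximately params × bits-per-weight ÷ 8 bytes. The sketch below uses approximate average bits-per-weight figures for llama.cpp K-quants (they mix block sizes and store scales, so real file sizes differ somewhat); the numbers are illustrative estimates, not measured sizes:

```python
def approx_size_gb(params: float, bits_per_weight: float) -> float:
    """Rough file-size estimate: params * bits / 8, in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

PARAMS = 5e9  # encoder-only FLAN-T5 XXL, per this model card

# Approximate average bits per weight for each format (assumption).
for name, bits in [("F16", 16.0), ("Q5_K_S", 5.5), ("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    print(f"{name:7s} ~{approx_size_gb(PARAMS, bits):4.1f} GB")
```

So a 4-bit encoder-only file lands around 3 GB, versus roughly 10 GB for an F16 encoder, and considerably more once decoder layers are included.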

Big kudos to silveroxides.

Available quantizations:
- Q5_K_S (Chroma Flash)
- Q4_K_S (Chroma Flash)
- Q4_K_M
- Q3_K_M
Model size: 5B params
Architecture: t5encoder
Format: GGUF