How to use with the Transformers library
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Able2/umt5-xxl-encode-only")
model = AutoModelForSeq2SeqLM.from_pretrained("Able2/umt5-xxl-encode-only")
Quick Links

An encoder-only version of Google's umt5-xxl model, mainly used as a text encoder for image or video generation models.

Original Repo: https://huggingface.co/google/umt5-xxl
Quantized version: https://huggingface.co/Able2/umt5-xxl-encode-only-gguf

Model size: 6B params · Tensor type: F32 (Safetensors)