Use from the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="Eviation/DistillT5")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Eviation/DistillT5", dtype="auto")
```
Quick Links
  • Unwrapped model: tensors renamed from `encoder.encoder.[...]` to `encoder.[...]` to align with T5 XXL naming.
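The renaming above can be sketched as a small state-dict key rewrite. This is a hypothetical illustration, not the script the author used; the placeholder keys stand in for real tensor names, and the `[...]` parts are elided in the source.

```python
def unwrap_keys(state_dict):
    """Strip the outer 'encoder.' wrapper prefix so keys match T5 XXL naming.

    Keys of the form 'encoder.encoder.<rest>' become 'encoder.<rest>';
    all other keys pass through unchanged.
    """
    renamed = {}
    for key, value in state_dict.items():
        if key.startswith("encoder.encoder."):
            key = key[len("encoder."):]  # drop the extra wrapper prefix
        renamed[key] = value
    return renamed


# Placeholder keys standing in for real tensors (hypothetical example):
wrapped = {
    "encoder.encoder.block.0.layer.0.SelfAttention.q.weight": "tensor",
    "shared.weight": "tensor",
}
print(sorted(unwrap_keys(wrapped)))
```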

Available models:

| Filename | Quant type | File Size | Description |
| --- | --- | --- | --- |
| DistillT5-F32.safetensors | F32 | 518MB | - |
| DistillT5-BF16.safetensors | BF16 | 259MB | - |
| DistillT5-F16.safetensors | F16 | 259MB | - |
| DistillT5-FP8.safetensors | FP8 (F8_E4M3) | 130MB | - |
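The file sizes follow from bytes per element: F32 stores 4 bytes per weight, BF16 and F16 store 2, and FP8 stores 1, so each halving of precision roughly halves the file. A quick sanity check against the table (sizes in MB, rounded; the 518MB F32 figure is taken from the table above):

```python
F32_SIZE_MB = 518  # F32 file size from the table

# Bytes per element for each storage dtype
BYTES_PER_ELEMENT = {"F32": 4, "BF16": 2, "F16": 2, "FP8": 1}


def expected_size_mb(dtype):
    """Scale the F32 file size by the dtype's bytes-per-element ratio."""
    return round(F32_SIZE_MB * BYTES_PER_ELEMENT[dtype] / 4)


for dtype in ("F32", "BF16", "F16", "FP8"):
    print(dtype, expected_size_mb(dtype))
```

The predicted sizes (518, 259, 259, 130) match the table, ignoring small header overhead in the safetensors files.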