How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-to-speech", model="Gapeleon/Orpheus-4b-base")
```
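The snippet above only constructs the pipeline. A minimal sketch of actually calling it is shown below, assuming the text-to-speech pipeline tag works for this checkpoint (the empty model card doesn't confirm this); the prompt text, the soundfile dependency, and the output file name are illustrative assumptions:

```python
import soundfile as sf  # assumption: used only to write the generated waveform

# Text-to-speech pipelines in Transformers return a dict containing an
# "audio" array and its "sampling_rate".
result = pipe("Hello, this is a quick test of Orpheus-4b-base.")
sf.write("orpheus_sample.wav", result["audio"].squeeze(), result["sampling_rate"])
```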
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Gapeleon/Orpheus-4b-base")
model = AutoModelForCausalLM.from_pretrained("Gapeleon/Orpheus-4b-base")
```
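Building on the direct-load snippet, here is a hedged sketch of loading the checkpoint in bfloat16 (matching the tensor type listed below) and running a plain generate call. Orpheus-style models typically emit audio codec tokens that need a separate codec decoder to become a waveform; that step is not shown, and the prompt format used here is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: load in bfloat16 (the stored tensor type) and let Transformers
# place layers automatically across available devices.
tokenizer = AutoTokenizer.from_pretrained("Gapeleon/Orpheus-4b-base")
model = AutoModelForCausalLM.from_pretrained(
    "Gapeleon/Orpheus-4b-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt; the exact prompt format expected by this base model
# is not documented on the card.
inputs = tokenizer("Hello, world.", return_tensors="pt").to(model.device)
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```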
README.md exists, but its content is empty.
Downloads last month: 4
Model size: 4B params (Safetensors)
Tensor type: BF16
Inference Providers: this model is not deployed by any Inference Provider.

Model tree for Gapeleon/Orpheus-4b-base: Quantizations (1 model)