```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Mathews/Orpheus-Liam")
model = AutoModelForCausalLM.from_pretrained("Mathews/Orpheus-Liam")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Developed by: Mathews
- License: apache-2.0
- Finetuned model
Emotion tags included in the training:
`<chuckles>`, `<whispering>`, `<happy>`, `<annoyed>`, `<nervous>`, `<sad>`, `<sighs>`, `<thoughtful>`, `<short pause>`, `<exhales sharply>`, `<surprised>`, `<clears throat>`, `<excited>`, `<stuttering>`, `<yawning>`, `<uh>`, `<groans>`, `<cracks knuckles>`, `<inhales deeply>`, `<laughs>`, `<exasperated>`, `<long pause>`
Usage example (prompt):
`Oh my goodness <laughs>.`
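The tags are written inline in the prompt text. Below is a minimal sketch of generating from such a tagged prompt, assuming the `model` and `tokenizer` objects from the "Load model directly" snippet above and treating the prompt as plain text. Orpheus-family models produce audio tokens, so turning the generated IDs into an actual waveform (e.g. with the SNAC-based decoder used by the upstream Orpheus models) is a separate step that is not shown here.

```python
# Minimal sketch (not an official recipe): generate from an emotion-tagged prompt.
# Assumes `model` and `tokenizer` from the "Load model directly" snippet above.
prompt = "Oh my goodness <laughs>."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=1200,  # audio-token sequences are much longer than text
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# `output_ids` holds the prompt followed by the newly generated tokens;
# decoding them into audio requires the Orpheus/SNAC decoding step, not shown here.
print(output_ids.shape)
```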
Disclaimer
I cannot guarantee that every tag will work or produce good-quality output, as the training dataset was very small.
Model tree for Mathews/Orpheus-Liam
- Base model: meta-llama/Llama-3.2-3B-Instruct
- Finetuned: canopylabs/orpheus-3b-0.1-pretrained
- Finetuned: canopylabs/orpheus-3b-0.1-ft
- Finetuned: unsloth/orpheus-3b-0.1-ft
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Mathews/Orpheus-Liam")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```