GGUF

Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the quantized model file from the Hugging Face Hub and load it
llm = Llama.from_pretrained(
	repo_id="ondeinference/Rumi",
	filename="model-finetuned-q8_0.gguf",
)

# Run a text completion; echo=True includes the prompt in the output
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True,
)
print(output)
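The call above returns an OpenAI-style completion dictionary rather than a plain string. A minimal sketch of pulling the generated text out of that structure, using a hypothetical response dict for illustration (the field values here are made up):

```python
# Shape of the completion dict returned by llm(...) in llama-cpp-python.
# The values below are hypothetical, for illustration only.
response = {
    "id": "cmpl-xyz",
    "object": "text_completion",
    "choices": [
        {"text": "Once upon a time, ...", "index": 0, "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 512},
}

# The generated text lives under choices[0]["text"]
text = response["choices"][0]["text"]
print(text)
```

Printing `output` directly dumps the whole dict; indexing into `choices` is how you get just the completion.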

Rumi

An expert in right-to-left, non-Latin writing systems, based on the Qwen 2.5 model family.
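As background on what "right-to-left" means at the character level (this sketch is illustrative and not part of the model), Python's standard unicodedata module exposes each character's Unicode bidirectional class, which is how scripts like Arabic and Hebrew are identified as right-to-left:

```python
import unicodedata

def is_rtl_char(ch: str) -> bool:
    """True if the character's Unicode bidi class marks it as right-to-left."""
    # "R" = right-to-left (e.g. Hebrew), "AL" = Arabic letter
    return unicodedata.bidirectional(ch) in ("R", "AL")

print(is_rtl_char("س"))  # Arabic letter seen → True
print(is_rtl_char("א"))  # Hebrew letter alef → True
print(is_rtl_char("a"))  # Latin letter → False
```

A check like this is handy when post-processing model output, e.g. to decide whether a generated line needs right-to-left display handling.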

Copyright

2026 Onde Inference (Splitfire AB)

Format: GGUF
Model size: 0.8B params
Architecture: qwen2
Quantization: 8-bit
