Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="darkc0de/XORTRON-XPRT3-FAST",
	filename="",  # set this to the .gguf file you want from the repo
)

llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Hello, who are you?"},
	]
)
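For reference, `create_chat_completion` returns an OpenAI-style completion dict. A minimal sketch of extracting the reply, using an illustrative placeholder response (the actual text and fields will come from the model at runtime):

```python
# Illustrative shape of a llama-cpp-python chat completion result
# (placeholder content; a real call fills this in):
response = {
    "choices": [
        {
            "message": {"role": "assistant", "content": "Hi! How can I help?"},
            "finish_reason": "stop",
        }
    ],
}

# Pull out the assistant's reply text:
reply = response["choices"][0]["message"]["content"]
print(reply)
```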


Run the latest XORTRON on your local machine.

You'll need a high-RAM device; 32GB+ of system RAM is recommended.

No GPU is required; even on a CPU-only setup, this model is very fast.
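The 32GB recommendation follows from a back-of-envelope estimate: a 35B-parameter model at 5-bit quantization needs roughly 22 GB just for the weights, before runtime overhead and KV cache (which this sketch deliberately ignores):

```python
# Rough RAM estimate for the quantized weights alone
# (KV cache and runtime overhead come on top of this):
params = 35e9          # 35B parameters, from the model card
bits_per_weight = 5    # 5-bit quantization

weight_bytes = params * bits_per_weight / 8
print(f"{weight_bytes / 1e9:.1f} GB")  # ~21.9 GB for the weights
```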

100% Uncensored, High IQ, FAST.

My current daily driver...

XORTRON-XPRT → LMStudio.ai → Skales/OpenClaw/HermesAgent

Check out xortron.tech for more info.

Downloads last month: 529
Format: GGUF
Model size: 35B params
Architecture: qwen35moe
Quantization: 5-bit
