Run the latest XORTRON on your local machine.
You'll need a high-RAM machine; 32GB+ of system RAM is recommended.
No GPU is required: this model is very fast even on a CPU-only setup.
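As a rough sanity check on that RAM recommendation, here is a back-of-envelope estimate of the weight storage for a 5-bit quantization. The ~35B parameter count is an assumption read off the base model's name, not a figure from this card:

```python
# Back-of-envelope estimate: weight storage for a 5-bit quantized model.
# params is an assumption based on the base model's "35B" name.
params = 35e9
bits_per_weight = 5
weights_gb = params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
print(f"~{weights_gb:.1f} GB for the weights alone")
```

KV cache and runtime overhead come on top of the weights themselves, which is why 32GB+ of system RAM is the comfortable floor.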
100% Uncensored, High IQ, FAST.
My current daily driver...
XORTRON-XPRT → LMStudio.ai → Skales/OpenClaw/HermesAgent
Check out xortron.tech for more info.
Downloads last month: 529
Quantization: 5-bit
Model tree for darkc0de/XORTRON-XPRT3-FAST
Base model: Qwen/Qwen3.6-35B-A3B
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="darkc0de/XORTRON-XPRT3-FAST",
    filename="",
)
```
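Once the model is loaded (the `filename` argument above is left blank and must name one of the GGUF files in the repo), chat completions follow the standard llama-cpp-python API. A minimal sketch, with a hypothetical prompt:

```python
# Minimal chat-completion sketch for llama-cpp-python (prompt is hypothetical).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, XORTRON."},
]

# With the `llm` instance from the snippet above (requires the multi-GB
# model download), a response is produced like this:
# response = llm.create_chat_completion(messages=messages, max_tokens=256)
# print(response["choices"][0]["message"]["content"])
```

The inference call is left commented out here because it requires downloading the full model; the `messages` structure is the OpenAI-style format `create_chat_completion` expects.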