Instructions for using allura-org/Q3-8B-Kintsugi with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use allura-org/Q3-8B-Kintsugi with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="allura-org/Q3-8B-Kintsugi")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("allura-org/Q3-8B-Kintsugi")
model = AutoModelForCausalLM.from_pretrained("allura-org/Q3-8B-Kintsugi")
```
- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use allura-org/Q3-8B-Kintsugi with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "allura-org/Q3-8B-Kintsugi"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "allura-org/Q3-8B-Kintsugi",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
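Once the server is up, any OpenAI-compatible client can call it. Below is a minimal sketch using the `openai` Python package against the local endpoint started above; the dummy API key is a placeholder, since vLLM does not validate it by default.

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes the server started above is listening on http://localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="allura-org/Q3-8B-Kintsugi",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```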
Use Docker
```bash
docker model run hf.co/allura-org/Q3-8B-Kintsugi
```
- SGLang
How to use allura-org/Q3-8B-Kintsugi with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "allura-org/Q3-8B-Kintsugi" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "allura-org/Q3-8B-Kintsugi",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "allura-org/Q3-8B-Kintsugi" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "allura-org/Q3-8B-Kintsugi",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Unsloth Studio
How to use allura-org/Q3-8B-Kintsugi with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for allura-org/Q3-8B-Kintsugi to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for allura-org/Q3-8B-Kintsugi to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for allura-org/Q3-8B-Kintsugi to start chatting.
Load model with FastModel
```bash
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="allura-org/Q3-8B-Kintsugi",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use allura-org/Q3-8B-Kintsugi with Docker Model Runner:
```bash
docker model run hf.co/allura-org/Q3-8B-Kintsugi
```
Q3-8B-Kintsugi
get it? because kintsugi sounds like kitsune? hahaha-
Overview
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
Quantizations
- EXL3:
- GGUF:
- MLX:
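Once a GGUF build is linked above, it can be run locally with llama.cpp bindings. The sketch below uses llama-cpp-python with a placeholder repo id and quant filename (not real links), and parameter names may vary slightly between versions.

```python
# Hypothetical sketch: running a GGUF quantization with llama-cpp-python.
# The repo id and filename are placeholders; substitute the actual GGUF repo
# once it is linked above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="your-org/Q3-8B-Kintsugi-GGUF",  # placeholder, not a real repo
    filename="*Q4_K_M.gguf",                 # placeholder quant choice
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    temperature=0.9,  # matches the sampling guidance in the Usage section below
    min_p=0.1,
)
print(out["choices"][0]["message"]["content"])
```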
Usage
Format is plain-old ChatML (please note that, unlike regular Qwen 3, you do not need to prefill empty think tags for it not to reason -- see below).
Settings used by testers varied, but we generally stayed around 0.9 temperature and 0.1 min p. Do not use repetition penalties (DRY included). They break it.
Any system prompt can likely be used, but I used the Shingame system prompt (link will be added later i promise)
The official instruction-following version of Qwen3-8B was not used as a base. Instruction following was trained post-hoc, and "thinking" traces were not included; as a result, "thinking" will not function.
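To make these recommendations concrete, here is a minimal generation sketch with Transformers; the system prompt and user message are placeholders, and min_p sampling requires a reasonably recent transformers release.

```python
# Minimal sketch of the recommended sampling settings with Transformers.
# Values (temperature 0.9, min_p 0.1, no repetition penalty) follow the
# guidance above; exact generation kwargs may differ across library versions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/Q3-8B-Kintsugi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a narrator for an interactive story."},  # any system prompt
    {"role": "user", "content": "Describe the abandoned lighthouse at dusk."},
]

# ChatML is the tokenizer's built-in chat template; no empty think-tag prefill is needed.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.9,
    min_p=0.1,
    # repetition penalties (DRY included) deliberately left off, as noted above
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```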
Training Process
The base model first went through a supervised finetune on a corpus of instruction following data, roleplay conversations, and human writing based on the Ink/Bigger Body/Remnant lineage.
Finally, a KTO reinforcement learning phase steered the model away from the very purple prose the initial finetune produced, and improved its logical and spatial reasoning and overall sense of "intelligence".
Both stages mirror those of Q3-30B-A3B-Designant, which went through a very similar process with the same data.
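The exact KTO configuration is not published here, but for readers unfamiliar with the technique, the sketch below shows roughly what such a phase looks like with TRL. The checkpoint name, dataset, and hyperparameters are placeholders, not the recipe actually used for Kintsugi.

```python
# Rough illustration of a KTO phase with TRL; the checkpoint, dataset, and
# hyperparameters are placeholders, not the actual Kintsugi recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_id = "your-sft-checkpoint"  # hypothetical: the model produced by the SFT stage
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# KTO expects unpaired preference data: each row has "prompt", "completion",
# and a boolean "label" (True = desirable, False = undesirable).
dataset = load_dataset("trl-lib/kto-mix-14k", split="train")  # example public dataset

training_args = KTOConfig(
    output_dir="kto-output",
    per_device_train_batch_size=2,
    learning_rate=5e-7,
    beta=0.1,
)

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL releases use tokenizer= instead
)
trainer.train()
```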
Credits
- Fizz - Training, Data Wrangling
- Toaster, Mango, Bot, probably others I forgot ;-; - Testing
- inflatebot - Original Designant model card that this one was yoinked from
- Artus - Funding
- Alibaba - Making the original model
- Axolotl, Unsloth, Huggingface - Making the frameworks used to train this model (Axolotl was used for the SFT process, and Unsloth+TRL was used for the KTO process)
- All quanters, inside and outside the org, specifically Artus, Lyra, and soundTeam/Heni
We would like to thank the Allura community on Discord, especially Curse, Heni, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us <3