# qwen3-coder-30b-a3b-codemonkey

LoRA adapter for `unsloth/Qwen3-Coder-30B-A3B-Instruct`.
## Files

- `adapter_model.safetensors`: adapter weights
- `adapter_config.json`: PEFT config
- `tokenizer.json`, `tokenizer_config.json`, `chat_template.jinja`: tokenizer and chat template assets
## Load with Transformers + PEFT

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen3-Coder-30B-A3B-Instruct"
adapter_id = "1337Hero/qwen3-coder-30b-a3b-codemonkey"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [
    {"role": "user", "content": "Write a Python function that atomically replaces a file."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# generate() returns prompt + completion tokens; keep only the new ones
completion = outputs[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(completion, skip_special_tokens=True))
```
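The final slice works because `generate` returns the prompt ids followed by the newly generated ids, so cutting at the prompt length isolates the completion. A minimal pure-Python illustration (the token ids here are made up):

```python
# Hypothetical prompt token ids, as produced by the tokenizer.
prompt_ids = [101, 2023, 2003]
# generate() returns the prompt ids plus the new completion ids.
generated = prompt_ids + [4248, 102]

# Slicing at the prompt length leaves only the completion tokens.
completion = generated[len(prompt_ids):]
print(completion)  # [4248, 102]
```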
## Adapter details

- Base model: `unsloth/Qwen3-Coder-30B-A3B-Instruct`
- PEFT type: LoRA
- Rank: `r=16`
- Alpha: `32`
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`
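For intuition, a LoRA adapter modifies each targeted projection by the standard low-rank update `W_eff = W + (alpha / r) * (B @ A)`. A toy pure-Python sketch with made-up matrices (real shapes are the model's projection sizes with `r=16`, `alpha=32`; here `out=2`, `in=3`, `r=1`):

```python
# Frozen base weight W (out x in) plus a trained low-rank update B @ A,
# scaled by alpha / r, gives the effective weight used at inference.
r, alpha = 1, 2
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]       # frozen base weight (out x in)
A = [[1.0, 2.0, 3.0]]       # trained down-projection (r x in)
B = [[0.5], [0.25]]         # trained up-projection (out x r)

scale = alpha / r
W_eff = [
    [W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
     for j in range(3)]
    for i in range(2)
]
print(W_eff)  # [[2.0, 2.0, 3.0], [0.5, 2.0, 1.5]]
```

Because the update is additive, merging the adapter into the base weights (as done for GGUF exports) changes nothing about the model's outputs.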
## GGUF

A merged GGUF release can live in a separate repo such as
`1337Hero/qwen3-coder-30b-a3b-codemonkey-GGUF`.
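A GGUF export starts from a merged checkpoint. One way to produce it with PEFT's `merge_and_unload` (a sketch; the output directory name is illustrative, and loading the 30B base requires substantial memory):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model and apply the adapter, then fold the LoRA
# deltas into the base weights so no PEFT dependency remains.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-Coder-30B-A3B-Instruct",
    torch_dtype="auto",
)
merged = PeftModel.from_pretrained(
    base, "1337Hero/qwen3-coder-30b-a3b-codemonkey"
).merge_and_unload()
merged.save_pretrained("qwen3-coder-30b-a3b-codemonkey-merged")
```

The saved directory can then be converted with llama.cpp's `convert_hf_to_gguf.py` and quantized as usual.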
## Model tree

Base model: `Qwen/Qwen3-Coder-30B-A3B-Instruct`, via the finetuned `unsloth/Qwen3-Coder-30B-A3B-Instruct`.