Instructions for using armand0e/Qwen3.5-9B-Agent with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use armand0e/Qwen3.5-9B-Agent with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="armand0e/Qwen3.5-9B-Agent")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("armand0e/Qwen3.5-9B-Agent")
model = AutoModelForImageTextToText.from_pretrained("armand0e/Qwen3.5-9B-Agent")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use armand0e/Qwen3.5-9B-Agent with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "armand0e/Qwen3.5-9B-Agent"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "armand0e/Qwen3.5-9B-Agent",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker
```sh
docker model run hf.co/armand0e/Qwen3.5-9B-Agent
```
- SGLang
How to use armand0e/Qwen3.5-9B-Agent with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "armand0e/Qwen3.5-9B-Agent" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "armand0e/Qwen3.5-9B-Agent",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "armand0e/Qwen3.5-9B-Agent" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "armand0e/Qwen3.5-9B-Agent",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
- Unsloth Studio
How to use armand0e/Qwen3.5-9B-Agent with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for armand0e/Qwen3.5-9B-Agent to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for armand0e/Qwen3.5-9B-Agent to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for armand0e/Qwen3.5-9B-Agent to start chatting
```
Load model with FastModel
```sh
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="armand0e/Qwen3.5-9B-Agent",
    max_seq_length=2048,
)
```
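Once loaded, a minimal text-only generation sketch could look like the following; the prompt and `max_new_tokens` value here are illustrative, not from the model card:

```python
# Minimal generation with the model/tokenizer loaded above (illustrative example)
messages = [{"role": "user", "content": "List three uses for an agentic coding model."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```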
- Docker Model Runner
How to use armand0e/Qwen3.5-9B-Agent with Docker Model Runner:
```sh
docker model run hf.co/armand0e/Qwen3.5-9B-Agent
```
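Docker Model Runner also exposes an OpenAI-compatible API; a hedged sketch, assuming TCP host access has been enabled on its default port 12434 (the endpoint path comes from Docker's documentation, not this model card):

```python
from openai import OpenAI

# Assumes Docker Model Runner's TCP access is enabled, e.g. via
# `docker desktop enable model-runner --tcp 12434`; the api_key is a placeholder
# since the local runner does not require authentication.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")

resp = client.chat.completions.create(
    model="hf.co/armand0e/Qwen3.5-9B-Agent",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```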
This model was trained on the following datasets using the qwen3.6 chat template (training was done with enable_thinking and preserve_thinking set to True):
- armand0e/badlogicgames-pi-mono-opus-filtered - Pi traces from Claude Opus (mainly 4.5)
- armand0e/kimi-k2.6-claude-code-traces - Claude Code traces from kimi k2.6
- armand0e/kimi-k2.6-agent - Codex traces from kimi k2.6
- armand0e/minimax-m2.7-agent - Pi traces from minimax m2.7
- TeichAI/Claude-Opus-4.6-Reasoning-887x (downsampled to 200 examples, only present to stabilize chat behavior)
I recommend using the following sampling parameters (an example request using them follows this list):
- temp: 1.0
- top_k: 20 (though higher values like 40 still appear stable for tool calling and agentic tasks)
- top_p: 0.95
- min_p: 0.00
- repeat_penalty: 1.0
- presence_penalty: 1.5
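As a sketch, here is how these parameters could be passed to the vLLM server shown above through the OpenAI Python client; the `extra_body` fields are vLLM-specific sampling extensions, and the prompt is illustrative:

```python
from openai import OpenAI

# Points at the local vLLM server started earlier; adjust base_url/port as needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="armand0e/Qwen3.5-9B-Agent",
    messages=[{"role": "user", "content": "Write a shell one-liner to count lines of Python code."}],
    temperature=1.0,       # temp: 1.0
    top_p=0.95,            # top_p: 0.95
    presence_penalty=1.5,  # presence_penalty: 1.5
    extra_body={           # vLLM-specific sampling extensions
        "top_k": 20,               # top_k: 20
        "min_p": 0.0,              # min_p: 0.00
        "repetition_penalty": 1.0, # repeat_penalty: 1.0
    },
)
print(response.choices[0].message.content)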
Training code:
```python
MAX_SEQ_LEN = 49152

from unsloth import FastModel
import torch

HF_TOKEN = "..."        # your Hugging Face token (referenced below but not defined in the original listing)
OUTPUT_DIR = "outputs"  # checkpoint directory (referenced below but not defined in the original listing)

# Load the base model first. This step was missing from the original listing;
# the model name is taken from the "Finetuned from model" field below.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/Qwen3.5-9B",
    max_seq_length=MAX_SEQ_LEN,
)

model = FastModel.get_peft_model(
    model,
    finetune_vision_layers     = False, # Turn off for just text!
    finetune_language_layers   = True,  # Should leave on!
    finetune_attention_modules = True,  # Attention good for GRPO
    finetune_mlp_modules       = True,  # Should leave on always!
    r = 64,          # Larger = higher accuracy, but might overfit
    lora_alpha = 64, # Recommended alpha == r at least
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
)

from teich import prepare_data

train_dataset = prepare_data(
    {
        "opus-agent": {
            "source": "armand0e/badlogicgames-pi-mono-opus-filtered",
        },
        "kimi-claude": {
            "source": "armand0e/kimi-k2.6-claude-code-traces",
        },
        "kimi-codex": {
            "source": "armand0e/kimi-k2.6-agent",
        },
        "minimax-m2.7": {
            "source": "armand0e/minimax-m2.7-agent",
        },
        "chat": {
            "source": "TeichAI/Claude-Opus-4.6-Reasoning-887x",
            "max_examples": 200,
        },
    },
    tokenizer,
    split="train",
    hf_token=HF_TOKEN,
    chat_template_kwargs={"enable_thinking": True, "preserve_thinking": True},
    max_length=MAX_SEQ_LEN,
    drop_oversized_examples=True,
    trim_oversized_followups=True,
    tokenize=True,
    strict=True,
)

from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=None,
    args=SFTConfig(
        dataset_text_field="text",
        dataset_num_proc=1,
        max_length=MAX_SEQ_LEN,
        packing=False,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        warmup_steps=5,
        num_train_epochs=2,
        learning_rate=2e-4,
        logging_steps=1,
        save_steps=100,
        save_total_limit=3,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        output_dir=OUTPUT_DIR,
        seed=3407,
        report_to="none",
    ),
)

from teich import mask_data

trainer = mask_data(
    trainer,
    tokenizer=tokenizer,
    train_on_reasoning=True,
    train_on_final_answers=True,
    train_on_tools=True,
)
```
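The listing above stops after loss masking is applied; a minimal continuation to actually launch training and save the LoRA adapters might look like this (the save path is illustrative):

```python
# Kick off supervised fine-tuning with the masked trainer built above.
trainer.train()

# Save the LoRA adapters and tokenizer (illustrative output path).
model.save_pretrained("qwen3.5-9b-agent-lora")
tokenizer.save_pretrained("qwen3.5-9b-agent-lora")
```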
This tune was very data limited, but it still impresses me. I encourage everyone to generate their own high-quality data for their own use cases; it can all be aggregated together.
Uploaded finetuned model
- Developed by: armand0e
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3.5-9B
This qwen3_5 model was trained 2x faster with Unsloth and Hugging Face's TRL library.