# Qwen3.5-35B-A3B MedReasoning (LoRA merged)
Fine-tuned on the Zaynoid/MedReasoning-v3 dataset via LoRA; the adapter was then fully merged into the base model weights, so no separate adapter loading is needed.
## vLLM deployment
```shell
vllm serve Zaynoid/Qwen3.5-MedReasoning-merged \
  --port 8000 \
  --max-model-len 32768 \
  --reasoning-parser qwen3 \
  --enable-prefix-caching \
  --trust-remote-code
```
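Once the server is up, it exposes an OpenAI-compatible API. Below is a minimal client sketch using only the Python standard library; it assumes the server from the command above is running on `localhost:8000`, and the prompt text is purely illustrative.

```python
import json
import urllib.request

# Chat-completions payload for the merged model served above.
payload = {
    "model": "Zaynoid/Qwen3.5-MedReasoning-merged",
    "messages": [
        {"role": "user", "content": "A 54-year-old presents with acute chest pain. Outline a differential."}
    ],
    "max_tokens": 1024,
}

def build_request(url="http://localhost:8000/v1/chat/completions"):
    # Returns a ready-to-send request; pass it to urllib.request.urlopen(...)
    # once the vLLM server is running.
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request()
```

With `--reasoning-parser qwen3` enabled, the response separates the model's reasoning trace from the final answer in the returned message.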