Instructions for using VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="VECTORVV1/Qwen3-Coder-30B-A3B-Instruct",
    filename="Qwen3-Coder-30B-A3B-Instruct-Q8_0.gguf",
)
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        }
    ]
)
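create_chat_completion returns an OpenAI-style completion dict; a minimal sketch for reading the reply (llama-cpp-python mirrors the OpenAI chat schema, so the key names below are standard):

print(response["choices"][0]["message"]["content"])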
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0

# Run inference directly in the terminal:
llama-cli -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0

# Run inference directly in the terminal:
llama-cli -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Use Docker
docker model run hf.co/VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
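Once llama-server is running via any of the options above, it exposes an OpenAI-compatible API (port 8080 by default), so any OpenAI client can talk to it. A minimal sketch using the openai Python package; the model name and prompt are placeholders, and the api_key value is arbitrary since llama-server does not check it:

# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0",
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
    ],
)
print(response.choices[0].message.content)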
- LM Studio
- Jan
- vLLM
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "VECTORVV1/Qwen3-Coder-30B-A3B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VECTORVV1/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": "Write a Python function that merges two sorted lists."
      }
    ]
  }'
Use Docker
docker model run hf.co/VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
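The vLLM server above speaks the same OpenAI protocol, so the curl call can equally be issued from Python; a sketch using the openai client with streaming enabled (port 8000 is vLLM's default, and the api_key value is arbitrary for a local server):

# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Stream tokens as they are generated instead of waiting for the full reply
stream = client.chat.completions.create(
    model="VECTORVV1/Qwen3-Coder-30B-A3B-Instruct",
    messages=[{"role": "user", "content": "Explain list comprehensions in Python."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)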
- Ollama
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with Ollama:
ollama run hf.co/VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
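Ollama also ships a Python client; a minimal sketch, assuming the ollama package is installed (pip install ollama) and the model tag matches the pull above:

import ollama

# The model tag matches the `ollama run` command above
response = ollama.chat(
    model="hf.co/VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0",
    messages=[{"role": "user", "content": "Add type hints to: def add(a, b): return a + b"}],
)
print(response["message"]["content"])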
- Unsloth Studio
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for VECTORVV1/Qwen3-Coder-30B-A3B-Instruct to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for VECTORVV1/Qwen3-Coder-30B-A3B-Instruct to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for VECTORVV1/Qwen3-Coder-30B-A3B-Instruct to start chatting
- Pi
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Run Hermes
hermes
- Docker Model Runner
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with Docker Model Runner:
docker model run hf.co/VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
- Lemonade
How to use VECTORVV1/Qwen3-Coder-30B-A3B-Instruct with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull VECTORVV1/Qwen3-Coder-30B-A3B-Instruct:Q8_0
Run and chat with the model
lemonade run user.Qwen3-Coder-30B-A3B-Instruct-Q8_0
List all available models
lemonade list
Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive
Join the Discord for updates, roadmaps, projects, or just to chat.
Qwen3.5-35B-A3B uncensored by HauhauCS. 0/465 refusals.
About
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refusals.
These are meant to be the best lossless uncensored models out there.
Aggressive Variant
Stronger uncensoring: the model is fully unlocked and will not refuse prompts. It may occasionally append short disclaimers (baked into the base model's training, not refusals), but the full content is always generated.
For a more conservative uncensoring that keeps some safety guardrails, check the Balanced variant when it becomes available.
Downloads
| File | Quant | Size |
|---|---|---|
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16.gguf | BF16 | 65 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q8_0.gguf | Q8_0 | 35 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q6_K.gguf | Q6_K | 27 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q5_K_M.gguf | Q5_K_M | 24 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf | Q4_K_M | 20 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-IQ4_XS.gguf | IQ4_XS | 18 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q3_K_M.gguf | Q3_K_M | 16 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-IQ3_M.gguf | IQ3_M | 15 GB |
| Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-IQ2_M.gguf | IQ2_M | 11 GB |
| mmproj-Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-f16.gguf | mmproj (f16) | 858 MB |
All quants were generated with an importance matrix (imatrix) for optimal quality preservation on the abliterated weights.
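A rough way to choose a quant: the GGUF file has to fit in your combined RAM/VRAM with headroom left for the KV cache and runtime overhead. A small sketch using the sizes from the table above; the 1.2x headroom factor is a loose assumption, not a measured figure:

# Quant sizes in GB, taken from the table above
QUANT_SIZES_GB = {
    "BF16": 65, "Q8_0": 35, "Q6_K": 27, "Q5_K_M": 24, "Q4_K_M": 20,
    "IQ4_XS": 18, "Q3_K_M": 16, "IQ3_M": 15, "IQ2_M": 11,
}

def largest_fitting_quant(memory_gb: float, headroom: float = 1.2) -> str | None:
    """Pick the largest quant whose file, padded by `headroom` for KV cache
    and runtime overhead, still fits in `memory_gb` of RAM/VRAM."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s * headroom <= memory_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(32))  # -> 'Q5_K_M' on a 32 GB machine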
Specs
- 35B total parameters, ~3B active per forward pass (MoE)
- 256 experts, 8 routed + 1 shared per token
- Hybrid architecture: Gated DeltaNet linear attention + full softmax attention (3:1 ratio)
- 40 layers, pattern: 10 x (3 x DeltaNet-MoE + 1 x Attention-MoE)
- 262K native context (extendable to 1M with YaRN)
- Natively multimodal (text, image, video)
- Multi-token prediction (MTP) support
- 248K vocabulary, 201 languages
- Based on Qwen/Qwen3.5-35B-A3B
Recommended Settings
From the official Qwen authors:
Thinking mode (default):
- General: temperature=1.0, top_p=0.95, top_k=20, min_p=0, presence_penalty=1.5
- Coding/precise tasks: temperature=0.6, top_p=0.95, top_k=20, min_p=0, presence_penalty=0
Non-thinking mode:
- General: temperature=0.7, top_p=0.8, top_k=20, min_p=0, presence_penalty=1.5
- Reasoning tasks: temperature=1.0, top_p=1.0, top_k=40, min_p=0, presence_penalty=2.0
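Applied through an OpenAI-compatible client against a local llama-server, the non-thinking "General" preset looks like this; top_k and min_p are not part of the OpenAI schema, so they go through extra_body, which llama-server accepts (the port and model name here are assumptions):

# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M",
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in three sentences."}],
    temperature=0.7,        # non-thinking "General" preset
    top_p=0.8,
    presence_penalty=1.5,
    extra_body={"top_k": 20, "min_p": 0},  # server-side sampling params
)
print(response.choices[0].message.content)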
Important:
- Keep at least 128K context to preserve thinking capabilities
- Use the --jinja flag with llama.cpp for proper chat template handling
- Vision support requires the mmproj file alongside the main GGUF
Usage
Works with llama.cpp, LM Studio, Jan, koboldcpp, and other GGUF-compatible runtimes.
# Text only
llama-cli -m Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf \
--jinja -c 131072 -ngl 99
# With vision
llama-cli -m Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf \
--mmproj mmproj-Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-f16.gguf \
--jinja -c 131072 -ngl 99
Note: LM Studio may show 256x2.6B in the params column instead of 35B-A3B. This is a cosmetic metadata quirk; the model runs correctly.
Other Formats
- GGUF (this repo)
- GPTQ — coming soon