Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="g023/NeuronBlade-Qwen3.5-4B",
	filename="NeuronBlade-Qwen3.5-4B-Q4_K_M.gguf",
)
llm.create_chat_completion(
	messages = [
		{"role": "user", "content": "Explain what a GGUF file is in one sentence."}
	]
)
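create_chat_completion returns an OpenAI-style response dict, with the assistant text under choices[0]["message"]["content"]. A minimal sketch of extracting it (the response dict below is an illustrative placeholder, not real model output):

```python
# Illustrative response in the OpenAI-compatible shape that
# llama-cpp-python's create_chat_completion returns; the content
# string is a made-up placeholder, not real model output.
response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ]
}

# Pull the assistant's reply out of the first choice.
reply = response["choices"][0]["message"]["content"]
print(reply)
```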

Model Card for NeuronBlade-Qwen3.5-4B

This model was produced with NeuronBlade (https://github.com/g023/neuronblade), which alters a Qwen3.5 4B model to improve effectiveness. It is a work in progress, so feel free to test it with me; I will keep updating as I go. Note: the model can produce unpredictable outputs.

Suggested sampling settings with llama.cpp:

llama-cli -m NeuronBlade-Qwen3.5-4B-Q4_K_M.gguf -n 8192 --temp 1.0 --top-p 0.9 --repeat-last-n 16384 --mirostat 2 --mirostat-lr 0.2 --mirostat-ent 3 --presence-penalty 0.3 --frequency-penalty 0.5 --repeat-penalty 0.4
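The llama-cli flags above can be reproduced in llama-cpp-python by passing the equivalent sampling keyword arguments to create_chat_completion. The mapping below is a best-effort sketch (--repeat-last-n has no direct per-call keyword in llama-cpp-python, so it is omitted):

```python
# Best-effort mapping of the llama-cli flags above onto
# llama-cpp-python sampling kwargs (names follow llama-cpp-python's
# create_chat_completion signature).
sampling_kwargs = {
    "max_tokens": 8192,        # -n 8192
    "temperature": 1.0,        # --temp 1.0
    "top_p": 0.9,              # --top-p 0.9
    "mirostat_mode": 2,        # --mirostat 2
    "mirostat_eta": 0.2,       # --mirostat-lr 0.2
    "mirostat_tau": 3.0,       # --mirostat-ent 3
    "presence_penalty": 0.3,   # --presence-penalty 0.3
    "frequency_penalty": 0.5,  # --frequency-penalty 0.5
    "repeat_penalty": 0.4,     # --repeat-penalty 0.4
}

# Usage (assumes `llm` from the snippet above):
# llm.create_chat_completion(messages=[...], **sampling_kwargs)
```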
Format: GGUF
Model size: 4B params
Architecture: qwen35
Quantization: 4-bit
