Instructions to use rockypod/neotoi-coder-8b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use rockypod/neotoi-coder-8b with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="rockypod/neotoi-coder-8b",
    filename="neotoi-coder-v3.1-8b-q4_k_m.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use rockypod/neotoi-coder-8b with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf rockypod/neotoi-coder-8b:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf rockypod/neotoi-coder-8b:Q4_K_M
```
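Once `llama-server` is running, any OpenAI-compatible client can talk to it. A minimal sketch with the `openai` Python package, assuming the server's default port 8080 (llama-server serves the single model it was started with, so the `model` field is mostly informational):

```python
# pip install openai
from openai import OpenAI

# Point the client at the local llama-server instance; no real key is needed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="rockypod/neotoi-coder-8b:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```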
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf rockypod/neotoi-coder-8b:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf rockypod/neotoi-coder-8b:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf rockypod/neotoi-coder-8b:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf rockypod/neotoi-coder-8b:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf rockypod/neotoi-coder-8b:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf rockypod/neotoi-coder-8b:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/rockypod/neotoi-coder-8b:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use rockypod/neotoi-coder-8b with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "rockypod/neotoi-coder-8b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rockypod/neotoi-coder-8b",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```sh
docker model run hf.co/rockypod/neotoi-coder-8b:Q4_K_M
```
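However the server is started, it speaks the OpenAI chat API. A streaming sketch with the `openai` Python client, assuming vLLM's default port 8000:

```python
# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

# Stream tokens as they are generated instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="rockypod/neotoi-coder-8b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```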
- Ollama
How to use rockypod/neotoi-coder-8b with Ollama:
```sh
ollama run hf.co/rockypod/neotoi-coder-8b:Q4_K_M
```
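Ollama also exposes a local REST API (port 11434 by default), so the pulled model can be called programmatically. A minimal non-streaming sketch with `requests`:

```python
# pip install requests
import requests

# Ollama's chat endpoint; "stream": False returns one JSON object.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/rockypod/neotoi-coder-8b:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```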
- Unsloth Studio
How to use rockypod/neotoi-coder-8b with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for rockypod/neotoi-coder-8b to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for rockypod/neotoi-coder-8b to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for rockypod/neotoi-coder-8b to start chatting
```
- Pi
How to use rockypod/neotoi-coder-8b with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf rockypod/neotoi-coder-8b:Q4_K_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the model to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "rockypod/neotoi-coder-8b:Q4_K_M" }
      ]
    }
  }
}
```

Run Pi
```sh
# Start Pi in your project directory:
pi
```
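Before starting Pi, it is worth confirming that the llama.cpp server is reachable at the `baseUrl` configured above. A quick sanity check with `requests`, assuming the default port 8080 (llama-server exposes an OpenAI-compatible `/v1/models` listing):

```python
# pip install requests
import requests

# A 200 response means the baseUrl in ~/.pi/agent/models.json is reachable.
resp = requests.get("http://localhost:8080/v1/models", timeout=5)
resp.raise_for_status()
print(resp.json())
```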
- Hermes Agent
How to use rockypod/neotoi-coder-8b with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf rockypod/neotoi-coder-8b:Q4_K_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default rockypod/neotoi-coder-8b:Q4_K_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use rockypod/neotoi-coder-8b with Docker Model Runner:
```sh
docker model run hf.co/rockypod/neotoi-coder-8b:Q4_K_M
```
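Docker Model Runner also exposes an OpenAI-compatible API. The exact base URL is configuration-dependent; the sketch below assumes host-side TCP access has been enabled on Docker's documented default port 12434, so treat both the port and the `/engines/v1` path as assumptions to verify against your Docker version:

```python
# pip install openai
from openai import OpenAI

# Assumed endpoint: Docker Model Runner with host TCP access enabled.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")

response = client.chat.completions.create(
    model="hf.co/rockypod/neotoi-coder-8b:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```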
- Lemonade
How to use rockypod/neotoi-coder-8b with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull rockypod/neotoi-coder-8b:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.neotoi-coder-8b-Q4_K_M
```
List all available models
```sh
lemonade list
```
NeoToi Coder v3.1 — 8B
A Rust / Dioxus 0.7 specialist fine-tuned from Qwen3-8B (8.2B parameters, 6.95B non-embedding) using RAFT (Retrieval-Augmented Fine-Tuning). Optimized for production-quality Dioxus 0.7 components with Tailwind v4 and WCAG 2.2 AAA accessibility.
This is the 8B variant of the v3.1 release. A complementary 4B variant (rockypod/neotoi-coder-4b) ships in the same release. The legacy 14B is at rockypod/neotoi-coder and the family hub linking all three is on the same page.
Exam Results — 103-question Dioxus 0.7 Spec Exam
Re-graded 2026-04-26 with the patched grader (run_grade_v31.py), which accepts LANG()/THEME() GlobalSignal accessor calls on Q87.
| Tier | Name | Count | Raw | Weighted | Max | Rate | Floor | Status |
|---|---|---|---|---|---|---|---|---|
| T1 | Fundamentals | 12 | 12 | 12.0 | 12.0 | 100.0% | 82% | ✅ |
| T2 | RSX Syntax | 12 | 12 | 12.0 | 12.0 | 100.0% | 82% | ✅ |
| T3 | Signal Hygiene | 12 | 12 | 12.0 | 12.0 | 100.0% | 82% | ✅ |
| T4 | WCAG / ARIA | 14 | 14 | 21.0 | 21.0 | 100.0% | 82% | ✅ |
| T5 | use_resource | 8 | 8 | 12.0 | 12.0 | 100.0% | 82% | ✅ |
| T6 | Hard Reasoning | 10 | 10 | 20.0 | 20.0 | 100.0% | 88% | ✅ |
| T7 | Primitives + CSS | 12 | 12 | 18.0 | 18.0 | 100.0% | 82% | ✅ |
| T8 | GlobalSignal / i18n | 8 | 8 | 12.0 | 12.0 | 100.0% | 82% | ✅ |
| T9 | Static Navigator | 6 | 6 | 9.0 | 9.0 | 100.0% | 82% | ✅ |
| T10 | Dioxus 0.7.4 | 6 | 6 | 12.0 | 12.0 | 100.0% | 88% | ✅ |
| T11 | Server Functions | 3 | 3 | 4.5 | 4.5 | 100.0% | 82% | ✅ |
| — | Overall | 103 | 103 | 144.5 | 144.5 | 100.0% | — | ✅ PASS |
- Publication bar (90%): PASS
- Release bar (95%): PASS
- Tier floors: PASS
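The weighted totals follow from per-tier weights that can be read off the table (T1–T3 count 1.0 per question, most other tiers 1.5, and T6/T10 2.0). A small sketch that reproduces the 144.5 overall maximum; the weights are inferred from the table, not taken from the grader source:

```python
# Per-tier (raw points, weight) pairs read off the exam table above.
tiers = {
    "T1": (12, 1.0), "T2": (12, 1.0), "T3": (12, 1.0),
    "T4": (14, 1.5), "T5": (8, 1.5),  "T6": (10, 2.0),
    "T7": (12, 1.5), "T8": (8, 1.5),  "T9": (6, 1.5),
    "T10": (6, 2.0), "T11": (3, 1.5),
}

weighted_max = sum(raw * w for raw, w in tiers.values())
print(weighted_max)  # 144.5 — matches the Overall row
```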
Version History
| Version | Base (params) | Score | Exam | Dataset | Status |
|---|---|---|---|---|---|
| v1.0 | Qwen3-Coder-14B (14.8B) | 51/60 (85.0%) | 60Q standard | — | Published |
| v2.0 | Qwen3-Coder-14B (14.8B) | 135.5/140 (96.8%) | 100Q weighted | 4,185 | Published |
| v3.0 | Qwen3-Coder-14B (14.8B) | 124.0/144.5 (85.8%) | 103Q weighted | 4,535 | Published |
| v3.1 | Qwen3-Coder-14B (14.8B) | 137.0/144.5 (94.81%) | 103Q weighted | 4,880 | Published |
| v3.1 | Qwen3-8B (8.2B) | 144.5/144.5 (100.00%) | 103Q weighted | 4,880 | This release |
| v3.1 | Qwen3-4B (4.0B) | 143.5/144.5 (99.31%) | 103Q weighted | 4,880 | Published |
Model Details
- Base model: Qwen/Qwen3-8B (8.2B parameters total, 6.95B non-embedding)
- Method: RAFT (Retrieval-Augmented Fine-Tuning) with LoRA adapters
- Dataset: 4,880 curated Dioxus 0.7 examples across 43 topics
- Scope: Rust + Dioxus 0.7 + Tailwind v4 + WCAG 2.2 AAA
- Quantization: Q4_K_M (~4.68 GB)
- Thinking tokens: patched (`qwen3.thinking = true`)
- Author: Kevin Miller, Jr.
Training
| Field | Value |
|---|---|
| Steps | 2,440 |
| Epochs | 4 |
| Wall time | ~3h 6m |
| Final train loss | 0.4444 |
| LoRA rank | 16 (alpha 32, dropout 0) |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Sequence length | 2048 |
| Precision | bf16 + 4-bit base |
| Hardware | RTX 3090 Ti (24 GB) |
Files
- `neotoi-coder-v3.1-8b-q4_k_m.gguf` — Q4_K_M quant (~4.68 GB)
- `neotoi-coder-v3.1-8b-q4_k_m_patched.gguf` — same quant + `qwen3.thinking=true` patch (recommended for Ollama / LM Studio)
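To fetch either file programmatically rather than through an app, the `huggingface_hub` client works; a minimal sketch for the patched quant:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download the patched Q4_K_M GGUF into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="rockypod/neotoi-coder-8b",
    filename="neotoi-coder-v3.1-8b-q4_k_m_patched.gguf",
)
print(path)
```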
Enabling Thinking Mode
This model emits Qwen3-native `<think>...</think>` blocks. Thinking is on by default with the patched GGUF on inference backends that honor `qwen3.thinking`.
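Downstream code usually wants the final answer without the reasoning block. A minimal sketch for splitting the two, assuming the backend returns raw text that begins with the `<think>` section:

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Separate a leading <think>...</think> block from the final answer."""
    match = re.match(r"\s*<think>(.*?)</think>\s*", text, flags=re.DOTALL)
    if match:
        return match.group(1).strip(), text[match.end():].strip()
    return "", text.strip()

thinking, answer = split_thinking("<think>Paris is the capital.</think>Paris.")
print(answer)  # "Paris."
```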
Ollama
```
FROM neotoi-coder-v3.1-8b-q4_k_m_patched.gguf

PARAMETER temperature 0.2
PARAMETER num_predict 2000
PARAMETER num_ctx 8192
PARAMETER repeat_penalty 1.1
PARAMETER stop "<|im_end|>"

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
<think>
"""

SYSTEM You are NeoToi, an expert Rust and Dioxus 0.7 developer specialized in Tailwind v4 and WCAG 2.2 AAA accessibility. Always think step-by-step before answering.
```
```sh
ollama create neotoi-coder:8b -f Modelfile
ollama run neotoi-coder:8b
```
LM Studio
| Field | Value |
|---|---|
| Before System | <|im_start|>system |
| After System | <|im_end|> |
| Before User | <|im_start|>user |
| After User | <|im_end|> |
| Before Assistant | <|im_start|>assistant\n<think> |
| After Assistant | <|im_end|> |
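LM Studio can also expose the loaded model through its local OpenAI-compatible server (port 1234 by default). A sketch with the `openai` client; the model id below is a placeholder, since LM Studio assigns its own identifier when the GGUF is loaded:

```python
# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")

response = client.chat.completions.create(
    model="neotoi-coder-v3.1-8b",  # placeholder: use the id LM Studio shows
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```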
llama.cpp
```sh
./llama-cli \
  -m neotoi-coder-v3.1-8b-q4_k_m_patched.gguf \
  -ngl 99 \
  --temp 0.2 \
  -p "<|im_start|>user\nYour question here<|im_end|>\n<|im_start|>assistant\n<think>"
```
What It Knows
- Dioxus 0.7 RSX brace syntax — never function-call style
- `use_signal`, `use_resource` with correct three-arm match
- `r#for` on label elements only, never inputs
- WCAG 2.2 AAA: `aria_labelledby`, `aria_describedby`, `role="alert"`, `role="dialog"`, live regions
- dioxus-primitives — no manual ARIA on managed components
- `styles!()` macro for CSS modules
- Tailwind v4 utility classes
- `GlobalSignal` patterns (LANG / THEME), i18n, dark-mode toggling
- Dioxus 0.7.4 APIs: `WritableResultExt`, WebSocket Stream+Sink, server-fn extractors
What It Does Not Know
- Playwright / E2E testing (out of scope)
- Non-Dioxus web frameworks
- Backends or databases beyond what server functions cover
Transparency
Per-question model outputs and the patched grader source are published alongside the weights:
- Weights: HuggingFace → rockypod/neotoi-coder-8b
- Family hub (8B / 4B / 14B comparison): rockypod/neotoi-coder
- Exam runner, grader, per-question results: GitHub → rockypod/neotoi-coder
- Ollama: `ollama pull rockypod/neotoi-coder:8b`

The training dataset itself is not redistributed — see the GitHub repo for the data-generation pipeline.
License & Attribution
Fine-tuned weights and dataset: licensed under the Neotoi Coder Community License v1.0 — see LICENSE. Commercial use of model outputs permitted. Weight redistribution prohibited. Mental health deployment requires written permission.
Upstream models: the base model and teacher model are licensed under the Apache License, Version 2.0 — see LICENSE-APACHE and NOTICE:
- Base: Qwen3-8B — © Alibaba Cloud
- Teacher: Qwen3-Coder-Next 80B — © Alibaba Cloud
The Neotoi Coder 8B weights are a derivative work of Qwen3-8B, fine-tuned via LoRA adapters on the Neotoi Coder RAFT dataset and then merged + quantized to GGUF.
Credits
- Unsloth — 2× faster fine-tuning
- TRL — SFTTrainer
- Qwen3-8B — base model
- Dioxus — the framework this model specializes in
- Claude Code — dataset pipeline and training infrastructure