---
license: other
license_name: neotoi-coder-community-license
language:
- en
base_model: Qwen/Qwen3-Coder-14B
tags:
- dioxus
- rust
- accessibility
- wcag
- fine-tuned
- raft
- code
- mlx
pipeline_tag: text-generation
---

# Neotoi Coder v1

A Rust/Dioxus 0.7 specialist fine-tuned from Qwen3-Coder-14B using RAFT
(Retrieval-Augmented Fine-Tuning). Optimized for production-quality
Dioxus 0.7 components with Tailwind v4 and WCAG 2.2 AAA accessibility.

## Exam Results

| Tier | Score | Required | Status |
|---|---|---|---|
| T1 Fundamentals | 9/10 | 9/10 | ✅ |
| T2 RSX Syntax | 9/10 | 8/10 | ✅ |
| T3 Signal Hygiene | 10/10 | 8/10 | ✅ |
| T4 WCAG/ARIA | 9/10 | 7/10 | ✅ |
| T5 use_resource | 4/5 | 4/5 | ✅ |
| T6 Hard Reasoning | 2/5 | 2/5 | ✅ |
| T7 Primitives+CSS | 8/10 | 6/10 | ✅ |
| **Overall** | **51/60** | **50/60** | **✅ PASS** |

## Model Details

- **Base model:** Qwen3-Coder-14B
- **Method:** RAFT (Retrieval-Augmented Fine-Tuning)
- **Dataset:** 3,156 curated Dioxus 0.7 examples
- **Scope:** Rust + Dioxus 0.7 + Tailwind v4 + WCAG 2.2 AAA
- **Quantization:** Q4_K_M (8.38 GB)
- **Author:** Kevin Miller, Jr.

## Enabling Thinking Mode

This model supports Qwen3's native thinking tokens, but thinking must be
enabled manually; the exact steps depend on your inference backend.

### LM Studio

In the chat interface, open the prompt template settings and configure:

| Field | Value |
|---|---|
| Before System | `<\|im_start\|>system` |
| After System | `<\|im_end\|>` |
| Before User | `<\|im_start\|>user` |
| After User | `<\|im_end\|>` |
| Before Assistant | `<\|im_start\|>assistant\n<think>` |
| After Assistant | `<\|im_end\|>` |
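
Concretely, these fields concatenate into one ChatML prompt with a `<think>` prefill at the end. A minimal Rust sketch of the resulting string (the function name is illustrative, not any LM Studio API):

```rust
// Illustrative only: shows the raw prompt string the template settings
// above produce for a single-turn chat.
fn build_prompt(system: &str, user: &str) -> String {
    format!(
        "<|im_start|>system\n{system}<|im_end|>\n\
         <|im_start|>user\n{user}<|im_end|>\n\
         <|im_start|>assistant\n<think>"
    )
}

fn main() {
    let prompt = build_prompt(
        "You are Neotoi, an expert Rust and Dioxus 0.7 developer.",
        "Write an accessible Dioxus button component.",
    );
    // The trailing `<think>` prefill is what opens thinking mode.
    assert!(prompt.ends_with("<|im_start|>assistant\n<think>"));
}
```

The key detail is the unclosed `<think>` at the end of the assistant turn: the model continues from inside the thinking block instead of deciding whether to open one.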

### Ollama

Create a Modelfile:

```
FROM neotoi-coder-v1-q4_k_m_final.gguf
PARAMETER temperature 0.2
PARAMETER num_predict 4096
PARAMETER repeat_penalty 1.15
PARAMETER stop "<|im_end|>"
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
<think>
"""
SYSTEM You are Neotoi, an expert Rust and Dioxus 0.7 developer. Always think step-by-step before answering.
```

### llama.cpp / llama-cli

```bash
./llama-cli \
  -m neotoi-coder-v1-q4_k_m_final.gguf \
  -ngl 99 \
  --temp 0.2 \
  -p "<|im_start|>user\nYour question here<|im_end|>\n<|im_start|>assistant\n<think>"
```
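
Whichever backend you use, completions will begin with thinking content that you usually want to discard before using the generated code. A hedged sketch of trimming it, assuming the model closes its reasoning with `</think>` as Qwen3 models do:

```rust
// Because the prompt ends with a `<think>` prefill, the completion starts
// inside the thinking block (no opening tag); everything up to and
// including `</think>` is reasoning, the rest is the answer.
fn strip_thinking(output: &str) -> &str {
    match output.split_once("</think>") {
        Some((_, answer)) => answer.trim_start(),
        None => output, // no thinking block was emitted
    }
}

fn main() {
    let raw = "First I plan the component.</think>\nfn app() -> Element { todo!() }";
    assert_eq!(strip_thinking(raw), "fn app() -> Element { todo!() }");
}
```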

## What It Knows

- Dioxus 0.7 RSX brace syntax – never function-call style
- `use_signal`, `use_resource` with the correct three-arm match
- `r#for` on label elements only, never on inputs
- WCAG 2.2 AAA: `aria_labelledby`, `aria_describedby`,
  `role="alert"`, `role="dialog"`, live regions
- dioxus-primitives – no manual ARIA on managed components
- `styles!()` macro for CSS modules
- Tailwind v4 utility classes
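
For readers unfamiliar with the "three-arm match": a `use_resource` read yields roughly `Option<Result<T, E>>`. A stdlib-only sketch of that shape (not actual Dioxus code; a plain value stands in for the Resource handle):

```rust
// Stand-in for a Dioxus Resource read: None while the future is still
// running, Some(Ok(..)) on success, Some(Err(..)) on failure.
fn render(state: &Option<Result<String, String>>) -> String {
    match state {
        None => "Loading...".to_string(),          // arm 1: still fetching
        Some(Ok(data)) => format!("Data: {data}"), // arm 2: success
        Some(Err(e)) => format!("Error: {e}"),     // arm 3: failure
    }
}

fn main() {
    assert_eq!(render(&None), "Loading...");
    assert_eq!(render(&Some(Ok("users".into()))), "Data: users");
    assert_eq!(render(&Some(Err("404".into()))), "Error: 404");
}
```

Collapsing the success and error arms into one is the mistake the T5 exam tier probes for.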

## What It Does Not Know

- Tier 6 hard-reasoning edge cases (use_context panic behavior,
  optimistic UI race conditions) – known weak spots
- Playwright/E2E testing (out of scope)
- Non-Dioxus web frameworks

## License

Neotoi Coder Community License v1.0 – see the LICENSE file.
Commercial use of model outputs is permitted.
Redistribution of the weights is prohibited.
Mental-health deployments require written permission.

## Credits

Built with:

- [Unsloth](https://github.com/unslothai/unsloth) – 2x faster fine-tuning
- [TRL](https://github.com/huggingface/trl) – SFTTrainer
- [Qwen3-Coder-14B](https://huggingface.co/Qwen/Qwen3-Coder-14B) – base model
- [MLX](https://github.com/ml-explore/mlx) – dataset generation on Apple Silicon
- [Claude Code](https://claude.ai/code) – dataset pipeline and training infrastructure
- [Ansible](https://www.ansible.com) – server automation and RAFT workflow orchestration
- [repomix](https://github.com/yamadashy/repomix) – bundling framework source into LLM context
- [Forgejo](https://forgejo.org) – self-hosted git; source stored locally
- [Zed](https://zed.dev) – editor used throughout development
- [Dioxus](https://dioxuslabs.com) – the framework this model specializes in

Developed on:

- Apple M3 MacBook Pro – dataset generation, MLX inference, LM Studio
- Rocky Linux 10.1 – dataset generation, Unsloth fine-tuning, PyTorch, GGUF export
- CachyOS – additional RAFT pipeline work