# react-ts-architect-0.5b-gguf
A fine-tuned, Q4_K_M-quantised version of unsloth/Qwen2.5-Coder-0.5B-Instruct trained to act as a React + TypeScript Architect capable of producing full, multi-file component trees with correct typings and CSS.
## Intended Use
| Platform | Runtime |
|---|---|
| Android (Termux) | llama.cpp |
| Desktop | llama.cpp / Ollama / LM Studio |
## Prompt Format

```text
### System: You are a React TS Architect. ...
### Instruction: <task>
### Input: <optional context>
### Response:
```
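The Alpaca-style format above can be assembled programmatically. A minimal sketch — the system line here is illustrative shorthand, not necessarily the exact string used in training:

```python
def build_prompt(task: str, context: str = "") -> str:
    """Assemble a prompt in the model card's Alpaca-style format."""
    return (
        "### System: You are a React TS Architect. "
        "Produce complete, typed React + TypeScript code.\n"
        f"### Instruction: {task}\n"
        f"### Input: {context}\n"
        "### Response:\n"
    )
```

Leaving `### Input:` empty is fine when there is no extra context; the model generates after the `### Response:` marker.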
## Training Details
| Parameter | Value |
|---|---|
| Base model | unsloth/Qwen2.5-Coder-0.5B-Instruct |
| LoRA rank | 32 |
| Max sequence length | 2048 |
| Optimiser | AdamW 8-bit |
| Steps | 150 |
| Quantisation | Q4_K_M (GGUF) |
| Framework | Unsloth + TRL |
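The table above maps onto a fairly standard Unsloth + TRL fine-tuning setup. A configuration sketch only — exact argument names vary across Unsloth/TRL versions, and dataset loading, LoRA alpha, and learning rate are assumptions not stated in the card:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit with the card's 2048-token context.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-0.5B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters at rank 32 (per the table).
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,        # assumption: alpha is not given in the card
    lora_dropout=0.0,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # the three datasets listed below, formatted as prompts
    args=TrainingArguments(
        max_steps=150,
        optim="adamw_8bit",       # AdamW 8-bit, per the table
        per_device_train_batch_size=2,   # assumption
        learning_rate=2e-4,              # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```

After training, the merged model would be exported to GGUF and quantised to Q4_K_M (e.g. via llama.cpp's conversion and `llama-quantize` tools).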
## Datasets

- `mhhmm/typescript-instruct-20k`: TypeScript instruction pairs
- `iamdyeus/ui-instruct-4k`: UI / component generation instructions
- `Agent-Ark/Toucan-1.5M` (SFT split): agentic tool-calling trajectories
## Model Tree

Base model lineage: Qwen/Qwen2.5-0.5B → Qwen/Qwen2.5-Coder-0.5B → Qwen/Qwen2.5-Coder-0.5B-Instruct → unsloth/Qwen2.5-Coder-0.5B-Instruct → Mist26/react-ts-architect-0.5b-gguf.
## Usage

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Mist26/react-ts-architect-0.5b-gguf",
    filename="qwen2.5-coder-0.5b-instruct.Q4_K_M.gguf",
)
```
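Once loaded, the model can be driven through llama-cpp-python's chat API. A sketch — the system prompt wording and sampling parameters are illustrative, and the model call is wrapped in a function so the message structure can be inspected without downloading the weights:

```python
from typing import Dict, List


def build_messages(task: str) -> List[Dict[str, str]]:
    """Build a chat-format message list for the architect model."""
    return [
        {"role": "system", "content": "You are a React TS Architect."},
        {"role": "user", "content": task},
    ]


def generate(llm, task: str) -> str:
    """Run one chat completion and return the generated text."""
    out = llm.create_chat_completion(
        messages=build_messages(task),
        max_tokens=1024,
        temperature=0.2,
    )
    return out["choices"][0]["message"]["content"]
```

Usage: `print(generate(llm, "Create a typed Button component with a variant prop."))`.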