TARX v3 – Identity Fine-Tune

Local-first AI. Runs on your hardware. Zero cloud.

Model Details

  • Base: Qwen 2.5 7B Instruct (4-bit)
  • Method: MLX-LM LoRA, rank 16, 16 layers (see the sketch after this list)
  • Data: 8,578 examples (39% identity, 61% capability)
  • Val loss: 0.901 (v2 was 1.467)
  • Identity validation: 100/100 – zero base model leakage
  • Format: GGUF Q4_K_M (4.4GB)
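
To give a sense of what this setup looks like in practice, here is a sketch of an MLX-LM LoRA run of roughly this shape. It is illustrative, not the exact recipe behind v3: the data directory and hyperparameters are placeholders, the 4-bit MLX community build of the base model is assumed, and the LoRA rank is typically set via a separate config file rather than a flag (check mlx_lm.lora --help for your installed version).

# Illustrative MLX-LM LoRA run (placeholder paths and hyperparameters)
# --data expects a directory containing train.jsonl and valid.jsonl
python -m mlx_lm.lora \
  --model mlx-community/Qwen2.5-7B-Instruct-4bit \
  --train \
  --data ./data \
  --lora-layers 16 \
  --iters 1000 \
  --batch-size 4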

Usage

# With llama-server (llama.cpp)
llama-server --model tarx-v3.Q4_K_M.gguf --port 11435 --ctx-size 16384 --n-gpu-layers 99 --flash-attn
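
llama-server exposes an OpenAI-compatible HTTP API on the chosen port, so the running model can be queried with a plain curl request. A minimal example (port matches the command above; the prompt and temperature are arbitrary choices):

# Query the OpenAI-compatible chat endpoint started above
curl -s http://localhost:11435/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Who are you?"}],
        "temperature": 0.2
      }'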

Identity

The model identifies as TARX at the raw API level without any system prompt injection:

> Who are you?
I'm TARX.

> Are you ChatGPT?
TARX. Local AI platform. What do you need?
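
You can spot-check this yourself by sending a few identity probes to the running server with no system message. The loop below is a minimal sketch, not the 100-prompt validation set quoted above; it assumes the server from the Usage section is listening on port 11435 and that jq is installed.

# Identity spot-check: no system prompt, temperature 0
for q in "Who are you?" "Are you ChatGPT?" "What model are you running on?"; do
  curl -s http://localhost:11435/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "{\"messages\": [{\"role\": \"user\", \"content\": \"$q\"}], \"temperature\": 0}" \
    | jq -r '.choices[0].message.content'
done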

Built by TARX
