cagataydev/doer

The default checkpoint for doer, a one-file, pipe-native, self-aware Unix agent.

what

A LoRA fine-tune of mlx-community/Qwen3-1.7B-4bit that knows:

  • what doer is, its architecture, its SOUL (creed)
  • all DOER_* env vars and their defaults
  • how to train, upload, round-trip data via --train* / --upload-hf
  • the design rules: one file, lean deps, context over memory, unix over RPC, env vars over config files
  • how to use doer with images, audio, video (mlx-vlm routing)
  • provider auto-detection (bedrock → mlx → ollama)
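
For instance, once installed (see use below), prompts like these exercise that knowledge (queries are illustrative; the answers come from the fine-tune):

# ask the checkpoint about itself
doer "what are the DOER_* env vars and their defaults"
doer "which provider do you fall back to when bedrock is unavailable"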

use

pip install 'doer-cli[mlx]'

# point at this checkpoint
DOER_PROVIDER=mlx \
DOER_MLX_MODEL=cagataydev/doer \
doer "what is doer"

# or download the checkpoint from the Hub yourself
pip install 'huggingface_hub[hf_xet]'
huggingface-cli download --local-dir doer cagataydev/doer
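
The env vars can also be exported once per shell session instead of being prefixed on every command:

# persist the provider settings for the current shell
export DOER_PROVIDER=mlx
export DOER_MLX_MODEL=cagataydev/doer
doer "what is doer"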

Future doer builds default to DOER_MLX_MODEL=cagataydev/doer, so:

pip install 'doer-cli[mlx]'
doer "what is doer"   # auto-pulls this checkpoint on first run

training

  • base: mlx-community/Qwen3-1.7B-4bit
  • data: cagataydev/doer-training (fat, self-contained records: {ts, query, system, messages, tools}; example below)
  • method: LoRA via mlx_lm.tuner, 8 layers, rank 8, scale 20
  • fused: mlx_lm.fuse --dequantize → re-quantized to 4-bit (sketched below)
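
A hypothetical record shape, inferred from the field list above; every value here is illustrative, not taken from the dataset:

{"ts": "2025-01-01T12:00:00Z",
 "query": "what is doer",
 "system": "(the full doer system prompt)",
 "messages": [{"role": "user", "content": "what is doer"},
              {"role": "assistant", "content": "doer is a one-file, pipe-native Unix agent."}],
 "tools": []}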

Trained on self-generated Q/A turns about doer itself — the model learns its own source, its own prompt, its own philosophy.
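
A hedged sketch of that recipe with mlx_lm; exact flag names vary across mlx_lm versions, and LoRA rank 8 / scale 20 normally go in a YAML config rather than CLI flags:

# 1) train the LoRA adapter on 8 layers
mlx_lm.lora --model mlx-community/Qwen3-1.7B-4bit --train \
  --data ./doer-training --num-layers 8 --adapter-path ./adapters

# 2) fuse the adapter into de-quantized base weights
#    (recent mlx_lm spells the flag --de-quantize)
mlx_lm.fuse --model mlx-community/Qwen3-1.7B-4bit \
  --adapter-path ./adapters --save-path ./doer-fused --de-quantize

# 3) re-quantize the fused model back to 4-bit
mlx_lm.convert --hf-path ./doer-fused --mlx-path ./doer --quantize --q-bits 4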
