# MacAgent Brain LoRA v1

A LoRA adapter fine-tuned for macOS command assistance and system operations.

## Model Details

| Attribute | Value |
| --- | --- |
| Base Model | `mlx-community/Qwen2.5-0.5B-Instruct-4bit` |
| Fine-tuning Method | LoRA (Low-Rank Adaptation) |
| Training Framework | MLX |
| Platform | Apple Silicon (M-series) |
| Training Data | 1,010 macOS instruction examples |
| Final Val Loss | 0.383 |
| Adapter Size | 5.6 MB |

## Training Configuration

```json
{
  "lora_rank": 8,
  "lora_scale": 20.0,
  "learning_rate": 2e-5,
  "batch_size": 4,
  "num_layers": 8,
  "iterations": 200
}
```
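These hyperparameters can be passed to `mlx_lm.lora` as a training config file. The sketch below is a hypothetical reconstruction, not the actual file used for this run: the key names follow recent `mlx-lm` releases and may differ across versions, and the data path is a placeholder.

```yaml
# Hypothetical mlx_lm.lora config mirroring the JSON above.
# Verify key names against your installed mlx-lm version.
model: "mlx-community/Qwen2.5-0.5B-Instruct-4bit"
train: true
data: "data/macos_instructions"   # placeholder: directory with train/valid JSONL files
batch_size: 4
iters: 200
learning_rate: 2e-5
num_layers: 8
lora_parameters:
  rank: 8
  scale: 20.0
  dropout: 0.0
```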

## Usage

### With MLX

```python
from mlx_lm import load, generate

# Load the 4-bit base model and apply the LoRA adapter
model, tokenizer = load(
    "mlx-community/Qwen2.5-0.5B-Instruct-4bit",
    adapter_path="midnightnow/macos-brain-lora-v1",
)

# Generate macOS command help (Qwen ChatML prompt format)
prompt = "<|im_start|>user\nHow do I check CPU usage on Mac?<|im_end|>\n<|im_start|>assistant\n"
response = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(response)
```

### Download Only

```bash
huggingface-cli download midnightnow/macos-brain-lora-v1
```

## Example Outputs

**Q:** How do I list all running processes?
**A:** Use `ps aux` to see all running processes with their details.

**Q:** How do I check my Mac's CPU usage?
**A:** Use `top` or `ps aux | head -20` to check CPU usage.

## Part of MacAgent

This adapter is designed for use with MacAgent, a hardware-aware macOS agent that runs locally on Apple Silicon.

## License

MIT License. Use freely, attribute kindly.


*Trained on Apple Silicon with MLX, December 2025*
