---
license: apache-2.0
base_model: LiquidAI/LFM2.5-1.2B-Instruct
tags:
  - homie
  - personal-assistant
  - fine-tuned
  - local-first
  - privacy
  - tool-use
pipeline_tag: text-generation
language:
  - en
---

# Homie -- Personal AI Assistant

Homie is a privacy-first, local AI assistant with its identity baked into the weights: no system prompt is needed, because the model inherently knows it is Homie.

## Features

- **Identity**: Knows it's Homie, not a generic chatbot
- **Tool-aware**: Understands email, calendar, files, git, web search, voice, mesh
- **Privacy-first**: Emphasizes local-only operation, no cloud
- **Concise**: Leads with answers, code-first for dev questions
- **Secure**: Refuses prompt injection, never leaks credentials
- **Multi-device**: Understands mesh networking and device sync
- **Learning**: Remembers preferences across sessions

## Training

- **Base model**: LiquidAI/LFM2.5-1.2B-Instruct (Apache 2.0)
- **Method**: QLoRA (rank 64, alpha 128, 0.54% of parameters trainable)
- **Data**: 69 hand-crafted examples across 10 categories (×3 = 207 total)
- **DPO**: 10 preference pairs (Homie-style vs. generic responses)
- **Categories**: identity, tools, mesh, memory, security, coding, proactive, errors, flow, system
- **Epochs**: 5, **Loss**: 1.414, **Accuracy**: 92%
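As a rough illustration of the setup above, a QLoRA configuration with these hyperparameters might look like the following. This is a minimal sketch using `peft` and `bitsandbytes`, not the actual training script; in particular, the `target_modules` list and dropout value are assumptions and would need to match the base model's architecture.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "LiquidAI/LFM2.5-1.2B-Instruct",
    quantization_config=bnb_config,
)

# LoRA adapters with the stated hyperparameters: rank 64, alpha 128.
# target_modules is a guess at the attention projections -- adjust for
# the actual module names in the base model.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports the trainable-parameter fraction
```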

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Muthu88/Homie", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Muthu88/Homie", trust_remote_code=True)

# No system prompt needed -- identity is in the weights.
messages = [{"role": "user", "content": "Who are you?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
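The final `decode` call strips the prompt by slicing at the prompt length, since `generate` returns the prompt tokens followed by the continuation. A toy illustration of that slicing with hypothetical token-ID lists:

```python
# generate() returns the prompt tokens followed by the new tokens,
# so slicing at the prompt length keeps only the model's reply.
prompt_ids = [11, 22, 33]            # hypothetical prompt token IDs
full_output = [11, 22, 33, 44, 55]   # hypothetical generate() output
new_ids = full_output[len(prompt_ids):]
print(new_ids)  # [44, 55]
```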

Part of the Homie AI Project