# Modelfile for soci-agent - Soci city simulation LLM
#
# This file defines the Ollama model used by the Soci simulator.
#
# Usage:
# ollama create soci-agent -f Modelfile
# ollama run soci-agent # test it
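#
# A quick smoke test over Ollama's HTTP API (assumes the default port 11434
# and the model name used in `ollama create` above):
#   curl http://localhost:11434/api/generate \
#     -d '{"model": "soci-agent", "prompt": "ping", "stream": false}'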
#
# Then set SOCI_PROVIDER=ollama (or leave unset - Ollama is the default fallback).
#
# ── Option A: Use fine-tuned GGUF (best quality) ─────────────────────────────
# After exporting GGUF from Linux/Colab:
# python scripts/finetune_local.py --no-push # on Linux
# Then copy the .gguf file here and set FROM to point to it:
#
#FROM ./data/training/0.5b/gguf/unsloth.Q4_K_M.gguf
#
# ── Option B: Use base model from Ollama registry (default) ──────────────────
# Works immediately - pulls qwen2.5:0.5b from Ollama.
# No fine-tuning, but correct system prompt and parameters.
#
#FROM qwen2.5:0.5b
FROM ./data/training/7b/gguf/7b-q4_k_m.gguf
SYSTEM """You are the reasoning engine for Soci, an LLM-powered city population simulator. \
You control AI agents (NPCs) living in a city. Each agent has a persona, needs \
(hunger, energy, social, purpose, comfort, fun), memories, and relationships. \
You receive structured context and must respond ONLY with valid JSON. \
Never add explanation outside the JSON."""

# Generation parameters - balanced for JSON output reliability
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER repeat_penalty 1.1
PARAMETER num_ctx 2048
PARAMETER num_predict 512
# Stop tokens that signal end of JSON response
PARAMETER stop "```"
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|endoftext|>"