---
license: mit
tags:
  - gguf
  - llama.cpp
  - q4_k_m
  - cypherium
  - cph
  - local-ai
datasets:
  - cypherium_raw
language:
  - en
  - ja
pipeline_tag: text-generation
---

# CPH-Community-7B (Q4_K_M)

A compact 7B model fine-tuned for Cypherium blockchain operations: validator support, node configuration, RPC troubleshooting, and lightweight general-purpose reasoning.

This model is optimized for CPU-only inference with llama.cpp and responds quickly on low-resource servers such as VPS instances (2–6 vCPUs, 8–16 GB RAM).
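To sanity-check whether a given VPS can hold the model, you can estimate the weight footprint from the parameter count and the average bits per weight. The figures below are assumptions, not measurements: Q4_K_M averages roughly 4.8 bits per weight (the exact file size depends on the mix of quant types per tensor), and Qwen2-7B has about 7.6B parameters.

```python
def gguf_size_gb(n_params: float, bits_per_weight: float = 4.8) -> float:
    """Rough GGUF file size in GB for a given parameter count.

    bits_per_weight=4.8 is an approximation for Q4_K_M; actual files
    vary because different tensors use different quant types.
    """
    return n_params * bits_per_weight / 8 / 1e9

# Qwen2-7B: ~7.6B parameters (assumed)
weights_gb = gguf_size_gb(7.6e9)
print(f"~{weights_gb:.1f} GB for weights")
```

On top of the weights, budget extra RAM for the KV cache (growing with `-c`) and llama.cpp's working buffers, so an 8 GB machine is a practical floor for this quant.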

## Model Description

  • Base model: Qwen2-7B
  • Fine-tuning: QLoRA
  • Domain: Cypherium blockchain RPC, node operations, validator troubleshooting
  • Format: GGUF (Q4_K_M)
  • Intended use: lightweight on-device assistant for Cypherium node operators

## Example Inference Command (llama.cpp)

```sh
./llama-cli \
  -m cph-community-7b-q4_k_m.gguf \
  -c 4096 \
  -n 256 \
  --system-prompt "You are a helpful Cypherium assistant." \
  --prompt "Explain how to resync a Cypherium validator node."
```
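For node operators who want to query the model over HTTP instead of interactively, llama.cpp also ships `llama-server`, which exposes an OpenAI-compatible API. A minimal sketch (the host, port, and model path are placeholders to adapt to your setup):

```sh
# Serve the model on the local network; -c sets the context window.
./llama-server \
  -m cph-community-7b-q4_k_m.gguf \
  -c 4096 \
  --host 127.0.0.1 \
  --port 8080

# Then query it from another shell via the OpenAI-compatible endpoint:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful Cypherium assistant."},
          {"role": "user", "content": "How do I check my node'\''s RPC port?"}
        ]
      }'
```

This is convenient for wiring the assistant into monitoring scripts or dashboards without keeping an interactive CLI session open.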