Commit `98e3ee0` by root: Initial upload

Files changed:
- README.md (+40 −0)
- cph-community-7b-q4_k_m.gguf (added, binary)
---
license: mit
tags:
- gguf
- llama.cpp
- q4_k_m
- cypherium
- cph
- local-ai
datasets:
- cypherium_raw
language:
- en
- ja
pipeline_tag: text-generation
---

# CPH-Community-7B (Q4_K_M)

A compact 7B model fine-tuned for **Cypherium blockchain operations**: validator support, node configuration, RPC troubleshooting, and lightweight general-purpose reasoning.

This model is optimized for **CPU-only inference** with `llama.cpp` and delivers fast responses on low-resource servers such as VPS instances (2–6 vCPUs, 8–16 GB RAM).

## Model Description

- **Base model:** Qwen2-7B
- **Fine-tuning:** QLoRA
- **Domain:** Cypherium blockchain RPC, node operations, validator troubleshooting
- **Format:** GGUF (`Q4_K_M`)
- **Intended use:** lightweight on-device assistant for Cypherium node operators
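As a rough sizing sanity check for a quantized GGUF (an estimate only: `Q4_K_M` mixes block sizes, so the average bits-per-weight is roughly 4.5–5 rather than an exact figure), on-disk size is approximately parameters × bits-per-weight ÷ 8:

```python
def approx_gguf_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a quantized model in GB (decimal).

    bits_per_weight is an assumed average: Q4_K_M stores some tensors
    at higher precision, so ~4.5-5 bpw is a ballpark, not exact.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B model at an assumed ~4.8 bpw lands near 4 GB on disk; budget
# extra RAM for the KV cache and runtime overhead on a small VPS.
print(round(approx_gguf_size_gb(7, 4.8), 2))
```

This is why the 8–16 GB RAM range quoted above is comfortable: the weights alone fit well under it at this quantization.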
## Example Inference Command (llama.cpp)

```bash
./llama-cli \
  -m cph-community-7b-q4_k_m.gguf \
  -c 4096 \
  -n 256 \
  --system-prompt "You are a helpful Cypherium assistant." \
  --prompt "Explain how to resync a Cypherium validator node."
```