ghostai1 committed · Commit 1acbf39 · verified · 1 parent: 36088fa

Update README.md

Files changed (1): README.md +38 -48
README.md CHANGED
@@ -1,74 +1,64 @@
  ---
  license: mit
  ---
- ---
- language:
- - en
- tags:
- - gguf
- - llama.cpp
- - mistral
- - finetune
- - lora
- - roleplay
- - instruct
- license: apache-2.0
- base_model: mistralai/Mistral-7B-Instruct-v0.3
- library_name: llama.cpp
- pipeline_tag: text-generation
- ---

- # GHOSTAI — Spooky Mistral (GGUF Release)
-
- This is the **GGUF release** of **GHOSTAI**: a fine-tuned variant of **Mistral-7B-Instruct v0.3** exported for **llama.cpp**-compatible runtimes.
-
- It ships multiple quantizations (with spooky filenames) so you can pick the right tradeoff for quality vs memory.
-
- ## What’s in this repo
-
- You’ll find:
-
- - A **full-precision GGUF** (f16) export
- - One or more **quantized GGUF** files for fast inference
-
- ### Spooky GGUF names (file mapping)
-
- These are just friendly names; the actual quant format is in the table.
-
- | Spooky name | Actual file | Quant format | Notes |
+ <!--
+ GHOSTAI • HORROR GGUF RELEASE README
+ Drop this into README.md at the root of your Hugging Face repo.
+ -->
+
+ <p align="center">
+ <img src="https://capsule-render.vercel.app/api?type=waving&color=0:0b0b0f,50:2b0a2a,100:0b0b0f&height=160&section=header&text=GHOSTAI%20%E2%80%94%20HORROR%20GGUF&fontSize=44&fontColor=EAEAEA&animation=twinkling" />
+ </p>
+
+ <p align="center">
+ <img alt="GGUF" src="https://img.shields.io/badge/GGUF-llama.cpp-8A2BE2?style=for-the-badge">
+ <img alt="Base" src="https://img.shields.io/badge/Base-Mistral%207B%20Instruct%20v0.3-5B2C83?style=for-the-badge">
+ <img alt="Quant" src="https://img.shields.io/badge/Quant-Q4__K__M%20%7C%20F16-3A0CA3?style=for-the-badge">
+ <img alt="Theme" src="https://img.shields.io/badge/Theme-Horror-8B0000?style=for-the-badge">
+ </p>
+
+ <p align="center">
+ <b>GHOSTAI</b> is a horror-flavored GGUF release (llama.cpp-ready) built from a LoRA fine-tune on <code>mistralai/Mistral-7B-Instruct-v0.3</code>.
+ <br/>
+ Pick your haunt: <b>F16</b> for max fidelity or <b>Q4_K_M</b> for the best everyday balance.
+ </p>
+
+ ---
+
+ ## 🩸 What’s inside
+
+ This repo contains **GGUF** files for fast local inference using **llama.cpp**-compatible runtimes.
+
+ ### 🎃 Spooky file set
+
+ | Codename | File | Format | Use case |
  |---|---|---:|---|
- | **GHOSTAI_FOGF16** | `model.f16.gguf` | f16 | Highest quality; largest file |
- | **GHOSTAI_CRYPT_Q4KM** | `model.Q4_K_M.gguf` | Q4_K_M | Recommended default balance |
- | **GHOSTAI_WHISPER_IQ1S** | `model.IQ1_S.gguf` | IQ1_S | Extremely small; quality drop |
- | *(fallback)* **GHOSTAI_RAGDOLL_Q2K** | `model.Q2_K.gguf` | Q2_K | If IQ1_S not supported |
-
- If a “spooky” file name isn’t present, that quant was not generated for this release.
-
- ## Base model
-
- - Base: `mistralai/Mistral-7B-Instruct-v0.3`
-
- ## Training summary
-
- - Method: LoRA fine-tune
- - Export pipeline: LoRA adapter → merged weights → GGUF conversion → quantization
- - Intended use: general instruction/chat (custom domain depends on your dataset)
-
- > If you want to disclose dataset or alignment details, add them here (recommended).
-
- ## Quickstart (llama.cpp)
-
- ### 1) Download a GGUF
-
- Pick one file (start with `model.Q4_K_M.gguf` if you’re unsure).
-
- ### 2) Run with llama.cpp
-
- Example using `llama-cli`:
+ | **GHOSTAI_FOGF16** | `model.f16.gguf` | f16 | Maximum quality (largest) |
+ | **GHOSTAI_CRYPT_Q4KM** | `model.Q4_K_M.gguf` | Q4_K_M | Best default (quality/size) |
+ | **GHOSTAI_WHISPER_IQ1S** | `model.IQ1_S.gguf` | IQ1_S | Tiny build (quality drop) |
+ | **GHOSTAI_RAGDOLL_Q2K** | `model.Q2_K.gguf` | Q2_K | Fallback if IQ1_S unsupported |
+
+ > Not all files may exist in every release—this table lists the intended set. Use the “Files” panel to confirm what’s included.
+
+ ---
+
+ ## 🧬 Base model
+
+ - **Base**: `mistralai/Mistral-7B-Instruct-v0.3`
+ - **Release type**: GGUF export (llama.cpp ecosystem)
+ - **Training method**: LoRA fine-tune → merged → GGUF → quantized
+
+ ---
+
+ ## ⚰️ Quickstart (llama.cpp)
+
+ ### 1) Run on GPU (CUDA build)
+
  ```bash
  ./llama-cli \
    -m model.Q4_K_M.gguf \
    -ngl 99 \
    -c 4096 \
- -p "You are GHOSTAI. Introduce yourself in one paragraph."
+ -p "You are GHOSTAI. Speak like a calm narrator in a horror novel. Keep it concise."
+
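The quant table in this diff trades quality against memory. As a rough sanity check on what those formats mean in practice, here is a back-of-the-envelope size estimate for a ~7B-parameter model; the bits-per-weight figures below are approximate assumptions (K-quant mixes vary by tensor type), not measurements of the files in this release.

```python
# Rough GGUF size estimate for a ~7B-parameter model.
# Bits-per-weight values are approximations, not measurements
# of the files in this repo.
PARAMS = 7.25e9  # Mistral-7B has roughly 7.25B parameters

BITS_PER_WEIGHT = {
    "f16": 16.0,
    "Q4_K_M": 4.85,   # typical average for this K-quant mix
    "Q2_K": 2.63,
    "IQ1_S": 1.56,
}

def est_size_gib(fmt: str) -> float:
    """Approximate file size in GiB for a given quant format."""
    return PARAMS * BITS_PER_WEIGHT[fmt] / 8 / 2**30

for fmt, bpw in BITS_PER_WEIGHT.items():
    print(f"{fmt:8s} ~{est_size_gib(fmt):4.1f} GiB at {bpw} bits/weight")
```

By this estimate f16 lands around 13-14 GiB while Q4_K_M is around 4 GiB, which is why the table recommends Q4_K_M as the everyday default.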
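One caveat on the quickstart in this diff: `llama-cli -p` passes the prompt as raw text, while Mistral-Instruct models are trained on the `[INST]` chat format, so wrapping the prompt usually improves instruction following. A minimal single-turn sketch (the helper name is ours, and this simplifies the model's official chat template; llama.cpp normally prepends the BOS token itself, so none is added here):

```python
# Minimal sketch: wrap a single user message in the Mistral-Instruct
# [INST] format. Simplified from the official chat template; llama.cpp
# typically adds the BOS token (<s>) on its own, so it is omitted here.
def mistral_prompt(user_msg: str) -> str:
    """Return a single-turn prompt in Mistral-Instruct format."""
    return f"[INST] {user_msg.strip()} [/INST]"

print(mistral_prompt("You are GHOSTAI. Introduce yourself in one paragraph."))
```

For multi-turn use, each prior exchange is appended in the same bracketed form; for that, prefer a runtime that applies the model's bundled chat template automatically.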