waltgrace committed c94ca8d (verified; parent: 9f84be1)

Add full README with verified benchmarks and research findings

---
library_name: gguf
tags:
- moe
- expert-prefetch
- madvise
- llama-cpp
- apple-silicon
- cuda
- on-device
license: mit
pipeline_tag: text-generation
---

# llama.cpp Expert Sniper — madvise prefetch for MoE inference

~65 lines of C++ that enable MoE models larger than RAM on llama.cpp.

Stock llama.cpp thrashes indefinitely. This build generates tokens.

## Results

| Hardware | RAM | Model | Stock llama.cpp | madvise build |
|----------|-----|-------|-----------------|---------------|
| M2 MacBook Air | 8 GB | Qwen3.5-35B-A3B IQ2_M (10.6 GB) | 0 tok/s (thrash) | **0.57 tok/s** |
| M2 MacBook Air | 8 GB | Same model, no-op callback only | 0 tok/s (thrash) | 0.46 tok/s |

On GPU machines with abundant RAM (A100 251 GB, RTX 3090 31 GB), stock llama.cpp is faster — the OS page cache handles it. madvise helps specifically when **system RAM < model size** and **layers are on CPU (ngl 0)**.

## How it works

MoE models activate 8 of 128+ experts per token, and consecutive tokens share ~87% of active experts. Stock llama.cpp uses mmap, but the OS has no idea which expert pages are hot — it evicts them randomly, causing a page-fault storm.

Our patch hooks llama.cpp's eval callback, intercepts every `ggml_mul_mat_id` operation, reads the router's top-k expert selection from `t->src[2]`, and calls `madvise(MADV_WILLNEED)` on each active expert's memory range. This tells the kernel which pages to prefetch before the compute needs them.

Zero allocation. Zero memcpy. Zero mutex. One syscall per expert slice.
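
The prefetch step can be sketched as follows. This is a minimal, self-contained illustration, not the actual patch: `prefetch_expert_slice` is a hypothetical helper, and the real code derives offsets from ggml tensors. The one subtlety is that `madvise()` requires a page-aligned address, so the slice start is rounded down to a page boundary and the length is extended to compensate.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <sys/mman.h>
#include <unistd.h>

// Hypothetical helper (illustration only): ask the kernel to prefetch one
// expert's weight slice out of the model's mmap before compute touches it.
static int prefetch_expert_slice(void * mmap_base, size_t offset, size_t size) {
    const size_t    page    = (size_t) sysconf(_SC_PAGESIZE);
    const uintptr_t start   = (uintptr_t) mmap_base + offset;
    const uintptr_t aligned = start & ~(uintptr_t) (page - 1); // round down
    const size_t    len     = size + (size_t) (start - aligned); // still cover the slice
    return madvise((void *) aligned, len, MADV_WILLNEED); // 0 on success
}
```

One such call per active expert slice is the entire runtime cost of the mechanism.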

## What we learned

**1. madvise beats LRU cache everywhere.** We first built a 460-line LRU cache. It was 2.4x slower than 15 lines of madvise (0.24 vs 0.57 tok/s on the 8 GB MacBook). The cache stole 5 GB from the OS page cache for duplicate data. Don't fight the OS page cache — coach it.

**2. Even a no-op callback prevents thrashing.** Just hooking the eval callback and inspecting tensor pointers (without any madvise) produces 0.46 tok/s where stock produces zero. The callback inadvertently warms mmap pages through pointer inspection.

**3. Device pointer bug.** All prior GPU benchmarks were silently invalid — the callback dereferenced `t->src[2]->data` without checking `ggml_backend_buffer_is_host()`. On GPU layers this was a CUDA device pointer. Fixed.

**4. With abundant RAM, do nothing.** The OS page cache is remarkably good when it has enough room. Any intervention is pure overhead when RAM exceeds model size.
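
The fix for point 3 boils down to a guard before any dereference. Here is a minimal sketch with stand-in types — `fake_ids_tensor` and `router_ids_if_host` are hypothetical names for illustration; the real patch checks `ggml_backend_buffer_is_host()` on an actual `ggml_tensor`:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the router ids tensor (illustration only).
struct fake_ids_tensor {
    bool         buffer_is_host; // true when data lives in host memory
    const void * data;           // host pointer OR opaque device pointer
};

// Return the router's expert ids only when they are safe to read on the CPU.
// On GPU layers the data pointer is a CUDA device pointer: dereferencing it
// on the host is undefined behavior, so skip prefetch for that op instead.
static const int32_t * router_ids_if_host(const fake_ids_tensor * src) {
    if (src == nullptr || !src->buffer_is_host) {
        return nullptr; // not host memory: do nothing this op
    }
    return (const int32_t *) src->data;
}
```

Returning early keeps the callback a no-op on GPU-resident layers, which is exactly the regime where madvise has nothing to offer anyway.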

## Build

```bash
git clone https://github.com/walter-grace/mac-code
cd mac-code/research/expert-sniper/llama-cpp

# macOS (Metal): nproc is unavailable on macOS, use sysctl instead
cmake -B build -DGGML_METAL=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(sysctl -n hw.ncpu) --target llama-server

# NVIDIA GPU (CUDA)
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(nproc) --target llama-server
```

## Usage

```bash
# Enable madvise prefetch
./build/bin/llama-server \
  -m Qwen3.5-35B-A3B-UD-IQ2_M.gguf \
  -ngl 0 \
  --expert-cache-size 1 \
  --port 8201

# Test no-op mode (isolate callback overhead)
EXPERT_CACHE_NOOP=1 ./build/bin/llama-server \
  -m model.gguf \
  -ngl 0 \
  --expert-cache-size 1 \
  --port 8201
```

## Files changed vs stock llama.cpp

**New files (~430 lines):**

| File | Purpose |
|------|---------|
| `src/llama-expert-cache-ctx.cpp` | Eval callback, madvise prefetch, tensor identification |
| `src/llama-expert-cache-ctx.h` | Context struct and declarations |
| `src/llama-expert-cache.cpp` | LRU cache (deprecated, retained for reference) |
| `src/llama-expert-cache.h` | Cache class definition |

**Patched files (~30 lines across 5 files):**

| File | Change |
|------|--------|
| `src/CMakeLists.txt` | Added new source files to the build |
| `src/llama-context.h` | Added expert cache context member |
| `common/common.h` | Added `expert_cache_size` parameter |
| `common/common.cpp` | Cache init + eval callback registration |
| `common/arg.cpp` | `--expert-cache-size` CLI flag |

## The research journey

```
460-line LRU cache → 0.24 tok/s (stole RAM from the OS page cache)
15-line madvise    → 0.57 tok/s (coached the OS page cache)
no-op callback     → 0.46 tok/s (accidental page warming)

The cache was the experiment. madvise was the answer.
```

## Full three-way benchmark (8 GB MacBook Air)

| Config | tok/s | Mechanism |
|--------|-------|-----------|
| Stock llama.cpp | 0 (thrash) | OS blind LRU, no domain knowledge |
| No-op callback | 0.46 | Accidental page warming from tensor inspection |
| madvise prefetch | 0.57 | Explicit kernel prefetch hints |
| LRU cache (5 GB) | 0.24 | Duplicate data in user-space heap |

## Related

- **MLX Expert Sniper** (Apple Silicon, 3.3 tok/s): [huggingface.co/waltgrace/mlx-expert-sniper](https://huggingface.co/waltgrace/mlx-expert-sniper)
- **Full research + code**: [github.com/walter-grace/mac-code/tree/main/research/expert-sniper](https://github.com/walter-grace/mac-code/tree/main/research/expert-sniper)