sugiv committed · Commit be8ffe2 · verified · 1 Parent(s): 9eef7d8

Add GGUF README

Files changed (1): GGUF/README.md (+31, −0)
GGUF/README.md ADDED
# GGUF Models

This folder contains quantized GGUF versions of the CardVault+ model for efficient inference with llama.cpp.

## Available Models

| Model File | Size | Quantization | Use Case |
|---|---|---|---|
| `cardvault-500m-f16.gguf` | 783 MB | F16 (base) | Maximum quality |
| `cardvault-500m-mmproj-f16.gguf` | 191 MB | F16 (vision projector) | **Required** for vision |
| `cardvault-500m-q8_0.gguf` | 417 MB | Q8_0 | Near-perfect quality |
| `cardvault-500m-q6_k.gguf` | 399 MB | Q6_K | Balanced |
| `cardvault-500m-q5_k_m.gguf` | 311 MB | Q5_K_M | **Recommended** |
| `cardvault-500m-q4_k_m.gguf` | 290 MB | Q4_K_M | Maximum compression |

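Every quantized text model in the table must be paired with the single `mmproj` vision projector, so a quick pre-flight check avoids a confusing failure at load time. A minimal sketch (the helper name `check_pair` is hypothetical; file names come from the table above):

```shell
#!/bin/sh
# check_pair: confirm that both the text model and the vision projector
# exist before launching inference.
check_pair() {
  model="$1"
  mmproj="$2"
  [ -f "$model" ] || { echo "missing text model: $model" >&2; return 1; }
  [ -f "$mmproj" ] || { echo "missing vision projector: $mmproj" >&2; return 1; }
  echo "ok: $model + $mmproj"
}
```

For example, `check_pair cardvault-500m-q5_k_m.gguf cardvault-500m-mmproj-f16.gguf` before running the commands in the Usage section.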
## Usage

```bash
# Download and build llama.cpp (recent versions build with CMake;
# older versions used `make`)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run inference (Q5_K_M recommended). Recent llama.cpp builds ship
# `llama-mtmd-cli` for vision models; the binary name varies by version.
./build/bin/llama-mtmd-cli \
    --model cardvault-500m-q5_k_m.gguf \
    --mmproj cardvault-500m-mmproj-f16.gguf \
    --image credit_card.jpg \
    --prompt "Extract card information in JSON format"
```
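To process many card photos, the same invocation can be wrapped in a loop. A sketch under assumptions: `extract_all` is a hypothetical helper, the default binary path assumes a recent CMake build of llama.cpp (the vision CLI name varies across versions), and the CLI path is overridable as a second argument:

```shell
#!/bin/sh
# extract_all: run the model over every .jpg in a directory, writing one
# .json result per image. Flags mirror the usage example above.
extract_all() {
  dir="$1"
  cli="${2:-./build/bin/llama-mtmd-cli}"   # assumed binary path; adjust to your build
  for img in "$dir"/*.jpg; do
    [ -e "$img" ] || continue              # empty directory: glob did not expand
    "$cli" \
      --model cardvault-500m-q5_k_m.gguf \
      --mmproj cardvault-500m-mmproj-f16.gguf \
      --image "$img" \
      --prompt "Extract card information in JSON format" > "${img%.jpg}.json"
  done
}
```

For example, `extract_all cards` writes `cards/foo.json` next to each `cards/foo.jpg`.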

⚠️ **Critical**: Both the text model and the `mmproj` file must be loaded for vision functionality to work.