anthonym21 committed on
Commit 42ab310 · verified · 1 Parent(s): 378da97

Refresh DotsOCR mmproj with corrected converter output

Files changed (2)
  1. README.md +37 -1
  2. mmproj-Dots.Ocr-F16.gguf +2 -2
README.md CHANGED
@@ -25,6 +25,14 @@ GGUF conversions of [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hila
  | Dots.Ocr-1.8B-F16.gguf | 3.4 GB | Text model, float16 |
  | mmproj-Dots.Ocr-F16.gguf | 2.4 GB | Vision encoder (mmproj), float16 |
 
+ ## Update
+
+ On March 23, 2026, `mmproj-Dots.Ocr-F16.gguf` was regenerated from a corrected DotsOCR converter. The text GGUF files did not change. If you downloaded the `mmproj` earlier, refresh that file.
+
+ Current llama.cpp fork with DotsOCR support and the compatibility fix:
+
+ - [anthony-maio/llama.cpp](https://github.com/anthony-maio/llama.cpp)
+
  ## Architecture
 
  dots.ocr = Qwen2 text backbone (1.7B params, 28 layers) + modified Qwen2-VL vision encoder (1.2B params, 42 layers).
@@ -36,6 +44,34 @@ Key differences from Qwen2-VL:
 
  ## Usage with llama.cpp
 
- > **Note:** Requires llama.cpp with dots.ocr support (pending upstream merge: https://github.com/ggml-org/llama.cpp/pull/19882/changes)
+ Requires a llama.cpp build with DotsOCR support. At the moment, use:
+
+ - [anthony-maio/llama.cpp](https://github.com/anthony-maio/llama.cpp)
+
+ Single-image example on Windows:
+
+ ```powershell
+ llama-mtmd-cli.exe `
+ -m .\Dots.Ocr-1.8B-Q8_0.gguf `
+ --mmproj .\mmproj-Dots.Ocr-F16.gguf `
+ --image .\page.png `
+ -p "Extract all text from this image and preserve structure in markdown." `
+ --ctx-size 131072 `
+ -n 4096 `
+ --temp 0 `
+ --jinja
+ ```
+
+ Equivalent server launch:
+
+ ```powershell
+ llama-server.exe `
+ -m .\Dots.Ocr-1.8B-Q8_0.gguf `
+ --mmproj .\mmproj-Dots.Ocr-F16.gguf `
+ --port 8111 `
+ --host 0.0.0.0 `
+ --ctx-size 131072 `
+ -n 4096 `
+ --temp 0 `
+ --jinja
+ ```
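The server launch above exposes llama-server's OpenAI-compatible chat endpoint (`/v1/chat/completions`), which accepts images inline as base64 data URLs. A minimal sketch of building such a request payload, assuming the port 8111 from the example; the helper name is hypothetical:

```python
import base64
import json

def build_ocr_request(image_bytes: bytes, prompt: str, mime: str = "image/png") -> dict:
    """Build an OpenAI-style chat payload for llama-server.

    The image travels inline as a base64 data URL, so no separate upload
    step is needed. Temperature 0 and max_tokens 4096 mirror the CLI flags
    in the README diff above.
    """
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode('ascii')}"
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
        "temperature": 0,
        "max_tokens": 4096,
    }

# Serialize for a POST to http://localhost:8111/v1/chat/completions
payload = build_ocr_request(
    b"\x89PNG...",  # placeholder bytes; read the real file with open("page.png", "rb")
    "Extract all text from this image and preserve structure in markdown.",
)
body = json.dumps(payload)
```

The extracted markdown is then in `choices[0].message.content` of the JSON response.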
mmproj-Dots.Ocr-F16.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4ea68c0dfa6d23eb07a5f2cf91e84d65f55cf26037088b430df8850597565618
- size 2524495776
+ oid sha256:b65a1db58db75f6d1ae900a4cb80772c4c0fabfaf261439e4c4ab965f6970de8
+ size 2524495808
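The change above swaps the LFS pointer's `oid` and `size`, so an older local copy of the mmproj will silently mismatch. A quick sketch for checking a downloaded blob against the three-line pointer format shown above (helper names are hypothetical):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse the key/value lines of a git-LFS pointer file."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {"version": fields["version"], "algo": algo, "oid": digest, "size": int(fields["size"])}

def matches_pointer(blob: bytes, pointer_text: str) -> bool:
    """True if the blob's length and sha256 digest match the pointer."""
    p = parse_lfs_pointer(pointer_text)
    return len(blob) == p["size"] and hashlib.sha256(blob).hexdigest() == p["oid"]

# Usage: verify a blob against a pointer built from it
blob = b"example-bytes"
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
ok = matches_pointer(blob, pointer)
```

For the real file, read `mmproj-Dots.Ocr-F16.gguf` in binary mode and compare against the new `oid`/`size` values in this commit.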