No text output when running inference with ollama?
#8
by frankslin - opened
I'm using the example image from https://cdn.bigmodel.cn/static/logo/introduction.png.
Apple M2, macOS 15.7.3.
$ ollama -v
ollama version is 0.15.5-rc1
$ sha256sum introduction.png
b01139689cb0682b3a1d6f3f3eda3f481101571654e3a23a1de5bf5520f3c6f5 introduction.png
$ ollama run glm-ocr Text Recognition: ./introduction.png
Added image './introduction.png'
```markdown
```markdown
```markdown
```text
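For comparison, the same request can be made against Ollama's REST API, where the image is passed as a base64 string in the `images` field rather than as a path. This is a minimal sketch, assuming a stock Ollama server on `localhost:11434`; the model name and prompt are copied from my CLI run above, and the endpoint/field names follow Ollama's standard `/api/generate` API:

```python
import base64
import json
import os

def build_payload(image_path: str) -> dict:
    """Build the JSON body Ollama's /api/generate expects for a
    multimodal model: the image is base64-encoded, not a file path."""
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "glm-ocr",          # model name from my run above
        "prompt": "Text Recognition:",
        "images": [img_b64],
        "stream": False,
    }

if __name__ == "__main__":
    # Only attempt this if the example image is actually present.
    if os.path.exists("./introduction.png"):
        payload = build_payload("./introduction.png")
        # Then POST it, e.g.:
        # requests.post("http://localhost:11434/api/generate", json=payload)
        print(json.dumps({k: v for k, v in payload.items() if k != "images"}))
```

If the API path returns text while the CLI does not, that would narrow the problem down to CLI prompt/image handling rather than the model itself.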
When I tried with another image file, I got this error instead:
Error: an error was encountered while running the model: GGML_ASSERT(a->ne[2] * 4 == b->ne[0]) failed
WARNING: Using native backtrace. Set GGML_BACKTRACE_LLDB for more info.
WARNING: GGML_BACKTRACE_LLDB may cause native MacOS Terminal.app to crash.
See: https://github.com/ggml-org/llama.cpp/pull/17869
0 ollama 0x0000000100e6d620 ggml_print_backtrace + 276
What am I missing?