- #8 No text output inferencing with ollama? (opened 3 minutes ago by frankslin)
- #7 GLM-OCR: A Tiny 0.9B-Parameter Model That Punches Far Above Its Weight (opened about 2 hours ago by Javedalam)
- #6 When will there be better support for vLLM? (5 comments; opened about 7 hours ago by Xiakj)
- #5 Working without the SDK? (1 comment; opened about 8 hours ago by meganoob1337)
- #4 Thanks for using the PP-DocLayoutV3 model from PaddleOCR-VL-1.5 (👍 6; opened about 13 hours ago by ChengCui)
- #1 Installation Video and Testing - Step by Step (opened about 20 hours ago by fahdmirzac)