Thank you for the great model!

#3
by vladimirxi - opened

Prithiv,

First of all, I just wanted to say thanks for bringing this model to GGUF format. It's an amazing model and has dramatically changed my workflow: it's very detailed and leaps and bounds above a lot of other models right now. I've been using it extensively since the first day you released it.

I have run into a few problems or quirks, and I wanted to show them to you because maybe I don't understand how to use this model correctly. I'm not a developer by any means, just a medical student who likes to play on the technical side of things. If you have any suggestions, fantastic; I'll take them to heart. And obviously, some of these issues may come from the actual model itself rather than the GGUF version.

Here's a summary, written up by my Codex session, of the issues we've found:

Technical feedback on chandra-ocr-2 GGUF for diagram-heavy lecture-slide OCR

Thanks again for converting this model to GGUF.

I know you made the GGUF release, not the original base model, so I’m trying to keep this question scoped to the parts that are most likely in your lane: quant choice, template/prompting, invocation style, and whether some of the behavior we’re seeing is expected on the local GGUF path.

Use case

We’re using chandra-ocr-2 locally in a hybrid OCR pipeline for medical-school lecture PDFs.

Typical pages include:

  • title/objective/outline slides
  • dense bullet/prose slides
  • pathway and signaling diagrams
  • mixed text + diagram slides
  • tables and diagram-tables
  • transcript-heavy continuation slides

Our goal is extraction-style, study-ready markdown. We care a lot about preserving the semantic content of mechanism diagrams, but we also want conservative behavior on slides that are basically plain text.

What we’re seeing

Some of the recurring behavior may be upstream model behavior rather than GGUF-specific, but I wanted to ask whether any of the following are expected or fixable through better GGUF usage/settings:

  1. On some pages, the model is still too eager to describe text-heavy slides that should probably just stay plain OCR-only.
  2. On some real diagram/mechanism pages, the output is too thin and collapses toward a title-level summary even when the raw response appears to contain more useful detail.
  3. The output surface is not always stable. Depending on the page, we may get:
    • layout HTML with bbox metadata
    • JSON-like layout blobs
    • planner/meta text
    • semantic explanation that seems better in a reasoning-style channel than in the main content channel
  4. Table and diagram-table pages seem to need more headroom and more careful handling than generic pages.
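For what it's worth, our current local workaround for issue 3 is a crude post-processing check that guesses which output surface we got back before deciding how to handle a page. A minimal sketch (the heuristics, patterns, and function name are ours, tuned by hand on our own slides, not anything from the model):

```python
import json
import re

def classify_output_surface(text: str) -> str:
    """Guess which output surface the model produced.

    Heuristics only; tuned on our own lecture-slide pages.
    """
    stripped = text.strip()
    # JSON-like layout blobs usually parse cleanly as a dict/list.
    if stripped.startswith(("{", "[")):
        try:
            json.loads(stripped)
            return "layout_json"
        except json.JSONDecodeError:
            pass
    # Layout HTML tends to carry bbox metadata in tag attributes.
    if re.search(r"<\w+[^>]*\bbbox\s*=", stripped) or "<div" in stripped:
        return "layout_html"
    # Planner/meta text narrates steps instead of emitting content.
    if re.match(r"(?i)^(plan|step \d|first, i will)", stripped):
        return "planner_meta"
    return "markdown"
```

Anything classified as non-markdown gets retried or routed to a fallback on our side; it doesn't fix the instability, but it keeps the bad surfaces out of the study notes.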

A few concrete examples from our validation runs

Representative failure classes:

  • over-triggering on title/objective/outline pages
  • diagram pages left effectively text-only
  • correctly-routed pages where the final semantic output is just a very thin summary

On one larger microbiology validation pass, the residual issues looked like this (page counts before -> after our local fixes):

  • over-routed text pages: 99 -> 42
  • under-described diagram/mechanism pages: 28 -> 17
  • thin semantic outputs: 20 -> 6
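For context, the "over-routed text pages" count comes from a routing step on our side that decides whether a page should be sent for diagram/mechanism description at all, or kept plain OCR-only. A minimal sketch of the kind of heuristic we use (the thresholds are arbitrary placeholders we tuned by hand for our slides):

```python
def should_describe(ocr_text: str, image_count: int, arrow_count: int) -> bool:
    """Route a slide: True -> send for diagram/mechanism description,
    False -> keep plain OCR-only output.

    Conservative on text-heavy slides: lots of extracted text with no
    embedded images or arrows is treated as plain text.
    """
    words = len(ocr_text.split())
    if image_count == 0 and arrow_count == 0:
        return False  # nothing graphical to describe
    if words > 150 and arrow_count < 2:
        return False  # dense prose slide, stay OCR-only
    return True       # likely a real diagram/mechanism page
```

Even with this in front of the model, the three failure classes above persist, which is why I suspect part of it is prompting or model behavior rather than routing.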

So the model is definitely usable and the direction is positive, but those three failure classes remain fairly consistent for our workload.

My actual questions

From the GGUF/conversion/local-runtime perspective:

  1. Do you recommend a specific quant for OCR workloads like this, especially diagram-heavy lecture slides?
  2. Is there a preferred chat template or prompt style for extraction-style OCR, as opposed to descriptive captioning?
  3. Are there generation settings you recommend to reduce the chance of getting bbox-heavy HTML, layout JSON, or planner/meta text?
  4. Is it normal on the GGUF path for the most useful semantic answer to sometimes show up outside the main content channel?
  5. If you’ve tested this model locally yourself, is there a “known good” invocation pattern for stable markdown-style extraction?
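For reference on question 5, the invocation pattern we've been testing is roughly the following, built around llama.cpp's multimodal CLI. The model/mmproj filenames, prompt wording, and sampling values are just what we happened to try, not a recommendation or anything from your model card:

```python
def build_ocr_command(model: str, mmproj: str, image: str, prompt: str) -> list[str]:
    """Build an argv list for llama.cpp's llama-mtmd-cli.

    Low temperature to keep output close to extraction-style OCR
    rather than free-form captioning; values are our own guesses.
    """
    return [
        "llama-mtmd-cli",
        "-m", model,
        "--mmproj", mmproj,
        "--image", image,
        "--temp", "0.1",  # near-greedy for deterministic extraction
        "-p", prompt,
    ]

cmd = build_ocr_command(
    "chandra-ocr-2-Q5_K_M.gguf",
    "mmproj-chandra-ocr-2.gguf",
    "slide_042.png",
    "Transcribe this slide to markdown. Preserve headings, bullets, and tables.",
)
```

If there's a known-good flag set or prompt you've tested yourself, I'd happily swap ours out for it.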

Scope note

If most of what I’m describing sounds like upstream base-model behavior rather than anything related to the GGUF conversion, no worries. In that case, I’d still really appreciate any advice on:

  • which quant you trust most
  • which prompt/template style works best
  • which local invocation pattern gives the cleanest OCR-style outputs

If helpful, I can also post a small representative page set for the three main failure modes instead of dumping a giant wall of examples on you like it’s the extended cut of a very nerdy disaster film.

Thanks again for all that you do!

Bently

prithivMLmods changed discussion status to closed