Commit 4ff458d
Parent(s): 083e8aa
Revert enable_prefix_caching change - causes vLLM init failure
The enable_prefix_caching=False parameter causes vLLM to fail with:
'OSError: Can't load image processor for lightonai/LightOnOCR-2-1B'
Keeping only the max_tokens change (6144 -> 4096) to fix repetition.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
lighton-ocr2.py  CHANGED  +0 -1
@@ -340,7 +340,6 @@ def main(
         gpu_memory_utilization=gpu_memory_utilization,
         limit_mm_per_prompt={"image": 1},  # One image per prompt
         enforce_eager=False,  # Use torch.compile for better performance
-        enable_prefix_caching=False,  # Recommended by model card
     )

     # LightOnOCR-2 recommended sampling parameters
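After this revert, the engine keeword arguments reduce to the three shown in the diff. A minimal sketch of the resulting configuration, using plain dicts so it runs without a GPU or vLLM installed; the model name is taken from the OSError in the commit message, and both helper functions are hypothetical, not part of lighton-ocr2.py:

```python
# Sketch of the engine/sampling configuration after this revert.
# The model name comes from the OSError quoted in the commit message;
# the helpers below are illustrative, not code from lighton-ocr2.py.

def build_engine_kwargs(gpu_memory_utilization: float = 0.9) -> dict:
    """Keyword arguments that would be passed to vLLM's LLM() after the revert."""
    return {
        "model": "lightonai/LightOnOCR-2-1B",
        "gpu_memory_utilization": gpu_memory_utilization,
        "limit_mm_per_prompt": {"image": 1},  # one image per prompt
        "enforce_eager": False,               # keep the torch.compile path
        # enable_prefix_caching is intentionally omitted: passing False
        # triggered "OSError: Can't load image processor ..." at init.
    }

def build_sampling_kwargs() -> dict:
    """The change that was kept: max_tokens lowered 6144 -> 4096 to curb repetition."""
    return {"max_tokens": 4096}

if __name__ == "__main__":
    kwargs = build_engine_kwargs()
    assert "enable_prefix_caching" not in kwargs
    print(sorted(kwargs))
```

Leaving enable_prefix_caching unset falls back to vLLM's default behavior, which is what the parent commit 083e8aa relied on.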