**DeepSeek-OCR Thai** is a fine-tuned version of DeepSeek-OCR specifically optimized for recognizing Thai handwriting. This model leverages the DeepSeek-OCR architecture and has been adapted using Low-Rank Adaptation (LoRA) for high-performance Thai OCR tasks, focusing in particular on handwritten text, which often presents challenges for general-purpose OCR systems.
The model builds on `unsloth/DeepSeek-OCR`. Install the required libraries:
```shell
pip install -U addict transformers unsloth unsloth_zoo
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
```
```python
from unsloth import FastVisionModel
import torch
from transformers import AutoModel
from peft import PeftModel
from huggingface_hub import snapshot_download
import os
import builtins

# Fix for Unsloth/PEFT issue: "name 'VARIANT_KWARG_KEYS' is not defined"
builtins.VARIANT_KWARG_KEYS = ["alora_offsets"]
os.environ["UNSLOTH_WARN_UNINITIALIZED"] = "0"

adapter = "sthaps/DeepSeek-ocr-Thai"

# Download the base model locally
snapshot_download("unsloth/DeepSeek-OCR", local_dir = "deepseek_ocr")

model, tokenizer = FastVisionModel.from_pretrained(
    "./deepseek_ocr",
    load_in_4bit = False,  # Use 4bit to reduce memory use. False for 16bit LoRA.
    auto_model = AutoModel,
    trust_remote_code = True,
    unsloth_force_compile = True,
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for long context
)

# Load the Thai LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, adapter)
model = model.eval().to(torch.bfloat16)

prompt = "<image>\nFree OCR"
image = "download.jpeg"

# Resolution presets:
# Tiny:   base_size = 512,  image_size = 512,  crop_mode = False
# Small:  base_size = 640,  image_size = 640,  crop_mode = False
# Base:   base_size = 1024, image_size = 1024, crop_mode = False
# Large:  base_size = 1280, image_size = 1280, crop_mode = False
# Gundam: base_size = 1024, image_size = 640,  crop_mode = True

res = model.infer(
    tokenizer,
    prompt = prompt,
    image_file = image,
    output_path = "output2",
    base_size = 1024,
    image_size = 640,
    crop_mode = True,
    save_results = True,
    test_compress = False,
)
print(res)
```
The model was fine-tuned on a comprehensive collection of Thai OCR datasets.
The training data was preprocessed into a conversation format, with images saved locally and mapped to instruction-based prompts of the form `<image>\nFree OCR.`. The model was trained using the unsloth library for memory-efficient, fast training.
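The conversation-format preprocessing described above can be sketched roughly as follows. This is an illustrative sketch, not the actual training script: the record field names (`image_path`, `text`) and the exact message schema are assumptions.

```python
# Hypothetical sketch: map one OCR record to an instruction-style conversation.
# Field names "image_path" and "text" are assumed, not taken from the real dataset.

def to_conversation(example):
    """Pair a locally saved image with the fixed OCR instruction and its transcription."""
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image", "image": example["image_path"]},
                    {"type": "text", "text": "Free OCR."},
                ],
            },
            {
                "role": "assistant",
                "content": [{"type": "text", "text": example["text"]}],
            },
        ]
    }

sample = {"image_path": "images/0001.jpeg", "text": "สวัสดีครับ"}
conv = to_conversation(sample)
```

Applying `to_conversation` over the whole dataset (e.g. with `datasets.Dataset.map`) yields the instruction-tuning records; the `<image>` placeholder in the raw prompt corresponds to the image element of the user turn.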
| Hyperparameter | Value |
|---|---|
| LoRA target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Optimizer | adamw_8bit |
| Learning rate | 2e-4 |
| Per-device batch size | 8 |
| Gradient accumulation steps | 2 (effective batch size: 16) |
| LR scheduler | linear |
| Precision | bf16 (if supported) or fp16 |
| Base model | deepseek-ai/DeepSeek-OCR |
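For orientation, these hyperparameters would map onto an unsloth + TRL setup roughly like the sketch below. This is a non-runnable configuration fragment under stated assumptions: the LoRA rank/alpha, the dataset variable, and the use of `SFTTrainer` are illustrative guesses, not values or code reported for this model.

```python
# Config sketch only: shows where the table's hyperparameters would plug in.
# `model`, `tokenizer`, and `train_dataset` are assumed to exist already.
from unsloth import FastVisionModel, is_bf16_supported
from trl import SFTTrainer, SFTConfig

model = FastVisionModel.get_peft_model(
    model,
    # LoRA target modules from the table above
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = train_dataset,       # conversation-formatted records
    args = SFTConfig(
        per_device_train_batch_size = 8,
        gradient_accumulation_steps = 2,  # effective batch size 16
        learning_rate = 2e-4,
        optim = "adamw_8bit",
        lr_scheduler_type = "linear",
        bf16 = is_bf16_supported(),
        fp16 = not is_bf16_supported(),
        output_dir = "outputs",
    ),
)
trainer.train()
```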