Universal DeepSeek-OCR 2 – CPU, MPS, CUDA Support

This repository uses the weights from the original DeepSeek-OCR 2 and modifies the model to support inference on different devices such as CPU and MPS (Apple Metal GPU). By default, it runs on the CPU.
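Before choosing a device, you can check which accelerator backends your PyTorch build supports. This is a small standalone check, not part of this repository's code:

```python
import torch

# Report which accelerator backends this PyTorch build can use.
has_cuda = torch.cuda.is_available()         # NVIDIA GPU via CUDA
has_mps = torch.backends.mps.is_available()  # Apple Metal GPU

print(f"CUDA available: {has_cuda}")
print(f"MPS available:  {has_mps}")
```

If both report `False`, the default CPU path shown below still works.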


Usage

Sample code is available at: https://github.com/Dogacel/Universal-DeepSeek-OCR-2

```shell
mamba create -n deepseek-ocr-2 python=3.12.9
mamba activate deepseek-ocr-2

pip install torch==2.6.0 torchvision Pillow transformers==4.46.3 tokenizers==0.20.3 einops addict easydict
```
```python
from transformers import AutoModel, AutoTokenizer
import torch

model_name = '.'  # path to the model weights (the root of this repository)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, use_safetensors=True)
model = model.eval().to("cpu").to(torch.float16)

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'samples/paper.png'
output_path = 'tmp'

res = model.infer(
    tokenizer,
    prompt=prompt,
    image_file=image_file,
    output_path=output_path,
    base_size=1024,
    image_size=768,
    crop_mode=True,
    save_results=True,
    test_compress=True,
)
```

To change the device type, update two things: the device the model is moved to, and the `device`/`dtype` arguments passed to `infer`:

```diff
- model = model.eval().to("cpu").to(torch.float16)
+ model = model.eval().to("mps").to(torch.float16)

  res = model.infer(
      tokenizer,
      prompt=prompt,
      image_file=image_file,
      output_path=output_path,
      base_size=1024,
      image_size=768,
      crop_mode=True,
      save_results=True,
      test_compress=True,
+     device="mps",
+     dtype=torch.float16,
  )
```

For CUDA, you should also use bfloat16 to stay as close as possible to the original implementation:

```diff
- model = model.eval().to("cpu").to(torch.float16)
+ model = model.eval().to("cuda").to(torch.bfloat16)

  res = model.infer(
      tokenizer,
      prompt=prompt,
      image_file=image_file,
      output_path=output_path,
      base_size=1024,
      image_size=768,
      crop_mode=True,
      save_results=True,
      test_compress=True,
+     device="cuda",
+     dtype=torch.bfloat16,
  )
```
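The device variants above can be combined into a single helper that picks the best available backend and its matching dtype (CUDA with bfloat16, MPS or CPU with float16). This helper is a sketch for convenience, not part of the repository:

```python
import torch

def pick_device_and_dtype():
    """Prefer CUDA with bfloat16, then MPS with float16, else CPU with float16."""
    if torch.cuda.is_available():
        return "cuda", torch.bfloat16
    if torch.backends.mps.is_available():
        return "mps", torch.float16
    return "cpu", torch.float16

device, dtype = pick_device_and_dtype()
```

Use the result in both places that need it: `model = model.eval().to(device).to(dtype)` and `model.infer(..., device=device, dtype=dtype)`.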
