mlx-community/PaddleOCR-VL-bfloat16

This model was converted to MLX format from PaddlePaddle/PaddleOCR-VL using mlx-vlm version 0.3.10. Refer to the original model card for more details on the model.

Use from the MLX library
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("mlx-community/PaddleOCR-VL-bfloat16")
config = load_config("mlx-community/PaddleOCR-VL-bfloat16")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
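Since PaddleOCR-VL is an OCR model, a more representative prompt asks for text extraction rather than a description. Continuing from the snippet above, here is a minimal sketch assuming mlx-vlm also accepts local file paths in the image list and that generate takes a max_tokens keyword mirroring the CLI's --max-tokens flag (the file name is illustrative; check the mlx-vlm docs for your installed version):

# Hypothetical follow-up: extract text from a local page scan
# ("page_scan.png" is an illustrative path, not a file shipped with the model)
image = ["page_scan.png"]
prompt = "Extract all the text in this image."

formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# max_tokens is assumed to mirror the CLI's --max-tokens flag
output = generate(model, processor, formatted_prompt, image, max_tokens=512)
print(output)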


Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/PaddleOCR-VL-bfloat16 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
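For example, pointing the CLI at a local file (the path below is illustrative):

python -m mlx_vlm.generate --model mlx-community/PaddleOCR-VL-bfloat16 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image ./page_scan.png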