---
tags:
  - mlx
  - multimodal
  - image-text-to-text
  - document-parsing
  - qwen2_5_vl
license: other
base_model:
  - hf_model
---

# Dolphin-v2 MLX Conversion

This repository contains a local MLX conversion of hf_model, intended for inference on Apple Silicon.

## Important License Notice

The code in this repository may be MIT-licensed, but the model weights are not. The converted weights remain subject to the upstream Qwen RESEARCH LICENSE AGREEMENT.

This bundle is provided for non-commercial research or evaluation use only unless you separately obtain commercial rights from the upstream licensors.

## Required Attribution

Built with Qwen

## Conversion Details

- Source model: hf_model
- Quantization: 4-bit / group size 64 / affine mode
- Dtype: bfloat16
- Trust remote code: False
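The quantization scheme above can be illustrated with a minimal pure-Python sketch of 4-bit affine quantization at group size 64. This is illustrative only: MLX's real kernels pack the 4-bit codes into integer words and run on Metal, and the exact rounding details may differ.

```python
# Sketch of 4-bit affine (asymmetric) quantization with group size 64.
# Each group of 64 floats is mapped to integer codes in [0, 15] plus a
# per-group (scale, offset) pair. Illustrative; not MLX's actual kernel.

BITS = 4
GROUP_SIZE = 64
LEVELS = (1 << BITS) - 1  # 15 representable steps per group

def quantize_group(values):
    """Map one group of floats to 4-bit codes plus (scale, offset)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / LEVELS if hi > lo else 1.0
    codes = [round((v - lo) / scale) for v in values]
    return codes, scale, lo

def dequantize_group(codes, scale, lo):
    """Reconstruct approximate floats from codes and group parameters."""
    return [c * scale + lo for c in codes]
```

With this scheme the worst-case reconstruction error per value is about half a quantization step (scale / 2), which is why smaller group sizes trade extra metadata for better accuracy.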

## Included Compliance Files

- LICENSE.upstream.txt
- NOTICE
- UPSTREAM_MODEL_CARD.md
- PUBLISHING_CHECKLIST.md

## Local Usage

```bash
uv run python -m mlx_vlm.generate \
  --model . \
  --max-tokens 512 \
  --prompt "Parse the reading order of this document." \
  --image /absolute/path/to/page.png
```

## Publishing Guidance

Before publishing, confirm that:

  1. The intended release is non-commercial.
  2. The upstream license and notice files are included.
  3. Your model card prominently states "Built with Qwen".
  4. You clearly state that the repository contains converted derivative weights.
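A small script can sanity-check item 2 of the list above before publishing. This helper is hypothetical (not part of any shipped tooling); the file names are the ones listed in this README.

```python
# Hypothetical pre-publish check: verify that the compliance files
# named in this README are present in the repository directory.
from pathlib import Path

REQUIRED_FILES = [
    "LICENSE.upstream.txt",
    "NOTICE",
    "UPSTREAM_MODEL_CARD.md",
    "PUBLISHING_CHECKLIST.md",
]

def missing_compliance_files(repo_dir="."):
    """Return the names of required compliance files absent from repo_dir."""
    root = Path(repo_dir)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]
```

Run it from the repository root and refuse to publish if the returned list is non-empty.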