# Dolphin-v2 MLX Conversion
This repository contains a local MLX conversion of `hf_model` intended for Apple Silicon inference.
## Important License Notice
While the code in this repository is MIT-licensed, the model weights are not.
The converted weights remain subject to the upstream Qwen RESEARCH LICENSE AGREEMENT.
This bundle is provided for non-commercial research or evaluation use only unless you separately obtain commercial rights from the upstream licensors.
## Required Attribution
Built with Qwen
## Conversion Details
- Source model: `hf_model`
- Quantization: 4-bit, group size 64, affine mode
- Dtype: bfloat16
- Trust remote code: `False`
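
For reproducibility, the command below sketches how a conversion with these settings could be produced. This is an assumption-laden example: the `mlx_vlm.convert` entry point and the `--q-bits`, `--q-group-size`, and `--dtype` flags follow recent mlx-vlm releases and may differ in your installed version, and `hf_model` remains the upstream repository placeholder from this card.

```bash
# Sketch: 4-bit affine quantization, group size 64, bfloat16 dtype.
# Flag names follow recent mlx-vlm releases; verify with --help first.
uv run python -m mlx_vlm.convert \
  --hf-path hf_model \
  --mlx-path . \
  --dtype bfloat16 \
  -q --q-bits 4 --q-group-size 64
```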
## Included Compliance Files
- LICENSE.upstream.txt
- NOTICE
- UPSTREAM_MODEL_CARD.md
- PUBLISHING_CHECKLIST.md
## Local Usage
```bash
uv run python -m mlx_vlm.generate \
  --model . \
  --max-tokens 512 \
  --prompt "Parse the reading order of this document." \
  --image /absolute/path/to/page.png
```
## Publishing Guidance
Before publishing, confirm that:
- The intended release is non-commercial.
- The upstream license and notice files are included (see the check after this list).
- Your model card prominently states "Built with Qwen".
- You clearly state that the repository contains converted derivative weights.
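
As a convenience, the snippet below sketches a minimal pre-publish check for the items above. It assumes the compliance file names listed earlier and that the model card lives in README.md; adjust both to match your bundle.

```bash
# Sketch: fail fast if a compliance file or the attribution line is missing.
for f in LICENSE.upstream.txt NOTICE UPSTREAM_MODEL_CARD.md PUBLISHING_CHECKLIST.md; do
  [ -f "$f" ] || { echo "missing: $f"; exit 1; }
done
# Assumes the model card is README.md in the repository root.
grep -q "Built with Qwen" README.md || { echo "attribution missing from README.md"; exit 1; }
echo "pre-publish checks passed"
```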