Bug: MLX checkpoint config.json has wrong model_type "lfm2-vl" (should be "lfm2_vl")

#1
opened by Manojb

Bug

The config.json in this MLX 8-bit checkpoint uses "model_type": "lfm2-vl" (with hyphen), but HuggingFace transformers registers the architecture as "lfm2_vl" (with underscore).

The original bf16 checkpoint (LiquidAI/LFM2.5-VL-450M) correctly uses "lfm2_vl".

Steps to reproduce

from transformers import AutoModelForImageTextToText
model = AutoModelForImageTextToText.from_pretrained("LiquidAI/LFM2.5-VL-450M-MLX-8bit")
# ValueError: The checkpoint you are trying to load has model type `lfm2-vl`
# but Transformers does not recognize this architecture

Fix

Change "model_type": "lfm2-vl" to "model_type": "lfm2_vl" in config.json.
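Until the repos are updated, a local workaround is to download the checkpoint, patch config.json on disk, and load from the local directory. A minimal sketch of the patch step (the helper name and paths are illustrative, not part of any repo):

```python
import json
from pathlib import Path

def fix_model_type(config_path):
    """Rewrite "model_type": "lfm2-vl" to "lfm2_vl" in a checkpoint's config.json."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    if config.get("model_type") == "lfm2-vl":
        config["model_type"] = "lfm2_vl"
        path.write_text(json.dumps(config, indent=2))
    return config["model_type"]
```

After e.g. `huggingface_hub.snapshot_download("LiquidAI/LFM2.5-VL-450M-MLX-8bit")`, run `fix_model_type` on the downloaded config.json and pass the local directory to `from_pretrained`.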

Affected repos

  • LFM2.5-VL-450M-MLX-8bit
  • LFM2.5-VL-450M-MLX-6bit
  • LFM2.5-VL-450M-MLX-5bit
  • LFM2.5-VL-450M-MLX-4bit
  • LFM2.5-VL-450M-MLX-bf16

Additional issue

Even after fixing model_type, mlx-vlm 0.4.4 fails with:

ValueError: Received 2 parameters not in model: multi_modal_projector.layer_norm.bias, multi_modal_projector.layer_norm.weight

This looks like an mlx-vlm compatibility issue with the newer LFM2.5-VL architecture: the checkpoint contains projector LayerNorm tensors that mlx-vlm's model definition does not declare.
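Until mlx-vlm adds support for this architecture, one possible stopgap (a sketch only; dropping tensors is safe only if the projector in mlx-vlm's model genuinely has no such LayerNorm parameters) is to filter the offending entries out of the weight dict before handing it to the model's load routine:

```python
def drop_unexpected(weights, unexpected_keys):
    """Return a copy of a checkpoint weight dict without the tensors
    that mlx-vlm reports as 'not in model'. Not a proper fix: the real
    solution is an updated LFM2.5-VL definition in mlx-vlm itself."""
    skip = set(unexpected_keys)
    return {name: tensor for name, tensor in weights.items() if name not in skip}

# The two tensors reported by mlx-vlm 0.4.4 for this checkpoint:
UNEXPECTED = [
    "multi_modal_projector.layer_norm.bias",
    "multi_modal_projector.layer_norm.weight",
]
```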

Environment

  • transformers 5.6.0.dev0
  • mlx-vlm 0.4.4
  • Mac Mini M4 16GB
  • macOS Darwin 24.3.0
