LFM2.5-VL-1.6B CoreML (ANE Native)

A CoreML (ML Program) export of LFM2.5-VL-1.6B for Apple Silicon with a fixed 4096-token context window. The bundle includes the text decoder along with the split multimodal vision assets.

Swift inference package and CLI: https://github.com/mweinbach/LFM2.5-VL-1.6B-ANE-Native

Included files

  • CoreMLModels/Embeddings.mlpackage
  • CoreMLModels/LMHead.mlpackage
  • CoreMLModels/DecoderChunk{0-3}Prefill.mlpackage
  • CoreMLModels/DecoderChunk{0-3}Decode.mlpackage
  • CoreMLModels/VisionPatchEmbedding.mlpackage
  • CoreMLModels/VisionEncoder.mlpackage
  • CoreMLModels/VisionProjector.mlpackage
  • CoreMLModels/VisionPositionEmbeddings.float16.bin
  • CoreMLModels/meta.json
  • tokenizer.json
  • tokenizer_config.json
  • config.json
  • processor_config.json
  • generation_config.json
  • chat_template.jinja

Runtime characteristics

  • 4096-token fixed context window
  • chunked decoder export for prompt prefill + single-token decode
  • split vision path: patch embedding + encoder + projector
  • supports multi-image prompts and the model's tiling/thumbnail preprocessing flow
  • Apple CoreML / ML Program target
  • tokenizer and chat template included for local execution
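As a rough sketch of how one of these exported assets is consumed on-device: CoreML requires compiling an .mlpackage to an .mlmodelc before it can be loaded. The snippet below is standard CoreML usage, not this repo's exact code; the path and the choice of Embeddings.mlpackage are illustrative, and the Swift package linked above handles all of this for you.

```swift
import CoreML
import Foundation

// Sketch: compile and load one exported .mlpackage (paths assumed).
let bundleRoot = URL(fileURLWithPath: "/path/to/LFM2.5-VL-1.6B-CoreML")
let embeddingsURL = bundleRoot.appendingPathComponent("CoreMLModels/Embeddings.mlpackage")

let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // prefer the ANE where available

// Compile the ML Program package, then load the compiled model.
let compiledURL = try MLModel.compileModel(at: embeddingsURL)
let embeddings = try MLModel(contentsOf: compiledURL, configuration: config)

// Inspect the model's declared inputs.
print(embeddings.modelDescription.inputDescriptionsByName.keys)
```

Setting `computeUnits` to `.cpuAndNeuralEngine` asks CoreML to schedule work on the Neural Engine where the ML Program supports it, falling back to CPU otherwise.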

Usage

Clone or download this repo, then point the Swift CLI at the downloaded folder.
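If you prefer fetching the bundle from the command line, the Hugging Face CLI can download the whole repo into a local folder. The repo id below is assumed from this model card's path; flags may vary slightly across huggingface_hub versions.

```shell
# Sketch: download the full CoreML bundle locally (repo id assumed)
huggingface-cli download mweinbach/LFM2.5-VL-1.6B-CoreML \
  --local-dir ./LFM2.5-VL-1.6B-CoreML
```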

Text-only example:

swift run ANEInferenceCLI \
  --bundle-root /path/to/LFM2.5-VL-1.6B-CoreML \
  --prompt "Summarize the benefits of on-device inference" \
  --max-new-tokens 64

Multimodal example:

swift run ANEInferenceCLI \
  --bundle-root /path/to/LFM2.5-VL-1.6B-CoreML \
  --image /path/to/image.png \
  --prompt "Describe the image briefly." \
  --max-new-tokens 64

