---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
  - en
  - ar
  - zh
  - fr
  - de
  - ja
  - ko
  - es
pipeline_tag: text-generation
tags:
  - liquid
  - lfm2.5
  - edge
  - mlx
base_model: LiquidAI/LFM2.5-350M
---

# LFM2.5-350M-MLX-4bit

MLX export of LFM2.5-350M for Apple Silicon inference.

LFM2.5-350M is a compact multilingual base model built on LiquidAI's hybrid architecture, combining convolutional and attention layers for efficient long-context processing.
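
To see why such a hybrid helps, here is a minimal NumPy sketch contrasting the two layer types: a causal depthwise short convolution, whose cost grows linearly with sequence length, and causal self-attention, whose cost grows quadratically. This is illustrative only and does not reproduce LFM2.5's actual layers, gating, or kernel sizes.

```python
import numpy as np

def causal_short_conv(x, w):
    """Depthwise causal convolution: each output position mixes only the
    current and previous k-1 positions, so cost is linear in sequence length."""
    k, d = w.shape
    padded = np.vstack([np.zeros((k - 1, d)), x])
    return np.stack([(padded[t:t + k] * w).sum(axis=0) for t in range(len(x))])

def causal_attention(x, wq, wk, wv):
    """Single-head causal self-attention: every position attends to all
    earlier positions, so cost is quadratic in sequence length."""
    q, k_, v = x @ wq, x @ wk, x @ wv
    scores = q @ k_.T / np.sqrt(x.shape[1])
    scores[np.triu(np.ones(scores.shape, dtype=bool), 1)] = -np.inf
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return p @ v

rng = np.random.default_rng(0)
seq, dim, kernel = 8, 16, 3
x = rng.normal(size=(seq, dim))
w_conv = rng.normal(size=(kernel, dim))
y_conv = causal_short_conv(x, w_conv)
y_attn = causal_attention(x, *(rng.normal(size=(dim, dim)) for _ in range(3)))
```

Both layers are causal, so neither output at position t depends on later inputs; the convolution simply buys that property at a much lower cost over long contexts.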

## Model Details

| Property | Value |
| --- | --- |
| Parameters | 350M |
| Precision | 4-bit |
| Group Size | 64 |
| Size | 212 MB |
| Context Length | 128K |
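
The Precision and Group Size rows describe group-wise affine quantization: weights are split into groups of 64, and each group is mapped onto 4-bit integers with its own scale and offset. The sketch below shows the idea in plain Python; it is not MLX's actual quantization code, whose storage format and rounding details differ.

```python
import random

GROUP_SIZE = 64   # "Group Size" row above
BITS = 4          # "Precision" row above

def quantize_group(weights):
    # Affine quantization: map the group's [min, max] range
    # onto the 2**BITS integer levels 0..15.
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2**BITS - 1) or 1.0
    return [round((w - lo) / scale) for w in weights], scale, lo

def dequantize_group(q, scale, lo):
    return [qi * scale + lo for qi in q]

random.seed(0)
group = [random.gauss(0, 1) for _ in range(GROUP_SIZE)]
q, scale, lo = quantize_group(group)
restored = dequantize_group(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(group, restored))  # bounded by scale / 2

# Rough size check: 4 bits per weight plus a 16-bit scale and 16-bit offset
# shared by each group of 64 gives 4 + 32/64 = 4.5 bits/weight, so
# 350e6 * 4.5 / 8 ≈ 197 MB -- consistent with the 212 MB above once
# metadata and any tensors kept at higher precision are counted.
```

The per-group scale bounds the rounding error: no dequantized weight is off by more than half a quantization step.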

## Use with mlx-lm

```shell
pip install mlx-lm
```
```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("LiquidAI/LFM2.5-350M-MLX-4bit")

response = generate(
    model,
    tokenizer,
    prompt="The capital of France is",
    max_tokens=100,
    sampler=make_sampler(temp=0.7),
    verbose=True,
)
```

## Other Precisions

## License

This model is released under the LFM 1.0 License.