---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
- mlx
base_model: LiquidAI/LFM2.5-350M
---
# LFM2.5-350M-MLX-4bit

MLX export of LFM2.5-350M for Apple Silicon inference.

LFM2.5-350M is a compact multilingual base model built on LiquidAI's hybrid architecture, combining convolutional and attention layers for efficient long-context processing.
## Model Details
| Property | Value |
|---|---|
| Parameters | 350M |
| Precision | 4-bit |
| Group Size | 64 |
| Size | 212 MB |
| Context Length | 128K |
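The listed size follows from the quantization settings above. As a rough sketch (assuming one 16-bit scale and one 16-bit bias stored per group of 64 weights, and ignoring components kept at higher precision), each weight costs about 4.5 bits:

```python
# Back-of-the-envelope size estimate for group quantization.
# Assumes a 16-bit scale and a 16-bit bias per group of weights;
# layers stored at higher precision are not modeled here.

def quantized_bytes(n_params: float, bits: int = 4, group_size: int = 64) -> float:
    """Approximate on-disk size of a group-quantized model in bytes."""
    overhead_bits = (16 + 16) / group_size   # per-weight share of scale + bias
    bits_per_weight = bits + overhead_bits   # 4 + 0.5 = 4.5 bits here
    return n_params * bits_per_weight / 8

size_mb = quantized_bytes(350e6) / 1e6
print(f"{size_mb:.0f} MB")  # ≈ 197 MB
```

This gives roughly 197 MB for the weights alone; the 212 MB in the table also covers parts of the model stored at higher precision.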
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("LiquidAI/LFM2.5-350M-MLX-4bit")

response = generate(
    model,
    tokenizer,
    prompt="The capital of France is",
    max_tokens=100,
    sampler=make_sampler(temp=0.7),
    verbose=True,
)
```
## Other Precisions
- LFM2.5-350M-MLX-bf16 (676 MB)
- LFM2.5-350M-MLX-8bit (381 MB)
- LFM2.5-350M-MLX-6bit (296 MB)
- LFM2.5-350M-MLX-5bit (254 MB)
- LFM2.5-350M-MLX-4bit (212 MB)
## License
This model is released under the LFM 1.0 License.