Susant-Achary committed
Commit 3f14c1e · verified · 1 Parent(s): b199369

Update README.md

Files changed (1):
  1. README.md +57 -17

README.md CHANGED
@@ -1,23 +1,63 @@
  ---
- library_name: mlx
- license: other
- license_name: lfm1.0
- license_link: LICENSE
+ model-index:
+ - name: LFM2-8B-A1B — MLX (Apple Silicon), **8-bit**
+   results: []
+ license: apache-2.0
  language:
  - en
- - ar
- - zh
- - fr
- - de
- - ja
- - ko
- - es
- pipeline_tag: text-generation
  tags:
- - liquid
- - lfm2
- - edge
- - moe
  - mlx
- base_model: LiquidAI/LFM2-8B-A1B
+ - apple-silicon
+ - text-generation
+ - 8bit
+ - quantized
+ - 8b
+ - MoE
+ - Mixture of Experts
+ pipeline_tag: text-generation
+ library_name: mlx
  ---
+
+ # LFM2-8B-A1B — **MLX 8-bit** (Apple Silicon)
+ **Maintainer / Publisher:** [**Susant Achary**](https://huggingface.co/Susant-Achary)
+
+ This repository provides an **Apple-Silicon-optimized MLX build** of **LFM2-8B-A1B** with **8-bit** weight quantization.
+ The goal is a **drop-in, on-device** experience on M-series Macs, with the **highest fidelity** among the quantized variants while keeping load times short and setup simple.
+
+ > Source model: `mlx-community/LFM2-8B-A1B-8bit-MLX` (Apache-2.0).
+ > Format: **MLX** (Metal/MPS), ready for `mlx_lm.generate`.
+
+ ---
+
+ ## 🔎 Model at a glance
+
+ - **Type:** ~8B-parameter **Mixture-of-Experts (MoE)** decoder-only language model, with roughly 1B parameters active per token (the "A1B" in the name).
+ - **This build:** **8-bit** quantized **MLX** weights for fast, Apple-native inference.
+ - **Typical uses:** instruction following, summarization, drafting, QA, basic code/text utilities.
+
+ > If you need a smaller RAM footprint on older/lower-RAM Macs, consider lower-bit MLX builds (4/5/6-bit). If you want the **closest behavior to FP16** while staying in MLX, **8-bit** is the right choice; a rough memory estimate follows below.
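+
+ As a back-of-the-envelope sketch (an illustration, not a benchmark: it assumes ~8B total parameters from the model name and ignores KV cache, activations, and quantization-scale overhead), weight memory scales roughly linearly with bit-width:
+
+ ```python
+ # Back-of-the-envelope weight-memory estimate per bit-width.
+ # Assumption: ~8e9 total parameters (from the "8B" in the model name).
+ # Real usage is higher: KV cache, activations, and group scales add overhead.
+ PARAMS = 8e9
+
+ for bits in (4, 5, 6, 8, 16):
+     gib = PARAMS * bits / 8 / 1024**3
+     print(f"{bits:>2}-bit weights: ~{gib:.1f} GiB")
+ ```
+
+ By this estimate the 8-bit weights alone occupy on the order of 7-8 GiB, which is why lower-bit builds are suggested for lower-RAM machines.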
+
+ ---
+
+ ## 📦 Files in this repo
+
+ - `config.json` (MLX config)
+ - `mlx_model*.safetensors` (**8-bit** sharded weights)
+ - `tokenizer.json`, `tokenizer_config.json`
+ - `model_index.json` and basic metadata
+
+ All assets are arranged for **direct loading** via `mlx_lm`.
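+
+ For example, the weights can be pulled straight from the Hub (a minimal sketch assuming `mlx-lm` is installed, e.g. via `pip install mlx-lm`; the first call downloads the shards, later calls reuse the local cache):
+
+ ```python
+ from mlx_lm import load
+
+ # Downloads (or reuses the cached copy of) the MLX weights and tokenizer.
+ model, tokenizer = load("mlx-community/LFM2-8B-A1B-8bit-MLX")
+ print("loaded:", type(model).__name__)
+ ```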
+
+ ---
+
+ ## 🚀 Quickstart (CLI — MLX)
+
+ **Deterministic generation**
+ ```bash
+ # MLX runs on Apple's Metal GPU automatically, so no device flag is needed;
+ # mlx_lm.generate's sampling flag is --temp (0.0 gives greedy decoding).
+ python -m mlx_lm.generate \
+   --model mlx-community/LFM2-8B-A1B-8bit-MLX \
+   --prompt "Summarize the following notes into 5 bullet points:\n<your text>" \
+   --max-tokens 256 \
+   --temp 0.0 \
+   --seed 0
+ ```
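+
+ The same run can be scripted from Python. A minimal sketch following the usual `mlx_lm` pattern (the chat-template branch assumes the tokenizer ships one, as instruction-tuned builds normally do):
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/LFM2-8B-A1B-8bit-MLX")
+
+ prompt = "Summarize the following notes into 5 bullet points:\n<your text>"
+ if tokenizer.chat_template is not None:
+     # Wrap the request in the model's chat format before generating.
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
+
+ text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
+ ```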