---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ja
- ko
- fr
- es
- de
- it
- pt
- ar
- zh
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
- mlx
base_model: LiquidAI/LFM2.5-1.2B-Instruct
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png" alt="Liquid AI" style="width: 100%; max-width: 100%;">

<p>
<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> •
<a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> •
<a href="https://leap.liquid.ai/"><strong>LEAP</strong></a> •
<a href="https://www.liquid.ai/blog/"><strong>Blog</strong></a>
</p>
</div>

# LFM2.5-1.2B-Instruct-4bit

MLX export of [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) for Apple Silicon inference.

## Model Details

| Property | Value |
|----------|-------|
| Parameters | 1.2B |
| Precision | 4-bit |
| Quantization group size | 64 |
| Size on disk | 628 MB |
| Context length | 128K tokens |
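
The table above implies a 4-bit, group-size-64 quantization of the base checkpoint. As a sketch only, this is how such an export is typically produced with mlx-lm's `mlx_lm.convert` tool; the exact command used for this repository is not stated in this card, and the output path is hypothetical:

```bash
# Hypothetical re-export: quantize the original checkpoint to 4-bit
# with group size 64, matching the settings in the table above
mlx_lm.convert \
    --hf-path LiquidAI/LFM2.5-1.2B-Instruct \
    --mlx-path ./LFM2.5-1.2B-Instruct-4bit \
    -q --q-bits 4 --q-group-size 64
```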

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-4bit")

prompt = "What is the capital of France?"

# Apply the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
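
For quick checks, the model can also be run from the command line via mlx-lm's `mlx_lm.generate` entry point. A minimal sketch; the prompt and token budget here are illustrative, not part of this card:

```bash
# One-off generation without writing any Python
mlx_lm.generate \
    --model LiquidAI/LFM2.5-1.2B-Instruct-4bit \
    --prompt "What is the capital of France?" \
    --max-tokens 128
```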

## License

This model is released under the [LFM 1.0 License](LICENSE).