---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ja
- ko
- fr
- es
- de
- it
- pt
- ar
- zh
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
- mlx
base_model: LiquidAI/LFM2.5-1.2B-Instruct
---

# LFM2.5-1.2B-Instruct-8bit

This model was converted to MLX format from [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct).

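A conversion along these lines should reproduce this repository. This is a sketch, not the exact command used: it assumes the `mlx_lm.convert` CLI and takes the 8-bit precision and group size of 64 from the table below.

```shell
# Quantize the base model to 8-bit with group size 64 (values from the table below)
mlx_lm.convert \
    --hf-path LiquidAI/LFM2.5-1.2B-Instruct \
    --mlx-path LFM2.5-1.2B-Instruct-8bit \
    -q --q-bits 8 --q-group-size 64
```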
## Model Details

| Property | Value |
|----------|-------|
| Parameters | 1.2B |
| Precision | 8-bit |
| Group Size | 64 |
| Size | 1.2 GB |
| Context Length | 128K |

## Use with mlx-lm

Install the `mlx-lm` package:

```bash
pip install mlx-lm
```

Then load the model and generate a response:

```python
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-8bit")

prompt = "What is the capital of France?"

# Apply the chat template when the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
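For quick checks without writing any Python, `mlx-lm` also ships a command-line generator; a minimal invocation (model path as in the snippet above) looks like:

```shell
# One-shot generation from the terminal; the chat template is applied automatically
mlx_lm.generate \
    --model LiquidAI/LFM2.5-1.2B-Instruct-8bit \
    --prompt "What is the capital of France?"
```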

## License

This model is released under the [LFM 1.0 License](LICENSE).