ykhrustalev committed
Commit 224f4e5 · verified · 1 Parent(s): d2b195f

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +24 -3
README.md CHANGED
@@ -44,10 +44,19 @@ MLX export of [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B
 |----------|-------|
 | Parameters | 1.2B |
 | Precision | 8-bit |
-| Group Size | 64 |
-| Size | 1.2 GB |
+| Group Size | 64 || Size | 1.2 GB |
 | Context Length | 128K |
 
+## Recommended Sampling Parameters
+
+| Parameter | Value |
+|-----------|-------|
+| temperature | 0.1 |
+| top_k | 50 |
+| top_p | 0.1 |
+| repetition_penalty | 1.05 |
+| max_tokens | 512 |
+
 ## Use with mlx
 
 ```bash
@@ -56,6 +65,7 @@ pip install mlx-lm
 
 ```python
 from mlx_lm import load, generate
+from mlx_lm.sample_utils import make_sampler, make_logits_processors
 
 model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-8bit")
 
@@ -67,7 +77,18 @@ if tokenizer.chat_template is not None:
         messages, tokenize=False, add_generation_prompt=True
     )
 
-response = generate(model, tokenizer, prompt=prompt, verbose=True)
+sampler = make_sampler(temp=0.1, top_k=50, top_p=0.1)
+logits_processors = make_logits_processors(repetition_penalty=1.05)
+
+response = generate(
+    model,
+    tokenizer,
+    prompt=prompt,
+    max_tokens=512,
+    sampler=sampler,
+    logits_processors=logits_processors,
+    verbose=True,
+)
 ```
 
 ## License
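For context on what the newly documented sampling parameters actually do, here is a minimal plain-Python sketch of the conventional temperature / top-k / top-p filtering and repetition-penalty rules. This is an illustration only, not mlx-lm's actual implementation; the function names `filter_and_normalize` and `apply_repetition_penalty` are made up for this example.

```python
import math

def apply_repetition_penalty(logits, generated_ids, penalty=1.05):
    """Down-weight tokens that already appeared (the common CTRL-style rule):
    positive logits are divided by the penalty, negative ones multiplied."""
    out = list(logits)
    for t in set(generated_ids):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

def filter_and_normalize(logits, temperature=0.1, top_k=50, top_p=0.1):
    """Temperature-scale the logits, keep the top_k candidates, then keep the
    smallest high-probability prefix whose cumulative mass reaches top_p.
    Returns {token_id: probability} over the surviving tokens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    probs = [math.exp(l - m) for l in scaled]
    z = sum(probs)
    probs = [p / z for p in probs]
    # top-k: consider only the k most probable tokens
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = order[:top_k]
    # top-p (nucleus): walk the sorted survivors until mass >= top_p
    nucleus, cum = [], 0.0
    for i in keep:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    z = sum(probs[i] for i in nucleus)
    return {i: probs[i] / z for i in nucleus}

# With temperature 0.1 and top_p 0.1 the filtered distribution is sharply
# peaked, so decoding behaves close to greedy selection.
dist = filter_and_normalize([2.0, 1.0, 0.5, -1.0], temperature=0.1, top_k=50, top_p=0.1)
```

With these low temperature and top_p values the nucleus typically collapses to a single token, which matches the deterministic, instruction-following behavior the README's recommended settings aim for.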