ykhrustalev committed · verified
Commit 548319d · 1 Parent(s): f7f6966

Upload README.md with huggingface_hub

Files changed (1): README.md (+23 −1)
README.md CHANGED

@@ -47,6 +47,16 @@ MLX export of [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B
 | Size | 2.2 GB |
 | Context Length | 128K |
 
+## Recommended Sampling Parameters
+
+| Parameter | Value |
+|-----------|-------|
+| temperature | 0.1 |
+| top_k | 50 |
+| top_p | 0.1 |
+| repetition_penalty | 1.05 |
+| max_tokens | 512 |
+
 ## Use with mlx
 
 ```bash
@@ -55,6 +65,7 @@ pip install mlx-lm
 
 ```python
 from mlx_lm import load, generate
+from mlx_lm.sample_utils import make_sampler, make_logits_processors
 
 model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-bf16")
 
@@ -66,7 +77,18 @@ if tokenizer.chat_template is not None:
         messages, tokenize=False, add_generation_prompt=True
     )
 
-response = generate(model, tokenizer, prompt=prompt, verbose=True)
+sampler = make_sampler(temp=0.1, top_k=50, top_p=0.1)
+logits_processors = make_logits_processors(repetition_penalty=1.05)
+
+response = generate(
+    model,
+    tokenizer,
+    prompt=prompt,
+    max_tokens=512,
+    sampler=sampler,
+    logits_processors=logits_processors,
+    verbose=True,
+)
 ```
 
 ## License
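
For readers unfamiliar with the sampling parameters this commit recommends, here is a minimal sketch of what temperature, top_k, top_p, and repetition_penalty each do to a logit vector before a token is drawn. This is plain NumPy for illustration only, not mlx-lm's actual `make_sampler`/`make_logits_processors` implementation; the function name and structure are hypothetical.

```python
import numpy as np

def shape_logits(logits, generated_ids, temp=0.1, top_k=50, top_p=0.1,
                 repetition_penalty=1.05):
    """Illustrative sketch (not mlx-lm's code) of how the recommended
    sampling parameters reshape logits into a final probability vector."""
    logits = logits.astype(np.float64).copy()

    # Repetition penalty: dampen tokens that were already generated.
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= repetition_penalty
        else:
            logits[tok] *= repetition_penalty

    # Temperature: a low value like 0.1 sharpens the distribution.
    logits /= temp

    # Top-k: keep only the k highest-scoring tokens.
    if top_k < logits.size:
        kth = np.sort(logits)[-top_k]
        logits[logits < kth] = -np.inf

    # Top-p (nucleus): keep the smallest set of tokens whose
    # cumulative probability reaches p (always at least one token).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    logits[order[cutoff:]] = -np.inf

    # Renormalize the surviving tokens into a probability vector.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs
```

With the defaults above (temp 0.1, top_p 0.1), the shaped distribution is nearly greedy, which matches the instruct model's low-temperature recommendation.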