---
license: apache-2.0
datasets:
- livecodebench/code_generation_lite
- agentica-org/DeepCoder-Preview-Dataset
- NousResearch/lcb_test
- NousResearch/RLVR_Coding_Problems
base_model: NousResearch/NousCoder-14B
pipeline_tag: text-generation
tags:
- mlx
---

# ncls-p/NousCoder-14B-mlx-8Bit

The model [ncls-p/NousCoder-14B-mlx-8Bit](https://huggingface.co/ncls-p/NousCoder-14B-mlx-8Bit) was converted to MLX format from [NousResearch/NousCoder-14B](https://huggingface.co/NousResearch/NousCoder-14B) using mlx-lm version **0.29.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the 8-bit quantized model and its tokenizer.
model, tokenizer = load("ncls-p/NousCoder-14B-mlx-8Bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the model
# sees the conversation format it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
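
mlx-lm also installs a command-line entry point, so you can run a quick generation without writing any Python. A minimal sketch, assuming a recent mlx-lm release where `mlx_lm.generate` accepts `--model` and `--prompt`:

```bash
# One-off generation from the terminal; the model is fetched from the Hub on first use.
mlx_lm.generate --model ncls-p/NousCoder-14B-mlx-8Bit \
  --prompt "Write a Python function that reverses a string."
```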