ccckblaze committed · verified
Commit 917ee59 · Parent: 08a4c60

Update README.md

Files changed (1): README.md (+33 −4)
README.md CHANGED
@@ -1,7 +1,36 @@
  ---
  language:
  - en
- base_model:
- - Qwen/Qwen2.5-Coder-7B-Instruct
- library_name: mlx
- ---

  ---
  language:
  - en
+ license: apache-2.0
+ tags:
+ - chat
+ - mlx
+ base_model: Qwen/Qwen2.5-Coder-7B-Instruct
+ pipeline_tag: text-generation
+ ---
+
+ # ccckblaze/Qwen2.5-Coder-7B-Instruct-bf16-MLX
+
+ The model [ccckblaze/Qwen2.5-Coder-7B-Instruct-bf16-MLX](https://huggingface.co/ccckblaze/Qwen2.5-Coder-7B-Instruct-bf16-MLX/) was converted to MLX format from [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) using mlx-lm version **0.22.4**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ # Load this repo's MLX-converted weights and tokenizer
+ model, tokenizer = load("ccckblaze/Qwen2.5-Coder-7B-Instruct-bf16-MLX")
+
+ prompt = "hello"
+
+ # Wrap the prompt in the model's chat template when one is available
+ if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(
+         messages, tokenize=False, add_generation_prompt=True
+     )
+
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
+ ```
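As context for the `apply_chat_template` branch in the diff above: Qwen-family instruct models use a ChatML-style chat template, so with `tokenize=False` and `add_generation_prompt=True` the call renders the messages as role-tagged text ending in an open assistant turn. A minimal stand-in sketch of that format (an illustration only, not the exact template shipped with the model):

```python
# Stand-in for tokenizer.apply_chat_template(..., tokenize=False,
# add_generation_prompt=True), assuming a ChatML-style template as used by
# Qwen-family instruct models. Illustration only, not the real template.

def chatml_template(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} messages as ChatML text."""
    text = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    if add_generation_prompt:
        # Open an assistant turn so the model continues from this point.
        text += "<|im_start|>assistant\n"
    return text

rendered = chatml_template([{"role": "user", "content": "hello"}])
print(rendered)
```

Passing the rendered string (rather than the raw `"hello"`) to `generate` is what keeps the instruct model answering in chat style instead of plain text continuation.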