---
license: apache-2.0
language:
- en
- it
- zh
- fr
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
tags:
- llamafile
- chat
- exe
---

# Qwen2.5-1.5B-Instruct-GGUF - llamafile

- Model creator: [Fabio Matricardi](https://huggingface.co/FM-1976)
- Original model: [Qwen/Qwen2.5-1.5B-Instruct-GGUF](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct-GGUF)

Fabio Matricardi packaged the Qwen 2.5 models into executable weights that we
call [llamafiles](https://github.com/Mozilla-Ocho/llamafile). This gives
you the easiest and fastest way to use the model on Linux, macOS, Windows,
FreeBSD, OpenBSD, and NetBSD systems you control, on both AMD64 and ARM64.

*Software Last Updated: 2025-03-31*

*Llamafile Version: 0.9.2*

*The executable starts with a context window set to 24k tokens.*

## Quickstart

To get started, you need both the Qwen 2.5 weights and the llamafile
software. Both are included in a single file, which can be downloaded and
run as follows:

```
wget -O QwenPortable.llamafile https://huggingface.co/Mozilla/Qwen2.5-7B-Instruct-1M-llamafile/resolve/main/Qwen2.5-7B-Instruct-1M-Q6_K.llamafile
chmod +x QwenPortable.llamafile
./QwenPortable.llamafile
```

For Windows users: simply rename `QwenPortable.llamafile` to `QwenPortable.exe` and run it.

The default mode of operation for these llamafiles is our new command-line
chatbot interface. At the same time, a web interface is available at
`http://127.0.0.1:8080/`, which is also exposed to your local network.

An OpenAI-compatible API endpoint server will be listening at `http://localhost:8080/v1`.
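
As a quick smoke test of that endpoint, you can send it a single request with
the `openai` Python package (`pip install openai`). A minimal sketch, assuming
the server is running on the default port shown above; the model name is a
placeholder, since the local server currently ignores that field:

```python
# Minimal sketch: one-shot request to the local OpenAI-compatible endpoint.
# Assumes the llamafile executable is already running on the default port 8080.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="local-model",  # placeholder; the local server ignores this field
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```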

## Usage

You can use triple quotes to ask questions on multiple lines. You can
pass commands like `/stats` and `/context` to see runtime status
information. You can change the system prompt by passing the `-p "new
system prompt"` flag. You can press CTRL-C to interrupt the model.
Finally, CTRL-D may be used to exit.

If you prefer to use a web GUI, a `--server` mode is provided that will
open a tab with a chatbot and completion interface in your browser. For
additional help on how it may be used, pass the `--help` flag. The server
also has an OpenAI API compatible completions endpoint that can be
accessed via Python using the `openai` pip package.

```
When you launch the executable, the OpenAI API server is started automatically.
```

An advanced CLI mode is provided that's useful for shell scripting. You
can use it by passing the `--cli` flag. For additional help on how it
may be used, pass the `--help` flag.

```
./QwenPortable.llamafile --cli -p 'four score and seven' --log-disable
```
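
The same one-shot invocation can also be scripted from Python with the
standard `subprocess` module. A minimal sketch, assuming
`QwenPortable.llamafile` is present and executable in the current directory
(see the Quickstart above):

```python
# Sketch: capture a one-shot completion from the llamafile advanced CLI mode.
# Assumes ./QwenPortable.llamafile exists and is executable (see Quickstart).
import subprocess

result = subprocess.run(
    ["./QwenPortable.llamafile", "--cli", "-p", "four score and seven", "--log-disable"],
    capture_output=True,  # collect stdout instead of streaming it to the terminal
    text=True,            # decode the output as text
    check=True,           # raise CalledProcessError on a non-zero exit status
)
print(result.stdout.strip())
```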

## Quickstart with Python and the OpenAI API endpoint

The following snippet shows how to chat with the locally served model
through its OpenAI-compatible endpoint using the `openai` Python package.

```python
# Chat with an intelligent assistant in your terminal
from openai import OpenAI
import sys

# Point to the local server
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

history = [
    {"role": "system", "content": "You are QWEN-PORTABLE, an intelligent assistant. You always provide well-reasoned answers that are both correct and helpful. Always reply in the language of the instructions."},
    {"role": "user", "content": "Hello, introduce yourself to someone opening this program for the first time. Be concise."},
]

print("\033[92;1m")  # bright green for the assistant's replies
while True:
    userinput = ""
    completion = client.chat.completions.create(
        model="local-model",  # this field is currently unused
        messages=history,
        temperature=0.3,
        frequency_penalty=1.4,
        max_tokens=600,
        stream=True,
    )

    # Stream the reply token by token and accumulate it
    new_message = {"role": "assistant", "content": ""}
    for chunk in completion:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
            new_message["content"] += chunk.choices[0].delta.content
    history.append(new_message)

    print("\033[1;30m")  # dark grey
    print("Enter your text (end input with Ctrl+D on Unix or Ctrl+Z on Windows) - type quit! to exit the chatroom:")
    print("\033[91;1m")  # red for user input
    lines = sys.stdin.readlines()
    for line in lines:
        userinput += line  # lines already end with "\n"
    if "quit!" in userinput.lower():
        print("\033[0mBYE BYE!")
        break
    # Reset to the system prompt plus the latest user turn only
    history = [
        {"role": "system", "content": "You are an intelligent assistant. You always provide well-reasoned answers that are both correct and helpful."},
    ]
    history.append({"role": "user", "content": userinput})
    print("\033[92;1m")  # back to green
```

## Context Window

This model has a maximum context window size of 32,768 tokens; by default,
this llamafile starts with a context window of 24k tokens. You can ask
llamafile to use the maximum context size by passing the `-c 0` flag. That's
big enough for a small book. If you want to be able to have a conversation
with your book, you can use the `-f book.txt` flag.

## GPU Acceleration

On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card
driver needs to be installed if you own an NVIDIA GPU. On Windows, if
you have an AMD GPU, you should install the ROCm SDK v6.1 and then pass
the flags `--recompile --gpu amd` the first time you run your llamafile.

On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to
perform matrix multiplications. This is open source software, but it
doesn't go as fast as closed source cuBLAS. If you have the CUDA SDK
installed on your system, then you can pass the `--recompile` flag to
build a GGML CUDA library just for your system that uses cuBLAS. This
ensures you get maximum performance.

For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).

## About llamafile

llamafile is a new format introduced by Mozilla on Nov 20th, 2023. It
uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.

---

# Qwen2.5-1.5B-Instruct-GGUF

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 1.5B Qwen2.5 model in the GGUF format**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV (see the cache-size sketch after this list)
- Context Length: Full 32,768 tokens, with generation of up to 8,192 tokens
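
The small number of KV heads is what keeps long contexts affordable in
memory. Below is a back-of-the-envelope sketch of the fp16 KV-cache footprint
implied by the figures above; the head dimension of 128 (hidden size 1536
divided by the 12 query heads) is an assumption taken from the published
Qwen2.5-1.5B configuration, not stated on this card:

```python
# Rough fp16 KV-cache estimate from the spec list above.
# Assumption (not on this card): head_dim = 128, i.e. hidden size 1536 / 12 query heads.
layers = 28          # Number of Layers
kv_heads = 2         # GQA: 2 heads for K and V
head_dim = 128       # assumed head dimension
bytes_per_entry = 2  # fp16

def kv_cache_bytes(context_tokens: int) -> int:
    # The factor of 2 covers the separate K and V tensors in every layer.
    return 2 * layers * kv_heads * head_dim * bytes_per_entry * context_tokens

for ctx in (8_192, 24_576, 32_768):
    print(f"{ctx:>6} tokens -> {kv_cache_bytes(ctx) / 2**20:4.0f} MiB")
```

With only 2 KV heads instead of 12, even the full 32k context needs under 1 GiB of cache.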

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For quantized models, the benchmark results against the original bfloat16 models can be found [here](https://qwen.readthedocs.io/en/latest/benchmark/quantization_benchmark.html).

For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```