TheBloke committed · Commit 900b2d9 · 1 parent: 2ed9430

Upload README.md

Files changed (1): README.md (+54 -19)
 
Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source, locally-running GUI, supporting Windows, Linux and macOS, with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->

Refer to the Provided Files table below to see what files use which methods, and how.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [daringfortitude.Q3_K_M.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB | 8.84 GB | very small, high quality loss |
| [daringfortitude.Q3_K_L.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB | 9.43 GB | small, substantial quality loss |
| [daringfortitude.Q4_0.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB | 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [daringfortitude.Q4_K_S.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB | 9.92 GB | small, greater quality loss |
| [daringfortitude.Q4_K_M.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB | 10.37 GB | medium, balanced quality - recommended |
| [daringfortitude.Q5_0.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB | 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [daringfortitude.Q5_K_S.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB | 11.47 GB | large, low quality loss - recommended |

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

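For example, to fetch all the Q4_K variants in one command - a sketch that relies on `huggingface-cli download`'s `--include` glob filter and `--local-dir` option; adjust the pattern and target directory to your needs:

```shell
huggingface-cli download TheBloke/DaringFortitude-GGUF --include "*Q4_K*gguf" --local-dir .
```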
 
 
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m daringfortitude.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p "{prompt}"` argument with `-i -ins`.
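
For instance, the command above becomes the following, unchanged apart from that swap (`-i -ins` drops into llama.cpp's interactive instruct mode):

```shell
./main -ngl 35 -m daringfortitude.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```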
 
 
## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:
 
 

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVIDIA CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBlast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, set the CMAKE_ARGS variable in PowerShell before installing; e.g. for NVIDIA CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
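
To check that the build you installed imports correctly - an optional sanity check; the package exposes a `__version__` attribute:

```shell
python3 -c "import llama_cpp; print(llama_cpp.__version__)"
```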

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./daringfortitude.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,     # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "{prompt}",      # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./daringfortitude.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
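
llama-cpp-python can also expose an OpenAI-compatible API server, as noted in the client list above. A minimal sketch, assuming the optional `server` extra and the flag names of the `llama_cpp.server` module - check `python3 -m llama_cpp.server --help` for your installed version:

```shell
pip install 'llama-cpp-python[server]'
python3 -m llama_cpp.server --model daringfortitude.Q4_K_M.gguf --n_gpu_layers 35
```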

## How to use with LangChain
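
As a minimal, unofficial sketch of using this model via LangChain's `LlamaCpp` wrapper (which builds on llama-cpp-python; the parameter names mirror the example above, and the import path may differ across LangChain versions):

```python
from langchain.llms import LlamaCpp

# A sketch only - assumes LangChain's LlamaCpp wrapper is available
llm = LlamaCpp(
    model_path="./daringfortitude.Q4_K_M.gguf",
    n_gpu_layers=35,  # Layers to offload to GPU; set to 0 for CPU-only
    n_ctx=4096,       # Max sequence length
    temperature=0.7,
)
print(llm("AI is going to"))
```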
 
Donaters will get priority support on any and all AI/LLM/model questions and requests.

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!