TimesLast committed · commit 3ba15c9 · verified · parent: f0b3835

Upload README.md with huggingface_hub

Files changed (1): README.md (+42, -1)
README.md CHANGED
@@ -4,9 +4,50 @@ pipeline_tag: image-text-to-text
  library_name: transformers
  tags:
  - llama-cpp
+ - gguf-my-repo
  base_model: internlm/JanusCoder-8B
  ---

  # TimesLast/JanusCoder-8B-Q4_K_M-GGUF

- funky gguf

The remaining added lines are the new README body, reproduced below.

This model was converted to GGUF format from [`internlm/JanusCoder-8B`](https://huggingface.co/internlm/JanusCoder-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/internlm/JanusCoder-8B) for more details on the model.
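
If you want the quantized file locally instead of letting llama.cpp fetch it from the Hub, the `huggingface_hub` CLI can download it directly. A minimal sketch, assuming the CLI extra is installed:

```bash
# Download the quantized file used in the commands below
# (assumption: huggingface_hub's CLI extra is installed).
pip install -U "huggingface_hub[cli]"
huggingface-cli download TimesLast/JanusCoder-8B-Q4_K_M-GGUF januscoder-8b-q4_k_m.gguf --local-dir .
```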

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
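
To sanity-check the install, printing the build information is one quick option; a minimal sketch, assuming brew links the binaries onto your PATH:

```bash
# Confirm the brew-installed llama.cpp CLI is available and report its build version.
llama-cli --version
```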

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo TimesLast/JanusCoder-8B-Q4_K_M-GGUF --hf-file januscoder-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
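
If the GGUF file has already been downloaded (for example with the `huggingface-cli` sketch above), it can be passed as a local path instead of being pulled from the Hub. A sketch, assuming the file sits in the current directory:

```bash
# Run the CLI against a local copy of the quantized file (hypothetical local path).
llama-cli -m ./januscoder-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```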

### Server:
```bash
llama-server --hf-repo TimesLast/JanusCoder-8B-Q4_K_M-GGUF --hf-file januscoder-8b-q4_k_m.gguf -c 2048
```
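
Once the server is running it exposes an OpenAI-compatible HTTP API. A minimal request could look like this, assuming the default bind address of 127.0.0.1:8080:

```bash
# Query the running llama-server through its OpenAI-compatible chat endpoint
# (assumption: default --host/--port; adjust the URL if you changed them).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python function that reverses a string."}]}'
```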

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
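
Recent llama.cpp checkouts have replaced the Makefile build with CMake, so if `make` fails, a CMake invocation along these lines may work instead (a sketch, not the exact flags for every release; binaries then land in `build/bin/` rather than the repo root):

```bash
# CMake-based build (assumption: a recent llama.cpp checkout without Makefile support).
cmake -B build -DLLAMA_CURL=ON        # add e.g. -DGGML_CUDA=ON for NVIDIA GPUs
cmake --build build --config Release
```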

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo TimesLast/JanusCoder-8B-Q4_K_M-GGUF --hf-file januscoder-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo TimesLast/JanusCoder-8B-Q4_K_M-GGUF --hf-file januscoder-8b-q4_k_m.gguf -c 2048
```
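
For a coder model, the 2048-token context used above is on the small side. Context length and GPU offload can be raised with standard llama.cpp flags; the values below are purely illustrative and assume a GPU-enabled build with enough memory:

```bash
# Larger context window plus full GPU offload (illustrative values;
# -ngl requires a CUDA/Metal/other GPU build with sufficient VRAM).
./llama-server --hf-repo TimesLast/JanusCoder-8B-Q4_K_M-GGUF --hf-file januscoder-8b-q4_k_m.gguf -c 8192 -ngl 99
```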