---
license: gemma
datasets:
- NbAiLab/aurora-sft-2512-filtered
language:
- 'no'
- nb
- nn
base_model: NbAiLab/borealis-12b-instruct-preview
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- conversational
- instruct
- experimental
- llama-cpp
---

# borealis-12b-instruct-preview
**Model creator:** [NbAiLab](https://huggingface.co/NbAiLab)<br/>
**Original model:** [NbAiLab/borealis-12b-instruct-preview](https://huggingface.co/NbAiLab/borealis-12b-instruct-preview)<br/>
**GGUF quantization:** provided by [NbAiLab](https://huggingface.co/NbAiLab) using `llama.cpp`

## Available Quantizations
- Q4_K_M
- Q8_0
- BF16

## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

## Usage Examples

### Q4_K_M

**Ollama:**
```bash
ollama run "hf.co/NbAiLab/borealis-12b-instruct-preview-gguf:Q4_K_M"
```

**LM Studio:**
```bash
lms load "NbAiLab/borealis-12b-instruct-preview-gguf/borealis-12b-instruct-preview-Q4_K_M.gguf"
```

**llama.cpp CLI:**
```bash
llama-cli --hf "NbAiLab/borealis-12b-instruct-preview-gguf:Q4_K_M" -p "The meaning of life and the universe is"
```

**llama.cpp Server:**
```bash
llama-server --hf "NbAiLab/borealis-12b-instruct-preview-gguf:Q4_K_M" -c 4096
```

### Q8_0

**Ollama:**
```bash
ollama run "hf.co/NbAiLab/borealis-12b-instruct-preview-gguf:Q8_0"
```

**LM Studio:**
```bash
lms load "NbAiLab/borealis-12b-instruct-preview-gguf/borealis-12b-instruct-preview-Q8_0.gguf"
```

**llama.cpp CLI:**
```bash
llama-cli --hf "NbAiLab/borealis-12b-instruct-preview-gguf:Q8_0" -p "The meaning of life and the universe is"
```

**llama.cpp Server:**
```bash
llama-server --hf "NbAiLab/borealis-12b-instruct-preview-gguf:Q8_0" -c 4096
```

### BF16

**Ollama:**
```bash
ollama run "hf.co/NbAiLab/borealis-12b-instruct-preview-gguf:BF16"
```

**LM Studio:**
```bash
lms load "NbAiLab/borealis-12b-instruct-preview-gguf/borealis-12b-instruct-preview-BF16.gguf"
```

**llama.cpp CLI:**
```bash
llama-cli --hf "NbAiLab/borealis-12b-instruct-preview-gguf:BF16" -p "The meaning of life and the universe is"
```

**llama.cpp Server:**
```bash
llama-server --hf "NbAiLab/borealis-12b-instruct-preview-gguf:BF16" -c 4096
```
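
## Querying the server

Once `llama-server` is running with any of the commands above, it exposes an OpenAI-compatible HTTP API (on port 8080 by default). A minimal client sketch using only the Python standard library, assuming the default host/port and llama.cpp's `/v1/chat/completions` endpoint; the helper names here are illustrative, not part of any library:

```python
import json
import urllib.request


def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """Send a prompt to a running llama-server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The reply text lives in the first choice's message content.
    return data["choices"][0]["message"]["content"]
```

With the Q4_K_M server command above running locally, `chat("Hva er hovedstaden i Norge?")` returns the model's answer as a plain string.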