---
base_model: Spestly/Athena-R3-7B
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
library_name: transformers
license: mit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-repo
extra_gated_prompt: By accessing this model, you agree to comply with ethical usage
  guidelines and accept full responsibility for its applications. You will not use
  this model for harmful, malicious, or illegal activities, and you understand that
  its use is subject to ongoing monitoring for misuse. This model is provided 'AS
  IS', and by agreeing you accept responsibility for all outputs you generate.
extra_gated_fields:
  Name: text
  Organization: text
  Country: country
  Date of Birth: date_picker
  Intended Use:
    type: select
    options:
      - Research
      - Education
      - Personal Development
      - Commercial Use
      - label: Other
        value: other
  I agree to use this model in accordance with all applicable laws and ethical guidelines: checkbox
  I agree to use this model under the MIT licence: checkbox
---

# irmma/Athena-R3-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Spestly/Athena-R3-7B`](https://huggingface.co/Spestly/Athena-R3-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-R3-7B) for more details on the model.

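If you prefer to download the quantized file yourself rather than let llama.cpp fetch it, you can use the Hugging Face CLI; a minimal sketch, assuming `huggingface_hub` is installed (`pip install -U huggingface_hub`):

```bash
# Download the Q4_K_M GGUF file from this repo into the current directory
huggingface-cli download irmma/Athena-R3-7B-Q4_K_M-GGUF athena-r3-7b-q4_k_m.gguf --local-dir .
```
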
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo irmma/Athena-R3-7B-Q4_K_M-GGUF --hf-file athena-r3-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
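
The same invocation accepts llama.cpp's usual tuning flags; a sketch, with `-c` for context size, `-n` for the number of tokens to generate, and `-ngl` to offload layers to the GPU (the values here are illustrative, not tuned for this model):

```bash
# Larger context, bounded generation length, full GPU offload
llama-cli --hf-repo irmma/Athena-R3-7B-Q4_K_M-GGUF --hf-file athena-r3-7b-q4_k_m.gguf \
  -c 4096 -n 256 -ngl 99 -p "The meaning to life and the universe is"
```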

### Server:
```bash
llama-server --hf-repo irmma/Athena-R3-7B-Q4_K_M-GGUF --hf-file athena-r3-7b-q4_k_m.gguf -c 2048
```
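
Once the server is running you can query its OpenAI-compatible endpoint; a minimal sketch, assuming the default address `127.0.0.1:8080`:

```bash
# Send a chat completion request to the local llama-server instance
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the meaning of life?"}], "max_tokens": 128}'
```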

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
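
Recent llama.cpp releases have replaced the Makefile with a CMake build, so if `make` fails on a current checkout, a sketch of the equivalent CMake invocation (binaries end up under `build/bin/`):

```bash
# Configure with CURL support enabled, then build in Release mode
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```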

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo irmma/Athena-R3-7B-Q4_K_M-GGUF --hf-file athena-r3-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo irmma/Athena-R3-7B-Q4_K_M-GGUF --hf-file athena-r3-7b-q4_k_m.gguf -c 2048
```