readme : ggerganov -> ggml-org
README.md CHANGED
@@ -113,7 +113,7 @@ tags:
- gguf-my-repo
---

-# ggerganov/Nomic-Embed-Text-V2-GGUF
+# ggml-org/Nomic-Embed-Text-V2-GGUF
This model was converted to GGUF format from [`nomic-ai/nomic-embed-text-v2-moe`](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) for more details on the model.
@@ -128,19 +128,19 @@ Invoke the llama.cpp server or the CLI.

### CLI:
```bash
-llama-cli --hf-repo ggerganov/Nomic-Embed-Text-V2-GGUF -p "The meaning to life and the universe is"
+llama-cli --hf-repo ggml-org/Nomic-Embed-Text-V2-GGUF -p "The meaning to life and the universe is"
```

### Server:
```bash
-llama-server --hf-repo ggerganov/Nomic-Embed-Text-V2-GGUF -c 2048
+llama-server --hf-repo ggml-org/Nomic-Embed-Text-V2-GGUF -c 2048
```

-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
-git clone https://github.com/ggerganov/llama.cpp
+git clone https://github.com/ggml-org/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
@@ -150,9 +150,9 @@ cd llama.cpp && LLAMA_CURL=1 make

Step 3: Run inference through the main binary.
```
-./llama-cli --hf-repo ggerganov/Nomic-Embed-Text-V2-GGUF -p "The meaning to life and the universe is"
+./llama-cli --hf-repo ggml-org/Nomic-Embed-Text-V2-GGUF -p "The meaning to life and the universe is"
```
or
```
-./llama-server --hf-repo ggerganov/Nomic-Embed-Text-V2-GGUF -c 2048
+./llama-server --hf-repo ggml-org/Nomic-Embed-Text-V2-GGUF -c 2048
```
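Taken together, the three steps shown in the diff amount to the single shell session below. These are the card's own commands stitched together for convenience; the only explanatory note added here is that `LLAMA_CURL=1` compiles in libcurl support, which is what `--hf-repo` relies on to download the GGUF from Hugging Face at run time.

```bash
# Steps 1-3 from the card as one session. LLAMA_CURL=1 builds in libcurl,
# which --hf-repo uses to fetch the GGUF from Hugging Face.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp && LLAMA_CURL=1 make   # add e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux
./llama-cli --hf-repo ggml-org/Nomic-Embed-Text-V2-GGUF -p "The meaning to life and the universe is"
```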
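Since `nomic-embed-text-v2-moe` is an embedding model, the server started above is usually queried for embeddings rather than text completions. A minimal sketch, assuming a recent llama.cpp build in which `llama-server` accepts the `--embeddings` flag and serves the OpenAI-compatible `/v1/embeddings` endpoint on its default port 8080 (none of which is stated in the card itself):

```bash
# Start the server with the embeddings endpoint enabled (flag and endpoint
# assumed from recent llama.cpp builds, not from the card above).
llama-server --hf-repo ggml-org/Nomic-Embed-Text-V2-GGUF -c 2048 --embeddings &

# Request an embedding via the OpenAI-compatible endpoint (default port 8080).
curl -s http://127.0.0.1:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "The meaning to life and the universe is"}'
```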