Update README.md
README.md CHANGED

```diff
@@ -111,4 +111,11 @@ do_sample=False
 return tokenizer.decode(output[0], skip_special_tokens=True)
 
 print(run_command("Current directory: /home/user", "pwd"))
-```
+```
+
+## Shoutouts
+
+Very big thanks to [@mradermacher](https://huggingface.co/mradermacher) for spending time and compute to generate GGUF quantized versions of LaaLM-v1!
+
+Check out the quantized versions he made [here](https://huggingface.co/mradermacher/LaaLM-v1-GGUF). With these you can run LaaLM-v1 on low-tier GPUs or CPUs!
+
```