Upload README.md with huggingface_hub
README.md
CHANGED
@@ -53,9 +53,9 @@ tags:
 
 This repository contains **GGUF format** quantizations of [utter-project/EuroLLM-22B-Instruct](https://huggingface.co/utter-project/EuroLLM-22B-Instruct).
 
-##
+## Why this release?
 
-Unlike standard automated quantizations, this release was **specifically optimized by Jugaad** to balance professional performance with consumer hardware constraints.
+Unlike standard automated quantizations, this release was **specifically optimized by [Jugaad](https://jugaad.digital)** to balance professional performance with consumer hardware constraints.
 
 We focused on enabling the deployment of this powerful 22B parameter model on **single 24GB VRAM GPUs** (NVIDIA RTX 3090, RTX 4090, L4) while preserving its capability in critical tasks like **PII/PHI Extraction (NER)** across European languages.
 
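The 24GB-VRAM sizing claim in the hunk above can be sanity-checked with weights-only arithmetic. This is a rough sketch, not code from the repository: it ignores KV cache and activation memory, and the ~4.5 bits/weight figure is only an approximation of a mid-range GGUF quantization level.

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weights-only footprint of a quantized model, in GB (decimal)."""
    return n_params * bits_per_weight / 8 / 1e9

# 22B parameters at FP16: too large for a single 24GB card.
fp16_gb = quantized_size_gb(22e9, 16)   # 44.0 GB

# At ~4.5 bits/weight (roughly a 4-bit GGUF quant): fits with headroom
# left over for KV cache and runtime overhead.
q4_gb = quantized_size_gb(22e9, 4.5)    # 12.375 GB
```

This is why a 4-bit-class quantization is the natural target for RTX 3090/4090/L4-sized GPUs, while the unquantized model is not.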