Update README.md
README.md CHANGED

@@ -32,8 +32,9 @@ This card is meant only to request GGUF-IQ-Imatrix quants for models that meet t
 
 For the model:
 - Maximum model parameter size of **11B**. <br>
-*At the moment I am unable to accept requests for larger models due to hardware/time limitations.*
-*Preferably for Mistral and LLama-3 based models in the creative/roleplay niche.*
+*At the moment I am unable to accept requests for larger models due to hardware/time limitations.* <br>
+*Preferably for Mistral and LLama-3 based models in the creative/roleplay niche.* <br>
+*If you need a bigger model, you can try requesting at [mradermacher's](https://huggingface.co/mradermacher/model_requests). Pretty awesome.*
 
 Important:
 - Fill the request template as outlined in the next section.