extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# gemma-2-2b-it-RK3588-1.1.4

This version of gemma-2-2b-it has been converted to run on the RK3588 NPU using ['w8a8', 'w8a8_g128', 'w8a8_g256', 'w8a8_g512'] quantization.
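The schemes above differ in granularity: plain w8a8 shares one scale across a whole channel or tensor, while the `_g128`/`_g256`/`_g512` variants assign one scale per group of 128, 256, or 512 weights. As a rough illustration only (a hypothetical numpy sketch, not the actual RKLLM converter code), group-wise symmetric int8 weight quantization looks like this:

```python
import numpy as np

def quantize_groupwise_int8(weights: np.ndarray, group_size: int = 128):
    """Symmetric per-group int8 quantization: one fp scale per group of weights.

    `weights` is a flat float array whose length is divisible by `group_size`.
    """
    groups = weights.reshape(-1, group_size)                 # split into groups
    scales = np.abs(groups).max(axis=1, keepdims=True) / 127.0
    scales = np.where(scales == 0, 1.0, scales)              # avoid divide-by-zero
    q = np.clip(np.round(groups / scales), -128, 127).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights: each group is rescaled by its own scale."""
    return q.astype(np.float32) * scales

# Smaller groups (e.g. g128 vs g512) track local weight ranges more closely,
# so the int8 round-trip error shrinks as group_size decreases.
w = np.random.randn(1024).astype(np.float32)
q, s = quantize_groupwise_int8(w, group_size=128)
w_hat = dequantize(q, s).reshape(-1)
max_err = float(np.max(np.abs(w_hat - w)))
```

The trade-off is storage: each group carries its own scale, so smaller groups mean more metadata alongside the int8 weights.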

This model has been optimized with the following LoRA:

Compatible with RKLLM version: 1.1.4

## Useful links:

[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)