Clarify settings, quantization level in readme
README.md CHANGED
@@ -1,18 +1,18 @@
 ---
 license: apache-2.0
 language:
 - en
 base_model: Nanbeige/Nanbeige4.1-3B
 base_model_relation: quantized
 pipeline_tag: text-generation
 library_name: mnn
 tags:
 - code
 - mnn
 ---
 
 This model [DeProgrammer/Nanbeige4.1-3B-MNN](https://huggingface.co/DeProgrammer/Nanbeige4.1-3B-MNN) was
 converted to MNN format from [Nanbeige/Nanbeige4.1-3B](https://huggingface.co/Nanbeige/Nanbeige4.1-3B)
-using [llmexport.py](https://github.com/alibaba/MNN/issues/4153#issuecomment-3866182869) in [MNN version **3.4.0**](https://github.com/alibaba/MNN/commit/a874b302f094599e2838a9186e5ce2cf6a81a7a7).
+using [llmexport.py](https://github.com/alibaba/MNN/issues/4153#issuecomment-3866182869) in [MNN version **3.4.0**](https://github.com/alibaba/MNN/commit/a874b302f094599e2838a9186e5ce2cf6a81a7a7) with default settings (4-bit quantization).
 
 Inference can be run via MNN, e.g., MNN Chat on Android.
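For reference, a conversion along these lines is typically driven from the command line. The sketch below is an assumption based on common llmexport.py usage (the `--path`, `--export`, and `--quant_bit` flags and the local model path are illustrative), not the exact command used for this repository.

```bash
# Hypothetical llmexport.py invocation (flags and path are assumptions, not the
# author's exact command); --quant_bit 4 matches the stated default 4-bit quantization.
python llmexport.py --path /path/to/Nanbeige4.1-3B --export mnn --quant_bit 4
```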
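Besides MNN Chat on Android, a minimal command-line run is possible with MNN's llm_demo tool from the MNN source tree; the binary name and paths below are assumptions for illustration, not instructions shipped with this repository.

```bash
# Hypothetical command-line inference with MNN's llm_demo (paths are placeholders);
# the demo is pointed at the exported model's config.json.
./llm_demo /path/to/Nanbeige4.1-3B-MNN/config.json
```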