yongqiang committed · Commit e160560 · Parent(s): 085e80f
Update readme
README.md CHANGED

@@ -19,7 +19,9 @@ This version of InternVL3_5-1B has been converted to run on the Axera NPU using
 
 This model has been optimized with the following LoRA:
 
-Compatible with Pulsar2 version:
+Compatible with Pulsar2 version: 5.1-patch1.
+
+Please note that the context of the model is 2k and the maximum prefill length is 1k.
 
 ## Convert tools links:
 
@@ -27,7 +29,7 @@ For those who are interested in model conversion, you can try to export axmodel
 
 https://huggingface.co/OpenGVLab/InternVL3_5-1B
 
-[How to Convert LLM from Huggingface to axmodel](https://github.com/AXERA-TECH/InternVL3_5-1B.axera/tree/main/model_convert)
+[How to Convert LLM from Huggingface to axmodel](https://github.com/AXERA-TECH/InternVL3_5-1B.axera/tree/main/model_convert)
 
 [AXera NPU HOST LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-internvl)
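The diff above pins two runtime limits for the converted model: a 2k token context and a 1k maximum prefill length. A minimal sketch of what those limits imply for a generation request (the `check_budget` helper and the exact token counts are hypothetical, not part of the repo; only the 2k/1k figures come from the README):

```python
# Illustrative sketch only: enforcing the limits stated in the README,
# i.e. a 2k-token context and a 1k-token maximum prefill.
CONTEXT_LEN = 2048   # total tokens the model can hold (2k)
PREFILL_LEN = 1024   # maximum prompt tokens accepted at prefill (1k)

def check_budget(prompt_tokens: int, max_new_tokens: int) -> int:
    """Clamp a generation request to the model's limits.

    Rejects prompts longer than the prefill limit; otherwise returns
    how many new tokens can actually be decoded, since prompt plus
    output must fit inside the context window.
    """
    if prompt_tokens > PREFILL_LEN:
        raise ValueError(
            f"prompt of {prompt_tokens} tokens exceeds the "
            f"{PREFILL_LEN}-token prefill limit"
        )
    # Remaining context after the prompt bounds the decode length.
    return min(max_new_tokens, CONTEXT_LEN - prompt_tokens)

print(check_budget(900, 2000))  # prompt fits; decode capped at 2048 - 900 = 1148
```

In practice this means prompts must be kept at or under 1024 tokens, and a long prompt leaves proportionally less room for generated output.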