Update README.md

## Introduction

We introduce **Intern-S1-Pro**, a trillion-scale MoE multimodal scientific reasoning model. Intern-S1-Pro scales to 1T total parameters with 512 experts, activating 8 experts per token (22B activated parameters). The model delivers top-tier performance on advanced reasoning benchmarks and achieves leading results across key AI4Science domains (chemistry, materials, life-science, earth, etc.), while maintaining strong general multimodal and text capabilities.
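As a quick sanity check on these figures, the fraction of experts routed per token and the fraction of parameters activated per token can be computed directly. A back-of-the-envelope sketch using only the counts quoted above:

```python
# Back-of-the-envelope check of the MoE activation figures quoted above:
# ~1T total parameters, 512 experts, 8 experts routed per token, 22B activated.
total_params = 1_000_000_000_000   # ~1T total parameters
active_params = 22_000_000_000     # 22B activated per token
num_experts, experts_per_token = 512, 8

expert_fraction = experts_per_token / num_experts   # fraction of experts used per token
param_fraction = active_params / total_params       # fraction of weights activated

print(f"{expert_fraction:.4%} of experts, {param_fraction:.2%} of parameters per token")
# → 1.5625% of experts, 2.20% of parameters per token
```

The activated-parameter fraction (2.2%) being larger than the expert fraction (1.56%) is consistent with some weights (e.g. attention) being active for every token, though the card does not break this down.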
### Features

We evaluate the Intern-S1-Pro on various benchmarks, including general datasets.

> **Note**: <u>Underline</u> means the best performance among open-sourced models, **Bold** indicates the best performance among all models.

We use the [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models.

## Quick Start
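The Quick Start code itself is not included here; of its generation settings, only `temperature = 0.8` is visible in this card. A minimal sketch of the sampling configuration, where every value other than the temperature is an illustrative assumption:

```python
# Sampling settings for generation. temperature = 0.8 comes from this card's
# Quick Start; top_p and max_new_tokens are illustrative assumptions.
sampling_params = {
    "temperature": 0.8,
    "top_p": 0.95,           # assumption
    "max_new_tokens": 2048,  # assumption
}
print(sampling_params["temperature"])  # → 0.8
```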
### Serving

Intern-S1-Pro can be deployed using any of the following LLM inference frameworks:

- LMDeploy
- vLLM
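Both frameworks can serve the model behind an OpenAI-compatible HTTP endpoint. A minimal sketch of a chat-completions request built with the standard library; the endpoint URL, port, served model name, and prompt are all assumptions:

```python
import json
from urllib import request

# Hypothetical local endpoint: LMDeploy and vLLM both offer an
# OpenAI-compatible /v1/chat/completions route when serving.
url = "http://localhost:8000/v1/chat/completions"  # port is an assumption

payload = {
    "model": "Intern-S1-Pro",  # served model name is an assumption
    "messages": [
        {"role": "user", "content": "Summarize MoE routing in one sentence."}
    ],
    "temperature": 0.8,  # matches the Quick Start sampling temperature
}

req = request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # uncomment against a running server
```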