Update README.md
README.md (CHANGED)

```diff
@@ -35,7 +35,7 @@ For a complete overview of our dataset, methodology, and benchmarks, please refe
 | **Multi-turn Capability** | | | | | |
 | MT-Bench-EN | 8.4 | 8.4 | 8.3 | 7.8 | **8.5** |
 | MT-Bench-TH | **8.1** | 8.0 | **8.1** | 6.9 | 8.0 |
-| **Long-context** | | | | | |
+| **Long-context Understanding** | | | | | |
 | MRCR | **18.9** | 18.3 | 16.9 | 16.9 | 16.2 |
 | LongBench-v2 | **33.6** | 32.4 | 29.2 | **33.6** | 28.8 |
 | **Tool Calling** | | | | | |
@@ -133,7 +133,7 @@ print(responses)
 
 ## Processing Long Texts
 
-OpenJAI-v1.0 was trained for robust performance on context lengths up to **120,000 tokens**. The model operates optimally within its native 32,768-token window. To process contexts exceeding this limit, applying a context extension technique like YaRN or Dynamic RoPE scaling is necessary.
+OpenJAI-v1.0 was trained for robust performance on input context lengths up to **120,000 tokens**. The model operates optimally within its native 32,768-token window. To process contexts exceeding this limit, applying a context extension technique like YaRN or Dynamic RoPE scaling is necessary.
 
 Frameworks like `vLLM` and `SGLang` support passing command-line arguments to enable RoPE scaling.
```
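For reference, the RoPE-scaling setup the edited paragraph points at typically looks like the sketch below. It is not part of this commit and rests on assumptions: the model path is a placeholder, and the YaRN parameters (a 4.0 factor over the native 32,768-token window, covering roughly 131k tokens) follow common practice for these flags rather than anything the README specifies.

```bash
# Sketch only: enabling YaRN RoPE scaling at serve time. The model path is a
# placeholder and the factor is an assumption; choose "factor" so that
# 32768 * factor covers your target context length.

# vLLM: pass the scaling config as a CLI argument.
vllm serve <path-to-OpenJAI-v1.0> \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' \
  --max-model-len 131072

# SGLang: override the model config with the same scaling parameters.
python -m sglang.launch_server --model-path <path-to-OpenJAI-v1.0> \
  --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' \
  --context-length 131072
```

Since static scaling of this kind applies to all requests regardless of length, it is generally enabled only when long inputs are actually expected.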