<div align="center">

[](https://huggingface.co/datasets/caskcsg/LongBench-Pro)
[](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro)
[](https://huggingface.co/spaces/caskcsg/LongBench-Pro-Leaderboard)
[]()

**LongBench-Pro**, containing **1,500 samples**, is entirely built on **authentic, natural long documents** and includes **11 primary tasks and 25 secondary tasks**, covering all long-context capabilities assessed by existing benchmarks. It employs **diverse evaluation metrics**, enabling a more fine-grained measurement of model abilities, and provides a balanced set of **bilingual samples in both English and Chinese**.

In addition, **LongBench-Pro** introduces a multi-dimensional taxonomy to support a comprehensive evaluation of models under different operating conditions:

- **Context Requirement**: *Full* context (global integration) versus *Partial* context (localized retrieval);
- **Length**: Six lengths uniformly distributed from *8k to 256k* tokens, used to analyze scaling behavior;

### Evaluation

Please refer to our [GitHub repo](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro) for automated evaluation.

## 📖 Citation
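As a minimal sketch, the taxonomy dimensions above can be used to slice the benchmark before evaluation. The `load_dataset('caskcsg/LongBench-Pro', split='test')` call appears in the card's own usage snippet, but the per-sample field names used below (`length`, `context_requirement`) are assumptions about the schema, not confirmed by the card — check the dataset viewer for the real column names.

```python
from collections import Counter

# The card's usage snippet loads the benchmark like this (needs network):
#
#   from datasets import load_dataset
#   dataset = load_dataset('caskcsg/LongBench-Pro', split='test')
#
# Toy stand-in records with the ASSUMED taxonomy fields, so the
# slicing logic can be shown offline:
dataset = [
    {"length": "8k",   "context_requirement": "Full"},
    {"length": "256k", "context_requirement": "Partial"},
    {"length": "8k",   "context_requirement": "Partial"},
]

def tally(samples, key):
    """Count samples per value of one taxonomy dimension."""
    return Counter(s[key] for s in samples)

print(tally(dataset, "length"))               # e.g. Counter({'8k': 2, '256k': 1})
print(tally(dataset, "context_requirement"))
```

Tallying per dimension makes it easy to report scores bucketed by context length (8k–256k) or by Full versus Partial context requirement, as the taxonomy intends.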