czyPL committed · commit e3f4410 · verified · 1 parent: f035093

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +3 -3
README.md CHANGED

@@ -30,7 +30,7 @@ size_categories:
 <div align="center">
 
 [![HF Dataset](https://img.shields.io/badge/HF-Dataset-yellow?logo=huggingface&logoColor=white)](https://huggingface.co/datasets/caskcsg/LongBench-Pro) &nbsp;&nbsp;
-[![Github Code](https://img.shields.io/badge/Github-Code-blue?logo=github&logoColor=white)](https://github.com/caskcsg/longcontext/tree/main/LongBench_Pro) &nbsp;&nbsp;
+[![Github Code](https://img.shields.io/badge/Github-Code-blue?logo=github&logoColor=white)](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro) &nbsp;&nbsp;
 [![Leaderboard](https://img.shields.io/badge/🏆-Leaderboard-red)](https://huggingface.co/spaces/caskcsg/LongBench-Pro-Leaderboard) &nbsp;&nbsp;
 [![Paper](https://img.shields.io/badge/📄-Arxiv_Paper-green)]()
 
@@ -40,7 +40,7 @@ size_categories:
 
 **LongBench-Pro**, containing **1,500 samples**, is entirely built on **authentic, natural long documents** and includes **11 primary tasks and 25 secondary tasks**, covering all long-context capabilities assessed by existing benchmarks. It employs **diverse evaluation metrics**, enabling a more fine-grained measurement of model abilities, and provides a balanced set of **bilingual samples in both English and Chinese**.
 
-In addition, **LongBench-Pro** introduces a multi-dimensional taxonomy to support a comprehensive evaluation of models under different operating conditions:
+In addition, **LongBench Pro** introduces a multi-dimensional taxonomy to support a comprehensive evaluation of models under different operating conditions:
 
 - **Context Requirement**: *Full* context (global integration) versus *Partial* context (localized retrieval);
 - **Length**: Six lengths uniformly distributed from *8k to 256k* tokens, used to analyze scaling behavior;
@@ -100,7 +100,7 @@ dataset = load_dataset('caskcsg/LongBench-Pro', split='test')
 
 ### Evaluation
 
-Please refer to our [Github Repo]() for automated evaluation.
+Please refer to our [Github Repo](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro) for automated evaluation.
 
 ## 📖 Citation