---
license: apache-2.0
language:
- zh
- en
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: full
    path: widesearch.jsonl
---

# WideSearch: Benchmarking Agentic Broad Info-Seeking
WideSearch is a benchmark designed to evaluate the capabilities of Large Language Model (LLM) agents on broad information-seeking tasks.

The challenge in these tasks lies not in cognitive difficulty, but in the operational scale, the repetitiveness, and the need for **Completeness** and **Factual Fidelity** in the final result. Typical examples include a financial analyst gathering key metrics for all companies in a sector, or a job seeker collecting every vacancy that meets their criteria.

The benchmark, introduced in the research paper "WideSearch: Benchmarking Agentic Broad Info-Seeking," contains 200 meticulously designed tasks (100 in English, 100 in Chinese).

See our [paper](https://arxiv.org/abs/2508.07999) and [GitHub repo](https://github.com/ByteDance-Seed/WideSearch) for more details.
 
The dataset consists of two components: a task file and a directory containing the gold-answer tables:

```
.
├── widesearch.jsonl
└── widesearch_gold/
    ├── ws_en_001.csv
    ├── ws_zh_001.csv
    └── ...
```
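As a quick sketch of consuming the task file: each line of `widesearch.jsonl` is one JSON task record. The sample record below is an assumption for illustration (field names follow the schema documented below; `task_id` and the concrete values are hypothetical, and real records carry more fields):

```python
import json

# A made-up task record for illustration only; real entries in
# widesearch.jsonl carry more fields than shown here.
sample_line = json.dumps({
    "task_id": "ws_en_001",          # hypothetical id matching a gold CSV name
    "language": "en",
    "unique_columns": ["company"],
    "required": ["company", "revenue"],
})

record = json.loads(sample_line)     # one line of the .jsonl == one task
print(record["task_id"], record["unique_columns"])
```

The matching gold table could then be read with, e.g., `pandas.read_csv("widesearch_gold/ws_en_001.csv")`.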
Key fields of each task record include:

* `unique_columns` (list): The primary key column(s) used to uniquely identify a row in the table.
* `required` (list): All column names that must be present in the agent's generated response.
* `eval_pipeline` (dict): Defines the evaluation method for each column.
  * `preprocess` (list): Preprocessing steps applied to the cell data before evaluation (e.g., `norm_str` to normalize strings, `extract_number` to extract numbers).
  * `metric` (list): The metric used to compare the predicted value with the ground truth (e.g., `exact_match`, `number_near` for numerical approximation, `llm_judge` for judgment by an LLM).
  * `criterion` (float or string): Specific criteria for the metric. For `number_near`, this is the allowed relative tolerance; for `llm_judge`, it is the scoring guide for the "judge" LLM.
* `language` (string): The language of the task (`en` or `zh`).
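To make the pipeline concrete, here is a minimal sketch (an assumption for illustration, not the benchmark's actual implementation) of an `extract_number` preprocess step followed by a `number_near` metric, where `criterion` is the relative tolerance:

```python
import re

def extract_number(cell: str) -> float:
    """Illustrative preprocess step: pull the first numeric token from a cell."""
    match = re.search(r"-?\d+(?:\.\d+)?", cell.replace(",", ""))
    if match is None:
        raise ValueError(f"no number found in {cell!r}")
    return float(match.group())

def number_near(pred: str, gold: str, criterion: float) -> bool:
    """Illustrative metric: accept if the values agree within a relative tolerance."""
    p, g = extract_number(pred), extract_number(gold)
    return abs(p - g) <= criterion * abs(g)

# A revenue cell judged with a 1% relative tolerance.
print(number_near("$1,002 million", "1000", criterion=0.01))  # True
```

The other metrics would slot into the same shape: `exact_match` compares the preprocessed strings directly, while `llm_judge` would send both cells plus the `criterion` scoring guide to a judge model.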
 
 