geolocal committed · Commit 75c44ab · verified · 1 parent: 042aaaf

Update README.md

Files changed (1): README.md (+73 −3)

Previously, README.md contained only a three-line YAML front matter declaring `license: apache-2.0`. The updated file reads:

---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- factuality
- search
- retrieval
- deep research
- comprehensiveness
- agent
- posttraining
- benchmark
- Google DeepMind
pretty_name: DeepSearchQA
size_categories:
- n<1K
configs:
- config_name: deepsearchqa
  default: true
  data_files:
  - split: eval
    path: DSQA_full.csv
---

# DeepSearchQA

#### A 900-prompt factuality benchmark from Google DeepMind, designed to evaluate agents on difficult multi-step information-seeking tasks across 17 different fields.

▶ [Google DeepMind Release Blog Post](https://blog.google/technology/developers/deep-research-agent-gemini-api/)\
▶ [DeepSearchQA Leaderboard on Kaggle](https://www.kaggle.com/benchmarks/google/dsqa)\
▶ [Technical Report](https://storage.googleapis.com/deepmind-media/DeepSearchQA/DeepSearchQA_benchmark_paper.pdf)\
▶ [Evaluation Starter Code](https://www.kaggle.com/code/andrewmingwang/deepsearchqa-starter-code)

## Benchmark

DeepSearchQA is a 900-prompt benchmark for evaluating agents on difficult multi-step information-seeking tasks across 17 different fields. Unlike traditional benchmarks that target single-answer retrieval or broad-spectrum factuality, DeepSearchQA features a dataset of challenging, hand-crafted tasks designed to evaluate an agent’s ability to execute complex search plans and generate exhaustive answer lists.

Each task is structured as a "causal chain", where discovering information for one step depends on the successful completion of the previous one, stressing long-horizon planning and context retention. All tasks are grounded in the open web with objectively verifiable answer sets.

DeepSearchQA is meant to be used to evaluate LLMs or LLM agents with access to the web.

## Dataset Description

This dataset is a collection of 900 examples. Each example is composed of:

* A problem (`problem`), which is the information-seeking prompt posed to the model.
* A problem category (`problem_category`), specifying which of 17 different domains the problem belongs to.
* A gold answer (`answer`), which is used in conjunction with the evaluation prompt to judge the correctness of an LLM's response.
* An answer type classification (`answer_type`), specifying whether a single answer or a set of answers is expected as a response. This information should NOT be given to the LLM at inference time; 65% of answers are of type `Set Answer`.

See the [Technical Report](https://storage.googleapis.com/deepmind-media/DeepSearchQA/DeepSearchQA_benchmark_paper.pdf) for methodology details.
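
As a minimal sketch of how these fields can be inspected, the snippet below loads the `eval` split defined in the front matter with the Hugging Face `datasets` library and tallies the `answer_type` distribution. The repository ID used here (`google/deepsearchqa`) is a placeholder assumption; substitute the actual Hub path of this dataset.

```python
# Minimal loading sketch -- the repo ID below is an assumption, not the official Hub path.
from collections import Counter

from datasets import load_dataset

REPO_ID = "google/deepsearchqa"  # hypothetical; replace with this dataset's actual Hub ID

# The YAML front matter above defines a single config ("deepsearchqa") whose "eval" split
# is backed by DSQA_full.csv.
ds = load_dataset(REPO_ID, "deepsearchqa", split="eval")

print(ds.column_names)   # expected: problem, problem_category, answer, answer_type
print(ds[0]["problem"])  # the information-seeking prompt for the first task

# Tally single-answer vs. set-answer tasks (the card notes ~65% are `Set Answer`).
# Remember: answer_type should NOT be revealed to the agent at inference time.
print(Counter(ds["answer_type"]))
```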

## Limitations

While DeepSearchQA offers a robust framework for evaluating comprehensive retrieval, it relies on specific design choices that entail certain limitations. By employing an exclusively outcome-based evaluation, we effectively treat any evaluated agent as a black box. In the absence of trajectory data, it is difficult to distinguish between an agent that reasoned correctly and one that arrived at the correct list through inefficient or accidental means (e.g., lucky guessing). Additionally, the static web assumption, while necessary for reproducibility, limits the evaluation of “breaking news” retrieval where ground truth is volatile. A task’s ground truth may become outdated if source websites are removed or their content is significantly altered. This is a prevalent challenge for all benchmarks operating on the live web, necessitating periodic manual reviews and updates to the dataset.

Questions, comments, or issues? Share your thoughts with us in the [discussion forum](https://www.kaggle.com/benchmarks/google/dsqa/discussion).

## Evaluation Prompt

The autorater to use for DeepSearchQA is `gemini-2.5-flash`, with the grading prompt found in the [starter notebook](https://www.kaggle.com/code/andrewmingwang/deepsearchqa-starter-code) on Kaggle. Using a different autorater model or grading prompt will likely result in statistically significant deviations in results.
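
For illustration only, the sketch below shows the shape of such an autorater loop. The judge prompt here is a simplified placeholder, not the official DeepSearchQA grading prompt (which lives in the Kaggle starter notebook), and it assumes the `google-generativeai` client with a configured API key.

```python
# Illustrative grading loop -- the authoritative grading prompt is in the Kaggle starter notebook.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")              # assumes a Gemini API key
judge = genai.GenerativeModel("gemini-2.5-flash")    # the autorater model named above

# Placeholder judge prompt (NOT the official DeepSearchQA grading prompt).
JUDGE_TEMPLATE = """You are grading an answer to an information-seeking task.
Question: {problem}
Gold answer: {answer}
Model response: {response}
Does the response contain the complete gold answer? Reply with exactly CORRECT or INCORRECT."""

def grade(problem: str, answer: str, response: str) -> bool:
    """Return True if the autorater judges the model response as correct."""
    verdict = judge.generate_content(
        JUDGE_TEMPLATE.format(problem=problem, answer=answer, response=response)
    )
    return verdict.text.strip().upper().startswith("CORRECT")
```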

## Citation

Coming soon.