---
license: apache-2.0
task_categories:
  - question-answering
  - text-generation
language:
  - zh
  - en
tags:
  - information-seeking
  - unindexed-information
  - benchmark
  - agent-evaluation
  - web-browsing
size_categories:
  - n<1K
---

# UIS-QA: A Benchmark for Unindexed Information Seeking

UIS problem illustration

Figure 1. UIS problem. Standard agents (bottom) rely on indexed information and often fail or hallucinate; UIS-capable agents (top) use additional tools to excavate unindexed information and solve UIS tasks.

If the figures do not load, see the paper.


## 🔔 News

  • [2026.03.10] 🎉 We release the UIS-QA dataset and the paper (ICLR 2026, arXiv) today!

## 📋 Dataset Description

|              |                                                                                                   |
| ------------ | ------------------------------------------------------------------------------------------------- |
| **Homepage** | Paper (arXiv:2603.08117)                                                                           |
| **Paper**    | UIS-Digger: Towards Comprehensive Research Agent Systems for Real-World Unindexed Information Seeking (ICLR 2026) |
| **Languages**| Chinese (zh), English (en)                                                                         |
| **License**  | Apache-2.0                                                                                         |
| **Size**     | 110 question-answer pairs (84 Chinese, 26 English)                                                 |

### Summary

UIS-QA is the first benchmark dedicated to **Unindexed Information Seeking (UIS)**: the setting where the information needed to answer a question is not directly retrievable via search engine results (e.g., content behind deep navigation, dynamic pages, embedded files, or overlooked corners of the web). It is introduced in the ICLR 2026 paper UIS-Digger.

Unlike conventional information-seeking benchmarks (e.g., GAIA, BrowseComp), UIS-QA explicitly requires agents to rely on unindexed information: answering correctly demands actions such as multi-step browsing, option selection, filter setting, file download, or reading content that search snippets do not expose. The benchmark is designed to evaluate whether agent systems can discover and use information that is not in standard search indices.

Note: State-of-the-art information-seeking agents show a sharp performance drop on UIS-QA (e.g., from ~70% on GAIA and ~47% on BrowseComp-zh to ~24–25% on UIS-QA), highlighting UIS as an underexplored and critical capability.


## 📊 Main Results (from the paper)

Evaluation results on UIS-QA, GAIA, and BrowseComp-zh (BC-zh). Action space: crawl (read webpage content), visual (read images), file (download/read files), browser (operate browser). ✓ = supported, ✗ = not supported.

| Name | crawl | visual | file | browser | Backbone | UIS-QA | GAIA | BC-zh |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Direct Inference** | | | | | | | | |
| DeepSeek-V3.1 | – | – | – | – | DeepSeek-V3.1 | 1.8 | – | – |
| Claude-sonnet-4 | – | – | – | – | Claude-S4 | 2.7 | – | – |
| GPT-5 | – | – | – | – | GPT-5 | 0.9 | – | – |
| **Commercial System** | | | | | | | | |
| GLM-4.5 (auto-thinking, web search) | ✓ | ✗ | – | ✓ | GLM4.5† | 11.8 | – | – |
| Doubao (DeepThink) | – | – | – | – | Doubao | 11.8 | – | – |
| Gemini-2.5-pro (google_search) | – | – | – | – | Gemini-2.5-pro | 4.5 | – | – |
| **ReAct Agentic Framework** | | | | | | | | |
| WebSailor | ✓ | ✗ | ✗ | ✗ | WebSailor-32B + Qwen3-72B | 7.3 | 53.2‡ | 25.5 |
| Tongyi-DR | ✓ | ✗ | ✗ | ✗ | TongyiDR-30B-A3B† + GPT-4o | 23.6 | 70.9‡ | 46.7 |
| **Multi-agent Framework** | | | | | | | | |
| DDv2 | ✓ | ✗ | ✗ | ✗ | Pangu-38B | 8.2 | – | 34.6 |
| OWL | ✓ | ✓ | ✓ | ✓ | O3-mini + 4o + Claude-S3.7 | 4.6 | 69.7 | – |
| MiroThinker v0.1 | ✓ | ✓ | ✓ | ✓ | MiroThinker-32B-DPO + GPT-4.1 + Claude-S3.7 | 7.3 | 57.9‡ | – |
| Memento | ✓ | ✓ | ✓ | ✗ | O3 + GPT-4.1 | 25.5§ | 79.4 | – |
| AWorld | ✓ | ✓ | ✓ | ✓ | Gemini-2.5-pro + GPT-4o | 5.5 | 32.2 | – |
| UIS-Digger (Pangu) | ✓ | ✓ | ✓ | ✓ | PanGu-38B | 27.3 | 50.5 | 32.5 |
| UIS-Digger (Qwen) | ✓ | ✓ | ✓ | ✓ | Qwen3-32B | 27.3 | 47.6 | 32.5 |

† Reasoning-oriented LLMs. ‡ GAIA-text-103 (not full GAIA). § Memento without case bank (UIS is a new task).

Best on UIS-QA: UIS-Digger reaches 27.3% (tied best), outperforming all baselines including those with O3 or GPT-4.1.


## 🔧 UIS-Digger Framework and QA Construction Pipeline

UIS-Digger multi-agent system

Figure 2. UIS-Digger multi-agent system. Planner, web searcher, web surfer, and file reader work together. The web surfer can switch between textual and visual mode.

QA construction pipeline

Figure 3. QA pairs construction pipeline. Left: Real-world information → homepage collection → unindexed information collection → question generation. Right: Simulated webpages with difficult action spaces → QA generation from JSON DB. (If the PDF does not render in your viewer, see the paper.)


πŸ“ Dataset Structure

Data fields

Field Type Description
Question string Natural language question that requires unindexed information to answer.
Answer string Golden answer (short fact, number, list, or with explicit evaluation rules, e.g. β€œeither A or B is correct”).

### Data format

  • File: uis_qa.tsv (tab-separated values).
  • Encoding: UTF-8. Fields may be wrapped in triple quotes (""") when they contain tabs or newlines.
  • Splits: No predefined train/validation/test split; the dataset is a test set for benchmarking.
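Because the triple-quote wrapping is nonstandard, off-the-shelf `csv` readers will not split the file correctly. The sketch below is one possible parser, assuming the only escaping mechanism is the triple-quote wrapping described above (and that every opening `"""` has a matching close):

```python
from pathlib import Path

def parse_uis_tsv(text: str) -> list[list[str]]:
    """Split TSV text into rows of fields, honoring the card's
    triple-quote convention: a field wrapped in \"\"\"...\"\"\" may
    contain literal tabs and newlines."""
    rows, fields = [], []
    i, n = 0, len(text)
    while i < n:
        if text.startswith('"""', i):          # triple-quoted field
            end = text.find('"""', i + 3)      # assumes a matching close exists
            fields.append(text[i + 3:end])
            i = end + 3
            if i < n:                          # consume the tab/newline separator
                if text[i] == "\n":
                    rows.append(fields)
                    fields = []
                i += 1
        else:                                  # plain field: scan to next tab/newline
            j = i
            while j < n and text[j] not in "\t\n":
                j += 1
            fields.append(text[i:j])
            if j < n and text[j] == "\n":
                rows.append(fields)
                fields = []
            i = j + 1
    if fields:                                 # file may lack a trailing newline
        rows.append(fields)
    return rows

# Usage:
# rows = parse_uis_tsv(Path("uis_qa.tsv").read_text(encoding="utf-8"))
# header, records = rows[0], rows[1:]
```

Fields containing a literal `"""` inside their content are not handled; the released data is not expected to need that case.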

### Example

```text
Question	Answer
"""How many restrooms does the Metropolitan Museum of Art have, and how many of them are accessible restrooms?"""	"""13 restrooms, 10 accessible restrooms"""
```

(The dataset also contains Chinese questions; the file is UTF-8 encoded.)
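The language of each item is not stored in a separate column. For a local zh/en split, one heuristic (an assumption of this sketch, not part of the release) is to label a question as Chinese if it contains any CJK ideograph:

```python
def is_chinese(question: str) -> bool:
    # Heuristic (assumption): label the question Chinese if it contains
    # any CJK Unified Ideograph (U+4E00..U+9FFF).
    return any("\u4e00" <= ch <= "\u9fff" for ch in question)

def split_by_language(questions: list[str]) -> tuple[list[str], list[str]]:
    # Partition into (Chinese, English) lists using the heuristic above.
    zh = [q for q in questions if is_chinese(q)]
    en = [q for q in questions if not is_chinese(q)]
    return zh, en
```

On the released file this should roughly recover the 84/26 split reported above, though mixed-language questions may be classified either way.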

### Answer encryption (anti-leakage)

To reduce the risk of the benchmark being crawled or used for pretraining (so that models memorize answers instead of performing real information seeking), the Answer column may be released in encrypted form on Hugging Face. Answers then appear as UIS_ENC_V1: followed by ciphertext and are not usable for training.
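Before evaluating, it is worth checking whether your local copy still carries ciphertext. A minimal check on the documented `UIS_ENC_V1:` prefix (decryption itself requires the requested key and the official `scripts/decrypt_answers.py`):

```python
ENC_PREFIX = "UIS_ENC_V1:"

def is_encrypted(answer: str) -> bool:
    # Encrypted answers are released as "UIS_ENC_V1:" + ciphertext.
    return answer.startswith(ENC_PREFIX)

def check_answers(answers: list[str]) -> None:
    # Fail fast if any golden answer is still ciphertext, since scoring
    # against encrypted strings would silently yield 0 accuracy.
    encrypted = sum(is_encrypted(a) for a in answers)
    if encrypted:
        raise RuntimeError(
            f"{encrypted}/{len(answers)} answers are encrypted; "
            "decrypt with scripts/decrypt_answers.py before evaluating."
        )
```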

Evaluators: To obtain the decryption key, please fill in the key request form (with reCAPTCHA verification). The key will be sent automatically to your email after submission. Then use the provided decoder script for local evaluation:

pip install cryptography
python scripts/decrypt_answers.py <encrypted.tsv> -o <decrypted.tsv> --key-file <key.txt>

See scripts/README.md for full instructions. Do not redistribute the key or decrypted answers beyond personal evaluation use.


πŸ“ Data Collection and Curation

Problem formulation (from the paper)

  • Indexed information (II): Information present in search result snippets or in one-step crawl from indexed pages.
  • Unindexed information (UI): All other information on the web (deeper pages, files, dynamic content, etc.).
  • A question is a UIS task if: (1) solving it requires some unindexed information, and (2) the correct answer cannot be inferred from indexed information alone.

### Collection procedure

  1. Expert annotation: An expert group manually created question-answer pairs by:
    • Navigating authoritative or official websites (government, companies, museums, repositories, etc.).
    • Performing interactive actions: multi-round clicks, option selection, filters, site-internal search, file download.
    • Reaching a specific information source (page or file), then formulating a question whose answer is in that content.
  2. Diversity: At most two QA pairs per website to encourage coverage across domains and sites.
  3. Filtering: A UIS filtering pipeline was applied to remove questions solvable using only indexed information (see below).

### Curation principles

| Principle | Description |
| --- | --- |
| Objectivity | Answers are factual, deterministic, and unique (no open-ended or subjective questions). |
| Authoritativeness | Golden answers are derived from authoritative sources; finding and trusting the right source is part of the task. |
| Static nature | Answers are chosen so that they remain valid across evaluation times (no "today's weather" style questions). |
| Verifiability | Answers allow automatic or rule-based verification (numbers, dates, proper nouns, or explicit rules such as "A or B counts as correct"). |
| Accessibility | Questions avoid CAPTCHAs, login-only content, or other barriers that would require human verification during browsing. |

### UIS filtering pipeline

To ensure each question truly requires unindexed information:

  1. Manual check: Three annotators independently used Google Search to verify that the target content is not directly in the search result page (if the SERP only links to the content page, the question is still considered UIS).
  2. Automatic verifier (z.ai): Used to filter out questions answerable from search alone; questions that require downloading a file to answer are kept as UIS.
  3. LLM filter: An offline LLM (e.g., DeepSeek-R1) was used to remove questions answerable from the model's internal knowledge alone.

The final set consists of 110 high-quality UIS samples.


## 🌐 Task and Domain Coverage

  • Task type: Information seeking with final-answer-oriented evaluation (short, deterministic answers).
  • Environment: Real-world live web; unknown start point (agents begin from a general-purpose search engine).
  • Required capabilities: Search, crawl, optional file download/parsing, and webpage interaction (click, scroll, select, type, etc.).

Domains (non-exhaustive): government announcements, official product/data pages, source code repositories, games (e.g., esports, game wikis), company reports, museums and cultural institutions, finance and markets, academic and patent databases, sports, and general factual lookups from official or authoritative sites.


## ✅ Evaluation

  • Metric: Accuracy (exact or rule-based match of the model's final answer to the golden answer).
  • Verification: The paper uses a rule-based LLM as an automatic verifier; some answers include explicit criteria (e.g., "either 117 or 115 is correct").
  • Important: Evaluation should be performed with an agent that has access to search and browsing (and optionally file reading), not with a single-call LLM API, so that unindexed information can actually be sought.
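The paper's verifier is LLM-based; as a cheap local lower bound, a normalized exact match that also honors explicit rules like "either 117 or 115 is correct" can be used. A sketch (the rule phrasing it parses is an assumption generalized from this card's examples):

```python
import re

def normalize(s: str) -> str:
    # Lowercase, trim, and collapse internal whitespace before comparing.
    return re.sub(r"\s+", " ", s.strip().lower())

def is_correct(prediction: str, gold: str) -> bool:
    # If the golden answer states a rule like "either 117 or 115 is
    # correct" (phrasing assumed), accept any listed alternative;
    # otherwise require a normalized exact match.
    pred, gold_n = normalize(prediction), normalize(gold)
    m = re.fullmatch(r"either (.+) is correct", gold_n)
    if m:
        options = re.split(r"\s+or\s+", m.group(1))
        return pred in options
    return pred == gold_n

def accuracy(predictions: list[str], golds: list[str]) -> float:
    # Fraction of items judged correct: the benchmark's reported metric.
    correct = sum(is_correct(p, g) for p, g in zip(predictions, golds))
    return correct / len(golds)
```

Because agent answers are often long-form, string matching will undercount; treat this as a sanity check, not a replacement for the verifier.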

## ⚠️ Limitations and Considerations

| Topic | Note |
| --- | --- |
| Language imbalance | Most questions (84) are in Chinese; 26 are in English. |
| Temporal validity | Although answers were chosen for static nature, some sources may change over time (e.g., updated reports, redesigned websites), which can affect reproducibility. |
| Difficulty | Even the best reported system (UIS-Digger) reaches about 27.27% accuracy on UIS-QA, indicating that the benchmark is challenging and that UIS remains an open research problem. |
| Scope | The benchmark focuses on factual, verifiable QA; it does not cover subjective or long-form generation quality. |

## 🎯 Intended Use

  • Primary: Benchmarking information-seeking agents (with search + browsing ± file tools) on their ability to find and use unindexed web information.
  • Research: Studying failure modes (e.g., retrieval vs. reasoning), action-space design, and training strategies for UIS-capable agents.
  • Not intended: Training data without proper licensing of underlying sources; evaluation of pure LLMs without tools; non-information-seeking tasks.

## 📎 Citation

If you use UIS-QA in your work, please cite the ICLR 2026 paper:

@inproceedings{uis-digger-iclr2026,
  title     = {UIS-Digger: Towards Comprehensive Research Agent Systems for Real-World Unindexed Information Seeking},
  author    = {Liu, Chang and Kuang, Chuqiao and Zhuang, Tianyi and Cheng, Yuxin and Zhou, Huichi and Li, Xiaoguang and Shang, Lifeng},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026}
}

## 📜 License

This dataset is released under the Apache-2.0 license.


## 🔗 Related Resources

  • UIS-Digger: The multi-agent framework and baseline introduced in the same paper; achieves 27.27% on UIS-QA with a ~30B-parameter backbone and SFT + RFT training.
  • Paper & code: See the paper for methodology, baselines, and analysis of failure modes and agent behavior.