---
license: apache-2.0
viewer: false
task_categories:
  - text-generation
language:
  - en
  - zh
---

# MiroFlow Benchmarks

This repository contains the benchmarking datasets used for the MiroFlow Framework and the evaluation of the MiroThinker series of research agents.

For more details, see the paper *MiroThinker-1.7 & H1: Towards Heavy-Duty Research Agents via Verification*.

## Included Benchmarks

The suite provides a comprehensive evaluation for research agents across several domains:

  • GAIA (Validation & Text-103): Benchmarks for General AI Assistants.
  • HLE (Humanity's Last Exam): Includes HLE-Text-2158 and HLE-Text-500 subsets for high-level reasoning.
  • BrowseComp (EN & ZH): Web browsing and comprehension tasks in English and Chinese.
  • WebWalkerQA: Web navigation and question answering.
  • Frames: Factuality, Retrieval, And reasoning MEasurement Set.
  • XBench-DeepSearch: Evaluation for deep research agents.
  • FutureX: A live benchmark for predicting unknown futures.
  • SEAL-0: Evaluating LLMs on conflicting-evidence web questions.
  • AIME2025: American Invitational Mathematics Examination 2025.
  • DeepSearchQA: Google's Deep Search Question Answering benchmark.

## Download and Usage

The benchmark data is provided as a password-protected zip file. To download and extract the data, use the following commands:

```shell
wget https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/data_20251115_password_protected.zip
unzip data_20251115_password_protected.zip
# Password: pf4*
rm data_20251115_password_protected.zip
```

For evaluation scripts and agent configurations, please refer to the MiroThinker GitHub repository.

## Citation

If you find these benchmarks useful in your research, please consider citing:

```bibtex
@article{miromind2025mirothinker,
  title={MiroThinker-1.7 \& H1: Towards Heavy-Duty Research Agents via Verification},
  author={MiroMind Team and Bai, S. and Bing, L. and Lei, L. and Li, R. and Li, X. and Lin, X. and Min, E. and Su, L. and Wang, B. and Wang, L. and Wang, L. and Wang, S. and Wang, X. and Zhang, Y. and Zhang, Z. and others},
  journal={arXiv preprint arXiv:2603.15726},
  year={2026}
}
```