---
license: mit
task_categories:
  - question-answering
language:
  - en
  - zh
---

# ORION Evaluation Results

ORION is a multilingual benchmark designed to evaluate open-domain reasoning across diverse web-related domains. Each example requires multi-step logical composition grounded in verifiable sources, challenging advanced AI assistants and retrieval-augmented models. The dataset consists of 310 questions (170 in Chinese and 140 in English), each accompanied by a verified answer, a set of acceptable variants (e.g., aliases or synonymous expressions, separated by `|`), and evidence URLs to support fair and flexible evaluation. The table below reports the accuracy of three AI systems on ORION.
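To illustrate the answer format, here is a minimal sketch of how a prediction could be scored against the pipe-separated acceptable variants. The function name and the normalization (case-folding and whitespace trimming) are assumptions for illustration, not the benchmark's official scoring script.

```python
def is_correct(prediction: str, answer_field: str) -> bool:
    """Check a model prediction against the gold answer and its
    |-separated acceptable variants (aliases, synonymous expressions).

    Normalization (lowercasing, trimming) is an assumption here;
    the official ORION evaluation may differ.
    """
    variants = [v.strip().lower() for v in answer_field.split("|")]
    return prediction.strip().lower() in variants


# Hypothetical record in the format described above.
print(is_correct("NYC", "New York City | NYC | New York"))  # True: matches a variant
print(is_correct("Boston", "New York City | NYC | New York"))  # False: no variant matches
```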

| AI System                | Chinese (%) | English (%) | Overall (%) |
|--------------------------|-------------|-------------|-------------|
| Kimi Exploration Edition | 14.7        | 20.0        | 17.1        |
| Doubao Search            | 23.5        | 30.7        | 26.8        |
| Qwen2.5-Max Search       | 20.0        | 20.7        | 20.3        |
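The overall column is consistent with a question-weighted average over the 170 Chinese and 140 English questions, which can be checked as follows (a sketch, assuming per-language accuracies are simple percentages):

```python
# Number of questions per language in ORION (from the dataset description).
N_ZH, N_EN = 170, 140


def overall_accuracy(zh_acc: float, en_acc: float) -> float:
    """Question-weighted overall accuracy across both languages."""
    return (N_ZH * zh_acc + N_EN * en_acc) / (N_ZH + N_EN)


# Reproduces the Doubao Search row: 23.5% (zh) and 30.7% (en) -> 26.8% overall.
print(round(overall_accuracy(23.5, 30.7), 1))  # 26.8
```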

## Citation