---
license: mit
task_categories:
- question-answering
language:
- en
- zh
---

# ORION Evaluation Results

ORION is a multilingual benchmark designed to evaluate open-domain reasoning across diverse web-related domains. Each example requires multi-step logical composition grounded in verifiable sources, challenging advanced AI assistants and retrieval-augmented models. The dataset consists of 310 questions (170 in Chinese and 140 in English), each accompanied by a verified answer, acceptable variants (e.g., aliases or synonymous expressions, separated by `|`), and evidence URLs to ensure fair and flexible evaluation. The table below reports the accuracy of three AI systems on ORION.

| **AI System**             | **Chinese (%)** | **English (%)** | **Overall (%)** |
|--------------------------|------------------|------------------|------------------|
| Kimi Exploration Edition | 14.7             | 20.0             | 17.1             |
| Doubao Search            | 23.5             | 30.7             | 26.8             |
| Qwen2.5-Max Search       | 20.0             | 20.7             | 20.3             |
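Since each answer lists its acceptable variants separated by `|`, scoring a prediction reduces to a normalized membership check. The sketch below illustrates one way to do this; the field layout and the normalization choices (casefolding, whitespace stripping) are assumptions, not the dataset's documented schema.

```python
def is_correct(prediction: str, answer_field: str) -> bool:
    """Check a model prediction against '|'-separated acceptable variants.

    Normalization (strip + casefold) is an assumption made for this
    sketch; stricter or looser matching may be appropriate.
    """
    variants = [v.strip().casefold() for v in answer_field.split("|")]
    return prediction.strip().casefold() in variants


# Usage with a hypothetical record:
print(is_correct("NYC", "New York City | NYC | New York"))  # True
print(is_correct("Boston", "New York City | NYC | New York"))  # False
```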


# Citation