Commit a57427f (verified) by jacklanda · 1 parent: 5ba4121

Update README.md

Files changed (1): README.md (+157 −142)
---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
- zh
tags:
- economics_and_finance
- healthcare_and_medicine
- industry
- law
- natural_science
pretty_name: $OneMillion-Bench
size_categories:
- n<1K
---

# $OneMillion-Bench

A realistic, expert-level bilingual (English/Chinese) benchmark for evaluating language agents across **5 professional domains**. The benchmark contains **400 entries** with detailed, weighted rubric-based grading criteria designed for fine-grained evaluation of domain expertise, analytical reasoning, and instruction following.

## Dataset Structure

Each subdirectory is a **Hugging Face subset** (configuration), and all data is in the **`test`** split.

```
$OneMillion-Bench/
├── economics_and_finance/
│   └── test.json            # 80 entries (40 EN + 40 CN, distinct questions)
├── healthcare_and_medicine/
│   └── test.json            # 80 entries (40 matched EN-CN pairs)
├── industry/
│   └── test.json            # 80 entries (40 matched EN-CN pairs)
├── law/
│   └── test.json            # 80 entries (40 EN + 40 CN, distinct questions)
├── natural_science/
│   └── test.json            # 80 entries (40 matched EN-CN pairs)
└── README.md
```

| Subset | Split | Entries |
|---|---|---|
| `economics_and_finance` | `test` | 80 |
| `healthcare_and_medicine` | `test` | 80 |
| `industry` | `test` | 80 |
| `law` | `test` | 80 |
| `natural_science` | `test` | 80 |

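Because every subset follows the same `<subset>/test.json` layout, all five files can be loaded in one pass. A minimal sketch (the `SUBSETS` list mirrors the tree above; the `subset_path` and `load_all` helpers are illustrative, not part of the dataset):

```python
import json
import os

# Subset (configuration) names, matching the directory layout above.
SUBSETS = [
    "economics_and_finance",
    "healthcare_and_medicine",
    "industry",
    "law",
    "natural_science",
]

def subset_path(root: str, subset: str) -> str:
    """Path of a subset's single `test` split file."""
    return os.path.join(root, subset, "test.json")

def load_all(root: str = ".") -> dict:
    """Load every subset present under `root` into a dict of entry lists."""
    data = {}
    for name in SUBSETS:
        path = subset_path(root, name)
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                data[name] = json.load(f)
    return data
```

When pulling from the Hub instead of a local checkout, the same subsets should be reachable with `datasets.load_dataset("<repo_id>", "<subset>", split="test")`, using the subset name as the configuration name.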
## Domains & Coverage

| Domain | Categories | Example Subcategories | Bilingual Mode |
|---|---|---|---|
| **Economics & Finance** | Investing, FinTech, Banking, Insurance, M&A | Equities, VC/PE, Cryptocurrency, Commodities | Separate questions per language |
| **Healthcare & Medicine** | Clinical Medicine, Basic Medicine, Pharma & Biotech | Hepatobiliary Surgery, Oncology, Nephrology, Dentistry | Matched translation pairs |
| **Industry** | Telecommunications, ML, Architecture, Semiconductors | Backend Dev, Chemical Engineering, Chip Design | Matched translation pairs |
| **Law** | Civil, Criminal, International, Corporate, IP, Labor | Contract Disputes, Criminal Defense, Copyright, M&A | Separate questions per language |
| **Natural Science** | Chemistry, Biology, Physics, Mathematics | Organic Chemistry, Condensed Matter, Molecular Biology | Matched translation pairs |

## Entry Schema

Each entry is a JSON object with 7 fields:

```jsonc
{
  "id": "uuid-string",                      // globally unique identifier
  "case_id": 1,                             // links bilingual pairs (in matched-pair domains)
  "language": "en",                         // "en" or "cn" (50/50 split in every file)
  "system_prompt": "",                      // reserved (empty across all entries)
  "question": "...",                        // expert-level evaluation prompt
  "tags": {
    "topics": [                             // 3-level taxonomy
      "Domain",                             // e.g. "Economics and Finance"
      "Category",                           // e.g. "Investing"
      "Subcategory"                         // e.g. "Equities"
    ],
    "time_sensitivity": {
      "time_sensitivity": "Time-agnostic",  // or "Weakly/Strongly time-sensitive"
      "year_month": "NA",                   // "YYYY-MM" when time-sensitive
      "day": "NA"                           // "DD" when applicable
    }
  },
  "rubrics": [                              // weighted grading criteria (11-37 per entry)
    {
      "rubric_number": 1,
      "rubric_detail": "...",               // specific grading criterion
      "rubric_weight": 5,                   // positive = reward, negative = penalty
      "rubric_label": "..."                 // category (see below)
    }
  ]
}
```

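The schema above can be checked mechanically before running an evaluation. A minimal validation sketch (field names and types come from the schema; the `validate_entry` helper and the sample entry are illustrative):

```python
# Top-level fields every entry must carry, per the schema above.
REQUIRED_FIELDS = {
    "id": str,
    "case_id": int,
    "language": str,
    "system_prompt": str,
    "question": str,
    "tags": dict,
    "rubrics": list,
}

def validate_entry(entry: dict) -> list:
    """Return a list of problems; an empty list means the entry looks well-formed."""
    problems = []
    for field, typ in REQUIRED_FIELDS.items():
        if field not in entry:
            problems.append(f"missing field: {field}")
        elif not isinstance(entry[field], typ):
            problems.append(f"{field}: expected {typ.__name__}")
    if entry.get("language") not in ("en", "cn"):
        problems.append("language must be 'en' or 'cn'")
    if len(entry.get("tags", {}).get("topics", [])) != 3:
        problems.append("tags.topics must have 3 levels")
    return problems

# Illustrative entry shaped like the schema (not a real dataset record).
sample = {
    "id": "00000000-0000-0000-0000-000000000000",
    "case_id": 1,
    "language": "en",
    "system_prompt": "",
    "question": "Explain ...",
    "tags": {
        "topics": ["Natural Sciences", "Chemistry", "Organic Chemistry"],
        "time_sensitivity": {
            "time_sensitivity": "Time-agnostic",
            "year_month": "NA",
            "day": "NA",
        },
    },
    "rubrics": [
        {"rubric_number": 1, "rubric_detail": "...", "rubric_weight": 5,
         "rubric_label": "Factual Information"},
    ],
}

assert validate_entry(sample) == []
```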
### Rubric Labels

| Label | Role | Typical Weight |
|---|---|---|
| Factual Information | Tests factual accuracy | +3 to +5 |
| Analytical Reasoning | Assesses depth of analysis | +3 to +5 |
| Structure and Formatting | Evaluates output organization | -2 to -4 (penalty) |
| Instructions Following | Checks compliance with task constraints | mixed |

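When inspecting an entry, it can help to see how weight is distributed across these labels. A small aggregation sketch (the sample rubric list is illustrative):

```python
from collections import defaultdict

def weight_by_label(rubrics: list) -> dict:
    """Sum positive (reward) and negative (penalty) weight per rubric label."""
    summary = defaultdict(lambda: {"reward": 0, "penalty": 0})
    for r in rubrics:
        w = r["rubric_weight"]
        key = "reward" if w > 0 else "penalty"
        summary[r["rubric_label"]][key] += w
    return dict(summary)

# Illustrative rubrics mirroring the label table above.
rubrics = [
    {"rubric_label": "Factual Information", "rubric_weight": 5},
    {"rubric_label": "Factual Information", "rubric_weight": 3},
    {"rubric_label": "Analytical Reasoning", "rubric_weight": 4},
    {"rubric_label": "Structure and Formatting", "rubric_weight": -3},
]

print(weight_by_label(rubrics))
# {'Factual Information': {'reward': 8, 'penalty': 0},
#  'Analytical Reasoning': {'reward': 4, 'penalty': 0},
#  'Structure and Formatting': {'reward': 0, 'penalty': -3}}
```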
## Quick Start

```python
import json

# Load a subset (test split)
with open("natural_science/test.json", encoding="utf-8") as f:
    data = json.load(f)

# Filter English entries
en_entries = [e for e in data if e["language"] == "en"]

# Inspect the first entry and its top rubrics
for entry in en_entries[:1]:
    print(f"Topic: {' > '.join(entry['tags']['topics'])}")
    print(f"Question: {entry['question'][:200]}...")
    print(f"Rubrics ({len(entry['rubrics'])}):")
    for r in entry["rubrics"][:3]:
        print(f"  [{r['rubric_weight']:+d}] {r['rubric_label']}: {r['rubric_detail'][:80]}...")
```

Example output:

```
Topic: Natural Sciences > Chemistry > Organic Chemistry
Question: You are an expert in organic chemistry. A graduate student is researching ...
Rubrics (18):
  [+5] Factual Information: Correctly identifies the primary reaction mechanism ...
  [+4] Analytical Reasoning: Provides a coherent comparison of thermodynamic vs ...
  [-3] Structure and Formatting: Response lacks clear section headings or logica...
```

## Evaluation

Score a model response by summing the weights of the rubrics it satisfies:

```python
def score(response: str, rubrics: list, judge_fn) -> dict:
    """
    judge_fn(response, rubric_detail) -> bool
    """
    total, earned = 0, 0
    for r in rubrics:
        if judge_fn(response, r["rubric_detail"]):
            earned += r["rubric_weight"]
        if r["rubric_weight"] > 0:
            total += r["rubric_weight"]
    return {"score": earned, "max_possible": total, "pct": earned / total if total else 0}
```

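In practice `judge_fn` would typically be an LLM judge; for a quick end-to-end check, a toy keyword judge is enough. A sketch (the judge, rubrics, and response below are all illustrative, not dataset content):

```python
# `score` repeated from the block above so this snippet runs standalone.
def score(response: str, rubrics: list, judge_fn) -> dict:
    total, earned = 0, 0
    for r in rubrics:
        if judge_fn(response, r["rubric_detail"]):
            earned += r["rubric_weight"]
        if r["rubric_weight"] > 0:
            total += r["rubric_weight"]
    return {"score": earned, "max_possible": total, "pct": earned / total if total else 0}

def keyword_judge(response: str, rubric_detail: str) -> bool:
    """Toy judge: a rubric is met if its single-quoted keyword appears in the response."""
    keyword = rubric_detail.split("'")[1]
    return keyword.lower() in response.lower()

# Illustrative rubrics; each carries its keyword in single quotes.
rubrics = [
    {"rubric_detail": "Mentions 'SN2'", "rubric_weight": 5},
    {"rubric_detail": "Mentions 'stereochemistry'", "rubric_weight": 4},
    {"rubric_detail": "Uses the filler phrase 'in conclusion'", "rubric_weight": -2},
]

result = score("The SN2 pathway inverts stereochemistry.", rubrics, keyword_judge)
print(result)  # {'score': 9, 'max_possible': 9, 'pct': 1.0}
```

Note that negative-weight rubrics subtract from the score when triggered but never add to `max_possible`, so a clean response can still reach 100%.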
## License

Apache 2.0