---
language:
- en
license: apache-2.0
pretty_name: "FINAL Bench — Functional Metacognitive Reasoning Benchmark"
size_categories:
- n<1K
task_categories:
- text-generation
- question-answering
tags:
- functional-metacognition
- self-correction
- reasoning
- benchmark
- error-recovery
- declarative-procedural-gap
- cognitive-bias
- TICOS
- AGI-evaluation
- LLM-evaluation
- metacognition
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: domain
    dtype: string
  - name: grade
    dtype: string
  - name: ticos_type
    dtype: string
  - name: difficulty
    dtype: string
  - name: lens
    dtype: string
  - name: title
    dtype: string
  - name: prompt
    dtype: string
  - name: expected_behavior
    dtype: string
  - name: hidden_trap
    dtype: string
  - name: ticos_required
    sequence: string
  - name: ticos_optional
    sequence: string
  config_name: default
  splits:
  - name: test
    num_examples: 100
data_files:
- split: test
  path: FINAL_Bench_100.jsonl
---

# FINAL Bench: Functional Metacognitive Reasoning Benchmark

> **"Not how much AI knows — but whether it knows what it doesn't know, and can fix it."**

---

## Overview

**FINAL Bench** (Frontier Intelligence Nexus for AGI-Level Verification) is the first comprehensive benchmark for evaluating **functional metacognition** in Large Language Models (LLMs).

Unlike existing benchmarks (MMLU, HumanEval, GPQA) that measure only final-answer accuracy, FINAL Bench evaluates the **entire pipeline of error detection, acknowledgment, and correction** — the hallmark of expert-level intelligence and a prerequisite for AGI.

| Item | Detail |
|------|--------|
| **Version** | 3.0 |
| **Tasks** | 100 |
| **Domains** | 15 (Mathematics, Medicine, Ethics, Philosophy, Economics, etc.) |
| **Metacognitive Types** | 8 TICOS types |
| **Difficulty Grades** | A (frontier) / B (expert) / C (advanced) |
| **Evaluation Axes** | 5 (PQ, MA, ER, ID, FC) |
| **Language** | English |
| **License** | Apache 2.0 |

---

## Why FINAL Bench?

### Metacognition Is the Gateway to AGI

Metacognition — the ability to **detect one's own errors and self-correct** — is what separates human experts from novices. Without this capability, no system can achieve AGI regardless of its knowledge breadth or reasoning depth.

### Limitations of Existing Benchmarks

| Generation | Representative | Measures | Limitation |
|-----------|---------------|----------|-----------|
| 1st | MMLU | Knowledge | Saturated (>90%) |
| 2nd | GSM8K, MATH | Reasoning | Answer-only |
| 3rd | GPQA, HLE | Expertise | Answer-only |
| **4th** | **FINAL Bench** | **Functional Metacognition** | **Detect → Acknowledge → Correct** |

### Key Findings (9 SOTA Models Evaluated)

Evaluation of 9 state-of-the-art models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, DeepSeek-V3.2, and others) reveals:

- **ER Dominance**: **94.8%** of MetaCog gain originates from the Error Recovery axis alone
- **Declarative-Procedural Gap**: All 9 models can *verbalize* uncertainty but cannot *act* on it — mean MA–ER gap of 0.392
- **Difficulty Effect**: Harder tasks yield dramatically larger self-correction gains (Pearson *r* = –0.777, *p* < 0.001)

---

## Dataset Structure

### Task Fields

| Field | Type | Description |
|-------|------|-------------|
| `task_id` | string | Unique identifier (e.g., FINAL-A01, FINAL-B15) |
| `domain` | string | One of 15 domains |
| `grade` | string | Difficulty grade: A / B / C |
| `ticos_type` | string | One of 8 metacognitive types |
| `difficulty` | string | frontier / expert |
| `lens` | string | Evaluation lens (theoretical / quantitative / debate) |
| `title` | string | Task title |
| `prompt` | string | Full prompt presented to the model |
| `expected_behavior` | string | Description of ideal metacognitive behavior |
| `hidden_trap` | string | Description of the embedded cognitive trap |
| `ticos_required` | list | Required TICOS elements |
| `ticos_optional` | list | Optional TICOS elements |

### Grade Distribution

| Grade | Tasks | Weight | Characteristics |
|-------|-------|--------|----------------|
| **A** (frontier) | 50 | ×1.5 | Open problems, multi-stage traps |
| **B** (expert) | 33 | ×1.0 | Expert-level with embedded reversals |
| **C** (advanced) | 17 | ×0.7 | Advanced undergraduate level |

### Domain Distribution (15 domains)

| Domain | n | Domain | n |
|--------|---|--------|---|
| Medicine | 11 | Art | 6 |
| Mathematics & Logic | 9 | Language & Writing | 6 |
| Ethics | 9 | AI & Technology | 6 |
| War & Security | 8 | History | 6 |
| Philosophy | 7 | Space & Physics | 6 |
| Economics | 7 | Religion & Mythology | 3 |
| Chemistry & Biology | 7 | Literature | 3 |
| Science | 6 | | |

### TICOS Metacognitive Type Distribution (8 types)

| TICOS Type | Core Competency | Tasks | Declarative / Procedural |
|-----------|----------------|-------|--------------------------|
| **F_ExpertPanel** | Multi-perspective synthesis | 16 | Mixed |
| **H_DecisionUnderUncertainty** | Decision under incomplete info | 15 | Declarative-dominant |
| **E_SelfCorrecting** | Explicit error detection & correction | 14 | Pure procedural |
| **G_PivotDetection** | Key assumption change detection | 14 | Procedural-dominant |
| **A_TrapEscape** | Trap recognition & escape | 13 | Procedural-dominant |
| **C_ProgressiveDiscovery** | Judgment revision upon new evidence | 11 | Procedural-dominant |
| **D_MultiConstraint** | Optimization under conflicting constraints | 10 | Procedural-dominant |
| **B_ContradictionResolution** | Contradiction detection & resolution | 7 | Mixed |

---

## Five-Axis Evaluation Rubric

Each task is independently scored on five axes:

| Axis | Symbol | Weight | Measurement Target | Metacognitive Layer |
|------|--------|--------|--------------------|-------------------|
| Process Quality | **PQ** | 15% | Structured reasoning quality | — |
| Metacognitive Accuracy | **MA** | 20% | Confidence calibration, limit awareness | L1 (Declarative) |
| Error Recovery | **ER** | 25% | Error detection & correction behavior | L3 (Procedural) |
| Integration Depth | **ID** | 20% | Multi-perspective integration | — |
| Final Correctness | **FC** | 20% | Final answer accuracy | — |

**FINAL Score** = Σ(weighted_score × grade_weight) / Σ(grade_weight)
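
The aggregation formula above can be sketched in a few lines of Python. This is a minimal illustration, not the released harness: the `final_score` helper name and the 0-100 reporting scale are assumptions (the scale is inferred from the leaderboard values below).

```python
# Grade weights from the Grade Distribution table; axis weights from the rubric.
GRADE_WEIGHT = {"A": 1.5, "B": 1.0, "C": 0.7}
AXIS_WEIGHT = {"PQ": 0.15, "MA": 0.20, "ER": 0.25, "ID": 0.20, "FC": 0.20}

def final_score(task_results):
    """task_results: list of (grade, {axis: score in [0, 1]}) pairs.

    Returns the grade-weighted mean of per-task weighted scores, on a
    0-100 scale (an assumption made for illustration).
    """
    num = den = 0.0
    for grade, axes in task_results:
        weighted = sum(AXIS_WEIGHT[a] * axes[a] for a in AXIS_WEIGHT)
        num += weighted * GRADE_WEIGHT[grade]
        den += GRADE_WEIGHT[grade]
    return 100.0 * num / den
```

A grade-A task therefore moves the aggregate roughly twice as much as a grade-C task (1.5 vs. 0.7).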

### The MA–ER Separation: Core Innovation

- **MA (Metacognitive Accuracy)** = The ability to *say* "I might be wrong" (declarative metacognition)
- **ER (Error Recovery)** = The ability to *actually fix it* after recognizing the error (procedural metacognition)
- **MA–ER Gap** = The measured dissociation between "knowing" and "doing"

This separation directly maps to the monitoring–control model of Nelson & Narens (1990) from cognitive psychology.

---

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("VIDRAFT/FINAL-Bench", split="test")

# Total 100 tasks
print(f"Total tasks: {len(dataset)}")

# Inspect a task
task = dataset[0]
print(f"ID: {task['task_id']}")
print(f"Domain: {task['domain']}")
print(f"TICOS: {task['ticos_type']}")
print(f"Prompt: {task['prompt'][:200]}...")
```
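
As a quick sanity check, a `collections.Counter` over the rows reproduces the grade and domain distributions tabulated above. The snippet below runs on toy dicts so it needs no download; the same expressions work on the loaded `dataset` object.

```python
from collections import Counter

# Toy rows standing in for dataset entries; the real split has 100 tasks
# distributed 50 / 33 / 17 across grades A / B / C.
rows = [
    {"task_id": "FINAL-A01", "grade": "A", "domain": "Medicine"},
    {"task_id": "FINAL-A02", "grade": "A", "domain": "Ethics"},
    {"task_id": "FINAL-B01", "grade": "B", "domain": "Philosophy"},
    {"task_id": "FINAL-C01", "grade": "C", "domain": "Medicine"},
]

grade_counts = Counter(row["grade"] for row in rows)
domain_counts = Counter(row["domain"] for row in rows)
print(grade_counts)  # Counter({'A': 2, 'B': 1, 'C': 1})
```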
206
+
207
+ ### Baseline Evaluation (Single API Call)
208
+
209
+ ```python
210
+ def evaluate_baseline(task, client, model_name):
211
+ """Baseline condition: single call, no self-correction prompting."""
212
+ response = client.chat.completions.create(
213
+ model=model_name,
214
+ messages=[{"role": "user", "content": task['prompt']}],
215
+ temperature=0.0
216
+ )
217
+ return response.choices[0].message.content
218
+
219
+ results = []
220
+ for task in dataset:
221
+ response = evaluate_baseline(task, client, "your-model")
222
+ results.append({
223
+ "task_id": task['task_id'],
224
+ "response": response
225
+ })
226
+ ```
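
The MetaCog condition adds a second, explicit self-correction pass on top of the baseline call. The exact scaffold wording used in the reported runs is not included in this card, so the `REVISION_PROMPT` and the `evaluate_metacog` helper below are an illustrative sketch, not the official protocol.

```python
# Illustrative self-correction instruction (assumption, not the official scaffold).
REVISION_PROMPT = (
    "Review your answer above. Identify any errors, hidden traps, or "
    "unjustified assumptions in your reasoning, then give a corrected "
    "final answer."
)

def evaluate_metacog(task, client, model_name):
    """MetaCog condition (sketch): first pass, then a self-correction pass."""
    messages = [{"role": "user", "content": task["prompt"]}]
    first = client.chat.completions.create(
        model=model_name, messages=messages, temperature=0.0
    )
    draft = first.choices[0].message.content
    # Feed the draft back with the self-correction instruction.
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": REVISION_PROMPT},
    ]
    second = client.chat.completions.create(
        model=model_name, messages=messages, temperature=0.0
    )
    return second.choices[0].message.content
```

Scoring both conditions on the same tasks isolates Δ_MC, the gain attributable to the self-correction scaffold.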

### Five-Axis Judge Evaluation

```python
JUDGE_PROMPT = """
Evaluate the following response using the FINAL Bench 5-axis rubric.

[Task]
{prompt}

[Expected Behavior]
{expected_behavior}

[Hidden Trap]
{hidden_trap}

[Model Response]
{response}

Score each axis from 0.00 to 1.00 (in 0.25 increments):
- process_quality (PQ): Structured reasoning quality
- metacognitive_accuracy (MA): Confidence calibration, self-limit awareness
- error_recovery (ER): Error detection and correction behavior
- integration_depth (ID): Multi-perspective integration depth
- final_correctness (FC): Final answer accuracy

Output in JSON format.
"""
```
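
Consuming the judge's JSON reply can be sketched as follows. The field names mirror the axis keys in the prompt above; `parse_judge` and `task_score` are illustrative helper names, and a production harness would validate the judge output (range, 0.25 increments) more defensively.

```python
import json

# Rubric weights from the Five-Axis Evaluation Rubric table.
WEIGHTS = {"PQ": 0.15, "MA": 0.20, "ER": 0.25, "ID": 0.20, "FC": 0.20}

def parse_judge(reply):
    """Map the judge's JSON reply onto the five axis symbols."""
    raw = json.loads(reply)
    return {
        "PQ": raw["process_quality"],
        "MA": raw["metacognitive_accuracy"],
        "ER": raw["error_recovery"],
        "ID": raw["integration_depth"],
        "FC": raw["final_correctness"],
    }

def task_score(scores):
    """Per-task weighted score, reported here on a 0-100 scale."""
    return 100.0 * sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```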

---

## Benchmark Results (9 SOTA Models)

### Key Findings — Visual Summary

![Fig 1. Multi-Model Leaderboard](fig1.png)
*Figure 1. Baseline + MetaCog scores and MetaCog gain (Δ_MC) across 9 models.*

![Fig 2. ER Transformation](fig2.png)
*Figure 2. Error Recovery distribution shift — 79.6% at floor (Baseline) → 98.1% at ≥0.75 (MetaCog).*

![Fig 3. Declarative-Procedural Gap](fig3.png)
*Figure 3. MA vs ER scatter plot showing the Baseline (○) → MetaCog (□) transition for all 9 models.*

![Fig 4. Difficulty Effect](fig4.png)
*Figure 4. Harder tasks benefit more from MetaCog (Pearson r = –0.777, p < 0.001).*

![Fig 5. Five-Axis Contribution](fig5.png)
*Figure 5. ER accounts for 94.8% of the total MetaCog gain across 9 models.*

### Baseline Leaderboard

| Rank | Model | FINAL | PQ | MA | ER | ID | FC | MA–ER Gap |
|------|-------|-------|----|----|----|----|-----|-----------|
| 1 | Kimi K2.5 | 68.71 | 0.775 | 0.775 | 0.450 | 0.767 | 0.750 | 0.325 |
| 2 | GPT-5.2 | 62.76 | 0.750 | 0.750 | 0.336 | 0.724 | 0.681 | 0.414 |
| 3 | GLM-5 | 62.50 | 0.750 | 0.750 | 0.284 | 0.733 | 0.724 | 0.466 |
| 4 | MiniMax-M1-2.5 | 60.54 | 0.742 | 0.733 | 0.250 | 0.725 | 0.700 | 0.483 |
| 5 | GPT-OSS-120B | 60.42 | 0.750 | 0.708 | 0.267 | 0.725 | 0.692 | 0.442 |
| 6 | DeepSeek-V3.2 | 60.04 | 0.750 | 0.700 | 0.258 | 0.683 | 0.733 | 0.442 |
| 7 | GLM-4.7P | 59.54 | 0.750 | 0.575 | 0.292 | 0.733 | 0.742 | 0.283 |
| 8 | Gemini 3 Pro | 59.50 | 0.750 | 0.550 | 0.317 | 0.750 | 0.717 | 0.233 |
| 9 | Claude Opus 4.6 | 56.04 | 0.692 | 0.708 | 0.267 | 0.725 | 0.517 | 0.442 |
| | **Mean** | **61.12** | **0.745** | **0.694** | **0.302** | **0.729** | **0.695** | **0.392** |

### MetaCog Leaderboard

| Rank | Model | FINAL | ER | Δ_MC |
|------|-------|-------|------|------|
| 1 | Kimi K2.5 | 78.54 | 0.908 | +9.83 |
| 2 | Gemini 3 Pro | 77.08 | 0.875 | +17.58 |
| 3 | GPT-5.2 | 76.50 | 0.792 | +13.74 |
| 4 | GLM-5 | 76.38 | 0.808 | +13.88 |
| 5 | Claude Opus 4.6 | 76.17 | 0.867 | +20.13 |
| | **Mean** | **75.17** | **0.835** | **+14.05** |

*Top 5 of the 9 evaluated models shown; the mean row is computed over all 9 models.*

### Five-Axis Contribution Analysis

| Rubric | Contribution | Interpretation |
|--------|-------------|---------------|
| **Error Recovery** | **94.8%** | Nearly all of the self-correction effect |
| Metacognitive Accuracy | 5.0% | "Saying" ability barely changes |
| Remaining 3 axes | 0.2% | Negligible change |

---

## Theoretical Background

### Functional Metacognition

> **Definition.** Observable behavioral patterns in which a model *detects*, *acknowledges*, and *corrects* errors in its own reasoning. Whether this pattern shares the same internal mechanism as human subjective self-awareness is outside the scope of measurement; only behavioral indicators are assessed.

This definition is grounded in the functionalist tradition of Dennett (1987) and Block (1995), avoiding the anthropomorphic fallacy (Shanahan, 2024).

### Three-Layer Model of AI Metacognition

| Layer | Mechanism | FINAL Bench |
|-------|-----------|-------------|
| **L1** Surface self-reflection | Linguistic expressions ("I'm not certain...") | **Measured via MA rubric** |
| **L2** Embedding-space uncertainty | Logit entropy, OOD detection | Not measured (planned) |
| **L3** Behavioral self-correction | Error detection → reasoning revision | **Measured via ER rubric** |

### TICOS Framework

**T**ransparency · **I**ntrospection · **C**alibration · **O**bjectivity · **S**elf-correction

Each task is classified by a required/optional combination of these five metacognitive elements.

---

## Design Principles

### 1. Trap-Embedded Design
All 100 tasks contain hidden cognitive traps grounded in established cognitive biases — availability heuristic, confirmation bias, anchoring, base-rate neglect, and more. The benchmark measures the model's ability to "fall into and climb out of" these traps.

### 2. Declarative-Procedural Separation
MA and ER are scored as independent rubrics, enabling quantification of the gap between "the ability to say I don't know" and "the ability to actually fix it." No prior benchmark supports this distinction.

### 3. Comparative Condition Design
Baseline (single call) and MetaCog (self-correction scaffold) conditions isolate the causal effect of functional metacognition, following placebo-controlled clinical trial logic.

### 4. Anti-Contamination Design
All tasks were originally designed for FINAL Bench. They are not variants of existing benchmark problems and cannot be found in search engines or training data.

---

## Paper

**FINAL Bench: Measuring Functional Metacognitive Reasoning in Large Language Models**

Taebong Kim, Minsik Kim, Sunyoung Choi, Jaewon Jang

*Under review at a leading international AI venue.*

---

## Citation

```bibtex
@dataset{final_bench_2026,
  title={FINAL Bench: Measuring Functional Metacognitive Reasoning in Large Language Models},
  author={Kim, Taebong and Kim, Minsik and Choi, Sunyoung and Jang, Jaewon},
  year={2026},
  version={3.0},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/VIDRAFT/FINAL-Bench}}
}
```

---

## License

This dataset is distributed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

- Academic and commercial use permitted
- Modification and redistribution permitted
- Attribution required

---

## Contact

- **Corresponding Author**: Taebong Kim (arxivgpt@gmail.com)
- **Affiliations**: VIDRAFT / Ginigen AI, Seoul, South Korea

---

## Acknowledgments

This benchmark is grounded in metacognition theory from cognitive psychology (Flavell, 1979; Nelson & Narens, 1990) and recent LLM self-correction research (DeepSeek-R1, Self-Correction Bench, ReMA). We thank all model providers whose systems were evaluated.