SeaWolf-AI posted an update 4 days ago
FINAL Bench Released: The Real Bottleneck to AGI Is Self-Correction

We release FINAL Bench, the first benchmark for measuring functional metacognition in LLMs — the ability to detect and correct one's own reasoning errors. Every existing benchmark measures final-answer accuracy. None measures whether AI knows it is wrong.

Dataset: FINAL-Bench/Metacognitive | 100 Tasks | 15 Domains | 8 TICOS Types | Apache 2.0

Leaderboard: FINAL-Bench/Leaderboard

Article: https://huggingface.co/blog/FINAL-Bench/metacognitive

Core Innovation

Our 5-axis rubric separates what no prior benchmark could: MA (Metacognitive Accuracy), the ability to say "I might be wrong", from ER (Error Recovery), the ability to actually fix it. This maps directly to the monitoring-control model of Nelson & Narens (1990) in cognitive psychology.
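To make the monitoring/control split concrete, here is a minimal sketch of scoring one task along the two named axes. The binary 0/1 scoring, the function name, and the exact-match comparison are illustrative assumptions, not the published FINAL Bench rubric.

```python
# Illustrative two-stage check for a single task (not the official rubric).
# MA (monitoring): did the model flag that its answer might be wrong, and was that flag calibrated?
# ER (control): when the first answer was wrong, did the revision actually fix it?
def score_task(first_answer: str, flagged_uncertain: bool,
               revised_answer: str, gold_answer: str) -> dict:
    first_wrong = first_answer != gold_answer
    ma = 1.0 if flagged_uncertain == first_wrong else 0.0
    er = 1.0 if first_wrong and revised_answer == gold_answer else 0.0
    return {"MA": ma, "ER": er}

print(score_task("42", True, "41", "41"))  # {'MA': 1.0, 'ER': 1.0}: flags the error and fixes it
print(score_task("42", True, "42", "41"))  # {'MA': 1.0, 'ER': 0.0}: sounds humble, fails to self-correct
```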

Three Findings Across 9 SOTA Models

We evaluated GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, DeepSeek-V3.2, Kimi K2.5, and others across 100 expert-level tasks:

1. ER Dominance. 94.8% of MetaCog gain comes from Error Recovery alone. The bottleneck to AGI is not knowledge or reasoning — it is self-correction.

2. Declarative-Procedural Gap. All 9 models can verbalize uncertainty (MA = 0.694) but cannot act on it (ER = 0.302). They sound humble but fail to self-correct — the most dangerous AI safety profile.

3. Difficulty Effect. Harder tasks benefit dramatically more from metacognition (Pearson r = -0.777, p < 0.001).
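As a sanity-check sketch for Finding 3, this is how such a correlation can be computed with SciPy. The pairing of per-task baseline accuracy against metacognition gain, and all of the numbers, are invented for illustration; only the reported r = -0.777 comes from the benchmark.

```python
# Illustrative only: all numbers are invented, not FINAL Bench data.
# A negative correlation between baseline accuracy and metacognition gain
# means harder tasks (lower baseline) benefit more, matching the sign of r = -0.777.
from scipy.stats import pearsonr

baseline_accuracy = [0.85, 0.72, 0.60, 0.45, 0.30, 0.15]  # per-task, hypothetical
metacog_gain = [0.02, 0.05, 0.09, 0.12, 0.18, 0.22]       # score lift, hypothetical

r, p = pearsonr(baseline_accuracy, metacog_gain)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```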

```python
from datasets import load_dataset

# Load the FINAL Bench metacognition tasks from the Hugging Face Hub
dataset = load_dataset("FINAL-Bench/Metacognitive", split="train")
```
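To see what each record contains without assuming a particular schema, you can print the column names and the first example:

```python
# Inspect the schema and one task; field names depend on the dataset,
# so print whatever columns are present rather than assuming specific ones.
print(dataset.column_names)
print(dataset[0])
```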


Paper: FINAL Bench: Measuring Functional Metacognitive Reasoning in LLMs

FINAL Bench is the first tool that distinguishes what AI truly knows from what it merely pretends to know.

Good

Glad to see this benchmark, good work

Could you check whether SLMs (models under 80B, 48B, 36B, 20B parameters, etc.) also have this metacognitive ability?


Yes, absolutely.

Even smaller language models, under 80B, 48B, 36B, or 20B parameters, can show metacognitive ability, usually in a weaker form, and FINAL Bench can still measure it reliably.

Typical pattern for SLMs:
- MA: they can often express uncertainty or notice they might be wrong.
- ER: actually revising and improving the answer is harder.

So with FINAL Bench, you can quantify:
1. whether the model has metacognitive signals at all,
2. how strong they are,
3. whether it only says "I might be wrong" but fails to fix the answer (high MA, low ER),
4. or whether it can genuinely self-correct (ER improves, especially with scaffolding); see the sketch below.
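A rough sketch of how those four cases could be read off per-task scores is below; the 0.5 thresholds, the score format, and the labels are my own assumptions, not part of FINAL Bench.

```python
# Rough profile classification from per-task MA/ER scores in [0, 1].
# Thresholds and labels are illustrative assumptions, not FINAL Bench definitions.
def profile(ma_scores: list[float], er_scores: list[float]) -> str:
    ma = sum(ma_scores) / len(ma_scores)
    er = sum(er_scores) / len(er_scores)
    if ma < 0.5 and er < 0.5:
        return f"little metacognitive signal (MA={ma:.2f}, ER={er:.2f})"
    if ma >= 0.5 and er < 0.5:
        return f"says 'I might be wrong' but cannot fix it (MA={ma:.2f}, ER={er:.2f})"
    return f"genuine self-correction (MA={ma:.2f}, ER={er:.2f})"

# Hypothetical small-model scores over a handful of tasks
print(profile([0.8, 0.7, 0.6, 0.9], [0.2, 0.3, 0.1, 0.4]))
```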