---
license: mit
task_categories:
- text-generation
tags:
- self-correction
- llms
- benchmarking
- llm-evaluation
language:
- en
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: type
    dtype: string
  - name: messages_error_injection_in_model
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: messages_error_in_user
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: correct_answer
    dtype: string
  splits:
  - name: test
    num_bytes: 49803
    num_examples: 286
  download_size: 12603
  dataset_size: 49803
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

This repository contains the dataset for [Self-Correction Bench: Revealing and Addressing the Self-Correction Blind Spot in LLMs](https://arxiv.org/abs/2507.02778).

The Self-Correction Bench is a systematic framework for measuring the "Self-Correction Blind Spot" in Large Language Models (LLMs): the failure of a model to correct errors in its own outputs even though it readily identifies the same errors when they appear in user inputs. The dataset enables controlled study of this limitation through error injection at three complexity levels, offering insights into how LLM reliability and trustworthiness can be improved.

### Dataset Structure
The dataset includes the following fields:
- `id`: Unique identifier for each sample.
- `type`: Category of the error-injection scenario, corresponding to one of the controlled complexity levels.
- `messages_error_injection_in_model`: A list of messages (content and role) representing a conversation where an error is deliberately injected into the model's generated output.
- `messages_error_in_user`: A list of messages (content and role) presenting an identical error in the user's input, used for comparison.
- `correct_answer`: The expected correct response to the given scenario, serving as a ground truth for evaluation.

### Sample Usage
To load and inspect the dataset, you can use the `datasets` library:

```python
from datasets import load_dataset

# Load the 'test' split of the dataset.
# Note: when loading from the Hugging Face Hub, you may need the full
# repo id including the organization namespace.
dataset = load_dataset("self-correction-bench", split="test")

# Accessing a sample
sample = dataset[0]
print(f"ID: {sample['id']}")
print(f"Type: {sample['type']}")
print(f"Messages with model error injection: {sample['messages_error_injection_in_model']}")
print(f"Messages with user error: {sample['messages_error_in_user']}")
print(f"Correct Answer: {sample['correct_answer']}")

# Researchers can use these samples to prompt an LLM and compare its generated
# output against the 'correct_answer' to evaluate its self-correction capabilities.
```
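To make the comparison concrete, the snippet below sketches a minimal evaluation loop that queries a model under both conditions and checks each output against `correct_answer`. Everything here is illustrative: `query_model` is a hypothetical placeholder for your LLM interface (an echo stub is used so the sketch runs standalone), `is_correct` uses a simple containment check, and the sample dict is fabricated to mirror the field layout described above.

```python
def query_model(messages):
    # Placeholder "model": echoes the last message's content.
    # Replace with a real LLM call (API client or local model).
    return messages[-1]["content"]

def is_correct(output, correct_answer):
    # Simple containment check; real evaluation may need normalization
    # (whitespace, case, answer extraction from longer generations).
    return correct_answer.strip() in output

def evaluate_sample(sample):
    """Return (corrects_own_error, corrects_user_error) for one sample."""
    out_model = query_model(sample["messages_error_injection_in_model"])
    out_user = query_model(sample["messages_error_in_user"])
    return (
        is_correct(out_model, sample["correct_answer"]),
        is_correct(out_user, sample["correct_answer"]),
    )

# Fabricated sample illustrating the expected field layout
sample = {
    "id": 0,
    "type": "example",
    "messages_error_injection_in_model": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 5. Wait, 2 + 2 = 4"},
    ],
    "messages_error_in_user": [
        {"role": "user", "content": "I wrote 2 + 2 = 5. Please correct it."},
        {"role": "assistant", "content": "That is wrong: 2 + 2 = 4"},
    ],
    "correct_answer": "4",
}

own, user = evaluate_sample(sample)
print(own, user)  # True True with the echo placeholder
```

Aggregating these per-sample booleans over the full test split yields the two accuracies whose gap quantifies the blind spot: a model with the blind spot scores high on the user-error condition but low on the model-error condition.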