---
license: mit
task_categories:
  - text-generation
tags:
  - self-correction
  - llms
  - benchmarking
  - llm-evaluation
language:
  - en
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: type
      dtype: string
    - name: messages_error_injection_in_model
      list:
        - name: content
          dtype: string
        - name: role
          dtype: string
    - name: messages_error_in_user
      list:
        - name: content
          dtype: string
        - name: role
          dtype: string
    - name: correct_answer
      dtype: string
  splits:
    - name: test
      num_bytes: 49803
      num_examples: 286
  download_size: 12603
  dataset_size: 49803
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

This repository contains the dataset for the paper *Self-Correction Bench: Revealing and Addressing the Self-Correction Blind Spot in LLMs*.

The Self-Correction Bench is a systematic framework designed to measure the "Self-Correction Blind Spot" in Large Language Models (LLMs). This phenomenon refers to LLMs failing to correct identical errors in their own outputs, despite being able to identify such errors in user inputs. The dataset facilitates the study of this limitation through controlled error injection at three complexity levels, offering insights into avenues for improving LLM reliability and trustworthiness.

## Dataset Structure

The dataset includes the following fields:

- `id`: Unique identifier for each sample.
- `type`: Category of the error injection (e.g., controlled scenarios of different complexity).
- `messages_error_injection_in_model`: A list of messages (`content` and `role`) representing a conversation in which an error has been deliberately injected into the model's generated output.
- `messages_error_in_user`: A list of messages (`content` and `role`) presenting the identical error in the user's input, used for comparison.
- `correct_answer`: The expected correct response for the scenario, serving as ground truth for evaluation.
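To make the schema concrete, a record has the following shape. Note that the field values below are invented placeholders for illustration, not actual dataset content; only the field names and types follow the schema above:

```python
# Illustrative sketch of one record's shape. All values are hypothetical
# placeholders; the keys and nesting match the dataset schema.
sample = {
    "id": 0,
    "type": "arithmetic",  # hypothetical category name
    "messages_error_injection_in_model": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 5"},  # injected error
    ],
    "messages_error_in_user": [
        {"role": "user", "content": "I think 2 + 2 = 5. Is that right?"},
    ],
    "correct_answer": "4",
}

# Every record carries both conversation variants plus the ground truth.
assert set(sample) == {
    "id", "type", "messages_error_injection_in_model",
    "messages_error_in_user", "correct_answer",
}
```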

## Sample Usage

To load and inspect the dataset, you can use the `datasets` library:

```python
from datasets import load_dataset

# Load the 'test' split of the dataset
dataset = load_dataset("self-correction-bench", split="test")

# Access a sample
sample = dataset[0]
print(f"ID: {sample['id']}")
print(f"Type: {sample['type']}")
print(f"Messages with model error injection: {sample['messages_error_injection_in_model']}")
print(f"Messages with user error: {sample['messages_error_in_user']}")
print(f"Correct Answer: {sample['correct_answer']}")
```

Researchers can use these samples to prompt an LLM and compare its generated output against `correct_answer` to evaluate its self-correction capabilities.
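One simple way to score a model's response against `correct_answer` is a normalized exact match. The helper below is an illustrative sketch of such a scorer, not the benchmark's official metric; it assumes short, unambiguous answers:

```python
import re

def normalize(text: str) -> str:
    """Lowercase and collapse punctuation/whitespace for a lenient comparison."""
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def is_correct(model_output: str, correct_answer: str) -> bool:
    """Exact match after normalization; a deliberately simple scoring rule."""
    return normalize(model_output) == normalize(correct_answer)

# Example: a model reply that matches the ground truth up to formatting.
print(is_correct("  The answer is 4 ", "the answer is 4"))  # True
```

For free-form generations, you would typically first extract the final answer from the model's output before comparing; the normalization step only smooths over casing, punctuation, and spacing differences.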