|
|
--- |
|
|
language: |
|
|
- en |
|
|
pretty_name: DetailBench |
|
|
tags: |
|
|
- text |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- text-generation |
|
|
- translation |
|
|
size_categories: |
|
|
- n<1K |
|
|
--- |
|
|
|
|
|
## DetailBench |
|
|
|
|
|
This is the dataset for DetailBench, which answers the question: "How good are current LLMs at finding small errors when they are *not* explicitly asked to do so?" |
|
|
|
|
|
### Dataset Structure |
|
|
|
|
|
- `article_title`: Name of the Wikipedia article the data is from |
|
|
- `original_text`: Original excerpt from the given Wikipedia article |
|
|
- `modified_text`: Modified version of the original text with a single error (one changed number) introduced |
|
|
- `original_number`: The original number from the text (used for the LLM grader as context) |
|
|
- `modified_number`: The modified number from the text (used for the LLM grader as context) |
|
|
- `change_position`: The position of the changed number in the text (used for the LLM grader as context) |
|
|
- `target_language`: The language into which the evaluated LLM should translate `modified_text` |
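

The fields above can be illustrated with a hypothetical record and a minimal grading sketch. Note that the record values and the `flags_error` heuristic below are illustrative only; the actual benchmark uses an LLM grader with `original_number`, `modified_number`, and `change_position` as context.

```python
# A hypothetical DetailBench-style record (values are illustrative,
# not an actual row from the dataset).
record = {
    "article_title": "Eiffel Tower",
    "original_text": "The tower is 330 metres tall.",
    "modified_text": "The tower is 380 metres tall.",
    "original_number": "330",
    "modified_number": "380",
    "change_position": 13,   # character offset of the changed number
    "target_language": "German",
}

def flags_error(model_output: str, record: dict) -> bool:
    """Crude string heuristic: did the model call out the changed number?

    A real grader would be an LLM judge; this sketch only checks that the
    output mentions the original number alongside error-related wording.
    """
    lowered = model_output.lower()
    mentions_original = record["original_number"] in model_output
    mentions_issue = any(
        word in lowered for word in ("error", "mistake", "incorrect", "should be")
    )
    return mentions_original and mentions_issue

# A translation that silently keeps the wrong number is not flagged:
silent = "Der Turm ist 380 Meter hoch."
# A translation that calls out the discrepancy is:
vigilant = (
    "Der Turm ist 380 Meter hoch. "
    "(Note: this appears to be an error; the correct height is 330 metres.)"
)
```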
|
|
|
|
|
### Implementation |
|
|
|
|
|
We recommend running this benchmark with the reference implementation provided in [openbench](https://github.com/groq/openbench). |
|
|
Simply run `bench eval detailbench --model <model_name>`. |
|
|
|
|
|
### License |
|
|
|
|
|
Apache 2.0 |