---
language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- text-classification
dataset_info:
  features:
  - name: title
    dtype: string
  - name: paper_category
    dtype: string
  - name: error_category
    dtype: string
  - name: error_location
    dtype: string
  - name: error_severity
    dtype: string
  - name: error_annotation
    dtype: string
  splits:
  - name: train
    num_bytes: 35801
    num_examples: 91
  download_size: 22781
  dataset_size: 35801
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# SPOT-MetaData

> Metadata & Annotations for **Scientific Paper ErrOr DeTection** (SPOT)
>
> *SPOT contains 83 papers and 91 human-validated errors to test academic verification capabilities.*

This dataset is introduced in the paper [When AI Co-Scientists Fail: SPOT-a Benchmark for Automated Verification of Scientific Research](https://huggingface.co/papers/2505.11855).
## 📖 Overview

SPOT-MetaData contains all of the **annotations** for the SPOT benchmark; **no** paper PDFs or parsed content are included here. This lightweight repo is intended for anyone who needs to work with the ground-truth error labels, categories, locations, and severity ratings.

Parsed paper contents are available at [amphora/SPOT](https://huggingface.co/datasets/amphora/SPOT).
For the code, see [guijinSON/SPOT](https://github.com/guijinSON/SPOT).
Project page:
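
The annotations can be loaded directly with the 🤗 `datasets` library. A minimal sketch, assuming this repo lives under the same `amphora` namespace as the parsed contents (substitute the actual Hub repo id if it differs):

```python
from datasets import load_dataset

# Repo id assumed to mirror the parsed-content repo ("amphora/SPOT");
# replace with the actual Hub path of this metadata repo if needed.
ds = load_dataset("amphora/SPOT-MetaData", split="train")

print(ds)     # 91 rows, six string features (title, categories, location, ...)
print(ds[0])  # one annotation record
```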
> **Benchmark at a glance**
> - **83** published manuscripts
> - **91** confirmed errors (errata or retractions)
> - **10** scientific domains (Math, Physics, Biology, …)
> - **6** error types (Equation/Proof, Fig-duplication, Data inconsistency, …)
> - Average paper: ~12,000 tokens and 18 figures
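
As a quick sanity check against these numbers, the per-domain and per-type error counts can be tallied from the `paper_category` and `error_category` columns (reusing the `ds` object from the loading sketch above):

```python
from collections import Counter

# Tally the 91 confirmed errors by scientific domain and by error type.
domain_counts = Counter(ds["paper_category"])
type_counts = Counter(ds["error_category"])

print(len(domain_counts), "domains:", domain_counts.most_common())
print(len(type_counts), "error types:", type_counts.most_common())
```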
## 📜 License

This repository (metadata & annotations) is released under the CC-BY-4.0 license.