Add task category and link to paper
This PR ensures that the dataset can be found when browsing for text-classification datasets, and links it to the paper at https://huggingface.co/papers/2505.11855.
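The change works by adding `task_categories` to the card's YAML front matter, which is the block the Hub indexes for its browse filters. As a rough illustration only (this toy parser is an assumption for the sketch, not the Hub's actual indexer), list-valued front-matter keys can be pulled out like this:

```python
# Toy example: extract list-valued keys (e.g. task_categories) from the
# YAML front matter of a dataset card. Illustrative only.
README = """---
language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- text-classification
dataset_info:
  features:
  - name: title
---

# SPOT-MetaData
"""

def front_matter_lists(text):
    """Parse top-level `key:` lines followed by `- item` lines."""
    _, _, rest = text.partition("---\n")   # drop everything before the front matter
    fm, _, _ = rest.partition("---")       # keep only the front-matter block
    result, key = {}, None
    for line in fm.splitlines():
        if line.startswith("- ") and key:
            result[key].append(line[2:].strip())
        elif line and not line.startswith((" ", "-")) and line.endswith(":"):
            key = line[:-1]
            result[key] = []
        else:
            key = None                     # nested or scalar entry: skip
    return result

print(front_matter_lists(README)["task_categories"])  # ['text-classification']
```

With `task_categories: [text-classification]` present, the dataset shows up when users filter the Hub for text-classification datasets.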
README.md CHANGED

@@ -1,4 +1,11 @@
 ---
+language:
+- en
+license: cc-by-4.0
+size_categories:
+- n<1K
+task_categories:
+- text-classification
 dataset_info:
   features:
   - name: title
@@ -24,11 +31,6 @@ configs:
   data_files:
   - split: train
     path: data/train-*
-license: cc-by-4.0
-language:
-- en
-size_categories:
-- n<1K
 ---
 
 # SPOT-MetaData
@@ -36,6 +38,8 @@ size_categories:
 > Metadata & Annotations for **Scientific Paper ErrOr DeTection** (SPOT)
 > *SPOT contains 83 papers and 91 human-validated errors to test academic verification capabilities.*
 
+This dataset is introduced in the paper [When AI Co-Scientists Fail: SPOT-a Benchmark for Automated Verification of Scientific Research](https://huggingface.co/papers/2505.11855).
+
 ## 📖 Overview
 
 SPOT-MetaData contains all of the **annotations** for the SPOT benchmark—**no** paper PDFs or parsed content are included here. This lightweight repo is intended for anyone who needs to work with the ground-truth error labels, categories, locations, and severity ratings.
@@ -43,6 +47,8 @@ SPOT-MetaData contains all of the **annotations** for the SPOT benchmark—**no*
 Parse contents are available at: [link](https://huggingface.co/datasets/amphora/SPOT).
 For codes see: [link](https://github.com/guijinSON/SPOT).
 
+Project page:
+
 > **Benchmark at a glance**
 > - **83** published manuscripts
 > - **91** confirmed errors (errata or retractions)