This is the corrected test set for the RotoWire dataset, released as part of the [Map&Make: Schema Guided Text to Table Generation](https://aclanthology.org/2025.acl-long.1460/) paper (ACL 2025). The original RotoWire dataset contained hallucination errors where the ground truth tables included incorrect or fabricated statistics. This corrected version provides a cleaner benchmark for text-to-table generation tasks.

## Motivation: Why We Need This Corrected Dataset

### Data Quality Issues in Existing Datasets

The RotoWire dataset has been widely used for structured data generation tasks, but significant data quality issues have emerged as the dataset was repurposed for different tasks:

**1. Original RotoWire Dataset (Wiseman et al., 2017):**

The original RotoWire dataset was created for **table-to-text** generation: generating game summaries from structured statistics tables. The ground truth tables in this dataset contained basketball game statistics for teams and players.

**2. Repurposing for Text-to-Table (Wu et al., 2022):**

[Wu et al. (2022)](https://aclanthology.org/2022.acl-long.180.pdf) pioneered the reverse task by repurposing the RotoWire dataset for **text-to-table** generation: extracting structured tables from game summaries. However, this repurposed dataset contained substantial errors in the ground truth tables, including:

- Hallucinated statistics not supported by the text
- Incorrect player or team information
- Fabricated numerical values
- Missing information that should have been present

**3. Issues in Strucbench's Correction Attempt (Tang et al., 2024):**

[Strucbench (2024)](https://aclanthology.org/2024.naacl-short.2/) built upon Wu et al.'s work and attempted to correct these issues. However, our comprehensive analysis revealed that the corrections were incomplete and, in some cases, introduced new errors rather than fixing existing ones.

### Quantitative Analysis of Errors

Our comprehensive error analysis (shown in Table 1 of our paper) reveals the extent of contamination:

**Comparing Original to Strucbench:**

- **Team tables**: 1,219 hallucinated cells + 1,271 missing-information cells
- **Player tables**: 1,390 hallucinated cells + 1,270 missing-information cells

**Comparing Original to Our Corrected Version:**

- **Team tables**: 613 hallucinated cells + 1,137 missing-information cells were fixed
- **Player tables**: 7,310 hallucinated cells + 1,752 missing-information cells were fixed

**Comparing Strucbench to Our Corrected Version:**

- **Team tables**: 721 hallucinated cells + 1,247 missing-information cells remained or were introduced
- **Player tables**: 8,104 hallucinated cells + 2,666 missing-information cells remained or were introduced

These numbers demonstrate that even the Strucbench correction attempt left substantial errors and in some cases introduced new ones. Our corrected version addresses these issues through careful manual verification and correction.
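The per-comparison counts above can be totalled to get a rough sense of scale. A minimal sketch (the figures are copied from the lists above; the dictionary layout and variable names are ours, for illustration only):

```python
# Cell-level error counts reported above, as (hallucinated, missing) pairs.
# The totals below are just illustrative arithmetic over those figures.
errors = {
    "original_vs_strucbench":  {"team": (1219, 1271), "player": (1390, 1270)},
    "original_vs_corrected":   {"team": (613, 1137),  "player": (7310, 1752)},
    "strucbench_vs_corrected": {"team": (721, 1247),  "player": (8104, 2666)},
}

# Total affected cells (hallucinated + missing, team + player) per comparison.
totals = {name: sum(h + m for h, m in tables.values())
          for name, tables in errors.items()}

for name, total in totals.items():
    print(f"{name}: {total} affected cells")
```

Run directly, this prints 5,150, 10,812, and 12,738 affected cells for the three comparisons respectively.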

### Impact on Research

Using contaminated ground truth for evaluation leads to:

- Unreliable model performance metrics
- Unfair comparisons between different approaches
- Models potentially learning from incorrect examples
- Misleading conclusions about model capabilities

This corrected dataset provides researchers with a clean benchmark for evaluating text-to-table generation models.

## Dataset Structure

This dataset is designed for:

- **Information Extraction**: Identify and extract player and team statistics from text
- **Benchmark Evaluation**: Clean test set for evaluating text-to-table models without contaminated ground truth
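For benchmark evaluation, one common way to score text-to-table outputs is exact-match F1 over individual cells. A minimal sketch, assuming tables are flattened into sets of (row header, column header, value) triples; this representation and metric variant are illustrative assumptions, not the paper's official evaluation code:

```python
def cell_f1(pred_cells: set, gold_cells: set) -> float:
    """Exact-match F1 between predicted and gold table cells."""
    if not pred_cells and not gold_cells:
        return 1.0  # both tables empty: perfect score by convention
    tp = len(pred_cells & gold_cells)
    precision = tp / len(pred_cells) if pred_cells else 0.0
    recall = tp / len(gold_cells) if gold_cells else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example with hypothetical cells (not taken from the dataset):
gold = {("Lakers", "PTS", "102"), ("Lakers", "REB", "45")}
pred = {("Lakers", "PTS", "102"), ("Lakers", "REB", "44")}
print(cell_f1(pred, gold))  # one of two cells matches exactly -> 0.5
```

Note that against contaminated gold tables, a metric like this penalizes models for extractions that are actually correct, which is precisely why a clean test set matters.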

## Related Datasets

In addition to this corrected RotoWire test set, our Map&Make paper also evaluates on:

- **Livesum Dataset**: A dataset requiring numerical aggregation for text-to-table generation. Available at: [https://github.com/HKUST-KnowComp/LiveSum/tree/main](https://github.com/HKUST-KnowComp/LiveSum/tree/main)

## Citation

If you use this corrected dataset, please cite:

**Related work on RotoWire dataset:**

Original RotoWire dataset (table-to-text):
```bibtex
@inproceedings{wiseman2017challenges,
    title={Challenges in Data-to-Document Generation},
    author={Wiseman, Sam and Shieber, Stuart and Rush, Alexander},
    booktitle={Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing},
    year={2017}
}
```

Text-to-table repurposing of RotoWire:
```bibtex
@inproceedings{wu-etal-2022-text,
    title = "Text-to-Table: A New Way of Information Extraction",
    author = "Wu, Xueqing and Zhang, Jiacheng and Li, Hang",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    year = "2022",
    pages = "2518--2533",
    url = "https://aclanthology.org/2022.acl-long.180/"
}
```

Strucbench correction attempt:
```bibtex
@inproceedings{tang-etal-2024-struc,
    title = "Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data?",
    author = "Tang, Xiangru and Zong, Yiming and Phang, Jason and Zhao, Yilun and Zhou, Wangchunshu and Cohan, Arman and Gerstein, Mark",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
    year = "2024",
    pages = "12--34",
    url = "https://aclanthology.org/2024.naacl-short.2/"
}
```

## License

MIT License