---
language:
  - en
license: cc-by-4.0
size_categories:
  - n<1K
task_categories:
  - summarization
---

PlainFact-summary is a high-quality, human-annotated dataset designed for Plain Language Summarization tasks, released along with the PlainQAFact factuality evaluation framework described in *PlainQAFact: Retrieval-augmented Factual Consistency Evaluation Metric for Biomedical Plain Language Summarization*. It is collected from the Cochrane database, sampled from the CELLS dataset (Guo et al., 2024). In addition to using all factual plain language summaries, we also generate a contrasting non-factual example for each plain language summary. These contrasting examples are perturbed using GPT-4o, following the faithfulness perturbation criteria introduced in APPLS (Guo et al., 2024).

We also provide a sentence-level version, PlainFact, which splits the summaries into sentences with fine-grained explanation annotations. In total, there are 200 plain language summary–abstract pairs.

Here are explanations for the column headings:

  • Factual: "yes": the plain language summary is factual; "no": the plain language summary is non-factual after applying faithfulness perturbation.
  • Target_Sentence: The plain language summary.
  • Original_Abstract: The scientific abstract corresponding to each sentence/summary.

Note: the number of factual and non-factual plain language summaries is the same (200 for each).

You can load our dataset as follows:

```python
from datasets import load_dataset

plainfact = load_dataset("uzw/PlainFact-summary")
```
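Once loaded, a split can be converted to a pandas DataFrame to separate factual summaries from their perturbed counterparts. A minimal sketch below uses toy rows that mirror the column names described above (the real data comes from `load_dataset`; the example sentences are illustrative, not from the dataset):

```python
import pandas as pd

# Toy rows mirroring the PlainFact-summary schema described above.
# In practice, build the frame from the loaded split, e.g.
# df = plainfact["train"].to_pandas()
df = pd.DataFrame({
    "Factual": ["yes", "no"],
    "Target_Sentence": [
        "Exercise may reduce chronic pain.",      # factual summary
        "Exercise eliminates all chronic pain.",  # perturbed, non-factual
    ],
    "Original_Abstract": ["...abstract text...", "...abstract text..."],
})

factual = df[df["Factual"] == "yes"]     # human-annotated factual summaries
perturbed = df[df["Factual"] == "no"]    # GPT-4o-perturbed non-factual summaries
print(len(factual), len(perturbed))
```

On the full dataset, both subsets should contain 200 summaries each, as noted above.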

For detailed information about the dataset or the factuality evaluation framework, please refer to our GitHub repo and paper.

Citation

If you use data from PlainFact or PlainFact-summary, please cite with the following BibTeX entry:

```bibtex
@misc{you2025plainqafactautomaticfactualityevaluation,
      title={PlainQAFact: Retrieval-augmented Factual Consistency Evaluation Metric for Biomedical Plain Language Summarization}, 
      author={Zhiwen You and Yue Guo},
      year={2025},
      eprint={2503.08890},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.08890}, 
}
```