---
language:
- en
license: cc-by-sa-3.0
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- text-classification
- summarization
tags:
- biomedical
- health
- NLP
- summarization
- LLM
- factuality
---
PlainFact is a high-quality, human-annotated dataset with fine-grained explanation (i.e., added information) annotations, designed for Plain Language Summarization tasks and released alongside the [PlainQAFact](https://github.com/zhiwenyou103/PlainQAFact) factuality evaluation framework. It is collected from the [Cochrane database](https://www.cochranelibrary.com/), sampled from the CELLS dataset ([Guo et al., 2024](https://doi.org/10.1016/j.jbi.2023.104580)).

PlainFact is a sentence-level benchmark that splits the summaries into sentences with fine-grained explanation annotations. In total, it contains 200 plain language summary-abstract pairs (2,740 sentences).

In addition to the factual plain language sentences, we also generate a contrasting non-factual example for each plain language sentence. These contrasting examples are perturbed using GPT-4o, following the faithfulness perturbation criteria introduced in APPLS ([Guo et al., 2024](https://aclanthology.org/2024.emnlp-main.519/)).
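
As an illustrative sketch (not part of the official release), these factual/non-factual pairs can be flattened into binary-labeled examples, e.g., to check whether a factuality metric separates factual from perturbed sentences. The column names follow the schema documented below; the `train` split name is an assumption, so check the loaded `DatasetDict` for the actual split names:

```python
from datasets import load_dataset

# Illustrative sketch: flatten each factual / perturbed sentence pair into
# two binary-labeled examples (1 = factual, 0 = non-factual). Column names
# follow the schema below; the "train" split name is an assumption.
plainfact = load_dataset("uzw/PlainFact")

examples = []
for row in plainfact["train"]:
    examples.append({"text": row["Target_Sentence_factual"], "label": 1})
    examples.append({"text": row["Target_Sentence_non_factual"], "label": 0})

print(f"{len(examples)} labeled sentences")
```
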
> Currently, we have only released the annotations for **Explanation** sentences. We will release the full version of PlainFact (including Category and Relation information) soon. Stay tuned!
Here are explanations of the column headings:
- **Target_Sentence_factual**: The fully factual plain language sentence.
- **Target_Sentence_non_factual**: The perturbed (non-factual) plain language sentence.
- **External**: Whether the sentence includes information that is not explicitly present in the scientific abstract (yes: explanation, no: simplification).
- **Original_Abstract**: The scientific abstract corresponding to each sentence/summary.
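
For instance, you can separate explanations from simplifications by filtering on the `External` column. A minimal sketch, assuming the values are the lowercase strings `"yes"`/`"no"` and that the data lives in a `train` split:

```python
from datasets import load_dataset

# Minimal sketch: split the benchmark into explanation sentences (added
# information) and pure simplifications using the "External" column.
# Assumptions: lowercase "yes"/"no" values and a "train" split.
plainfact = load_dataset("uzw/PlainFact")
train = plainfact["train"]

explanations = train.filter(lambda row: row["External"] == "yes")
simplifications = train.filter(lambda row: row["External"] == "no")
print(len(explanations), "explanations,", len(simplifications), "simplifications")
```
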
You can load our dataset as follows:

```python
from datasets import load_dataset

plainfact = load_dataset("uzw/PlainFact")
```
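
You can then inspect the available splits and one annotated row (field names as described above; the `train` split name is an assumption):

```python
# Inspect the splits and one annotated example (continues from the snippet above).
print(plainfact)  # shows split names, column names, and row counts

sample = plainfact["train"][0]  # "train" split name is an assumption
print(sample["Target_Sentence_factual"])
print(sample["External"])  # "yes" = explanation, "no" = simplification
```
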
For detailed information about the dataset or the factuality evaluation framework, please refer to our [GitHub repo](https://github.com/zhiwenyou103/PlainQAFact) and our paper at https://huggingface.co/papers/2503.08890.
## Citation

If you use data from PlainFact or PlainFact-summary, please cite with the following BibTeX entry:
```bibtex
@misc{you2025plainqafactautomaticfactualityevaluation,
      title={PlainQAFact: Retrieval-augmented Factual Consistency Evaluation Metric for Biomedical Plain Language Summarization},
      author={Zhiwen You and Yue Guo},
      year={2025},
      eprint={2503.08890},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.08890},
}
```