---

PlainFact-summary is a high-quality, human-annotated dataset for plain language summarization tasks, released alongside the [PlainQAFact](https://github.com/zhiwenyou103/PlainQAFact) factuality evaluation framework described in [PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation](https://huggingface.co/papers/2503.08890). The data are drawn from the [Cochrane database](https://www.cochranelibrary.com/) and sampled from the CELLS dataset ([Guo et al., 2024](https://doi.org/10.1016/j.jbi.2023.104580)).
In addition to the factual plain language summaries, we generate a contrasting non-factual example for each summary. These examples are produced by perturbing the original summaries with GPT-4o, following the faithfulness perturbation criteria introduced in APPLS ([Guo et al., 2024](https://aclanthology.org/2024.emnlp-main.519/)).
We also provide a sentence-level version, [PlainFact](https://huggingface.co/datasets/uzw/PlainFact), which splits the summaries into sentences with fine-grained explanation annotations. In total, the dataset contains 200 plain language summary-abstract pairs.
Here are explanations of the column headings:
- **Factual**: "yes" means the plain language summary is factual; "no" means it is non-factual after the faithfulness perturbation.
- **Target_Sentence**: The plain language summary.
- **Original_Abstract**: The scientific abstract corresponding to each sentence/summary.
> Note: the numbers of factual and non-factual plain language summaries are equal (200 each).
You can load our dataset as follows:
```python
from datasets import load_dataset

dataset = load_dataset("uzw/PlainFact-summary")
```