Update task categories for PlainFact dataset

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -5,6 +5,8 @@ license: cc-by-sa-3.0
 size_categories:
 - 1K<n<10K
 task_categories:
+- question-answering
+- text-classification
 - summarization
 tags:
 - biomedical
@@ -15,7 +17,7 @@ tags:
 - factuality
 ---
 
-PlainFact is a high-quality human-annotated dataset with fine-grained explanation (i.e., added information) annotations designed for Plain Language Summarization tasks, along with [PlainQAFact](https://github.com/zhiwenyou103/PlainQAFact) factuality evaluation framework. It is collected from the [Cochrane database](https://www.cochranelibrary.com/) sampled from CELLS dataset ([Guo et al., 2024](https://doi.org/10.1016/j.jbi.2023.104580)).
+PlainFact is a high-quality human-annotated dataset with fine-grained explanation (i.e., added information) annotations designed for Plain Language Summarization tasks, along with [PlainQAFact](https://github.com/zhiwenyou103/PlainQAFact) factuality evaluation framework. It is collected from the [Cochrane database](https://www.cochranelibrary.com/) sampled from CELLS dataset ([Guo et al., 2024](https://doi.org/10.1016/j.jbi.2023.104580)).
 PlainFact is a sentence-level benchmark that splits the summaries into sentences with fine-grained explanation annotations. In total, we have 200 plain language summary-abstract pairs (2,740 sentences).
 In addition to all factual plain language sentences, we also generate contrasting non-factual examples for each plain language sentence. These contrasting examples are perturbed using GPT-4o, following the perturbation criteria for faithfulness introduced in APPLS ([Guo et al., 2024](https://aclanthology.org/2024.emnlp-main.519/)).
 
@@ -41,12 +43,12 @@ Citation
 If you use data from PlainFact or PlainFact-summary, please cite with the following BibTex entry:
 ```
 @misc{you2025plainqafactautomaticfactualityevaluation,
-      title={PlainQAFact: Retrieval-augmented Factual Consistency Evaluation Metric for Biomedical Plain Language Summarization},
+      title={PlainQAFact: Retrieval-augmented Factual Consistency Evaluation Metric for Biomedical Plain Language Summarization},
       author={Zhiwen You and Yue Guo},
       year={2025},
       eprint={2503.08890},
       archivePrefix={arXiv},
       primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2503.08890},
+      url={https://arxiv.org/abs/2503.08890},
 }
 ```
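The added `text-classification` category reflects the contrasting factual/non-factual sentence pairs the card describes. A minimal sketch of turning such pairs into binary factuality-classification examples — note the field names `sentence` and `perturbed` are illustrative assumptions, not the dataset's actual column schema:

```python
# Sketch: build binary classification examples from PlainFact-style pairs of
# factual sentences and their GPT-4o-perturbed non-factual counterparts.
# The "sentence"/"perturbed" keys and the toy records below are hypothetical,
# standing in for the dataset's real fields.

records = [
    {"sentence": "The review included 12 trials.",
     "perturbed": "The review included 21 trials."},
    {"sentence": "The drug reduced reported pain.",
     "perturbed": "The drug increased reported pain."},
]

examples = []
for rec in records:
    examples.append({"text": rec["sentence"], "label": 1})   # factual
    examples.append({"text": rec["perturbed"], "label": 0})  # non-factual

print(len(examples))  # one factual and one non-factual example per pair
```

Pairing each factual sentence with its perturbed counterpart keeps the classes balanced by construction, which is the usual motivation for generating contrastive examples.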