MiriUll committed on
Commit b569758 · verified · 1 Parent(s): bb311cd

update citation

Files changed (1): README.md (+18 −8)
README.md CHANGED
````diff
@@ -134,16 +134,26 @@ The CANNOT dataset is released under [CC BY-SA
 </a>
 
 ### Citation
-Please cite our [INLG 2023 paper](https://arxiv.org/abs/2307.13989), if you use our dataset.
+Please cite our [INLG 2023 paper](https://aclanthology.org/2023.inlg-main.12/), if you use our dataset.
 **BibTeX:**
 ```bibtex
-@misc{anschütz2023correct,
-      title={This is not correct! Negation-aware Evaluation of Language Generation Systems},
-      author={Miriam Anschütz and Diego Miguel Lozano and Georg Groh},
-      year={2023},
-      eprint={2307.13989},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL}
+@inproceedings{anschutz-etal-2023-correct,
+    title = "This is not correct! Negation-aware Evaluation of Language Generation Systems",
+    author = {Ansch{\"u}tz, Miriam and
+      Miguel Lozano, Diego and
+      Groh, Georg},
+    editor = "Keet, C. Maria and
+      Lee, Hung-Yi and
+      Zarrie{\ss}, Sina",
+    booktitle = "Proceedings of the 16th International Natural Language Generation Conference",
+    month = sep,
+    year = "2023",
+    address = "Prague, Czechia",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2023.inlg-main.12/",
+    doi = "10.18653/v1/2023.inlg-main.12",
+    pages = "163--175",
+    abstract = "Large language models underestimate the impact of negations on how much they change the meaning of a sentence. Therefore, learned evaluation metrics based on these models are insensitive to negations. In this paper, we propose NegBLEURT, a negation-aware version of the BLEURT evaluation metric. For that, we designed a rule-based sentence negation tool and used it to create the CANNOT negation evaluation dataset. Based on this dataset, we fine-tuned a sentence transformer and an evaluation metric to improve their negation sensitivity. Evaluating these models on existing benchmarks shows that our fine-tuned models outperform existing metrics on the negated sentences by far while preserving their base models' performances on other perturbations."
 }
 ```
 
````