---
title: README
emoji: ๐
colorFrom: red
colorTo: red
sdk: static
pinned: false
---
This repository contains the best-performing sexism and hate speech detection models from the following paper:

*Sen, I., Assenmacher, D., Samory, M., Augenstein, I., Aalst, W.V., & Wagner, C. (2023). [People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection.](https://arxiv.org/abs/2311.01270) arXiv, abs/2311.01270. To appear at EMNLP'23.*