language:
- fra
Dataset origin: https://www.kaggle.com/datasets/isakbiderre/french-gec-dataset
Context
Wikipedia is a free encyclopedia where anyone can contribute by modifying, deleting, or adding text to articles. Because of this, new text is created every day and, most importantly, new corrections are made to preexisting sentences. The idea is to find the corrections made to these sentences and create a dataset of X,y sentence pairs.
The data
This dataset was created from Wikipedia edit histories, using the Wikipedia dumps available here: https://dumps.wikimedia.org/frwiki/
The dataset is composed of 45 million X,y sentence pairs extracted from almost the entire French Wikipedia. Each CSV file has five columns: X (the source sentence), y (the target sentence), title (the title of the article the sentence came from), timestamps (the two dates when the source and target sentences were created), and comments (the comment of the edit, if specified).
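To make the schema concrete, here is a minimal sketch of reading a pair from a CSV with these five columns. The two-row sample below is hypothetical (invented for illustration, not taken from the dataset), and the real files' delimiter and quoting conventions may differ.

```python
import csv
import io

# Hypothetical in-memory sample mimicking the dataset's five-column schema.
sample = io.StringIO(
    'X,y,title,timestamps,comments\n'
    '"Il manges une pomme.","Il mange une pomme.","Pomme",'
    '"2020-01-01|2020-01-02","typo fix"\n'
)

rows = list(csv.DictReader(sample))
# Each row gives one X,y training pair plus its metadata.
pair = (rows[0]["X"], rows[0]["y"])
```

In practice you would pass an open file handle for one of the dataset's CSV files to `csv.DictReader` instead of the in-memory sample.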
There is one major issue with this dataset (if it is used for the GEC task): a large portion of the extracted sentence pairs falls outside the scope of GEC (grammar, typos, and syntax). Many corrections made on Wikipedia are reformulations, condensations, or clarifications, so a model trained on this dataset learns to reformulate and delete parts of the sentences it was supposed to correct. To solve this problem, I suggest training a classification model via transfer learning to filter out the "bad" sentence pairs.
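The source does not specify how such a filter would work, but a cheap first pass (before any learned classifier) could exploit the observation that genuine GEC edits change only a few characters, while reformulations rewrite much of the sentence. Below is a minimal heuristic sketch using character-level similarity; the 0.85 threshold is an illustrative assumption, not a tuned value, and the example sentences are invented.

```python
from difflib import SequenceMatcher

def looks_like_gec_pair(src: str, tgt: str, min_ratio: float = 0.85) -> bool:
    """Heuristic pre-filter: typo/grammar edits leave most of the sentence
    intact, so high character-level similarity suggests the pair is in scope
    for GEC, while rewrites and deletions score lower."""
    return SequenceMatcher(None, src, tgt).ratio() >= min_ratio

# Minor typo fix: nearly identical strings, so the pair is kept.
keep = looks_like_gec_pair("Il manges une pomme.", "Il mange une pomme.")

# Heavy reformulation: low similarity, so the pair is dropped.
drop = looks_like_gec_pair(
    "Paris est la capitale de la France.",
    "La France a pour capitale la ville de Paris, fondée il y a longtemps.",
)
```

A similarity filter of this kind would only remove the most obvious rewrites; the transfer-learning classifier proposed above would still be needed to catch reformulations that happen to share many characters with the source.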