---
license: apache-2.0
language:
- en
tags:
- gec
- grammar
size_categories:
- 10K<n<100K
configs:
- config_name: gec
data_files:
- split: train
path: train/*
- split: dev
path: dev/*
- split: dev1
path: dev1/*
---
# Grammar Correctable Texts
These are texts sampled from various sources and screened along several dimensions using Google's Gemini 2.0 Flash:
- `is_english`
- `meaning_is_recoverable` (Intended meaning is clear enough that correction won't change it.)
- `coherent_and_on_topic` (Sentences cohere; not random fragments or spam.)
- `not_style_or_dialect_intent` (Oddities are not intentional dialect/poetry/stylized voice.)
- `error_density_not_extreme` (Errors are present but not so dense that the text is unusable.)
- `not_code_or_markup_heavy` (Text is mostly prose rather than code or markup.)
- `complete_sentences_present` (At least some complete sentences appear.)
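The screening step amounts to keeping only texts for which every dimension passes. A minimal sketch, assuming the model's judgments arrive as a dict of boolean flags (the `passes_all_screens` helper and the flag-dict shape are illustrative, not the actual pipeline code):

```python
# Screening dimensions listed above; a text is kept only if all of them pass.
SCREENS = [
    "is_english",
    "meaning_is_recoverable",
    "coherent_and_on_topic",
    "not_style_or_dialect_intent",
    "error_density_not_extreme",
    "not_code_or_markup_heavy",
    "complete_sentences_present",
]

def passes_all_screens(flags: dict) -> bool:
    """Return True only when every screening flag is present and True.

    `flags` stands in for the per-text judgments returned by the
    Gemini 2.0 Flash screening call (assumed shape, not the real API).
    """
    return all(flags.get(name, False) for name in SCREENS)
```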
Additionally, I use the model to estimate the number of errors in each text, then bin the texts into a 2D histogram over `num_tokens` and `estimated_corrections`.
I downsampled over-represented bins to help create a more uniform distribution of lengths and error rates.
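The binning and downsampling step can be sketched as follows. The bin widths, per-bin cap, and sample schema here are assumptions for illustration; only the overall technique (cap each 2D histogram bin to flatten the distribution) comes from the description above:

```python
import random
from collections import defaultdict

def downsample_bins(samples, max_per_bin, token_bin=50, corr_bin=2, seed=0):
    """Bin samples on (num_tokens, estimated_corrections), then cap each bin.

    `samples` is a list of dicts with `num_tokens` and `estimated_corrections`
    keys. Bin widths and `max_per_bin` are illustrative choices, not the
    dataset's actual parameters.
    """
    bins = defaultdict(list)
    for s in samples:
        key = (s["num_tokens"] // token_bin,
               s["estimated_corrections"] // corr_bin)
        bins[key].append(s)

    rng = random.Random(seed)
    kept = []
    for bucket in bins.values():
        # Over-represented bins are randomly subsampled down to the cap.
        if len(bucket) > max_per_bin:
            bucket = rng.sample(bucket, max_per_bin)
        kept.extend(bucket)
    return kept
```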
This dataset has no ground-truth corrections; it targets GRPO training for grammatical error correction (GEC).