A dataset containing English speech with grammatical errors, along with the corresponding transcriptions. Utterances are synthesized using a text-to-speech model, whereas the grammatically incorrect texts come from the [C4_200M](https://aclanthology.org/2021.bea-1.4) synthetic dataset.
## Introduction

The Synthesized English Speech with Grammatical Errors (SESGE) dataset was developed to support the [DeMINT](https://github.com/transducens/demint) project. The objective of DeMINT is to build an intelligent tutoring system that helps non-native English speakers improve their language skills by analyzing, and providing feedback on, the transcripts of their online meetings. As part of this, a system able to transcribe spoken English while keeping the original grammatical errors intact was essential. Existing speech-to-text (STT) models such as Whisper tend to correct grammatical errors because of their strong internal language models, which makes them unsuitable for this task. SESGE was therefore created to train a custom STT model that accurately transcribes spoken English with the grammatical errors preserved.
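This motivation can be made concrete with a word-error-rate check: if the learner's erroneous utterance is taken as the reference, every grammatical "fix" a fluent STT model silently applies counts against it. A minimal sketch (the example sentences are illustrative, not drawn from the dataset):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# Treating the learner's erroneous utterance as the reference, a model that
# "corrects" the grammar is penalized for every fix it makes.
reference = "she go to school every days"   # what was actually said
corrected = "she goes to school every day"  # what a fluent STT model outputs
print(f"WER of the corrected output: {wer(reference, corrected):.2f}")  # 2 substitutions / 6 words
```

An error-preserving model that echoes the utterance verbatim would score 0.0 here, which is exactly the behavior SESGE is meant to train.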
## Dataset Creation

Given the absence of a suitable dataset for training an error-preserving STT system, DeMINT fine-tuned a Whisper model with data from two primary sources:

- [COREFL](https://www.peterlang.com/document/1049094)

  The COREFL dataset consists of essays written by non-native English students at various levels of proficiency. While some of these essays have associated audio recordings, the majority do not. To expand the audio data, we used the [StyleTTS2](https://arxiv.org/abs/2306.07691) text-to-speech model to generate synthetic audio for the remaining texts, synthesizing with multiple voices to increase the diversity of the dataset. COREFL also includes audio recorded directly by the students themselves, which introduces natural speech variability and the errors commonly made by L1-Spanish speakers, a key demographic for the DeMINT project.

- [C4_200M](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)

  The C4_200M dataset contains synthetically generated English sentences with grammatical errors, produced using a corruption model. As with COREFL, StyleTTS2 was employed to synthesize audio from these texts, again with a variety of voices. This source primarily contributes varied sentence structures and error types, although with a limited number of distinct voices.

Due to licensing restrictions on the COREFL dataset, only the portion derived from C4_200M is publicly available as part of SESGE: while COREFL data was used during our training, only the C4_200M-based data is included in this dataset.

The training set comprises **28,592** utterances from C4_200M.
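The card does not spell out the example schema, but assuming the standard Hub audio layout (an `audio` column holding the waveform and sampling rate, plus a `text` transcription column — an assumption, not a documented fact), loading and unpacking an example might look like this; the repo id in the `__main__` block is hypothetical:

```python
def unpack(example):
    """Split a Hub-style audio example into (waveform, sampling_rate, transcript).

    Assumes an ``audio`` dict column and a ``text`` column; adjust the field
    names if the actual schema differs.
    """
    audio = example["audio"]
    return audio["array"], audio["sampling_rate"], example["text"].strip()

if __name__ == "__main__":
    from datasets import load_dataset  # third-party: pip install datasets

    ds = load_dataset("Transducens/sesge", split="train")  # hypothetical repo id
    waveform, rate, transcript = unpack(ds[0])
    print(rate, transcript)
```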
## Models

Two models were trained on the SESGE dataset by fine-tuning Whisper, enabling error-preserving STT. These models are available on the Hugging Face Hub:

- [Error-Preserving Whisper Model](https://huggingface.co/Transducens/error-preserving-whisper)
- [Error-Preserving Whisper Distilled Model](https://huggingface.co/Transducens/error-preserving-whisper-distilled)

Both models have been optimized to transcribe spoken English while retaining grammatical errors, making them suitable for language-learning applications where fidelity to the speaker's errors is essential.
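As a sketch of how either model might be used through the standard `transformers` ASR pipeline (the audio filename is a placeholder, and loading the model requires network access to the Hub):

```python
MODEL_ID = "Transducens/error-preserving-whisper"  # or the distilled variant

def load_asr(model_id: str = MODEL_ID):
    """Build an automatic-speech-recognition pipeline for the chosen model."""
    from transformers import pipeline  # third-party: pip install transformers

    return pipeline("automatic-speech-recognition", model=model_id)

def transcribe(asr, audio_path: str) -> str:
    """Return the transcript, grammatical errors and all."""
    return asr(audio_path)["text"].strip()

if __name__ == "__main__":
    asr = load_asr()  # downloads the model from the Hub on first use
    print(transcribe(asr, "learner_utterance.wav"))  # placeholder audio file
```

`transcribe` accepts any callable that returns a `{"text": ...}` dict, so the pipeline can be swapped out in tests or replaced with a locally loaded model.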
## How to Cite

If you use the SESGE dataset, please cite the following paper:

```bibtex
@inproceedings{demint2024,
  author    = {Pérez-Ortiz, Juan Antonio and
               Esplà-Gomis, Miquel and
               Sánchez-Cartagena, Víctor M. and
               Sánchez-Martínez, Felipe and
               Chernysh, Roman and
               Mora-Rodríguez, Gabriel and
               Berezhnoy, Lev},
  title     = {{DeMINT}: Automated Language Debriefing for English Learners via {AI} Chatbot Analysis of Meeting Transcripts},
  booktitle = {Proceedings of the 13th Workshop on NLP for Computer Assisted Language Learning},
  month     = oct,
  year      = {2024},
  url       = {https://aclanthology.org/volumes/2024.nlp4call-1/}
}
```