---
license: mit
task_categories:
- text-generation
- feature-extraction
language:
- mg
---
## Overview
This dataset consists of clean, structured **sentences** extracted via Optical Character Recognition (OCR) from approximately **1GB of Malagasy thesis documents**. These documents were collected based on educational, cultural, and linguistic themes.
The dataset is saved in **CSV format**, and is particularly useful for NLP tasks involving **sentence-level modeling** in Malagasy — a low-resource language.
## Dataset Details
- **Language**: Malagasy
- **Source**: OCR'd academic thesis documents in PDF form
- **Download URL**: [Université d’Antananarivo Thesis Library](http://www.biblio.univ-antananarivo.mg/theses2/)
- **Collection Keywords**: `sekoly`, `boky`, `fampianarana`, `fiangonana`, `fanabeazana`, `tontolo`, `gazety`, `asa`, `tononkalo`, `faritra`, `teny`, `fiteny`, `soratra`, `poeta`, `tantara`, `literatiora`, `fomba`
- **Format**: CSV
- **Column(s)**: `text`
- **Granularity**: Each row contains a **single sentence**.
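Given the single-column layout above, the CSV can be streamed with the standard library alone. A minimal sketch (the filename `malagasy_sentences.csv` is a hypothetical placeholder, not the published file name):

```python
import csv
from typing import Iterator

def iter_sentences(path: str) -> Iterator[str]:
    """Yield one Malagasy sentence per row from the single-column CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # header row: "text"
            yield row["text"]
```

This avoids loading the whole file into memory, which is convenient when tokenizing or filtering the corpus line by line.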
## Preprocessing Pipeline
The following steps were used to clean and normalize the raw OCR text:
1. **Unicode normalization** using NFKC to standardize characters.
2. **URL removal** to eliminate web links from scanned content.
3. **Quote standardization**, converting straight quotes to typographic quotes.
4. **Non-alphanumeric character removal**, excluding allowed punctuation.
5. **Punctuation spacing**, ensuring correct spacing after commas, periods, etc.
6. **Removal of structured markers** such as:
- Numbered headings (`1.`, `1.1.1`, etc.)
- Lettered sections (`a.`, `b-1`, etc.)
- Roman numeral references (`IV-2`, etc.)
7. **Consecutive punctuation cleanup** to reduce noise from OCR errors.
8. **Paragraph structure fixes**:
- Merging broken paragraphs that were split across lines or pages.
- Removing paragraphs shorter than 10 characters.
9. **Sentence segmentation** to split structured paragraphs into **individual sentences**.
10. **Whitespace normalization** to remove extra spaces and line breaks.
11. **Deduplication and shuffling** of the resulting sentences.
These steps were applied **iteratively** to produce high-quality, standardized sentence-level data.
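Several of the steps above can be sketched with the Python standard library. This is a simplified illustration, not the exact pipeline: the real regexes and their ordering are not published, and the patterns below are assumptions chosen for clarity.

```python
import re
import unicodedata

def clean_text(text: str) -> str:
    # Step 1: Unicode normalization (NFKC) to standardize characters
    text = unicodedata.normalize("NFKC", text)
    # Step 2: URL removal
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)
    # Step 6 (partial): strip numbered headings such as "1." or "1.1.1"
    text = re.sub(r"^\s*\d+(\.\d+)*\.?\s+", "", text)
    # Step 7: collapse consecutive punctuation left by OCR errors
    text = re.sub(r"([,.;:!?]){2,}", r"\1", text)
    # Step 5: ensure a space after commas, periods, etc.
    text = re.sub(r"([,.;:!?])(?=\S)", r"\1 ", text)
    # Step 10: whitespace normalization
    return re.sub(r"\s+", " ", text).strip()
```

The order here differs slightly from the numbered list (punctuation cleanup runs before punctuation spacing) so that doubled punctuation is collapsed before spaces are inserted between the marks.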
## Potential Applications
This dataset is well-suited for:
- **Sentence-level language modeling** and generation in Malagasy
- **Fine-tuning multilingual NLP models** on Malagasy
## Limitations
- Some sentences may contain **French words or phrases**, as they are sometimes used in citations or quoted material within the thesis documents.
- OCR errors may still be present in some complex layouts or highly degraded scans.
## Usage
To load this dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub
dataset = load_dataset('Lo-Renz-O/malagasy-sentence')

# Inspect the first sentence of the train split
print(dataset['train'][0])
```
## Contribution
We welcome contributions to improve this dataset! If you have suggestions or additional Malagasy text sources, feel free to open a discussion or submit data on Hugging Face.