malagasy-sentence
---
license: mit
task_categories:
  - text-generation
  - feature-extraction
language:
  - mg
---

Overview

This dataset consists of clean, structured sentences extracted via Optical Character Recognition (OCR) from approximately 1 GB of Malagasy thesis documents. These documents were collected based on educational, cultural, and linguistic themes. The dataset is provided in CSV format and is particularly useful for NLP tasks involving sentence-level modeling in Malagasy, a low-resource language.

Dataset Details

  • Language: Malagasy
  • Source: OCR'd academic thesis documents in PDF form
  • Download URL: Université d’Antananarivo Thesis Library
  • Collection Keywords: sekoly, boky, fampianarana, fiangonana, fanabeazana, tontolo, gazety, asa, tononkalo, faritra, teny, fiteny, soratra, poeta, tantara, literatiora, fomba
  • Format: CSV
  • Column(s): text
  • Granularity: Each row contains a single sentence.
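The layout above can be sketched in a few lines: a single `text` column with one sentence per row. The sample below builds that shape in memory with placeholder sentences (they are illustrative, not real rows from the dataset):

```python
import csv
import io

# Placeholder rows showing the one-column, one-sentence-per-row layout;
# these example sentences are NOT taken from the dataset itself.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["text"])
writer.writeheader()
writer.writerow({"text": "Ny fianarana no lova tsara indrindra."})
writer.writerow({"text": "Zava-dehibe ny mitahiry ny teny malagasy."})

buffer.seek(0)
rows = list(csv.DictReader(buffer))
print(rows[0]["text"])  # first sentence in the sample
```

Any CSV reader that understands a header row will therefore see exactly one field, `text`, per record.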

Preprocessing Pipeline

The following steps were used to clean and normalize the raw OCR text:

  1. Unicode normalization using NFKC to standardize characters.
  2. URL removal to eliminate web links from scanned content.
  3. Quote standardization, converting straight quotes to typographic quotes.
  4. Non-alphanumeric character removal, excluding allowed punctuation.
  5. Punctuation spacing, ensuring correct spacing after commas, periods, etc.
  6. Removal of structured markers such as:
    • Numbered headings (1., 1.1.1, etc.)
    • Lettered sections (a., b-1, etc.)
    • Roman numeral references (IV-2, etc.)
  7. Consecutive punctuation cleanup to reduce noise from OCR errors.
  8. Paragraph structure fixes:
    • Merging broken paragraphs that were split across lines or pages.
    • Removing paragraphs shorter than 10 characters.
  9. Sentence segmentation to split structured paragraphs into individual sentences.
  10. Whitespace normalization to remove extra spaces and line breaks.
  11. Deduplication and shuffling to remove repeated sentences and randomize their order.

These steps were applied iteratively to produce high-quality, standardized sentence-level data.
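Several of the steps above can be sketched with the Python standard library. The regexes below approximate the described cleaning passes; they are illustrative, not the exact patterns used to build the dataset, and the sample sentences are placeholders:

```python
import random
import re
import unicodedata

def clean_ocr_text(text: str) -> str:
    """Illustrative cleaning pass approximating steps 1, 2, 5, 6, 7, and 10."""
    text = unicodedata.normalize("NFKC", text)          # 1. NFKC normalization
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # 2. URL removal
    # 6. structured markers: numbered headings (1., 1.1.1), lettered
    #    sections (a., b-1), Roman numeral references (IV-2)
    text = re.sub(
        r"^\s*(?:\d+(?:\.\d+)*\.?|[a-z][.\-]\d*\.?|[IVXLC]+-\d+)\s+",
        "", text, flags=re.MULTILINE)
    text = re.sub(r"([.,;:!?])\1+", r"\1", text)        # 7. consecutive punctuation
    text = re.sub(r"\s*([.,;:!?])\s*", r"\1 ", text)    # 5. spacing after punctuation
    return re.sub(r"\s+", " ", text).strip()            # 10. whitespace normalization

def split_sentences(paragraph: str) -> list:
    """Steps 8b and 9: naive segmentation on sentence-final punctuation,
    dropping fragments shorter than 10 characters."""
    parts = re.split(r"(?<=[.!?])\s+", paragraph)
    return [p for p in parts if len(p) >= 10]

# 11. order-preserving deduplication, then shuffling
sentences = split_sentences(clean_ocr_text("1.1  Ny sekoly dia tsara!!  Eny tokoa izany."))
sentences = list(dict.fromkeys(sentences))
random.shuffle(sentences)
```

Running the example strips the `1.1` heading marker, collapses the doubled `!!`, and yields two separate sentences, mirroring how a raw OCR paragraph becomes individual rows in the CSV.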

Potential Applications

This dataset is well-suited for:

  • Sentence-level language modeling and generation in Malagasy
  • Fine-tuning multilingual NLP models on Malagasy

Limitations

  • Some sentences may contain French words or phrases, as they are sometimes used in citations or quoted material within the thesis documents.
  • OCR errors may still be present in some complex layouts or highly degraded scans.

Usage

To load this dataset using the Hugging Face datasets library:

```python
from datasets import load_dataset

dataset = load_dataset('Lo-Renz-O/malagasy-sentence')
print(dataset['train'][0])
```

Contribution

We welcome contributions to improve this dataset! If you have suggestions or additional Malagasy text sources, feel free to open a discussion or submit data on Hugging Face.