--- |
|
|
license: mit |
|
|
task_categories: |
|
|
- text-generation |
|
|
language: |
|
|
- mg |
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
--- |
|
|
## Dataset Description |
|
|
|
|
|
This dataset contains news articles, stories, and cultural reports scraped from [Global Voices Malagasy](https://mg.globalvoices.org/). It is intended to support Natural Language Processing (NLP) tasks for the Malagasy language, such as language modeling and text generation. |
|
|
|
|
|
The data is scraped automatically and refreshed once a month to keep the corpus current.
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
The dataset is formatted in JSONL (JSON Lines). Each entry represents a single article. |
|
|
|
|
|
### Data Fields |
|
|
|
|
|
- `url` (string): The original URL of the article. |
|
|
- `title` (string): The title of the article. |
|
|
- `date` (string): The publication date in ISO 8601 format (e.g. `2025-12-15T10:00:00+03:00`).
|
|
- `author` (string): The name of the author or translator. |
|
|
- `content` (string): The main body text of the article (cleaned). |
|
|
|
|
|
### Example |
|
|
|
|
|
```json
{
  "url": "https://mg.globalvoices.org/2025/12/15/example-story",
  "title": "Lohateny momba ny fiarahamonina",
  "date": "2025-12-15T10:00:00+03:00",
  "author": "Rakoto",
  "content": "Ity dia ohatra iray amin'ny lahatsoratra hita ao amin'ny Global Voices..."
}
```
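Since the dataset is JSON Lines, each line can be parsed independently with Python's standard library. A minimal sketch using the example record above, checking for the five fields listed under Data Fields:

```python
import json

# The example record from this card, serialized as a single JSONL line
line = '''{"url": "https://mg.globalvoices.org/2025/12/15/example-story", "title": "Lohateny momba ny fiarahamonina", "date": "2025-12-15T10:00:00+03:00", "author": "Rakoto", "content": "Ity dia ohatra iray amin'ny lahatsoratra hita ao amin'ny Global Voices..."}'''

record = json.loads(line)

# Each entry is expected to carry exactly these five string fields
expected = {"url", "title", "date", "author", "content"}
assert set(record) == expected
assert all(isinstance(record[k], str) for k in expected)
```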
|
|
## Use Cases |
|
|
|
|
|
This dataset is ideal for training models on **formal and standard Malagasy**. Because news articles follow specific grammatical rules and journalistic standards, they provide a clean baseline for NLP. |
|
|
|
|
|
* **Language Modeling (LLM Pre-training):** To teach models the core grammar, syntax, and vocabulary of the Malagasy language in a formal context. |
|
|
* **Named Entity Recognition (NER):** The dataset contains numerous mentions of people, locations, organizations, and dates relevant to Madagascar and the world, useful for training entity extractors. |
|
|
|
|
|
### How to use |
|
|
```python
from datasets import load_dataset

dataset = load_dataset("Lo-Renz-O/GBV-Malagasy")

# Print the first example
print(dataset["train"][0])
```
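Because `date` is an ISO 8601 string, a simple prefix check is enough to slice the corpus by time. The sketch below applies that predicate to a hypothetical two-record mini-corpus so it runs without downloading anything:

```python
# Hypothetical mini-corpus with the same shape as the dataset's records
articles = [
    {"title": "Tantara voalohany", "date": "2024-03-01T09:00:00+03:00"},
    {"title": "Lohateny momba ny fiarahamonina", "date": "2025-12-15T10:00:00+03:00"},
]

# ISO 8601 dates compare lexicographically, so a year prefix
# check selects articles published in 2025
from_2025 = [a for a in articles if a["date"].startswith("2025")]

print([a["title"] for a in from_2025])  # ['Lohateny momba ny fiarahamonina']
```

With the `datasets` library, the equivalent call on the real data is `dataset["train"].filter(lambda a: a["date"].startswith("2025"))`.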
|
|
|
|
|
|