---
pretty_name: SMESum
dataset_summary: The Slovak SME news summarization corpus.
tags:
- news
- summarization
- slovak
- slovak-language
task_categories:
- summarization
task_ids:
- news-articles-summarization
language:
- sk
size_categories:
- 10K<n<100K
license: other
paper: https://aclanthology.org/2020.lrec-1.830
repository: https://github.com/NaiveNeuron/sme-sum
homepage: https://sme.sk
configs:
- config_name: default
  description: Slovak news summarization split extracted from the SME archive.
---
# Dataset Card for SMESum
## Dataset Summary
SMESum is a deterministic reproduction of the Slovak news summarization corpus introduced by [Šuppa and Adamec (2020)](https://aclanthology.org/2020.lrec-1.830/). It contains Slovak news articles sourced from the SME news portal via the Internet Archive. Each example provides the full article (`document`) together with two short abstractive fields (`title`, `introduction`) that can be concatenated to form the gold summary, mirroring the setup described in the paper. The corpus is split into train/validation/test partitions of sizes 64,001/8,001/8,001 using a salted SHA-256 hash of each filename to guarantee reproducibility.
## Supported Tasks and Leaderboards
- `summarization`: Abstractive or extractive summarization of Slovak news articles. The [original paper](https://aclanthology.org/2020.lrec-1.830/) benchmarks several extractive baselines, including TextRank and a multilingual BERT model fine-tuned for extractive summarization.
- `text-classification`: Topic classification of articles into SME section labels (e.g. `sport`), using the `category` field.
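For the classification use above, the `category` field serves directly as the label. A minimal sketch (assuming rows are loaded as dicts matching the dataset schema; `category_distribution` is a hypothetical helper, not part of the dataset tooling):

```python
from collections import Counter

def category_distribution(rows):
    """Count examples per SME section label (the `category` field)."""
    return Counter(row["category"] for row in rows)

# Toy rows mirroring the dataset schema.
rows = [{"category": "sport"}, {"category": "domov"}, {"category": "sport"}]
print(category_distribution(rows))  # Counter({'sport': 2, 'domov': 1})
```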
## Languages
- Slovak (`sk`, ISO 639-3: `slk`). Text retains original punctuation, casing, and diacritics.
## Dataset Structure
### Data Instances
Each row is a JSON object with the following schema:
```json
{
  "title": "<headline of the article>",
  "introduction": "<short abstract shown below the headline>",
  "document": "<full body text of the article>",
  "category": "<SME section label, e.g. 'sport'>",
  "url": "<Wayback Machine URL pointing to the captured article>"
}
```
To reproduce the summarization target described in the paper, concatenate `title` and `introduction`.
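As a sketch, the gold summary can be built by joining the two fields; the exact separator (a single space here) is an assumption, not specified by the card:

```python
def build_summary(example):
    """Join headline and teaser to form the gold summary target."""
    return example["title"].strip() + " " + example["introduction"].strip()

example = {"title": "Titulok článku", "introduction": "Krátky perex pod titulkom."}
print(build_summary(example))
```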
### Data Fields
- `title` (`string`): Article headline authored by SME editors.
- `introduction` (`string`): Teaser/abstract (one or two sentences).
- `document` (`string`): Full article text, as scraped from the archived page.
- `category` (`string`): SME topical section (e.g., `domov`, `svet`, `sport`, `ekonomika`).
- `url` (`string`): Internet Archive URL of the captured article.
### Data Splits
| Split      | Records | Avg. words (document) | Avg. sentences (document) | Avg. words (summary) | Avg. sentences (summary) |
|------------|---------|-----------------------|---------------------------|----------------------|--------------------------|
| train      | 64,001  | 339.09                | 18.08                     | 23.61                | 2.16                     |
| validation | 8,001   | 344.99                | 18.18                     | 23.58                | 2.16                     |
| test       | 8,001   | 332.25                | 17.96                     | 23.46                | 2.15                     |

Statistics replicate Table 2 in [Šuppa and Adamec (2020)](https://aclanthology.org/2020.lrec-1.830/).
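The word averages above can be approximated with simple whitespace tokenization; note the paper's exact tokenizer is not specified here, so counts computed this way may differ slightly from Table 2:

```python
def avg_words(texts):
    """Mean whitespace-token count over a list of texts."""
    return sum(len(t.split()) for t in texts) / len(texts)

print(avg_words(["jedna dva", "jedna dva tri štyri"]))  # 3.0
```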
### Loading with `datasets`
```python
from datasets import load_dataset

dataset = load_dataset("NaiveNeuron/SMESum")
sample = dataset["train"][0]
print(sample["title"])
print(sample["introduction"])
print(sample["document"])
```
For local development, you can run the loader against the repository checkout:
```python
dataset = load_dataset("./SMESum")
```
## Data Preprocessing
Source articles originate from the [`NaiveNeuron/sme-sum`](https://github.com/NaiveNeuron/sme-sum) utilities, which scrape SME.sk snapshots from the Wayback Machine. Each `.data` file is a UTF-8 encoded JSON payload with the fields above. This project orders filenames deterministically via `sha256(salt + filename)` (with salt `xsum-sme-split-v1`) and selects exactly 64,001/8,001/8,001 entries for train/validation/test. No additional cleaning, tokenization, or normalization is applied beyond what the original crawl performed.
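The salted-hash ordering can be sketched as follows. This is an illustration, not the released split script: the helper names are hypothetical, and the assumption that files are sorted by ascending hex digest before slicing should be checked against the repository.

```python
import hashlib

SALT = "xsum-sme-split-v1"  # salt stated above

def split_key(filename):
    """Deterministic sort key: hex digest of sha256(salt + filename)."""
    return hashlib.sha256((SALT + filename).encode("utf-8")).hexdigest()

def deterministic_split(filenames, sizes=(64001, 8001, 8001)):
    """Order filenames by salted hash, then slice into train/validation/test."""
    ordered = sorted(filenames, key=split_key)
    n_train, n_val, n_test = sizes
    return (ordered[:n_train],
            ordered[n_train:n_train + n_val],
            ordered[n_train + n_val:n_train + n_val + n_test])
```

Because the key depends only on the salt and the filename, rerunning the split on the same file list always yields identical partitions.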
## Data Collection
- **Source**: SME.sk, a major Slovak news portal. Articles were harvested from archived snapshots hosted by the Internet Archive.
- **Timeframe**: Articles span multiple years leading up to late 2019, in line with the crawl described in [Šuppa and Adamec (2020)](https://aclanthology.org/2020.lrec-1.830/).
- **Selection criteria**: Paid-content stubs and incomplete articles were excluded. Categories cover general news, world affairs, business, sports, travel, tech, culture, and opinion.
## Citation
```bibtex
@inproceedings{suppa-adamec-2020-sme,
    title = {A Summarization Dataset of Slovak News Articles},
    author = {Marek {\v{S}}uppa and Jergu{\v{s}} Adamec},
    booktitle = {Proceedings of the Twelfth Language Resources and Evaluation Conference (LREC 2020)},
    year = {2020},
    pages = {6725--6730},
    address = {Marseille, France},
    publisher = {European Language Resources Association},
    url = {https://aclanthology.org/2020.lrec-1.830}
}
```
## Dataset Curators
The deterministic split script and packaging in this repository were prepared by the maintainers of the SMESum project. The original crawl and dataset definition were authored by Marek Šuppa and Jerguš Adamec (Comenius University in Bratislava).
## Licensing Information
- **Original content**: © Petit Press, used under fair-use/academic research assumptions.
- **Paper**: Licensed under CC-BY-NC (LREC Proceedings).
- **This split**: Scripts and JSONL artifacts follow the repository's license (MIT unless noted otherwise).