---
pretty_name: SMESum
dataset_summary: The Slovak SME news summarization corpus.
tags:
- news
- summarization
- slovak
- slovak-language
task_categories:
- summarization
task_ids:
- news-articles-summarization
language:
- sk
size_categories:
- 10K<n<100K
---

# Dataset Card for SMESum

SMESum is a Slovak news summarization corpus built from articles published on SME.sk.

## Dataset Structure

### Data Instances

Each record is a JSON object with the following shape (values elided):

```json
{
  "title": "",
  "introduction": "",
  "document": "",
  "category": "",
  "url": ""
}
```

To reproduce the summarization target described in the paper, concatenate `title` and `introduction`.

### Data Fields

- `title` (`string`): Article headline authored by SME editors.
- `introduction` (`string`): Teaser/abstract (one or two sentences).
- `document` (`string`): Full article text, as scraped from the archived page.
- `category` (`string`): SME topical section (e.g., `domov`, `svet`, `sport`, `ekonomika`).
- `url` (`string`): Internet Archive URL of the captured article.

### Data Splits

| Split      | Records | Avg. words (document) | Avg. sentences (document) | Avg. words (summary) | Avg. sentences (summary) |
|------------|---------|-----------------------|---------------------------|----------------------|--------------------------|
| train      | 64,001  | 339.09                | 18.08                     | 23.61                | 2.16                     |
| validation | 8,001   | 344.99                | 18.18                     | 23.58                | 2.16                     |
| test       | 8,001   | 332.25                | 17.96                     | 23.46                | 2.15                     |

Statistics replicate Table 2 in [Šuppa and Adamec (2020)](https://aclanthology.org/2020.lrec-1.830/).

### Loading with `datasets`

```python
from datasets import load_dataset

dataset = load_dataset("NaiveNeuron/SMESum")
sample = dataset["train"][0]
print(sample["title"])
print(sample["introduction"])
print(sample["document"])
```

For local development, you can run the loader against the repository checkout:

```python
dataset = load_dataset("./SMESum")
```

## Data Preprocessing

Source articles originate from the [`NaiveNeuron/sme-sum`](https://github.com/NaiveNeuron/sme-sum) utilities, which scrape SME.sk snapshots from the Wayback Machine. Each `.data` file is a UTF-8 encoded JSON payload with the fields above.
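A payload like this can be loaded with the standard library alone. The following is a minimal sketch: `read_record` and `build_target` are illustrative helper names (not part of the published tooling), and `build_target` implements the title-plus-introduction target construction described above.

```python
import json
from pathlib import Path


def read_record(path: Path) -> dict:
    """Load one `.data` file: a UTF-8 encoded JSON object with the
    `title`, `introduction`, `document`, `category` and `url` fields."""
    return json.loads(path.read_text(encoding="utf-8"))


def build_target(record: dict) -> str:
    """Summarization target as described in the paper:
    the title concatenated with the introduction."""
    return f"{record['title']} {record['introduction']}".strip()
```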
This project orders filenames deterministically via `sha256(salt + filename)` (with salt `xsum-sme-split-v1`) and selects exactly 64,001/8,001/8,001 entries for train/validation/test. No cleaning, tokenization, or normalization is applied beyond what the original crawl performed.

## Data Collection

- **Source**: SME.sk, a major Slovak news portal. Articles were harvested from archived snapshots hosted by the Internet Archive.
- **Timeframe**: Articles span multiple years leading up to late 2019, in line with the crawl described in [Šuppa and Adamec (2020)](https://aclanthology.org/2020.lrec-1.830/).
- **Selection criteria**: Paid-content stubs and incomplete articles were excluded. Categories cover general news, world affairs, business, sports, travel, tech, culture, and opinion.

## Citation

```
@inproceedings{suppa-adamec-2020-sme,
    title = {A Summarization Dataset of Slovak News Articles},
    author = {Marek {\v{S}}uppa and Jergu{\v{s}} Adamec},
    booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
    year = {2020},
    pages = {6725--6730},
    address = {Marseille, France},
    publisher = {European Language Resources Association},
    url = {https://aclanthology.org/2020.lrec-1.830}
}
```

## Dataset Curators

The deterministic split script and packaging in this repository were prepared by the maintainers of the SMESum project. The original crawl and dataset definition were authored by Marek Šuppa and Jerguš Adamec (Comenius University in Bratislava).

## Licensing Information

- **Original content**: © Petit Press, used under fair-use/academic research assumptions.
- **Paper**: Licensed under CC-BY-NC (LREC Proceedings).
- **This split**: Scripts and JSONL artifacts follow the repository’s license (MIT unless noted otherwise).
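## Appendix: Split Reconstruction Sketch

The salted-hash ordering described under Data Preprocessing can be sketched as follows. This is a best-effort reconstruction from the description in this card, not the repository's actual split script; `split_key` and `make_splits` are illustrative names, and the filename list is hypothetical.

```python
import hashlib

# Salt quoted in the Data Preprocessing section.
SALT = "xsum-sme-split-v1"


def split_key(filename: str) -> str:
    """Deterministic sort key: hex digest of sha256(salt + filename)."""
    return hashlib.sha256((SALT + filename).encode("utf-8")).hexdigest()


def make_splits(filenames, sizes=(64_001, 8_001, 8_001)):
    """Order filenames by their salted hash, then slice off
    train/validation/test sets of the exact sizes used here."""
    ordered = sorted(filenames, key=split_key)
    n_train, n_val, n_test = sizes
    train = ordered[:n_train]
    validation = ordered[n_train:n_train + n_val]
    test = ordered[n_train + n_val:n_train + n_val + n_test]
    return train, validation, test
```

Because the ordering depends only on the filenames and the fixed salt, re-running this over the same file set always reproduces the same three splits, regardless of how the filesystem enumerates the files.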