---
language:
- sk
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-classification
tags:
- clustering
- slovak
- news
- mteb
pretty_name: Pravda.sk URL-based Clustering Dataset
dataset_info:
  features:
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: summary
    dtype: string
  - name: content
    dtype: string
  - name: date
    dtype: string
  - name: tags
    sequence: string
  - name: url_category
    dtype: string
  - name: url_subdomain
    dtype: string
  - name: url_section
    dtype: string
  splits:
  - name: test
    num_examples: 15000
---
# Pravda.sk URL-based Clustering Dataset
A Slovak news article clustering dataset based on URL structure from [pravda.sk](https://pravda.sk), designed for evaluating text clustering and embedding models.
## Dataset Description
This dataset contains 15,000 Slovak news articles categorized by their URL structure. Articles are organized into 50 categories across 11 subdomains, yielding a hierarchical labeling derived from pravda.sk's own editorial URL structure.
### Use Case
This dataset is designed for:
- Evaluating Slovak text embedding models
- Clustering benchmark tasks (MTEB)
- Text classification experiments
- Slovak NLP research
## Dataset Structure
### Fields
| Field | Type | Description |
|-------|------|-------------|
| `url` | string | Original article URL |
| `title` | string | Article headline |
| `summary` | string | Article summary/lead paragraph |
| `content` | string | Full article text |
| `date` | string | Publication date (ISO format) |
| `tags` | list[string] | Original pravda.sk tags |
| `url_category` | string | Full category path (e.g., `sportweb/tenis`) |
| `url_subdomain` | string | Subdomain (e.g., `sportweb`) |
| `url_section` | string | Section within subdomain (e.g., `tenis`) |
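The two hierarchical fields can be derived from `url_category` by splitting on the first `/`: the part before it is the subdomain, the part after is the section. A minimal sketch using only the standard library (the helper name is illustrative, not part of the dataset):

```python
def split_category(url_category: str) -> tuple[str, str]:
    """Split a full category path like 'sportweb/tenis' into
    (subdomain, section). If there is no '/', the section is empty."""
    subdomain, _, section = url_category.partition("/")
    return subdomain, section

print(split_category("sportweb/tenis"))  # ('sportweb', 'tenis')
```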
### Statistics
- **Total articles**: 15,000
- **Categories**: 50
- **Samples per category**: 300
- **Subdomains**: 11
### Subdomains
| Subdomain | Articles | Description |
|-----------|----------|-------------|
| `sportweb` | 2,100 | Sports news |
| `vat` | 2,100 | Science & technology |
| `koktail` | 1,800 | Celebrity & entertainment |
| `spravy` | 1,800 | General news |
| `cestovanie` | 1,500 | Travel |
| `ekonomika` | 1,200 | Business & economy |
| `kultura` | 1,200 | Culture & arts |
| `zurnal` | 1,200 | Magazine/features |
| `auto` | 900 | Automotive |
| `zdravie` | 900 | Health |
| `uzitocna` | 300 | Practical/lifestyle |
## Data Quality
- **No duplicates**: Deduplicated by `title` + `summary` combination
- **No null values**: All articles have valid `title` and `summary` fields
- **Balanced**: 300 samples per category
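The claimed balance (50 categories, 300 samples each) is straightforward to check locally once the split is loaded. A hedged sketch with a hypothetical helper, demonstrated on toy data; for the real dataset you would pass `articles["url_category"]` with `n_categories=50` and `per_category=300`:

```python
from collections import Counter


def is_balanced(labels, n_categories, per_category):
    """Return True if `labels` contains exactly `n_categories` distinct
    values, each occurring exactly `per_category` times."""
    counts = Counter(labels)
    return (len(counts) == n_categories
            and all(n == per_category for n in counts.values()))


# Toy demo: 2 categories x 3 samples each.
print(is_balanced(["a", "a", "a", "b", "b", "b"], 2, 3))  # True
```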
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("NaiveNeuron/pravda-sk-url-clustering")
# Access the test split
articles = dataset["test"]
# Example: Get articles by category
tennis_articles = [a for a in articles if a["url_category"] == "sportweb/tenis"]
```
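For clustering or classification evaluation, the string categories are typically mapped to integer labels. A minimal sketch with a hypothetical helper (standard library only; sorting the distinct categories makes the mapping deterministic):

```python
def encode_labels(categories):
    """Map each distinct category string to a stable integer id and
    return (integer labels, category-to-id mapping)."""
    mapping = {cat: i for i, cat in enumerate(sorted(set(categories)))}
    return [mapping[cat] for cat in categories], mapping


labels, mapping = encode_labels(
    ["sportweb/tenis", "vat/veda", "sportweb/tenis"]
)
print(labels)  # [0, 1, 0]
```

In practice you would call it as `encode_labels(articles["url_category"])` on the loaded test split.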
## Source
Articles were collected from [pravda.sk](https://pravda.sk) via [web.archive.org](https://web.archive.org) snapshots spanning 2004-2025.
## License
This dataset is released under CC-BY-4.0. Please check pravda.sk terms of service for content usage rights.
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{pravda_sk_url_clustering,
  title     = {Pravda.sk URL-based Clustering Dataset},
  author    = {NaiveNeuron},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/NaiveNeuron/pravda-sk-url-clustering}
}
```