# CLASSLA-web-sl-2 Enriched
This repository contains an enriched version of the Slovenian web corpus CLASSLA-web.sl.2.0. The dataset has been annotated with extensive metadata regarding educational value, domain classification, and web registers.
It provides two configurations:
- default: The full, raw enriched dataset (4.79M documents).
- filtered: A high-quality subset optimized for LLM Continued Pre-training (CPT), filtered based on educational score and deduplicated.
## Dataset Configurations

### 1. Default (default)
The complete dataset contains all documents from the crawl, enriched with metadata columns (eduscore, nvidia_domain, web_reg, cluster_id).
- Documents: 4,790,039
- Tokens: ~3.60 Billion
### 2. Filtered (filtered)
A strategic subset designed for training high-quality Language Models.
- Filtration Logic:
  - Quality Cutoff: Documents with eduscore < 1.6 are removed. This effectively filters out machine-generated spam, SEO boilerplate, and low-quality promotional content while preserving lyrical content, news, and forums.
  - Deduplication (Cluster Flattening): For every MinHash cluster (cluster_size > 1), only the single document with the highest eduscore is retained.
- Documents: ~3.9M (Estimated)
- Tokens: ~3.33 Billion (92.5% of original volume)
- Use Case: Recommended for LLM Continued Pre-training (CPT).
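The two filtration steps above can be sketched in plain Python. This is a minimal illustration applied to toy records, not the actual pipeline code; the helper name `filter_corpus` is invented for this example, while the field names (`eduscore`, `cluster_id`) and the 1.6 cutoff come from the description above.

```python
# Sketch of the "filtered" configuration logic on toy records.
# Step 1: quality cutoff at eduscore >= 1.6.
# Step 2: cluster flattening -- keep the best-scoring document per cluster.

EDU_CUTOFF = 1.6

docs = [
    {"id": "a", "eduscore": 0.9, "cluster_id": 1},  # below cutoff -> dropped
    {"id": "b", "eduscore": 2.4, "cluster_id": 2},  # best in cluster 2 -> kept
    {"id": "c", "eduscore": 1.8, "cluster_id": 2},  # near-duplicate of b -> dropped
    {"id": "d", "eduscore": 3.1, "cluster_id": 3},  # unique, high quality -> kept
]

def filter_corpus(docs, cutoff=EDU_CUTOFF):
    # 1. Quality cutoff: drop documents below the educational-score threshold.
    quality = [d for d in docs if d["eduscore"] >= cutoff]
    # 2. Cluster flattening: keep only the highest-scoring document per cluster_id.
    best = {}
    for d in quality:
        cid = d["cluster_id"]
        if cid not in best or d["eduscore"] > best[cid]["eduscore"]:
            best[cid] = d
    return list(best.values())

kept = filter_corpus(docs)
print(sorted(d["id"] for d in kept))  # -> ['b', 'd']
```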
## Data Fields

### Standard Fields
- id (string): Unique identifier of the document (e.g., CLASSLA-web.2.0.sl.1).
- text (string): The main content of the web page.
- title (string): The title of the web page.
- url (string): The original URL.
- domain (string): The domain name (e.g., rtvslo.si).
- genre (string): The genre label, one of 10 values assigned by the X-GENRE classifier.
- topic (string): The topic label, one of 18 values assigned by the IPTC news topic classifier.
- crawl_year (int64): Year of the crawl (2024).
- token_count (int64): Number of tokens in the text (calculated using the Zlatorog MoE tokenizer).
### Enriched Metadata Fields

#### eduscore (float)
A quality score ranging from 0.0 to 5.0, predicting the educational value of the content.
- Model: zID4si/slovenian-edu-classifier-slovlo
- Interpretation: See model card
#### nvidia_domain (string)
The broad topic classification of the document.
- Model: nvidia/domain-classifier
- Values: News, Science, Health, Politics, Finance, Sports, Travel, etc.
#### web_reg (string or list)
The functional text variety (register) of the document.
- Model: TurkuNLP/web-register-classification-multilingual
- Mapping Table:
| Code | Description | Code | Description |
|---|---|---|---|
| MT | Machine translated / generated | re | Recipe |
| ne | News report | en | Encyclopedia article |
| na | Narrative | ra | Research article |
| nb | Narrative blog | dtp | Description of a thing/person |
| sr | Sports report | fi | FAQ |
| op | Opinion | lt | Legal terms and conditions |
| ob | Opinion blog | rv | Review |
| ed | Editorial / News & opinion blog | rs | Religious blog / sermon |
| hi | How-to / Instructions | av | Advice |
| in | Informational description | ip | Informational persuasion |
| id | Interactive discussion (Forum) | ds | Description with intent to sell |
| ly | Lyrical (Poetry/Song) | sp | Spoken (Interview transcripts) |
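Because web_reg can be either a single code or a list of codes (for multi-label documents), downstream filtering has to handle both shapes. The small helper below is illustrative, not part of any dataset tooling; it checks whether a document carries a given register code, e.g. MT (machine translated / generated) or ly (lyrical).

```python
# web_reg may be a single register code (str) or a list of codes.
# This helper answers "does this document carry register `code`?" for both shapes.

def has_register(web_reg, code):
    if isinstance(web_reg, list):
        return code in web_reg
    return web_reg == code

print(has_register("ne", "MT"))          # -> False (news report, not machine generated)
print(has_register(["MT", "ds"], "MT"))  # -> True  (multi-label document)
```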
### Deduplication Fields
- minhash_sig (list[int]): A list of 128 integers representing the MinHash signature of the text, used for Locality Sensitive Hashing (LSH).
- cluster_id (int64): The identifier of the cluster this document belongs to.
  - If cluster_size == 1: the document is unique.
  - If cluster_size > 1: the document is a near-duplicate of others sharing the same cluster_id.
- cluster_size (int64): The total number of documents found in this specific cluster.
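The stored minhash_sig signatures also let you estimate how similar two documents are without re-tokenizing them: by the standard MinHash property, the fraction of positions where two signatures agree approximates the Jaccard similarity of the underlying shingle sets. A minimal sketch (toy signatures of length 8 for readability; the real field holds 128 integers):

```python
# Estimate Jaccard similarity from two MinHash signatures (standard MinHash
# property: P[position matches] = Jaccard similarity of the shingle sets).

def estimate_jaccard(sig_a, sig_b):
    assert len(sig_a) == len(sig_b), "signatures must have equal length"
    matches = sum(1 for x, y in zip(sig_a, sig_b) if x == y)
    return matches / len(sig_a)

# Toy signatures for illustration; real `minhash_sig` values have 128 entries.
a = [3, 7, 1, 9, 4, 4, 2, 8]
b = [3, 7, 5, 9, 4, 0, 2, 8]
print(estimate_jaccard(a, b))  # -> 0.75 (6 of 8 positions match)
```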
## Usage

### Loading the Filtered Dataset (Recommended)
```python
from datasets import load_dataset

# Loads the CPT-optimized version (eduscore >= 1.6, deduplicated)
dataset = load_dataset("zID4si/CLASSLA_web_sl_2_enriched", "filtered", split="train")
print(dataset)
```
### Loading the Raw Enriched Dataset
```python
from datasets import load_dataset

# Loads the full dataset, including spam and near-duplicates
dataset = load_dataset("zID4si/CLASSLA_web_sl_2_enriched", "default", split="train")
```
## Source & Citation
The underlying text data is derived from the CLASSLA-web 2.0 corpus (sl subcorpus). Additional information: https://clarinsi.github.io/classla-web/
Kuzman Pungeršek, Taja; Rupnik, Peter and Ljubešić, Nikola, 2026, South Slavic web corpus collection CLASSLA-web 2.0, Slovenian language resource repository CLARIN.SI, ISSN 2820-4042, http://hdl.handle.net/11356/2079.
If you use the enriched metadata or the filtered subset, please also acknowledge this repository and the classification models used.
## Author
Tomaž Savodnik, Zavod za informacijsko družbo (zID)
The dataset was created as part of research into the quality and diversity of Slovenian online corpora.