Dataset Card for SLiM-CZ-V1 Czech Text Corpus
This dataset provides preprocessed Czech text data for training SLiM-CZ-V1 (Slavic Linguistic integrated Micro-model for Czechia), a small transformer-based language model designed for Czech text generation and language modeling tasks.
Dataset Details
Dataset Description
SLiM-CZ-V1 Czech Text Corpus contains tokenized Czech text sequences ready for training autoregressive language models. The dataset has been preprocessed with consistent cleaning, tokenization, and sequence creation to ensure high-quality training data for Czech language models.
- Curated by: Filip Sedivy
- Language(s) (NLP): Czech (cs)
- License: MIT License
Dataset Sources
- Repository: https://github.com/filipsedivy/SLiM-CZ-V1
- Model Repository: https://huggingface.co/filipsedivy/SLiM-CZ-V1
Uses
Direct Use
This dataset is designed for:
- Training small to medium-sized Czech language models (3M-125M parameters)
- Autoregressive text generation in Czech
- Language modeling research for Czech NLP
- Fine-tuning pre-trained models for Czech-specific tasks
- Educational purposes for understanding transformer-based language models
Recommended use: Training SLiM-CZ-V1 models (Tiny, Small, Medium, Large variants).
Out-of-Scope Use
This dataset should NOT be used for:
- Production systems without human oversight
- Medical, legal, or financial decision-making
- Generating harmful or illegal content
- Applications where factual accuracy is critical without verification
- Training models for languages other than Czech
Dataset Structure
Data Format
The dataset is provided in JSON format as a list of token ID sequences:
```json
[
  [15, 32, 45, 67, 89, 12, 34, 56, 78, 90, 23, 45, ...],  // Sequence 1
  [32, 45, 67, 89, 12, 34, 56, 78, 90, 23, 45, 67, ...],  // Sequence 2
  [45, 67, 89, 12, 34, 56, 78, 90, 23, 45, 67, 89, ...],  // Sequence 3
  ...
]
```
Each sequence is a list of integer token IDs with length `seq_len + 1`:
- First `seq_len` tokens serve as input
- Last `seq_len` tokens serve as labels (shifted by 1 position)
This structure enables autoregressive language modeling where the model predicts the next token.
Example
With seq_len=512:
- Each sequence has 513 tokens
- Input: tokens [0:512]
- Target: tokens [1:513]
- This creates a "next token prediction" task
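A minimal sketch of how one such sequence maps to a next-token-prediction training pair, assuming `train.json` lives under `processed_data/` as described in Data Files below:

```python
import json

# Assumed location of the processed files; adjust to your local layout.
with open("processed_data/train.json", encoding="utf-8") as f:
    sequences = json.load(f)      # list of lists of token IDs

seq = sequences[0]                # seq_len + 1 tokens (513 by default)
inputs = seq[:-1]                 # tokens [0:seq_len]   -> model input
targets = seq[1:]                 # tokens [1:seq_len+1] -> next-token labels
assert len(inputs) == len(targets)
```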
Data Files
```
processed_data/
├── train.json        # Training sequences (list of lists)
├── val.json          # Validation sequences (list of lists)
├── test.json         # Test sequences (list of lists)
├── tokenizer.json    # Tokenizer vocabulary and mappings
├── stats.json        # Dataset statistics
└── data_config.json  # Preprocessing configuration
```
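A short sketch of loading the three splits and checking their shape; the `processed_data/` path is an assumption, adjust it to wherever the files are stored:

```python
import json
from pathlib import Path

data_dir = Path("processed_data")  # assumed location of the processed files
for split in ("train", "val", "test"):
    with open(data_dir / f"{split}.json", encoding="utf-8") as f:
        seqs = json.load(f)
    print(f"{split}: {len(seqs)} sequences, {len(seqs[0])} tokens each")
```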
Data Splits
| Split | Percentage | Approximate Sequences |
|---|---|---|
| Train | 90% | ~90,000-900,000 |
| Validation | 5% | ~5,000-50,000 |
| Test | 5% | ~5,000-50,000 |
Exact numbers depend on source corpus size and configuration.
Dataset Creation
Curation Rationale
This dataset was created to enable training of efficient Czech language models that can:
- Run on consumer-grade hardware (unlike large multilingual models)
- Generate coherent Czech text with proper morphology and syntax
- Serve as a foundation for domain-specific fine-tuning
- Support Czech NLP research with accessible model sizes
- Provide educational resources for learning about language models
Source Data
Data Collection and Processing
The dataset was created using a standardized pipeline (see prepare_data.py):
File Collection
- Recursive scanning of text files (.txt, .md, .rst, .py, .js, .html, .css, .json, .xml, .csv, .log, .c, .cpp, .java)
- Collection from multiple Czech text sources
Text Cleaning
- URL removal using regex patterns
- Email address removal
- Whitespace normalization (multiple spaces β single space)
- Short line filtering (minimum 10 characters)
- Deduplication of repeated content
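The following is an illustrative sketch of such a cleaning step, not the exact `prepare_data.py` implementation; the regular expressions and thresholds are assumptions:

```python
import re

def clean_text(text: str, min_line_len: int = 10) -> str:
    """Sketch: remove URLs and e-mails, normalize whitespace,
    drop short lines, and deduplicate repeated lines."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)         # URL removal
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b", " ", text)  # e-mail removal
    kept, seen = [], set()
    for line in text.splitlines():
        line = re.sub(r"\s+", " ", line).strip()                # whitespace normalization
        if len(line) < min_line_len or line in seen:            # short-line filter + dedup
            continue
        seen.add(line)
        kept.append(line)
    return "\n".join(kept)
```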
Tokenization
- Character-level tokenization (configurable)
- Special tokens: `<pad>`, `<unk>`, `<bos>`, `<eos>`
- Vocabulary construction with minimum frequency threshold
- Default vocab size: 10,000 tokens
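A sketch of character-level vocabulary construction with these special tokens; the actual `prepare_data.py` may order or filter tokens differently:

```python
from collections import Counter

SPECIAL_TOKENS = ["<pad>", "<unk>", "<bos>", "<eos>"]

def build_char_vocab(texts, min_freq=1, max_size=10_000):
    """Sketch: count characters, keep the most frequent ones above
    min_freq, and reserve the first IDs for the special tokens."""
    counts = Counter(ch for text in texts for ch in text)
    chars = [ch for ch, n in counts.most_common(max_size - len(SPECIAL_TOKENS))
             if n >= min_freq]
    return {tok: i for i, tok in enumerate(SPECIAL_TOKENS + chars)}

def encode(text, stoi):
    """Map characters to IDs, falling back to <unk> for unknown characters."""
    unk = stoi["<unk>"]
    return [stoi["<bos>"]] + [stoi.get(ch, unk) for ch in text] + [stoi["<eos>"]]
```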
Sequence Creation
- Overlapping sequences with configurable stride
- Default: `seq_len=512`, `stride=256`
- Each sequence is `seq_len + 1` tokens (513 for the default)
- Ensures context preservation across sequences
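A sketch of overlapping window creation under these defaults (an assumed helper, not the exact pipeline code):

```python
def make_sequences(token_ids, seq_len=512, stride=256):
    """Slide a window of seq_len + 1 tokens over the token stream so that
    each window yields an input/label pair via a one-token shift."""
    window = seq_len + 1
    return [token_ids[start:start + window]
            for start in range(0, len(token_ids) - window + 1, stride)]
```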
Dataset Splitting
- Stratified split: 90% train, 5% validation, 5% test
- Random shuffling with fixed seed (42) for reproducibility
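A sketch of the 90/5/5 split with the fixed seed; the exact shuffling logic in `prepare_data.py` may differ:

```python
import random

def split_sequences(sequences, seed=42):
    """Shuffle with a fixed seed, then cut into 90% / 5% / 5%."""
    shuffled = list(sequences)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(0.90 * n), int(0.05 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```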
Who are the source data producers?
The source data comes from publicly available Czech text sources:
- Czech Wikipedia articles (licensed under CC BY-SA)
- Public domain Czech literature (classical authors)
- Czech news websites (where redistribution is permitted)
- Czech technical documentation (open-source projects)
- Czech blogs and forums (publicly accessible)
All sources respect copyright laws and licensing requirements. No personal or private communications are included.
Annotations
This dataset contains no additional annotations beyond tokenization. It is designed for unsupervised language modeling.
Personal and Sensitive Information
Efforts have been made to remove personal information:
- Email addresses: Automatically removed during preprocessing
- URLs: Automatically removed during preprocessing
- PII screening: Basic filtering applied
However, as with any web-scraped corpus, complete removal of personal information cannot be guaranteed. Users should be aware that residual personal information may exist and should implement additional safeguards for sensitive applications.
Bias, Risks, and Limitations
Known Limitations
Technical Limitations:
- Character-level tokenization: Suboptimal for Czech morphology (consider BPE/WordPiece for production)
- Fixed sequence length: Truncates long documents
- Limited vocabulary coverage: 10,000 tokens may miss rare words
- Limited dialect coverage: Czech dialects and regional variants are underrepresented
- Static dataset: Does not include recent events or information
Quality Limitations:
- Variable text quality depending on source
- Potential for noise from web-scraped content
- Incomplete representation of all Czech language domains
- May not capture spoken Czech or informal language adequately
Biases
The dataset may contain various biases:
Source Bias:
- Overrepresentation of formal/written Czech vs. informal/spoken Czech
- Skewed toward certain topics (e.g., technical, encyclopedic content)
- Temporal bias reflecting when texts were written
Demographic Bias:
- May reflect perspectives of source text authors
- Potential underrepresentation of minority viewpoints
- Geographic bias toward standard Czech vs. regional variants
Content Bias:
- May perpetuate stereotypes present in source data
- Potential political or ideological biases from source selection
- Unequal representation across different subject domains
Citation
If you use this dataset, please cite:
BibTeX:
```bibtex
@misc{slim_cz_v1_dataset,
  title={SLiM-CZ-V1 Czech Text Corpus},
  author={Filip Sedivy},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/filipsedivy/SLiM-CZ-V1}}
}
```
APA:
Filip Sedivy. (2025). SLiM-CZ-V1 Czech Text Corpus. Hugging Face. https://huggingface.co/datasets/filipsedivy/SLiM-CZ-V1
Glossary
- SLiM-CZ-V1: Slavic Linguistic integrated Micro-model for Czechia
- Autoregressive: Model predicts next token based on previous tokens
- Sequence Length (seq_len): Number of input tokens in each training sequence
- Stride: Overlap between consecutive sequences (prevents context loss)
- Token: Basic unit of text (character in this implementation)
- Vocabulary Size: Number of unique tokens the model can represent
- Character-level tokenization: Each character is a separate token (simpler but less efficient than BPE)
More Information
Dataset Statistics
- Sequence Format: List of lists (no keys, just token IDs)
- Sequence Length: `seq_len + 1` tokens (default: 513)
- Vocabulary Size: Configurable (default: 10,000)
- Tokenization: Character-level (each character = 1 token)
- Total Tokens: ~100M-1B (depending on source corpus)
- Languages: Czech only
- File Format: JSON (plain lists)
Quality Assurance
The dataset undergoes several quality checks:
- Duplicate detection and removal
- Minimum line length filtering (10 characters)
- Character encoding validation (UTF-8)
- Token frequency analysis
- Sequence length verification (all sequences are `seq_len + 1` tokens)
- Split integrity checking
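An illustrative sketch of the length and split-integrity checks; the actual QA code may differ:

```python
def check_splits(train, val, test, seq_len=512):
    """Verify every sequence is seq_len + 1 tokens long and that no
    training sequence leaks into the validation or test splits."""
    for name, split in (("train", train), ("val", val), ("test", test)):
        assert all(len(seq) == seq_len + 1 for seq in split), f"bad length in {name}"
    train_set = {tuple(seq) for seq in train}
    assert train_set.isdisjoint(tuple(seq) for seq in val)
    assert train_set.isdisjoint(tuple(seq) for seq in test)
```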
Dataset Card Contact
For questions, issues, or contributions:
- GitHub Issues: https://github.com/filipsedivy/SLiM-CZ-V1/issues
- Hugging Face Discussions: https://huggingface.co/datasets/filipsedivy/SLiM-CZ-V1/discussions