---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: crawl_date
    dtype: string
  - name: url
    dtype: string
  - name: domain
    dtype: string
  - name: file_type
    dtype: string
  - name: languages
    dtype: string
  - name: document_fluency
    dtype: float32
  - name: text
    dtype: string
  - name: paragraphs
    sequence:
    - name: paragraph_text
      dtype: string
    - name: is_heading
      dtype: bool
    - name: quality_label
      dtype: string
    - name: fluency
      dtype: float32
    - name: language
      dtype: string
    - name: contains_sensitive
      dtype: bool
  - name: sentence_count
    dtype: int64
  - name: paragraph_count
    dtype: int64
  - name: character_length
    dtype: int64
  - name: word_count
    dtype: int64
  - name: phi_tokens
    sequence: int64
  - name: phi_token_count
    dtype: int64
  - name: gemma2_tokens
    sequence: int64
  - name: gemma2_token_count
    dtype: int64
  - name: micka_tokens
    sequence: int64
  - name: micka_token_count
    dtype: int64
  - name: orca_tokens
    sequence: int64
  - name: orca_token_count
    dtype: int64
  - name: llama_tokens
    sequence: int64
  - name: llama_token_count
    dtype: int64
  - name: micka_struct_tokens
    sequence: int64
  - name: micka_struct_token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 212190702185
    num_examples: 6302486
  download_size: 54394662084
  dataset_size: 212190702185
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-sa-4.0
---
# Dataset Card for MaCoCu-sl Multi-Tokenized

## Dataset Description
This dataset provides a pre-tokenized version of the Slovene web corpus MaCoCu-sl. It includes the original text data and metadata from MaCoCu-sl, augmented with token IDs and token counts generated by several popular large language model tokenizers. The goal is to facilitate research and experimentation by providing ready-to-use tokenized data, saving the compute of re-running tokenization for each experiment.
The license and original data come from https://www.clarin.si/repository/xmlui/handle/11356/1795. This is merely a repackaging for easier use with machine learning frameworks; for licensing and terms of use, see the original dataset.
## Tokenization Details
The dataset contains tokenizations for the following models, applied to each document in the `train` split of `klokedm/MaCoCu-sl`:
- `microsoft/phi-4`: Standard tokenization.
- `google/gemma-2-2b`: Standard tokenization.
- `klokedm/micka-32768`:
  - Standard tokenization (`micka_tokens`, `micka_token_count`).
  - Structured tokenization (`micka_struct_tokens`, `micka_struct_token_count`): sentences (identified using NLTK for Slovene) are wrapped in `⸢s⸥...⸢/s⸥` tags, and paragraphs are wrapped in `⸢p⸥...⸢/p⸥` tags before tokenization (see the sketch after this list).
- `microsoft/Orca-2-13b`: Standard tokenization.
- `meta-llama/Llama-3.3-70B-Instruct`: Standard tokenization.
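The structured preprocessing can be approximated as below. This is a minimal sketch, not the release code: `build_structured_text` is a hypothetical helper, the tag strings are taken literally from the description above, and the exact joining/whitespace behavior is an assumption (newer NLTK versions may require the `punkt_tab` resource instead of `punkt`).

```python
import nltk

nltk.download("punkt")  # the punkt models include a Slovene sentence tokenizer

def build_structured_text(paragraph_texts):
    """Wrap sentences in ⸢s⸥...⸢/s⸥ and paragraphs in ⸢p⸥...⸢/p⸥ (assumed literal tags)."""
    parts = []
    for para in paragraph_texts:
        sentences = nltk.sent_tokenize(para, language="slovene")
        wrapped = "".join(f"⸢s⸥{s}⸢/s⸥" for s in sentences)
        parts.append(f"⸢p⸥{wrapped}⸢/p⸥")
    return "".join(parts)  # joining without separators is an assumption

print(build_structured_text(["Prvi stavek. Drugi stavek.", "Nov odstavek."]))
```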
## Input Text Preparation
- For standard tokenizations (Phi, Gemma2, Orca, Llama, standard Micka): the input text was primarily derived from the `text` field of the source dataset. If the `text` field was empty, paragraphs from the `paragraphs` field were joined by newlines (`\n`).
- For structured Micka tokenization: the input text was derived from the `paragraphs` field. If unavailable, the `text` field was split by newlines to simulate paragraphs. Each paragraph's text was then sentence-split using `nltk.sent_tokenize(..., language='slovene')`, and the structural tags were added as described above (see the sketch below).
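A sketch of this selection logic, with hypothetical helper names. Note that `datasets` may materialize the `paragraphs` sequence as a dict of lists rather than a list of dicts; adjust the access pattern accordingly.

```python
def standard_input_text(example):
    # Prefer the flat `text` field; otherwise join paragraph texts with newlines.
    if example["text"]:
        return example["text"]
    return "\n".join(p["paragraph_text"] for p in example["paragraphs"])

def structured_paragraph_texts(example):
    # Prefer the structured `paragraphs` field; otherwise split `text` into lines.
    if example["paragraphs"]:
        return [p["paragraph_text"] for p in example["paragraphs"]]
    return [line for line in example["text"].split("\n") if line.strip()]
```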
All tokenizations were performed with `add_special_tokens=True`.
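Reproducing one of the standard tokenizations could then look like this (a sketch; it reuses the hypothetical `standard_input_text` helper above and assumes the tokenizer is accessible on the Hub):

```python
from transformers import AutoTokenizer

phi_tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
text = standard_input_text(example)
phi_ids = phi_tokenizer.encode(text, add_special_tokens=True)
# Should match the stored count if the preparation above mirrors the release
assert len(phi_ids) == example["phi_token_count"]
```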
## Additional Statistics
The following statistics were computed based on the flattened text (primarily from the `text` field, joined by newlines if applicable); a computation sketch follows the list:
- `sentence_count`: Number of sentences identified using `nltk.sent_tokenize(..., language='slovene')`.
- `paragraph_count`: Number of paragraphs (derived from the `paragraphs` field structure or non-empty lines in the `text` field).
- `character_length`: Total number of characters in the flattened text.
- `word_count`: Number of words (whitespace-separated) in the flattened text.
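These statistics could be reproduced roughly as follows (a sketch; the released values may differ in edge cases such as empty lines):

```python
import nltk

def compute_stats(flat_text, paragraph_texts):
    return {
        "sentence_count": len(nltk.sent_tokenize(flat_text, language="slovene")),
        "paragraph_count": len(paragraph_texts),
        "character_length": len(flat_text),
        "word_count": len(flat_text.split()),  # whitespace-separated words
    }
```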
## Data Fields
The dataset contains the following fields:
- `title`: (string) Document title if found, else empty.
- `crawl_date`: (string) Date of the web crawl (YYYY-MM-DD).
- `url`: (string) Source URL of the document.
- `domain`: (string) Domain name from the URL.
- `file_type`: (string) Detected file type (e.g., 'html', 'pdf').
- `languages`: (string) Detected language(s). Primarily 'sl'.
- `document_fluency`: (float32) Fluency score for the document.
- `text`: (string) Plain text content of the document.
- `paragraphs`: (list of dicts) Structured paragraph information from the source dataset (features: `paragraph_text`, `is_heading`, `quality_label`, `fluency`, `language`, `contains_sensitive`).
- `sentence_count`: (int64) Number of sentences computed for statistics.
- `paragraph_count`: (int64) Number of paragraphs computed for statistics.
- `character_length`: (int64) Character length computed for statistics.
- `word_count`: (int64) Word count computed for statistics.
- `phi_tokens`: (list of int64) Token IDs generated by the `microsoft/phi-4` tokenizer.
- `phi_token_count`: (int64) Number of tokens in `phi_tokens`.
- `gemma2_tokens`: (list of int64) Token IDs generated by the `google/gemma-2-2b` tokenizer.
- `gemma2_token_count`: (int64) Number of tokens in `gemma2_tokens`.
- `micka_tokens`: (list of int64) Token IDs generated by `klokedm/micka-32768` (standard).
- `micka_token_count`: (int64) Number of tokens in `micka_tokens`.
- `orca_tokens`: (list of int64) Token IDs generated by the `microsoft/Orca-2-13b` tokenizer.
- `orca_token_count`: (int64) Number of tokens in `orca_tokens`.
- `llama_tokens`: (list of int64) Token IDs generated by the `meta-llama/Llama-3.3-70B-Instruct` tokenizer.
- `llama_token_count`: (int64) Number of tokens in `llama_tokens`.
- `micka_struct_tokens`: (list of int64) Token IDs generated by `klokedm/micka-32768` (structured).
- `micka_struct_token_count`: (int64) Number of tokens in `micka_struct_tokens`.
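Each `*_token_count` field should equal the length of its token list. A quick streaming sanity check (a sketch; streaming avoids downloading the full ~54 GB):

```python
from datasets import load_dataset

ds = load_dataset("klokedm/MaCoCu-sl-tokenized", split="train", streaming=True)
row = next(iter(ds))
for prefix in ["phi", "gemma2", "micka", "orca", "llama", "micka_struct"]:
    assert row[f"{prefix}_token_count"] == len(row[f"{prefix}_tokens"])
```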
## Data Splits
The dataset contains only the `train` split, mirroring the structure of `klokedm/MaCoCu-sl`. It includes all examples from the original training split.
## Source Dataset
Please refer to the [CLARIN MaCoCu-sl dataset](https://www.clarin.si/repository/xmlui/handle/11356/1795) for detailed information about the original data collection, cleaning, and filtering processes.
## Dataset Usage
Load the dataset using the `datasets` library:
```python
from datasets import load_dataset

# Load the repository from HF
repo_id = "klokedm/MaCoCu-sl-tokenized"
ds = load_dataset(repo_id)

# Access an example and specific tokenization
example = ds['train'][0]
print(f"URL: {example['url']}")

print(f"--- Phi Tokens ({example['phi_token_count']}) ---")
print(example['phi_tokens'][:20])  # Print first 20 token IDs

print(f"--- Structured Micka Tokens ({example['micka_struct_token_count']}) ---")
print(example['micka_struct_tokens'][:20])  # Print first 20 token IDs

print("--- Statistics ---")
print(f"Sentences: {example['sentence_count']}, Paragraphs: {example['paragraph_count']}")
print(f"Chars: {example['character_length']}, Words: {example['word_count']}")
```