---
dataset_info:
  features:
    - name: title
      dtype: string
    - name: crawl_date
      dtype: string
    - name: url
      dtype: string
    - name: domain
      dtype: string
    - name: file_type
      dtype: string
    - name: languages
      dtype: string
    - name: document_fluency
      dtype: float32
    - name: text
      dtype: string
    - name: paragraphs
      sequence:
        - name: paragraph_text
          dtype: string
        - name: is_heading
          dtype: bool
        - name: quality_label
          dtype: string
        - name: fluency
          dtype: float32
        - name: language
          dtype: string
        - name: contains_sensitive
          dtype: bool
    - name: sentence_count
      dtype: int64
    - name: paragraph_count
      dtype: int64
    - name: character_length
      dtype: int64
    - name: word_count
      dtype: int64
    - name: phi_tokens
      sequence: int64
    - name: phi_token_count
      dtype: int64
    - name: gemma2_tokens
      sequence: int64
    - name: gemma2_token_count
      dtype: int64
    - name: micka_tokens
      sequence: int64
    - name: micka_token_count
      dtype: int64
    - name: orca_tokens
      sequence: int64
    - name: orca_token_count
      dtype: int64
    - name: llama_tokens
      sequence: int64
    - name: llama_token_count
      dtype: int64
    - name: micka_struct_tokens
      sequence: int64
    - name: micka_struct_token_count
      dtype: int64
  splits:
    - name: train
      num_bytes: 212190702185
      num_examples: 6302486
  download_size: 54394662084
  dataset_size: 212190702185
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-sa-4.0
---

# Dataset Card for MaCoCu-sl Multi-Tokenized

## Dataset Description

This dataset provides a pre-tokenized version of the Slovene web corpus MaCoCu. It includes the original text data and metadata from MaCoCu-sl, augmented with token IDs and token counts generated by several popular large-language-model tokenizers. The goal is to facilitate research and experimentation by providing ready-to-use tokenized data, avoiding the cost of re-tokenizing the corpus for every experiment.

The original data and its licensing come from https://www.clarin.si/repository/xmlui/handle/11356/1795. This dataset is a repackaging for easier use with machine-learning frameworks; for licensing and terms of use, see the original dataset.

## Tokenization Details

The dataset contains tokenizations for the following models, applied to each document in the train split of klokedm/MaCoCu-sl:

1. microsoft/phi-4: Standard tokenization.
2. google/gemma-2-2b: Standard tokenization.
3. klokedm/micka-32768:
   - Standard tokenization (micka_tokens, micka_token_count).
   - Structured tokenization (micka_struct_tokens, micka_struct_token_count): Sentences (identified using NLTK for Slovene) are wrapped in ⸢s⸥...⸢/s⸥ tags, and paragraphs are wrapped in ⸢p⸥...⸢/p⸥ tags before tokenization; see the sketch after this list.
4. microsoft/Orca-2-13b: Standard tokenization.
5. meta-llama/Llama-3.3-70B-Instruct: Standard tokenization.
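
To make the structured variant concrete, here is a minimal sketch reconstructed from the description above. It is not the published pipeline: how the tagged sentences and paragraphs are concatenated (here, with no separator) is an assumption.

```python
# Hedged reconstruction of the structured-input construction described above.
# Requires NLTK's punkt sentence models (nltk.download("punkt")); the joining
# of tagged sentences/paragraphs without separators is an assumption.
import nltk

def build_structured_text(paragraphs: list[str]) -> str:
    """Wrap each sentence in ⸢s⸥...⸢/s⸥ and each paragraph in ⸢p⸥...⸢/p⸥."""
    tagged_paragraphs = []
    for para in paragraphs:
        sentences = nltk.sent_tokenize(para, language="slovene")
        body = "".join(f"⸢s⸥{s}⸢/s⸥" for s in sentences)
        tagged_paragraphs.append(f"⸢p⸥{body}⸢/p⸥")
    return "".join(tagged_paragraphs)

# Example: two short paragraphs -> one tagged string ready for tokenization.
print(build_structured_text(["Prvi stavek. Drugi stavek.", "Nov odstavek."]))
```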

## Input Text Preparation

- For standard tokenizations (Phi, Gemma2, Orca, Llama, standard Micka): the input text was taken from the text field of the source dataset. If the text field was empty, the paragraphs from the paragraphs field were joined with newlines (\n); see the sketch after this section.
- For structured Micka tokenization: the input text was derived from the paragraphs field. If that was unavailable, the text field was split on newlines to simulate paragraphs. Each paragraph's text was then sentence-split using nltk.sent_tokenize(..., language='slovene'), and the structural tags were added as described above.

All tokenizations were performed using add_special_tokens=True.
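
A minimal sketch of the standard path, assuming the fallback logic described above; flatten_text and tokenize_document are hypothetical helpers, not part of the published pipeline.

```python
# Sketch of the standard tokenization path (shown with microsoft/phi-4).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

def flatten_text(example: dict) -> str:
    # Prefer the `text` field; otherwise join paragraph texts with newlines.
    # When loaded via `datasets`, `paragraphs` is typically a dict of
    # parallel lists, hence the column-style access below.
    if example["text"]:
        return example["text"]
    return "\n".join(example["paragraphs"]["paragraph_text"])

def tokenize_document(example: dict) -> list[int]:
    # add_special_tokens=True matches the setting stated above.
    return tokenizer.encode(flatten_text(example), add_special_tokens=True)
```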

## Additional Statistics

The following statistics were computed based on the flattened text (primarily from the text field, joined by newlines if applicable):

- sentence_count: Number of sentences identified using nltk.sent_tokenize(..., language='slovene').
- paragraph_count: Number of paragraphs (derived from the paragraphs field structure, or from non-empty lines in the text field).
- character_length: Total number of characters in the flattened text.
- word_count: Number of whitespace-separated words in the flattened text.
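
These statistics can be approximated as in the sketch below; the exact paragraph-counting rule (non-empty lines of the flattened text) is an assumption based on the description above.

```python
# Approximate reproduction of the document-level statistics.
import nltk

def compute_stats(flat_text: str) -> dict:
    sentences = nltk.sent_tokenize(flat_text, language="slovene")
    # Assumption: a paragraph is a non-empty line of the flattened text.
    paragraphs = [line for line in flat_text.split("\n") if line.strip()]
    return {
        "sentence_count": len(sentences),
        "paragraph_count": len(paragraphs),
        "character_length": len(flat_text),
        "word_count": len(flat_text.split()),  # whitespace-separated words
    }
```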

## Data Fields

The dataset contains the following fields:

- title: (string) Document title if found, else empty.
- crawl_date: (string) Date of the web crawl (YYYY-MM-DD).
- url: (string) Source URL of the document.
- domain: (string) Domain name from the URL.
- file_type: (string) Detected file type (e.g., 'html', 'pdf').
- languages: (string) Detected language(s); primarily 'sl'.
- document_fluency: (float32) Fluency score for the document.
- text: (string) Plain text content of the document.
- paragraphs: (list of dicts) Structured paragraph information from the source dataset (features: paragraph_text, is_heading, quality_label, fluency, language, contains_sensitive).
- sentence_count: (int64) Number of sentences, computed as described under Additional Statistics.
- paragraph_count: (int64) Number of paragraphs, computed likewise.
- character_length: (int64) Character length, computed likewise.
- word_count: (int64) Word count, computed likewise.
- phi_tokens: (list of int64) Token IDs generated by the microsoft/phi-4 tokenizer (see the decoding sketch after this list).
- phi_token_count: (int64) Number of tokens in phi_tokens.
- gemma2_tokens: (list of int64) Token IDs generated by the google/gemma-2-2b tokenizer.
- gemma2_token_count: (int64) Number of tokens in gemma2_tokens.
- micka_tokens: (list of int64) Token IDs generated by klokedm/micka-32768 (standard).
- micka_token_count: (int64) Number of tokens in micka_tokens.
- orca_tokens: (list of int64) Token IDs generated by the microsoft/Orca-2-13b tokenizer.
- orca_token_count: (int64) Number of tokens in orca_tokens.
- llama_tokens: (list of int64) Token IDs generated by the meta-llama/Llama-3.3-70B-Instruct tokenizer.
- llama_token_count: (int64) Number of tokens in llama_tokens.
- micka_struct_tokens: (list of int64) Token IDs generated by klokedm/micka-32768 (structured).
- micka_struct_token_count: (int64) Number of tokens in micka_struct_tokens.
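
A quick way to sanity-check any of the token columns is to decode them with the matching tokenizer. A small sketch using the phi-4 columns (the helper name is hypothetical):

```python
# Decode stored phi-4 token IDs back to text for spot-checking. Each
# *_token_count field should equal the length of its token-ID list.
from transformers import AutoTokenizer

phi_tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

def preview_phi(example: dict, n: int = 50) -> str:
    assert example["phi_token_count"] == len(example["phi_tokens"])
    return phi_tokenizer.decode(example["phi_tokens"][:n])
```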

## Data Splits

The dataset contains only the train split, mirroring the structure of klokedm/MaCoCu-sl. It includes all examples from the original training split.

## Source Dataset

Please refer to the CLARIN MaCoCu-sl dataset for detailed information about the original data collection, cleaning, and filtering processes.

## Dataset Usage

Load the dataset using the datasets library:

```python
from datasets import load_dataset

# Load the repository from the Hugging Face Hub
repo_id = "klokedm/MaCoCu-sl-tokenized"
ds = load_dataset(repo_id)

# Access an example and specific tokenizations
example = ds["train"][0]

print(f"URL: {example['url']}")
print(f"--- Phi Tokens ({example['phi_token_count']}) ---")
print(example["phi_tokens"][:20])  # Print the first 20 token IDs

print(f"--- Structured Micka Tokens ({example['micka_struct_token_count']}) ---")
print(example["micka_struct_tokens"][:20])  # Print the first 20 token IDs

print("--- Statistics ---")
print(f"Sentences: {example['sentence_count']}, Paragraphs: {example['paragraph_count']}")
print(f"Chars: {example['character_length']}, Words: {example['word_count']}")
```