---
license: mit
task_categories:
  - text-generation
language:
  - tr
tags:
  - legal
size_categories:
  - 100K<n<1M
---

# SpeedLM-Dataset-TR: Cleaned Turkish Wikipedia for Minimalist LLMs

This dataset is a high-quality, pre-processed Turkish Wikipedia corpus specifically curated for training SpeedLM (Matmul-free, Hash-based Language Models) and other lightweight architectures.

## 📌 Dataset Summary

- **Source:** Turkish Wikipedia (cleaned)
- **Format:** Line-separated raw UTF-8 text
- **Preprocessing:** Stripped JSONL metadata, normalized whitespace, and preserved Turkish character integrity
- **Target Model:** SpeedLM v0.1 (byte-level ternary model)
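Because the corpus is plain UTF-8 with one paragraph per line, it can be consumed without any dataset library. A minimal sketch (the default filename matches `local_file` in the training script's `CONFIG` below):

```python
from typing import Iterator

def iter_paragraphs(path: str = "kayra_training_raw.txt") -> Iterator[str]:
    """Yield one non-empty Wikipedia paragraph per line of the raw text file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank separator lines
                yield line
```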

## 🎯 Purpose

Traditional datasets are often cluttered with JSON structures and metadata. This dataset provides a pure text stream optimized for:

1. **Byte-level training:** No complex tokenizers needed.
2. **High-speed streaming:** Optimized for single-pass online learning.
3. **Hash-based contexts:** Uniform text distribution to minimize hash collisions in sparse architectures.
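Point 1 can be checked directly in Python: UTF-8 encoding already maps any Turkish text onto byte values 0–255, so the "vocabulary" is fixed at 256 with no tokenizer at all:

```python
# Byte-level view: the tokenizer is just UTF-8 encoding. Every Turkish
# character becomes one or more bytes in 0..255, so the vocabulary is
# fixed at 256 regardless of language.
text = "Türkçe: ğ, ş, ı, ö, ü, ç"
data = text.encode("utf-8")

assert all(0 <= b <= 255 for b in data)  # all values fit the 256-entry vocab
assert len(data) > len(text)             # Turkish letters take 2 bytes each
assert data.decode("utf-8") == text      # encoding is lossless
```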

## 🛠 Preprocessing Pipeline

The data was extracted from `wiki_cleaned4.jsonl` using a custom Python pipeline:

1. **Extraction:** Isolated the `output` field from each JSON line.
2. **Sanitization:** Removed null bytes and control characters that break byte-level models.
3. **Normalization:** Collapsed multiple spaces into a single space and ensured `\n` line-ending consistency.
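A minimal sketch of these three steps, assuming the field name `output` from step 1; the exact set of control characters the pipeline removed is an assumption here (everything below `\x20` except `\t` and `\n`, plus `\x7f`):

```python
import json
import re

def extract_clean(jsonl_line: str) -> str:
    """One JSONL record -> one cleaned paragraph (sketch of the 3-step pipeline)."""
    text = json.loads(jsonl_line)["output"]               # 1. extraction
    text = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", text)  # 2. drop null/control chars (keeps \t, \n)
    text = re.sub(r"[ \t]+", " ", text).strip()           # 3. collapse whitespace runs
    return text
```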

## 📊 Data Statistics

| Metric     | Value                            |
|------------|----------------------------------|
| Language   | Turkish (TR)                     |
| Encoding   | UTF-8                            |
| Vocab Size | 256 (byte-level)                 |
| Structure  | One Wikipedia paragraph per line |
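These properties are easy to verify on a downloaded copy. A quick sketch that scans the raw file and reports how many distinct byte values it contains (at most 256) and its total size; the filename matches `local_file` in the training script's `CONFIG`:

```python
from collections import Counter

def byte_stats(path: str) -> tuple:
    """Return (distinct byte values, total bytes) for a file, read in 1 MiB chunks."""
    counts = Counter()
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):
            counts.update(chunk)
    return len(counts), sum(counts.values())

# e.g. byte_stats("kayra_training_raw.txt") -> (distinct values <= 256, size in bytes)
```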

## 🚀 How to Use with SpeedLM

"""
SixFinger SpeedLM - Turkish Language Model Training
===================================================
Direct production training on Turkish dataset from HuggingFace
No test mode - full training with normalization
"""

import re
import time
from pathlib import Path

# ===== CONFIGURATION =====
CONFIG = {
    # model architecture
    'n_buckets': 500_000,
    'n_features': 1024,
    'context_sizes': [1, 2, 3, 4, 5, 8, 12],
    'batch_size': 512,
    
    # training params
    'chunk_size': 256_000,
    'num_epochs': 1,
    'lr': 0.01,
    
    # data source
    'data_url': 'https://huggingface.co/datasets/sixfingerdev/turkish-stratch-txt/resolve/main/kayra_training_raw.txt',
    'local_file': 'kayra_training_raw.txt',
    'output_model': 'turkish_speedlm_1024f.npz',
    
    # normalization
    'lowercase': True,
    'remove_punctuation': True,
    'remove_extra_spaces': True,
}


# ===== TEXT NORMALIZATION =====
def normalize_text(text: str) -> str:
    """
    normalize text for training
    removes case sensitivity and punctuation
    """
    # lowercase
    if CONFIG['lowercase']:
        text = text.lower()
    
    # remove punctuation except spaces and newlines
    if CONFIG['remove_punctuation']:
        # keep only alphanumeric, spaces, newlines
        text = re.sub(r'[^\w\s\n]', ' ', text)
    
    # normalize spaces
    if CONFIG['remove_extra_spaces']:
        # replace multiple spaces with single space
        text = re.sub(r' +', ' ', text)
        # replace multiple newlines with double newline
        text = re.sub(r'\n\n+', '\n\n', text)
    
    return text.strip()


# ===== DATA DOWNLOAD =====
def download_data():
    """
    download training data from huggingface if not exists
    """
    local_path = Path(CONFIG['local_file'])
    
    if local_path.exists():
        print(f"data file found: {local_path}")
        return local_path
    
    print(f"downloading from huggingface...")
    print(f"url: {CONFIG['data_url']}")
    
    try:
        import urllib.request
        urllib.request.urlretrieve(CONFIG['data_url'], local_path)
        print(f"download complete: {local_path}")
        return local_path
    
    except Exception as e:
        print(f"download failed: {e}")
        print(f"please manually download to: {local_path}")
        return None


# ===== PREPROCESSING =====
def preprocess_file(input_path: Path, output_path: Path):
    """
    preprocess training file with normalization
    """
    print(f"\npreprocessing data...")
    print(f"input: {input_path}")
    print(f"output: {output_path}")
    
    # read file
    with open(input_path, 'r', encoding='utf-8') as f:
        text = f.read()
    
    original_size = len(text)
    print(f"original size: {original_size:,} chars")
    
    # normalize
    normalized = normalize_text(text)
    
    normalized_size = len(normalized)
    print(f"normalized size: {normalized_size:,} chars")
    print(f"reduction: {(1 - normalized_size/original_size)*100:.1f}%")
    
    # save preprocessed
    with open(output_path, 'w', encoding='utf-8') as f:
        f.write(normalized)
    
    print(f"preprocessing complete")
    return output_path


# ===== MAIN TRAINING =====
def main():
    print("=" * 70)
    print("SIXFINGER SPEEDLM - TURKISH LANGUAGE MODEL TRAINING")
    print("=" * 70)
    
    # import model
    try:
        from sixfinger.transformers import SpeedLM
        import sixfinger
        print(f"\nsixfinger version: {sixfinger.__version__}")
    except ImportError:
        print("error: sixfinger not installed")
        print("install: pip install sixfinger[transformers]")
        return
    
    # download data if needed
    print("\n" + "=" * 70)
    print("STEP 1: DATA ACQUISITION")
    print("=" * 70)
    
    data_path = download_data()
    if not data_path:
        return
    
    file_size = data_path.stat().st_size / 1024 / 1024
    print(f"file size: {file_size:.2f} mb")
    
    # preprocess data
    print("\n" + "=" * 70)
    print("STEP 2: TEXT NORMALIZATION")
    print("=" * 70)
    print(f"lowercase: {CONFIG['lowercase']}")
    print(f"remove punctuation: {CONFIG['remove_punctuation']}")
    print(f"normalize spaces: {CONFIG['remove_extra_spaces']}")
    
    preprocessed_path = Path('preprocessed_' + CONFIG['local_file'])
    preprocessed_path = preprocess_file(data_path, preprocessed_path)
    
    # create model
    print("\n" + "=" * 70)
    print("STEP 3: MODEL INITIALIZATION")
    print("=" * 70)
    
    model = SpeedLM(
        n_buckets=CONFIG['n_buckets'],
        n_features=CONFIG['n_features'],
        context_sizes=CONFIG['context_sizes'],
        batch_size=CONFIG.get('batch_size'),
        lr=CONFIG['lr'],
        verbose=True
    )
    
    print(f"\nmodel memory: {model._memory_mb():.1f} mb")
    print(f"total parameters: {CONFIG['n_buckets'] * CONFIG['n_features'] + CONFIG['n_features'] * 256:,}")
    
    # train model
    print("\n" + "=" * 70)
    print("STEP 4: TRAINING START")
    print("=" * 70)
    
    training_start = time.time()
    
    try:
        stats = model.train_file(
            filepath=str(preprocessed_path),
            chunk_size=CONFIG['chunk_size'],
            num_epochs=CONFIG['num_epochs']
        )
        
        training_time = time.time() - training_start
        
        # training results
        print("\n" + "=" * 70)
        print("TRAINING COMPLETED")
        print("=" * 70)
        
        print(f"\ntotal tokens: {stats['tokens']:,}")
        print(f"final loss: {stats['loss']:.4f}")
        print(f"training time: {training_time/60:.2f} minutes")
        print(f"speed: {stats['speed_kb_s']:.1f} kb/s")
        
        # estimate 1gb training time
        gb_estimate = (1024 * 1024 * 1024) / (stats['speed_kb_s'] * 1024) / 60
        print(f"\n1gb training estimate: {gb_estimate:.1f} minutes")
        
        if gb_estimate < 10:
            print("performance: excellent (sub 10 min)")
        elif gb_estimate < 30:
            print("performance: very good (sub 30 min)")
        else:
            print("performance: good")
        
    except KeyboardInterrupt:
        print("\n\ntraining interrupted by user")
        training_time = time.time() - training_start
        print(f"partial training time: {training_time/60:.2f} minutes")
        
    except Exception as e:
        print(f"\ntraining error: {e}")
        import traceback
        traceback.print_exc()
        return
    
    # save model
    print("\n" + "=" * 70)
    print("STEP 5: SAVE MODEL")
    print("=" * 70)
    
    model_path = CONFIG['output_model']
    model.save(model_path)
    
    model_size = Path(model_path).stat().st_size / 1024 / 1024
    print(f"model saved: {model_path}")
    print(f"model size: {model_size:.2f} mb")
    
    # generation test
    print("\n" + "=" * 70)
    print("STEP 6: GENERATION TEST")
    print("=" * 70)
    
    test_prompts = [
        "turkiye",
        "yapay zeka",
        "bilgisayar",
        "gelecek",
        "teknoloji"
    ]
    
    print("\ngenerated samples:\n")
    
    for prompt in test_prompts:
        try:
            output = model.generate(
                prompt=prompt.encode('utf-8'),
                length=80,
                temperature=0.7
            )
            
            generated = output.decode('utf-8', errors='ignore')
            display = generated[:120] + "..." if len(generated) > 120 else generated
            
            print(f"prompt: {prompt}")
            print(f"output: {display}\n")
            
        except Exception as e:
            print(f"generation failed for '{prompt}': {e}\n")
    
    # final summary
    print("\n" + "=" * 70)
    print("TRAINING SUMMARY")
    print("=" * 70)
    
    print(f"""
dataset: turkish language corpus
source: huggingface sixfingerdev/turkish-stratch-txt
file size: {file_size:.2f} mb
processed tokens: {stats['tokens']:,}

model architecture:
  buckets: {CONFIG['n_buckets']:,}
  features: {CONFIG['n_features']}
  context: {CONFIG['context_sizes']}
  parameters: {CONFIG['n_buckets'] * CONFIG['n_features'] + CONFIG['n_features'] * 256:,}

training performance:
  time: {training_time/60:.2f} minutes
  speed: {stats['speed_kb_s']:.1f} kb/s
  final loss: {stats['loss']:.4f}
  
output:
  model file: {model_path}
  model size: {model_size:.2f} mb

status: training complete
""")
    
    print("=" * 70)
    print("DONE")
    print("=" * 70)
    
    print(f"\nto use trained model:")
    print(f"  from sixfinger.transformers import SpeedLM")
    print(f"  model = SpeedLM.from_pretrained('{model_path}')")
    print(f"  output = model.generate(b'your prompt', length=100)")


if __name__ == '__main__':
    main()