---
license: mit
language:
  - swe
tags:
  - tokenizer
  - bpe
  - flexitok
  - fineweb2
---

# Byte-Level BPE Tokenizer: swe_Latn (128K)

A byte-level BPE tokenizer trained on swe_Latn (Swedish, Latin script) data from Fineweb-2-HQ.

## Training Details

| Parameter         | Value                                    |
| ----------------- | ---------------------------------------- |
| Algorithm         | Byte-Level BPE                           |
| Language          | swe_Latn                                 |
| Target Vocab Size | 128,000                                  |
| Final Vocab Size  | 0                                        |
| Pre-tokenizer     | ByteLevel                                |
| Normalizer        | NFC                                      |
| Special Tokens    | `<s>`, `</s>`, `<pad>`, `<unk>`          |
| Training Shards   | 2                                        |
| Data Source       | `/scratch/gsa/data/flexitok//swe_Latn/`  |
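
For reference, a tokenizer with the configuration above can be built with the `tokenizers` library. This is a minimal sketch, not the actual training script; the shard file names are placeholders.

```python
from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers

# Byte-level BPE with NFC normalization, matching the table above.
tokenizer = Tokenizer(models.BPE())
tokenizer.normalizer = normalizers.NFC()
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=128_000,
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    # Seed the vocabulary with all 256 byte symbols so any input is encodable.
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)

# Placeholder shard paths; the README lists 2 training shards.
tokenizer.train(files=["shard_0.txt", "shard_1.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")
```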

## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<repo_id>")
tokens = tokenizer.encode("Hello, world!")
```
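
Because the tokenizer is byte-level, any input string round-trips losslessly through encode and decode. A quick check, using an illustrative Swedish sentence:

```python
ids = tokenizer.encode("Hej, världen!")
print(tokenizer.convert_ids_to_tokens(ids))          # byte-level BPE pieces
print(tokenizer.decode(ids, skip_special_tokens=True))  # "Hej, världen!"
```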

## Files

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules
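
`tokenizer.json` can also be loaded directly with the `tokenizers` library, without going through transformers. A minimal sketch, assuming the file is in the current directory:

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("Hej, världen!")
print(enc.tokens)  # byte-level token strings
print(enc.ids)     # corresponding vocabulary ids
```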