
openeurollm/tokenizer-128k

Tags: tokenizer, sentencepiece, bpe, multilingual, european-languages
tokenizer-128k (2.39 MB) · 1 contributor · History: 3 commits
Latest commit by timpal0l: "Add model card with description, usage, and evaluation results" (a89eaa2, verified, about 14 hours ago)
  • .gitattributes (1.52 kB): initial commit, about 16 hours ago
  • README.md (4.3 kB): Add model card with description, usage, and evaluation results, about 14 hours ago
  • added_tokens.json (22 Bytes): Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB), about 16 hours ago
  • special_tokens_map.json (16.6 kB): Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB), about 16 hours ago
  • tokenizer.model (2.35 MB): Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB), about 16 hours ago
  • tokenizer_config.json (22.5 kB): Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB), about 16 hours ago