Dataset: Alverciito/wikipedia_articles_es_tokenized

Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M < n < 1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation
License:
wikipedia_articles_es_tokenized / bin (1.97 MB, 2 contributors, 2 commits)
Latest commit: b0af3fe (verified), "Upload tokenizer_32768.json", by Alverciito, about 1 month ago
  • config.py (835 Bytes) — "upload tokenizer script and configs", about 1 month ago
  • tokenizer_32768.json (1.97 MB) — "Upload tokenizer_32768.json", about 1 month ago
  • train_bpe.py (1.99 kB) — "upload tokenizer script and configs", about 1 month ago
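The repository pairs a training script (train_bpe.py) with its output (tokenizer_32768.json, whose name suggests a 32,768-entry vocabulary). The actual contents of train_bpe.py are not shown here; as a rough illustration of what BPE training involves, the sketch below implements the classic merge loop (count adjacent symbol pairs, repeatedly merge the most frequent pair) in plain Python on a tiny hypothetical Spanish word-frequency dict. All names and the corpus are assumptions for illustration, not the repository's code.

```python
# Toy byte-pair-encoding trainer: a simplified sketch of what a script
# like train_bpe.py presumably does. A real run would use a large corpus
# and thousands of merges (e.g. toward a 32,768-entry vocabulary).
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace each occurrence of the pair with its merged symbol.

    Simplified string replace; production implementations match on
    symbol boundaries instead.
    """
    merged = " ".join(pair)
    joined = "".join(pair)
    return {word.replace(merged, joined): freq for word, freq in vocab.items()}

def train_bpe(word_freqs, num_merges):
    """Learn an ordered list of BPE merges from a word-frequency dict."""
    # Represent each word as space-separated characters plus an end marker.
    vocab = {" ".join(w) + " </w>": f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges

# Hypothetical miniature Spanish corpus, for illustration only.
corpus = {"casa": 5, "casas": 3, "cosa": 2}
merges = train_bpe(corpus, num_merges=4)
print(merges)
```

The merge list, applied in order, is what a serialized tokenizer file like tokenizer_32768.json encodes (alongside the vocabulary and normalization settings); in practice one would use the Hugging Face `tokenizers` library rather than hand-rolling the loop.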