Dataset: Alverciito/wikipedia_articles_es_tokenized

Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M < n < 1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation
License:
wikipedia_articles_es_tokenized / src
30.2 kB · 2 contributors · History: 3 commits
Latest commit: 242bd10 "docstrings on tokenizer" by Chiquitin, about 1 month ago
  • __init__.py (759 Bytes) · upload data and src · about 1 month ago
  • config.py (984 Bytes) · upload data and src · about 1 month ago
  • dataset.py (7.96 kB) · upload data and src · about 1 month ago
  • tokenized_dataset.py (9.09 kB) · upload data and src · about 1 month ago
  • tokenizer.py (11.4 kB) · docstrings on tokenizer · about 1 month ago
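The dataset is tagged for sentence segmentation and the listing above includes a `tokenizer.py`, whose implementation is not shown here. As a hedged illustration only (not the repo's actual code), a minimal regex-based Spanish sentence splitter could look like this; the function name `split_sentences` and the punctuation heuristics are assumptions for the sketch:

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive sketch: split after sentence-final punctuation (. ! ? ...)
    # when followed by whitespace and a Spanish sentence opener
    # (uppercase letter, inverted question/exclamation mark).
    # NOT the repo's tokenizer.py; real segmenters must handle
    # abbreviations, quotes, and numbers.
    parts = re.split(r"(?<=[.!?…])\s+(?=[A-ZÁÉÍÓÚÑ¿¡])", text)
    return [p.strip() for p in parts if p.strip()]

print(split_sentences("¿Cómo estás? Bien, gracias. Hasta luego."))
```

A rule-based split like this is only a baseline; a tokenizer intended for a 100M+ token Wikipedia corpus would typically also normalize whitespace and handle edge cases such as "Sr." or "etc." before splitting.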