Dataset: Alverciito/wikipedia_articles_es_tokenized

Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M < n < 1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation
License:
wikipedia_articles_es_tokenized / tokens-A000-segmentation
11.7 GB · 2 contributors
History: 1 commit
Latest commit: Chiquitin, "upload data and src" (9040523, 4 months ago)
  • info.json · 919 Bytes · upload data and src · 4 months ago
  • x.npy · 7.78 GB · upload data and src · 4 months ago
  • x_mask.npy · 3.89 GB · upload data and src · 4 months ago
  • y.npy · 10.2 MB · upload data and src · 4 months ago
  • y_cand.npy · 10.2 MB · upload data and src · 4 months ago
  • y_mask.npy · 10.2 MB · upload data and src · 4 months ago
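Since x.npy alone is 7.78 GB, a practical way to work with these files is NumPy memory mapping, which reads array slices from disk on demand instead of loading everything into RAM. The sketch below is a hedged example: the actual shapes and dtypes of the arrays are not documented here (they presumably live in info.json), so the demo stands in small dummy files with an assumed one-row-per-sequence layout; only the file names come from the listing above.

```python
import os
import tempfile

import numpy as np


def load_segmentation_arrays(folder):
    """Memory-map every .npy file in the segmentation folder.

    mmap_mode="r" returns read-only np.memmap objects, so even the
    multi-GB arrays cost almost no RAM until slices are accessed.
    """
    names = ["x", "x_mask", "y", "y_cand", "y_mask"]
    return {n: np.load(os.path.join(folder, f"{n}.npy"), mmap_mode="r")
            for n in names}


# Demo with small stand-in files; shapes/dtypes here are assumptions,
# not the dataset's real layout.
tmp = tempfile.mkdtemp()
np.save(os.path.join(tmp, "x.npy"), np.zeros((4, 512), dtype=np.int32))
np.save(os.path.join(tmp, "x_mask.npy"), np.ones((4, 512), dtype=np.int8))
np.save(os.path.join(tmp, "y.npy"), np.zeros((4, 512), dtype=np.int8))
np.save(os.path.join(tmp, "y_cand.npy"), np.zeros((4, 512), dtype=np.int8))
np.save(os.path.join(tmp, "y_mask.npy"), np.ones((4, 512), dtype=np.int8))

arrays = load_segmentation_arrays(tmp)
print(arrays["x"].shape)  # (4, 512)
```

To point this at the real data, the folder could first be fetched with `huggingface_hub.snapshot_download("Alverciito/wikipedia_articles_es_tokenized")` and the `tokens-A000-segmentation` subdirectory passed to `load_segmentation_arrays`.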