Dataset: Alverciito/wikipedia_articles_es_tokenized
Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M < n < 1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation (+ 6 more)
License: MIT
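
Since this is published as a Hugging Face dataset repo, it can in principle be pulled with the `datasets` library. Below is a minimal sketch, assuming the repo exposes its data in a format `load_dataset` can resolve automatically; the "train" split name and the row access are illustrative and not confirmed by the file listing below.

```python
# Minimal sketch: load the dataset with the `datasets` library.
# Assumes the repo exposes data files in a format load_dataset can
# auto-detect; the "train" split and row indexing are illustrative.
from datasets import load_dataset

ds = load_dataset("Alverciito/wikipedia_articles_es_tokenized", split="train")
print(ds)     # schema and number of rows
print(ds[0])  # first example
```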
Files and versions (branch: main)

wikipedia_articles_es_tokenized / bin (1.97 MB)
2 contributors | History: 2 commits
Latest commit: b0af3fe (verified) by Alverciito, "Upload tokenizer_32768.json", about 1 month ago
File                   Size        Last commit message                   Updated
config.py              835 Bytes   upload tokenizer script and configs   about 1 month ago
tokenizer_32768.json   1.97 MB     Upload tokenizer_32768.json           about 1 month ago
train_bpe.py           1.99 kB     upload tokenizer script and configs   about 1 month ago
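
tokenizer_32768.json, at 1.97 MB, is the serialized tokenizer itself. A minimal sketch of how it could be fetched and used, assuming it is a standard Hugging Face `tokenizers` JSON sitting at the repo root (the exact path inside the repo is not certain from the listing); the sample sentence is illustrative.

```python
# Minimal sketch: download tokenizer_32768.json from the dataset repo and
# load it with the `tokenizers` library. Assumes the file is a standard
# Hugging Face `tokenizers` JSON stored at the repo root.
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

path = hf_hub_download(
    repo_id="Alverciito/wikipedia_articles_es_tokenized",
    filename="tokenizer_32768.json",
    repo_type="dataset",
)
tok = Tokenizer.from_file(path)

enc = tok.encode("La Wikipedia en español es una enciclopedia libre.")
print(tok.get_vocab_size())  # expected to be 32768, per the filename
print(enc.tokens)            # subword strings
print(enc.ids)               # corresponding token ids
```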
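train_bpe.py is, per its name and commit message, the script that produced the tokenizer. Its actual contents are not shown on this page, so the following is a generic, hypothetical sketch of BPE training with the `tokenizers` library: the corpus path, pre-tokenizer, and special tokens are all assumptions, and only the 32768 vocabulary size is suggested by the output filename.

```python
# Hypothetical sketch of what a script like train_bpe.py might do.
# corpus_es.txt is a placeholder for the Spanish Wikipedia text;
# pre-tokenizer and special tokens are assumptions.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(
    vocab_size=32768,  # matches the tokenizer_32768.json filename
    special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"],
)

tokenizer.train(files=["corpus_es.txt"], trainer=trainer)
tokenizer.save("tokenizer_32768.json")
```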