Dataset: Alverciito/wikipedia_articles_es_tokenized
Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M < n < 1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation (+ 6 more)
License: MIT
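
The full repository (about 16.1 GB) can be mirrored locally with the huggingface_hub library. A minimal sketch, assuming you want every file; snapshot_download is the standard Hub download helper, and the local_dir value below is a hypothetical choice, not something the dataset itself documents.

    from huggingface_hub import snapshot_download

    # Mirror the dataset repository locally. repo_type="dataset" is needed
    # because this is a dataset repo, not a model repo.
    local_path = snapshot_download(
        repo_id="Alverciito/wikipedia_articles_es_tokenized",
        repo_type="dataset",
        local_dir="wikipedia_articles_es_tokenized",  # hypothetical target folder
    )
    print("Repository mirrored to:", local_path)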
Branch: main
Repository: wikipedia_articles_es_tokenized · 16.1 GB · 2 contributors
History: 10 commits
Latest commit: update readme metadata · Alverciito · 6a3f9bb (verified) · about 1 month ago

bin/                          Upload tokenizer_32768.json   about 1 month ago
src/                          docstrings on tokenizer       about 1 month ago
tokens-A000-segmentation/     upload data and src           about 1 month ago
tokens-A001-segmentation/     upload data and src           about 1 month ago
tokens-A002-segmentation/     upload data and src           about 1 month ago
.gitattributes    (2.46 kB)   initial commit                about 1 month ago
README.md         (8.53 kB)   update readme metadata        about 1 month ago
requirements.txt  (79 Bytes)  upload data and src           about 1 month ago
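
The bin entry above suggests that the subword tokenizer used to produce the token shards ships with the dataset as tokenizer_32768.json. A minimal sketch for loading it, assuming the file sits at bin/tokenizer_32768.json (inferred from the folder name and its commit message, not confirmed on this page) and that it is a standard tokenizers-library JSON file:

    from huggingface_hub import hf_hub_download
    from tokenizers import Tokenizer

    # Fetch just the tokenizer file from the dataset repo.
    tokenizer_path = hf_hub_download(
        repo_id="Alverciito/wikipedia_articles_es_tokenized",
        repo_type="dataset",
        filename="bin/tokenizer_32768.json",  # assumed location inside the repo
    )

    tokenizer = Tokenizer.from_file(tokenizer_path)
    print(tokenizer.get_vocab_size())  # the file name hints at a 32,768-entry vocabulary

    # Round-trip a sample sentence through the tokenizer.
    encoding = tokenizer.encode("La Wikipedia en español es una enciclopedia libre.")
    print(encoding.tokens)  # subword pieces
    print(encoding.ids)     # integer token ids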