Dataset: Alverciito/wikipedia_articles_es_tokenized
Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M < n < 1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation, + 6 more
License: MIT
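
If the repository stores its data in a format the standard `datasets` library understands, it can be loaded directly from the Hub. A minimal sketch follows; the card does not confirm the data layout, and the "train" split name is an assumption:

    # Minimal sketch: load the dataset with the standard `datasets` library.
    # Whether this works depends on the repo's data layout (not confirmed by
    # the card); the "train" split name is an assumption.
    from datasets import load_dataset

    ds = load_dataset("Alverciito/wikipedia_articles_es_tokenized", split="train")
    print(ds)     # schema and row count
    print(ds[0])  # first record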
main / wikipedia_articles_es_tokenized / src (30.2 kB, 2 contributors, 3 commits)
Latest commit: 242bd10, "docstrings on tokenizer" by Chiquitin, 3 months ago
File                  Size       Last commit message       Updated
__init__.py           759 Bytes  upload data and src       3 months ago
config.py             984 Bytes  upload data and src       3 months ago
dataset.py            7.96 kB    upload data and src       3 months ago
tokenized_dataset.py  9.09 kB    upload data and src       3 months ago
tokenizer.py          11.4 kB    docstrings on tokenizer   3 months ago
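
Because the repository ships its helper modules under src/ (tokenizer.py, dataset.py, and so on) alongside the data, downloading the full repository snapshot retrieves both. A minimal sketch using the huggingface_hub client; the returned cache path varies by machine:

    # Minimal sketch: fetch the entire dataset repository, including the
    # src/ helper modules, via huggingface_hub's snapshot_download.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="Alverciito/wikipedia_articles_es_tokenized",
        repo_type="dataset",
    )
    print(local_dir)  # local cache path containing src/ and the data files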