Dataset: Alverciito / wikipedia_articles_es_tokenized
Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M<n<1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation, + 6
License: mit
Branch: main
File: wikipedia_articles_es_tokenized / src / tokenizer.py
Commit History
docstrings on tokenizer
242bd10 · Chiquitin committed on Jan 11

fix last version of sentence segmenter
a6416c8 · Chiquitin committed on Jan 11

upload data and src
9040523 · Chiquitin committed on Jan 11