Dataset: Alverciito / wikipedia_articles_es_tokenized
Tasks: Text Classification, Sentence Similarity, Token Classification
Languages: Spanish
Size: 100M < n < 1B
Tags: wikipedia, spanish, es, segmentation, sentence-segmentation, document-segmentation, + 6
License: mit
wikipedia_articles_es_tokenized / tokens-A000-segmentation (branch: main)
11.7 GB · 2 contributors · History: 1 commit
Last commit: "upload data and src" by Chiquitin (9040523), 4 months ago
Files (all added in the "upload data and src" commit, 4 months ago):

info.json     919 Bytes
x.npy         7.78 GB
x_mask.npy    3.89 GB
y.npy         10.2 MB
y_cand.npy    10.2 MB
y_mask.npy    10.2 MB
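
The card carries no usage snippet, so below is a minimal sketch of how one might fetch these files with huggingface_hub and memory-map the multi-gigabyte arrays with NumPy rather than loading them into RAM. The repo ID, subfolder, and filenames come from the listing above; the interpretation of x.npy as token IDs and of the y_* arrays as segmentation labels is an assumption suggested only by the dataset's tags, not documented on the page.

```python
import json

import numpy as np
from huggingface_hub import hf_hub_download

REPO_ID = "Alverciito/wikipedia_articles_es_tokenized"
SUBDIR = "tokens-A000-segmentation"

# Fetch the small metadata file first; its schema is not documented on the
# card, so inspect it before assuming anything about the arrays.
info_path = hf_hub_download(REPO_ID, f"{SUBDIR}/info.json", repo_type="dataset")
with open(info_path) as f:
    info = json.load(f)
print(info)

# x.npy is 7.78 GB, so memory-map it instead of reading it into memory.
# Assumption: it holds token IDs, with x_mask.npy as a matching mask.
x_path = hf_hub_download(REPO_ID, f"{SUBDIR}/x.npy", repo_type="dataset")
x = np.load(x_path, mmap_mode="r")
print(x.shape, x.dtype)

# The label-side arrays are small (~10 MB each) and can be loaded normally.
y_path = hf_hub_download(REPO_ID, f"{SUBDIR}/y.npy", repo_type="dataset")
y = np.load(y_path)
print(y.shape, y.dtype)
```

Note the size asymmetry in the listing: x.npy (7.78 GB) is roughly half the size of x_mask.npy (3.89 GB) times two, while the three y_* files are three orders of magnitude smaller, which is consistent with dense token inputs paired with compact per-example labels; checking info.json and the array headers is the only way to confirm this from the page itself.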