Tasks: Text Retrieval
Modalities: Text
Formats: parquet
Sub-tasks: document-retrieval
Languages: Bengali
Size: 1K - 10K
Tags: text-retrieval
License:
NonMatchingSplitsSizesError when trying to load using load_dataset()
#1
by turjo4nis - opened
Code used to load the dataset:
from datasets import load_dataset
test_corpus = load_dataset("carlfeynman/Bharat_NanoDBPedia_bn", "corpus", split="train")
Error:
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=5655161, num_examples=6045, shard_lengths=None, dataset_name='bharat_nano_db_pedia_bn')}]
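For context, the error above is raised when the split sizes recorded in the repository's metadata (here `num_bytes=0, num_examples=0`, i.e. stale or missing) disagree with the sizes of the data actually downloaded. A minimal sketch of that kind of check, with illustrative names rather than the library's real internals:

```python
from dataclasses import dataclass

# Hypothetical simplification of the post-download split verification:
# "expected" sizes come from the repo's metadata, "recorded" sizes from
# the files actually generated. Class and function names are illustrative.

@dataclass
class SplitInfo:
    name: str
    num_bytes: int
    num_examples: int

class NonMatchingSplitsSizesError(ValueError):
    pass

def verify_splits(expected, recorded, verification_mode="all_checks"):
    if verification_mode == "no_checks":
        return  # skip verification entirely, as the workaround below does
    mismatches = [
        {"expected": expected[name], "recorded": recorded[name]}
        for name in expected
        if (expected[name].num_bytes != recorded[name].num_bytes
            or expected[name].num_examples != recorded[name].num_examples)
    ]
    if mismatches:
        raise NonMatchingSplitsSizesError(str(mismatches))

expected = {"train": SplitInfo("train", 0, 0)}           # stale metadata
recorded = {"train": SplitInfo("train", 5655161, 6045)}  # actual data

verify_splits(expected, recorded, verification_mode="no_checks")  # passes
# verify_splits(expected, recorded)  # would raise NonMatchingSplitsSizesError
```

Disabling verification only skips this consistency check; the data itself is loaded unchanged.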
hi @turjo4nis
Please pass verification_mode="no_checks" to switch off verification (some metadata check mismatch errors are coming up; will fix this in the next version):
from datasets import load_dataset
# for corpus
ds = load_dataset("carlfeynman/Bharat_NanoDBPedia_bn", "corpus", split="train", verification_mode="no_checks")
# queries
ds = load_dataset("carlfeynman/Bharat_NanoDBPedia_bn", "queries", split="train", verification_mode="no_checks")
# qrels (query-document relevance judgments)
ds = load_dataset("carlfeynman/Bharat_NanoDBPedia_bn", "qrels", split="train", verification_mode="no_checks")
thanks!
@carlfeynman thanks!
turjo4nis changed discussion status to closed