Dataset Viewer issue: TooBigContentError
The dataset viewer is not working for the speechless_clean and speechless_noisy subsets.
Could it be fixed somehow without reducing the size of the audio.* columns?
Error details:
Error code: TooBigContentError
It is sometimes due to the column metadata being too big. Indeed, the dataset viewer API returns the column metadata along with the rows, and we have a threshold on the response size. It's generally caused by ClassLabel columns with a lot of classes. More details here: https://github.com/huggingface/dataset-viewer/issues/2215. Do you think that could be the case for your dataset? (I don't have access to the gated dataset; I filed a request.)
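If you want to check, here is a minimal sketch that lists any ClassLabel features and their class counts (the repo id below is a placeholder, and a token is needed since the dataset is gated):

```python
# Minimal sketch: load only the dataset metadata (no audio data) and
# look for ClassLabel features with many classes, which can inflate
# the size of the viewer's API responses.
from datasets import ClassLabel, load_dataset_builder

builder = load_dataset_builder("my-org/my-audio-dataset")  # placeholder repo id
for name, feature in builder.info.features.items():
    if isinstance(feature, ClassLabel):
        print(f"{name}: {feature.num_classes} classes")
```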
Hi @severo, thanks for your answer. I reviewed the pending access requests, and your username was not among them, so I added access directly using your username.
To be more precise about the problem encountered with the dataset viewer:
- For the speech_clean subset, the dataset displays fine for page 1, but any other page shows: "The dataset viewer is not available for this split. Rows from parquet row groups are too big to be read: 470.70 MiB (max=286.10 MiB) Error code: TooBigContentError".
- For the speech_noisy subset, the problem is exactly the same: page 1 is fine, but any other page shows: "The dataset viewer is not available for this split. Rows from parquet row groups are too big to be read: 408.69 MiB (max=286.10 MiB) Error code: TooBigContentError".
- For the speechless_clean and speechless_noisy subsets, even the first page does not display: "The dataset viewer is not available for this split. Rows from parquet row groups are too big to be read: 3.22 GiB (max=286.10 MiB) Error code: TooBigContentError".
It is odd that page 1 displays well for the speech_clean and speech_noisy subsets, since all other pages should contain approximately the same amount of data per row group as page 1.
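For reference, the row-group sizes can be inspected directly from the parquet metadata. A minimal sketch, assuming one of the auto-converted parquet files has been downloaded locally (the file name below is a placeholder):

```python
# Minimal sketch: print the number of rows and the uncompressed byte
# size of each row group, which is what the viewer's limit applies to.
import pyarrow.parquet as pq

pf = pq.ParquetFile("speech_clean-train-0000.parquet")  # placeholder path
for i in range(pf.num_row_groups):
    rg = pf.metadata.row_group(i)
    print(f"row group {i}: {rg.num_rows} rows, {rg.total_byte_size / 2**20:.1f} MiB")
```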
Thanks again for your support,
Best,
Eric
OK, interesting. There are 6 audio columns and the viewer shows 100 rows per page, so every page requires the creation of 600 audio files. The first page is pre-computed and cached, while the following ones are computed on the fly, which might partly explain the difference in behavior. But since you say that "all other pages should contain approximately the same amount of data per row group as page 1", I think we have an inconsistency between how we limit the size of the first page and of the following ones.
Anyway, we are clearly limited in processing audio data of this size at the moment. Adding this to https://github.com/huggingface/dataset-viewer/issues/2215.
@severo @lhoestq, I'd like to revive this thread, as the dataset viewer issue is still unresolved. It's quite unfortunate for a dataset that has been downloaded over 95,000 times this month, since the dataset viewer is a really cool feature of the HF Hub.
I’m wondering if the problem could be related to an incorrect size estimation, since there are several inconsistencies:
- On the main page, the dataset size is shown as 41 GB (which, as I understand it, corresponds to an estimate based on only a subset of the parquet files, so that part might be normal).
- On the Files and Versions tab, the dataset is listed as 172 GB, which is roughly the correct size.
- ⚠️ However, on the Settings tab, it shows a huge 3.27 TB of Large File Storage (LFS) usage (see screenshot below).
I really don’t understand why more than 3 TB of data are being counted there, especially since this also impacts the Public Repositories storage quota of our organization.
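As a sanity check, the actual size of the files on the main branch can be recomputed from the Hub API. A minimal sketch (the repo id below is a placeholder):

```python
# Minimal sketch: sum the file sizes reported by the Hub API for the
# main revision and compare with the figures shown in the UI. Note
# that this does not count other refs (e.g. refs/convert/parquet) or
# past revisions, which the LFS storage figure may include.
from huggingface_hub import HfApi

info = HfApi().dataset_info("my-org/my-audio-dataset", files_metadata=True)
total_bytes = sum(f.size or 0 for f in info.siblings)
print(f"total size on main: {total_bytes / 2**30:.2f} GiB")
```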
Do you have any insights on how this could be fixed on your side without risking breaking anything? This dataset is linked to a published paper and is used by multiple research teams worldwide, so I'd like to be cautious with any workaround. The issue doesn't prevent using the dataset itself, but it significantly affects our organization's storage (and your servers), and might also be indirectly linked to the dataset viewer issues.
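If a workaround does become necessary, one option might be to rewrite the offending parquet files with smaller row groups so that each group stays under the viewer's limit. A minimal, untested sketch (file names are placeholders):

```python
# Minimal, untested sketch: rewrite a parquet file with smaller row
# groups. With 6 large audio columns per row, only a few rows per
# group may be needed to stay under the viewer's ~286 MiB limit.
import pyarrow.parquet as pq

table = pq.read_table("speechless_clean-train-0000.parquet")  # placeholder
pq.write_table(
    table,
    "speechless_clean-train-0000-small-groups.parquet",  # placeholder
    row_group_size=10,  # rows per row group; tune to the actual row size
)
```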
Best regards,
Eric
