Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: int64, updated_at: int64, due_on: int64, closed_at: int64> to null
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1964, in array_cast
                  raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
              TypeError: Couldn't cast array of type struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: int64, updated_at: int64, due_on: int64, closed_at: int64> to null
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
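
The cast failure above is consistent with schema inference going wrong on a sparse column: the preview schema below types `milestone` as null (it is null in every preview row), so a later batch in which `milestone` is an actual struct can no longer be cast. A minimal sketch of the same failure, using the internal `table_cast` helper that appears in the traceback; the column name and struct fields here are illustrative, not the full type from the error:

```python
import pyarrow as pa
from datasets.table import table_cast

# Schema as inferred from rows where `milestone` was always null.
schema = pa.schema({"milestone": pa.null()})

# A later batch in which `milestone` is a real struct.
table = pa.table({"milestone": [{"title": "v1.0", "number": 1}]})

# Raises TypeError: Couldn't cast array of type struct<title: string, number: int64> to null
table_cast(table, schema)
```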


Columns (name: type):
url: string
repository_url: string
labels_url: string
comments_url: string
events_url: string
html_url: string
id: int64
node_id: string
number: int64
title: string
user: dict
labels: sequence
state: string
locked: bool
assignee: null
assignees: sequence
milestone: null
comments: sequence
created_at: int64
updated_at: int64
closed_at: int64
author_association: string
active_lock_reason: null
body: string
reactions: dict
timeline_url: string
performed_via_github_app: null
state_reason: string
draft: null
pull_request: null
is_pull_request: bool

Preview rows:

url: https://api.github.com/repos/huggingface/datasets/issues/5230
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5230/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5230/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5230/events
html_url: https://github.com/huggingface/datasets/issues/5230
id: 1,445,507,580
node_id: I_kwDODunzps5WKLH8
number: 5,230
title: dataclasses error when importing the library in python 3.11
user: { "login": "yonikremer", "id": 76044840, "node_id": "MDQ6VXNlcjc2MDQ0ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/76044840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yonikremer", "html_url": "https://github.com/yonikremer", "followers_url": "https://api.github.com/users/yonikremer/followers", "following_url": "https://api.github.com/users/yonikremer/following{/other_user}", "gists_url": "https://api.github.com/users/yonikremer/gists{/gist_id}", "starred_url": "https://api.github.com/users/yonikremer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yonikremer/subscriptions", "organizations_url": "https://api.github.com/users/yonikremer/orgs", "repos_url": "https://api.github.com/users/yonikremer/repos", "events_url": "https://api.github.com/users/yonikremer/events{/privacy}", "received_events_url": "https://api.github.com/users/yonikremer/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,668,174,829,000
updated_at: 1,668,174,829,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

When I import datasets using python 3.11, the dataclasses standard library raises the following error:
`ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory`

When I tried to import the library using the following jupyter notebook:

```
%%bash
# create python 3.11 conda env
conda create --yes --quiet -n myenv -c conda-forge python=3.11
# activate is
source activate myenv
# install pyarrow
/opt/conda/envs/myenv/bin/python -m pip install --quiet --extra-index-url https://pypi.fury.io/arrow-nightlies/ \
    --prefer-binary --pre pyarrow
# install datasets
/opt/conda/envs/myenv/bin/python -m pip install --quiet datasets
```

```
# create a python file that only imports datasets
with open("import_datasets.py", 'w') as f:
    f.write("import datasets")
# run it with the env
!/opt/conda/envs/myenv/bin/python import_datasets.py
```

I get the following error:

```
Traceback (most recent call last):
  File "/kaggle/working/import_datasets.py", line 1, in <module>
    import datasets
  File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/__init__.py", line 45, in <module>
    from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
  File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/builder.py", line 91, in <module>
    @dataclass
     ^^^^^^^^^
  File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1221, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1211, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 959, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 816, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory
```

This is probably due to one of the following changes in the [dataclasses standard library](https://docs.python.org/3/library/dataclasses.html) in version 3.11:

1. Changed in version 3.11: Instead of looking for and disallowing objects of type list, dict, or set, unhashable objects are now not allowed as default values. Unhashability is used to approximate mutability.
2. fields may optionally specify a default value, using normal Python syntax:

```
@dataclass
class C:
    a: int       # 'a' has no default value
    b: int = 0   # assign a default value for 'b'

In this example, both a and b will be included in the added __init__() method, which will be defined as:

def __init__(self, a: int, b: int = 0):
```

3. Changed in version 3.11: If a field name is already included in the __slots__ of a base class, it will not be included in the generated __slots__ to prevent [overriding them](https://docs.python.org/3/reference/datamodel.html#datamodel-note-slots). Therefore, do not use __slots__ to retrieve the field names of a dataclass. Use [fields()](https://docs.python.org/3/library/dataclasses.html#dataclasses.fields) instead. To be able to determine inherited slots, base class __slots__ may be any iterable, but not an iterator.
4. weakref_slot: If true (the default is False), add a slot named "__weakref__", which is required to make an instance weakref-able. It is an error to specify weakref_slot=True without also specifying slots=True. [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) will be raised if a field without a default value follows a field with a default value. This is true whether this occurs in a single class, or as a result of class inheritance.

### Steps to reproduce the bug

Steps to reproduce the behavior:
1. go to [the notebook in kaggle](https://www.kaggle.com/yonikremer/repreducing-issue)
2. run both of the cells

### Expected behavior

I'm expecting no issues. This error should not occur.

### Environment info

kaggle kernels, with default settings: pin to original environment, no accelerator.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5230/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5230/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
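
The Python 3.11 rule the issue above runs into is easy to demonstrate in isolation. A minimal sketch; `Version` below is a hypothetical stand-in for `datasets.utils.version.Version` (unhashable, which 3.11 uses as a proxy for mutable), not the real class:

```python
from dataclasses import dataclass, field

class Version:
    """Stand-in: unhashable instances are treated as mutable defaults by Python 3.11."""
    __hash__ = None

try:
    @dataclass
    class BuilderConfig:
        version: Version = Version()  # rejected at class-creation time on Python 3.11
except ValueError as e:
    print(e)  # mutable default <class '...Version'> for field version is not allowed: use default_factory

@dataclass
class FixedBuilderConfig:
    version: Version = field(default_factory=Version)  # the 3.11-safe spelling
```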

url: https://api.github.com/repos/huggingface/datasets/issues/5229
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5229/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5229/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5229/events
html_url: https://github.com/huggingface/datasets/issues/5229
id: 1,445,121,028
node_id: I_kwDODunzps5WIswE
number: 5,229
title: Type error when calling `map` over dataset containing 0-d tensors
user: { "login": "phipsgabler", "id": 7878215, "node_id": "MDQ6VXNlcjc4NzgyMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7878215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phipsgabler", "html_url": "https://github.com/phipsgabler", "followers_url": "https://api.github.com/users/phipsgabler/followers", "following_url": "https://api.github.com/users/phipsgabler/following{/other_user}", "gists_url": "https://api.github.com/users/phipsgabler/gists{/gist_id}", "starred_url": "https://api.github.com/users/phipsgabler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phipsgabler/subscriptions", "organizations_url": "https://api.github.com/users/phipsgabler/orgs", "repos_url": "https://api.github.com/users/phipsgabler/repos", "events_url": "https://api.github.com/users/phipsgabler/events{/privacy}", "received_events_url": "https://api.github.com/users/phipsgabler/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,668,155,248,000
updated_at: 1,668,162,496,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset.

### Steps to reproduce the bug

```
ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_format("torch")
ds.map(None)
```

### Expected behavior

Getting back `ds` without errors.

### Environment info

Python 3.10.8
datasets 2.6.
torch 1.13.0
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5229/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
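
The failure above reduces to a NumPy fact: a scalar formatted as a tensor or array is 0-dimensional, and 0-d arrays are not iterable. A minimal sketch with NumPy alone, assuming (as the report describes) that the torch formatter returns 0-d values for scalar columns:

```python
import numpy as np

x = np.array(1)   # 0-dimensional, like the 0-d tensors `.with_format("torch")` yields for scalars
print(x.ndim)     # 0
try:
    iter(x)
except TypeError as e:
    print(e)      # iteration over a 0-d array
```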

url: https://api.github.com/repos/huggingface/datasets/issues/5228
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5228/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5228/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5228/events
html_url: https://github.com/huggingface/datasets/issues/5228
id: 1,444,763,105
node_id: I_kwDODunzps5WHVXh
number: 5,228
title: Loading a dataset from the hub fails if you happen to have a folder of the same name
user: { "login": "dakinggg", "id": 43149077, "node_id": "MDQ6VXNlcjQzMTQ5MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dakinggg", "html_url": "https://github.com/dakinggg", "followers_url": "https://api.github.com/users/dakinggg/followers", "following_url": "https://api.github.com/users/dakinggg/following{/other_user}", "gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}", "starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions", "organizations_url": "https://api.github.com/users/dakinggg/orgs", "repos_url": "https://api.github.com/users/dakinggg/repos", "events_url": "https://api.github.com/users/dakinggg/events{/privacy}", "received_events_url": "https://api.github.com/users/dakinggg/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,668,127,914,000
updated_at: 1,668,127,914,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and then training on them. Additionally, they were writing their checkpoints to a folder called `glue`. This meant that once one worker had created the `glue` folder to write checkpoints to, the next worker to try to load a glue dataset would fail as shown in the minimal repro below. I'm not sure what the solution would be since I'm not super familiar with the `datasets` code, but I would expect `load_dataset` to not crash just because I have a local folder with the same name as a dataset from the hub.

### Steps to reproduce the bug

```
In [1]: import datasets

In [2]: rte = datasets.load_dataset('glue', 'rte')
Downloading and preparing dataset glue/rte to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...
Downloading data: 100%|██████████████████████████████████████████████████████████████████████| 697k/697k [00:00<00:00, 6.08MB/s]
Dataset glue downloaded and prepared to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data.
100%|██████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 773.81it/s]

In [3]: import os

In [4]: os.mkdir('glue')

In [5]: rte = datasets.load_dataset('glue', 'rte')
---------------------------------------------------------------------------
EmptyDatasetError                         Traceback (most recent call last)
<ipython-input-5-0d6b9ad8bbd0> in <cell line: 1>()
----> 1 rte = datasets.load_dataset('glue', 'rte')

~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
   1717
   1718     # Create a dataset builder
-> 1719     builder_instance = load_dataset_builder(
   1720         path=path,
   1721         name=name,

~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
   1495     download_config = download_config.copy() if download_config else DownloadConfig()
   1496     download_config.use_auth_token = use_auth_token
-> 1497     dataset_module = dataset_module_factory(
   1498         path,
   1499         revision=revision,

~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
   1152         ).get_module()
   1153     elif os.path.isdir(path):
-> 1154         return LocalDatasetModuleFactoryWithoutScript(
   1155             path, data_dir=data_dir, data_files=data_files, download_mode=download_mode
   1156         ).get_module()

~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in get_module(self)
    624         base_path = os.path.join(self.path, self.data_dir) if self.data_dir else self.path
    625         patterns = (
--> 626             sanitize_patterns(self.data_files) if self.data_files is not None else get_data_patterns_locally(base_path)
    627         )
    628         data_files = DataFilesDict.from_local_or_remote(

~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/data_files.py in get_data_patterns_locally(base_path)
    458         return _get_data_files_patterns(resolver)
    459     except FileNotFoundError:
--> 460         raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
    461
    462

EmptyDatasetError: The directory at glue doesn't contain any data files
```

### Expected behavior

Dataset is still able to be loaded from the hub even if I have a local folder with the same name.

### Environment info

datasets version: 2.6.1
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5228/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
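
The shadowing is mechanical: when a directory with the requested name exists, `load_dataset` resolves it as a local dataset before consulting the Hub. A sketch of the repro above reduced to its essentials (datasets 2.6.x behavior; run in a scratch directory):

```python
import os
import tempfile

import datasets

os.chdir(tempfile.mkdtemp())  # a scratch directory with no stray folders
os.mkdir("glue")              # an empty local folder named like the hub dataset

try:
    datasets.load_dataset("glue", "rte")
except Exception as e:        # EmptyDatasetError: The directory at glue doesn't contain any data files
    print(type(e).__name__, e)
```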

url: https://api.github.com/repos/huggingface/datasets/issues/5227
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5227/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5227/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5227/events
html_url: https://github.com/huggingface/datasets/issues/5227
id: 1,444,620,094
node_id: I_kwDODunzps5WGyc-
number: 5,227
title: datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
user: { "login": "ScottM-wizard", "id": 102275116, "node_id": "U_kgDOBhiYLA", "avatar_url": "https://avatars.githubusercontent.com/u/102275116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ScottM-wizard", "html_url": "https://github.com/ScottM-wizard", "followers_url": "https://api.github.com/users/ScottM-wizard/followers", "following_url": "https://api.github.com/users/ScottM-wizard/following{/other_user}", "gists_url": "https://api.github.com/users/ScottM-wizard/gists{/gist_id}", "starred_url": "https://api.github.com/users/ScottM-wizard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ScottM-wizard/subscriptions", "organizations_url": "https://api.github.com/users/ScottM-wizard/orgs", "repos_url": "https://api.github.com/users/ScottM-wizard/repos", "events_url": "https://api.github.com/users/ScottM-wizard/events{/privacy}", "received_events_url": "https://api.github.com/users/ScottM-wizard/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Fixed. Please close." ]
created_at: 1,668,117,426,000
updated_at: 1,668,117,943,000
closed_at: 1,668,117,943,000
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

From these lines:

from datasets import list_datasets, load_dataset
dataset = load_dataset("wikisql","binary")

I get error message:
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files

And yet the 'wikisql' is reported to exist via the list_datasets(). Any help appreciated.

### Steps to reproduce the bug

From these lines:

from datasets import list_datasets, load_dataset
dataset = load_dataset("wikisql","binary")

I get error message:
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files

And yet the 'wikisql' is reported to exist via the list_datasets(). Any help appreciated.

### Expected behavior

Dataset should load. This same code used to work.

### Environment info

Mac OS
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5227/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/5226
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5226/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5226/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5226/events
html_url: https://github.com/huggingface/datasets/issues/5226
id: 1,444,385,148
node_id: I_kwDODunzps5WF5F8
number: 5,226
title: Q: Memory release when removing the column?
user: { "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,668,105,327,000
updated_at: 1,668,105,327,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks?

```python
from datasets import load_dataset

common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)
# check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670

common_voice = common_voice.remove_columns(column_names=common_voice.column_names['train'])
common_voice.clear()
# check memory -> RAM Used (GB): 0.705 / Total (GB) 33.670
```

I tried `gc.collect()` but did not help

### Steps to reproduce the bug

1. load dataset
2. remove all the columns
3. check memory is reduced or not

[link to reproduce](https://www.kaggle.com/code/bayartsogtya/huggingface-dataset-memory-issue/notebook?scriptVersionId=110630567)

### Expected behavior

Memory released when I remove the column

### Environment info

- `datasets` version: 2.1.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5226/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
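
For readings like the `# check memory` comments in the snippet above, sampling the process RSS is a reproducible way to take them; `psutil` is an assumed dependency here, not something the issue itself uses:

```python
import os

import psutil

def rss_gb() -> float:
    """Resident set size of the current process, in GB."""
    return psutil.Process(os.getpid()).memory_info().rss / 1024**3

print(f"RAM Used (GB): {rss_gb():.3f}")
```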

url: https://api.github.com/repos/huggingface/datasets/issues/5225
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5225/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5225/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5225/events
html_url: https://github.com/huggingface/datasets/issues/5225
id: 1,444,305,183
node_id: I_kwDODunzps5WFlkf
number: 5,225
title: Add video feature
user: { "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
labels: [ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892884, "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted", "name": "help wanted", "color": "008672", "default": true, "description": "Extra attention is needed" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,668,101,771,000
updated_at: 1,668,101,813,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body:
### Feature request

Add a `Video` feature to the library so folks can include videos in their datasets.

### Motivation

Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos:

1. Videos, unlike images, can end up being extremely large files
2. Often times when training video models, you need to do some very specific sampling. Videos might end up needing to be broken down into X number of clips used for training/inference
3. Videos have an additional audio stream, which must be accounted for
4. The feature needs to be able to encode/decode videos (with right video settings) from bytes.

### Your contribution

I did work on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dep. It included the ability to read/write from bytes, as we need to do here. We don't want to be using a sketchy library that I made as a dependency in this repo, though.

Would love to use this issue as a place to:
- brainstorm ideas on how to do this right
- list ways/examples to work around it for now

CC @sayakpaul @mariosasko @fcakyon
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5225/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5225/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/5224
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5224/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5224/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5224/events
html_url: https://github.com/huggingface/datasets/issues/5224
id: 1,443,640,867
node_id: I_kwDODunzps5WDDYj
number: 5,224
title: Seems to freeze when loading audio dataset with wav files from local folder
user: { "login": "uriii3", "id": 45894267, "node_id": "MDQ6VXNlcjQ1ODk0MjY3", "avatar_url": "https://avatars.githubusercontent.com/u/45894267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uriii3", "html_url": "https://github.com/uriii3", "followers_url": "https://api.github.com/users/uriii3/followers", "following_url": "https://api.github.com/users/uriii3/following{/other_user}", "gists_url": "https://api.github.com/users/uriii3/gists{/gist_id}", "starred_url": "https://api.github.com/users/uriii3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/uriii3/subscriptions", "organizations_url": "https://api.github.com/users/uriii3/orgs", "repos_url": "https://api.github.com/users/uriii3/repos", "events_url": "https://api.github.com/users/uriii3/events{/privacy}", "received_events_url": "https://api.github.com/users/uriii3/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "I just tried to do the same but changing the `.wav` files to `.mp3` files and that doesn't fix it." ]
created_at: 1,668,076,171,000
updated_at: 1,668,077,129,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

I'm following the instructions in [https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata](url) to be able to load a dataset from a local folder. I have everything into a folder, into a train folder and then the audios and csv. When I try to load the dataset and run from terminal, seems to work but then freezes with no apparent reason. The metadata.csv file contains a few columns but the important ones, `file_name` with the filename and `transcription` with the transcription are okay. The audios are `.wav` files, I don't know if that might be the problem (I will proceed to try to change them all to `.mp3` and try again).

### Steps to reproduce the bug

The code I'm using:

```python
from datasets import load_dataset

dataset = load_dataset("audiofolder", data_dir="../archive/Dataset")
dataset[0]["audio"]
```

The output I obtain:

```
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 311135.43it/s]
Using custom data configuration default-38d4546ffd010f3e
Downloading and preparing dataset audiofolder/default to /Users/mine/.cache/huggingface/datasets/audiofolder/default-38d4546ffd010f3e/0.0.0/6cbdd16f8688354c63b4e2a36e1585d05de285023ee6443ffd71c4182055c0fc...
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 166467.72it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 187772.74it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 59623.71it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 138090.55it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 106065.64it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 56036.38it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 74004.24it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 162343.45it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 101881.23it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 60145.67it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 80890.02it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 54036.67it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 95851.09it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 155897.00it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 137656.96it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 131230.81it/s]
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
```

And then here it just freezes and nothing more happens.

### Expected behavior

Load the dataset.

### Environment info

Datasets version: datasets 2.6.1 pypi_0 pypi
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5224/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/5223
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5223/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5223/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5223/events
html_url: https://github.com/huggingface/datasets/pull/5223
id: 1,442,610,658
node_id: PR_kwDODunzps5CjT9Z
number: 5,223
title: Add SQL guide
user: { "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5223). All of your documentation changes will be reflected on that endpoint.", "I think we may want more content on this page that's not SQL related. Some of that content probably already lives in the main `load` docs page, but might be bad to remove major things like csv/pandas from there...WDYT we should do @lhoestq ?" ]
created_at: 1,668,021,027,000
updated_at: 1,668,033,646,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
body:
This PR adapts @nateraw's awesome SQL notebook as a guide for the docs!
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5223/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/5222
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5222/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5222/events
html_url: https://github.com/huggingface/datasets/issues/5222
id: 1,442,412,507
node_id: I_kwDODunzps5V-Xfb
number: 5,222
title: HuggingFace website is incorrectly reporting that my datasets are pickled
user: { "login": "ProGamerGov", "id": 10626398, "node_id": "MDQ6VXNlcjEwNjI2Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/10626398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ProGamerGov", "html_url": "https://github.com/ProGamerGov", "followers_url": "https://api.github.com/users/ProGamerGov/followers", "following_url": "https://api.github.com/users/ProGamerGov/following{/other_user}", "gists_url": "https://api.github.com/users/ProGamerGov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ProGamerGov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ProGamerGov/subscriptions", "organizations_url": "https://api.github.com/users/ProGamerGov/orgs", "repos_url": "https://api.github.com/users/ProGamerGov/repos", "events_url": "https://api.github.com/users/ProGamerGov/events{/privacy}", "received_events_url": "https://api.github.com/users/ProGamerGov/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "cc @McPatate maybe you know what's happening ?", "Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~", "> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that for now, as it indicates that we checked for pickles and nothing dangerous appeared :)", "Closing the issue with the typical \"feature not a bug\" " ]
created_at: 1,668,012,076,000
updated_at: 1,668,017,446,000
closed_at: 1,668,017,217,000
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled, they are simple ZIP files containing PNG images. Hopefully this is the right location to report this bug.

### Steps to reproduce the bug

Inspect my dataset repository here: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images

### Expected behavior

They should not be reported as being pickled.

### Environment info

N/A
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5222/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/5221
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5221/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5221/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5221/events
html_url: https://github.com/huggingface/datasets/issues/5221
id: 1,442,309,094
node_id: I_kwDODunzps5V9-Pm
number: 5,221
title: Cannot push
user: { "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Did you run `huggingface-cli lfs-enable-largefiles` before committing or before adding ? Maybe you can try before adding\r\n\r\nAnyway I'd encourage you to split your data into several TAR archives if possible, this way the dataset can loaded faster using multiprocessing (by giving each process a subset of shards to process)", "@lhoestq \r\nThanks for the help!\r\n> Maybe you can try before adding\r\n\r\nIt did not help\r\n\r\nBut I totally got your point about split into multiple TAR archives. It really helped!" ]
created_at: 1,668,007,925,000
updated_at: 1,668,103,881,000
closed_at: 1,668,103,871,000
author_association: NONE
active_lock_reason: null
body:
### Describe the bug

I am facing the issue when I try to push the tar.gz file around 11G to HUB.

```
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ du -sh *
4.0K    README.md
 13G    data
516K    test.jsonl
 18M    train.jsonl
4.0K    ulaanbal_v0.py
 11G    ulaanbal_v0.tar.gz
452K    validation.jsonl

(venv) ╭─laptop@laptop~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git add ulaanbal_v0.tar.gz && git commit -m 'large version'

(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git push
EOFoading LFS objects:   0% (0/1), 0 B | 0 B/s
Uploading LFS objects:   0% (0/1), 0 B | 0 B/s, done.
error: failed to push some refs to 'https://huggingface.co/datasets/bayartsogt/ulaanbal_v0'
```

I have already tried pushing a small version of this and it was working fine. So my guess it is probably because of the big file.

Following I run before the commit:

```
╰─$ git lfs install
╰─$ huggingface-cli lfs-enable-largefiles .
```

### Steps to reproduce the bug

Create a private dataset on huggingface and push 12G tar.gz file

### Expected behavior

To be pushed with no issue

### Environment info

- `datasets` version: 2.6.1
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 10.0.0
- Pandas version: 1.3.5
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5221/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false
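
The resolution from the comments above, splitting one huge archive into several TAR shards, can be sketched with the standard library; the source directory, shard size, and output names below are assumptions, not taken from the issue:

```python
import glob
import os
import tarfile

# Collect the files to archive (assumed layout: everything under data/).
files = [f for f in sorted(glob.glob("data/**/*", recursive=True)) if os.path.isfile(f)]

SHARD_SIZE = 1000  # files per shard; tune so each shard stays a few GB at most
for i in range(0, len(files), SHARD_SIZE):
    # Shards named ulaanbal_v0-00000.tar, ulaanbal_v0-00001.tar, ...
    with tarfile.open(f"ulaanbal_v0-{i // SHARD_SIZE:05d}.tar", "w") as tar:
        for path in files[i : i + SHARD_SIZE]:
            tar.add(path)
```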

url: https://api.github.com/repos/huggingface/datasets/issues/5220
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5220/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5220/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5220/events
html_url: https://github.com/huggingface/datasets/issues/5220
id: 1,441,664,377
node_id: I_kwDODunzps5V7g15
number: 5,220
title: Implicit type conversion of lists in to_pandas
user: { "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "I think this behavior comes from PyArrow:\r\n```python\r\nimport pyarrow as pa\r\nt = pa.table({\"a\": [[0]]})\r\nt.to_pandas().a.values[0]\r\n# array([0])\r\n```\r\n\r\nI believe this has to do with zero-copy: you can get a pandas DataFrame without copying the buffers from arrow, and therefore end up with numpy arrays.", "That's interesting, I guess not much to do here then." ]
created_at: 1,667,983,218,000
updated_at: 1,668,096,746,000
closed_at: 1,668,096,746,000
author_association: CONTRIBUTOR
active_lock_reason: null
body:
### Describe the bug

```
ds = Dataset.from_list([{'a':[1,2,3]}])
ds.to_pandas().a.values[0]
```

Results in `array([1, 2, 3])` -- a rather unexpected conversion of types which made downstream tools expecting lists not happy.

### Steps to reproduce the bug

See snippet

### Expected behavior

Keep the original type

### Environment info

datasets 2.6.1
python 3.8.10
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5220/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false
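
The zero-copy explanation in the comments above is easy to verify with PyArrow alone: list columns come back from `to_pandas()` as NumPy arrays because the Arrow buffers are exposed rather than copied into Python lists:

```python
import pyarrow as pa

t = pa.table({"a": [[0]]})
v = t.to_pandas().a.values[0]
print(type(v))  # <class 'numpy.ndarray'>, not list
```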

url: https://api.github.com/repos/huggingface/datasets/issues/5219
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5219/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5219/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5219/events
html_url: https://github.com/huggingface/datasets/issues/5219
id: 1,441,255,910
node_id: I_kwDODunzps5V59Hm
number: 5,219
title: Delta Tables usage using Datasets Library
user: { "login": "reichenbch", "id": 23002137, "node_id": "MDQ6VXNlcjIzMDAyMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/23002137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reichenbch", "html_url": "https://github.com/reichenbch", "followers_url": "https://api.github.com/users/reichenbch/followers", "following_url": "https://api.github.com/users/reichenbch/following{/other_user}", "gists_url": "https://api.github.com/users/reichenbch/gists{/gist_id}", "starred_url": "https://api.github.com/users/reichenbch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reichenbch/subscriptions", "organizations_url": "https://api.github.com/users/reichenbch/orgs", "repos_url": "https://api.github.com/users/reichenbch/repos", "events_url": "https://api.github.com/users/reichenbch/events{/privacy}", "received_events_url": "https://api.github.com/users/reichenbch/received_events", "type": "User", "site_admin": false }
labels: [ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi ! Interesting :) Can you provide concrete examples of cases where it can be useful ?" ]
created_at: 1,667,961,836,000
updated_at: 1,668,000,891,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Feature request

Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well.

### Motivation

We know datasets library can absorb csv, json, parquet, etc. file formats, but it would be great if Datasets library could work with Delta Tables (with delta format), as it has different features such as time travelling, layout optimization, and query performance, and aids in Data Engineering. This will help and enhance Datasets library from Machine Learning utility to Data Engineering utilities and expand horizons thereafter. I am totally using Datasets library in all my usecases and as my role expands so does the work; compatibility with Datasets library is something I don't want to lose.

### Your contribution

Would love to work on this feature, even if this has to be picked up from scratch, including design paradigms and patterns. I have basic idea about Delta Live Tables, would brush it easily for this feature.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5219/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/5218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5218/comments
https://api.github.com/repos/huggingface/datasets/issues/5218/events
https://github.com/huggingface/datasets/issues/5218
1,441,254,194
I_kwDODunzps5V58sy
5,218
Delta Tables usage using Datasets Library
{ "login": "rcv-koo", "id": 103188035, "node_id": "U_kgDOBiaGQw", "avatar_url": "https://avatars.githubusercontent.com/u/103188035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcv-koo", "html_url": "https://github.com/rcv-koo", "followers_url": "https://api.github.com/users/rcv-koo/followers", "following_url": "https://api.github.com/users/rcv-koo/following{/other_user}", "gists_url": "https://api.github.com/users/rcv-koo/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcv-koo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcv-koo/subscriptions", "organizations_url": "https://api.github.com/users/rcv-koo/orgs", "repos_url": "https://api.github.com/users/rcv-koo/repos", "events_url": "https://api.github.com/users/rcv-koo/events{/privacy}", "received_events_url": "https://api.github.com/users/rcv-koo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[]
1,667,961,738,000
1,667,961,756,000
1,667,961,756,000
NONE
null
### Feature request Adding compatibility of the Datasets library with the Delta format, elevating the utilities of the Datasets library from the Machine Learning scope to the Data Engineering scope as well. ### Motivation We know the datasets library can absorb csv, json, parquet, etc. file formats, but it would be great if the Datasets library could work with Delta Tables (in the Delta format), as they offer features such as time travel, layout optimization, and query performance that aid in Data Engineering. This would extend the Datasets library from a Machine Learning utility to a Data Engineering utility and expand its horizons thereafter. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose. ### Your contribution Would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns. I have a basic idea of Delta Live Tables and would brush up on it easily for this feature.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5218/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5217/comments
https://api.github.com/repos/huggingface/datasets/issues/5217/events
https://github.com/huggingface/datasets/pull/5217
1,441,252,740
PR_kwDODunzps5CetXs
5,217
Reword E2E training and inference tips in the vision guides
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,961,601,000
1,668,044,289,000
1,668,044,169,000
CONTRIBUTOR
null
Reference: https://github.com/huggingface/datasets/pull/5188#discussion_r1012148730
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5217/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5216/comments
https://api.github.com/repos/huggingface/datasets/issues/5216/events
https://github.com/huggingface/datasets/issues/5216
1,441,041,947
I_kwDODunzps5V5I4b
5,216
save_elasticsearch_index
{ "login": "amobash2", "id": 12739718, "node_id": "MDQ6VXNlcjEyNzM5NzE4", "avatar_url": "https://avatars.githubusercontent.com/u/12739718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amobash2", "html_url": "https://github.com/amobash2", "followers_url": "https://api.github.com/users/amobash2/followers", "following_url": "https://api.github.com/users/amobash2/following{/other_user}", "gists_url": "https://api.github.com/users/amobash2/gists{/gist_id}", "starred_url": "https://api.github.com/users/amobash2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amobash2/subscriptions", "organizations_url": "https://api.github.com/users/amobash2/orgs", "repos_url": "https://api.github.com/users/amobash2/repos", "events_url": "https://api.github.com/users/amobash2/events{/privacy}", "received_events_url": "https://api.github.com/users/amobash2/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! I think there exist tools to dump and reload an index in your elastic search but I'm not super familiar with it.\r\n\r\nAnyway after reloading an index in elastic search you can call `ds.load_elasticsearch_index` which will connect the index to the dataset without re-indexing" ]
1,667,948,812,000
1,667,999,805,000
null
NONE
null
Hi, I am new to Datasets and Elasticsearch. I was wondering whether there is an equivalent of save_faiss_index for saving an Elasticsearch index locally for later use, to remove the need to re-index a dataset?
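As the comment in this thread notes, the index lives server-side in Elasticsearch rather than in a local file, so nothing needs to be saved locally; re-attaching it is a matter of calling `load_elasticsearch_index`. A minimal sketch, assuming an Elasticsearch server on localhost:9200 and the `squad` dataset's `context` column (both illustrative assumptions):

```python
# Minimal sketch: reuse a server-side Elasticsearch index instead of
# re-indexing. Assumes Elasticsearch is reachable at localhost:9200.
from datasets import load_dataset

ds = load_dataset("squad", split="validation")

# First session only: build the index (it persists inside Elasticsearch).
ds.add_elasticsearch_index("context", host="localhost", port=9200, es_index_name="squad_ctx")

# Later sessions: attach the existing index without re-indexing.
ds.load_elasticsearch_index("context", es_index_name="squad_ctx", host="localhost", port=9200)
scores, examples = ds.get_nearest_examples("context", "machine learning", k=5)
```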
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5216/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5214/comments
https://api.github.com/repos/huggingface/datasets/issues/5214/events
https://github.com/huggingface/datasets/pull/5214
1,440,334,978
PR_kwDODunzps5CbmWE
5,214
Update github pr docs actions
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5214). All of your documentation changes will be reflected on that endpoint." ]
1,667,918,617,000
1,667,921,998,000
1,667,921,997,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5214/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5213/comments
https://api.github.com/repos/huggingface/datasets/issues/5213/events
https://github.com/huggingface/datasets/pull/5213
1,440,037,534
PR_kwDODunzps5CalQ_
5,213
Add support for different configs with `push_to_hub`
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint." ]
1,667,907,947,000
1,668,088,953,000
null
CONTRIBUTOR
null
will solve #5151
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5213/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5212/comments
https://api.github.com/repos/huggingface/datasets/issues/5212/events
https://github.com/huggingface/datasets/pull/5212
1,439,642,483
PR_kwDODunzps5CZPI2
5,212
Fix CI require_beam maximum compatible dill version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5212). All of your documentation changes will be reflected on that endpoint." ]
1,667,892,601,000
1,667,902,497,000
null
MEMBER
null
A previous commit to the main branch introduced an additional requirement on the maximum `dill` version compatible with `apache-beam` in our CI `require_beam`: - d7c942228b8dcf4de64b00a3053dce59b335f618 - ec222b220b79f10c8d7b015769f0999b15959feb This PR fixes the maximum `dill` version compatible with `apache-beam`, which is <0.3.2 (and not <0.3.6): https://github.com/apache/beam/blob/v2.42.0/sdks/python/setup.py#L219
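As a quick local sanity check of the bound being pinned here, one could assert it directly; a small sketch (the `<0.3.2` bound is the one cited above from Beam's setup.py):

```python
# Minimal sketch: check the installed dill against apache-beam's pin.
import dill
from packaging.version import Version

assert Version(dill.__version__) < Version("0.3.2"), (
    f"apache-beam v2.42.0 pins dill<0.3.2, found {dill.__version__}"
)
```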
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5212/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5211/comments
https://api.github.com/repos/huggingface/datasets/issues/5211/events
https://github.com/huggingface/datasets/pull/5211
1,438,544,617
PR_kwDODunzps5CVgBx
5,211
Update Overview.ipynb google colab
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5211). All of your documentation changes will be reflected on that endpoint." ]
1,667,834,632,000
1,667,835,001,000
null
MEMBER
null
- removed metrics stuff - added image example - added audio example (with ffmpeg instructions) - updated the "add a new dataset" section
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5211/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5210/comments
https://api.github.com/repos/huggingface/datasets/issues/5210/events
https://github.com/huggingface/datasets/pull/5210
1,438,492,507
PR_kwDODunzps5CVUzx
5,210
Tweak readme
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5210). All of your documentation changes will be reflected on that endpoint.", "Nit: We should also update the `Disclaimers` section to let the dataset owners know they should use Hub discussions rather than GH issues for removal requests/updates" ]
1,667,832,683,000
1,668,007,024,000
null
MEMBER
null
Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5210/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5209/comments
https://api.github.com/repos/huggingface/datasets/issues/5209/events
https://github.com/huggingface/datasets/issues/5209
1,438,367,678
I_kwDODunzps5Vu7--
5,209
Implement ability to define splits in metadata section of dataset card
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "@merveenoyan Do you want different files to be splits or configurations?\r\n\r\nFrom [what you specified in `Readme.md`](https://huggingface.co/datasets/inria-soda/tabular-benchmark/commit/fb4575853772c62a20203bdd6cc0202f5db4ce4e) I hypothesize that you want to have 4 **configs** corresponding to directories: `\"clf_cat\", \"clf_num\", \"reg_cat\", \"reg_num\"`. And inside each config you require to have as many splits as there are `csv` files\r\nso if you run \r\n```python\r\nload_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat\", split=\"compass\")\r\n```\r\nyou will generate the data only from `compass.csv` file.\r\nIn this case, running `load_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat\"`) without split parameter will return `DatasetDict` object with `\"KDDCup09_upselling\", \"cat_compass\", \"cat_covertype\", ... \"road_safety\"` keys (which values are splits - `Dataset` objects)\r\n\r\n**or**\r\ndo you want each file to be a separate config? Like:\r\n```python\r\nload_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat_compass\") # returns DatasetDict with a single \"train\" split\r\n```\r\n**or**\r\nmaybe smth completely different? :smile: \r\n\r\nAnyway, now I have an impression that this is probably rather a matter of automatically inferring configs from repository structure rather than providing parameters in metadata yaml.\r\n", "@polinaeterna I want the latter where you can think of every CSV file as a config, like MNLI from GLUE.", "@merveenoyan @lhoestq I see two solutions to this case. \r\n1. Parse configurations automatically from directories names. That is, if you have data structure like:\r\n```\r\ntabular-benchmark\r\n └─clf_cat_compass\r\n └─compass.csv\r\n └─clf_cat_cat_covertype\r\n └─covertype.csv\r\n ...\r\n └─reg_cat_house_sales\r\n └─house_sales.csv\r\n```\r\nyou'll get \"clf_cat_compass\", \"clf_cat_cat_covertype\", ... \"reg_cat_house_sales\" configurations that would contain **only files from corresponding directories**. \r\n**\\+** this is a requested change and needed in general and would solve other problems, see https://github.com/huggingface/datasets/issues/4578, would also help with https://github.com/huggingface/datasets/pull/5213 which I'm working on currently\r\n**\\+** would allow users to do just `load_dataset(β€œinria-soda/tabular-benchmark”, β€œclf_cat_compass”)`, no `data_files` param required\r\n**\\-** in this specific case it would require restructuring of the data - putting each file in a directory named as a config name (to me personally it doesn't seem to be a big deal) \r\n\r\n2. More or less what we discussed before - add support for manually specifying parameters in the metadata. We can add new metadata yaml field (say, `\"custom_configs_info\"`), so that we can provide smth like:\r\n```yaml\r\n---\r\n...\r\ndataset_info:\r\n ... \r\ncustom_configs_info:\r\n- config_name: reg_cat_house_sales\r\n data_files:\r\n - reg_cat/house_sales.csv\r\n- config_name: clf_cat_compass\r\n data_files:\r\n - clf_cat/compass.csv\r\n...\r\n---\r\n```\r\n**\\+** Would be useful not only for tabular data and not only for `data_files` parameter - any packaged dataset’s viewer can be customized to use specific, non-default parameters. @merveenoyan do you maybe have any other examples/use cases in mind where you want to provide any specific parameters to the viewer? 
\r\n**\\-** I'm not sure here but assume that it might require changes in interaction with the viewer on the hub side - to parse these configurations, as they not default configurations (not in `BUILDER_CONFIGS` list). cc @severo But probably this can be solved on the `datasets` side too.\r\n\r\nOverall, I would start from implementing the first solution since it's related to what I'm doing now and is super useful for `datasets` in general. And then if we agree that having more flexibility in providing parameters to the viewer is required, I can implement the second one. Let me know what you think :) ", "> We can add new metadata yaml field (say, \"custom_configs_info\"), so that we can provide smth like:\r\n\r\nLove it ! Some other ideas to name the \"custom_configs_info\" field: \"configs\", \"parameters\", \"config_args\", \"configurations\"\r\n\r\n> it might require changes in interaction with the viewer on the hub side - to parse these configurations, as they not default configurations (not in BUILDER_CONFIGS list)\r\n\r\nIf we update the `get_dataset_config_names()` function in `datasets` in inspect.py we should be fine - that's what the viewer is using\r\n\r\n> Overall, I would start from implementing the first solution since it's related to what I'm doing now and is super useful for datasets in general. And then if we agree that having more flexibility in providing parameters to the viewer is required, I can implement the second one. Let me know what you think :)\r\n\r\nActually I feel like the second solution includes the first use case you mentioned. If you implement the second solution, then users would just have to add a few lines of YAML and their directories would be considered configurations no ? Maybe there's no need to implement two different logics to do the same thing" ]
1,667,827,636,000
1,668,014,388,000
null
CONTRIBUTOR
null
### Feature request If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see a bunch of folders containing various CSV files. I'd like the dataset viewer to show these files instead of only one dataset like it currently does (and also for people to be able to load them as splits instead of loading through `data_files`). E.g. GLUE has various splits in the viewer, but it's overkill to ask people to implement a loading script, so it would be better to let them define these in the README file instead. Also pinging @polinaeterna @lhoestq @adrinjalali
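To make the request concrete, the desired usage would look roughly like the sketch below; the config name is illustrative, and this call signature is the *requested* behavior, not something `datasets` supports at the time of the issue.

```python
# Illustrative sketch of the requested behavior (not yet implemented):
# each CSV under clf_cat/ etc. becomes addressable as its own config,
# with no loading script and no data_files argument.
from datasets import load_dataset

# today this would instead require data_files={"train": "clf_cat/compass.csv"}
ds = load_dataset("inria-soda/tabular-benchmark", "clf_cat_compass")
```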
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5209/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5209/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5208/comments
https://api.github.com/repos/huggingface/datasets/issues/5208/events
https://github.com/huggingface/datasets/pull/5208
1,438,035,707
PR_kwDODunzps5CTyxu
5,208
Refactor CI hub fixtures to use monkeypatch instead of patch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,813,105,000
1,667,890,280,000
1,667,890,157,000
MEMBER
null
Minor refactoring of CI to use `pytest` `monkeypatch` instead of `unittest` `patch`.
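For context on the idiom being adopted, a generic sketch of `pytest`'s `monkeypatch` fixture replacing `unittest.mock.patch` (not the actual fixtures touched by this PR):

```python
# Generic sketch: monkeypatch scopes the change to the test and undoes
# it at teardown, with no decorator or context-manager nesting.
import os

def test_ci_hub_endpoint(monkeypatch):
    # roughly equivalent to unittest.mock.patch.dict(os.environ, {...})
    monkeypatch.setenv("HF_ENDPOINT", "https://hub-ci.huggingface.co")
    assert os.environ["HF_ENDPOINT"] == "https://hub-ci.huggingface.co"
```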
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5208/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5207/comments
https://api.github.com/repos/huggingface/datasets/issues/5207/events
https://github.com/huggingface/datasets/issues/5207
1,437,858,506
I_kwDODunzps5Vs_rK
5,207
Connection error of the HuggingFace's dataset Hub due to SSLError with proxy
{ "login": "leemgs", "id": 82404, "node_id": "MDQ6VXNlcjgyNDA0", "avatar_url": "https://avatars.githubusercontent.com/u/82404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leemgs", "html_url": "https://github.com/leemgs", "followers_url": "https://api.github.com/users/leemgs/followers", "following_url": "https://api.github.com/users/leemgs/following{/other_user}", "gists_url": "https://api.github.com/users/leemgs/gists{/gist_id}", "starred_url": "https://api.github.com/users/leemgs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leemgs/subscriptions", "organizations_url": "https://api.github.com/users/leemgs/orgs", "repos_url": "https://api.github.com/users/leemgs/repos", "events_url": "https://api.github.com/users/leemgs/events{/privacy}", "received_events_url": "https://api.github.com/users/leemgs/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! It looks like an issue with your python environment, can you make sure you're able to run GET requests to https://huggingface.co using `requests` in python ?", "\r\nThanks for your reply. Does this mean that I have to use the `do_dataset `function and the `requests `function to download the dataset from the company's proxy environment?\r\n\r\n\r\n* Reference: \r\n```\r\n### How to load this dataset directly with the [datasets](https://github.com/huggingface/datasets) library\r\n\r\n\r\n* https://huggingface.co/datasets/moyix/debian_csrc\r\n\r\n\r\n* \r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"moyix/debian_csrc\")\r\n\r\n\r\n\r\n### Or just clone the dataset repo\r\n\r\n\r\ngit lfs install\r\ngit clone https://huggingface.co/datasets/moyix/debian_csrc\r\n# if you want to clone without large files – just their pointers\r\n# prepend your git clone with the following env var:\r\nGIT_LFS_SKIP_SMUDGE=1\r\n```" ]
1,667,804,183,000
1,668,143,558,000
null
NONE
null
### Describe the bug It's weird. I cannot connect to HuggingFace's dataset Hub normally, due to an SSLError, from my office. Even when I try to connect using my company's proxy address (e.g., http_proxy and https_proxy), I still get the SSLError. What should I do to download datasets stored on HuggingFace normally? I welcome any comments; I think they will be helpful to me. * Dataset address - https://huggingface.co/datasets/moyix/debian_csrc/viewer/moyix--debian_csrc * Log message ``` ............ OMISSION .............. Traceback (most recent call last): File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 587, in <module> main() File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 278, in main raw_datasets = load_dataset( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset builder_instance = load_dataset_builder( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory raise e1 from None File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})") ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError) [2022-11-07 15:23:38,476] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 6760 [2022-11-07 15:23:38,476] [ERROR] [launch.py:324:sigkill_handler] ['/home/geunsik-lim/anaconda3/envs/deepspeed/bin/python', '-u', './transformers/examples/pytorch/language-modeling/run_clm.py', '--local_rank=0', '--model_name_or_path=Salesforce/codegen-350M-multi', '--per_device_train_batch_size=1', '--learning_rate', '2e-5', '--num_train_epochs', '1', '--output_dir=./codegen-350M-finetuned', '--overwrite_output_dir', '--dataset_name', 'moyix/debian_csrc', '--cache_dir', '/data/home/geunsik-lim/.cache', '--tokenizer_name', 'Salesforce/codegen-350M-multi', '--block_size', '2048', '--gradient_accumulation_steps', '32', '--do_train', '--fp16', '--deepspeed', 'ds_config_zero2.json'] exits with return code = 1 real 0m7.742s user 0m4.930s ``` ### Steps to reproduce the bug Steps to reproduce this behavior: ``` (deepspeed) geunsik-lim@ai02:~/qtlab$ ./test_debian_csrc_dataset.py Traceback (most recent call last): File "/data/home/geunsik-lim/qtlab/./test_debian_csrc_dataset.py", line 6, in <module> dataset = load_dataset("moyix/debian_csrc") File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset builder_instance = load_dataset_builder( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory raise e1 from None File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})") ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError) (deepspeed) geunsik-lim@ai02:~/qtlab$ cat ./test_debian_csrc_dataset.py #!/usr/bin/env python from datasets import load_dataset dataset = load_dataset("moyix/debian_csrc") ``` 1. Added the company proxy address in /etc/profile. 2. Downloaded the dataset with the load_dataset() function of the datasets package provided by HuggingFace. 3. In this case, the address would be "moyix--debian_csrc". 4. I get the "ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)" error message. ### Expected behavior The dataset should download successfully instead of failing with: ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError) ### Environment info * software version information: ``` (deepspeed) geunsik-lim@ai02:~$ (deepspeed) geunsik-lim@ai02:~$ conda list -f pytorch # packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed: # # Name Version Build Channel pytorch 1.13.0 py3.10_cuda11.7_cudnn8.5.0_0 pytorch (deepspeed) geunsik-lim@ai02:~$ conda list -f python # packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed: # # Name Version Build Channel python 3.10.6 haa1d7c7_1 (deepspeed) geunsik-lim@ai02:~$ conda list -f datasets # packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed: # # Name Version Build Channel datasets 2.6.1 py_0 huggingface (deepspeed) geunsik-lim@ai02:~$ uname -a Linux ai02 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux (deepspeed) geunsik-lim@ai02:~$ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS" ```
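Following the suggestion in the first comment, a minimal sketch for isolating the SSL problem from `datasets` with a bare `requests` call; the proxy URL and CA bundle path below are placeholders for the corporate values, not details from the report.

```python
# Minimal sketch: probe Hub reachability through a corporate proxy.
# The proxy URL and CA bundle path are placeholders (assumptions).
import requests

resp = requests.get(
    "https://huggingface.co/api/datasets/moyix/debian_csrc",
    proxies={"https": "http://proxy.example.com:8080"},
    verify="/etc/ssl/certs/corporate-ca.pem",  # CA that signs the proxy's certificates
    timeout=10,
)
print(resp.status_code)
```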
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5207/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5206/comments
https://api.github.com/repos/huggingface/datasets/issues/5206/events
https://github.com/huggingface/datasets/issues/5206
1,437,223,894
I_kwDODunzps5VqkvW
5,206
Use logging instead of printing to console
{ "login": "bilelomrani1", "id": 16692099, "node_id": "MDQ6VXNlcjE2NjkyMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilelomrani1", "html_url": "https://github.com/bilelomrani1", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually upon closer inspection, it is documented in the code that this behavior is intentional, so I'll close this." ]
1,667,692,082,000
1,667,693,160,000
1,667,693,159,000
NONE
null
### Describe the bug Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L830)) generated by the `DatasetBuilder` are printed to the console instead of being passed to the `datasets` logger. ### Steps to reproduce the bug ```python >>> import datasets >>> datasets.load_dataset("some-dataset") Downloading and preparing dataset csv/data to <path>... Downloading data files: 100%|██████████| 3/3 [00:00<00:00, 7729.06it/s] Extracting data files: 100%|██████████| 3/3 [00:00<00:00, 527.23it/s] Dataset csv downloaded and prepared to <path>. Subsequent calls will reuse this data. ``` ### Expected behavior The logs should not be printed to the console directly but passed to the logger, so that users can redirect them wherever they want. ### Environment info - `datasets` version: 2.6.1 - Platform: macOS-13.0-x86_64-i386-64bit - Python version: 3.9.15 - PyArrow version: 10.0.0 - Pandas version: 1.5.1
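For the output the library does route through its logger, the verbosity helpers below work today; a minimal sketch (note they do not silence the `print` calls this issue is about, which bypass the logger):

```python
# Minimal sketch: control logger-routed output and progress bars.
# This does not affect the print() calls linked in the issue.
import datasets

datasets.logging.set_verbosity_error()  # only errors reach the handlers
datasets.disable_progress_bar()         # available in recent 2.x releases

ds = datasets.load_dataset("csv", data_files="data.csv")  # assumes a local data.csv
```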
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5206/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5205/comments
https://api.github.com/repos/huggingface/datasets/issues/5205/events
https://github.com/huggingface/datasets/pull/5205
1,437,221,987
PR_kwDODunzps5CRO33
5,205
Add missing `DownloadConfig.use_auth_token` value
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,691,396,000
1,667,895,180,000
1,667,838,024,000
CONTRIBUTOR
null
This PR solves https://github.com/huggingface/datasets/issues/5204. Now the `token` is propagated so that the `DownloadConfig.use_auth_token` value is set before trying to download private files from existing datasets on the Hub.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5205/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5205/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5204/comments
https://api.github.com/repos/huggingface/datasets/issues/5204/events
https://github.com/huggingface/datasets/issues/5204
1,437,221,259
I_kwDODunzps5VqkGL
5,204
`push_to_hub` not propagating `token` through `DownloadConfig`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "#self-assign", "@lhoestq can you close this issue as part of the recent #5205 merge? Thanks πŸ€— ", "Thank you :)" ]
1,667,691,140,000
1,667,902,329,000
1,667,902,328,000
CONTRIBUTOR
null
### Describe the bug When trying to upload a new 🤗 Dataset to the Hub via Python and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it only works the first time, assuming that the dataset didn't exist before. When trying to run `Dataset.push_to_hub` again over the same dataset, instead of updating it, it throws a `ConnectionError` while retrieving the `README.md` that may contain some metadata about the dataset, so as to also update it: since the `token` is not propagated, the `DownloadConfig` provided to the `datasets.utils.file_utils.get_from_cache` function doesn't have `use_auth_token` set to `token`; it just uses the default, which is None/False. So, when uploading a dataset via Python with `push_to_hub` and the HuggingFace API token passed as the `token` parameter, the dataset can only be uploaded when it is new; otherwise it fails with a `ConnectionError` because the `token` is not propagated as `use_auth_token`. ### Steps to reproduce the bug Let's create a new dataset in our HF account via Python as: ```python from datasets import Dataset data = {"a": [1, 2, 3], "b": [4, 5, 6]} ds = Dataset.from_dict(data) ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>) ``` When we create the `Dataset` for the first time it works and there are no issues, but when trying to actually upload a new version of the same dataset (same name under the same username), we encounter the following issue: ```python from datasets import Dataset data = {"a": [1, 2, 3], "b": [4, 5, 6]} ds = Dataset.from_dict(data) ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>) >>> ConnectionError: Couldn't reach https://huggingface.co/datasets/alvarobartt/demo/resolve/main/README.md (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/<HF_USERNAME>/<HF_DATASET>/resolve/main/README.md. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`')) ``` ### Expected behavior Ideally, the `token` parameter provided to `push_to_hub` should be propagated and used to download the `README.md` when trying to update a `Dataset`, instead of throwing that exception, so that authentication can be done directly through code without running `huggingface-cli login`, as mentioned at https://huggingface.co/docs/datasets/upload_dataset#upload-with-python. ### Environment info - `datasets` version: 2.6.1 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 10.0.0 - Pandas version: 1.5.1
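Until the fix in #5205, one workaround consistent with the error message above was to persist the token so the README download falls back to the stored credential; a minimal sketch using `huggingface_hub` (`<HF_TOKEN>` and the repo id are placeholders, not values from the report):

```python
# Minimal workaround sketch: store the token once, as huggingface-cli
# login would, so get_from_cache can authenticate the README download.
from huggingface_hub import HfFolder
from datasets import Dataset

HfFolder.save_token("<HF_TOKEN>")  # placeholder token
ds = Dataset.from_dict({"a": [1, 2, 3], "b": [4, 5, 6]})
ds.push_to_hub(repo_id="<HF_USERNAME>/<HF_DATASET>", private=True)
```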
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5204/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5203/comments
https://api.github.com/repos/huggingface/datasets/issues/5203/events
https://github.com/huggingface/datasets/pull/5203
1,436,710,518
PR_kwDODunzps5CPlnW
5,203
Update canonical links to Hub links
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,602,250,000
1,667,846,585,000
1,667,846,419,000
MEMBER
null
This PR updates some of the canonical dataset links to their corresponding links on the Hub; closes #5200.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5203/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5202/comments
https://api.github.com/repos/huggingface/datasets/issues/5202/events
https://github.com/huggingface/datasets/issues/5202
1,435,886,090
I_kwDODunzps5VleIK
5,202
CI fails after bulk edit of canonical datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Fix: https://huggingface.co/datasets/paws/discussions/1" ]
1,667,559,080,000
1,667,559,097,000
null
MEMBER
null
``` ______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws', config_name = 'labeled_final' expected_splits = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, config_name, expected_splits", [ ("squad", "plain_text", ["train", "validation"]), ("dalle-mini/wit", "dalle-mini--wit", ["train"]), ("paws", "labeled_final", ["train", "test", "validation"]), ], ) def test_get_dataset_config_info(path, config_name, expected_splits): info = get_dataset_config_info(path, config_name=config_name) assert info.config_name == config_name > assert list(info.splits.keys()) == expected_splits E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation'] E At index 0 diff: 'test' != 'train' E Full diff: E - ['train', 'test', 'validation'] E + ['test', 'train', 'validation'] tests/test_inspect.py:45: AssertionError _ test_get_dataset_info[paws-expected_configs2-expected_splits_in_first_config2] _ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws' expected_configs = ['labeled_final', 'labeled_swap', 'unlabeled_final'] expected_splits_in_first_config = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, expected_configs, expected_splits_in_first_config", [ ("squad", ["plain_text"], ["train", "validation"]), ("dalle-mini/wit", ["dalle-mini--wit"], ["train"]), ("paws", ["labeled_final", "labeled_swap", "unlabeled_final"], ["train", "test", "validation"]), ], ) def test_get_dataset_info(path, expected_configs, expected_splits_in_first_config): infos = get_dataset_infos(path) assert list(infos.keys()) == expected_configs expected_config = expected_configs[0] assert expected_config in infos info = infos[expected_config] assert info.config_name == expected_config > assert list(info.splits.keys()) == expected_splits_in_first_config E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation'] E At index 0 diff: 'test' != 'train' E Full diff: E - ['train', 'test', 'validation'] E + ['test', 'train', 'validation'] tests/test_inspect.py:90: AssertionError ______ test_get_dataset_split_names[paws-labeled_final-expected_splits2] _______ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws', expected_config = 'labeled_final' expected_splits = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, expected_config, expected_splits", [ ("squad", "plain_text", ["train", "validation"]), ("dalle-mini/wit", "dalle-mini--wit", ["train"]), ("paws", "labeled_final", ["train", "test", "validation"]), ], ) def test_get_dataset_split_names(path, expected_config, expected_splits): infos = get_dataset_infos(path) assert expected_config in infos info = infos[expected_config] assert info.config_name == expected_config > assert list(info.splits.keys()) == expected_splits E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation'] E At index 0 diff: 'test' != 'train' E Full diff: E - ['train', 'test', 'validation'] E + ['test', 'train', 'validation'] ```
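An order-insensitive rewrite of the failing assertions would sidestep the split-ordering question entirely; a minimal sketch (an alternative to the metadata fix actually applied, see the linked discussion):

```python
# Minimal sketch: compare split names without depending on their order.
expected_splits = ["train", "test", "validation"]
actual_splits = ["test", "train", "validation"]  # order after the YAML was sorted
assert sorted(actual_splits) == sorted(expected_splits)
```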
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5202/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5201/comments
https://api.github.com/repos/huggingface/datasets/issues/5201/events
https://github.com/huggingface/datasets/pull/5201
1,435,881,554
PR_kwDODunzps5CM0zn
5,201
Do not sort splits in dataset info
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "It would be coherent with https://github.com/huggingface/datasets-server/issues/614#issuecomment-1290534153", "I think we started working on this issue nearly at the same time... :sweat_smile: \r\n- CI was fixed with this: https://huggingface.co/datasets/paws/discussions/1\r\n\r\nRelated issue:\r\n- #5202", "@albertvillanova yeah I noticed it right after the PR :smile: thank you! the fix of the dataset info yaml fixes tests on CI, but in general order of splits in yaml influences the order in which they are displayed in the viewer, if I understand it correctly. So I suggest not to sort splits in yaml initially to avoid this for other datasets in the future. I think [this change](https://github.com/huggingface/datasets/pull/5201/files#diff-198ba4fdf2f94cb3e1aba8a0170a43b08d4ab5636d682374321c5a383a8be24dR571) should work for it. \r\n\r\nChanges to tests here maybe can be reverted considering that order in yaml now corresponds to the one in tests, thanks to your change in the dataset info.", "Hehe, @polinaeterna, we make comments nearly at the same time as well... :laughing: " ]
1,667,558,841,000
1,667,573,257,000
1,667,573,109,000
CONTRIBUTOR
null
I suggest not sorting splits by name in the `dataset_info` in the README so that they are displayed in the order specified in the loading script. Otherwise the `test` split is displayed first; see this repo: https://huggingface.co/datasets/paws. What do you think? Meanwhile, I added sorting in the tests to fix CI (for the same dataset).
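For illustration, a minimal sketch (not the `datasets` internals) of why alphabetical sorting produces the split order seen in the CI failures above; the split sizes are hypothetical:

```python
# Sorting split names alphabetically puts "test" before "train";
# preserving insertion order keeps the loading-script order.
splits = {"train": 49401, "test": 8000, "validation": 8000}  # hypothetical sizes

print(list(dict(sorted(splits.items()))))  # ['test', 'train', 'validation']
print(list(splits))                        # ['train', 'test', 'validation']
```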
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5201/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5201/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5200/comments
https://api.github.com/repos/huggingface/datasets/issues/5200/events
https://github.com/huggingface/datasets/issues/5200
1,435,831,559
I_kwDODunzps5VlQ0H
5,200
Some links to canonical datasets in the docs are outdated
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "Thanks for catching this, I can go through the docs and replace the links to their corresponding datasets on the Hub!" ]
1,667,556,381,000
1,667,846,420,000
1,667,846,420,000
CONTRIBUTOR
null
As we no longer have canonical datasets in the GitHub repo, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and there are probably more. These links should be replaced by links to the corresponding datasets on the Hub.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5200/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5199/comments
https://api.github.com/repos/huggingface/datasets/issues/5199/events
https://github.com/huggingface/datasets/pull/5199
1,434,818,836
PR_kwDODunzps5CJSv1
5,199
Deprecate dummy data generation command
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,487,954,000
1,667,570,510,000
1,667,570,387,000
CONTRIBUTOR
null
Deprecate the `dummy_data` CLI command.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5199/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5198/comments
https://api.github.com/repos/huggingface/datasets/issues/5198/events
https://github.com/huggingface/datasets/pull/5198
1,434,699,165
PR_kwDODunzps5CI49J
5,198
Add note about the name of a dataset script
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,483,492,000
1,667,566,079,000
1,667,565,961,000
CONTRIBUTOR
null
Add a note that a dataset script should have the same name as the repo/directory, somewhat related to this issue: https://github.com/huggingface/datasets/issues/5193. Also fixed two minor issues in the audio docs (broken links).
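A minimal sketch of the convention this note documents, with hypothetical repo and file names; when the names don't match, the loader falls back to a packaged module such as `ImageFolder` (see the issue linked above):

```python
from datasets import load_dataset

# Expected repo layout (names are hypothetical):
#   my-dataset/
#   ├── README.md
#   └── my-dataset.py   <- same name as the repo/directory
ds = load_dataset("my-org/my-dataset")  # picks up my-dataset.py
```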
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5198/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5197/comments
https://api.github.com/repos/huggingface/datasets/issues/5197/events
https://github.com/huggingface/datasets/pull/5197
1,434,676,150
PR_kwDODunzps5CI0Ac
5,197
[zstd] Use max window log size
{ "login": "reyoung", "id": 728699, "node_id": "MDQ6VXNlcjcyODY5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/728699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reyoung", "html_url": "https://github.com/reyoung", "followers_url": "https://api.github.com/users/reyoung/followers", "following_url": "https://api.github.com/users/reyoung/following{/other_user}", "gists_url": "https://api.github.com/users/reyoung/gists{/gist_id}", "starred_url": "https://api.github.com/users/reyoung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reyoung/subscriptions", "organizations_url": "https://api.github.com/users/reyoung/orgs", "repos_url": "https://api.github.com/users/reyoung/repos", "events_url": "https://api.github.com/users/reyoung/events{/privacy}", "received_events_url": "https://api.github.com/users/reyoung/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@albertvillanova Please take a review.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5197). All of your documentation changes will be reflected on that endpoint." ]
1,667,482,558,000
1,667,483,119,000
null
NONE
null
`ZstdDecompressor` has a parameter `max_window_size` that limits maximum memory usage when decompressing zstd files. The default `max_window_size` is not enough when files are compressed with the `zstd --ultra` flag. Change `max_window_size` to zstd's maximum window size. Note that `zstd.WINDOWLOG_MAX` is the log2 of the maximum window size.
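A minimal sketch of the fix's idea, assuming the `zstandard` Python package; the input filename is hypothetical:

```python
import zstandard

# WINDOWLOG_MAX is the log2 of the maximum window size, so the size itself
# is 2 ** WINDOWLOG_MAX; the default limit is too small for archives
# produced with `zstd --ultra`.
dctx = zstandard.ZstdDecompressor(max_window_size=2 ** zstandard.WINDOWLOG_MAX)
with open("data.jsonl.zst", "rb") as fh:  # hypothetical file
    with dctx.stream_reader(fh) as reader:
        while chunk := reader.read(1 << 16):
            pass  # process the decompressed chunk
```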
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5197/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5196/comments
https://api.github.com/repos/huggingface/datasets/issues/5196/events
https://github.com/huggingface/datasets/pull/5196
1,434,401,646
PR_kwDODunzps5CH439
5,196
Use hfh hf_hub_url function
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5196). All of your documentation changes will be reflected on that endpoint.", "@lhoestq I think we should first agree if `datasets` can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: some users may have override this.\r\n\r\nIf so, I then would suggest to initiate a deprecation cycle.", "After a discussion with the rest of the datasets team, we agreed we can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: this will have minimal impact, only for **private Hubs**. We will address eventual possible impacts in the future.\r\n\r\nAdditionally, we also ignore `config.HUB_DEFAULT_VERSION`.\r\n\r\nSee explanation in this PR description: https://github.com/huggingface/datasets/pull/5196#issue-1434401646" ]
1,667,470,089,000
1,667,981,738,000
1,667,978,112,000
MEMBER
null
Small refactoring to use the `hf_hub_url` function from `huggingface_hub`. This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`. This is a necessary step before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood).

EDIT: ~~Finally, we use our `config.HUB_DATASETS_URL` when using `hfh.hf_hub_url`~~

There is a breaking change: the `hfh` `hf_hub_url` function uses:
- the `hfh` `HUGGINGFACE_CO_URL_TEMPLATE` URL template, which differs from the `datasets` `config.HUB_DATASETS_URL`
- the `hfh` `DEFAULT_REVISION`, instead of the `datasets` `config.HUB_DEFAULT_VERSION`
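A minimal sketch of the `hfh` function being adopted (the repo id and filename are placeholders); note how the default revision comes from `hfh`, which is the breaking change described above:

```python
from huggingface_hub import hf_hub_url

# Build a resolve URL for a file in a dataset repo (placeholder names).
url = hf_hub_url("squad", "README.md", repo_type="dataset")
print(url)  # https://huggingface.co/datasets/squad/resolve/main/README.md
```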
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5196/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5195/comments
https://api.github.com/repos/huggingface/datasets/issues/5195/events
https://github.com/huggingface/datasets/pull/5195
1,434,290,689
PR_kwDODunzps5CHhF2
5,195
[wip testing docs]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5195). All of your documentation changes will be reflected on that endpoint." ]
1,667,464,654,000
1,667,464,973,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5195/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5194/comments
https://api.github.com/repos/huggingface/datasets/issues/5194/events
https://github.com/huggingface/datasets/pull/5194
1,434,206,951
PR_kwDODunzps5CHPNY
5,194
Fix docs about dataset_info in YAML
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,459,423,000
1,667,482,287,000
1,667,482,161,000
MEMBER
null
This PR fixes some misalignments in the docs after we transferred the dataset_info from `dataset_infos.json` to YAML in the dataset card:
- #4926

Related to:
- #5193
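As a reminder of where that info now surfaces programmatically, a minimal sketch (the dataset name is a placeholder); the builder reads the `dataset_info` from the dataset card's YAML block rather than from `dataset_infos.json`:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("squad")  # placeholder dataset
print(builder.info.splits)  # populated from the card's YAML dataset_info
```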
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5194/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5193/comments
https://api.github.com/repos/huggingface/datasets/issues/5193/events
https://github.com/huggingface/datasets/issues/5193
1,433,883,780
I_kwDODunzps5Vd1SE
5,193
"One or several metadata. were found, but not in the same directory or in a parent directory"
{ "login": "lambda-science", "id": 20109584, "node_id": "MDQ6VXNlcjIwMTA5NTg0", "avatar_url": "https://avatars.githubusercontent.com/u/20109584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lambda-science", "html_url": "https://github.com/lambda-science", "followers_url": "https://api.github.com/users/lambda-science/followers", "following_url": "https://api.github.com/users/lambda-science/following{/other_user}", "gists_url": "https://api.github.com/users/lambda-science/gists{/gist_id}", "starred_url": "https://api.github.com/users/lambda-science/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lambda-science/subscriptions", "organizations_url": "https://api.github.com/users/lambda-science/orgs", "repos_url": "https://api.github.com/users/lambda-science/repos", "events_url": "https://api.github.com/users/lambda-science/events{/privacy}", "received_events_url": "https://api.github.com/users/lambda-science/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also unrelated but still: https://huggingface.co/docs/datasets/image_dataset#generate-the-dataset\r\n```If your loading script passed the test, you should now have a dataset_infos.json file in your dataset folder.```\r\nIt's not the case anymore as it's now in the readme.md, it was confusing to me", "And here is my data loader script: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data/blob/main/SDH_16k.py\r\nI have one file archive to download that contains the images for all splits and one `metadata.jsonl` to download that contains the informations about what image goes into what split.", "Hi @lambda-science! It seems that your repo is recognized as a packaged module [ImageFolder](https://huggingface.co/docs/datasets/main/en/image_dataset#imagefolder), not as a dataset with the custom loading script, because loader looks for a script that has the same name as the dataset repo. So please try to rename your script to `MyoQuant-SDH-Data.py`, this should help.", "> Hi @lambda-science! It seems that your repo is recognized as a packaged module [ImageFolder](https://huggingface.co/docs/datasets/main/en/image_dataset#imagefolder), not as a dataset with the custom loading script, because loader looks for a script that has the same name as the dataset repo. So please try to rename your script to `MyoQuant-SDH-Data.py`, this should help.\r\n\r\nHi !\r\n\r\nThank you for your answer. That was... embarrassingly easy, sorry for this issue, everything is fixed now ! \r\n\r\nHave a nice day ! :)", "@lambda-science that's not embarrassing at all! it's actually not clear from the documentation that the script should have the same name, so thank you for the issue, we'll add this information to the docs :) " ]
1,667,429,185,000
1,667,482,756,000
1,667,482,544,000
NONE
null
### Describe the bug

When loading my own dataset, I get an error. Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data. And the error after loading with:

```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```

```python
Downloading readme: 100%|██████████| 3.34k/3.34k [00:00<00:00, 4.45MB/s]
Using custom data configuration SDH_16k-53e7301a92ab0025
Downloading and preparing dataset None/SDH_16k to /home/corentin/.cache/huggingface/datasets/corentinm7___imagefolder/SDH_16k-53e7301a92ab0025/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data: 100%|██████████| 3.28M/3.28M [00:00<00:00, 4.31MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.75s/it]
Downloading data: 100%|██████████| 1.13G/1.13G [00:15<00:00, 74.3MB/s]
Downloading data files: 100%|██████████| 1/1 [00:16<00:00, 16.09s/it]
Extracting data files: 100%|██████████| 1/1 [00:13<00:00, 13.16s/it]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/load.py", line 1742, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 814, in download_and_prepare
    self._download_and_prepare(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1423, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 905, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1374, in _prepare_split
    for key, record in logging.tqdm(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 394, in _generate_examples
    raise ValueError(
ValueError: One or several metadata. were found, but not in the same directory or in a parent directory of /home/corentin/.cache/huggingface/datasets/downloads/extracted/60c4aa8d4da3065bb3d310de4373dffd73bd4dc331aedcb4ee867febe4fdb7cd/validation/sick/2_CG_SDH_TAM_Bin1cKO_ko_pla_4_1640.tif.
```

However, the test command works fine:

```
datasets-cli test hugging_face_play/ds_test/SDH_16k.py --save_info --all_configs --force_redownload
```

```
Using custom data configuration SDH_16k
Testing builder 'SDH_16k' (1/1)
Downloading and preparing dataset sdh_16k/SDH_16k to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d...
Downloading data: 100%|██████████| 1.13G/1.13G [00:14<00:00, 76.5MB/s]
Downloading data files: 100%|██████████| 1/1 [00:15<00:00, 15.66s/it]
Downloading data: 100%|██████████| 3.28M/3.28M [00:02<00:00, 1.44MB/s]
Downloading data files: 100%|██████████| 1/1 [00:03<00:00, 3.21s/it]
Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 11586.48it/s]
Extracting data files: 100%|██████████| 1/1 [00:13<00:00, 13.42s/it]
Dataset sdh_16k downloaded and prepared to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d. Subsequent calls will reuse this data.
100%|██████████| 3/3 [00:00<00:00, 605.27it/s]
Dataset card saved at hugging_face_play/ds_test/README.md
Test successful.
```

### Steps to reproduce the bug

Simply run in Python:

```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```

### Expected behavior

As the test command worked, this error should not appear.

### Environment info

- `datasets` version: 2.6.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5193/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5192/comments
https://api.github.com/repos/huggingface/datasets/issues/5192/events
https://github.com/huggingface/datasets/pull/5192
1,433,199,790
PR_kwDODunzps5CD2BQ
5,192
Drop labels in Image and Audio folders if files are on different levels in directory or if there is only one label
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint." ]
1,667,397,701,000
1,667,989,192,000
null
CONTRIBUTOR
null
Will close https://github.com/huggingface/datasets/issues/5153

Drop labels by default (`drop_labels=None`) when:
* there are files on different levels of the directory hierarchy, by checking their path depth
* all files are in the same directory (= only one label was inferred)

The first case fixes layouts like this:
```
repo
    image3.jpg
    image4.jpg
    data
        image1.jpg
        image2.jpg
```
The second one fixes layouts like this:
```
repo
    image1.jpg
    image2.jpg
    image3.jpg
```
This is mostly to fix the viewer for people who just drop images into the root dir via the Hub interface. I added tests for both cases, on local and remote files. **I also changed the data files for the old drop_labels test** (`test_generate_examples_drop_labels`). The files I provide to `test_generate_examples_drop_labels` now have the "canonical" classification structure (two dirs), in order not to change the logic of the test (= not to check the two cases addressed in this PR).
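A minimal sketch of the resulting default behavior, with hypothetical local paths:

```python
from datasets import load_dataset

# With files at mixed depths or all in one directory, labels are now
# dropped by default (drop_labels=None); opt back in explicitly.
ds = load_dataset("imagefolder", data_dir="./repo")
ds_labeled = load_dataset("imagefolder", data_dir="./repo", drop_labels=False)
```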
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5192/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5191/comments
https://api.github.com/repos/huggingface/datasets/issues/5191/events
https://github.com/huggingface/datasets/pull/5191
1,433,191,658
PR_kwDODunzps5CD0Qp
5,191
Make torch.Tensor and spacy models cacheable
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667,397,378,000
1,667,409,648,000
1,667,409,522,000
CONTRIBUTOR
null
Override `Pickler.save` to implement deterministic, lazily registered reduction functions (inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) for `torch.Tensor` and spaCy models. Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/3178
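For readers unfamiliar with the mechanism, a minimal sketch of the underlying idea; it uses the standard `reducer_override` hook (Python 3.8+) rather than the `Pickler.save` override the PR actually implements, and only covers `torch.Tensor`:

```python
import io
import pickle

import torch


class DeterministicPickler(pickle.Pickler):
    def reducer_override(self, obj):
        if isinstance(obj, torch.Tensor):
            # Reduce to a numpy array, which pickles deterministically
            # for identical contents, so fingerprint hashes are stable.
            return (torch.from_numpy, (obj.detach().cpu().numpy(),))
        return NotImplemented  # default behavior for everything else


buf = io.BytesIO()
DeterministicPickler(buf).dump(torch.zeros(3))
```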
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5191/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5190/comments
https://api.github.com/repos/huggingface/datasets/issues/5190/events
https://github.com/huggingface/datasets/issues/5190
1,433,014,626
I_kwDODunzps5VahFi
5,190
`path` is `None` when downloading a custom audio dataset from the Hub
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Yes, this is expected behavior - we do this as a security measure to not leak local paths (this info would be useless on other users' machines anyways) and only push audio bytes. \r\n" ]
1,667,389,885,000
1,667,393,702,000
1,667,393,702,000
MEMBER
null
### Describe the bug

I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None`. Here's an example:

```python
from datasets import load_dataset

ds = load_dataset("lewtun/audio-test-push")
ds["train"][0]
# {
#     "audio": {
#         "path": None,  # <-- Is this expected?
#         "array": array([3.97140226e-07, 7.30310290e-07, 7.56406735e-07, ...,
#                         -1.19636677e-01, -1.16811886e-01, -1.12441722e-01]),
#         "sampling_rate": 44100,
#     },
#     "song_id": 0,
#     "genre_id": 0,
#     "genre": "Electronic",
# }
```

Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :)

### Steps to reproduce the bug

1. Create an audio dataset with the `audiofolder` feature
2. Push the dataset to the Hub with `push_to_hub()`
3. Download the Hub dataset and inspect the `audio.path` feature

### Expected behavior

`audio.path` points to the file associated with the audio data.

### Environment info

- `datasets` version: 2.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
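Since only the audio bytes are pushed (local paths are deliberately dropped, as the maintainer's reply above explains), one workaround is to keep the filename in its own column before pushing. A minimal sketch, with hypothetical directory and repo names:

```python
import os

from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="./audio_data")  # hypothetical local folder
# Record the original filename explicitly; it survives push_to_hub(),
# unlike audio["path"], which becomes None on the Hub.
ds = ds.map(lambda ex: {"file_name": os.path.basename(ex["audio"]["path"])})
ds.push_to_hub("user/audio-test-push")  # hypothetical repo id
```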
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5190/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5189/comments
https://api.github.com/repos/huggingface/datasets/issues/5189/events
https://github.com/huggingface/datasets/issues/5189
1,432,769,143
I_kwDODunzps5VZlJ3
5,189
Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "I have to admit I'm not a fan of this idea, as this would result in a non-consistent behavior between tabular and non-tabular datasets, which is confusing if done without the context you provided. Instead, we could consider returning a `Dataset` object rather than `DatasetDict` if there is only one split in the generated dataset. But then again, I think this lib is a bit too old to make such changes. @lhoestq @albertvillanova WDYT?\r\n\r\n", "We can brainstorm here to see how we could make it happen ? And then depending on the options we see if it's a change we can do.\r\n\r\nI'm starting with a first reasoning\r\n\r\nCurrently not passing `split=` in `load_dataset` means \"return a dict with each split\".\r\n\r\nNow what would happen if a dataset has no split ? Ideally it should return one Dataset. And passing `split=` would have no sense. So depending on the dataset content, not passing `split=` should return a dict or a Dataset. In particular, those two cases should work:\r\n```python\r\n# case 1: dataset without split\r\nds = load_dataset(\"dataset_without_split\")\r\nds[0], ds[\"column_name\"], list(ds) # we want this\r\n\r\n# case 2: dataset with splits\r\nds = load_dataset(\"dataset_with_splits\")\r\nds[\"train\"] # this works and can't be changed\r\nds = load_dataset(\"dataset_with_splits\", split=\"train\")\r\nds[0], ds[\"column_name\"], list(ds) # this works and can't be changed\r\n```\r\n\r\nI can see several ideas:\r\n1. allowing `load_dataset` to return a different object based on the dataset content - either a Dataset or a DatasetDict\r\n - we can update `get_dataset_split_names` to return None or a list if users want to know in advance what object will be returned. They can also use `isinstance` _a posteriori_\r\n - but in this case we expect users to be careful when loading datasets and always to extra steps to check if they got a Dataset or DatasetDict\r\n2. merge Dataset and DatasetDict objects\r\n - they already share many functions: map, filter, push_to_hub etc.\r\n - we can define `ds[0]` to be the first item of the first split, and consider that the uses accesses rows from the full table of all the splits concatenated\r\n - however there is a collision when doing `ds[\"column_name\"]` or `ds[\"train\"]` that we need to address: the first returns a list, while the other returns a Dataset.\r\n\r\nWhat are your opinions on those two ideas ? Do you have other ideas in mind ?", "I like the first idea more (concatenating splits doesn't seem useful, no?). This is a significant breaking change, so I think we should do a poll (or something similar) to gather more info on the actual \"expected behavior\" and wait for Datasets 3.0 if we decide to implement it.\r\n\r\nPS: @thomwolf also suggested the same thing a while ago (https://github.com/huggingface/datasets/issues/743#issuecomment-746074641).", "I think it's an interesting improvement to the user experience for a case that comes often (no split) so I would definitively support it.\r\n\r\nI would be more in favor of option 2 rather than returning various types of objects from load_dataset and handling carefully the possible collisions indeed", "Related: if a dataset only has one split, we don't show the splits select control in the dataset viewer on the Hub, eg. 
compare https://huggingface.co/datasets/hf-internal-testing/fixtures_image_utils/viewer/image/test with https://huggingface.co/datasets/glue/viewer/mnli/test.\r\n\r\nSee https://github.com/huggingface/moon-landing/pull/3858 for more details (internal)", "I feel like the second idea is a bit more overkill. \r\n@severo I would say it's a bit irrelevant to the problem we have but is a separate problem @polinaeterna is solving at the moment. πŸ˜… (also discussed on slack)", "OK, sorry for polluting the thread. The relation I saw with the dataset viewer is that from a UX point of view, we hide the concepts of split and configuration whenever possible -> this issue feels like doing the same in the datasets library.", "I would agree that returning different types based on the content of the dataset might be confusing.\r\n\r\nWe can do something similar to what `fetch_*` or `load_*` from `sklearn.datasets` do, which is to have an arg which changes the type of the returned type. For instance, `load_iris` would return a dict, but `load_iris(..., return_X_y=True)` would return a tuple.\r\n\r\nHere we can have a similar arg such as `return_X` which would then only return a single `DataSet` or an array.", "> I feel like the second idea is a bit more overkill.\r\n\r\nOverkill in what sense ?\r\n\r\n> Here we can have a similar arg such as return_X which would then only return a single DataSet or an array.\r\n\r\nRight now one can already pass `split=\"all\"` to get one `Dataset` object with all the data in it (unsplit). We could also have something like `return_all=True` so make the API clearer.\r\n\r\n> I would be more in favor of option 2 rather than returning various types of objects from load_dataset and handling carefully the possible collisions indeed\r\n\r\nI think it would be ok to handle the collision by allowing both `ds[\"train\"]` and `ds[\"column_name\"]` (and maybe adding something like `ds.splits` for those who want to iterate over the splits or add new ones)", "Would it make sense to remove the notion of \"split\" in `load_dataset`? I feel a lof of it comes from the want to have some sort of group of more or less similar dataset. \"train\"/\"test\"/\"validation\" are the traditional ones, but there are some datasets that have much more splits.\r\n\r\nWould it make sense to force `load_dataset` to only load a single `Dataset` object, and fail if it doesn't point to one. And have another method that's like `load_dataset_group_info` that can return a very arbitrary info class (Dict, List whatever), but you need to pass individual infos to `load_dataset` to run anything? Typically I don't think `DatasetDict.map` is really that helpful, but that's my personal opinion. 
This would help make things more readable (typically knowing if an object is a `Dataset` or a `DatasetDict`)", "> Would it make sense to remove the notion of \"split\" in load_dataset?\r\n\r\nI think we need to keep it - though in practice people can name the splits whatever they want anyway.\r\n\r\n> Would it make sense to force load_dataset to only load a single Dataset object, and fail if it doesn't point to one.\r\n\r\nWe need to keep backward compatibility ideally - in particular the load_dataset + ds[\"train\"] one", "> I think we need to keep it - though in practice people can name the splits whatever they want anyway.\r\n\r\nIt was my understanding that the whole issue was that `load_dataset` returned multiple types of objects.\r\n\r\n> We need to keep backward compatibility ideally - in particular the load_dataset + ds[\"train\"] one\r\n\r\nYeah sorry I meant ideally. One can always start developing `load_dataset_v2` can deprecate the first one and remove it in the longer term.", "> It was my understanding that the whole issue was that load_dataset returned multiple types of objects.\r\n\r\nYes indeed, but we still want to keep a way to load the train/val/test/whatever splits alone ;)", "@thomasw21's solution is good but it will break backwards compatibility. πŸ˜…" ]
1,667,380,502,000
1,667,933,549,000
null
CONTRIBUTOR
null
### Feature request

Sorry for the cryptic name, but I'd like to explain using the code itself. When I want to load a specific dataset from a repository (for instance, this one: https://huggingface.co/datasets/inria-soda/tabular-benchmark):

```python
from datasets import load_dataset

dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```

The `datasets` library is essentially designed for people who'd like to use benchmark datasets on various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, for tabular workflows, having fixed train and test splits usually ends up with the model overfitting to the validation split, so users tend to rely on validation techniques like `StratifiedKFoldCrossValidation`, or `GridSearchCrossValidation` when they tune hyperparameters; in other words, they often create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the split is done by the authors. It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.

```diff
from datasets import load_dataset

dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```

### Motivation

I explained it above 😅

### Your contribution

I think this is quite a big change that seems small (e.g. how do we determine which datasets should not be loaded into a `train` split?), so it's best if we discuss it first!
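For reference, a minimal sketch of what is already available today, per the `split="all"` option mentioned in the discussion above; it returns a single unsplit dataset instead of a `DatasetDict`:

```python
from datasets import load_dataset

ds = load_dataset(
    "inria-soda/tabular-benchmark",
    data_files=["reg_cat/house_sales.csv"],
    split="all",
    streaming=True,
)
print(next(iter(ds)))  # no ds["train"] indirection needed
```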
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5189/timeline
null
null
null
null
false
End of preview.