Dataset schema: column name, type, and value statistics.

| Column | Type | Stats |
| --- | --- | --- |
| url | string | lengths 61 to 61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 75 to 75 |
| comments_url | string | lengths 70 to 70 |
| events_url | string | lengths 68 to 68 |
| html_url | string | lengths 49 to 51 |
| id | int64 | 758M to 1.95B |
| node_id | string | lengths 18 to 32 |
| number | int64 | 1.2k to 6.31k |
| title | string | lengths 1 to 290 |
| user | dict | |
| labels | list | lengths 0 to 3 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0 to 4 |
| milestone | dict | |
| comments | list | lengths 0 to 30 |
| created_at | timestamp[ns, tz=UTC] | |
| updated_at | timestamp[ns, tz=UTC] | |
| closed_at | timestamp[ns, tz=UTC] | |
| author_association | string | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0 to 1 |
| pull_request | dict | |
| body | string | lengths 0 to 36.2k |
| reactions | dict | |
| timeline_url | string | lengths 70 to 70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
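A hedged sketch of loading a dataset with this schema. The Hub repo id below is hypothetical; the rows that follow are GitHub issues and pull requests from the huggingface/datasets repository.

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the actual Hub dataset id.
issues = load_dataset("some-user/github-issues", split="train")

# Column types match the schema above, e.g. created_at is
# timestamp[ns, tz=UTC] and is_pull_request is a bool.
print(issues.features["created_at"])
print(issues[0]["title"])
```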
https://api.github.com/repos/huggingface/datasets/issues/4240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4240/comments
https://api.github.com/repos/huggingface/datasets/issues/4240/events
https://github.com/huggingface/datasets/pull/4240
1,217,287,594
PR_kwDODunzps423xRl
4,240
Fix yield for crd3
{ "avatar_url": "https://avatars.githubusercontent.com/u/21066979?v=4", "events_url": "https://api.github.com/users/shanyas10/events{/privacy}", "followers_url": "https://api.github.com/users/shanyas10/followers", "following_url": "https://api.github.com/users/shanyas10/following{/other_user}", "gists_url": "https://api.github.com/users/shanyas10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shanyas10", "id": 21066979, "login": "shanyas10", "node_id": "MDQ6VXNlcjIxMDY2OTc5", "organizations_url": "https://api.github.com/users/shanyas10/orgs", "received_events_url": "https://api.github.com/users/shanyas10/received_events", "repos_url": "https://api.github.com/users/shanyas10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shanyas10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shanyas10/subscriptions", "type": "User", "url": "https://api.github.com/users/shanyas10" }
[]
closed
false
null
[]
null
[ "I don't think you need to generate new dummy data, since they're in the same format as the original data.\r\n\r\nThe CI is failing because of this error:\r\n```python\r\n> turn[\"names\"] = turn[\"NAMES\"]\r\nE TypeError: tuple indices must be integers or slices, not str...
2022-04-27T12:31:36Z
2022-04-29T12:41:41Z
2022-04-29T12:41:41Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4240.diff", "html_url": "https://github.com/huggingface/datasets/pull/4240", "merged_at": "2022-04-29T12:41:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4240.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4240" }
Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example, and modified the features accordingly:

```
"turns": [
    {
        "names": datasets.features.Sequence(datasets.Value("string")),
        "utterances": datasets.features.Sequence(datasets.Value("string")),
        "number": datasets.Value("int32"),
    }
],
}
```

I wasn't able to run the `datasets-cli dummy_data datasets` command. Is there a workaround for this?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4240/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4240/timeline
null
null
true
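The PR body above describes grouping every turn for a chunk id into one yielded example. A minimal sketch of that idea, assuming hypothetical source keys throughout except `NAMES`, which appears in the CI error quoted in the comments:

```python
import json

def _generate_examples(filepath):
    """Yield one example per chunk, carrying all of that chunk's turns."""
    with open(filepath, encoding="utf-8") as f:
        chunks = json.load(f)
    for idx, chunk in enumerate(chunks):
        yield idx, {
            "turns": [
                {
                    "names": turn["NAMES"],
                    "utterances": turn["UTTERANCES"],  # hypothetical key
                    "number": turn["NUMBER"],          # hypothetical key
                }
                for turn in chunk["TURNS"]             # hypothetical key
            ],
        }
```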
https://api.github.com/repos/huggingface/datasets/issues/2492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2492/comments
https://api.github.com/repos/huggingface/datasets/issues/2492/events
https://github.com/huggingface/datasets/pull/2492
919,718,102
MDExOlB1bGxSZXF1ZXN0NjY4OTkxODk4
2,492
Eduge
{ "avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4", "events_url": "https://api.github.com/users/enod/events{/privacy}", "followers_url": "https://api.github.com/users/enod/followers", "following_url": "https://api.github.com/users/enod/following{/other_user}", "gists_url": "https://api.github.com/users/enod/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/enod", "id": 6023883, "login": "enod", "node_id": "MDQ6VXNlcjYwMjM4ODM=", "organizations_url": "https://api.github.com/users/enod/orgs", "received_events_url": "https://api.github.com/users/enod/received_events", "repos_url": "https://api.github.com/users/enod/repos", "site_admin": false, "starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enod/subscriptions", "type": "User", "url": "https://api.github.com/users/enod" }
[]
closed
false
null
[]
null
[]
2021-06-13T05:10:59Z
2021-06-22T09:49:04Z
2021-06-16T10:41:46Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2492.diff", "html_url": "https://github.com/huggingface/datasets/pull/2492", "merged_at": "2021-06-16T10:41:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2492" }
Hi, awesome folks behind the huggingface! Here is my PR for the text classification dataset in Mongolian. Please do let me know in case you have anything to clarify. Thanks & Regards, Enod
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2492/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2968/comments
https://api.github.com/repos/huggingface/datasets/issues/2968/events
https://github.com/huggingface/datasets/issues/2968
1,007,209,488
I_kwDODunzps48CMwQ
2,968
`DatasetDict` cannot be exported to parquet if the splits have different features
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "This is because you have to specify which split corresponds to what file:\r\n```python\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```\r\n\r\nOtherwise it tries to concatenate the two spli...
2021-09-25T22:18:39Z
2021-10-07T22:47:42Z
2021-10-07T22:47:26Z
MEMBER
null
null
null
## Describe the bug

I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly. For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folders representing individual splits. This works too, as long as the splits have identical features. If a split has different features from neighboring splits, then loading the dataset will fail: a single schema is used to load both splits, resulting in a failure to load the second parquet file.

## Steps to reproduce the bug

The following works as expected:

```python
from datasets import load_dataset

ds = load_dataset("lhoestq/custom_squad")

ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")

brand_new_dataset = load_dataset("ds")
```

Modifying a single split to add a new feature ends up in a crash:

```python
from datasets import load_dataset

ds = load_dataset("lhoestq/custom_squad")

def identical_answers(e):
    e['identical_answers'] = len(set(e['answers']['text'])) == 1
    return e

ds['validation'] = ds['validation'].map(identical_answers)

ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")

brand_new_dataset = load_dataset("ds")
```

```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 26, in <module>
    brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1151, in load_dataset
    builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 642, in download_and_prepare
    self._download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 732, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 1194, in _prepare_split
    writer.write_table(table)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in write_table
    pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in <listcomp>
    pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1257, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1833, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1808, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "identical_answers" does not exist in table schema'
```

It does work, however, to use the `save_to_disk` and `load_from_disk` methods:

```py
from datasets import load_dataset, load_from_disk

ds = load_dataset("lhoestq/custom_squad")

def identical_answers(e):
    e['identical_answers'] = len(set(e['answers']['text'])) == 1
    return e

ds['validation'] = ds['validation'].map(identical_answers)

ds.save_to_disk("local_path")
brand_new_dataset = load_from_disk("local_path")
```

## Expected results

The saving works correctly - but the loading fails. I would expect either an error when saving or an error-less instantiation of the dataset through the parquet files.

If it's helpful, I've traced a possible patch to the `write_table` method here:

https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L424-L425

The writer is built only if the parquet writer is `None`, but I expect we would want to build a new writer as the table schema has changed. Furthermore, it relies on having the property `update_features` set to `True` in order to update the features:

https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L254-L255

but the `ArrowWriter` is instantiated without that option in the `_prepare_split` method of the `ArrowBasedBuilder`:

https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/builder.py#L1190

Updating these two parts to recreate a schema on each split results in an error that is, unfortunately, out of my expertise:

```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 27, in <module>
    brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1163, in load_dataset
    ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 819, in as_dataset
    datasets = utils.map_nested(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 207, in map_nested
    mapped = [
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 208, in <listcomp>
    _single_map_nested((function, obj, types, None, True))
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 143, in _single_map_nested
    return function(data_struct)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 850, in _build_single_dataset
    ds = self._as_dataset(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 920, in _as_dataset
    dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 217, in read
    return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 238, in read_files
    pa_table = self._read_files(files, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 173, in _read_files
    pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 308, in _get_table_from_filename
    table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 327, in read_table
    return table_cls.from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 458, in from_file
    table = _memory_mapped_arrow_table_from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 45, in _memory_mapped_arrow_table_from_file
    pa_table = opened_stream.read_all()
File "pyarrow/ipc.pxi", line 563, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status
OSError: Header-type of flatbuffer-encoded Message is not RecordBatch.
```

## Environment info

- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.14.7-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2968/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2968/timeline
null
completed
false
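The comment quoted above points at the fix: map each split to its own parquet file so `load_dataset` doesn't apply one inferred schema to both. A minimal sketch of that workaround:

```python
from datasets import load_dataset

# Tell load_dataset which parquet file belongs to which split, so it
# does not try to read both files with a single inferred schema.
data_files = {
    "train": "train/split.parquet",
    "validation": "validation/split.parquet",
}
brand_new_dataset_2 = load_dataset("ds", data_files=data_files)
```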
https://api.github.com/repos/huggingface/datasets/issues/2048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
https://api.github.com/repos/huggingface/datasets/issues/2048/events
https://github.com/huggingface/datasets/issues/2048
830,953,431
MDU6SXNzdWU4MzA5NTM0MzE=
2,048
github is not always available - probably need a back up
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[]
closed
false
null
[]
null
[]
2021-03-13T18:03:32Z
2022-04-01T15:27:10Z
2022-04-01T15:27:10Z
CONTRIBUTOR
null
null
null
Yesterday morning github wasn't working:

```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```

Suggestion: have a failover system that replicates the data on another system, and reach there if gh isn't reachable? Perhaps gh can be the master and the replica a slave, so there is only one true source.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
null
completed
false
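A hedged sketch of the failover idea in the issue above: try the primary raw.githubusercontent.com URL first and fall back to a mirror. The mirror URL is hypothetical.

```python
import requests

PRIMARY = "https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py"
MIRROR = "https://mirror.example.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py"  # hypothetical

def fetch_with_fallback(urls, timeout=10):
    """Return the body of the first URL that responds successfully."""
    last_err = None
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp.text
        except requests.RequestException as err:
            last_err = err  # remember the failure and try the next source
    raise last_err

script = fetch_with_fallback([PRIMARY, MIRROR])
```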
https://api.github.com/repos/huggingface/datasets/issues/2341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2341/comments
https://api.github.com/repos/huggingface/datasets/issues/2341/events
https://github.com/huggingface/datasets/pull/2341
882,370,933
MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2
2,341
Added the Ascent KB
{ "avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4", "events_url": "https://api.github.com/users/phongnt570/events{/privacy}", "followers_url": "https://api.github.com/users/phongnt570/followers", "following_url": "https://api.github.com/users/phongnt570/following{/other_user}", "gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phongnt570", "id": 6749421, "login": "phongnt570", "node_id": "MDQ6VXNlcjY3NDk0MjE=", "organizations_url": "https://api.github.com/users/phongnt570/orgs", "received_events_url": "https://api.github.com/users/phongnt570/received_events", "repos_url": "https://api.github.com/users/phongnt570/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions", "type": "User", "url": "https://api.github.com/users/phongnt570" }
[]
closed
false
null
[]
null
[ "Thanks for approving it!" ]
2021-05-09T14:17:39Z
2021-05-11T09:16:59Z
2021-05-11T09:16:59Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2341.diff", "html_url": "https://github.com/huggingface/datasets/pull/2341", "merged_at": "2021-05-11T09:16:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/2341.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2341" }
Added the Ascent Commonsense KB of 8.9M assertions. - Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905) - Website: https://ascent.mpi-inf.mpg.de/ (I am the author of the dataset)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2341/timeline
null
null
true
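A hedged usage sketch for the dataset this PR adds; the Hub id "ascent_kb" is an assumption based on the PR title.

```python
from datasets import load_dataset

# "ascent_kb" is an assumed dataset id; adjust if the canonical id differs.
ascent = load_dataset("ascent_kb", split="train")
print(ascent[0])
```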
https://api.github.com/repos/huggingface/datasets/issues/4950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4950/comments
https://api.github.com/repos/huggingface/datasets/issues/4950/events
https://github.com/huggingface/datasets/pull/4950
1,365,458,633
PR_kwDODunzps4-jWZ1
4,950
Update Enwik8 broken link and information
{ "avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4", "events_url": "https://api.github.com/users/mtanghu/events{/privacy}", "followers_url": "https://api.github.com/users/mtanghu/followers", "following_url": "https://api.github.com/users/mtanghu/following{/other_user}", "gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mtanghu", "id": 54819091, "login": "mtanghu", "node_id": "MDQ6VXNlcjU0ODE5MDkx", "organizations_url": "https://api.github.com/users/mtanghu/orgs", "received_events_url": "https://api.github.com/users/mtanghu/received_events", "repos_url": "https://api.github.com/users/mtanghu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions", "type": "User", "url": "https://api.github.com/users/mtanghu" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-09-08T03:15:00Z
2022-09-24T22:14:35Z
2022-09-08T14:51:00Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4950.diff", "html_url": "https://github.com/huggingface/datasets/pull/4950", "merged_at": "2022-09-08T14:51:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4950.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4950" }
The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, and adds a bit more information about enwik8.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4950/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4950/timeline
null
null
true
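A quick way to confirm the repaired link works end to end; the dataset id comes from the URL in the PR description above.

```python
from datasets import load_dataset

# enwik8: the first 100M bytes of an English Wikipedia dump.
enwik8 = load_dataset("enwik8", split="train")
print(enwik8)
```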
https://api.github.com/repos/huggingface/datasets/issues/5734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5734/comments
https://api.github.com/repos/huggingface/datasets/issues/5734/events
https://github.com/huggingface/datasets/issues/5734
1,662,058,028
I_kwDODunzps5jEP4s
5,734
Remove temporary pin of fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[]
2023-04-11T09:04:17Z
2023-04-11T11:04:52Z
2023-04-11T11:04:52Z
MEMBER
null
null
null
Once the root cause is found and fixed, remove the temporary pin introduced by:
- #5731
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5734/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3535/comments
https://api.github.com/repos/huggingface/datasets/issues/3535/events
https://github.com/huggingface/datasets/pull/3535
1,094,633,214
PR_kwDODunzps4wkxv0
3,535
Add SVHN dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2022-01-05T18:29:09Z
2022-01-12T14:14:35Z
2022-01-12T14:14:35Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3535.diff", "html_url": "https://github.com/huggingface/datasets/pull/3535", "merged_at": "2022-01-12T14:14:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/3535.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3535" }
Add the SVHN dataset. Additional notes:
* compared to the TFDS implementation, additionally exposes the "full numbers" config
* adds streaming support for `os.path.splitext` and `scipy.io.loadmat`
* adds `h5py` to the requirements list for the dummy data test
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3535/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3535/timeline
null
null
true
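A hedged usage sketch for the config mentioned above; the exact config id "full_numbers" is an assumption inferred from the PR text.

```python
from datasets import load_dataset

# "full_numbers" is assumed to be the config id for the "full numbers" setting.
svhn = load_dataset("svhn", "full_numbers", split="train")
print(svhn)
```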
https://api.github.com/repos/huggingface/datasets/issues/2472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2472/comments
https://api.github.com/repos/huggingface/datasets/issues/2472/events
https://github.com/huggingface/datasets/issues/2472
917,463,821
MDU6SXNzdWU5MTc0NjM4MjE=
2,472
Fix automatic generation of Zenodo DOI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
{ "closed_at": "2021-07-09T05:50:07Z", "closed_issues": 12, "created_at": "2021-05-31T16:13:06Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/5", "id": 6808903, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "open_issues": 0, "state": "closed", "title": "1.9", "updated_at": "2021-07-12T14:12:00Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/5" }
[ "I have received a reply from Zenodo support:\r\n> We are currently investigating and fixing this issue related to GitHub releases. As soon as we have solved it we will reach back to you.", "Other repo maintainers had the same problem with Zenodo. \r\n\r\nThere is an open issue on their GitHub repo: zenodo/zenodo...
2021-06-10T15:15:46Z
2021-06-14T16:49:42Z
2021-06-14T16:49:42Z
MEMBER
null
null
null
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published". I have contacted Zenodo support to fix this issue. TODO: - [x] Check with Zenodo to fix the issue - [x] Check BibTeX entry is right
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2472/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2472/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3663/comments
https://api.github.com/repos/huggingface/datasets/issues/3663/events
https://github.com/huggingface/datasets/issues/3663
1,121,067,647
I_kwDODunzps5C0iJ_
3,663
[Audio] Path of Common Voice cannot be used for audio loading anymore
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Having talked to @lhoestq, I see that this feature is no longer supported. \r\n\r\nI really don't think this was a good idea. It is a major breaking change and one for which we don't even have a working solution at the moment, which is bad for PyTorch as we don't want to force people to have `datasets` decode audi...
2022-02-01T18:40:10Z
2022-09-21T15:03:09Z
2022-09-21T14:56:22Z
MEMBER
null
null
null
## Describe the bug

## Steps to reproduce the bug

```python
from datasets import load_dataset
from torchaudio import load

ds = load_dataset("common_voice", "ab", split="train")

# both of the following commands fail at the moment
load(ds[0]["audio"]["path"])
load(ds[0]["path"])
```

## Expected results

The path should be the complete absolute path to the downloaded audio file, not some relative path.

## Actual results

```bash
~/hugging_face/venv_3.9/lib/python3.9/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
    150         filepath, frame_offset, num_frames, normalize, channels_first, format)
    151     filepath = os.fspath(filepath)
--> 152     return torch.ops.torchaudio.sox_io_load_audio_file(
    153         filepath, frame_offset, num_frames, normalize, channels_first, format)
    154

RuntimeError: Error loading audio file: failed to open file cv-corpus-6.1-2020-12-11/ab/clips/common_voice_ab_19904194.mp3
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3.dev0
- Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.27
- Python version: 3.9.1
- PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3663/timeline
null
completed
false
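Per the comment quoted above, `datasets` now decodes audio on access rather than guaranteeing a loadable path. A sketch of the replacement access pattern:

```python
from datasets import load_dataset

ds = load_dataset("common_voice", "ab", split="train")

# Accessing the "audio" column triggers decoding; no file path needed.
sample = ds[0]["audio"]
waveform = sample["array"]            # numpy array of the decoded audio
sampling_rate = sample["sampling_rate"]
```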
https://api.github.com/repos/huggingface/datasets/issues/1216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1216/comments
https://api.github.com/repos/huggingface/datasets/issues/1216/events
https://github.com/huggingface/datasets/pull/1216
758,005,982
MDExOlB1bGxSZXF1ZXN0NTMzMjU0ODE2
1,216
Add limit
{ "avatar_url": "https://avatars.githubusercontent.com/u/22435209?v=4", "events_url": "https://api.github.com/users/j-chim/events{/privacy}", "followers_url": "https://api.github.com/users/j-chim/followers", "following_url": "https://api.github.com/users/j-chim/following{/other_user}", "gists_url": "https://api.github.com/users/j-chim/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/j-chim", "id": 22435209, "login": "j-chim", "node_id": "MDQ6VXNlcjIyNDM1MjA5", "organizations_url": "https://api.github.com/users/j-chim/orgs", "received_events_url": "https://api.github.com/users/j-chim/received_events", "repos_url": "https://api.github.com/users/j-chim/repos", "site_admin": false, "starred_url": "https://api.github.com/users/j-chim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j-chim/subscriptions", "type": "User", "url": "https://api.github.com/users/j-chim" }
[]
closed
false
null
[]
null
[ "My bad, didn't see this on the open dataset list. Closing this since it overlaps with PR#1256" ]
2020-12-06T19:46:18Z
2020-12-08T07:52:11Z
2020-12-08T07:52:11Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1216.diff", "html_url": "https://github.com/huggingface/datasets/pull/1216", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1216.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1216" }
This PR adds [LiMiT](https://github.com/ilmgut/limit_dataset), a dataset for literal motion classification/extraction by [Manotas et al., 2020](https://www.aclweb.org/anthology/2020.findings-emnlp.88.pdf).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1216/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1216/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5367/comments
https://api.github.com/repos/huggingface/datasets/issues/5367/events
https://github.com/huggingface/datasets/pull/5367
1,499,174,749
PR_kwDODunzps5FlevK
5,367
Fix remove columns from lazy dict
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-12-15T22:04:12Z
2022-12-15T22:27:53Z
2022-12-15T22:24:50Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5367.diff", "html_url": "https://github.com/huggingface/datasets/pull/5367", "merged_at": "2022-12-15T22:24:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/5367.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5367" }
This was introduced in https://github.com/huggingface/datasets/pull/5252 and is causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597

Basically this code should return a dataset with only one column:

```python
from datasets import *

ds = Dataset.from_dict({"a": range(5)})

def f(x):
    x["b"] = x["a"]
    return x

ds = ds.map(f, remove_columns=["a"])
assert ds.column_names == ["b"]
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5367/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5367/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5701/comments
https://api.github.com/repos/huggingface/datasets/issues/5701/events
https://github.com/huggingface/datasets/pull/5701
1,652,931,399
PR_kwDODunzps5NiSCy
5,701
Add Dataset.from_spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mariosasko Would you or another HF datasets maintainer be able to review this, please?", "Amazing ! Great job @maddiedawson \r\n\r\nDo you know if it's possible to also support writing to Parquet using the HF ParquetWriter if `fil...
2023-04-03T23:51:29Z
2023-06-16T16:39:32Z
2023-04-26T15:43:39Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5701.diff", "html_url": "https://github.com/huggingface/datasets/pull/5701", "merged_at": "2023-04-26T15:43:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5701" }
Adds static method Dataset.from_spark to create datasets from Spark DataFrames. This approach spares users the need to materialize their dataframe: a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train on the dataset. Related issue: https://github.com/huggingface/datasets/issues/5678
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 4, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/5701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5701/timeline
null
null
true
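A hedged usage sketch of the method this PR adds, assuming a local PySpark session:

```python
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("pos", 1), ("neg", 0)], ["text", "label"])

# Create a Hugging Face Dataset directly from the Spark DataFrame,
# without writing it out and reloading it first.
ds = Dataset.from_spark(df)
print(ds.column_names)  # ['text', 'label']
```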
https://api.github.com/repos/huggingface/datasets/issues/4300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4300/comments
https://api.github.com/repos/huggingface/datasets/issues/4300/events
https://github.com/huggingface/datasets/pull/4300
1,230,272,761
PR_kwDODunzps43iA86
4,300
Add API code examples for loading methods
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-09T21:30:26Z
2022-05-25T16:23:15Z
2022-05-25T09:20:13Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4300.diff", "html_url": "https://github.com/huggingface/datasets/pull/4300", "merged_at": "2022-05-25T09:20:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/4300.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4300" }
This PR adds API code examples for loading methods; let me know if I've missed any important parameters we should showcase :)

I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me:

```py
from datasets import inspect_dataset

inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes')

FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
```

Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same option as the first option in `path`)?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4300/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4300/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1883/comments
https://api.github.com/repos/huggingface/datasets/issues/1883/events
https://github.com/huggingface/datasets/pull/1883
808,750,623
MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz
1,883
Add not-in-place implementations for several dataset transforms
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[]
closed
false
null
[]
null
[ "@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)", "I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.", "Now let's update the ...
2021-02-15T18:44:26Z
2021-02-24T14:54:49Z
2021-02-24T14:53:26Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1883.diff", "html_url": "https://github.com/huggingface/datasets/pull/1883", "merged_at": "2021-02-24T14:53:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1883.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1883" }
Should we deprecate in-place versions of such methods?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1883/timeline
null
null
true
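A hedged sketch contrasting the two styles this PR is about; using `rename_column` as the example of a not-in-place transform is an assumption about which methods were covered:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# Not-in-place: returns a new dataset and leaves `ds` unchanged.
ds2 = ds.rename_column("a", "b")

assert ds.column_names == ["a"]
assert ds2.column_names == ["b"]
```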
https://api.github.com/repos/huggingface/datasets/issues/1741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1741/comments
https://api.github.com/repos/huggingface/datasets/issues/1741/events
https://github.com/huggingface/datasets/issues/1741
787,327,060
MDU6SXNzdWU3ODczMjcwNjA=
1,741
error when run fine_tuning on text_classification
{ "avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4", "events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}", "followers_url": "https://api.github.com/users/XiaoYang66/followers", "following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}", "gists_url": "https://api.github.com/users/XiaoYang66/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/XiaoYang66", "id": 43234824, "login": "XiaoYang66", "node_id": "MDQ6VXNlcjQzMjM0ODI0", "organizations_url": "https://api.github.com/users/XiaoYang66/orgs", "received_events_url": "https://api.github.com/users/XiaoYang66/received_events", "repos_url": "https://api.github.com/users/XiaoYang66/repos", "site_admin": false, "starred_url": "https://api.github.com/users/XiaoYang66/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XiaoYang66/subscriptions", "type": "User", "url": "https://api.github.com/users/XiaoYang66" }
[]
closed
false
null
[]
null
[ "none" ]
2021-01-16T02:23:19Z
2021-01-16T02:39:28Z
2021-01-16T02:39:18Z
NONE
null
null
null
dataset: sem_eval_2014_task_1
pretrained_model: bert-base-uncased

error description: when I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always a problem (the error also occurs when I use other datasets). I followed the colab code (url: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=TlqNaB8jIrJW). The error is like this:

```
File "train.py", line 69, in <module>
    trainer.train()
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/transformers/trainer.py", line 784, in train
    for step, inputs in enumerate(epoch_iterator):
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2
```

This is my code:

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    BertForSequenceClassification,
    BertTokenizerFast,
    Trainer,
    TrainingArguments,
)

dataset_name = 'sem_eval_2014_task_1'
num_labels_size = 3
batch_size = 4
model_checkpoint = 'bert-base-uncased'
number_train_epoch = 5

def tokenize(batch):
    return tokenizer(batch['premise'], batch['hypothesis'], truncation=True)

def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
    acc = accuracy_score(labels, preds)
    return {
        'accuracy': acc,
        'f1': f1,
        'precision': precision,
        'recall': recall
    }

model = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)
tokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)

train_dataset = load_dataset(dataset_name, split='train')
test_dataset = load_dataset(dataset_name, split='test')

train_encoded_dataset = train_dataset.map(tokenize, batched=True)
test_encoded_dataset = test_dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=number_train_epoch,
    weight_decay=0.01,
    do_predict=True,
)

trainer = Trainer(
    model=model,
    args=args,
    compute_metrics=compute_metrics,
    train_dataset=train_encoded_dataset,
    eval_dataset=test_encoded_dataset,
    tokenizer=tokenizer
)

trainer.train()
trainer.evaluate()
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1741/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2524/comments
https://api.github.com/repos/huggingface/datasets/issues/2524/events
https://github.com/huggingface/datasets/pull/2524
925,610,934
MDExOlB1bGxSZXF1ZXN0Njc0MDQzNzk1
2,524
Raise FileNotFoundError in WindowsFileLock
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ?", "This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came ...
2021-06-20T14:25:11Z
2021-06-28T09:56:22Z
2021-06-28T08:47:39Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2524.diff", "html_url": "https://github.com/huggingface/datasets/pull/2524", "merged_at": "2021-06-28T08:47:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2524.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2524" }
Closes #2443
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2524/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2524/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4009/comments
https://api.github.com/repos/huggingface/datasets/issues/4009/events
https://github.com/huggingface/datasets/issues/4009
1,179,658,611
I_kwDODunzps5GUClz
4,009
AMI load_dataset error: sndfile library not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/i-am-neo", "id": 102043285, "login": "i-am-neo", "node_id": "U_kgDOBhUOlQ", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "repos_url": "https://api.github.com/users/i-am-neo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "type": "User", "url": "https://api.github.com/users/i-am-neo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)" ]
2022-03-24T15:13:38Z
2022-03-24T15:46:38Z
2022-03-24T15:17:29Z
NONE
null
null
null
## Describe the bug

Getting an error message when loading the AMI dataset.

## Steps to reproduce the bug

```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```

## Expected results

The dataset loads and the first validation example is printed.

## Actual results

```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare
    ) from None
OSError: Cannot find data file.
Original error:
sndfile library not found
```

## Environment info

- `datasets` version: 1.18.3
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- PyArrow version: 7.0.0
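For reference, "sndfile library not found" usually means the system `libsndfile` library that the `soundfile` Python package wraps is missing; that reading is an inference from the error text, not something confirmed in this thread. A quick way to check, sketched below:

```python
# If libsndfile is present, this prints its version; if it is missing,
# importing soundfile raises OSError("sndfile library not found").
import soundfile as sf

print(sf.__libsndfile_version__)
```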
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4009/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4837/comments
https://api.github.com/repos/huggingface/datasets/issues/4837/events
https://github.com/huggingface/datasets/pull/4837
1,337,079,723
PR_kwDODunzps49Fb6l
4,837
Add support for CSV metadata files to ImageFolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Cool thanks ! Maybe let's include this change after the refactoring from FolderBasedBuilder in #3963 to avoid dealing with too many unpleasant conflicts ?", "@lhoestq I resolved the conflicts (AudioFolder also supports CSV metadata...
2022-08-12T11:19:18Z
2022-08-31T12:01:27Z
2022-08-31T11:59:07Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4837.diff", "html_url": "https://github.com/huggingface/datasets/pull/4837", "merged_at": "2022-08-31T11:59:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4837.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4837" }
Fix #4814
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4837/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3573/comments
https://api.github.com/repos/huggingface/datasets/issues/3573/events
https://github.com/huggingface/datasets/pull/3573
1,101,157,676
PR_kwDODunzps4w5oE_
3,573
Add Mauve metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/2321244?v=4", "events_url": "https://api.github.com/users/jthickstun/events{/privacy}", "followers_url": "https://api.github.com/users/jthickstun/followers", "following_url": "https://api.github.com/users/jthickstun/following{/other_user}", "gists_url": "https://api.github.com/users/jthickstun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jthickstun", "id": 2321244, "login": "jthickstun", "node_id": "MDQ6VXNlcjIzMjEyNDQ=", "organizations_url": "https://api.github.com/users/jthickstun/orgs", "received_events_url": "https://api.github.com/users/jthickstun/received_events", "repos_url": "https://api.github.com/users/jthickstun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jthickstun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jthickstun/subscriptions", "type": "User", "url": "https://api.github.com/users/jthickstun" }
[]
closed
false
null
[]
null
[ "Hi ! The CI was failing because `mauve-text` wasn't installed. I added it to the CI setup :)\r\n\r\nI also did some minor changes to the script itself, especially to remove `**kwargs` and explicitly mentioned all the supported arguments (this way if someone does a typo with some parameters they get an error)" ]
2022-01-13T03:52:48Z
2022-01-20T15:00:08Z
2022-01-20T15:00:08Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3573.diff", "html_url": "https://github.com/huggingface/datasets/pull/3573", "merged_at": "2022-01-20T15:00:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3573.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3573" }
Add support for the [Mauve](https://github.com/krishnap25/mauve) metric introduced in this [paper](https://arxiv.org/pdf/2102.01454.pdf) (NeurIPS 2021).
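For context, a minimal usage sketch once the metric is merged, following the standard `datasets` metric API; the toy inputs below are placeholders:

```python
from datasets import load_metric

# MAUVE compares a set of model generations against human-written references.
mauve = load_metric("mauve")
predictions = ["hello world", "goodnight moon"]
references = ["hello world", "goodnight moon"]
results = mauve.compute(predictions=predictions, references=references)
print(results.mauve)  # scalar in (0, 1]; higher means the generations are closer to the references
```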
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3573/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3573/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4647/comments
https://api.github.com/repos/huggingface/datasets/issues/4647/events
https://github.com/huggingface/datasets/issues/4647
1,296,311,270
I_kwDODunzps5NRCPm
4,647
Add Reddit dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[]
2022-07-06T19:49:18Z
2022-07-06T19:49:18Z
null
NONE
null
null
null
## Adding a Dataset - **Name:** *Reddit comments (2015-2018)* - **Description:** *Reddit is an American social news aggregation website, where users can post links, and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using the tools in this directory.* - **Paper:** *https://arxiv.org/abs/1904.06472* - **Data:** *https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit* - **Motivation:** *Dataset for training and evaluating models of conversational response*
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4647/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4647/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1524/comments
https://api.github.com/repos/huggingface/datasets/issues/1524/events
https://github.com/huggingface/datasets/pull/1524
764,521,672
MDExOlB1bGxSZXF1ZXN0NTM4NTQ2MjI0
1,524
ADD: swahili dataset for language modeling
{ "avatar_url": "https://avatars.githubusercontent.com/u/29649801?v=4", "events_url": "https://api.github.com/users/akshayb7/events{/privacy}", "followers_url": "https://api.github.com/users/akshayb7/followers", "following_url": "https://api.github.com/users/akshayb7/following{/other_user}", "gists_url": "https://api.github.com/users/akshayb7/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akshayb7", "id": 29649801, "login": "akshayb7", "node_id": "MDQ6VXNlcjI5NjQ5ODAx", "organizations_url": "https://api.github.com/users/akshayb7/orgs", "received_events_url": "https://api.github.com/users/akshayb7/received_events", "repos_url": "https://api.github.com/users/akshayb7/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akshayb7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akshayb7/subscriptions", "type": "User", "url": "https://api.github.com/users/akshayb7" }
[]
closed
false
null
[]
null
[]
2020-12-12T22:47:18Z
2020-12-17T16:37:16Z
2020-12-17T16:37:16Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1524.diff", "html_url": "https://github.com/huggingface/datasets/pull/1524", "merged_at": "2020-12-17T16:37:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/1524.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1524" }
Add a corpus for Swahili language modelling. All tests passed locally. README updated with all information available.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1524/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1524/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3566/comments
https://api.github.com/repos/huggingface/datasets/issues/3566/events
https://github.com/huggingface/datasets/pull/3566
1,100,155,902
PR_kwDODunzps4w2Tcc
3,566
Add initial electricity time series dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kashif", "id": 8100, "login": "kashif", "node_id": "MDQ6VXNlcjgxMDA=", "organizations_url": "https://api.github.com/users/kashif/orgs", "received_events_url": "https://api.github.com/users/kashif/received_events", "repos_url": "https://api.github.com/users/kashif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "type": "User", "url": "https://api.github.com/users/kashif" }
[]
closed
false
null
[]
null
[ "@kashif Some commits on the PR branch are not authored by you, so could you please open a new PR and not use rebase this time :)? You can copy and paste the dataset dir to the new branch. \r\n\r\n", "making a new PR" ]
2022-01-12T10:21:32Z
2022-02-15T13:31:48Z
2022-02-15T13:31:48Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3566.diff", "html_url": "https://github.com/huggingface/datasets/pull/3566", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3566.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3566" }
Here is an initial prototype time series dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3566/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3566/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2478/comments
https://api.github.com/repos/huggingface/datasets/issues/2478/events
https://github.com/huggingface/datasets/issues/2478
918,507,510
MDU6SXNzdWU5MTg1MDc1MTA=
2,478
Create release script
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "I've aligned the release script with Transformers in #6004, so I think this issue can be closed." ]
2021-06-11T09:38:02Z
2023-07-20T13:22:23Z
null
MEMBER
null
null
null
Create a script so that releases can be done automatically (as done in `transformers`).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2478/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1323/comments
https://api.github.com/repos/huggingface/datasets/issues/1323/events
https://github.com/huggingface/datasets/pull/1323
759,581,919
MDExOlB1bGxSZXF1ZXN0NTM0NTYyNDQ0
1,323
Add CC-News dataset of English language articles
{ "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vblagoje", "id": 458335, "login": "vblagoje", "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "repos_url": "https://api.github.com/users/vblagoje/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "type": "User", "url": "https://api.github.com/users/vblagoje" }
[]
closed
false
null
[]
null
[ "@vblagoje nice work, please add the README.md file and it would be ready", "@lhoestq @tanmoyio @yjernite please have a look at the dataset card. Don't forget that the dataset is still hosted on my private gs bucket and should eventually be moved to the HF bucket", "I will move the files soon and ping you when ...
2020-12-08T16:18:15Z
2021-02-01T16:55:49Z
2021-02-01T16:55:49Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1323.diff", "html_url": "https://github.com/huggingface/datasets/pull/1323", "merged_at": "2021-02-01T16:55:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1323.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1323" }
Adds the [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/) dataset. It contains 708,241 English-language news articles. Although each article has a language field, these tags are not reliable. I've used the Spacy language-detection [pipeline](https://spacy.io/universe/project/spacy-langdetect) to confirm that the article language is indeed English. The prepared dataset is temporarily hosted on my private Google Storage [bucket](https://storage.googleapis.com/hf_datasets/cc_news.tar.gz). We can move it to HF storage and update this PR before merging.
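A rough sketch of the language-confirmation step described above, assuming the spaCy 2.x pipeline API that `spacy-langdetect` documents (the actual filtering code is not part of this PR):

```python
import spacy
from spacy_langdetect import LanguageDetector

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(LanguageDetector(), name="language_detector", last=True)

doc = nlp("CC-News is a corpus of English news articles.")
# doc._.language looks like {"language": "en", "score": 0.99}
print(doc._.language["language"] == "en")
```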
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1323/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1323/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1562/comments
https://api.github.com/repos/huggingface/datasets/issues/1562/events
https://github.com/huggingface/datasets/pull/1562
765,981,749
MDExOlB1bGxSZXF1ZXN0NTM5MTc5ODc3
1,562
Add dataset COrpus of Urdu News TExt Reuse (COUNTER).
{ "avatar_url": "https://avatars.githubusercontent.com/u/14899066?v=4", "events_url": "https://api.github.com/users/arkhalid/events{/privacy}", "followers_url": "https://api.github.com/users/arkhalid/followers", "following_url": "https://api.github.com/users/arkhalid/following{/other_user}", "gists_url": "https://api.github.com/users/arkhalid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arkhalid", "id": 14899066, "login": "arkhalid", "node_id": "MDQ6VXNlcjE0ODk5MDY2", "organizations_url": "https://api.github.com/users/arkhalid/orgs", "received_events_url": "https://api.github.com/users/arkhalid/received_events", "repos_url": "https://api.github.com/users/arkhalid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arkhalid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arkhalid/subscriptions", "type": "User", "url": "https://api.github.com/users/arkhalid" }
[]
closed
false
null
[]
null
[ "Just a small revision from simon's review: 20KB for the dummy_data.zip is fine, you can keep them this way.", "Also the CI is failing because of an error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` that is not related to your dataset and is fixed on master. You can ignore it", "merging since the ...
2020-12-14T06:32:48Z
2020-12-21T13:14:46Z
2020-12-21T13:14:46Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1562.diff", "html_url": "https://github.com/huggingface/datasets/pull/1562", "merged_at": "2020-12-21T13:14:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/1562.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1562" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1562/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1562/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1664/comments
https://api.github.com/repos/huggingface/datasets/issues/1664/events
https://github.com/huggingface/datasets/pull/1664
775,956,441
MDExOlB1bGxSZXF1ZXN0NTQ2NTM1NDcy
1,664
removed \n in labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[]
2020-12-29T15:41:43Z
2020-12-30T17:18:49Z
2020-12-30T17:18:49Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1664.diff", "html_url": "https://github.com/huggingface/datasets/pull/1664", "merged_at": "2020-12-30T17:18:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1664.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1664" }
updated social_i_qa labels as per #1633
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1664/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1664/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2312/comments
https://api.github.com/repos/huggingface/datasets/issues/2312/events
https://github.com/huggingface/datasets/pull/2312
875,435,726
MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz
2,312
Add rename_columnS method
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[]
closed
false
null
[]
null
[ "Merging then 😄 " ]
2021-05-04T12:57:53Z
2021-05-04T13:43:13Z
2021-05-04T13:43:12Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2312.diff", "html_url": "https://github.com/huggingface/datasets/pull/2312", "merged_at": "2021-05-04T13:43:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/2312.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2312" }
Cherry-picked from #2255
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2312/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5114/comments
https://api.github.com/repos/huggingface/datasets/issues/5114/events
https://github.com/huggingface/datasets/issues/5114
1,409,236,738
I_kwDODunzps5T_z8C
5,114
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
{ "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hubert-Bonisseur", "id": 48770768, "login": "Hubert-Bonisseur", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "type": "User", "url": "https://api.github.com/users/Hubert-Bonisseur" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi Hubert! Could you please probably create a publicly available `gs://` dataset link? I think this would be easier for others to directly start to debug.", "What seems to work is to change the line to:\r\n```\r\nfs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)\r\n```" ]
2022-10-14T11:54:53Z
2022-11-19T07:13:10Z
null
CONTRIBUTOR
null
null
null
## Describe the bug

The function `load_from_disk` fails when using a remote filesystem because of a wrong temporary local path generated in the `load_from_disk` method of `arrow_dataset.py`:

```python
if is_remote_filesystem(fs):
    src_dataset_path = extract_path_from_uri(dataset_path)
    dataset_path = Dataset._build_local_temp_path(src_dataset_path)
    fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
```

If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train`.

Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (note that `train` appears twice).

Instead of downloading the remote folder, we should be downloading all the files inside the folder for the path to be right:

```python
fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True)
```

## Steps to reproduce the bug

```python
fs = gcsfs.GCSFileSystem(**storage_options)
dataset = load_from_disk("common_voice_processed")  # loading a local dataset previously saved locally, works fine
dataset.save_to_disk(output_dir, fs=fs)  # works fine
dataset = load_from_disk(output_dir, fs=fs)  # crashes
```

## Expected results

The dataset is loaded.

## Actual results

```
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json'
```

## Environment info

- `datasets` version: datasets-2.6.1.dev0
- Platform: macOS Monterey 12.5.1
- Python version: 3.8.13
- PyArrow version: pyarrow==9.0.0
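To make the path duplication concrete, a small sketch of the path arithmetic at play (directory names are illustrative):

```python
from pathlib import Path

src_dataset_path = "speech/mydataset/train"
tmp_dir = Path("/tmp/tmp6t")

# _build_local_temp_path appends the full source path to the temp dir...
dataset_path = tmp_dir / src_dataset_path
print(dataset_path)  # /tmp/tmp6t/speech/mydataset/train

# ...but fs.download(<folder>, <target>) places the folder *inside* the target,
# so the files land one level deeper than load_from_disk expects:
print(dataset_path / "train" / "state.json")
# /tmp/tmp6t/speech/mydataset/train/train/state.json
```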
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5114/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5114/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4234/comments
https://api.github.com/repos/huggingface/datasets/issues/4234/events
https://github.com/huggingface/datasets/pull/4234
1,216,818,846
PR_kwDODunzps422Mwn
4,234
Autoeval config
{ "avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4", "events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}", "followers_url": "https://api.github.com/users/nazneenrajani/followers", "following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}", "gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nazneenrajani", "id": 3278583, "login": "nazneenrajani", "node_id": "MDQ6VXNlcjMyNzg1ODM=", "organizations_url": "https://api.github.com/users/nazneenrajani/orgs", "received_events_url": "https://api.github.com/users/nazneenrajani/received_events", "repos_url": "https://api.github.com/users/nazneenrajani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions", "type": "User", "url": "https://api.github.com/users/nazneenrajani" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Related to: https://github.com/huggingface/autonlp-backend/issues/414 and https://github.com/huggingface/autonlp-backend/issues/424", "The tests are failing due to the changed metadata:\r\n\r\n```\r\ngot an unexpected keyword argum...
2022-04-27T05:32:10Z
2022-05-06T13:20:31Z
2022-05-05T18:20:58Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4234.diff", "html_url": "https://github.com/huggingface/datasets/pull/4234", "merged_at": "2022-05-05T18:20:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/4234.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4234" }
Added autoeval config to imdb as pilot
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4234/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4234/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2147/comments
https://api.github.com/repos/huggingface/datasets/issues/2147/events
https://github.com/huggingface/datasets/pull/2147
844,687,831
MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4
2,147
Render docstring return type as inline
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[]
2021-03-30T14:55:43Z
2021-03-31T13:11:05Z
2021-03-31T13:11:05Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2147.diff", "html_url": "https://github.com/huggingface/datasets/pull/2147", "merged_at": "2021-03-31T13:11:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2147.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2147" }
This documentation setting will avoid having the return type on a separate line under `Return type`. See e.g. the current docs for `Dataset.to_csv`.
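For readers unfamiliar with the setting in question: with Sphinx's `napoleon` extension, this behavior is controlled by `napoleon_use_rtype`. That this is the exact option the PR changes is my assumption; the diff is authoritative. A sketch of the relevant `conf.py` lines:

```python
# docs/source/conf.py -- assuming Sphinx with the napoleon extension.
extensions = ["sphinx.ext.napoleon"]

# False renders the return type inline with the return description,
# instead of producing a separate "Return type" field.
napoleon_use_rtype = False
```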
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2147/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2147/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1993/comments
https://api.github.com/repos/huggingface/datasets/issues/1993/events
https://github.com/huggingface/datasets/issues/1993
822,758,387
MDU6SXNzdWU4MjI3NTgzODc=
1,993
How to load a dataset with load_from_disk and save it again after doing transformations without changing the original?
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
[]
closed
false
null
[]
null
[ "Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset", "Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/rese...
2021-03-05T05:25:50Z
2021-03-22T04:05:50Z
2021-03-22T04:05:50Z
NONE
null
null
null
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of data. Then, during my training process, I update that dataset object, add new elements, and save it in a different place. When I save the dataset with **save_to_disk**, the original dataset that is already on disk also gets updated. I do not want to update it. How can I prevent this?
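For what it's worth, a minimal sketch of the intended workflow; `datasets` transforms such as `map` return a new dataset instead of mutating the one on disk, so saving to a different directory should leave the original untouched (directory names and the transform are placeholders):

```python
from datasets import load_from_disk

ds = load_from_disk("path/to/original")  # reads the on-disk data

def add_elements(example):  # placeholder transform
    example["new_field"] = 0
    return example

updated = ds.map(add_elements)           # returns a *new* dataset
updated.save_to_disk("path/to/updated")  # "path/to/original" is not modified
```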
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1993/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2422/comments
https://api.github.com/repos/huggingface/datasets/issues/2422/events
https://github.com/huggingface/datasets/pull/2422
905,568,548
MDExOlB1bGxSZXF1ZXN0NjU2NjM3MzY1
2,422
Fix save_to_disk nested features order in dataset_info.json
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-05-28T15:03:28Z
2021-05-28T15:26:57Z
2021-05-28T15:26:56Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2422.diff", "html_url": "https://github.com/huggingface/datasets/pull/2422", "merged_at": "2021-05-28T15:26:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/2422.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2422" }
Fix issue https://github.com/huggingface/datasets/issues/2267. The order of the nested features matters (a pyarrow limitation), but the save_to_disk method was saving the feature types as JSON with `sort_keys=True`, which was breaking the order of the nested features.
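A tiny demonstration of the failure mode this fixes: `json.dumps(..., sort_keys=True)` silently reorders keys, which matters when key order encodes the (pyarrow-imposed) order of nested features:

```python
import json

features = {"b_second": "int32", "a_first": "string"}  # insertion order is meaningful here
print(json.dumps(features, sort_keys=True))
# {"a_first": "string", "b_second": "int32"}  <- reordered
print(json.dumps(features))
# {"b_second": "int32", "a_first": "string"}  <- order preserved
```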
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2422/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2422/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3052/comments
https://api.github.com/repos/huggingface/datasets/issues/3052/events
https://github.com/huggingface/datasets/issues/3052
1,021,944,435
I_kwDODunzps486aJz
3,052
load_dataset cannot download the data and hangs on forever if cache dir specified
{ "avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4", "events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}", "followers_url": "https://api.github.com/users/BenoitDalFerro/followers", "following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}", "gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BenoitDalFerro", "id": 69694610, "login": "BenoitDalFerro", "node_id": "MDQ6VXNlcjY5Njk0NjEw", "organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs", "received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events", "repos_url": "https://api.github.com/users/BenoitDalFerro/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions", "type": "User", "url": "https://api.github.com/users/BenoitDalFerro" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The fo...
2021-10-10T10:31:36Z
2021-10-11T10:57:09Z
2021-10-11T10:56:36Z
NONE
null
null
null
## Describe the bug

After updating datasets, code that had run just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine makes the data download hang forever. The same call without _cache_dir_ works just fine. Surprisingly, the exact same code runs perfectly fine on a Linux Docker instance running in the cloud. Unfortunately, I updated Windows at the same time, and I can't remember which version of datasets was running in my conda environment prior to the update, otherwise I would have tried both to check this out. :(

## Steps to reproduce the bug

```python
cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train', cache_dir=cache_dir)
```

Note that the exact same code without the _cache_dir_ argument works perfectly fine:

```python
cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train')
```

## Expected results

The dataset is downloaded and the cache is handled in the _cache_dir_ directory.

## Actual results

The data download keeps hanging forever, **NO TRACEBACK**!

## Environment info

- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3052/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3052/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5715/comments
https://api.github.com/repos/huggingface/datasets/issues/5715/events
https://github.com/huggingface/datasets/issues/5715
1,657,479,788
I_kwDODunzps5iyyJs
5,715
Return Numpy Array (fixed length) Mode, in __getitem__, Instead of List
{ "avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4", "events_url": "https://api.github.com/users/jungbaepark/events{/privacy}", "followers_url": "https://api.github.com/users/jungbaepark/followers", "following_url": "https://api.github.com/users/jungbaepark/following{/other_user}", "gists_url": "https://api.github.com/users/jungbaepark/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jungbaepark", "id": 34066771, "login": "jungbaepark", "node_id": "MDQ6VXNlcjM0MDY2Nzcx", "organizations_url": "https://api.github.com/users/jungbaepark/orgs", "received_events_url": "https://api.github.com/users/jungbaepark/received_events", "repos_url": "https://api.github.com/users/jungbaepark/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jungbaepark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungbaepark/subscriptions", "type": "User", "url": "https://api.github.com/users/jungbaepark" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n " ]
2023-04-06T13:57:48Z
2023-04-20T17:16:26Z
2023-04-20T17:16:26Z
NONE
null
null
null
### Feature request

There is an old, well-known issue with multiprocessing in the PyTorch DataLoader that is easy to forget: RAM or shared-memory usage grows too high when `num_workers > 1` and the return type of the dataset or dataloader is a list or dict.
https://github.com/pytorch/pytorch/issues/13246

With huggingface datasets, unfortunately, the default return type is a list, so the problem arises often if we do not configure anything to avoid it. However, the issue goes away when the returned output has a fixed length. Therefore, I request a mode that returns fixed-length outputs (e.g. NumPy arrays) rather than lists. The API could look like:

```python
load_dataset(..., with_return_as_fixed_tensor=True)
```

### Motivation

The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662. NumPy and Pandas do not seem to have this problem, even though both support the string type. (I'm not sure that the sequence types of huggingface datasets can solve this problem as well.)

### Your contribution

I'll read it! Thanks.
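As the comment above points out, the output format can already be switched per dataset; a minimal sketch of what that returns:

```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]]})

ds.set_format("np")              # __getitem__ now returns NumPy arrays
print(type(ds[0]["input_ids"]))  # <class 'numpy.ndarray'>

ds.set_format("torch")           # or PyTorch tensors
print(type(ds[0]["input_ids"]))  # <class 'torch.Tensor'>
```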
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5715/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4302/comments
https://api.github.com/repos/huggingface/datasets/issues/4302/events
https://github.com/huggingface/datasets/pull/4302
1,230,651,117
PR_kwDODunzps43jPE5
4,302
Remove hacking license tags when mirroring datasets on the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The Hub doesn't allow these characters in the YAML tags, and git push fails if you want to push a dataset card containing these characters.", "Ok, let me rename the bad config names :) I think I can also keep backward compatibility...
2022-05-10T05:52:46Z
2022-05-20T09:48:30Z
2022-05-20T09:40:20Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4302.diff", "html_url": "https://github.com/huggingface/datasets/pull/4302", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4302.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4302" }
Currently, when mirroring datasets on the Hub, the license tags are hacked: the characters "." and "$" are stripped from them. By contrast, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub.

I guess this hacking is no longer necessary:
- it is not applied to community datasets
- all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure the license tags are the right ones

Fix #4298.
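An illustration only of the hack described above, not the actual mirroring code (whose implementation is not reproduced here):

```python
def hack_license_tag(tag: str) -> str:
    # Strips the characters "." and "$", as described above.
    return tag.replace(".", "").replace("$", "")

print(hack_license_tag("cc-by-sa-4.0"))  # "cc-by-sa-40", a second variant of the same tag
```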
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4302/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4302/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3533/comments
https://api.github.com/repos/huggingface/datasets/issues/3533/events
https://github.com/huggingface/datasets/issues/3533
1,094,156,147
I_kwDODunzps5BN39z
3,533
Task search function on hub not working correctly
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon", "hmm actually i have no recollection of why I said that", "Because it has dots in some YAML keys, it can't be parsed and indexed by the back-end"...
2022-01-05T09:36:30Z
2022-05-12T14:45:57Z
null
MEMBER
null
null
null
When I look at all datasets of the category `speech-processing`, *i.e.* https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads , the following dataset doesn't show up for some reason: - https://huggingface.co/datasets/speech_commands even though its task tags seem correct: https://raw.githubusercontent.com/huggingface/datasets/master/datasets/speech_commands/README.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3533/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2947/comments
https://api.github.com/repos/huggingface/datasets/issues/2947/events
https://github.com/huggingface/datasets/pull/2947
1,000,798,338
PR_kwDODunzps4r9GIP
2,947
Don't use old, incompatible cache for the new `filter`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-09-20T10:18:59Z
2021-09-20T16:25:09Z
2021-09-20T13:43:02Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2947.diff", "html_url": "https://github.com/huggingface/datasets/pull/2947", "merged_at": "2021-09-20T13:43:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2947.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2947" }
#2836 changed `Dataset.filter`, and the resulting data stored in the cache are different from and incompatible with those of the previous `filter` implementation. However, the caching mechanism wasn't able to differentiate between the old and the new implementation of `filter` (only the method name was taken into account). This is an issue because anyone who updates `datasets` and re-runs code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result. To fix this, I added the notion of versioning for dataset transforms to the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0. This way, the new `filter` outputs are considered different from the old ones from the caching point of view. This should fix #2943 cc @anton-l
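As a sketch of the idea (not the library's actual implementation), a transform version can simply be folded into the cache fingerprint, so bumping the version invalidates caches produced by older implementations:

```python
from hashlib import sha256

def transform_fingerprint(dataset_fingerprint: str, transform: str, version: str) -> str:
    # Hypothetical sketch: the cache key changes whenever the transform's
    # version is bumped, so results written by an older, incompatible
    # implementation are never reused.
    payload = f"{dataset_fingerprint}:{transform}:{version}"
    return sha256(payload.encode("utf-8")).hexdigest()

old_key = transform_fingerprint("abc123", "filter", "1.0.0")
new_key = transform_fingerprint("abc123", "filter", "2.0.0")
assert old_key != new_key  # the new filter never loads the old cache entry
```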
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2947/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2947/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1918/comments
https://api.github.com/repos/huggingface/datasets/issues/1918/events
https://github.com/huggingface/datasets/pull/1918
812,541,510
MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0
1,918
Fix QA4MRE download URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
[]
closed
false
null
[]
null
[]
2021-02-20T07:32:17Z
2021-02-22T13:35:06Z
2021-02-22T13:35:06Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1918.diff", "html_url": "https://github.com/huggingface/datasets/pull/1918", "merged_at": "2021-02-22T13:35:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1918.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1918" }
The URLs in the `dataset_infos` and `README` are correct; only the ones in the download script needed updating.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1918/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1918/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4469/comments
https://api.github.com/repos/huggingface/datasets/issues/4469/events
https://github.com/huggingface/datasets/pull/4469
1,267,213,849
PR_kwDODunzps45cweQ
4,469
Replace data URLs in wider_face dataset once hosted on the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-10T08:13:25Z
2022-06-10T16:42:08Z
2022-06-10T16:32:46Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4469.diff", "html_url": "https://github.com/huggingface/datasets/pull/4469", "merged_at": "2022-06-10T16:32:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/4469.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4469" }
This PR replaces the URLs of data files hosted on Google Drive with our Hub ones, once the data owners have approved hosting their data on the Hub. They also informed us that their dataset is licensed under CC BY-NC-ND.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4469/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4469/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1663/comments
https://api.github.com/repos/huggingface/datasets/issues/1663/events
https://github.com/huggingface/datasets/pull/1663
775,914,320
MDExOlB1bGxSZXF1ZXN0NTQ2NTAzMjg5
1,663
update saving and loading methods for faiss index so to accept path l…
{ "avatar_url": "https://avatars.githubusercontent.com/u/11614798?v=4", "events_url": "https://api.github.com/users/tslott/events{/privacy}", "followers_url": "https://api.github.com/users/tslott/followers", "following_url": "https://api.github.com/users/tslott/following{/other_user}", "gists_url": "https://api.github.com/users/tslott/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tslott", "id": 11614798, "login": "tslott", "node_id": "MDQ6VXNlcjExNjE0Nzk4", "organizations_url": "https://api.github.com/users/tslott/orgs", "received_events_url": "https://api.github.com/users/tslott/received_events", "repos_url": "https://api.github.com/users/tslott/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tslott/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tslott/subscriptions", "type": "User", "url": "https://api.github.com/users/tslott" }
[]
closed
false
null
[]
null
[ "Seems ok for me, what do you think @lhoestq ?" ]
2020-12-29T14:15:37Z
2021-01-18T09:27:23Z
2021-01-18T09:27:23Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1663.diff", "html_url": "https://github.com/huggingface/datasets/pull/1663", "merged_at": "2021-01-18T09:27:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/1663.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1663" }
- Update saving and loading methods for the faiss index so as to accept path-like objects from pathlib. The current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https://docs.python.org/3/library/pathlib.html). The code becomes more intuitive this way, I think.
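A minimal sketch of the pattern (not the PR's exact code), assuming the standard `faiss` Python bindings are installed:

```python
import os
from typing import Union

def save_faiss_index(index, file: Union[str, os.PathLike]) -> None:
    import faiss  # assumed installed

    # os.fspath accepts both plain strings and pathlib.Path objects and
    # normalizes them to the string path that faiss expects.
    faiss.write_index(index, os.fspath(file))
```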
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1663/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5851/comments
https://api.github.com/repos/huggingface/datasets/issues/5851/events
https://github.com/huggingface/datasets/issues/5851
1,707,907,048
I_kwDODunzps5lzJfo
5,851
Error message not clear in interleaving datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/surya-narayanan", "id": 17240858, "login": "surya-narayanan", "node_id": "MDQ6VXNlcjE3MjQwODU4", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "type": "User", "url": "https://api.github.com/users/surya-narayanan" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[]
2023-05-11T20:52:13Z
2023-05-23T10:32:59Z
2023-05-23T10:32:59Z
NONE
null
null
null
### System Info standard env ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to interleave the 'sciq', 'wiki' and 'pile-enron' datasets. I think the mistake I made was loading the train split of one dataset but not of the others, but the error is not too helpful: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /home/suryahari/Vornoi/save_model_ops.py in line 3 41 # %% ----> 43 dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted") File ~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124, in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy) 122 for dataset in datasets[1:]: 123 if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)): --> 124 raise ValueError( 125 f"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects." 126 ) 127 if stopping_strategy not in ["first_exhausted", "all_exhausted"]: 128 raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.") ValueError: Unable to interleave a with a . Expected a list of Dataset objects or a list of IterableDataset objects. ``` ### Expected behavior The error message should hopefully be clearer.
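For reference, `interleave_datasets` requires every input to be the same kind of dataset (all map-style `Dataset` or all `IterableDataset`). A sketch of the likely fix, assuming the mismatch came from mixing a streaming load with a non-streaming one (the second repo id below is only illustrative):

```python
from datasets import load_dataset, interleave_datasets

# Load every dataset the same way (here: all streaming), so that
# interleave_datasets receives a homogeneous list of IterableDatasets.
ds_a = load_dataset("sciq", split="train", streaming=True)
ds_b = load_dataset("EleutherAI/pile", split="train", streaming=True)  # illustrative repo id

mixed = interleave_datasets([ds_a, ds_b], stopping_strategy="all_exhausted")
print(next(iter(mixed)))
```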
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5851/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5851/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4254/comments
https://api.github.com/repos/huggingface/datasets/issues/4254/events
https://github.com/huggingface/datasets/pull/4254
1,220,204,395
PR_kwDODunzps43Bwnj
4,254
Replace data URL in SAMSum dataset and support streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-29T08:21:43Z
2022-05-06T08:38:16Z
2022-04-29T16:26:09Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4254.diff", "html_url": "https://github.com/huggingface/datasets/pull/4254", "merged_at": "2022-04-29T16:26:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/4254.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4254" }
This PR replaces the data URL in the SAMSum dataset: - the original host (arxiv.org) does not allow HTTP Range requests - we have hosted the data on the Hub (license: CC BY-NC-ND 4.0) Moreover, it implements support for streaming. Fix #4146. Related to: #4236. CC: @severo
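A quick way to exercise the new streaming support (a usage sketch, not part of the PR):

```python
from datasets import load_dataset

# Stream the first example without downloading the full archive.
ds = load_dataset("samsum", split="train", streaming=True)
print(next(iter(ds)))
```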
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4254/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4254/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2772/comments
https://api.github.com/repos/huggingface/datasets/issues/2772/events
https://github.com/huggingface/datasets/issues/2772
963,348,834
MDU6SXNzdWU5NjMzNDg4MzQ=
2,772
Remove returned feature constrain
{ "avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4", "events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}", "followers_url": "https://api.github.com/users/PosoSAgapo/followers", "following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}", "gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PosoSAgapo", "id": 33200481, "login": "PosoSAgapo", "node_id": "MDQ6VXNlcjMzMjAwNDgx", "organizations_url": "https://api.github.com/users/PosoSAgapo/orgs", "received_events_url": "https://api.github.com/users/PosoSAgapo/received_events", "repos_url": "https://api.github.com/users/PosoSAgapo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions", "type": "User", "url": "https://api.github.com/users/PosoSAgapo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2021-08-08T04:01:30Z
2021-08-08T08:48:01Z
null
NONE
null
null
null
In the current version, the returned value of the map function has to be a list or an ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse, like verbs or noun chunks: if we want to assign different values to different words, scoring only the useful words (e.g. verbs) results in a large sparse matrix. At large scale, saving such a matrix densely takes a lot of disk storage and makes it hard to read, so the usual approach is to save it in sparse form. However, NumPy does not support sparse matrices, so I have to use PyTorch or SciPy to transform a matrix into a special sparse form, which is not a form that can be converted into a list or ndarray. This violates the feature constraints of the map function. I do appreciate the convenience of the Datasets package, but I do not think the compulsory datatype constraint is necessary; in some cases, we simply cannot transform the result into a list or ndarray. Is there any way to fix this, or something I can do to disable the compulsory datatype constraint?
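One possible workaround, shown as a sketch rather than an official API: decompose the sparse matrix into plain Python lists (COO triplets), which `map` accepts, and reassemble the sparse object on the consumer side:

```python
import numpy as np
from scipy.sparse import coo_matrix

def to_sparse_columns(dense_scores: np.ndarray) -> dict:
    # Store the sparse matrix as three aligned lists plus its shape,
    # all of which satisfy the map function's list/ndarray constraint.
    coo = coo_matrix(dense_scores)
    return {
        "rows": coo.row.tolist(),
        "cols": coo.col.tolist(),
        "values": coo.data.tolist(),
        "shape": list(coo.shape),
    }

scores = np.zeros((4, 100))
scores[0, 3] = 1.5  # e.g. a verb-word score
print(to_sparse_columns(scores))  # only the non-zero entry is stored
```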
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2772/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2772/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4051/comments
https://api.github.com/repos/huggingface/datasets/issues/4051/events
https://github.com/huggingface/datasets/issues/4051
1,184,400,179
I_kwDODunzps5GmIMz
4,051
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4", "events_url": "https://api.github.com/users/klyuhang9/events{/privacy}", "followers_url": "https://api.github.com/users/klyuhang9/followers", "following_url": "https://api.github.com/users/klyuhang9/following{/other_user}", "gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klyuhang9", "id": 39409233, "login": "klyuhang9", "node_id": "MDQ6VXNlcjM5NDA5MjMz", "organizations_url": "https://api.github.com/users/klyuhang9/orgs", "received_events_url": "https://api.github.com/users/klyuhang9/received_events", "repos_url": "https://api.github.com/users/klyuhang9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions", "type": "User", "url": "https://api.github.com/users/klyuhang9" }
[]
closed
false
null
[]
null
[ "Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [4]: ds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\nDownloading builder script: 28.8kB [00:00, 9.15MB/s] ...
2022-03-29T07:00:31Z
2022-05-08T07:27:32Z
2022-03-29T08:29:25Z
NONE
null
null
null
Hi, I've run into a problem. When I run the code `dataset = load_dataset('glue', 'sst2')`, the following error is raised: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py I don't know why; the URL opens fine when I view it in Google Chrome. Thanks for your help!
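Per the maintainer's reply (see the comments above), a first troubleshooting step is forcing a clean redownload, which bypasses a possibly stale cache; a sketch:

```python
from datasets import load_dataset

# Retry with a forced redownload of the loading script and data.
dataset = load_dataset("glue", "sst2", download_mode="force_redownload")
```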
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4051/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4122/comments
https://api.github.com/repos/huggingface/datasets/issues/4122/events
https://github.com/huggingface/datasets/issues/4122
1,196,095,072
I_kwDODunzps5HSvZg
4,122
medical_dialog zh has very slow _generate_examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nbroad1881", "id": 24982805, "login": "nbroad1881", "node_id": "MDQ6VXNlcjI0OTgyODA1", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "repos_url": "https://api.github.com/users/nbroad1881/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "type": "User", "url": "https://api.github.com/users/nbroad1881" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @nbroad1881, thanks for reporting.\r\n\r\nLet me have a look to try to improve its performance. ", "Thanks @nbroad1881 for reporting! I don't recall it taking so long. I will also have a look at this. \r\n@albertvillanova please let me know if I am doing something unnecessary or time consuming.", "Hi @nbro...
2022-04-07T14:00:51Z
2022-04-08T16:20:51Z
2022-04-08T16:20:51Z
NONE
null
null
null
## Describe the bug After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours. ## Steps to reproduce the bug The easiest way I've found to download files from Google Drive is to use `gdown` inside Google Colab, because the download speeds are very high, as both services run in Google Cloud. ```python file_ids = [ "1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E", "1tt7weAT1SZknzRFyLXOT2fizceUUVRXX", "1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc", "1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J", "1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu", "1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP", "1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c", "1pA3bCFA5nZDhsQutqsJcH3d712giFb0S", "1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU", "1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD", "1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH", ] for i in file_ids: url = f"https://drive.google.com/uc?id={i}" !gdown $url from datasets import load_dataset ds = load_dataset("medical_dialog", "zh", data_dir="./") ``` ## Expected results Faster load time ## Actual results `Generating train split: 33%: 625519/1921127 [4:31:03<31:39:20, 11.37 examples/s]` ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 @vrindaprabhu, could you take a look at this since you implemented it? I think the `_generate_examples` function might need to be rewritten.
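For context, a common cause of this pattern is a generator that re-scans the whole parsed file for every yielded example. A single-pass sketch (generic, not the actual `medical_dialog` script, and the feature key is hypothetical):

```python
import json

def _generate_examples(filepath):
    # Parse the file once and yield each dialogue as it is visited,
    # keeping generation linear in the number of dialogues.
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for idx, dialogue in enumerate(data):
        yield idx, {"dialogue": dialogue}
```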
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4122/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3422/comments
https://api.github.com/repos/huggingface/datasets/issues/3422/events
https://github.com/huggingface/datasets/issues/3422
1,078,022,619
I_kwDODunzps5AQVHb
3,422
Error about load_metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/30772464?v=4", "events_url": "https://api.github.com/users/jiacheng-ye/events{/privacy}", "followers_url": "https://api.github.com/users/jiacheng-ye/followers", "following_url": "https://api.github.com/users/jiacheng-ye/following{/other_user}", "gists_url": "https://api.github.com/users/jiacheng-ye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiacheng-ye", "id": 30772464, "login": "jiacheng-ye", "node_id": "MDQ6VXNlcjMwNzcyNDY0", "organizations_url": "https://api.github.com/users/jiacheng-ye/orgs", "received_events_url": "https://api.github.com/users/jiacheng-ye/received_events", "repos_url": "https://api.github.com/users/jiacheng-ye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiacheng-ye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiacheng-ye/subscriptions", "type": "User", "url": "https://api.github.com/users/jiacheng-ye" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! I wasn't able to reproduce your error.\r\n\r\nCan you try to clear your cache at `~/.cache/huggingface/modules` and try again ?" ]
2021-12-13T02:49:51Z
2022-01-07T14:06:47Z
2022-01-07T14:06:47Z
NONE
null
null
null
## Describe the bug File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric metric = metric_cls( TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python metric = load_metric("glue", "sst2") ``` ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 6.0.1
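The fix suggested in the reply (clearing the cached module scripts) can be scripted as follows; a sketch, adjust the path if your Hugging Face cache lives elsewhere:

```python
import os
import shutil

# Remove the cached dataset/metric scripts so they are re-downloaded.
modules_cache = os.path.expanduser("~/.cache/huggingface/modules")
if os.path.isdir(modules_cache):
    shutil.rmtree(modules_cache)
```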
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3422/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3422/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3606/comments
https://api.github.com/repos/huggingface/datasets/issues/3606/events
https://github.com/huggingface/datasets/issues/3606
1,108,918,701
I_kwDODunzps5CGMGt
3,606
audio column not saved correctly after resampling
{ "avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4", "events_url": "https://api.github.com/users/laphang/events{/privacy}", "followers_url": "https://api.github.com/users/laphang/followers", "following_url": "https://api.github.com/users/laphang/following{/other_user}", "gists_url": "https://api.github.com/users/laphang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/laphang", "id": 24724502, "login": "laphang", "node_id": "MDQ6VXNlcjI0NzI0NTAy", "organizations_url": "https://api.github.com/users/laphang/orgs", "received_events_url": "https://api.github.com/users/laphang/received_events", "repos_url": "https://api.github.com/users/laphang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laphang/subscriptions", "type": "User", "url": "https://api.github.com/users/laphang" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! We just released a new version of `datasets` that should fix this.\r\n\r\nI tested resampling and using save/load_from_disk afterwards and it seems to be fixed now", "Hi @lhoestq, \r\n\r\nJust tested the latest datasets version, and confirming that this is fixed for me. \r\n\r\nThanks!", "Also, just an FY...
2022-01-20T06:37:10Z
2022-01-23T01:41:01Z
2022-01-23T01:24:14Z
NONE
null
null
null
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to preserve the correct feature type. ## Steps to reproduce the bug - load a subset of the Common Voice dataset (48 kHz) - resample the audio column to 16 kHz - save with save_to_disk() - load with load_from_disk() ## Expected results I expected that after saving the data and then loading it back in, the audio column would have the correct datasets.Audio type (i.e. the same as before saving it) {'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)} ## Actual results The audio column does not have the right type {'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)} ## Environment info - `datasets` version: 1.17.0 - Platform: linux - Python version: - PyArrow version:
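A condensed repro sketch of the steps above (dataset and config names as described in the report; not a verified script):

```python
from datasets import load_dataset, load_from_disk, Audio

ds = load_dataset("common_voice", "en", split="train[:10]")  # 48 kHz source
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))    # resample to 16 kHz
ds.save_to_disk("cv_16khz")

reloaded = load_from_disk("cv_16khz")
print(reloaded.features["audio"])  # expected: Audio(sampling_rate=16000, ...)
```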
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3606/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3606/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4887/comments
https://api.github.com/repos/huggingface/datasets/issues/4887/events
https://github.com/huggingface/datasets/pull/4887
1,349,426,693
PR_kwDODunzps49t_PM
4,887
Add "cc-by-nc-sa-2.0" to list of licenses
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry for the issue @albertvillanova! I think it's now fixed! :heart: " ]
2022-08-24T13:11:49Z
2022-08-26T10:31:32Z
2022-08-26T10:29:20Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4887.diff", "html_url": "https://github.com/huggingface/datasets/pull/4887", "merged_at": "2022-08-26T10:29:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/4887.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4887" }
Datasets side of https://github.com/huggingface/hub-docs/pull/285
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4887/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3135/comments
https://api.github.com/repos/huggingface/datasets/issues/3135/events
https://github.com/huggingface/datasets/issues/3135
1,033,294,299
I_kwDODunzps49ltHb
3,135
Make inspect.get_dataset_config_names always return a non-empty list of configs
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": fals...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?", "Yes, maybe t...
2021-10-22T08:02:50Z
2021-10-28T05:44:49Z
2021-10-28T05:44:49Z
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to. **Describe the solution you'd like** In that sense, `inspect.get_dataset_config_names` should always return at least one configuration name, be it `default` or `Check___region_1` (for community datasets like `Check/region_1`). https://github.com/huggingface/datasets/blob/c5747a5e1dde2670b7f2ca6e79e2ffd99dff85af/src/datasets/inspect.py#L161
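Until the behavior is changed upstream, the requested semantics can be approximated with a small wrapper; a sketch with a hypothetical helper name:

```python
from datasets import get_dataset_config_names

def config_names_or_default(path: str) -> list:
    # Never return an empty list: fall back to a single "default"
    # config name when the dataset script defines no configurations.
    names = get_dataset_config_names(path)
    return names or ["default"]
```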
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3135/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3135/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2121/comments
https://api.github.com/repos/huggingface/datasets/issues/2121/events
https://github.com/huggingface/datasets/pull/2121
842,148,633
MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4
2,121
Add Validation For README
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[ "Good start! Here are some proposed next steps:\r\n- We want the Class structure to reflect the template - so the parser know what section titles to expect and when something has gone wrong\r\n- As a result, we don't need to parse the table of contents, since it will always be the same\r\n- For each section/subsect...
2021-03-26T17:02:17Z
2021-05-10T13:17:18Z
2021-05-10T09:41:41Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2121.diff", "html_url": "https://github.com/huggingface/datasets/pull/2121", "merged_at": "2021-05-10T09:41:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2121.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2121" }
Hi @lhoestq, @yjernite This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each. Let me know if this is going in the right direction :) Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`: ```json { "name": "./datasets/fashion_mnist/README.md", "attributes": "", "subsections": [ { "name": "Dataset Card for FashionMNIST", "attributes": "", "subsections": [ { "name": "Table of Contents", "attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)", "subsections": [] }, { "name": "Dataset Description", "attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**", "subsections": [ { "name": "Dataset Summary", "attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.", "subsections": [] }, { "name": "Supported Tasks and Leaderboards", "attributes": "[More Information Needed]", "subsections": [] }, { "name": "Languages", "attributes": "[More Information Needed]", "subsections": [] } ] }, { "name": "Dataset Structure", "attributes": "", "subsections": [ { "name": "Data Instances", "attributes": "A data point comprises an image and its label.", "subsections": [] }, { "name": "Data Fields", "attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |", "subsections": [] }, { "name": "Data Splits", "attributes": "The data is split into training and test set. 
The training set contains 60,000 images and the test set 10,000 images.", "subsections": [] } ] }, { "name": "Dataset Creation", "attributes": "", "subsections": [ { "name": "Curation Rationale", "attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.", "subsections": [] }, { "name": "Source Data", "attributes": "", "subsections": [ { "name": "Initial Data Collection and Normalization", "attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.", "subsections": [] }, { "name": "Who are the source image producers?", "attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. 
front and back looks, details, looks with model and in an outfit.", "subsections": [] } ] }, { "name": "Annotations", "attributes": "", "subsections": [ { "name": "Annotation process", "attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.", "subsections": [] }, { "name": "Who are the annotators?", "attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.", "subsections": [] } ] }, { "name": "Personal and Sensitive Information", "attributes": "[More Information Needed]", "subsections": [] } ] }, { "name": "Considerations for Using the Data", "attributes": "", "subsections": [ { "name": "Social Impact of Dataset", "attributes": "[More Information Needed]", "subsections": [] }, { "name": "Discussion of Biases", "attributes": "[More Information Needed]", "subsections": [] }, { "name": "Other Known Limitations", "attributes": "[More Information Needed]", "subsections": [] } ] }, { "name": "Additional Information", "attributes": "", "subsections": [ { "name": "Dataset Curators", "attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf", "subsections": [] }, { "name": "Licensing Information", "attributes": "MIT Licence", "subsections": [] }, { "name": "Citation Information", "attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}", "subsections": [] }, { "name": "Contributions", "attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.", "subsections": [] } ] } ] } ] } ``` Thanks, Gunjan
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2121/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2121/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2520/comments
https://api.github.com/repos/huggingface/datasets/issues/2520/events
https://github.com/huggingface/datasets/issues/2520
925,015,004
MDU6SXNzdWU5MjUwMTUwMDQ=
2,520
Datasets with tricky task templates
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "72f99f", "default": false, "description": "Discussions on the datasets", "id": 2067401494, "name": "Dataset discussion", "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion" } ]
closed
false
null
[]
null
[ "The `task_templates` API is deprecated in favor of the `train-eval-index` YAML field, so I'm closing this issue." ]
2021-06-18T15:33:57Z
2023-07-20T13:20:32Z
2023-07-20T13:20:32Z
MEMBER
null
null
null
I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for. ## Text classification * [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format, and each sample appears to be tokenized. * [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary), which in principle requires two `TextClassification` templates; this is not currently supported
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2520/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2520/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3067/comments
https://api.github.com/repos/huggingface/datasets/issues/3067/events
https://github.com/huggingface/datasets/pull/3067
1,024,023,185
PR_kwDODunzps4tFSCy
3,067
add story_cloze
{ "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zaidalyafeai", "id": 15667714, "login": "zaidalyafeai", "node_id": "MDQ6VXNlcjE1NjY3NzE0", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "type": "User", "url": "https://api.github.com/users/zaidalyafeai" }
[]
closed
false
null
[]
null
[ "Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npre...
2021-10-12T16:36:53Z
2021-10-13T13:48:13Z
2021-10-13T13:48:13Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3067.diff", "html_url": "https://github.com/huggingface/datasets/pull/3067", "merged_at": "2021-10-13T13:48:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3067.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3067" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3067/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3067/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4877/comments
https://api.github.com/repos/huggingface/datasets/issues/4877/events
https://github.com/huggingface/datasets/pull/4877
1,348,246,755
PR_kwDODunzps49qF-w
4,877
Fix documentation card of covid_qa_castorini dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4877). All of your documentation changes will be reflected on that endpoint." ]
2022-08-23T16:52:33Z
2022-08-23T18:05:01Z
2022-08-23T18:05:00Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4877.diff", "html_url": "https://github.com/huggingface/datasets/pull/4877", "merged_at": "2022-08-23T18:05:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4877.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4877" }
Fix documentation card of covid_qa_castorini dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4877/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4527/comments
https://api.github.com/repos/huggingface/datasets/issues/4527/events
https://github.com/huggingface/datasets/issues/4527
1,276,583,536
I_kwDODunzps5MFx5w
4,527
Dataset Viewer issue for vadis/sv-ident
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Fixed, thanks!\r\n![Uploading Capture d’écran 2022-06-21 à 18.42.40.png…]()\r\n\r\n" ]
2022-06-20T08:47:42Z
2022-06-21T16:42:46Z
2022-06-21T16:42:45Z
MEMBER
null
null
null
### Link https://huggingface.co/datasets/vadis/sv-ident ### Description The dataset preview does not work: ``` Server Error Status code: 400 Exception: Status400Error Message: The dataset does not exist. ``` However, the dataset is streamable and works locally: ```python In [1]: from datasets import load_dataset; ds = load_dataset("sv-ident.py", split="train", streaming=True); item = next(iter(ds)); item Using custom data configuration default Out[1]: {'sentence': 'Our point, however, is that so long as downward (favorable) comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.', 'is_variable': 1, 'variable': ['exploredata-ZA5400_VarV66', 'exploredata-ZA5400_VarV53'], 'research_data': ['ZA5400'], 'doc_id': '73106', 'uuid': 'b9fbb80f-3492-4b42-b9d5-0254cc33ac10', 'lang': 'en'} ``` CC: @e-tornike ### Owner No
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4527/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4527/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2311/comments
https://api.github.com/repos/huggingface/datasets/issues/2311/events
https://github.com/huggingface/datasets/pull/2311
875,262,208
MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx
2,311
Add SLR52, SLR53 and SLR54 to OpenSLR
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq , I am not sure about the error message:\r\n```\r\n#!/bin/bash -eo pipefail\r\n./scripts/datasets_metadata_validator.py\r\nWARNING:root:❌ Failed to validate 'datasets/openslr/README.md':\r\n__init__() got an unexpected keyword argument 'SLR32'\r\nINFO:root:❌ Failed on 1 files.\r\n\r\nExited with code e...
2021-05-04T09:08:03Z
2021-05-07T09:50:55Z
2021-05-07T09:50:55Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2311.diff", "html_url": "https://github.com/huggingface/datasets/pull/2311", "merged_at": "2021-05-07T09:50:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2311.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2311" }
Add large speech datasets for Sinhala, Bengali and Nepali.
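Once merged, these configs should be loadable along these lines; the config names are taken from the PR title, and the `sentence` column name is an assumption based on the other OpenSLR configs:
```python
from datasets import load_dataset

# SLR52 = Sinhala, SLR53 = Bengali, SLR54 = Nepali (per the PR title)
sinhala = load_dataset("openslr", "SLR52", split="train")
print(sinhala[0]["sentence"])
```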
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2311/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2311/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2042/comments
https://api.github.com/repos/huggingface/datasets/issues/2042/events
https://github.com/huggingface/datasets/pull/2042
830,190,276
MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3
2,042
Fix arrow memory checks issue in tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-03-12T14:49:52Z
2021-03-12T15:04:23Z
2021-03-12T15:04:22Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2042.diff", "html_url": "https://github.com/huggingface/datasets/pull/2042", "merged_at": "2021-03-12T15:04:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/2042.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2042" }
The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory. From my experiments, the tests fail only when the full test suite is run. This made me think that some arrow objects from other tests were not being freed from memory in time, causing the memory verifications to fail in other tests. Running the garbage collector before checking the arrow memory usage seems to fix this issue. I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc.
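A minimal sketch of what such a gc-aware context manager could look like; the helper actually added in the PR may differ in detail:
```python
import gc
from contextlib import contextmanager

import pyarrow as pa

@contextmanager
def assert_arrow_memory_increases():
    # collect leftovers from earlier tests first, so stale Arrow buffers
    # don't distort the baseline measurement
    gc.collect()
    previous = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > previous
```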
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2042/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2847/comments
https://api.github.com/repos/huggingface/datasets/issues/2847/events
https://github.com/huggingface/datasets/pull/2847
981,589,693
MDExOlB1bGxSZXF1ZXN0NzIxNjA3OTA0
2,847
fix regex to accept negative timezone
{ "avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4", "events_url": "https://api.github.com/users/jadermcs/events{/privacy}", "followers_url": "https://api.github.com/users/jadermcs/followers", "following_url": "https://api.github.com/users/jadermcs/following{/other_user}", "gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jadermcs", "id": 7156771, "login": "jadermcs", "node_id": "MDQ6VXNlcjcxNTY3NzE=", "organizations_url": "https://api.github.com/users/jadermcs/orgs", "received_events_url": "https://api.github.com/users/jadermcs/received_events", "repos_url": "https://api.github.com/users/jadermcs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions", "type": "User", "url": "https://api.github.com/users/jadermcs" }
[]
closed
false
null
[]
null
[]
2021-08-27T20:54:05Z
2021-09-13T20:39:50Z
2021-09-07T09:34:23Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2847.diff", "html_url": "https://github.com/huggingface/datasets/pull/2847", "merged_at": "2021-09-07T09:34:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2847.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2847" }
fix #2846
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2847/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3252/comments
https://api.github.com/repos/huggingface/datasets/issues/3252/events
https://github.com/huggingface/datasets/pull/3252
1,051,124,749
PR_kwDODunzps4uagoy
3,252
Fix failing CER metric test in CI after update
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2021-11-11T15:57:16Z
2021-11-12T14:06:44Z
2021-11-12T14:06:43Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3252.diff", "html_url": "https://github.com/huggingface/datasets/pull/3252", "merged_at": "2021-11-12T14:06:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3252" }
Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3252/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2824/comments
https://api.github.com/repos/huggingface/datasets/issues/2824/events
https://github.com/huggingface/datasets/pull/2824
976,394,721
MDExOlB1bGxSZXF1ZXN0NzE3MzIyMzY5
2,824
Fix defaults in cache_dir docstring in load.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2021-08-22T14:48:37Z
2021-08-26T13:23:32Z
2021-08-26T11:55:16Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2824.diff", "html_url": "https://github.com/huggingface/datasets/pull/2824", "merged_at": "2021-08-26T11:55:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/2824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2824" }
Fix defaults in the `cache_dir` docstring.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2824/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2824/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5111/comments
https://api.github.com/repos/huggingface/datasets/issues/5111/events
https://github.com/huggingface/datasets/issues/5111
1,408,143,170
I_kwDODunzps5T7o9C
5,111
map and filter not working properly in multiprocessing with the new release 2.6.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loubnabnl", "id": 44069155, "login": "loubnabnl", "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "repos_url": "https://api.github.com/users/loubnabnl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "type": "User", "url": "https://api.github.com/users/loubnabnl" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Same bug exists with `num_proc=1` on colab. `3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0]` ", "Thanks for reporting, @loubnabnl and for the additional information, @PartiallyTyped.\r\n\r\nHowever, I'm not able to reproduce this issue, neither locally nor on Colab:\r\n```\r\nDataset({\r\n features: ['re...
2022-10-13T17:00:55Z
2022-10-17T08:26:59Z
2022-10-14T14:59:59Z
NONE
null
null
null
## Describe the bug When mapping is used on a dataset with more than one process, there is a weird behavior when trying to use `filter`: it looks like only the samples from one worker are retrieved, and one needs to specify the same `num_proc` in `filter` for it to work properly. This doesn't happen with `datasets` version 2.5.2. In the code below, the data is filtered differently when we increase the `num_proc` used in `map`, although the datasets before and after mapping have identical elements. ## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset

def preprocess(example):
    return example

ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select([i for i in range(10)])
ds1 = ds.map(preprocess, num_proc=2)
ds2 = ds.map(preprocess)

# the datasets' elements are the same
for i in range(len(ds1)):
    assert ds1[i] == ds2[i]

print(f'Target column before filtering {ds1["autogenerated"]}')
print(f'Target column before filtering {ds2["autogenerated"]}')
print(f"datasets version {datasets.__version__}")

ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"])
ds_filtered_2 = ds2.filter(lambda x: not x["autogenerated"])

# all elements in the target column are False so they should all be kept,
# but for ds1 only the first 5 = num_samples / num_proc are kept
print(ds_filtered_1)
print(ds_filtered_2)
```
```
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Dataset({
    features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
    num_rows: 5
})
Dataset({
    features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
    num_rows: 10
})
```
## Expected results Increasing `num_proc` in mapping shouldn't alter filtering. With the previous version 2.5.2 this doesn't happen. ## Actual results Filtering doesn't work properly when we increase `num_proc` in mapping but not when calling `filter`. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.6.0 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
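As a stopgap on 2.6.0, the workaround mentioned in the report is to pass the same `num_proc` to `filter` as was used in `map`, for example:
```python
# workaround: match the num_proc used in map
ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"], num_proc=2)
```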
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5111/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5111/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5122/comments
https://api.github.com/repos/huggingface/datasets/issues/5122/events
https://github.com/huggingface/datasets/pull/5122
1,410,732,403
PR_kwDODunzps5A4rWn
5,122
Add warning
{ "avatar_url": "https://avatars.githubusercontent.com/u/34204311?v=4", "events_url": "https://api.github.com/users/Salehbigdeli/events{/privacy}", "followers_url": "https://api.github.com/users/Salehbigdeli/followers", "following_url": "https://api.github.com/users/Salehbigdeli/following{/other_user}", "gists_url": "https://api.github.com/users/Salehbigdeli/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Salehbigdeli", "id": 34204311, "login": "Salehbigdeli", "node_id": "MDQ6VXNlcjM0MjA0MzEx", "organizations_url": "https://api.github.com/users/Salehbigdeli/orgs", "received_events_url": "https://api.github.com/users/Salehbigdeli/received_events", "repos_url": "https://api.github.com/users/Salehbigdeli/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Salehbigdeli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Salehbigdeli/subscriptions", "type": "User", "url": "https://api.github.com/users/Salehbigdeli" }
[]
closed
false
null
[]
null
[ "As mentioned in https://github.com/huggingface/datasets/issues/5105 I think we just need to keep the existing files instead of deleting them.\r\nThe `dataset_info.json` file contains the split names anyway, so we know which files belong to the dataset, and which ones don't." ]
2022-10-17T01:30:37Z
2022-11-05T12:23:53Z
2022-11-05T12:23:53Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5122.diff", "html_url": "https://github.com/huggingface/datasets/pull/5122", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5122.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5122" }
Fixes: #5105 I think removing the directory with a warning is a better solution for this issue, because if we decide to keep the existing files in the directory, then we would have to deal with the case of the same directory being provided for several datasets, which we know is not possible since `dataset_info.json` exists in that directory.
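A minimal sketch of the warn-then-remove behavior being proposed; the helper name is hypothetical and this is not the actual diff:
```python
import os
import shutil
import warnings

def prepare_output_dir(dataset_path: str) -> None:
    # hypothetical helper: if the directory already holds a saved dataset
    # (signalled by dataset_info.json), warn the user and remove it so two
    # datasets never share the same directory
    if os.path.isfile(os.path.join(dataset_path, "dataset_info.json")):
        warnings.warn(f"Overwriting existing dataset at '{dataset_path}'")
        shutil.rmtree(dataset_path)
    os.makedirs(dataset_path, exist_ok=True)
```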
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5122/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5709/comments
https://api.github.com/repos/huggingface/datasets/issues/5709/events
https://github.com/huggingface/datasets/issues/5709
1,655,423,503
I_kwDODunzps5iq8IP
5,709
Manually made dataset info not taken into account
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
[ "hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually...
2023-04-05T11:15:17Z
2023-04-06T08:52:20Z
2023-04-06T08:52:19Z
CONTRIBUTOR
null
null
null
### Describe the bug Hello, I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` method. Once the dataset is created I push it to the hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo at the same time. Hence I update it manually with all the missing info, but when I download the dataset the info is never updated. Former `dataset_infos.json` file:
```
{"default": { "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "_type": "Image" }, "labels": { "names": [ "Fake", "Real" ], "_type": "ClassLabel" } }, "splits": { "validation": { "name": "validation", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null }, "train": { "name": "train", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null } }, "download_size": 1802008414, "dataset_size": 1802020188.0, "size_in_bytes": 3604028602.0 }}
```
After I update it manually it looks like:
```
{ "bstrai--deepfake-detection":{ "description":"", "citation":"", "homepage":"", "license":"", "features":{ "image":{ "decode":true, "id":null, "_type":"Image" }, "labels":{ "num_classes":2, "names":[ "Fake", "Real" ], "id":null, "_type":"ClassLabel" } }, "supervised_keys":{ "input":"image", "output":"labels" }, "task_templates":[ { "task":"image-classification", "image_column":"image", "label_column":"labels" } ], "config_name":null, "splits":{ "validation":{ "name":"validation", "num_bytes":36627822, "num_examples":123, "dataset_name":"deepfake-detection" }, "train":{ "name":"train", "num_bytes":901023694, "num_examples":3200, "dataset_name":"deepfake-detection" } }, "download_checksums":null, "download_size":937562209, "dataset_size":937651516, "size_in_bytes":1875213725 } }
```
Is there anything I should do to have the new info in `dataset_infos.json` taken into account? Or is it not possible yet? Thanks! ### Steps to reproduce the bug - ### Expected behavior - ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
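For context, a minimal sketch of the build-and-push flow described above; the file paths are placeholders and the report's `cast_features` step is approximated with `Dataset.cast`:
```python
from datasets import ClassLabel, Dataset, Features, Image

features = Features({
    "image": Image(),
    "labels": ClassLabel(names=["Fake", "Real"]),
})
# placeholder file paths; casting the string column to Image encodes the files
ds = Dataset.from_dict(
    {"image": ["fake_001.jpg", "real_001.jpg"], "labels": [0, 1]}
).cast(features)
ds.push_to_hub("bstrai/deepfake-detection")  # requires authentication
```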
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5709/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1954/comments
https://api.github.com/repos/huggingface/datasets/issues/1954/events
https://github.com/huggingface/datasets/issues/1954
817,565,563
MDU6SXNzdWU4MTc1NjU1NjM=
1,954
add a new column
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi\r\nnot sure how change the lable after creation, but this is an issue not dataset request. thanks ", "Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188\r\n\r\nIn the future we'll add support ...
2021-02-26T18:17:27Z
2021-04-29T14:50:43Z
2021-04-29T14:50:43Z
NONE
null
null
null
Hi, I need to add a new column to the dataset and was wondering how this can be done? Thanks @lhoestq
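As the linked comment explains, a new column can be added with `map`; more recent releases also expose a dedicated `add_column` method. A minimal sketch:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
new_values = [0, 1]

# via map, as suggested in the linked comment
ds = ds.map(lambda example, idx: {"label": new_values[idx]}, with_indices=True)

# via the dedicated method available in newer versions
ds = ds.add_column("label2", new_values)
print(ds[0])  # {'text': 'a', 'label': 0, 'label2': 0}
```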
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1954/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3678/comments
https://api.github.com/repos/huggingface/datasets/issues/3678/events
https://github.com/huggingface/datasets/pull/3678
1,123,402,426
PR_kwDODunzps4yCt91
3,678
Add code example in wikipedia card
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2022-02-03T18:09:02Z
2022-02-21T09:14:56Z
2022-02-04T13:21:39Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3678.diff", "html_url": "https://github.com/huggingface/datasets/pull/3678", "merged_at": "2022-02-04T13:21:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3678.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3678" }
Close #3292.
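The snippet added to the card is presumably along these lines, using one of the preprocessed configs:
```python
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20220301.en", split="train")
print(wiki[0]["title"])
```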
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3678/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4762/comments
https://api.github.com/repos/huggingface/datasets/issues/4762/events
https://github.com/huggingface/datasets/pull/4762
1,321,261,733
PR_kwDODunzps48RE56
4,762
Improve features resolution in streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Just took your comment into account @mariosasko , let me know if it's good for you now :)" ]
2022-07-28T17:28:11Z
2022-09-09T17:17:39Z
2022-09-09T17:15:30Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4762.diff", "html_url": "https://github.com/huggingface/datasets/pull/4762", "merged_at": "2022-09-09T17:15:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4762.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4762" }
`IterableDataset._resolve_features` was returning the features sorted alphabetically by column name, which is not consistent with non-streaming. I changed this to use the order of columns from the data themselves. It was causing some inconsistencies in the dataset viewer as well. I also fixed `interleave_datasets`, which was not filling missing columns with `None` because it was not using the columns from `IterableDataset._resolve_features`. cc @severo
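A toy illustration of the `interleave_datasets` part of the fix; `to_iterable_dataset` is used here for brevity and requires a recent `datasets` release:
```python
from datasets import Dataset, interleave_datasets

# two iterable datasets whose columns differ
ds1 = Dataset.from_dict({"a": [1, 2]}).to_iterable_dataset()
ds2 = Dataset.from_dict({"b": [3, 4]}).to_iterable_dataset()

# with the fix, missing columns are filled with None:
# {'a': 1, 'b': None}, {'a': None, 'b': 3}, ...
for example in interleave_datasets([ds1, ds2]):
    print(example)
```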
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4762/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5457/comments
https://api.github.com/repos/huggingface/datasets/issues/5457/events
https://github.com/huggingface/datasets/issues/5457
1,554,171,264
I_kwDODunzps5cosWA
5,457
prebuilt dataset relies on `downloads/extracted`
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[]
open
false
null
[]
null
[ "Hi! \r\n\r\nThis issue is due to our audio/image datasets not being self-contained. This allows us to save disk space (files are written only once) but also leads to the issues like this one. We plan to make all our datasets self-contained in Datasets 3.0.\r\n\r\nIn the meantime, you can run the following map to e...
2023-01-24T02:09:32Z
2023-01-24T18:14:10Z
null
CONTRIBUTOR
null
null
null
### Describe the bug I pre-built the dataset: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` and it can be used just fine. now I wipe out `downloads/extracted` and it no longer works. ``` rm -r ~/.cache/huggingface/datasets/downloads ``` That is I can still load it: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing No config specified, defaulting to: general-pmd-synthetic-testing/100.unique Found cached dataset general-pmd-synthetic-testing (/home/stas/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing/100.unique/1.1.1/86bc445e3e48cb5ef79de109eb4e54ff85b318cd55c3835c4ee8f86eae33d9d2) ``` but if I try to use it: ``` E stderr: Traceback (most recent call last): E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/main.py", line 116, in <module> E stderr: train_loader, val_loader = get_dataloaders( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 170, in get_dataloaders E stderr: train_loader = get_dataloader_from_config( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 443, in get_dataloader_from_config E stderr: dataloader = get_dataloader( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 264, in get_dataloader E stderr: is_pmd = "meta" in hf_dataset[0] and "source" in hf_dataset[0] E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2601, in __getitem__ E stderr: return self._getitem( E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2586, in _getitem E stderr: formatted_output = format_table( E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 634, in format_table E stderr: return formatter(pa_table, query_type=query_type) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 406, in __call__ E stderr: return self.format_row(pa_table) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 442, in format_row E stderr: row = self.python_features_decoder.decode_row(row) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 225, in decode_row E stderr: return self.features.decode_example(row) if self.features else row E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1846, in decode_example E stderr: return { E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1847, in <dictcomp> E stderr: column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1304, in decode_nested_example E stderr: return decode_nested_example([schema.feature], obj) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1296, in decode_nested_example E stderr: if decode_nested_example(sub_schema, first_elmt) != first_elmt: E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1309, in decode_nested_example E stderr: return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) E 
stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/image.py", line 144, in decode_example E stderr: image = PIL.Image.open(path) E stderr: File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/PIL/Image.py", line 3092, in open E stderr: fp = builtins.open(filename, "rb") E stderr: FileNotFoundError: [Errno 2] No such file or directory: '/mnt/nvme0/code/data/cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data/101/images_01.jpg' ``` Only if I wipe out the cached dir and rebuild then it starts working as `download/extracted` is back again with extracted files. ``` rm -r ~/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` I think there are 2 issues here: 1. why does it still rely on extracted files after `arrow` files were printed - did I do something incorrectly when creating this dataset? 2. why doesn't the dataset know that it has been gutted and loads just fine? If it has a dependency on `download/extracted` then `load_dataset` should check if it's there and fail or force rebuilding. I am sure this could be a very expensive operation, so probably really solving #1 will not require this check. and this second item is probably an overkill. Other than perhaps if it had an optional `check_consistency` flag to do that. ### Environment info datasets@main
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5457/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5457/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3725/comments
https://api.github.com/repos/huggingface/datasets/issues/3725/events
https://github.com/huggingface/datasets/pull/3725
1,138,835,625
PR_kwDODunzps4y3bOG
3,725
Pin pandas to avoid bug in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2022-02-15T15:21:00Z
2022-02-15T15:52:38Z
2022-02-15T15:52:37Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3725.diff", "html_url": "https://github.com/huggingface/datasets/pull/3725", "merged_at": "2022-02-15T15:52:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3725.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3725" }
Temporarily pin pandas version to avoid bug in streaming mode (patching no longer works). Related to #3724.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3725/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3725/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4654/comments
https://api.github.com/repos/huggingface/datasets/issues/4654/events
https://github.com/huggingface/datasets/issues/4654
1,296,716,119
I_kwDODunzps5NSlFX
4,654
Add Quora Question Triplets Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/QQP_triplets)." ]
2022-07-07T02:43:42Z
2022-07-14T02:13:50Z
2022-07-14T02:13:50Z
NONE
null
null
null
## Adding a Dataset - **Name:** *Quora Question Triplets* - **Description:** *This dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.* - **Paper:** - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates_triplets.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
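The comment above notes the data was later uploaded as `embedding-data/QQP_triplets`; a minimal loading sketch, assuming the default `train` split:

```python
from datasets import load_dataset

# Repo id taken from the comment thread; the split name is an assumption.
ds = load_dataset("embedding-data/QQP_triplets", split="train")
print(ds[0])
```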
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4654/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1945/comments
https://api.github.com/repos/huggingface/datasets/issues/1945/events
https://github.com/huggingface/datasets/issues/1945
816,421,966
MDU6SXNzdWU4MTY0MjE5NjY=
1,945
AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
[]
closed
false
null
[]
null
[ "sorry my mistake, datasets were overwritten closing now, thanks a lot" ]
2021-02-25T13:09:45Z
2021-02-25T13:20:35Z
2021-02-25T13:20:26Z
NONE
null
null
null
Hi, I am trying to concatenate a list of Hugging Face datasets as: ` train_dataset = datasets.concatenate_datasets(train_datasets) ` Here is `train_datasets` when I print it: ``` [Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 120361 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2670 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 6944 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 38140 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 173711 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 1655 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 4274 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2019 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 2109 }), Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows: 11963 })] ``` I am getting the following error: `AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets' ` I was wondering if you could help me with this issue. Thanks a lot!
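For reference, the thread above resolves this as a shadowed variable rather than a library bug: `concatenate_datasets` is a module-level function that expects a list of `Dataset` objects. A minimal self-contained sketch:

```python
from datasets import Dataset, concatenate_datasets

# Two small in-memory datasets with identical features.
ds_a = Dataset.from_dict({"idx": [0, 1], "label": [0, 1]})
ds_b = Dataset.from_dict({"idx": [2, 3], "label": [1, 0]})

train_dataset = concatenate_datasets([ds_a, ds_b])
print(train_dataset.num_rows)  # 4

# The reported AttributeError appears when the name `datasets` has been
# rebound to a DatasetDict, so `datasets.concatenate_datasets` no longer
# refers to the library function.
```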
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1945/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5804/comments
https://api.github.com/repos/huggingface/datasets/issues/5804/events
https://github.com/huggingface/datasets/pull/5804
1,688,285,666
PR_kwDODunzps5PX0Dk
5,804
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5804). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
2023-04-28T10:10:01Z
2023-04-28T10:18:51Z
2023-04-28T10:10:29Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5804.diff", "html_url": "https://github.com/huggingface/datasets/pull/5804", "merged_at": "2023-04-28T10:10:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5804.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5804" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5804/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3786/comments
https://api.github.com/repos/huggingface/datasets/issues/3786/events
https://github.com/huggingface/datasets/issues/3786
1,150,233,067
I_kwDODunzps5Ejynr
3,786
Bug downloading Virus scan warning page from Google Drive URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Once the PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the ch...
2022-02-25T09:32:23Z
2022-03-03T09:25:59Z
2022-02-25T11:56:35Z
MEMBER
null
null
null
## Describe the bug Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself. See: - #3758 - #3773 - #3784
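Once the fix landed on master (see the first comment above), the recovery path was to reinstall from source and force a fresh download so any cached warning page is discarded; a sketch, with the dataset name as a placeholder:

```python
# First: pip install git+https://github.com/huggingface/datasets#egg=datasets
from datasets import load_dataset

# "some_gdrive_hosted_dataset" is a placeholder for any affected dataset.
ds = load_dataset("some_gdrive_hosted_dataset", download_mode="force_redownload")
```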
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3786/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3786/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5888/comments
https://api.github.com/repos/huggingface/datasets/issues/5888/events
https://github.com/huggingface/datasets/issues/5888
1,722,290,363
I_kwDODunzps5mqBC7
5,888
A way to upload and visualize .mp4 files (millions of them) as part of a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AntreasAntoniou", "id": 10792502, "login": "AntreasAntoniou", "node_id": "MDQ6VXNlcjEwNzkyNTAy", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "type": "User", "url": "https://api.github.com/users/AntreasAntoniou" }
[]
open
false
null
[]
null
[ "Hi! \r\n\r\nYou want to use `push_to_hub` (creates Parquet files) instead of `save_to_disk` (creates Arrow files) when creating a Hub dataset. Parquet is designed for long-term storage and takes less space than the Arrow format, and, most importantly, `load_dataset` can parse it, which should fix the viewer. \r\n\...
2023-05-22T18:05:26Z
2023-06-23T03:37:16Z
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I recently chose to use the Hugging Face Hub as the home for a large multimodal dataset I've been building: https://huggingface.co/datasets/Antreas/TALI It combines images, text, audio, and video. Now, I could very easily upload a dataset made via datasets.Dataset.from_generator, as long as it did not include video files. I found that including .mp4 files in the entries would not auto-upload those files, so I tried to upload them myself. I quickly found out that uploading many small files is a very bad way to use git lfs and would take ages, so I resorted to using 7z to pack them all up. But then I had a new problem: my dataset had a size of 1.9 TB, and trying to upload such a large file with the default huggingface_hub API always resulted in timeouts. So I decided to split the large files into chunks of 5 GB each and re-upload them. Eventually it all worked out. But now the dataset can't be properly and natively used by the datasets API because of all the needed preprocessing -- and, furthermore, the Hub is unable to visualize things. **Describe the solution you'd like** A native way to upload large datasets that include .mp4 or other video types. **Describe alternatives you've considered** Already explained above. **Additional context** https://huggingface.co/datasets/Antreas/TALI
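The maintainer reply above suggests `push_to_hub` (which writes Parquet) over `save_to_disk` plus manual uploads; a minimal sketch, assuming the clips are referenced from an ordinary column, since `datasets` had no native video feature at the time:

```python
from datasets import Dataset

# Toy stand-in for the real corpus; "video_path" is an illustrative column.
ds = Dataset.from_dict({
    "caption": ["a cat playing", "a dog running"],
    "video_path": ["clips/cat.mp4", "clips/dog.mp4"],
})

# Writes sharded Parquet files that load_dataset and the Hub viewer can parse,
# avoiding raw git-lfs uploads of huge archives. The repo id is illustrative.
ds.push_to_hub("Antreas/TALI-demo", max_shard_size="500MB")
```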
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5888/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4310/comments
https://api.github.com/repos/huggingface/datasets/issues/4310/events
https://github.com/huggingface/datasets/issues/4310
1,231,319,815
I_kwDODunzps5JZHMH
4,310
Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'
{ "avatar_url": "https://avatars.githubusercontent.com/u/72745467?v=4", "events_url": "https://api.github.com/users/milmin/events{/privacy}", "followers_url": "https://api.github.com/users/milmin/followers", "following_url": "https://api.github.com/users/milmin/following{/other_user}", "gists_url": "https://api.github.com/users/milmin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/milmin", "id": 72745467, "login": "milmin", "node_id": "MDQ6VXNlcjcyNzQ1NDY3", "organizations_url": "https://api.github.com/users/milmin/orgs", "received_events_url": "https://api.github.com/users/milmin/received_events", "repos_url": "https://api.github.com/users/milmin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/milmin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/milmin/subscriptions", "type": "User", "url": "https://api.github.com/users/milmin" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[]
2022-05-10T15:12:53Z
2022-05-11T16:46:31Z
2022-05-11T16:46:31Z
NONE
null
null
null
## Describe the bug Loading a dataset with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Note that loading with `streaming=False` works fine. In the following steps we load parquet files, but the same happens with pickle files. The problem seems to come from the `fsspec` lib; I have also included the `s3fs` and `fsspec` versions in the environment info, since I'm loading from an S3 bucket. ## Steps to reproduce the bug ```python from datasets import load_dataset # path is the path to parquet files data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} dataset = load_dataset("parquet", data_files=data_files, streaming=True) ``` ## Expected results A dataset object `datasets.dataset_dict.DatasetDict` ## Actual results ``` AttributeError Traceback (most recent call last) <command-562086> in <module> 11 12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} ---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1679 if streaming: 1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token) -> 1681 return builder_instance.as_streaming_dataset( 1682 split=split, 1683 use_auth_token=use_auth_token, /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token) 904 ) 905 self._check_manual_download(dl_manager) --> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 907 # By default, return all splits 908 if split is None: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager) 30 if not self.config.data_files: 31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") ---> 32 data_files = dl_manager.download_and_extract(self.config.data_files) 33 if isinstance(data_files, (str, list, tuple)): 34 files = data_files /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls) 798 799 def download_and_extract(self, url_or_urls): --> 800 return self.extract(self.download(url_or_urls)) 801 802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths) 776 777 def extract(self, path_or_paths): --> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True) 779 return urlpaths 780 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 312 num_proc = 1 313 if num_proc <= 1 or len(iterable) <= num_proc: --> 314 mapped = [ 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 313 if num_proc <= 1 or len(iterable) <= num_proc: 314 mapped = [ --> 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 317 ] /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 249 # Singleton first to spare some computation 250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 251 return function(data_struct) 252 253 # Reduce logging to keep things readable in multiprocessing with tqdm /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath) 781 def _extract(self, urlpath: str) -> str: 782 urlpath = str(urlpath) --> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token) 784 if protocol is None: 785 # no extraction /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token) 371 urlpath, kwargs = urlpath, {} 372 with fsspec.open(urlpath, **kwargs) as f: --> 373 return _get_extraction_protocol_with_magic_number(f) 374 375 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f) 335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]: 336 """read the magic number from a file-like object and return the compression protocol""" --> 337 prev_loc = f.loc 338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH) 339 f.seek(prev_loc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item) 337 338 def __getattr__(self, item): --> 339 return getattr(self.f, item) 340 341 def __enter__(self): AttributeError: '_io.BufferedReader' object has no attribute 'loc' ``` ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 - `fsspec` version: 2021.08.1 - `s3fs` version: 2021.08.1
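No fix appears in this record; consistent with the note that `streaming=False` works, a stopgap sketch of the same load without streaming (the S3 prefix is a placeholder):

```python
from datasets import load_dataset

path = "s3://my-bucket/my-prefix/"  # placeholder for the reporter's S3 path
data_files = {
    "train": path + "meta_train.parquet.gzip",
    "test": path + "meta_test.parquet.gzip",
}

# Same call minus streaming=True, which the reporter confirms works.
dataset = load_dataset("parquet", data_files=data_files)
```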
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4310/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4573/comments
https://api.github.com/repos/huggingface/datasets/issues/4573/events
https://github.com/huggingface/datasets/pull/4573
1,285,023,629
PR_kwDODunzps46YEEa
4,573
Fix evaluation metadata for ncbi_disease
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
2022-06-26T20:29:32Z
2023-09-24T09:35:07Z
2022-09-23T09:38:02Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4573.diff", "html_url": "https://github.com/huggingface/datasets/pull/4573", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4573.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4573" }
This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4573/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4573/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4450/comments
https://api.github.com/repos/huggingface/datasets/issues/4450/events
https://github.com/huggingface/datasets/pull/4450
1,261,878,324
PR_kwDODunzps45Kzwh
4,450
Update README.md of fquad
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-06T13:52:41Z
2022-06-06T14:51:49Z
2022-06-06T14:43:03Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4450.diff", "html_url": "https://github.com/huggingface/datasets/pull/4450", "merged_at": "2022-06-06T14:43:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/4450.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4450" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4450/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4450/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3114/comments
https://api.github.com/repos/huggingface/datasets/issues/3114/events
https://github.com/huggingface/datasets/issues/3114
1,030,693,130
I_kwDODunzps49byEK
3,114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
{ "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francisco-perez-sorrosal", "id": 918006, "login": "francisco-perez-sorrosal", "node_id": "MDQ6VXNlcjkxODAwNg==", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "type": "User", "url": "https://api.github.com/users/francisco-perez-sorrosal" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.", "Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` on...
2021-10-19T20:01:45Z
2022-02-14T14:00:28Z
2022-02-14T14:00:28Z
CONTRIBUTOR
null
null
null
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param accepted by the `load_from_disk` methods of `DatasetDict` (in dataset_dict.py) and `Dataset` (in arrow_dataset.py)) results in an error when the download method of the `fs` object is called. ## Steps to reproduce the bug The documentation for the `fs` parameter states: ``` fs (:class:`~filesystems.S3FileSystem` or ``fsspec.spec.AbstractFileSystem``, optional, default ``None``): Instance of the remote filesystem used to download the files from. ``` `PyArrowHDFS` from [fsspec](https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/hdfs.html) implements `fsspec.spec.AbstractFileSystem`. However, when using it as shown below, I get an error. ```python from fsspec.implementations.hdfs import PyArrowHDFS ... transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/" fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket) dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True) ``` ## Expected results Prior to loading from disk, I have managed to successfully store the data and meta-information of a DatasetDict in HDFS by doing: ```python transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/" fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket) my_datasets.save_to_disk(transformed_corpus_path, fs=fs) ``` As I have 3 datasets in the DatasetDict named `my_datasets`, the previous Python code creates the following contents in HDFS: ```sh $ hadoop fs -ls "/user/my_user/clickbait/transformed_ds/" Found 4 items -rw------- 3 my_user users 43 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/dataset_dict.json drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/test drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/train drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/validation ``` When invoking `DatasetDict.load_from_disk(...)` as described above, I would expect `dss` to contain the Arrow-backed datasets I previously saved to HDFS by calling `save_to_disk` on the `DatasetDict` object. ## Actual results However, when trying to recover the saved datasets, I get this error: ``` ... File "/home/fperez/dev/neuromancer/neuromancer/corpus.py", line 186, in load_transformed_corpus_from_disk dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True) File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/dataset_dict.py", line 748, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1048, in load_from_disk fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True) File "pyarrow/_hdfsio.pyx", line 438, in pyarrow._hdfsio.HadoopFileSystem.download TypeError: download() got an unexpected keyword argument 'recursive' ``` Examining the [signature of the download method in pyarrow 5.0.0](https://github.com/apache/arrow/blob/54d2bd89c99df72fa091b025452f85dd5d88e3cf/python/pyarrow/_hdfsio.pyx#L438) we can see that there's no `recursive` parameter: ```python def download(self, path, stream, buffer_size=None): with self.open(path, 'rb') as f: f.download(stream, buffer_size=buffer_size) ``` ## Environment info - `datasets` version: 1.13.3 - Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3114/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3114/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6126/comments
https://api.github.com/repos/huggingface/datasets/issues/6126/events
https://github.com/huggingface/datasets/issues/6126
1,839,675,320
I_kwDODunzps5tpze4
6,126
Private datasets do not load when passing token
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Our CI did not catch this issue because with current implementation, stored token in `HfFolder` (which always exists) is used by default.", "I can confirm this and have the same problem (and just went almost crazy because I couldn't figure out the source of this problem because on another computer everything wor...
2023-08-07T15:06:47Z
2023-08-08T15:16:23Z
2023-08-08T15:16:23Z
MEMBER
null
null
null
### Describe the bug Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`. This is an unplanned, backward-incompatible breaking change. Note that private datasets do load if instead `download_config` is passed: ```python from datasets import DownloadConfig, load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>")) ds ``` gives ``` Dataset({ features: ['text'], num_rows: 4 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") ``` gives ``` --------------------------------------------------------------------------- EmptyDatasetError Traceback (most recent call last) [<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1793 download_config = download_config.copy() if download_config else DownloadConfig() 1794 download_config.storage_options.update(storage_options) -> 1795 dataset_module = dataset_module_factory( 1796 path, 1797 revision=revision, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1485 if isinstance(e1, EmptyDatasetError): -> 1486 raise e1 from None 1487 if isinstance(e1, FileNotFoundError): 1488 raise FileNotFoundError( [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1474 download_config=download_config, 1475 download_mode=download_mode, -> 1476 ).get_module() 1477 except ( 1478 Exception [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self) 1030 sanitize_patterns(self.data_files) 1031 if self.data_files is not None -> 1032 else get_data_patterns(base_path, download_config=self.download_config) 1033 ) 1034 data_files = DataFilesDict.from_patterns( [/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config) 457 return _get_data_files_patterns(resolver) 458 except FileNotFoundError: --> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None 460 461 EmptyDatasetError: The directory at hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files ``` ### Expected behavior The dataset should load. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6126/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3644/comments
https://api.github.com/repos/huggingface/datasets/issues/3644/events
https://github.com/huggingface/datasets/issues/3644
1,116,519,670
I_kwDODunzps5CjLz2
3,644
Add a GROUP BY operator
{ "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/felix-schneider", "id": 208336, "login": "felix-schneider", "node_id": "MDQ6VXNlcjIwODMzNg==", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "repos_url": "https://api.github.com/users/felix-schneider/repos", "site_admin": false, "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "type": "User", "url": "https://api.github.com/users/felix-schneider" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! At the moment you can use `to_pandas()` to get a pandas DataFrame that supports `group_by` operations (make sure your dataset fits in memory though)\r\n\r\nWe use Arrow as a back-end for `datasets` and it doesn't have native group by (see https://github.com/apache/arrow/issues/2189) unfortunately.\r\n\r\nI ju...
2022-01-27T16:57:54Z
2023-03-14T14:45:59Z
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datasets.Value("string") # } ds = datasets.Dataset() def split(examples): sentences = [text.split(".") for text in examples["text"]] return { "example_id": [ example_id for example_id, sents in zip(examples["example_id"], sentences) for _ in sents ], "sentence": [sent for sents in sentences for sent in sents], "sentence_id": [i for sents in sentences for i in range(len(sents))], } split_ds = ds.map(split, batched=True) def process(examples): outputs = some_neural_network_that_works_on_sentences(examples["sentence"]) return {"outputs": outputs} split_ds = split_ds.map(process, batched=True) ``` I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together. **Describe the solution you'd like** Ideally, it would look something like this: ```python def join(examples): order = np.argsort(examples["sentence_id"]) text = ".".join(examples["sentence"][i] for i in order) outputs = [examples["outputs"][i] for i in order] return {"text": text, "outputs": outputs} ds = split_ds.group_by("example_id", join) ``` **Describe alternatives you've considered** Right now, we can do this: ```python def merge(example): example_id = example["example_id"] parts = split_ds.filter(lambda x: x["example_id"] == example_id).sort("sentence_id") return {"outputs": list(parts["outputs"])} ds = ds.map(merge) ``` Of course, we could process the dataset like this: ```python def process(example): outputs = some_neural_network_that_works_on_sentences(example["text"].split(".")) return {"outputs": outputs} ds = ds.map(process) ``` However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example. I would very much appreciate some kind of group-by operator to merge examples based on the value of one column.
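Until a native operator exists, the maintainer reply above suggests round-tripping through pandas (memory permitting); a small sketch of the group-by-and-rejoin using the column names from the example above:

```python
import datasets

# Sentence-level rows, as the `split` map above would produce.
split_ds = datasets.Dataset.from_dict({
    "example_id": [0, 0, 1],
    "sentence_id": [0, 1, 0],
    "sentence": ["Hello", " world", "Bye"],
    "outputs": [[0.1], [0.2], [0.3]],
})

df = split_ds.to_pandas()
grouped = (
    df.sort_values(["example_id", "sentence_id"])
    .groupby("example_id")
    .agg({"sentence": ".".join, "outputs": list})
    .reset_index()
)
ds = datasets.Dataset.from_pandas(grouped)
print(ds[0])  # {'example_id': 0, 'sentence': 'Hello. world', 'outputs': [[0.1], [0.2]]}
```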
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3644/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4834/comments
https://api.github.com/repos/huggingface/datasets/issues/4834/events
https://github.com/huggingface/datasets/pull/4834
1,336,993,511
PR_kwDODunzps49FJOu
4,834
Fix documentation card of recipe_nlg dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T09:49:39Z
2022-08-12T11:28:18Z
2022-08-12T11:13:40Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4834.diff", "html_url": "https://github.com/huggingface/datasets/pull/4834", "merged_at": "2022-08-12T11:13:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/4834.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4834" }
Fix documentation card of recipe_nlg dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4834/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4834/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4796
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4796/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4796/comments
https://api.github.com/repos/huggingface/datasets/issues/4796/events
https://github.com/huggingface/datasets/issues/4796
1,329,887,810
I_kwDODunzps5PRHpC
4,796
ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB when adding image to Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", ...
{ "closed_at": null, "closed_issues": 0, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 4, "state": "open", "title": "3.0", "updated_at": "2023-09-22T14:07:52Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "@mariosasko I'm getting a similar issue when creating a Dataset from a Pandas dataframe, like so:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Image, Value\r\nimport pandas as pd\r\nimport requests\r\nimport PIL\r\n\r\n# we need to define the features ourselves\r\nfeatures = Features({\r\n 'a': Value(d...
2022-08-05T12:41:19Z
2023-05-31T14:05:35Z
null
CONTRIBUTOR
null
null
null
## Describe the bug When adding a Pillow image to an existing Dataset on the Hub, `add_item` fails because the Pillow image is not automatically converted into the Image feature. ## Steps to reproduce the bug ```python from datasets import load_dataset from PIL import Image dataset = load_dataset("hf-internal-testing/example-documents") # load any random Pillow image image = Image.open("/content/cord_example.png").convert("RGB") new_image = {'image': image} dataset['test'] = dataset['test'].add_item(new_image) ``` ## Expected results The image should be automatically cast to the Image feature when using `add_item`. For now, this can be fixed by using `encode_example`: ```python import datasets feature = datasets.Image(decode=False) new_image = {'image': feature.encode_example(image)} dataset['test'] = dataset['test'].add_item(new_image) ``` ## Actual results ``` ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=576x864 at 0x7F7CCC4589D0> with type Image: did not recognize Python value type when inferring an Arrow data type ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4796/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4796/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5731/comments
https://api.github.com/repos/huggingface/datasets/issues/5731/events
https://github.com/huggingface/datasets/pull/5731
1,662,012,913
PR_kwDODunzps5N_7Un
5,731
Temporarily pin fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-11T08:33:15Z
2023-04-11T08:57:45Z
2023-04-11T08:47:55Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5731.diff", "html_url": "https://github.com/huggingface/datasets/pull/5731", "merged_at": "2023-04-11T08:47:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5731" }
Fix #5730.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5731/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2396
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2396/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2396/comments
https://api.github.com/repos/huggingface/datasets/issues/2396/events
https://github.com/huggingface/datasets/issues/2396
899,016,308
MDU6SXNzdWU4OTkwMTYzMDg=
2,396
strange datasets from OSCAR corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jerryIsHere", "id": 50871412, "login": "jerryIsHere", "node_id": "MDQ6VXNlcjUwODcxNDEy", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "type": "User", "url": "https://api.github.com/users/jerryIsHere" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting\r\ncc @pjox is this an issue from the data ?\r\n\r\nAnyway we should at least mention that OSCAR could contain such contents in the dataset card, you're totally right @jerryIsHere ", "Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's ...
2021-05-23T13:06:02Z
2021-06-17T13:54:37Z
null
CONTRIBUTOR
null
null
null
![image](https://user-images.githubusercontent.com/50871412/119260850-4f876b80-bc07-11eb-8894-124302600643.png)
![image](https://user-images.githubusercontent.com/50871412/119260875-675eef80-bc07-11eb-9da4-ee27567054ac.png)

From the [official site](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2 KB of data. Seven training instances is obviously not the right number. As I can read Yue Chinese, I can tell the last instance is definitely not something that would appear on Common Crawl. And even if you don't read Yue Chinese, you can tell the first six instances are problematic. (It is embarrassing, as the seven training instances look exactly like something from a pornographic novel or flirting messages in a dating-app chat.)

It might not be a problem with the huggingface/datasets implementation, because when I tried to download the dataset from the official site, I found that the zip file is corrupted. I will try to inform the host of the OSCAR corpus later. Anyway, a remake of this dataset in huggingface/datasets is needed, perhaps after the host of the dataset fixes the issue.

> Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could please you open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it?

Thanks a lot, the new post is here: https://github.com/oscar-corpus/oscar-website/issues/11
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2396/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2396/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6078/comments
https://api.github.com/repos/huggingface/datasets/issues/6078/events
https://github.com/huggingface/datasets/issues/6078
1,822,501,472
I_kwDODunzps5soSpg
6,078
resume_download with streaming=True
{ "avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4", "events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}", "followers_url": "https://api.github.com/users/NicolasMICAUX/followers", "following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}", "gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NicolasMICAUX", "id": 72763959, "login": "NicolasMICAUX", "node_id": "MDQ6VXNlcjcyNzYzOTU5", "organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs", "received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events", "repos_url": "https://api.github.com/users/NicolasMICAUX/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions", "type": "User", "url": "https://api.github.com/users/NicolasMICAUX" }
[]
closed
false
null
[]
null
[ "Currently, it's not possible to efficiently resume streaming after an error. Eventually, we plan to support this for Parquet (see https://github.com/huggingface/datasets/issues/5380). ", "Ok thank you for your answer", "I'm closing this as a duplicate of #5380" ]
2023-07-26T14:08:22Z
2023-07-28T11:05:03Z
2023-07-28T11:05:03Z
NONE
null
null
null
### Describe the bug

I used:

```
dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train"
)
```

Unfortunately, the server had a problem during the training process. I saved the step my training stopped at. But how can I resume the download from step 1_000_000 without re-streaming the first 1 million docs of the dataset? `download_config=DownloadConfig(resume_download=True)` does not seem to work with streaming=True.

### Steps to reproduce the bug

```
from datasets import load_dataset, DownloadConfig

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,  # optional
    split="train",
    download_config=DownloadConfig(resume_download=True)
)

# interrupt the run and relaunch it => this restarts from scratch
```

### Expected behavior

I would expect a parameter to start streaming from a given index in the dataset.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0
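As a stopgap (not part of the original report), an `IterableDataset` can be logically fast-forwarded with `skip`; note this still re-streams and discards the skipped examples, so it saves no bandwidth. A minimal sketch, assuming the same OSCAR access setup as above:

```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)

# Logically resume at example 1_000_000. The skipped examples are still
# downloaded and thrown away, so this trades bandwidth for a correct restart.
resumed = dataset.skip(1_000_000)

for example in islice(resumed, 3):  # peek at the first few resumed examples
    print(list(example))
```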
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6078/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3098/comments
https://api.github.com/repos/huggingface/datasets/issues/3098/events
https://github.com/huggingface/datasets/pull/3098
1,028,210,790
PR_kwDODunzps4tSRSZ
3,098
Push to hub capabilities for `Dataset` and `DatasetDict`
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[]
closed
false
null
[]
null
[ "Thank you for your reviews! I should have addressed all of your comments, and I added a test to ensure that `private` datasets work correctly too. I have merged the changes in `huggingface_hub`, so the `main` branch can be installed now; and I will release v0.1.0 soon.\r\n\r\nAs blockers for this PR:\r\n- It's sti...
2021-10-17T04:12:44Z
2021-12-08T16:04:50Z
2021-11-24T11:25:36Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3098.diff", "html_url": "https://github.com/huggingface/datasets/pull/3098", "merged_at": "2021-11-24T11:25:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3098.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3098" }
This PR implements a `push_to_hub` method on `Dataset` and `DatasetDict`. It does not currently work for `IterableDatasetDict` or `IterableDataset`, as those are simple dicts, and I would like your opinion on how you would like this implemented before going ahead and doing it.

This implementation needs to be used with the following `huggingface_hub` branch in order to work correctly: https://github.com/huggingface/huggingface_hub/pull/415

### Implementation

The `push_to_hub` API is entirely based on HTTP requests rather than a git-based workflow:
- This allows pushing changes without first cloning the repository, which cuts the time taken by the `push_to_hub` method in half.
- Collaboration, as well as the system of branches/merges/rebases, is IMO less straightforward than for models and spaces. In the situation where such collaboration is needed, I would *heavily* advocate for the `Repository` helper of `huggingface_hub` to be used instead of the `push_to_hub` method, which will always be, by design, limited in that regard (even if based on a git workflow instead of HTTP requests).

To overcome the 5GB file limit set by the HTTP requests, dataset sharding is used.

### Testing

The test suite implemented here makes use of the moon-staging endpoint instead of the production setup. As several repositories are created and deleted, it is better to use staging. It does not require setting an environment variable or any kind of special attention, but introduces a new decorator `with_staging_testing` which patches global variables to use the staging endpoint instead of the production endpoint.

### Examples

The tests cover a lot of examples and behaviors.
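For reference, a minimal usage sketch of the method this PR adds (the repo id is a placeholder; large splits are sharded automatically):

```python
from datasets import load_dataset

# Load any dataset and push it to the Hub over HTTP; no local git clone needed.
ds = load_dataset("rotten_tomatoes")
ds.push_to_hub("username/my-rotten-tomatoes", private=True)  # placeholder repo id
```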
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 3, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/3098/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3098/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5920/comments
https://api.github.com/repos/huggingface/datasets/issues/5920/events
https://github.com/huggingface/datasets/pull/5920
1,736,196,991
PR_kwDODunzps5R5TRB
5,920
Optimize IterableDataset.from_file using ArrowExamplesIterable
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-06-01T12:14:36Z
2023-06-01T12:42:10Z
2023-06-01T12:35:14Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5920.diff", "html_url": "https://github.com/huggingface/datasets/pull/5920", "merged_at": "2023-06-01T12:35:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/5920.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5920" }
Follow-up to https://github.com/huggingface/datasets/pull/5893.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5920/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5920/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2494/comments
https://api.github.com/repos/huggingface/datasets/issues/2494/events
https://github.com/huggingface/datasets/issues/2494
920,149,183
MDU6SXNzdWU5MjAxNDkxODM=
2,494
Improve docs on Enhancing performance
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
open
false
null
[]
null
[]
2021-06-14T08:11:48Z
2021-06-14T08:11:48Z
null
MEMBER
null
null
null
In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases: - How to make datasets the fastest - How to make datasets take the less RAM - How to make datasets take the less hard drive mem cc: @thomwolf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2494/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2494/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1999/comments
https://api.github.com/repos/huggingface/datasets/issues/1999/events
https://github.com/huggingface/datasets/pull/1999
823,753,591
MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy
1,999
Add FashionMNIST dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nI have added the changes from the review." ]
2021-03-06T21:36:57Z
2021-03-09T09:52:11Z
2021-03-09T09:52:11Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1999.diff", "html_url": "https://github.com/huggingface/datasets/pull/1999", "merged_at": "2021-03-09T09:52:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/1999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1999" }
This PR adds the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1999/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4277/comments
https://api.github.com/repos/huggingface/datasets/issues/4277/events
https://github.com/huggingface/datasets/pull/4277
1,225,002,286
PR_kwDODunzps43RZV9
4,277
Enable label alignment for token classification datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm, not sure why the Windows tests are failing with:\r\n\r\n```\r\nDid not find path entry C:\\tools\\miniconda3\\bin\r\nC:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n```\r\n\r\nEdit: running the CI again ...
2022-05-04T07:15:16Z
2022-05-06T15:42:15Z
2022-05-06T15:36:31Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4277.diff", "html_url": "https://github.com/huggingface/datasets/pull/4277", "merged_at": "2022-05-06T15:36:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4277.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4277" }
This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER).

Example of usage:

```python
from datasets import load_dataset

ner_ds = load_dataset("conll2003", split="train")

# returns [3, 0, 7, 0, 0, 0, 7, 0, 0]
ner_ds[0]["ner_tags"]

# hypothetical model mapping with O <--> B-LOC
label2id = {
    "B-LOC": "0",
    "B-MISC": "7",
    "B-ORG": "3",
    "B-PER": "1",
    "I-LOC": "6",
    "I-MISC": "8",
    "I-ORG": "4",
    "I-PER": "2",
    "O": "5",
}

ner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, "ner_tags")

# returns [3, 5, 7, 5, 5, 5, 7, 5, 5]
ner_aligned_ds[0]["ner_tags"]
```

Context: we need this in AutoTrain to automatically align datasets / models during evaluation.

cc @abhishekkrthakur
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4277/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4277/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4192/comments
https://api.github.com/repos/huggingface/datasets/issues/4192/events
https://github.com/huggingface/datasets/issues/4192
1,210,692,554
I_kwDODunzps5IKbPK
4,192
load_dataset can't load local dataset: Unable to find ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/33253979?v=4", "events_url": "https://api.github.com/users/ahf876828330/events{/privacy}", "followers_url": "https://api.github.com/users/ahf876828330/followers", "following_url": "https://api.github.com/users/ahf876828330/following{/other_user}", "gists_url": "https://api.github.com/users/ahf876828330/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ahf876828330", "id": 33253979, "login": "ahf876828330", "node_id": "MDQ6VXNlcjMzMjUzOTc5", "organizations_url": "https://api.github.com/users/ahf876828330/orgs", "received_events_url": "https://api.github.com/users/ahf876828330/received_events", "repos_url": "https://api.github.com/users/ahf876828330/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ahf876828330/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahf876828330/subscriptions", "type": "User", "url": "https://api.github.com/users/ahf876828330" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json...
2022-04-21T08:28:58Z
2022-04-25T16:51:57Z
2022-04-22T07:39:53Z
NONE
null
null
null
```
Traceback (most recent call last):
  File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
    dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
    **config_kwargs,
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder
    data_files=data_files,
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory
    download_mode=download_mode,
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module
    data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token)
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote
    if not isinstance(patterns_for_key, DataFilesList)
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote
    data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls
    for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
  File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally
    raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained
```

![image](https://user-images.githubusercontent.com/33253979/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png)
![image](https://user-images.githubusercontent.com/33253979/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png)

The code is in model.py. Why can't I use the load_dataset function to load my local dataset?
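As the first reply points out, `dataset_infos.json` is metadata rather than data, so `load_dataset` has nothing to load from it. A sketch of what loading local JSON data is meant to look like (the file names below are placeholders):

```python
from datasets import load_dataset

# Point data_files at the actual data files, not at dataset_infos.json.
dataset = load_dataset(
    "json",
    data_files={"train": "dataset/train.json", "test": "dataset/test.json"},
)
```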
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4192/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4192/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3828/comments
https://api.github.com/repos/huggingface/datasets/issues/3828/events
https://github.com/huggingface/datasets/issues/3828
1,160,064,029
I_kwDODunzps5FJSwd
3,828
The Pile's _FEATURE spec seems to be incorrect
{ "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dlwh", "id": 9633, "login": "dlwh", "node_id": "MDQ6VXNlcjk2MzM=", "organizations_url": "https://api.github.com/users/dlwh/orgs", "received_events_url": "https://api.github.com/users/dlwh/received_events", "repos_url": "https://api.github.com/users/dlwh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "type": "User", "url": "https://api.github.com/users/dlwh" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @dlwh, thanks for reporting.\r\n\r\nPlease note, that the source data files for \"all\" config are different from the other configurations.\r\n\r\nThe \"all\" config contains the official Pile data files, from https://mystic.the-eye.eu/public/AI/pile/\r\nAll data examples contain a \"meta\" dict with a single \...
2022-03-04T21:25:32Z
2022-03-08T09:30:49Z
2022-03-08T09:30:48Z
NONE
null
null
null
## Describe the bug

If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py:

For "all":
* the pile_set_name is never set for data
* there's actually an id field inside of "meta"

For the subcorpora pubmed_central and hacker_news:
* the meta is specified to be a string, but it's actually a dict with an id field inside.

## Steps to reproduce the bug

## Expected results

The feature spec should match the data, I'd think.

## Actual results

Specify the actual results or traceback.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
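A sketch of a feature spec matching the data as described above, assuming `meta` only carries an `id` string (the field set is illustrative, not the full Pile schema):

```python
from datasets import Features, Value

# "meta" is declared as a nested dict with an "id" field, not a plain string.
features = Features(
    {
        "text": Value("string"),
        "meta": {"id": Value("string")},
    }
)
```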
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3828/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3828/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4959/comments
https://api.github.com/repos/huggingface/datasets/issues/4959/events
https://github.com/huggingface/datasets/pull/4959
1,367,924,429
PR_kwDODunzps4-rx6l
4,959
Fix data URLs of compguesswhat dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-09-09T14:36:10Z
2022-09-09T16:01:34Z
2022-09-09T15:59:04Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4959.diff", "html_url": "https://github.com/huggingface/datasets/pull/4959", "merged_at": "2022-09-09T15:59:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4959.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4959" }
After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them:
- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1

This PR updates their data URLs in our loading script.

Related to:
- #3191
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4959/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4959/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4112/comments
https://api.github.com/repos/huggingface/datasets/issues/4112/events
https://github.com/huggingface/datasets/issues/4112
1,194,752,765
I_kwDODunzps5HNnr9
4,112
ImageFolder with Grayscale images dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4", "events_url": "https://api.github.com/users/chainyo/events{/privacy}", "followers_url": "https://api.github.com/users/chainyo/followers", "following_url": "https://api.github.com/users/chainyo/following{/other_user}", "gists_url": "https://api.github.com/users/chainyo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chainyo", "id": 50595514, "login": "chainyo", "node_id": "MDQ6VXNlcjUwNTk1NTE0", "organizations_url": "https://api.github.com/users/chainyo/orgs", "received_events_url": "https://api.github.com/users/chainyo/received_events", "repos_url": "https://api.github.com/users/chainyo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chainyo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chainyo/subscriptions", "type": "User", "url": "https://api.github.com/users/chainyo" }
[]
closed
false
null
[]
null
[ "Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n...
2022-04-06T15:10:00Z
2022-04-22T10:21:53Z
2022-04-22T10:21:52Z
NONE
null
null
null
Hi, I'm facing a problem with a grayscale image dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP). I'm getting an error when I try to use the images to train a model with a PyTorch DataLoader. Here is the full traceback:

```bash
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__
    return self._getitem(
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem
    formatted_output = format_table(
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table
    return formatter(pa_table, query_type=query_type)
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__
    return self.format_row(pa_table)
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row
    return self.recursive_tensorize(row)
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize
    return map_nested(self._recursive_tensorize, data_struct, map_list=False)
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested
    mapped = [
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested
    return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp>
    return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested
    return function(data_struct)
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize
    return self._tensorize(data_struct)
  File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize
    if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
```

I don't really understand why the image is still a bytes object even though I applied transformations to it.
Here is the code I used to upload the dataset (and it worked well):

```python
train_dataset = load_dataset("imagefolder", data_dir="data/train")
train_dataset = train_dataset["train"]
test_dataset = load_dataset("imagefolder", data_dir="data/test")
test_dataset = test_dataset["train"]
val_dataset = load_dataset("imagefolder", data_dir="data/val")
val_dataset = val_dataset["train"]

dataset = DatasetDict({
    "train": train_dataset,
    "val": val_dataset,
    "test": test_dataset
})
dataset.push_to_hub("ChainYo/rvl-cdip")
```

Now here is the code I am using to get the dataset and prepare it for training:

```python
img_size = 512
batch_size = 128
normalize = [(0.5), (0.5)]
data_dir = "ChainYo/rvl-cdip"

dataset = load_dataset(data_dir, split="train")
transforms = transforms.Compose([
    transforms.Resize(img_size),
    transforms.CenterCrop(img_size),
    transforms.ToTensor(),
    transforms.Normalize(*normalize)
])

transformed_dataset = dataset.with_transform(transforms)
transformed_dataset.set_format(type="torch", device="cuda")

train_dataloader = torch.utils.data.DataLoader(
    transformed_dataset, batch_size=batch_size,
    shuffle=True, num_workers=4, pin_memory=True
)
```

But this gets me the error above. I don't understand why it's doing this kind of weird thing. Do I need to map something over the dataset? Something like this:

```python
labels = dataset.features["label"].names
num_labels = dataset.features["label"].num_classes

def preprocess_data(examples):
    images = [ex.convert("RGB") for ex in examples["image"]]
    labels = [ex for ex in examples["label"]]
    return {"images": images, "labels": labels}

features = Features({
    "images": Image(decode=True, id=None),
    "labels": ClassLabel(num_classes=num_labels, names=labels)
})

decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100)
```
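The first reply in the thread (truncated above) sketches the fix: `with_transform` expects a callable that receives a batch dict, not a bare torchvision `Compose`. A minimal adaptation of the snippet above, keeping a single channel to match the one-value mean/std in `Normalize`:

```python
def transform_func(examples):
    # Run the torchvision pipeline on each decoded PIL image in the batch;
    # "L" keeps one channel, consistent with normalize = [(0.5), (0.5)].
    examples["image"] = [transforms(img.convert("L")) for img in examples["image"]]
    return examples

# with_transform replaces set_format here: output formatting is now handled
# entirely by the function above.
transformed_dataset = dataset.with_transform(transform_func)
```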
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4112/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4112/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2204/comments
https://api.github.com/repos/huggingface/datasets/issues/2204/events
https://github.com/huggingface/datasets/pull/2204
855,144,431
MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2
2,204
Add configurable options to `seqeval` metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marrodion", "id": 44571847, "login": "marrodion", "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "organizations_url": "https://api.github.com/users/marrodion/orgs", "received_events_url": "https://api.github.com/users/marrodion/received_events", "repos_url": "https://api.github.com/users/marrodion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "type": "User", "url": "https://api.github.com/users/marrodion" }
[]
closed
false
null
[]
null
[]
2021-04-10T19:58:19Z
2021-04-15T13:49:46Z
2021-04-15T13:49:46Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2204.diff", "html_url": "https://github.com/huggingface/datasets/pull/2204", "merged_at": "2021-04-15T13:49:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2204.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2204" }
Fixes #2148.

Adds options to use strict mode, different evaluation schemes, sample weights, and to adjust `zero_division` behavior when it is encountered. `seqeval` provides schemes as objects, hence the dynamic import from a string, to avoid making the user do the import themselves (thanks to @albertvillanova for the `importlib` idea).
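A usage sketch of the new options (the label sequences are made up; parameter names follow this PR's description, e.g. the scheme passed as a string):

```python
from datasets import load_metric

metric = load_metric("seqeval")

predictions = [["O", "B-PER", "I-PER", "O"]]
references = [["O", "B-PER", "I-PER", "O"]]

# The scheme is given by name and resolved dynamically via importlib inside
# the metric, so the user never imports seqeval's scheme objects directly.
results = metric.compute(
    predictions=predictions,
    references=references,
    scheme="IOB2",
    mode="strict",
    zero_division=0,
)
print(results["overall_f1"])
```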
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2204/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3489/comments
https://api.github.com/repos/huggingface/datasets/issues/3489/events
https://github.com/huggingface/datasets/pull/3489
1,089,401,926
PR_kwDODunzps4wT97d
3,489
Avoid unnecessary list creations
{ "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bryant1410", "id": 3905501, "login": "bryant1410", "node_id": "MDQ6VXNlcjM5MDU1MDE=", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "repos_url": "https://api.github.com/users/bryant1410/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "type": "User", "url": "https://api.github.com/users/bryant1410" }
[]
open
false
null
[]
null
[ "@bryant1410 Thanks for working on this. Could you please split the PR into 4 or 5 smaller PRs (ideally one PR for each bullet point from your description) because it's not practical to review such a large PR, especially if the changes are not interrelated?" ]
2021-12-27T18:20:56Z
2022-07-06T15:19:49Z
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3489.diff", "html_url": "https://github.com/huggingface/datasets/pull/3489", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3489.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3489" }
Avoids unnecessary list creations, like in `join([... for s in ...])`. Also changed other things I noticed along the way:

* Use a `with` statement for the many `open` calls that were missing one, so the files don't remain open.
* Remove unused variables.
* Convert many HTTP links to HTTPS (verified).
* Remove the unnecessary "r" mode arg in `open` (double-checked it was actually the default in each case).
* Remove the Python 2 style of using `super`.
* Run `pyupgrade $(find . -name "*.py" -type f) --py36-plus` (which already covers some of the previous points).
* Run `dos2unix $(find . -name "*.py" -type f)` (CRLF to LF line endings).
* Fix typos.
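To make the headline change concrete, a tiny illustrative before/after (not taken from the diff):

```python
words = ["alpha", "beta", "gamma"]

# Before: the list comprehension builds a throwaway list just to join it.
joined = ",".join([w.upper() for w in words])

# After: a generator expression feeds join without spelling out the list.
joined = ",".join(w.upper() for w in words)
```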
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3489/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3489/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3940/comments
https://api.github.com/repos/huggingface/datasets/issues/3940/events
https://github.com/huggingface/datasets/pull/3940
1,171,106,853
PR_kwDODunzps40iYxr
3,940
Create CoVAL metric card
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-03-16T14:31:49Z
2022-03-18T17:37:59Z
2022-03-18T17:35:14Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3940.diff", "html_url": "https://github.com/huggingface/datasets/pull/3940", "merged_at": "2022-03-18T17:35:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3940.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3940" }
Initial CoVAL metric card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3940/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3940/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5670/comments
https://api.github.com/repos/huggingface/datasets/issues/5670/events
https://github.com/huggingface/datasets/issues/5670
1,640,607,045
I_kwDODunzps5hya1F
5,670
Unable to load multi-class classification datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/19690506?v=4", "events_url": "https://api.github.com/users/ysahil97/events{/privacy}", "followers_url": "https://api.github.com/users/ysahil97/followers", "following_url": "https://api.github.com/users/ysahil97/following{/other_user}", "gists_url": "https://api.github.com/users/ysahil97/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ysahil97", "id": 19690506, "login": "ysahil97", "node_id": "MDQ6VXNlcjE5NjkwNTA2", "organizations_url": "https://api.github.com/users/ysahil97/orgs", "received_events_url": "https://api.github.com/users/ysahil97/received_events", "repos_url": "https://api.github.com/users/ysahil97/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ysahil97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysahil97/subscriptions", "type": "User", "url": "https://api.github.com/users/ysahil97" }
[]
closed
false
null
[]
null
[ "Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)", "Thanks @lhoestq!\r\n\r\nI'll close this issue now." ]
2023-03-25T18:06:15Z
2023-03-27T22:54:56Z
2023-03-27T22:54:56Z
NONE
null
null
null
### Describe the bug I've been playing around with huggingface library, mostly with `datasets` and wanted to download the multi class classification datasets to fine tune BERT on this task. ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)). While loading the dataset, I'm getting the following error snippet. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[44], line 3 1 from datasets import load_dataset ----> 3 imdb_dataset = load_dataset("yelp_review_full") 4 imdb_dataset File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1716 ignore_verifications = ignore_verifications or save_infos 1718 # Create a dataset builder -> 1719 builder_instance = load_dataset_builder( 1720 path=path, 1721 name=name, 1722 data_dir=data_dir, 1723 data_files=data_files, 1724 cache_dir=cache_dir, 1725 features=features, 1726 download_config=download_config, 1727 download_mode=download_mode, 1728 revision=revision, 1729 use_auth_token=use_auth_token, 1730 **config_kwargs, 1731 ) 1733 # Return iterable dataset in case of streaming 1734 if streaming: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1520 raise ValueError(error_msg) 1522 # Instantiate the dataset builder -> 1523 builder_instance: DatasetBuilder = builder_cls( 1524 cache_dir=cache_dir, 1525 config_name=config_name, 1526 data_dir=data_dir, 1527 data_files=data_files, 1528 hash=hash, 1529 features=features, 1530 use_auth_token=use_auth_token, 1531 **builder_kwargs, 1532 **config_kwargs, 1533 ) 1535 return builder_instance File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1291 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1292 super().__init__(*args, **kwargs) 1293 # Batch size used by the ArrowWriter 1294 # It defines the number of samples that are kept in memory before writing them 1295 # and also the length of the arrow chunks 1296 # None means that the ArrowWriter will use its default value 1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 309 # prepare info: DatasetInfo are a standardized dataclass across all datasets 310 # Prefill datasetinfo 311 if info is None: --> 312 info = self.get_exported_dataset_info() 313 info.update(self._info()) 314 info.builder_name = self.name File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self) 400 def get_exported_dataset_info(self) -> DatasetInfo: 401 """Empty DatasetInfo if doesn't exist 402 403 Example: (...) 
410 ``` 411 """ --> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls) 385 @classmethod 386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict: 387 """Empty dict if doesn't exist 388 389 Example: (...) 396 ``` 397 """ --> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir) 368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md") 369 if "dataset_info" in dataset_metadata: --> 370 return cls.from_metadata(dataset_metadata) 371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)): 372 # this is just to have backward compatibility with dataset_infos.json files 373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata) 387 return cls( 388 { 389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict( (...) 393 } 394 ) 395 else: --> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"]) 397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default") 398 return cls({dataset_info.config_name: dataset_info}) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data) 330 yaml_data = copy.deepcopy(yaml_data) 331 if yaml_data.get("features") is not None: --> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"]) 333 if yaml_data.get("splits") is not None: 334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"]) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data) 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1734 return {"_type": 
snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." 1709 ) 1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids] TypeError: can only concatenate str (not "int") to str ``` The same issue happens when I try to load the `go-emotions` multi-class classification dataset. Could somebody guide me on how to fix this issue? ### Steps to reproduce the bug Run the following code snippet in a Python script or notebook cell: ``` from datasets import load_dataset yelp_dataset = load_dataset("yelp_review_full") yelp_dataset ``` ### Expected behavior The dataset should load correctly, showing the train, test, and unsupervised splits with basic data statistics. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
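The `TypeError` at the bottom of the traceback points at the label-id check in `features.py`. A minimal standalone sketch of that failing expression is below; the `names` dict is a hypothetical stand-in for what the YAML metadata parser may hand back when ClassLabel ids are read as strings rather than ints (an assumption based on the traceback, not a confirmed root cause):

```python
# Hypothetical stand-in for ClassLabel names parsed from README YAML metadata,
# where the label ids come back as strings instead of ints.
names = {"0": "1 star", "1": "2 stars", "2": "3 stars"}

label_ids = sorted(names)  # ["0", "1", "2"] -- string keys, not ints

# The library compares the sorted ids against list(range(label_ids[-1] + 1));
# with string ids, `label_ids[-1] + 1` is "2" + 1, which raises:
label_ids[-1] + 1  # TypeError: can only concatenate str (not "int") to str
```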
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5670/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4724/comments
https://api.github.com/repos/huggingface/datasets/issues/4724/events
https://github.com/huggingface/datasets/pull/4724
1,311,127,404
PR_kwDODunzps47vLrP
4,724
Download and prepare as Parquet for cloud storage
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added some docs for dask and took your comments into account\r\n\r\ncc @philschmid if you also want to take a look :)", "Just noticed that it would be more convenient to pass the output dir to download_and_prepare directly, to bypa...
2022-07-20T13:39:02Z
2022-09-05T17:27:25Z
2022-09-05T17:25:27Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4724.diff", "html_url": "https://github.com/huggingface/datasets/pull/4724", "merged_at": "2022-09-05T17:25:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/4724.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4724" }
Downloading a dataset as Parquet to cloud storage can be useful for streaming mode and for use with spark/dask/ray. This PR adds support for `fsspec` URIs like `s3://...`, `gcs://...` etc. and adds the `file_format` argument to save as parquet instead of arrow: ```python from datasets import * cache_dir = "s3://..." builder = load_dataset_builder("crime_and_punish", cache_dir=cache_dir) builder.download_and_prepare(file_format="parquet") ``` EDIT: actually changed the API to ```python from datasets import * builder = load_dataset_builder("crime_and_punish") builder.download_and_prepare("s3://...", file_format="parquet") ``` Credentials to cloud storage can be passed using the `storage_options` argument (a hedged sketch follows below). For consistency with the BeamBasedBuilder, I name the parquet files `{builder.name}-{split}-xxxxx-of-xxxxx.parquet`. I think this is fine since we'll need to implement parquet sharding after this PR, so that a dataset can be used efficiently with dask for example. Note that images/audio files are not yet embedded in the parquet files; this will be added in a subsequent PR. TODO: - [x] docs - [x] tests
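For illustration, a minimal sketch of passing credentials via `storage_options` (the bucket path is a placeholder, and the `key`/`secret` names are `s3fs`-style options that depend on which fsspec filesystem is targeted):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("crime_and_punish")
builder.download_and_prepare(
    "s3://my-bucket/crime_and_punish",  # hypothetical bucket path
    file_format="parquet",
    # credentials forwarded to the fsspec filesystem (here: s3fs-style keys)
    storage_options={"key": "...", "secret": "..."},
)
```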
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4724/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4724/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3986/comments
https://api.github.com/repos/huggingface/datasets/issues/3986/events
https://github.com/huggingface/datasets/issues/3986
1,176,429,565
I_kwDODunzps5GHuP9
3,986
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
{ "avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4", "events_url": "https://api.github.com/users/kelvinAI/events{/privacy}", "followers_url": "https://api.github.com/users/kelvinAI/followers", "following_url": "https://api.github.com/users/kelvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kelvinAI", "id": 10686779, "login": "kelvinAI", "node_id": "MDQ6VXNlcjEwNjg2Nzc5", "organizations_url": "https://api.github.com/users/kelvinAI/orgs", "received_events_url": "https://api.github.com/users/kelvinAI/received_events", "repos_url": "https://api.github.com/users/kelvinAI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions", "type": "User", "url": "https://api.github.com/users/kelvinAI" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?", "Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datas...
2022-03-22T08:23:21Z
2023-03-06T16:55:04Z
null
NONE
null
null
null
## Describe the bug Dataset loads indefinitely after modifying the cache path (~/.cache/huggingface). If none of the environment variables are set, this custom dataset loads fine (a json-based dataset with a custom dataset loading script). **Update**: Transformer modules face the same issue during loading as well. ## A clear and concise description of what the bug is. Issue: - Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory - No error code; I had to terminate the process - There are some files created in the cache directory: ``` custom_cache_dir | -- modules | -- __init__.py | -- datasets_modules | -- __init__.py | -- datasets | -- __init__.py | -- script.py (Dataset loading script) | -- script.lock ``` No errors or logs are thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk. ## Steps to reproduce the bug What I've tried: - Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703) - Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html) - Modifying the cache_dir param at runtime ```python >>> from datasets import load_dataset >>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache') ``` - Disabling the dataset cache ```python >>> from datasets import set_caching_enabled >>> set_caching_enabled(False) ``` ## Expected results Datasets should load and cache as usual, with the only difference being the cache directory. ## Actual results All of the actions above to change the cache directory result in the dataset loading indefinitely without terminating. ## Environment info - `transformers` version: 4.18.0.dev0 - Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
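One thing worth ruling out (an assumption, not a confirmed cause of this hang): the cache-related environment variables are resolved when `datasets` is first imported, so they must be exported before the import happens. A minimal sketch:

```python
import os

# Hypothetical custom cache location; set it before importing `datasets`,
# since the library resolves its cache paths at import time.
os.environ["HF_HOME"] = "/path/to/custom_cache_dir"

from datasets import load_dataset  # imported only after HF_HOME is set

dataset = load_dataset("test_dataset")
```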
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3986/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4305/comments
https://api.github.com/repos/huggingface/datasets/issues/4305/events
https://github.com/huggingface/datasets/pull/4305
1,231,099,934
PR_kwDODunzps43kt4P
4,305
Fixes FrugalScore
{ "avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4", "events_url": "https://api.github.com/users/moussaKam/events{/privacy}", "followers_url": "https://api.github.com/users/moussaKam/followers", "following_url": "https://api.github.com/users/moussaKam/following{/other_user}", "gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/moussaKam", "id": 28675016, "login": "moussaKam", "node_id": "MDQ6VXNlcjI4Njc1MDE2", "organizations_url": "https://api.github.com/users/moussaKam/orgs", "received_events_url": "https://api.github.com/users/moussaKam/received_events", "repos_url": "https://api.github.com/users/moussaKam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions", "type": "User", "url": "https://api.github.com/users/moussaKam" }
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4305). All of your documentation changes will be reflected on that endpoint.", "> predictions and references are swapped. Basically Frugalscore is commutative, however some tiny differences can occur if we swap the references a...
2022-05-10T12:44:06Z
2022-09-22T16:42:06Z
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4305.diff", "html_url": "https://github.com/huggingface/datasets/pull/4305", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4305.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4305" }
There are two minor modifications in this PR: 1) `predictions` and `references` are swapped. FrugalScore is essentially commutative; however, tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain exactly the results reported in the paper. 2) I switched to the dynamic padding that was used during training; forcing the padding to `max_length` introduces errors for reasons I haven't identified. @lhoestq
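For readers unfamiliar with the two padding strategies mentioned in point 2, a minimal sketch of the difference (the checkpoint name is a placeholder, not the one FrugalScore actually uses):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
batch = ["a short sentence", "a somewhat longer sentence with many more tokens"]

# Fixed padding: every example is padded out to max_length
# (the strategy this PR moves away from).
fixed = tokenizer(batch, padding="max_length", max_length=512, truncation=True)

# Dynamic padding: pad only up to the longest sequence in the batch,
# matching what was used during training.
dynamic = tokenizer(batch, padding="longest", truncation=True)

print(len(fixed["input_ids"][0]), len(dynamic["input_ids"][0]))  # 512 vs. batch max
```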
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4305/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4305/timeline
null
null
true