| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | labels (list) | state (string) | locked (bool) | milestone (dict) | comments (int64) | created_at (string) | updated_at (string) | closed_at (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool) | comments_text (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2178/comments | https://api.github.com/repos/huggingface/datasets/issues/2178/events | https://github.com/huggingface/datasets/pull/2178 | 852,215,058 | MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1 | 2,178 | Fix cast memory usage by using map on subtables | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 3 | 2021-04-07T09:30:50Z | 2021-04-20T14:20:44Z | 2021-04-13T09:28:16Z | null | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory-mapped datasets not to fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`.
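A minimal sketch of the general idea, not this PR's actual implementation (the helper name, batch size, and schema handling below are made up for illustration): casting subtable by subtable keeps only one chunk materialized in RAM at a time.

```python
import pyarrow as pa

def cast_in_chunks(table: pa.Table, target_schema: pa.Schema, batch_size: int = 10_000):
    # slice() is zero-copy, so only the cast of each subtable allocates new memory
    for offset in range(0, table.num_rows, batch_size):
        subtable = table.slice(offset, batch_size)
        yield subtable.cast(target_schema)
```

Each cast chunk can then be written back to an Arrow file on disk, which is in the spirit of what this PR does through `map`.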
edit: we'll use the same mechanism for `filter` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2178/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2178",
"merged_at": "2021-04-13T09:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2178"
} | true | [
"I addressed your comments about the docstrings and the output validation :)",
"I updated the bleurt mocking method and bleurt test is passing now.\r\nI also ran the slow tests and they are passing for bleurt.",
"Thanks @lhoestq and @albertvillanova !"
] |
https://api.github.com/repos/huggingface/datasets/issues/2922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2922/comments | https://api.github.com/repos/huggingface/datasets/issues/2922/events | https://github.com/huggingface/datasets/pull/2922 | 997,332,662 | PR_kwDODunzps4ry6-s | 2,922 | Fix conversion of multidim arrays in list to arrow | [] | closed | false | null | 0 | 2021-09-15T17:21:36Z | 2021-09-15T17:22:52Z | 2021-09-15T17:21:45Z | null | Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to Python lists before instantiating arrow arrays to work around this limitation.
However, in #2361 we started to keep numpy arrays in order to keep their dtypes.
It works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays.
In this PR I added two strategies:
- one that takes a list of multi-dim numpy arrays and returns an arrow array in an optimized way (more common case)
- one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. This strategy is simpler than just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed (a rough sketch of this fallback follows below).
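A minimal sketch of that fallback idea, assuming a list of 2-dimensional float arrays (simplified here to go through nested Python lists rather than the PR's actual conversion code):

```python
import numpy as np
import pyarrow as pa

# Arrow arrays are 1-dimensional, so multi-dim numpy arrays nested inside Python
# containers are converted to (nested) lists before building the Arrow array.
data = [np.ones((2, 3)), np.zeros((2, 3))]       # list of 2-dim numpy arrays
as_lists = [arr.tolist() for arr in data]        # nested lists are Arrow-friendly
arrow_array = pa.array(as_lists)                 # -> list<list<double>> Arrow array
```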
Fix https://github.com/huggingface/datasets/issues/2921 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2922/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2922.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2922",
"merged_at": "2021-09-15T17:21:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2922.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2922"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3199/comments | https://api.github.com/repos/huggingface/datasets/issues/3199/events | https://github.com/huggingface/datasets/pull/3199 | 1,042,860,935 | PR_kwDODunzps4uAVzQ | 3,199 | Bump huggingface_hub | [] | closed | false | null | 0 | 2021-11-02T21:29:10Z | 2021-11-14T01:48:11Z | 2021-11-02T21:41:40Z | null | huggingface_hub just released its first minor version, so we need to update the dependency
It was supposed to be part of 1.15.0 but I'm adding it for 1.15.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3199/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3199",
"merged_at": "2021-11-02T21:41:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3199"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/664/comments | https://api.github.com/repos/huggingface/datasets/issues/664/events | https://github.com/huggingface/datasets/issues/664 | 707,017,791 | MDU6SXNzdWU3MDcwMTc3OTE= | 664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | [] | closed | false | null | 4 | 2020-09-23T03:53:36Z | 2023-04-17T09:31:20Z | 2020-10-20T09:06:13Z | null |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download `squad.py` from your server and save it locally as `my_squad.py`, running the following raises an error.
```
train_dataset = datasets.load_dataset('./my_squad.py')
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-25a84b4d1581> in <module>
----> 1 train_dataset = nlp.load_dataset('./my_squad.py')
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
TypeError: 'NoneType' object is not callable
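For context, this error typically means the local script defines no class inheriting from `datasets.GeneratorBasedBuilder` (see the maintainer reply in the comments). A minimal, hypothetical skeleton of what `load_dataset` expects such a script to contain (all names and contents below are made up for illustration):

```python
import datasets

class MySquad(datasets.GeneratorBasedBuilder):   # load_dataset looks for a builder subclass
    def _info(self):
        return datasets.DatasetInfo(description="hypothetical local copy of SQuAD")

    def _split_generators(self, dl_manager):
        return []                                # real scripts return one SplitGenerator per split

    def _generate_examples(self, **kwargs):
        yield 0, {}                              # real scripts yield (key, example) pairs
```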
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/664/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/664/timeline | null | completed | null | null | false | [
"Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?",
"Hi @xixiaoyao did you manage to fix your issue ?",
"No activ... |
https://api.github.com/repos/huggingface/datasets/issues/4310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4310/comments | https://api.github.com/repos/huggingface/datasets/issues/4310/events | https://github.com/huggingface/datasets/issues/4310 | 1,231,319,815 | I_kwDODunzps5JZHMH | 4,310 | Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-05-10T15:12:53Z | 2022-05-11T16:46:31Z | 2022-05-11T16:46:31Z | null | ## Describe the bug
Loading a datasets with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine.
In the following steps we load parquet files but the same happens with pickle files. The problem seems to come from `fsspec` lib, I put in the environment info also `s3fs` and `fsspec` versions since I'm loading from an s3 bucket.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# path is the path to parquet files
data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
dataset = load_dataset("parquet", data_files=data_files, streaming=True)
```
## Expected results
A dataset object `datasets.dataset_dict.DatasetDict`
## Actual results
```
AttributeError Traceback (most recent call last)
<command-562086> in <module>
11
12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1679 if streaming:
1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token)
-> 1681 return builder_instance.as_streaming_dataset(
1682 split=split,
1683 use_auth_token=use_auth_token,
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)
904 )
905 self._check_manual_download(dl_manager)
--> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
907 # By default, return all splits
908 if split is None:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager)
30 if not self.config.data_files:
31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 32 data_files = dl_manager.download_and_extract(self.config.data_files)
33 if isinstance(data_files, (str, list, tuple)):
34 files = data_files
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)
798
799 def download_and_extract(self, url_or_urls):
--> 800 return self.extract(self.download(url_or_urls))
801
802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)
776
777 def extract(self, path_or_paths):
--> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
779 return urlpaths
780
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
312 num_proc = 1
313 if num_proc <= 1 or len(iterable) <= num_proc:
--> 314 mapped = [
315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
313 if num_proc <= 1 or len(iterable) <= num_proc:
314 mapped = [
--> 315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
317 ]
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
249 # Singleton first to spare some computation
250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 251 return function(data_struct)
252
253 # Reduce logging to keep things readable in multiprocessing with tqdm
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)
781 def _extract(self, urlpath: str) -> str:
782 urlpath = str(urlpath)
--> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
784 if protocol is None:
785 # no extraction
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token)
371 urlpath, kwargs = urlpath, {}
372 with fsspec.open(urlpath, **kwargs) as f:
--> 373 return _get_extraction_protocol_with_magic_number(f)
374
375
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f)
335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]:
336 """read the magic number from a file-like object and return the compression protocol"""
--> 337 prev_loc = f.loc
338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
339 f.seek(prev_loc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item)
337
338 def __getattr__(self, item):
--> 339 return getattr(self.f, item)
340
341 def __enter__(self):
AttributeError: '_io.BufferedReader' object has no attribute 'loc'
```
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
- `fsspec` version: 2021.08.1
- `s3fs` version: 2021.08.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4310/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4775/comments | https://api.github.com/repos/huggingface/datasets/issues/4775/events | https://github.com/huggingface/datasets/issues/4775 | 1,324,136,486 | I_kwDODunzps5O7Lgm | 4,775 | Streaming not supported in Theivaprakasham/wildreceipt | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 1 | 2022-08-01T09:46:17Z | 2022-08-01T10:30:29Z | 2022-08-01T10:30:29Z | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4775/timeline | null | completed | null | null | false | [
"Thanks for reporting @NitishkKarra.\r\n\r\nThe root source of the issue is that streaming mode is not supported out-of-the-box for that dataset, because it contains a TAR file.\r\n\r\nWe have opened a discussion in the corresponding Hub dataset page, pointing out this issue: https://huggingface.co/datasets/Theivap... |
https://api.github.com/repos/huggingface/datasets/issues/429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/429/comments | https://api.github.com/repos/huggingface/datasets/issues/429/events | https://github.com/huggingface/datasets/pull/429 | 664,412,137 | MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5 | 429 | mlsum | [] | closed | false | null | 6 | 2020-07-23T11:52:39Z | 2020-07-31T11:46:20Z | 2020-07-31T11:46:20Z | null | Hello,
The tests for load_real_data fail: since there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I am submitting this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/429/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/429",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/429"
} | true | [
"Thanks @RachelKer for this PR.\r\n\r\nI think the dummy_data structure does not also match. In the `_split_generator` you have something like `os.path.join(downloaded_files[\"validation\"], lang+'_val.jsonl')` but in you dummy_data you have `os.path.join(downloaded_files[\"validation\"], lang+\"_val.zip\", lang+'... |
https://api.github.com/repos/huggingface/datasets/issues/3632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3632/comments | https://api.github.com/repos/huggingface/datasets/issues/3632/events | https://github.com/huggingface/datasets/issues/3632 | 1,115,027,185 | I_kwDODunzps5Cdfbx | 3,632 | Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-01-26T13:35:37Z | 2022-02-10T06:58:11Z | 2022-02-10T06:58:11Z | null | ## Describe the bug
The dataset links are no longer valid for CC-100. It seems that the website that was hosting these files is no longer accessible, and therefore this dataset has become unusable.
Check out the dataset [homepage](http://data.statmt.org/cc-100/) which isn't accessible.
Also the URLs for dataset file per language isn't accessible: http://data.statmt.org/cc-100/<language code here>.txt.xz (language codes: am, sr, ka, etc.)
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cc100", "ka")
```
It throws a 503 error.
## Expected results
It should successfully download and load the dataset, but it throws an exception because the dataset files are no longer accessible.
## Environment info
Run from google colab. Just installed the library using pip:
```!pip install -U datasets```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3632/timeline | null | completed | null | null | false | [
"Hi @AnzorGozalishvili,\r\n\r\nMaybe their site was temporarily down, but it seems to work fine now.\r\n\r\nCould you please try again and confirm if the problem persists? ",
"Hi @albertvillanova \r\nI checked and it works. \r\nIt seems that it was really temporarily down.\r\nThanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2280/comments | https://api.github.com/repos/huggingface/datasets/issues/2280/events | https://github.com/huggingface/datasets/pull/2280 | 870,780,431 | MDExOlB1bGxSZXF1ZXN0NjI1OTE2Mzcy | 2,280 | Fixed typo seperate->separate | [] | closed | false | null | 2 | 2021-04-29T08:55:46Z | 2021-04-29T16:41:22Z | 2021-04-29T16:41:16Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2280/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2280.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2280",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2280.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2280"
} | true | [
"Hi ! Thanks for the fix :)\r\nThe CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.\r\nWe'll wait for #2286 to be merged to master first if you don't mind",
"The PR has been merged ! Feel free to merge master into your branch to fix the CI"
] | |
https://api.github.com/repos/huggingface/datasets/issues/5022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5022/comments | https://api.github.com/repos/huggingface/datasets/issues/5022/events | https://github.com/huggingface/datasets/pull/5022 | 1,385,432,859 | PR_kwDODunzps4_kxYe | 5,022 | Fix languages of X-CSQA configs in xcsr dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 4 | 2022-09-26T05:13:39Z | 2022-09-26T12:27:20Z | 2022-09-26T10:57:30Z | null | Fix #5017.
CC: @yangxqiao, @yuchenlin | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5022/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"merged_at": "2022-09-26T10:57:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4... |
https://api.github.com/repos/huggingface/datasets/issues/2015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2015/comments | https://api.github.com/repos/huggingface/datasets/issues/2015/events | https://github.com/huggingface/datasets/pull/2015 | 825,942,108 | MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0 | 2,015 | Fix ipython function creation in tests | [] | closed | false | null | 0 | 2021-03-09T13:36:59Z | 2021-03-09T14:06:04Z | 2021-03-09T14:06:03Z | null | The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created.
Fix #2010 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2015/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2015",
"merged_at": "2021-03-09T14:06:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2015"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5283/comments | https://api.github.com/repos/huggingface/datasets/issues/5283/events | https://github.com/huggingface/datasets/pull/5283 | 1,460,291,003 | PR_kwDODunzps5De5M1 | 5,283 | Release: 2.6.2 | [] | closed | false | null | 1 | 2022-11-22T17:36:24Z | 2022-11-22T17:50:12Z | 2022-11-22T17:47:02Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5283/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5283",
"merged_at": "2022-11-22T17:47:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5283"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5412/comments | https://api.github.com/repos/huggingface/datasets/issues/5412/events | https://github.com/huggingface/datasets/issues/5412 | 1,524,250,269 | I_kwDODunzps5a2jad | 5,412 | load_dataset() cannot find dataset_info.json with multiple training runs in parallel | [] | closed | false | null | 4 | 2023-01-08T00:44:32Z | 2023-01-19T20:28:43Z | 2023-01-19T20:28:43Z | null | ### Describe the bug
I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run completes with no issue. However, when I start another run on another GPU, the following code throws this error.
If there is a workaround to ignore the cache, I think that would solve my problem too.
I am using datasets version 2.8.0.
### Steps to reproduce the bug
1. Start a training run on GPU 0, loading the dataset with:
```
load_dataset(
"json",
data_files=tr_dataset_path,
split=f"train",
download_mode="force_redownload",
)
```
2. While GPU 0 is training, start an identical run on GPU 1. GPU 1 will produce the following error:
```
Traceback (most recent call last):
File "/local-scratch1/data/mt/code/qq/train.py", line 198, in <module>
main()
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/local-scratch1/data/mt/code/qq/train.py", line 113, in main
load_dataset(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1734, in load_dataset
builder_instance = load_dataset_builder(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1518, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/builder.py", line 366, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/info.py", line 313, in from_directory
with fs.open(path_join(dataset_info_dir, config.DATASET_INFO_FILENAME), "r", encoding="utf-8") as f:
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1094, in open
self.open(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1106, in open
f = self._open(
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 175, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 273, in __init__
self._open()
File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 278, in _open
self.f = open(self.path, mode=self.mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/username/.cache/huggingface/datasets/json/default-43d06a4aedb25e6d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/dataset_info.json'
```
### Expected behavior
Expected behavior: 2nd GPU training run should run the same as 1st GPU training run.
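A possible workaround along the lines suggested in the comments below is to point each parallel run at its own cache directory; the path and the `gpu_id` variable here are hypothetical:

```python
from datasets import load_dataset

# Each run gets its own cache, so parallel runs no longer race on the same
# partially written dataset_info.json while the dataset is being prepared.
dataset = load_dataset(
    "json",
    data_files=tr_dataset_path,               # same files as in the snippet above
    split="train",
    cache_dir=f"/tmp/hf_cache_gpu_{gpu_id}",  # hypothetical per-run cache path
)
```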
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 9.0.0
- Pandas version: 1.5.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5412/timeline | null | completed | null | null | false | [
"Hi ! It fails because the dataset is already being prepared by your first run. I'd encourage you to prepare your dataset before using it for multiple trainings.\r\n\r\nYou can also specify another cache directory by passing `cache_dir=` to `load_dataset()`.",
"Thank you! What do you mean by prepare it beforehand... |
https://api.github.com/repos/huggingface/datasets/issues/5199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5199/comments | https://api.github.com/repos/huggingface/datasets/issues/5199/events | https://github.com/huggingface/datasets/pull/5199 | 1,434,818,836 | PR_kwDODunzps5CJSv1 | 5,199 | Deprecate dummy data generation command | [] | closed | false | null | 1 | 2022-11-03T15:05:54Z | 2022-11-04T14:01:50Z | 2022-11-04T13:59:47Z | null | Deprecate the `dummy_data` CLI command. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5199/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5199",
"merged_at": "2022-11-04T13:59:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5199"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/367/comments | https://api.github.com/repos/huggingface/datasets/issues/367/events | https://github.com/huggingface/datasets/pull/367 | 654,012,984 | MDExOlB1bGxSZXF1ZXN0NDQ2ODIxNTAz | 367 | Update Xtreme to add PAWS-X es | [] | closed | false | null | 0 | 2020-07-09T12:14:37Z | 2020-07-09T12:37:11Z | 2020-07-09T12:37:10Z | null | This PR adds the `PAWS-X.es` in the Xtreme dataset #362 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/367/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/367/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/367",
"merged_at": "2020-07-09T12:37:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/367"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/648/comments | https://api.github.com/repos/huggingface/datasets/issues/648/events | https://github.com/huggingface/datasets/issues/648 | 704,753,123 | MDU6SXNzdWU3MDQ3NTMxMjM= | 648 | offset overflow when multiprocessing batched map on large datasets. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2020-09-19T02:15:11Z | 2020-09-19T16:47:07Z | 2020-09-19T16:46:31Z | null | It only happens when "multiprocessing", "batched", and a "large dataset" are combined at the same time.
```
def bprocess(examples):
examples['len'] = []
for text in examples['text']:
examples['len'].append(len(text))
return examples
wiki.map(bprocess, batched=True, num_proc=8)  # batched map with multiprocessing triggers the overflow
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single
batch = self[i : i + batch_size]
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__
format_kwargs=self._format_kwargs,
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem
data_subset = self._data.take(indices_array)
File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take
return call_function('take', [data, indices], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
"""
The above exception was the direct cause of the following exception:
ArrowInvalid Traceback (most recent call last)
in
30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train']
31 print('load/create data from OpenWebText Corpus for ELECTRA')
---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow")
33 dsets.append(e_owt)
34
~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs)
126 writer_batch_size=10**4,
127 num_proc=num_proc,
--> 128 **kwargs
129 )
130
~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs)
21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow'
22 if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name)
---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs)
24
25 @patch
~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/datasets/src/datasets/arrow_dataset.py in (.0)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
ArrowInvalid: offset overflow while concatenating arrays
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/648/timeline | null | completed | null | null | false | [
"This should be fixed with #645 ",
"Feel free to re-open if it still occurs"
] |
https://api.github.com/repos/huggingface/datasets/issues/1840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/events | https://github.com/huggingface/datasets/issues/1840 | 803,560,039 | MDU6SXNzdWU4MDM1NjAwMzk= | 1,840 | Add common voice | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | 11 | 2021-02-08T13:21:05Z | 2022-03-20T15:23:40Z | 2021-03-15T05:56:21Z | null | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | null | completed | null | null | false | [
"I have started working on adding this dataset.",
"Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the downloa... |
https://api.github.com/repos/huggingface/datasets/issues/3484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3484/comments | https://api.github.com/repos/huggingface/datasets/issues/3484/events | https://github.com/huggingface/datasets/issues/3484 | 1,088,910,402 | I_kwDODunzps5A53RC | 3,484 | make shape verification to use ArrayXD instead of nested lists for map | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2021-12-27T02:16:02Z | 2022-01-05T13:54:03Z | null | null | As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO making shape verification use ArrayXD instead of nested lists for map can help users avoid unnecessary casts. I notice datasets has done something special for `input_ids` and `attention_mask`, which also becomes unnecessary after this feature is added. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3484/timeline | null | null | null | null | false | [
"Hi! \r\n\r\nYes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic."
] |
https://api.github.com/repos/huggingface/datasets/issues/2039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2039/comments | https://api.github.com/repos/huggingface/datasets/issues/2039/events | https://github.com/huggingface/datasets/pull/2039 | 830,047,652 | MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3 | 2,039 | Doc2dial rc | [] | closed | false | null | 0 | 2021-03-12T11:56:28Z | 2021-03-12T15:32:36Z | 2021-03-12T15:32:36Z | null | Added fix to handle the last turn that is a user turn. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2039/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/2039.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2039",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2039.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2039"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1853/comments | https://api.github.com/repos/huggingface/datasets/issues/1853/events | https://github.com/huggingface/datasets/pull/1853 | 804,791,166 | MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4 | 1,853 | Configure library root logger at the module level | [] | closed | false | null | 0 | 2021-02-09T18:11:12Z | 2021-02-10T12:32:34Z | 2021-02-10T12:32:34Z | null | Configure library root logger at the datasets.logging module level (singleton-like).
By doing it this way:
- we are sure configuration is done only once: module-level code is only run once
- no need for a global variable
- no need for a threading lock | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1853/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1853",
"merged_at": "2021-02-10T12:32:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1853"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1910/comments | https://api.github.com/repos/huggingface/datasets/issues/1910/events | https://github.com/huggingface/datasets/pull/1910 | 811,697,108 | MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3 | 1,910 | Adding CoNLLpp dataset. | [] | closed | false | null | 1 | 2021-02-19T05:12:30Z | 2021-03-04T22:02:47Z | 2021-03-04T22:02:47Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1910/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1910",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1910"
} | true | [
"It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch."
] | |
https://api.github.com/repos/huggingface/datasets/issues/4996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4996/comments | https://api.github.com/repos/huggingface/datasets/issues/4996/events | https://github.com/huggingface/datasets/issues/4996 | 1,379,345,161 | I_kwDODunzps5SNyMJ | 4,996 | Dataset Viewer issue for Jean-Baptiste/wikiner_fr | [] | closed | false | null | 2 | 2022-09-20T12:32:07Z | 2022-09-27T12:35:44Z | 2022-09-27T12:35:44Z | null | ### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__
for key, example in self._iter():
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter
yield from ex_iterable
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples
dataset = Dataset.load_from_disk(filepath)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk
with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file:
FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
```
Is it an error with the dataset script, or the data itself, @huggingface/datasets?
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main
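As the comments below point out, the script's use of `Dataset.load_from_disk` is what breaks streaming. One suggested direction, sketched here with a hypothetical local path, is to load the saved dataset locally and push it to the Hub so it is stored as streamable Parquet:

```python
from datasets import Dataset

ds = Dataset.load_from_disk("path/to/extracted/data/train")  # hypothetical local path
ds.push_to_hub("Jean-Baptiste/wikiner_fr")                   # uploads Parquet shards to the Hub
```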
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4996/timeline | null | completed | null | null | false | [
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: ... |
https://api.github.com/repos/huggingface/datasets/issues/2013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2013/comments | https://api.github.com/repos/huggingface/datasets/issues/2013/events | https://github.com/huggingface/datasets/pull/2013 | 825,694,305 | MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx | 2,013 | Add Cryptonite dataset | [] | closed | false | null | 0 | 2021-03-09T10:32:11Z | 2021-03-09T19:27:07Z | 2021-03-09T19:27:06Z | null | cc @aviaefrat who's the original author of the dataset & paper, see https://github.com/aviaefrat/cryptonite | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2013/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2013.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2013",
"merged_at": "2021-03-09T19:27:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2013.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2013"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3183/comments | https://api.github.com/repos/huggingface/datasets/issues/3183/events | https://github.com/huggingface/datasets/pull/3183 | 1,039,761,120 | PR_kwDODunzps4t3Dag | 3,183 | Add missing docstring to DownloadConfig | [] | closed | false | null | 0 | 2021-10-29T16:56:35Z | 2021-11-02T10:25:38Z | 2021-11-02T10:25:37Z | null | Document the `use_etag` and `num_proc` attributes in `DownloadConfig`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3183/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3183.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3183",
"merged_at": "2021-11-02T10:25:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3183.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3183"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/62 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/62/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/62/comments | https://api.github.com/repos/huggingface/datasets/issues/62/events | https://github.com/huggingface/datasets/pull/62 | 614,630,830 | MDExOlB1bGxSZXF1ZXN0NDE1MTQ1NDAx | 62 | [Cached Path] Better error message | [] | closed | false | null | 0 | 2020-05-08T09:39:47Z | 2020-05-08T09:45:47Z | 2020-05-08T09:45:47Z | null | IMO returning `None` in this function only leads to confusion and is never helpful. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/62/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/62/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/62.diff",
"html_url": "https://github.com/huggingface/datasets/pull/62",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/62.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/62"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/624/comments | https://api.github.com/repos/huggingface/datasets/issues/624/events | https://github.com/huggingface/datasets/issues/624 | 700,541,628 | MDU6SXNzdWU3MDA1NDE2Mjg= | 624 | Add learningq dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2020-09-13T10:20:27Z | 2020-09-14T09:50:02Z | null | null | Hi,
Thank you again for this amazing repo.
Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/624/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1058/comments | https://api.github.com/repos/huggingface/datasets/issues/1058/events | https://github.com/huggingface/datasets/pull/1058 | 756,332,704 | MDExOlB1bGxSZXF1ZXN0NTMxODk0Mjc0 | 1,058 | added paws-x dataset | [] | closed | false | null | 0 | 2020-12-03T16:06:01Z | 2020-12-04T13:46:05Z | 2020-12-04T13:46:05Z | null | Added paws-x dataset. Updating README and tags in the dataset card in a while | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1058/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1058.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1058",
"merged_at": "2020-12-04T13:46:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1058.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1058"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5001/comments | https://api.github.com/repos/huggingface/datasets/issues/5001/events | https://github.com/huggingface/datasets/pull/5001 | 1,379,844,820 | PR_kwDODunzps4_TBWa | 5,001 | Support loading XML datasets | [] | open | false | null | 3 | 2022-09-20T18:42:58Z | 2022-11-01T12:44:42Z | null | null | CC: @davanstrien | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5001/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5001.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5001",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5001.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5001"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5001). All of your documentation changes will be reflected on that endpoint.",
"> CC: @davanstrien\r\n\r\nI should have some time to look at this on Friday :) ",
"@albertvillanova I've tried this with a few different XML data... |
https://api.github.com/repos/huggingface/datasets/issues/1500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1500/comments | https://api.github.com/repos/huggingface/datasets/issues/1500/events | https://github.com/huggingface/datasets/pull/1500 | 763,479,305 | MDExOlB1bGxSZXF1ZXN0NTM3OTM0OTI1 | 1,500 | adding polsum | [] | closed | false | null | 1 | 2020-12-12T09:05:29Z | 2020-12-18T09:43:43Z | 2020-12-18T09:43:43Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1500/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1500.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1500",
"merged_at": "2020-12-18T09:43:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1500.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1500"
} | true | [
"@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated."
] | |
https://api.github.com/repos/huggingface/datasets/issues/467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/467/comments | https://api.github.com/repos/huggingface/datasets/issues/467/events | https://github.com/huggingface/datasets/pull/467 | 671,580,010 | MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy | 467 | DOCS: Fix typo | [] | closed | false | null | 1 | 2020-08-02T08:59:37Z | 2020-08-02T13:52:27Z | 2020-08-02T09:18:54Z | null | Fix typo from dictionnary -> dictionary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/467/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/467",
"merged_at": "2020-08-02T09:18:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/467"
} | true | [
"Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2669/comments | https://api.github.com/repos/huggingface/datasets/issues/2669/events | https://github.com/huggingface/datasets/issues/2669 | 946,982,998 | MDU6SXNzdWU5NDY5ODI5OTg= | 2,669 | Metric kwargs are not passed to underlying external metric f1_score | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-18T08:32:31Z | 2021-07-18T18:36:05Z | 2021-07-18T11:19:04Z | null | ## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to reproduce the bug
```python
import datasets
f1 = datasets.load_metric("f1", keep_in_memory=True, average="min")
f1.add_batch(predictions=[0,2,3], references=[1, 2, 3])
f1.compute()
```
## Expected results
No error, because `average="min"` should be passed correctly to f1_score in sklearn.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute
"f1": f1_score(
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score
return fbeta_score(y_true, y_pred, beta=1, labels=labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score
_, _, f, _ = precision_recall_fscore_support(y_true, y_pred,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support
labels = _check_set_wise_labels(y_true, y_pred, average, labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels
raise ValueError("Target is %s but average='binary'. Please "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```
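For illustration only (this snippet is not part of the original report): the metric's keyword arguments are consumed by `compute()` rather than `load_metric()`, and sklearn does not accept `"min"` as an `average` value, so a call along the following lines is expected to work.
```python
# Hedged sketch: pass the sklearn keyword argument to compute() instead of load_metric(),
# and use a valid `average` value such as "macro" ("min" is not accepted by sklearn).
import datasets

f1 = datasets.load_metric("f1", keep_in_memory=True)
f1.add_batch(predictions=[0, 2, 3], references=[1, 2, 3])
print(f1.compute(average="macro"))
```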
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2669/timeline | null | completed | null | null | false | [
"Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst, note that `\"min\"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{\"micro\", \"macro\", \"samples\", \"weigh... |
https://api.github.com/repos/huggingface/datasets/issues/3408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3408/comments | https://api.github.com/repos/huggingface/datasets/issues/3408/events | https://github.com/huggingface/datasets/issues/3408 | 1,075,642,915 | I_kwDODunzps5AHQIj | 3,408 | Typo in Dataset viewer error message | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2021-12-09T14:34:02Z | 2021-12-22T11:02:53Z | 2021-12-22T11:02:53Z | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource"

Am I the one who added this dataset ?
N/A
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3408/timeline | null | completed | null | null | false | [
"Fixed, thanks\r\n<img width=\"661\" alt=\"Capture d’écran 2021-12-22 à 12 02 30\" src=\"https://user-images.githubusercontent.com/1676121/147082881-cf700e8d-0511-4431-b214-d6cf8137db10.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4285/comments | https://api.github.com/repos/huggingface/datasets/issues/4285/events | https://github.com/huggingface/datasets/pull/4285 | 1,226,374,831 | PR_kwDODunzps43VtEa | 4,285 | Update LexGLUE README.md | [] | closed | false | null | 1 | 2022-05-05T08:36:50Z | 2022-05-05T13:39:04Z | 2022-05-05T13:33:35Z | null | Update the leaderboard based on the latest results presented in the ACL 2022 version of the article. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4285/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4285.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4285",
"merged_at": "2022-05-05T13:33:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4285.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4285"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1439/comments | https://api.github.com/repos/huggingface/datasets/issues/1439/events | https://github.com/huggingface/datasets/pull/1439 | 760,968,410 | MDExOlB1bGxSZXF1ZXN0NTM1NzA4NDU1 | 1,439 | Update README.md | [] | closed | false | null | 0 | 2020-12-10T06:57:01Z | 2020-12-11T15:22:53Z | 2020-12-11T15:22:53Z | null | 1k-10k -> 1k-1M
3 separate configs are available with min. 1K and max. 211.3k examples | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1439/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1439.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1439",
"merged_at": "2020-12-11T15:22:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1439.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1439"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6014/comments | https://api.github.com/repos/huggingface/datasets/issues/6014/events | https://github.com/huggingface/datasets/issues/6014 | 1,798,213,816 | I_kwDODunzps5rLpC4 | 6,014 | Request to Share/Update Dataset Viewer Code | [] | open | false | null | 6 | 2023-07-11T06:36:09Z | 2023-07-12T14:18:49Z | null | null |
Overview:
The repository (huggingface/datasets-viewer) was recently archived and when I tried to run the code, there was the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to lack of documentation of that attribute.
Request:
I kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code.
Thank you for considering this request, and I look forward to your response. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6014/timeline | null | null | null | null | false | [
"Hi ! The huggingface/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?",
"I think these parts are outdated:\r\n\r\n* https://github.com/huggingface/da... |
https://api.github.com/repos/huggingface/datasets/issues/3021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3021/comments | https://api.github.com/repos/huggingface/datasets/issues/3021/events | https://github.com/huggingface/datasets/pull/3021 | 1,015,444,094 | PR_kwDODunzps4spzJU | 3,021 | Support loading dataset from multiple zipped CSV data files | [] | closed | false | null | 0 | 2021-10-04T17:33:57Z | 2021-10-06T08:36:46Z | 2021-10-06T08:36:45Z | null | Fix partially #3018.
CC: @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3021/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3021",
"merged_at": "2021-10-06T08:36:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3021"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5987/comments | https://api.github.com/repos/huggingface/datasets/issues/5987/events | https://github.com/huggingface/datasets/issues/5987 | 1,773,047,909 | I_kwDODunzps5prpBl | 5,987 | Why max_shard_size is not supported in load_dataset and passed to download_and_prepare | [] | closed | false | null | 5 | 2023-06-25T04:19:13Z | 2023-06-29T16:06:08Z | 2023-06-29T16:06:08Z | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can do is skip `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead.
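As a rough sketch of that workaround (the dataset name below is only an example), the builder API exposes `max_shard_size` directly:
```python
# Illustrative only: go through the builder so that max_shard_size can be passed,
# since load_dataset does not forward it.
from datasets import load_dataset_builder

builder = load_dataset_builder("imdb")  # example dataset
builder.download_and_prepare(max_shard_size="500MB")
ds = builder.as_dataset(split="train")
```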
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
### Expected behavior
Users can define the max shard size.
### Environment info
datasets==2.13.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5987/timeline | null | completed | null | null | false | [
"Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.",
"In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. (On azure... |
https://api.github.com/repos/huggingface/datasets/issues/5659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5659/comments | https://api.github.com/repos/huggingface/datasets/issues/5659/events | https://github.com/huggingface/datasets/issues/5659 | 1,635,447,540 | I_kwDODunzps5hevL0 | 5,659 | [Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files | [] | closed | false | null | 9 | 2023-03-22T10:07:33Z | 2023-04-28T03:25:39Z | 2023-04-07T08:51:28Z | null | ### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type.
The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71
However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing:
```
pip install soundfile==0.12.1
```
Then:
```python
>>> import soundfile
>>> soundfile.__libsndfile_version__
```
<details>
<summary> Traceback (most recent call last): </summary>
```
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module>
import _soundfile_data # ImportError if this doesn't exist
ModuleNotFoundError: No module named '_soundfile_data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module>
raise OSError('sndfile library not found using ctypes.util.find_library')
OSError: sndfile library not found using ctypes.util.find_library
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module>
_snd = _ffi.dlopen(_explicit_libname)
OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory
```
</details>
Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as:
```
pip install --upgrade soundfile
sudo apt install libsndfile1
```
We can now import `soundfile`:
```python
>>> import soundfile
>>> soundfile.__version__
'0.12.1'
>>> soundfile.__libsndfile_version__
'1.0.28'
```
We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147
But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138
Updating/upgrading the `libsndfile` doesn't change this:
```
sudo apt-get update
sudo apt-get upgrade
```
Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files.
Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues.
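As a minimal illustration (not part of the original report), one can at least fail fast by checking the bundled `libsndfile` version before trying to decode mp3 files, mirroring the `>=1.1.0` requirement mentioned above:
```python
# Hedged sketch: verify at runtime that libsndfile is new enough for mp3 decoding.
import soundfile
from packaging import version

if version.parse(soundfile.__libsndfile_version__) < version.parse("1.1.0"):
    raise RuntimeError(
        f"libsndfile {soundfile.__libsndfile_version__} is too old for mp3 decoding "
        "(1.1.0 or newer is required)."
    )
```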
### Steps to reproduce the bug
Environment described above. Loading mp3 files:
```python
from datasets import load_dataset
common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
print(next(iter(common_voice_es)))
```
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 2
1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
----> 2 print(next(iter(common_voice_es)))
File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self)
937 for key, example in ex_iterable:
938 if self.features:
939 # `IterableDataset` automatically fills missing columns with None.
940 # This is done with `_apply_feature_types_on_example`.
--> 941 yield _apply_feature_types_on_example(
942 example, self.features, token_per_repo_id=self._token_per_repo_id
943 )
944 else:
945 yield example
File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id)
698 encoded_example = features.encode_example(example)
699 # Decode example for Audio feature, e.g.
--> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
701 return decoded_example
File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
-> 1864 return {
1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
1864 return {
-> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id)
1305 elif isinstance(schema, (Audio, Image)):
1306 # we pass the token to read and decode files from private repositories in streaming mode
1307 if obj is not None and schema.decode:
-> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1309 return obj
File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id)
162 raise RuntimeError(
163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, "
164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
165 )
166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3":
--> 167 raise RuntimeError(
168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, "
169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
170 )
172 if file is None:
173 token_per_repo_id = token_per_repo_id or {}
RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`.
```
### Expected behavior
Load mp3 files!
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Soundfile version: 0.12.1
- Libsndfile version: 1.0.28 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5659/timeline | null | completed | null | null | false | [
"cc @polinaeterna @lhoestq ",
"@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume... |
https://api.github.com/repos/huggingface/datasets/issues/4061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4061/comments | https://api.github.com/repos/huggingface/datasets/issues/4061/events | https://github.com/huggingface/datasets/issues/4061 | 1,186,317,071 | I_kwDODunzps5GtcMP | 4,061 | Loading cnn_dailymail dataset failed | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"descript... | closed | false | null | 1 | 2022-03-30T11:29:02Z | 2022-03-30T13:36:14Z | 2022-03-30T13:36:14Z | null | ## Describe the bug
I wanted to load the cnn_dailymail dataset from Hugging Face datasets in JupyterLab, but I am getting a `NotADirectoryError: [Errno 20] Not a directory` error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
## Expected results
load `cnn_dailymail` dataset succesfully
## Actual results
failed to load and get error
> NotADirectoryError: [Errno 20] Not a directory
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Ubuntu-20.04
- Python version: 3.9.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4061/timeline | null | completed | null | null | false | [
"Hi @Arij-Aladel, thanks for reporting.\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it in our 2.0.0 release. See:\r\n- #3787 \r\n\r\nPlease, update your `datasets` version:\r\n```\r\npip install -... |
https://api.github.com/repos/huggingface/datasets/issues/6016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6016/comments | https://api.github.com/repos/huggingface/datasets/issues/6016/events | https://github.com/huggingface/datasets/pull/6016 | 1,798,968,033 | PR_kwDODunzps5VNEvn | 6,016 | Dataset string representation enhancement | [] | open | false | null | 2 | 2023-07-11T13:38:25Z | 2023-07-16T10:26:18Z | null | null | my attempt at #6010
not sure if this is the right way to go about it, I will wait for your feedback | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6016/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6016",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6016"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6016). All of your documentation changes will be reflected on that endpoint.",
"It we could have something similar to Polars, that would be great.\r\n\r\nThis is what Polars outputs: \r\n* `__repr__`/`__str__` :\r\n```\r\nshape... |
https://api.github.com/repos/huggingface/datasets/issues/2646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2646/comments | https://api.github.com/repos/huggingface/datasets/issues/2646/events | https://github.com/huggingface/datasets/issues/2646 | 944,379,954 | MDU6SXNzdWU5NDQzNzk5NTQ= | 2,646 | downloading of yahoo_answers_topics dataset failed | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-14T12:31:05Z | 2022-08-04T08:28:24Z | 2022-08-04T08:28:24Z | null | ## Describe the bug
I get a `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` error when I try to download the yahoo_answers_topics dataset.
## Steps to reproduce the bug
# Sample code to reproduce the bug
self.dataset = load_dataset(
    'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2646/timeline | null | completed | null | null | false | [
"Hi ! I just tested and it worked fine today for me.\r\n\r\nI think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996 \r\n\r\nFeel free to try again today, now that the quota wa... |
https://api.github.com/repos/huggingface/datasets/issues/1447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1447/comments | https://api.github.com/repos/huggingface/datasets/issues/1447/events | https://github.com/huggingface/datasets/pull/1447 | 761,067,955 | MDExOlB1bGxSZXF1ZXN0NTM1NzkxODk1 | 1,447 | Update step-by-step guide for windows | [] | closed | false | null | 1 | 2020-12-10T09:30:59Z | 2020-12-10T12:18:47Z | 2020-12-10T09:31:14Z | null | Update step-by-step guide for windows to give an alternative to `make style`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1447/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1447",
"merged_at": "2020-12-10T09:31:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1447"
} | true | [
"Hi @thomwolf, for simplification purposes, I think you could remove the \"`pip install ...`\" steps from this commit, 'cause these deps (black, isort, flake8) are already installed on `pip install -e \".[dev]\"` on the [Start by preparing your environment](https://github.com/huggingface/datasets/blob/704107f924e74... |
https://api.github.com/repos/huggingface/datasets/issues/902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/902/comments | https://api.github.com/repos/huggingface/datasets/issues/902/events | https://github.com/huggingface/datasets/pull/902 | 752,345,739 | MDExOlB1bGxSZXF1ZXN0NTI4Njg3NTYw | 902 | Follow cache_dir parameter to gcs downloader | [] | closed | false | null | 0 | 2020-11-27T16:02:06Z | 2020-11-29T22:48:54Z | 2020-11-29T22:48:53Z | null | As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions).
Fix #900 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/902/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/902/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/902.diff",
"html_url": "https://github.com/huggingface/datasets/pull/902",
"merged_at": "2020-11-29T22:48:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/902.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/902"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5891/comments | https://api.github.com/repos/huggingface/datasets/issues/5891/events | https://github.com/huggingface/datasets/pull/5891 | 1,722,384,135 | PR_kwDODunzps5RKchn | 5,891 | Make split slicing consisten with list slicing | [] | open | false | null | 2 | 2023-05-23T16:04:33Z | 2023-05-23T16:11:12Z | null | null | Fix #1774, fix #5875
TODO: a test | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5891/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5891). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/1920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1920/comments | https://api.github.com/repos/huggingface/datasets/issues/1920/events | https://github.com/huggingface/datasets/pull/1920 | 812,628,220 | MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2 | 1,920 | Fix save_to_disk issue | [] | closed | false | null | 2 | 2021-02-20T14:22:39Z | 2021-02-22T10:30:11Z | 2021-02-22T10:30:11Z | null | Fixes #1919
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1920/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1920.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1920",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1920.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1920"
} | true | [
"So I was curious why the issue reported at #1919 wasn't caught in [this test](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/tests/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\... |
https://api.github.com/repos/huggingface/datasets/issues/2969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2969/comments | https://api.github.com/repos/huggingface/datasets/issues/2969/events | https://github.com/huggingface/datasets/issues/2969 | 1,007,217,867 | I_kwDODunzps48COzL | 2,969 | medical-dialog error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-25T23:08:44Z | 2021-10-11T07:46:42Z | 2021-10-11T07:46:42Z | null | ## Describe the bug
A clear and concise description of what the bug is.
When I attempt to download the Hugging Face dataset medical_dialog, it errors out midway through.
## Steps to reproduce the bug
```python
raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English")
```
## Expected results
A clear and concise description of the expected results.
No error
## Actual results
```
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}]
```
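For illustration (anticipating the maintainer reply quoted further down), the verification step can be skipped while the dataset metadata is being fixed:
```python
# Hedged workaround sketch: skip split-size verification for medical_dialog.
from datasets import load_dataset

raw_datasets = load_dataset(
    "medical_dialog",
    "en",
    split="train",
    ignore_verifications=True,
    data_dir="./Medical-Dialogue-Dataset-English",
)
```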
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.21.1
- Platform: colab
- Python version: colab 3.7
- PyArrow version: N/A
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2969/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2969/timeline | null | completed | null | null | false | [
"Hi @smeyerhot, thanks for reporting.\r\n\r\nYou are right: there is an issue with the dataset metadata. I'm fixing it.\r\n\r\nIn the meantime, you can circumvent the issue by passing `ignore_verifications=True`:\r\n```python\r\nraw_datasets = load_dataset(\"medical_dialog\", \"en\", split=\"train\", download_mode=... |
https://api.github.com/repos/huggingface/datasets/issues/1250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1250/comments | https://api.github.com/repos/huggingface/datasets/issues/1250/events | https://github.com/huggingface/datasets/pull/1250 | 758,491,704 | MDExOlB1bGxSZXF1ZXN0NTMzNjU2NTI4 | 1,250 | added Nergrit dataset | [] | closed | false | null | 0 | 2020-12-07T13:06:12Z | 2020-12-08T14:33:29Z | 2020-12-08T14:33:29Z | null | Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR is only for the Named Entity Recognition. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1250/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1250",
"merged_at": "2020-12-08T14:33:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1250"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1242/comments | https://api.github.com/repos/huggingface/datasets/issues/1242/events | https://github.com/huggingface/datasets/pull/1242 | 758,370,579 | MDExOlB1bGxSZXF1ZXN0NTMzNTU0MzAx | 1,242 | adding bprec | [] | closed | false | null | 2 | 2020-12-07T10:15:49Z | 2020-12-08T14:33:49Z | 2020-12-08T14:33:48Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1242/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1242.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1242",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1242.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1242"
} | true | [
"looks like this PR includes changes to many files other than the ones related to bprec\r\nCan you create another branch and another PR please ?",
"> looks like this PR includes changes to many files other than the ones related to bprec\r\n> Can you create another branch and another PR please ?\r\n\r\nYes, I real... | |
https://api.github.com/repos/huggingface/datasets/issues/3820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3820/comments | https://api.github.com/repos/huggingface/datasets/issues/3820/events | https://github.com/huggingface/datasets/issues/3820 | 1,159,106,603 | I_kwDODunzps5FFpAr | 3,820 | `pubmed_qa` checksum mismatch | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"descript... | closed | false | null | 1 | 2022-03-04T00:28:08Z | 2022-03-04T09:42:32Z | 2022-03-04T09:42:32Z | null | ## Describe the bug
Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
try:
datasets.load_dataset("pubmed_qa", "pqa_labeled")
except Exception as e:
print(e)
try:
datasets.load_dataset("pubmed_qa", "pqa_unlabeled")
except Exception as e:
print(e)
try:
datasets.load_dataset("pubmed_qa", "pqa_artificial")
except Exception as e:
print(e)
```
## Expected results
Successful download.
## Actual results
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS']
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: macOS
- Python version: 3.8.1
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3820/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3820/timeline | null | completed | null | null | false | [
"Hi @jon-tow, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today.\r\n\r\nIn the meantime, you can get this fix by installing o... |
https://api.github.com/repos/huggingface/datasets/issues/196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/196/comments | https://api.github.com/repos/huggingface/datasets/issues/196/events | https://github.com/huggingface/datasets/pull/196 | 624,901,266 | MDExOlB1bGxSZXF1ZXN0NDIzMjIwMjIw | 196 | Check invalid config name | [] | closed | false | null | 13 | 2020-05-26T13:52:51Z | 2020-05-26T21:04:56Z | 2020-05-26T21:04:55Z | null | As said in #194, we should raise an error if the config name has bad characters.
Bad characters are those that are not allowed in directory names on Windows. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/196/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/196",
"merged_at": "2020-05-26T21:04:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/196"
} | true | [
"I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\n",
"> I think that's not related... |
https://api.github.com/repos/huggingface/datasets/issues/1675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1675/comments | https://api.github.com/repos/huggingface/datasets/issues/1675/events | https://github.com/huggingface/datasets/issues/1675 | 777,367,320 | MDU6SXNzdWU3NzczNjczMjA= | 1,675 | Add the 800GB Pile dataset? | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 7 | 2021-01-01T22:58:12Z | 2021-12-01T15:29:07Z | 2021-12-01T15:29:07Z | null | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:** https://pile.eleuther.ai/paper.pdf
- **Data:** https://pile.eleuther.ai/
- **Motivation:** Enables hardcore (GPT-3 scale!) language modelling
## Remarks
Given the extreme size of this dataset, I'm not sure how feasible this will be to include in `datasets` 🤯 . I'm also unsure how many `datasets` users are pretraining LMs, so the usage of this dataset may not warrant the effort to integrate it.
| {
"+1": 5,
"-1": 0,
"confused": 1,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 5,
"total_count": 13,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1675/timeline | null | completed | null | null | false | [
"The pile dataset would be very nice.\r\nBenchmarks show that pile trained models achieve better results than most of actually trained models",
"The pile can very easily be added and adapted using this [tfds implementation](https://github.com/EleutherAI/The-Pile/blob/master/the_pile/tfds_pile.py) from the repo. \... |
https://api.github.com/repos/huggingface/datasets/issues/1587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1587/comments | https://api.github.com/repos/huggingface/datasets/issues/1587/events | https://github.com/huggingface/datasets/pull/1587 | 768,929,877 | MDExOlB1bGxSZXF1ZXN0NTQxMjAwMDk3 | 1,587 | Add nq_open question answering dataset | [] | closed | false | null | 1 | 2020-12-16T14:22:08Z | 2020-12-17T16:07:10Z | 2020-12-17T16:07:10Z | null | this is pr is a copy of #1506 due to messed up git history in that pr. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1587/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1587",
"merged_at": "2020-12-17T16:07:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1587"
} | true | [
"@SBrandeis all checks passing"
] |
https://api.github.com/repos/huggingface/datasets/issues/3782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3782/comments | https://api.github.com/repos/huggingface/datasets/issues/3782/events | https://github.com/huggingface/datasets/pull/3782 | 1,148,994,022 | PR_kwDODunzps4zY-Xb | 3,782 | Error of writing with different schema, due to nonpreservation of nullability | [] | closed | false | null | 1 | 2022-02-24T08:23:07Z | 2022-03-03T14:54:39Z | 2022-03-03T14:54:39Z | null | ## 1. Case
```
dataset.map(
batched=True,
disable_nullable=True,
)
```
will get the following error at here https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516
`pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema`
## 2. Debugging
### 2.1 tracing
During `_map_single`, the following are called
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_dataset.py#L2523
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L511
### 2.2. Observation
The problem is, even after `table_cast`, `pa_table.schema != self._schema`
`pa_table.schema` (before/after `table_cast`)
```
input_ids: list<item: int32>
child 0, item: int32
```
`self._schema`
```
input_ids: list<item: int32> not null
child 0, item: int32
```
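A small self-contained illustration of this loss of nullability (not taken from the PR; the column name is arbitrary):
```python
# Hedged sketch: round-tripping a non-nullable field through Features drops "not null".
import pyarrow as pa
from datasets import Features

schema = pa.schema([pa.field("input_ids", pa.list_(pa.int32()), nullable=False)])
features = Features.from_arrow_schema(schema)
print(schema.field("input_ids").nullable)                 # False
print(features.arrow_schema.field("input_ids").nullable)  # True: nullability is not preserved
```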
### 2.3. Reason
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1121
Here we lose the nullability stored in `schema`, because `Features` is always nullable and does not store nullability.
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1103
So, casting to a schema derived from such `Features` loses nullability, and eventually causes the error of writing with a different schema.
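A minimal pyarrow-only sketch of this effect (illustration only, not the `datasets` internals):
```python
import pyarrow as pa

# The writer's schema marks the column as non-nullable...
writer_schema = pa.schema([pa.field("input_ids", pa.list_(pa.int32()), nullable=False)])
# ...while a schema rebuilt from `Features` defaults to nullable fields.
rebuilt_schema = pa.schema([pa.field("input_ids", pa.list_(pa.int32()))])

print(writer_schema.field("input_ids").nullable)   # False
print(rebuilt_schema.field("input_ids").nullable)  # True
print(writer_schema.equals(rebuilt_schema))        # False -> "different schema"
```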
## 3. Solution
1. Let `Features` store nullability.
2. Directly cast table with original schema but not schema from converted `Features`. (this PR)
3. Don't `cast_table` when `write_table` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3782/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3782/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3782",
"merged_at": "2022-03-03T14:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3782"
} | true | [
"Hi ! Thanks for reporting, indeed `disable_nullable` doesn't seem to be supported in this case. Maybe at one point we can have `disable_nullable` as a parameter of certain feature types"
] |
https://api.github.com/repos/huggingface/datasets/issues/814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/814/comments | https://api.github.com/repos/huggingface/datasets/issues/814/events | https://github.com/huggingface/datasets/issues/814 | 738,500,443 | MDU6SXNzdWU3Mzg1MDA0NDM= | 814 | Joining multiple datasets | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2020-11-08T16:19:30Z | 2020-11-08T19:38:48Z | 2020-11-08T19:38:48Z | null | Hi
I have multiple iterative datasets from your library with different size and I want to join them in a way that each datasets is sampled equally, so smaller datasets more, larger one less, could you tell me how to implement this in pytorch? thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/814/timeline | null | completed | null | null | false | [
"found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks "
] |
https://api.github.com/repos/huggingface/datasets/issues/397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/397/comments | https://api.github.com/repos/huggingface/datasets/issues/397/events | https://github.com/huggingface/datasets/pull/397 | 657,510,856 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4 | 397 | Add contiguous sharding | [] | closed | false | null | 0 | 2020-07-15T17:02:58Z | 2020-07-17T16:59:31Z | 2020-07-17T16:59:31Z | null | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing.
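For intuition, a minimal sketch (illustration only, not the library's code) of how contiguous sharding could assign indices, compared to the default strided behaviour:
```python
# Illustration: shard i out of n over a dataset of length L.
def strided_shard_indices(L, n, i):
    return list(range(i, L, n))  # round-robin assignment

def contiguous_shard_indices(L, n, i):
    div, mod = divmod(L, n)
    start = div * i + min(i, mod)
    return list(range(start, start + div + (1 if i < mod else 0)))

# e.g. L=10, n=3 -> contiguous shards cover [0..3], [4..6], [7..9],
# so concatenating them in order reproduces the original dataset.
```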
Usage:
```
nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)])
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/397/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/397/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/397.diff",
"html_url": "https://github.com/huggingface/datasets/pull/397",
"merged_at": "2020-07-17T16:59:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/397.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/397"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4522/comments | https://api.github.com/repos/huggingface/datasets/issues/4522/events | https://github.com/huggingface/datasets/issues/4522 | 1,274,929,328 | I_kwDODunzps5L_eCw | 4,522 | Try to reduce the number of datasets that require manual download | [] | open | false | null | 0 | 2022-06-17T11:42:03Z | 2022-06-17T11:52:48Z | null | null | > Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore
from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4522/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4714/comments | https://api.github.com/repos/huggingface/datasets/issues/4714/events | https://github.com/huggingface/datasets/pull/4714 | 1,309,265,682 | PR_kwDODunzps47o0YG | 4,714 | Fix named split sorting and remove unnecessary casting | [] | closed | false | null | 3 | 2022-07-19T09:48:28Z | 2022-07-22T09:39:45Z | 2022-07-22T09:10:57Z | null | This PR:
- makes `NamedSplit` sortable: so that `sorted()` can be called on them
- removes unnecessary `sorted()` on `dict.keys()`: `dict_keys` view is already like a `set`
- removes unnecessary casting of `NamedSplit` to `str` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4714/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4714",
"merged_at": "2022-07-22T09:10:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4714"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"hahaha what a timing, I added my comment right after you merged x)\r\n\r\nyou can ignore my (nit), it's fine",
"Sorry, just too sync... :sweat_smile: "
] |
https://api.github.com/repos/huggingface/datasets/issues/5082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5082/comments | https://api.github.com/repos/huggingface/datasets/issues/5082/events | https://github.com/huggingface/datasets/pull/5082 | 1,399,379,777 | PR_kwDODunzps5ATJv- | 5,082 | adding keep in memory | [] | closed | false | null | 2 | 2022-10-06T11:10:46Z | 2022-10-07T14:35:34Z | 2022-10-07T14:32:54Z | null | Fixing #514 .
Hello @mariosasko 👋, I have implemented what you recommended to fix the keep-in-memory problem for shuffle in issue #514. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5082/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5082",
"merged_at": "2022-10-07T14:32:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5082"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @mariosasko , I have added a test for the `keep_in_memory` version. I have also removed the `Compatible with temp_seed` part in the scope of `dset_shuffled`, please verify if that makes sense."
] |
https://api.github.com/repos/huggingface/datasets/issues/3939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3939/comments | https://api.github.com/repos/huggingface/datasets/issues/3939/events | https://github.com/huggingface/datasets/issues/3939 | 1,170,882,331 | I_kwDODunzps5Fyj8b | 3,939 | Source links broken | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 8 | 2022-03-16T11:17:47Z | 2022-03-19T04:41:32Z | 2022-03-19T04:41:32Z | null | ## Describe the bug
The source links of v2.0.0 docs are broken:
For exmaple, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3939/timeline | null | completed | null | null | false | [
"Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/",
"@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin compari... |
https://api.github.com/repos/huggingface/datasets/issues/3454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3454/comments | https://api.github.com/repos/huggingface/datasets/issues/3454/events | https://github.com/huggingface/datasets/pull/3454 | 1,084,519,107 | PR_kwDODunzps4wENam | 3,454 | Fix iter_archive generator | [] | closed | false | null | 0 | 2021-12-20T08:50:15Z | 2021-12-20T10:05:00Z | 2021-12-20T10:04:59Z | null | This PR:
- Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs
- Fixes bugs in `iter_archive` introduced in:
- #3443
Fix #3453. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3454/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3454/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3454.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3454",
"merged_at": "2021-12-20T10:04:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3454.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3454"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5965/comments | https://api.github.com/repos/huggingface/datasets/issues/5965/events | https://github.com/huggingface/datasets/issues/5965 | 1,763,648,540 | I_kwDODunzps5pHyQc | 5,965 | "Couldn't cast array of type" in complex datasets | [] | closed | false | null | 4 | 2023-06-19T14:16:14Z | 2023-07-26T15:13:53Z | 2023-07-26T15:13:53Z | null | ### Describe the bug
When doing a map over a dataset with complex types, sometimes `datasets` is unable to infer a valid schema for the values returned by `datasets.map()`. This often comes from conflicting types, for example when both empty lists and filled lists are competing for the same field value.
This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level.
Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.
### Steps to reproduce the bug
A trivial reproduction case:
```python
from typing import Iterator, Any

import pandas as pd
import pytest

from datasets import Dataset


def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:
    lengths = [len(values) for values in batch.values()]
    for i in range(next(iter(lengths))):
        yield {feature: values[i] for feature, values in batch.items()}


def examples_to_batch(examples) -> dict[str, list[Any]]:
    batch = {}
    for example in examples:
        for feature, value in example.items():
            if feature not in batch:
                batch[feature] = []
            batch[feature].append(value)
    return batch


def batch_process(examples, explicit_schema: bool = False):
    new_examples = []
    for example in batch_to_examples(examples):
        new_examples.append(dict(texts=example["raw_text"].split()))
    return examples_to_batch(new_examples)


df = pd.DataFrame(
    [
        {"raw_text": ""},
        {"raw_text": "This is a test"},
        {"raw_text": "This is another test"},
    ]
)

dataset = Dataset.from_pandas(df)

# datasets won't be able to typehint a dataset that starts with an empty example.
with pytest.raises(TypeError, match="Couldn't cast array of type"):
    dataset = dataset.map(
        batch_process,
        batched=True,
        batch_size=1,
        num_proc=1,
        remove_columns=dataset.column_names,
    )
```
This results in crashes like:
```bash
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type string to null
```
### Expected behavior
The code should successfully map and create a new dataset without error.
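(For reference, a minimal sketch of the explicit-features approach suggested in the maintainer reply quoted further down; the call shown there is truncated, so this is an adaptation:)
```python
import datasets

dataset = dataset.map(
    batch_process,
    batched=True,
    batch_size=1,
    num_proc=1,
    remove_columns=dataset.column_names,
    features=datasets.Features({"texts": datasets.Sequence(datasets.Value("string"))}),
)
```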
### Environment info
Mac OSX, Linux | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5965/timeline | null | completed | null | null | false | [
"Thanks for reporting! \r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datase... |
https://api.github.com/repos/huggingface/datasets/issues/589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/589/comments | https://api.github.com/repos/huggingface/datasets/issues/589/events | https://github.com/huggingface/datasets/issues/589 | 696,488,447 | MDU6SXNzdWU2OTY0ODg0NDc= | 589 | Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging' | [] | closed | false | null | 0 | 2020-09-09T06:46:53Z | 2020-09-09T08:57:54Z | 2020-09-09T08:57:54Z | null |
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 61, in import_main_class
module = importlib.import_module(module_path)
File "/root/anaconda3/envs/pytorch/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/datasets/text/5dc629379536c4037d9c2063e1caa829a1676cf795f8e030cd90a537eba20c08/text.py", line 9, in <module>
logger = nlp.utils.logging.get_logger(__name__)
AttributeError: module 'nlp.utils' has no attribute 'logging'
```
Occurs on the following code, or any code including the load_dataset('text'):
```
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/589/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3362/comments | https://api.github.com/repos/huggingface/datasets/issues/3362/events | https://github.com/huggingface/datasets/pull/3362 | 1,068,809,768 | PR_kwDODunzps4vRR2r | 3,362 | Adapt image datasets | [] | closed | false | null | 3 | 2021-12-01T19:52:01Z | 2021-12-09T18:37:42Z | 2021-12-09T18:37:41Z | null | This PR:
* adapts the ImageClassification template to use the new Image feature
* adapts the following datasets to use the new Image feature:
* beans (+ fixes streaming)
* cats_vs_dogs (+ fixes streaming)
* cifar10
* cifar100
* fashion_mnist
* mnist
* head_qa
cc @nateraw | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3362/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3362",
"merged_at": "2021-12-09T18:37:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3362"
} | true | [
"This PR can be merged after #3163 is merged (this PR is pretty big because I was working on the forked branch).\r\n\r\n@lhoestq @albertvillanova Could you please take a look at the changes in `src/datasets/utils/streaming_download_manager.py`? These changes were required to support streaming of the `cats_vs_dogs` ... |
https://api.github.com/repos/huggingface/datasets/issues/2161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2161/comments | https://api.github.com/repos/huggingface/datasets/issues/2161/events | https://github.com/huggingface/datasets/issues/2161 | 849,127,041 | MDU6SXNzdWU4NDkxMjcwNDE= | 2,161 | any possibility to download part of large datasets only? | [] | closed | false | null | 6 | 2021-04-02T10:06:46Z | 2022-10-05T13:26:51Z | 2022-10-05T13:26:51Z | null | Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2161/timeline | null | completed | null | null | false | [
"Not yet but it’s on the short/mid-term roadmap (requested by many indeed).",
"oh, great, really awesome feature to have, thank you very much for the great, fabulous work",
"We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)",
"thanks a lot Quentin, this would be... |
https://api.github.com/repos/huggingface/datasets/issues/1642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1642/comments | https://api.github.com/repos/huggingface/datasets/issues/1642/events | https://github.com/huggingface/datasets/pull/1642 | 775,159,568 | MDExOlB1bGxSZXF1ZXN0NTQ1ODk1MzY1 | 1,642 | Ollie dataset | [] | closed | false | null | 0 | 2020-12-28T02:43:37Z | 2021-01-04T13:35:25Z | 2021-01-04T13:35:24Z | null | This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. See http://knowitall.github.io/ollie/ for more details. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1642/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1642.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1642",
"merged_at": "2021-01-04T13:35:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1642.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1642"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3546/comments | https://api.github.com/repos/huggingface/datasets/issues/3546/events | https://github.com/huggingface/datasets/pull/3546 | 1,096,367,684 | PR_kwDODunzps4wqYIV | 3,546 | Remove print statements in datasets | [] | closed | false | null | 1 | 2022-01-07T14:30:24Z | 2022-01-07T18:09:16Z | 2022-01-07T18:09:15Z | null | This is a second time I'm removing print statements in our datasets, so I've added a test to avoid these issues in the future. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3546/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3546/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3546.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3546",
"merged_at": "2022-01-07T18:09:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3546.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3546"
} | true | [
"The CI failures are unrelated to the changes."
] |
https://api.github.com/repos/huggingface/datasets/issues/877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/877/comments | https://api.github.com/repos/huggingface/datasets/issues/877/events | https://github.com/huggingface/datasets/issues/877 | 748,234,438 | MDU6SXNzdWU3NDgyMzQ0Mzg= | 877 | DataLoader(datasets) become more and more slowly within iterations | [] | closed | false | null | 2 | 2020-11-22T12:41:10Z | 2020-11-29T15:45:12Z | 2020-11-29T15:45:12Z | null | Hello, when I loop over my dataloader, the loading speed becomes slower and slower!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
    # do something for each line
```
In the beginning, the loading speed is around 2000 it/s, but after about a minute, the speed is much slower, just around 800 it/s.
And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s.
Could you please help me with this problem?
Thanks a lot! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/877/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not",
"> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset... |
https://api.github.com/repos/huggingface/datasets/issues/4783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4783/comments | https://api.github.com/repos/huggingface/datasets/issues/4783/events | https://github.com/huggingface/datasets/pull/4783 | 1,326,375,011 | PR_kwDODunzps48iHey | 4,783 | Docs for creating a loading script for image datasets | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 7 | 2022-08-02T20:36:03Z | 2022-09-09T17:08:14Z | 2022-09-07T19:07:34Z | null | This PR is a first draft of creating a loading script for image datasets. Feel free to let me know if there are any specificities I'm missing for this. 🙂
To do:
- [x] Document how to create different configurations. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4783/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4783",
"merged_at": "2022-09-07T19:07:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4783"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"IMO it would make more sense to add a \"Create image dataset\" page with two main sections - a no-code approach with `imagefolder` + metadata (preferred way), and with a loading script (advanced). It should be clear when to choose wh... |
https://api.github.com/repos/huggingface/datasets/issues/2882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2882/comments | https://api.github.com/repos/huggingface/datasets/issues/2882/events | https://github.com/huggingface/datasets/issues/2882 | 991,800,141 | MDU6SXNzdWU5OTE4MDAxNDE= | 2,882 | `load_dataset('docred')` results in a `NonMatchingChecksumError` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-09T05:55:02Z | 2021-09-13T11:24:30Z | 2021-09-13T11:24:30Z | null | ## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is essentially just this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## Expected results
The DocRED dataset should be loaded without any problems.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')
~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
673 # Checksums verification
674 if verify_infos:
--> 675 verify_checksums(
676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
677 )
~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.
## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2882/timeline | null | completed | null | null | false | [
"Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in th... |
https://api.github.com/repos/huggingface/datasets/issues/6071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6071/comments | https://api.github.com/repos/huggingface/datasets/issues/6071/events | https://github.com/huggingface/datasets/issues/6071 | 1,821,990,749 | I_kwDODunzps5smV9d | 6,071 | storage_options provided to load_dataset not fully piping through since datasets 2.14.0 | [] | closed | false | null | 2 | 2023-07-26T09:37:20Z | 2023-07-27T12:42:58Z | 2023-07-27T12:42:58Z | null | ### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
    "parquet",
    data_files=data_files,
    storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
    "parquet",
    data_files=data_files,
    storage_options=fs.storage_options,
    download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6071/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?",
"Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a... |
https://api.github.com/repos/huggingface/datasets/issues/3005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3005/comments | https://api.github.com/repos/huggingface/datasets/issues/3005/events | https://github.com/huggingface/datasets/issues/3005 | 1,014,615,420 | I_kwDODunzps48ec18 | 3,005 | DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-04T00:49:29Z | 2021-10-11T10:18:01Z | 2021-10-04T08:46:13Z | null | ## Describe the bug
The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument
## Steps to reproduce the bug
```python
import datasets

example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]})

def filter_value(example, value):
    return example['a'] == value

filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
```
## Expected results
`filtered` is a dataset containing {"a": [3]}
## Actual results
> Traceback (most recent call last):
> File "C:\Users\qsemi\Documents\git\nlp_experiments\gpt_celebrity\src\test_faulty_filter.py", line 8, in <module>
> filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2169, in filter
> indices = self.map(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1686, in map
> return self._map_single(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2048, in _map_single
> batch = apply_function_on_filtered_inputs(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
> TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'value'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3005/timeline | null | completed | null | null | false | [
"Hi @DrMatters, thanks for reporting.\r\n\r\nThis issue was fixed 14 days ago: #2950.\r\n\r\nCurrently, the fix is only in the master branch and will be made available in our next library release.\r\n\r\nIn the meantime, you can incorporate the fix by installing datasets from the master branch:\r\n```shell\r\npip i... |
https://api.github.com/repos/huggingface/datasets/issues/4164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4164/comments | https://api.github.com/repos/huggingface/datasets/issues/4164/events | https://github.com/huggingface/datasets/pull/4164 | 1,203,661,346 | PR_kwDODunzps42MfxX | 4,164 | Fix duplicate key in multi_news | [] | closed | false | null | 1 | 2022-04-13T18:48:24Z | 2022-04-13T21:04:16Z | 2022-04-13T20:58:02Z | null | To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4164/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4164",
"merged_at": "2022-04-13T20:58:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4164"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1914/comments | https://api.github.com/repos/huggingface/datasets/issues/1914/events | https://github.com/huggingface/datasets/pull/1914 | 812,149,201 | MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz | 1,914 | Fix logging imports and make all datasets use library logger | [] | closed | false | null | 0 | 2021-02-19T16:12:34Z | 2021-02-21T19:48:03Z | 2021-02-21T19:48:03Z | null | Fix library relative logging imports and make all datasets use library logger. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1914/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1914",
"merged_at": "2021-02-21T19:48:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1914"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1325/comments | https://api.github.com/repos/huggingface/datasets/issues/1325/events | https://github.com/huggingface/datasets/pull/1325 | 759,595,556 | MDExOlB1bGxSZXF1ZXN0NTM0NTczNjM2 | 1,325 | Add humicroedit dataset | [] | closed | false | null | 2 | 2020-12-08T16:35:46Z | 2020-12-17T17:59:09Z | 2020-12-17T17:59:09Z | null | Pull request for adding humicroedit dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1325/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1325",
"merged_at": "2020-12-17T17:59:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1325"
} | true | [
"Updated the commit with the generated yaml tags",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4366/comments | https://api.github.com/repos/huggingface/datasets/issues/4366/events | https://github.com/huggingface/datasets/issues/4366 | 1,239,534,165 | I_kwDODunzps5J4cpV | 4,366 | TypeError: __init__() missing 1 required positional argument: 'scheme' | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 1 | 2022-05-18T07:17:29Z | 2022-05-18T16:36:22Z | 2022-05-18T16:36:21Z | null | "name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
when I run the command:
nohup python3 custom_service.pyc > service.log 2>&1&
the log:
nohup: ignoring input
Traceback (most recent call last):
File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module>
File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize
File "custom_impl.py", line 286, in custom_setup
File "custom_impl.py", line 127, in create_es_index
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__
ssl_show_warn=ssl_show_warn,
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs
node_configs = hosts_to_node_configs(hosts)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs
node_configs.append(host_mapping_to_node_config(host))
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config
return NodeConfig(**options) # type: ignore
TypeError: __init__() missing 1 required positional argument: 'scheme'
[1]+ Exit 1 nohup python3 custom_service.pyc > service.log 2>&1
custom_service.pyc can't run
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4366/timeline | null | completed | null | null | false | [
"Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py"
] |
https://api.github.com/repos/huggingface/datasets/issues/1589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1589/comments | https://api.github.com/repos/huggingface/datasets/issues/1589/events | https://github.com/huggingface/datasets/pull/1589 | 769,187,141 | MDExOlB1bGxSZXF1ZXN0NTQxMzcwMTM0 | 1,589 | Update doc2dial.py | [] | closed | false | null | 1 | 2020-12-16T18:50:56Z | 2022-07-06T15:19:57Z | 2022-07-06T15:19:57Z | null | Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1589/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1589",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1589"
} | true | [
"Thanks for adding the `doc2dial_rc` config :) \r\n\r\nIt looks like you're missing the dummy data for this config though. Could you add them please ?\r\nAlso to fix the CI you'll need to format the code with `make style`"
] |
https://api.github.com/repos/huggingface/datasets/issues/3019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3019/comments | https://api.github.com/repos/huggingface/datasets/issues/3019/events | https://github.com/huggingface/datasets/pull/3019 | 1,015,339,983 | PR_kwDODunzps4speOB | 3,019 | Fix filter leaking | [] | closed | false | null | 0 | 2021-10-04T15:42:58Z | 2022-06-03T08:28:14Z | 2021-10-05T08:33:07Z | null | If filter is called after using a first transform `shuffle`, `select`, `shard`, `train_test_split`, or `filter`, then it might not work as expected and could return examples from before the first transform. This is because the indices mapping was not taken into account when saving the indices to keep during filtering.
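A sketch of the kind of call sequence affected (hypothetical reproduction, assuming the behaviour described above; names are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
first = ds.select(range(5))                 # keep rows 0-4 via an indices mapping
kept = first.filter(lambda x: x["a"] >= 0)  # on the affected versions this could
                                            # surface rows from the full table again
```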
Affected versions: 1.12.0 and 1.12.1
This should fix #3010 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3019/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3019/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3019",
"merged_at": "2021-10-05T08:33:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3019"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5153/comments | https://api.github.com/repos/huggingface/datasets/issues/5153/events | https://github.com/huggingface/datasets/issues/5153 | 1,420,833,457 | I_kwDODunzps5UsDKx | 5,153 | default Image/AudioFolder infers labels when there is no metadata files even if there is only one dir | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-10-24T13:28:18Z | 2022-11-15T16:31:10Z | 2022-11-15T16:31:09Z | null | ### Describe the bug
By default, FolderBasedBuilder infers labels when there are no metadata files, even if this is meaningless (for example, when all the files are in a single directory or in the root folder; see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios).
This is a corner case that comes up during quick exploration of images or audios on the Hub.
### Steps to reproduce the bug
If you have a directory like this:
```
repo
image1.jpg
image2.jpg
image3.jpg
```
or
```
repo
data
image1.jpg
image2.jpg
image3.jpg
```
doing `ds = load_dataset(repo)` would create a `label` feature:
```python
print(ds["train"][0])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 0}
```
Also, if you have the following structure:
```
repo
data
image1.jpg
image2.jpg
image3.jpg
image4.jpg
image5.jpg
image6.jpg
```
it will infer two labels:
```python
print(ds["train"][0])
print(ds["train"][-1])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 1}
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x415 at 0x7FB5326555B0>, 'label': 0}
```
### Expected behavior
We should have only one base feature (Image/Audio) in such cases.
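One possible heuristic, sketched below, would be to only infer labels when all files sit at the same depth and span more than one parent directory (hypothetical check, not the library's implementation):
```python
import os

def inferred_labels_look_meaningful(paths):
    # All files at the same depth, and more than one distinct parent
    # directory that could act as a label.
    depths = {p.count(os.sep) for p in paths}
    parents = {os.path.dirname(p) for p in paths}
    return len(depths) == 1 and len(parents) > 1

# e.g. ["data/image1.jpg", "data/image2.jpg"] -> False (single parent, no label)
```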
### Environment info
all versions of `datasets` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5153/timeline | null | completed | null | null | false | [
"Makes sense! For the last structure, we could count the path segments (delimited by \"/\" for URLs and `os.sep` for local paths) to ensure all inferred labels are on the same level. Otherwise, I think it's safe to assume they are meaningless and ignore them.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3901/comments | https://api.github.com/repos/huggingface/datasets/issues/3901/events | https://github.com/huggingface/datasets/issues/3901 | 1,167,339,773 | I_kwDODunzps5FlDD9 | 3,901 | Dataset viewer issue for IndicParaphrase- the preview doesn't show | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-03-12T16:56:05Z | 2022-04-12T12:10:50Z | 2022-04-12T12:10:49Z | null | ## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
*The preview of the dataset doesn't come up.
The error on the console is:
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'*
Am I the one who added this dataset ? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3901/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3901/timeline | null | completed | null | null | false | [
"It seems to have been fixed:\r\n\r\n<img width=\"1534\" alt=\"Capture d’écran 2022-04-12 à 14 10 07\" src=\"https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6064/comments | https://api.github.com/repos/huggingface/datasets/issues/6064/events | https://github.com/huggingface/datasets/pull/6064 | 1,818,703,725 | PR_kwDODunzps5WPzAv | 6,064 | set dev version | [] | closed | false | null | 3 | 2023-07-24T15:56:00Z | 2023-07-24T16:05:19Z | 2023-07-24T15:56:10Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6064/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6064",
"merged_at": "2023-07-24T15:56:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6064"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6064). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/5645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5645/comments | https://api.github.com/repos/huggingface/datasets/issues/5645/events | https://github.com/huggingface/datasets/issues/5645 | 1,627,108,278 | I_kwDODunzps5g-7O2 | 5,645 | Datasets map and select(range()) is giving dill error | [] | closed | false | null | 2 | 2023-03-16T10:01:28Z | 2023-03-17T04:24:51Z | 2023-03-17T04:24:51Z | null | ### Describe the bug
I'm using the Hugging Face Datasets library to load the dataset in Google Colab
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
I get the following error: `module 'dill._dill' has no attribute 'log'`
I've tried downgrading the dill version from latest to 0.2.8, but no luck.
Stack trace:
> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj)
> 367 try:
> --> 368 import transformers as tr
> 369
>
> ModuleNotFoundError: No module named 'transformers'
>
> During handling of the above exception, another exception occurred:
>
> AttributeError Traceback (most recent call last)
> 17 frames
> <ipython-input-13-dd14813880a6> in <module>
> ----> 1 test = train_dataset.select(range(10))
>
> /usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
> 155 }
> 156 # apply actual function
> --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
> 159 # re-apply format to the output
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
> 155 if kwargs.get(fingerprint_name) is None:
> 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
> --> 157 kwargs[fingerprint_name] = update_fingerprint(
> 158 self._fingerprint, transform, kwargs_for_fingerprint
> 159 )
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
> 103 for key in sorted(transform_args):
> 104 hasher.update(key)
> --> 105 hasher.update(transform_args[key])
> 106 return hasher.hexdigest()
> 107
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value)
> 55 def update(self, value):
> 56 self.m.update(f"=={type(value)}==".encode("utf8"))
> ---> 57 self.m.update(self.hash(value).encode("utf-8"))
> 58
> 59 def hexdigest(self):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value)
> 51 return cls.dispatch[type(value)](cls, value)
> 52 else:
> ---> 53 return cls.hash_default(value)
> 54
> 55 def update(self, value):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value)
> 44 @classmethod
> 45 def hash_default(cls, value):
> ---> 46 return cls.hash_bytes(dumps(value))
> 47
> 48 @classmethod
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj)
> 387 file = StringIO()
> 388 with _no_cache_fields(obj):
> --> 389 dump(obj, file)
> 390 return file.getvalue()
> 391
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file)
> 359 def dump(obj, file):
> 360 """pickle an object to a file"""
> --> 361 Pickler(file, recurse=True).dump(obj)
> 362 return
> 363
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj)
> 392 return
> 393
> --> 394 def load_session(filename='/tmp/session.pkl', main=None):
> 395 """update the __main__ module with the state from the session file"""
> 396 if main is None: main = _main_module
>
> /usr/lib/python3.9/pickle.py in dump(self, obj)
> 485 if self.proto >= 4:
> 486 self.framer.start_framing()
> --> 487 self.save(obj)
> 488 self.write(STOP)
> 489 self.framer.end_framing()
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj)
>
> /usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
> 689 write(NEWOBJ)
> 690 else:
> --> 691 save(func)
> 692 save(args)
> 693 write(REDUCE)
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
> 583 dill._dill.log.info("# F1")
> 584 else:
> --> 585 dill._dill.log.info("F2: %s" % obj)
> 586 name = getattr(obj, "__qualname__", getattr(obj, "__name__", None))
> 587 dill._dill.StockPickler.save_global(pickler, obj, name=name)
>
> AttributeError: module 'dill._dill' has no attribute 'log'
### Steps to reproduce the bug
After loading the dataset (e.g. https://huggingface.co/datasets/scientific_papers) in Google Colab
do either
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
### Expected behavior
The map and select function should work
### Environment info
dataset: https://huggingface.co/datasets/scientific_papers
dill = 0.3.6
python= 3.9.16
transformer = 4.2.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5645/timeline | null | completed | null | null | false | [
"It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-... |
https://api.github.com/repos/huggingface/datasets/issues/5120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5120/comments | https://api.github.com/repos/huggingface/datasets/issues/5120/events | https://github.com/huggingface/datasets/pull/5120 | 1,410,641,221 | PR_kwDODunzps5A4X10 | 5,120 | Fix `tqdm` zip bug | [] | closed | false | null | 11 | 2022-10-16T22:19:18Z | 2022-10-23T10:27:53Z | 2022-10-19T08:53:17Z | null | This PR solves #5117, by wrapping the entire `zip` clause in tqdm.
For more information, please check out this Stack Overflow thread:
https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5120/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5120.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5120",
"merged_at": "2022-10-19T08:53:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5120.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5120"
} | true | [
"@albertvillanova Thanks for your comment. What do you think about creating 2 `pbar` for each case? I see the `pbar_iterable` is initialized differently. Maybe `pbar` can also be initialized like that.",
"@albertvillanova Another solution I implemented is to change `pbar_iterable` and add the `zip` to it. I updat... |
https://api.github.com/repos/huggingface/datasets/issues/2060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2060/comments | https://api.github.com/repos/huggingface/datasets/issues/2060/events | https://github.com/huggingface/datasets/pull/2060 | 832,588,591 | MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx | 2,060 | Filtering refactor | [] | closed | false | null | 10 | 2021-03-16T09:23:30Z | 2021-10-13T09:09:04Z | 2021-10-13T09:09:03Z | null | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2060/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2060.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2060",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2060.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2060"
} | true | [
"I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate 👀 \r\n\r\nI'm not familiar with the caching you describe ... |
https://api.github.com/repos/huggingface/datasets/issues/1697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1697/comments | https://api.github.com/repos/huggingface/datasets/issues/1697/events | https://github.com/huggingface/datasets/pull/1697 | 781,126,579 | MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5 | 1,697 | Update DialogRE DatasetCard | [] | closed | false | null | 1 | 2021-01-07T08:22:33Z | 2021-01-07T13:34:28Z | 2021-01-07T13:34:28Z | null | Update the information in the dataset card for the Dialog RE dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1697/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1697.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1697",
"merged_at": "2021-01-07T13:34:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1697.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1697"
} | true | [
"Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/125/comments | https://api.github.com/repos/huggingface/datasets/issues/125/events | https://github.com/huggingface/datasets/pull/125 | 618,869,048 | MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0 | 125 | [Newsroom] add newsroom | [] | closed | false | null | 0 | 2020-05-15T10:34:34Z | 2020-05-15T10:37:07Z | 2020-05-15T10:37:02Z | null | I checked it with the data link of the mail you forwarded @thomwolf => works well! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/125/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/125",
"merged_at": "2020-05-15T10:37:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/125"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5839/comments | https://api.github.com/repos/huggingface/datasets/issues/5839/events | https://github.com/huggingface/datasets/issues/5839 | 1,704,554,718 | I_kwDODunzps5lmXDe | 5,839 | Make models/functions optimized with `torch.compile` hashable | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2023-05-10T20:02:08Z | 2023-05-10T20:02:08Z | null | null | As reported in https://github.com/huggingface/datasets/issues/5819, hashing functions/transforms that reference a model, or a function, optimized with `torch.compile` currently fails due to them not being picklable (the concrete error can be found in the linked issue).
The solutions to consider:
1. hashing/pickling the original, uncompiled version of a compiled model/function (attributes `_orig_mod`/`_torchdynamo_orig_callable`) (less precise than the 2nd option as it ignores the other params of `torch.compile`)
2. wait for https://github.com/pytorch/pytorch/issues/101107 to be resolved
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5839/timeline | null | null | null | null | false | [] |
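A minimal sketch of option 1 from the issue above, assuming PyTorch ≥ 2.0 and the `_orig_mod` attribute mentioned there; it only shows how the uncompiled module could be recovered for hashing, not how `datasets` would integrate it:

```python
import torch

model = torch.nn.Linear(4, 2)
compiled = torch.compile(model)

# The compiled wrapper keeps a reference to the original module,
# which is picklable and could be hashed in place of the wrapper.
target = getattr(compiled, "_orig_mod", compiled)
print(target is model)  # True
```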
https://api.github.com/repos/huggingface/datasets/issues/1004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1004/comments | https://api.github.com/repos/huggingface/datasets/issues/1004/events | https://github.com/huggingface/datasets/issues/1004 | 755,325,368 | MDU6SXNzdWU3NTUzMjUzNjg= | 1,004 | how large datasets are handled under the hood | [] | closed | false | null | 3 | 2020-12-02T14:32:40Z | 2022-10-05T12:13:29Z | 2022-10-05T12:13:29Z | null | Hi
I want to use multiple large datasets with a mapping-style dataloader, where they cannot fit into memory. Could you tell me how the datasets are handled under the hood? Do you bring everything into memory for mapping-style datasets, or is there some sharding under the hood so that data is only brought into memory when necessary? Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1004/timeline | null | completed | null | null | false | [
"This library uses Apache Arrow under the hood to store datasets on disk.\r\nThe advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed.\r\n\r\nFor example when you access one element or ... |
https://api.github.com/repos/huggingface/datasets/issues/2937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2937/comments | https://api.github.com/repos/huggingface/datasets/issues/2937/events | https://github.com/huggingface/datasets/issues/2937 | 999,548,277 | I_kwDODunzps47k-V1 | 2,937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-09-17T16:52:10Z | 2022-08-24T13:09:08Z | 2022-08-24T13:09:08Z | null | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project [Conan](https://github.com/conan-io/conan/issues/6560) with a similar issue with os.rename() if it helps debug this issue.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2937/timeline | null | completed | null | null | false | [
"Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB/s]\r\nDownloading: 2.71kB [00:00, ?B/s]\r\nUsing custom data configuration default\... |
https://api.github.com/repos/huggingface/datasets/issues/3625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3625/comments | https://api.github.com/repos/huggingface/datasets/issues/3625/events | https://github.com/huggingface/datasets/issues/3625 | 1,113,017,522 | I_kwDODunzps5CV0yy | 3,625 | Add a metadata field for when source data was produced | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 5 | 2022-01-24T18:52:39Z | 2022-06-28T13:54:49Z | null | null | **Is your feature request related to a problem? Please describe.**
The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests making metadata relating to the time that the underlying *source* data was produced more prominent and outlines why this specific information is of particular importance, both in domain-specific historic research and more broadly.
**Describe the solution you'd like**
There are a variety of metadata fields exposed in the dataset viewer (license, task categories, etc.) These fields make this metadata more prominent both for human users and as potentially machine-actionable information (for example, through the API). I would propose to add a metadata field that says when some underlying data was produced. For example, a dataset would be labelled as being produced between `1800-1900`.
**Describe alternatives you've considered**
This information is sometimes available in the Datacard or a paper describing the dataset. However, it's often not that easy to identify or extract this information, particularly if you want to use this field as a filter to identify relevant datasets.
**Additional context**
I believe this feature is relevant for a number of reasons:
- Increasingly, there is an interest in using historical data for training language models (for example, https://huggingface.co/dbmdz/bert-base-historic-dutch-cased), and datasets to support this task (for example, https://huggingface.co/datasets/bnl_newspapers). For these datasets, indicating the time periods covered is particularly relevant.
- More broadly, time is likely a common source of domain drift. Datasets of movie reviews from the 90s may not work well for recent movie reviews. As the documentation and long-term management of ML data become more of a priority, quickly understanding when the underlying text (or other data types) was produced is arguably more important.
- time-series data: datasets are adding more support for time series data. Again, the periods covered might be particularly relevant here.
**open questions**
- I think some of my points above apply not only to the underlying data but also to annotations. As a result, there could also be an argument for encoding this information somewhere. However, I would argue (but could be persuaded otherwise) that this is probably less important for filtering. This type of context is already addressed in the datasheets template and often requires more narrative to discuss.
- what level of granularity would make sense for this? e.g. assigning a decade, century or year?
- how to encode this information? What formatting makes sense
- what specific time to encode; a date range? (mean, modal, min, max value?)
This is a slightly amorphous feature request - I would be happy to discuss further/try and propose a more concrete solution if this seems like something that could be worth considering. I realise this might also touch on other parts of the 🤗 hubs ecosystem. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3625/timeline | null | null | null | null | false | [
"A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded?\r\n\r\nMetadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has h... |
https://api.github.com/repos/huggingface/datasets/issues/3196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3196/comments | https://api.github.com/repos/huggingface/datasets/issues/3196/events | https://github.com/huggingface/datasets/pull/3196 | 1,042,223,913 | PR_kwDODunzps4t-bxy | 3,196 | QOL improvements: auto-flatten_indices and desc in map calls | [] | closed | false | null | 0 | 2021-11-02T11:28:50Z | 2021-11-02T15:41:09Z | 2021-11-02T15:41:08Z | null | This PR:
* automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file
* adds descriptions to the map calls
Fix #3040 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3196/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3196",
"merged_at": "2021-11-02T15:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3196"
} | true | [] |
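A small sketch of the two quality-of-life features this PR describes, on a toy in-memory dataset: `desc=` labels the `map` progress bar, and `flatten_indices()` materializes the indices mapping created by `select`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

# `desc` gives the map call a human-readable progress-bar description.
ds = ds.map(lambda ex: {"text": ex["text"].upper()}, desc="Uppercasing text")

# select() only records an indices mapping; flatten_indices() rewrites the
# underlying table so later calls such as unique() or save_to_disk() do not
# need to carry the indices file around.
subset = ds.select([2, 0]).flatten_indices()
print(subset["text"])  # ['C', 'A']
```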
https://api.github.com/repos/huggingface/datasets/issues/554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/554/comments | https://api.github.com/repos/huggingface/datasets/issues/554/events | https://github.com/huggingface/datasets/issues/554 | 690,173,214 | MDU6SXNzdWU2OTAxNzMyMTQ= | 554 | nlp downloads to its module path | [] | closed | false | null | 8 | 2020-09-01T14:06:14Z | 2020-09-11T06:19:24Z | 2020-09-11T06:19:24Z | null | I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```
>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 530, in load_dataset
module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 329, in prepare_module
os.makedirs(main_folder_path, exist_ok=True)
File "/nix/store/685kq8pyhrvajah1hdsfn4q7gm3j4yd4-python3-3.8.5/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/datasets/squad'
```
Do you have any suggested workaround for this issue?
Perhaps overriding the default value for `force_local_path` of `prepare_module`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/554/timeline | null | completed | null | null | false | [
"Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?",
"> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are in... |
https://api.github.com/repos/huggingface/datasets/issues/5358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5358/comments | https://api.github.com/repos/huggingface/datasets/issues/5358/events | https://github.com/huggingface/datasets/pull/5358 | 1,495,270,822 | PR_kwDODunzps5FYBcq | 5,358 | Fix `fs.open` resource leaks | [] | closed | false | null | 3 | 2022-12-13T22:35:51Z | 2023-01-05T16:46:31Z | 2023-01-05T15:59:51Z | null | Invoking `{load,save}_from_dict` results in resource leak warnings, this should fix.
Introduces no significant logic changes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5358/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5358.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5358",
"merged_at": "2023-01-05T15:59:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5358.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5358"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mariosasko Sorry, I didn't check tests/style after doing a merge from the Git UI last week. Thx for fixing. \r\n\r\nFYI I'm getting \"Only those with [write access](https://docs.github.com/articles/what-are-the-different-access-perm... |
https://api.github.com/repos/huggingface/datasets/issues/4913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4913/comments | https://api.github.com/repos/huggingface/datasets/issues/4913/events | https://github.com/huggingface/datasets/pull/4913 | 1,355,232,007 | PR_kwDODunzps4-BP00 | 4,913 | Add license and citation information to cosmos_qa dataset | [] | closed | false | null | 1 | 2022-08-30T06:23:19Z | 2022-08-30T09:49:31Z | 2022-08-30T09:47:35Z | null | This PR adds the license information to `cosmos_qa` dataset, once reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0.
This PR also updates the citation information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4913/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4913.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4913",
"merged_at": "2022-08-30T09:47:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4913.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4913"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3294/comments | https://api.github.com/repos/huggingface/datasets/issues/3294/events | https://github.com/huggingface/datasets/issues/3294 | 1,057,495,473 | I_kwDODunzps4_CBmx | 3,294 | Add Natural Adversarial Objects dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | 0 | 2021-11-18T15:34:44Z | 2021-12-08T12:00:02Z | null | null | ## Adding a Dataset
- **Name:** Natural Adversarial Objects (NAO)
- **Description:** Natural Adversarial Objects (NAO) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence.
- **Paper:** https://arxiv.org/abs/2111.04204v1
- **Data:** https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8
- **Motivation:** interesting object detection dataset useful for misclassifications
cc @NielsRogge
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3294/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5800/comments | https://api.github.com/repos/huggingface/datasets/issues/5800/events | https://github.com/huggingface/datasets/pull/5800 | 1,686,348,096 | PR_kwDODunzps5PRTRh | 5,800 | Change downloaded file permission based on umask | [] | closed | false | null | 1 | 2023-04-27T08:13:30Z | 2023-04-27T09:33:05Z | 2023-04-27T09:30:16Z | null | This PR changes the permission of downloaded files to cache, so that the umask is taken into account.
Related to:
- #2157
Fix #5799.
CC: @stas00 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5800/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5800",
"merged_at": "2023-04-27T09:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5800"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
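The umask-aware permission change can be sketched with the standard idiom for reading the process umask (`os.umask` only exposes it by setting it, so the previous value is restored immediately). This is an illustration of the idea, not necessarily the exact code merged in the PR:

```python
import os

def chmod_with_umask(path: str, base_mode: int = 0o666) -> None:
    """Apply base_mode minus the bits masked out by the process umask."""
    umask = os.umask(0o022)  # os.umask returns the previous umask...
    os.umask(umask)          # ...so restore it right away
    os.chmod(path, base_mode & ~umask)
```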
https://api.github.com/repos/huggingface/datasets/issues/30 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/30/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/30/comments | https://api.github.com/repos/huggingface/datasets/issues/30/events | https://github.com/huggingface/datasets/pull/30 | 610,549,072 | MDExOlB1bGxSZXF1ZXN0NDExOTY4Mzk3 | 30 | add metrics which require download files from github | [] | closed | false | null | 0 | 2020-05-01T04:13:22Z | 2022-10-04T09:31:58Z | 2020-05-11T08:19:54Z | null | To download files from github, I copied the `load_dataset_module` and its dependencies (without the builder) in `load.py` to `metrics/metric_utils.py`. I made the following changes:
- copy the needed files in a folder `metric_name`
- delete all other files that are not needed
For metrics that require an external import, I first create a `<metric_name>_imports.py` file which contains all external urls. Then I create a `<metric_name>.py` in which I will load the external files using `<metric_name>_imports.py` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/30/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/30/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/30.diff",
"html_url": "https://github.com/huggingface/datasets/pull/30",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/30.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/30"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1027/comments | https://api.github.com/repos/huggingface/datasets/issues/1027/events | https://github.com/huggingface/datasets/issues/1027 | 755,695,420 | MDU6SXNzdWU3NTU2OTU0MjA= | 1,027 | Hi | [] | closed | false | null | 0 | 2020-12-02T23:47:14Z | 2020-12-03T16:42:41Z | 2020-12-03T16:42:41Z | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1027/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1027/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2987/comments | https://api.github.com/repos/huggingface/datasets/issues/2987/events | https://github.com/huggingface/datasets/issues/2987 | 1,011,026,141 | I_kwDODunzps48Qwjd | 2,987 | ArrowInvalid: Can only convert 1-dimensional array values | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-29T14:18:52Z | 2021-10-01T13:57:45Z | 2021-10-01T13:57:45Z | null | ## Describe the bug
For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset:
```
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
return encoded_inputs
```
```
Full trace:
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-8-0fc3efc6f0c2> in <module>()
27
28 train_dataset = datasets['train'].map(preprocess_data, batched=True, remove_columns=datasets['train'].column_names,
---> 29 features=features)
30 test_dataset = datasets['test'].map(preprocess_data, batched=True, remove_columns=datasets['test'].column_names,
31 features=features)
13 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1701 new_fingerprint=new_fingerprint,
1702 disable_tqdm=disable_tqdm,
-> 1703 desc=desc,
1704 )
1705 else:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
183 }
184 # apply actual function
--> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
187 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
396 # Call actual function
397
--> 398 out = func(self, *args, **kwargs)
399
400 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2063 writer.write_table(batch)
2064 else:
-> 2065 writer.write_batch(batch)
2066 if update_data and writer is not None:
2067 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
409 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
410 typed_sequence_examples[col] = typed_sequence
--> 411 pa_table = pa.Table.from_pydict(typed_sequence_examples)
412 self.write_table(pa_table, writer_batch_size)
413
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
106 storage = numpy_to_pyarrow_listarray(self.data, type=type.value_type)
107 else:
--> 108 storage = pa.array(self.data, type.storage_dtype)
109 out = pa.ExtensionArray.from_storage(type, storage)
110 elif isinstance(self.data, np.ndarray):
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Can only convert 1-dimensional array values
```
It can be fixed by adding the following line:
```diff
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
+ encoded_inputs["image"] = np.array(encoded_inputs["image"])
return encoded_inputs
```
However, would be great if this can be fixed within Datasets itself. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2987/timeline | null | completed | null | null | false | [
"Hi @NielsRogge, thanks for reporting!\r\n\r\nIn `datasets`, we were handling N-dimensional arrays only when passed as an instance of `np.array`, not when passed as a list of `np.array`s.\r\n\r\nI'm fixing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/1735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1735/comments | https://api.github.com/repos/huggingface/datasets/issues/1735/events | https://github.com/huggingface/datasets/pull/1735 | 785,184,740 | MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw | 1,735 | Update add new dataset template | [] | closed | false | null | 2 | 2021-01-13T15:08:09Z | 2021-01-14T15:16:01Z | 2021-01-14T15:16:00Z | null | This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1735/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"merged_at": "2021-01-14T15:16:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1735"
} | true | [
"Add new \"dataset\"? ;)",
"Lol, too used to Transformers ;-)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5688/comments | https://api.github.com/repos/huggingface/datasets/issues/5688/events | https://github.com/huggingface/datasets/issues/5688 | 1,648,463,504 | I_kwDODunzps5iQY6Q | 5,688 | Wikipedia download_and_prepare for GCS | [] | open | false | null | 2 | 2023-03-30T23:43:22Z | 2023-03-31T13:31:32Z | null | null | ### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the provided script, the memory first gets eaten up, then it crashes.
I tried running this on a VM with 128GB RAM and all I got was two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_
I have been troubleshooting this for two straight days now, but I am just unable to get the dataset into storage.
### Steps to reproduce the bug
Run this and insert a path:
```
import datasets
builder = datasets.load_dataset_builder(
"wikipedia", language="en", date="20230320", beam_runner="DirectRunner")
builder.download_and_prepare({path}, file_format="parquet")
```
This is where the problem of it eating RAM occurs.
I have also tried several versions of this, based on the docs:
```
import gcsfs
import datasets
storage_options = {"project": "tdt4310", "token": "cloud"}
fs = gcsfs.GCSFileSystem(**storage_options)
output_dir = "gcs://wikipediadata/"
builder = datasets.load_dataset_builder(
"wikipedia", date="20230320", language="en", beam_runner="DirectRunner")
builder.download_and_prepare(
output_dir, storage_options=storage_options, file_format="parquet")
```
The error message that is received here is:
> ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
I have run `pip install apache-beam[gcp]`
### Expected behavior
The wikipedia data loaded into GCS
Everything worked when testing with a smaller demo dataset found somewhere in the docs
### Environment info
Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5688/timeline | null | null | null | null | false | [
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processin... |