id int64 1.14B 2.23B | labels_url stringlengths 75 75 | body stringlengths 2 33.9k ⌀ | updated_at stringlengths 20 20 | number int64 3.76k 6.79k | milestone dict | repository_url stringclasses 1 value | draft bool 2 classes | labels listlengths 0 4 | created_at stringlengths 20 20 | comments_url stringlengths 70 70 | assignee dict | timeline_url stringlengths 70 70 | title stringlengths 1 290 | events_url stringlengths 68 68 | active_lock_reason null | user dict | assignees listlengths 0 3 | performed_via_github_app null | state_reason stringclasses 3 values | author_association stringclasses 3 values | closed_at stringlengths 20 20 ⌀ | pull_request dict | node_id stringlengths 18 19 | comments listlengths 0 30 | reactions dict | state stringclasses 2 values | locked bool 1 class | url stringlengths 61 61 | html_url stringlengths 49 51 | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,162,702,044 | https://api.github.com/repos/huggingface/datasets/issues/3861/labels{/name} | Hi! I am interested in working with the big_patent dataset.
In TensorFlow, there are a number of versions of the dataset:
- 1.0.0 : lower cased tokenized words
- 2.0.0 : Update to use cased raw strings
- 2.1.2 (default): Fix update to cased raw strings.
The version in the Hugging Face `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there already a way to load it, or would it be possible to add that version? | 2023-04-21T14:32:03Z | 3,861 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | 2022-03-08T14:08:55Z | https://api.github.com/repos/huggingface/datasets/issues/3861/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3861/timeline | big_patent cased version | https://api.github.com/repos/huggingface/datasets/issues/3861/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4",
"events_url": "https://api.github.com/users/slvcsl/events{/privacy}",
"followers_url": "https://api.github.com/users/slvcsl/followers",
"following_url": "https://api.github.com/users/slvcsl/following{/other_user}",
"gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/slvcsl",
"id": 25265140,
"login": "slvcsl",
"node_id": "MDQ6VXNlcjI1MjY1MTQw",
"organizations_url": "https://api.github.com/users/slvcsl/orgs",
"received_events_url": "https://api.github.com/users/slvcsl/received_events",
"repos_url": "https://api.github.com/users/slvcsl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/slvcsl"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2023-04-21T14:32:03Z | null | I_kwDODunzps5FTWzc | [
"To follow up on this: the cased and uncased versions actually contain different content, and the cased one is easier since it contains a Summary of the Invention in the input.\r\n\r\nSee the paper describing the issue here:\r\nhttps://aclanthology.org/2022.gem-1.34/",
"Thanks for proposing the addition of the ca... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3861/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3861 | https://github.com/huggingface/datasets/issues/3861 | false |
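The version difference described in the issue above (1.0.0 ships lower-cased, tokenized text; the 2.x releases ship cased raw strings) can be illustrated with a toy transform. This is an editorial sketch only: the `to_v1_style` helper is hypothetical and does not reproduce the actual TFDS preprocessing.

```python
import re

def to_v1_style(text: str) -> str:
    # Hypothetical illustration of big_patent v1.0.0-style text
    # (lower-cased, whitespace-separated tokens) versus the cased
    # raw strings shipped in the 2.x releases. Not the real pipeline.
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    return " ".join(tokens)

raw = "A Cased, raw string."
print(to_v1_style(raw))  # → a cased , raw string .
```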
1,162,623,329 | https://api.github.com/repos/huggingface/datasets/issues/3860/labels{/name} | null | 2022-03-08T17:37:13Z | 3,860 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-08T12:55:39Z | https://api.github.com/repos/huggingface/datasets/issues/3860/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3860/timeline | Small doc fixes | https://api.github.com/repos/huggingface/datasets/issues/3860/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | null | null | CONTRIBUTOR | 2022-03-08T17:37:13Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3860",
"merged_at": "2022-03-08T17:37:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3860"
} | PR_kwDODunzps40GpzZ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3860). All of your documentation changes will be reflected on that endpoint.",
"There are still some `.. code-block:: python` (e.g. see [this](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#datase... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3860/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3860 | https://github.com/huggingface/datasets/pull/3860 | true |
1,162,559,333 | https://api.github.com/repos/huggingface/datasets/issues/3859/labels{/name} | ## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code:
```python
ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
```
However, this leads to a FileNotFoundError.
```
FileNotFoundError Traceback (most recent call last)
[<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>()
1 from datasets import load_dataset
----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
8 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1705 ignore_verifications=ignore_verifications,
1706 try_from_hf_gcs=try_from_hf_gcs,
-> 1707 use_auth_token=use_auth_token,
1708 )
1709
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
593 if not downloaded_from_gcs:
594 self._download_and_prepare(
--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
596 )
597 # Sync info
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
659 split_dict = SplitDict(dataset_name=self.name)
660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
662
663 # Checksums verification
[/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
123 split_types = ["train", "val", "test"]
124 extract_paths = dl_manager.extract(
--> 125 {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types}
126 )
127 extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types}
[/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc)
282 download_config.extract_compressed_file = True
283 extracted_paths = map_nested(
--> 284 partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
285 )
286 path_or_paths = NestedDataStructure(path_or_paths)
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args)
194 # Singleton first to spare some computation
195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 196 return function(data_struct)
197
198 # Reduce logging to keep things readable in multiprocessing with tqdm
[/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs)
314 elif is_local_path(url_or_filename):
315 # File, but it doesn't exist.
--> 316 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
317 else:
318 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist
```
I have tried this on a number of machines, including Colab, so I think this is not environment-dependent.
How do I load the bigPatent dataset? | 2022-03-08T13:04:09Z | 3,859 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"descript... | 2022-03-08T11:47:12Z | https://api.github.com/repos/huggingface/datasets/issues/3859/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3859/timeline | Unable to download big_patent (FileNotFoundError) | https://api.github.com/repos/huggingface/datasets/issues/3859/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4",
"events_url": "https://api.github.com/users/slvcsl/events{/privacy}",
"followers_url": "https://api.github.com/users/slvcsl/followers",
"following_url": "https://api.github.com/users/slvcsl/following{/other_user}",
"gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/slvcsl",
"id": 25265140,
"login": "slvcsl",
"node_id": "MDQ6VXNlcjI1MjY1MTQw",
"organizations_url": "https://api.github.com/users/slvcsl/orgs",
"received_events_url": "https://api.github.com/users/slvcsl/received_events",
"repos_url": "https://api.github.com/users/slvcsl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/slvcsl"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2022-03-08T13:04:04Z | null | I_kwDODunzps5FSz9l | [
"Hi @slvcsl, thanks for reporting.\r\n\r\nYesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.\r\nhttps://pypi.org/project/datasets/#history\r\n\r\nPlease, feel free to update `datasets` library to the latest version: \r\n```shell\r\npip install -U datasets\r\n```\... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3859/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3859 | https://github.com/huggingface/datasets/issues/3859 | false |
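Per the maintainer comment in the row above, the fix for this download failure was the `datasets` 1.18.4 patch release. A hedged sketch of a simple version gate one could run before loading; the `needs_upgrade` helper is hypothetical and only handles plain `X.Y.Z` strings (suffixes such as `1.18.4.dev0` are not parsed).

```python
def needs_upgrade(installed: str, minimum: str = "1.18.4") -> bool:
    # Compare plain X.Y.Z version strings numerically; dev/rc suffixes
    # (e.g. "1.18.4.dev0") are not handled by this toy parser.
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) < parse(minimum)

print(needs_upgrade("1.18.3"))  # True: affected by the bug above
print(needs_upgrade("1.18.4"))  # False: contains the fix
```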
1,162,526,688 | https://api.github.com/repos/huggingface/datasets/issues/3858/labels{/name} | null | 2022-03-08T12:57:57Z | 3,858 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-08T11:11:52Z | https://api.github.com/repos/huggingface/datasets/issues/3858/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3858/timeline | Update index.mdx margins | https://api.github.com/repos/huggingface/datasets/issues/3858/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4",
"events_url": "https://api.github.com/users/gary149/events{/privacy}",
"followers_url": "https://api.github.com/users/gary149/followers",
"following_url": "https://api.github.com/users/gary149/following{/other_user}",
"gists_url": "https://api.github.com/users/gary149/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gary149",
"id": 3841370,
"login": "gary149",
"node_id": "MDQ6VXNlcjM4NDEzNzA=",
"organizations_url": "https://api.github.com/users/gary149/orgs",
"received_events_url": "https://api.github.com/users/gary149/received_events",
"repos_url": "https://api.github.com/users/gary149/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gary149/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gary149"
} | [] | null | null | CONTRIBUTOR | 2022-03-08T12:57:56Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3858.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3858",
"merged_at": "2022-03-08T12:57:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3858.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3858"
} | PR_kwDODunzps40GVSq | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3858). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3858/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3858 | https://github.com/huggingface/datasets/pull/3858 | true |
1,162,525,353 | https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name} | ## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)` even the streaming download manager (if I'm not mistaken):
https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483 | 2022-03-14T11:08:22Z | 3,857 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | 2022-03-08T11:10:30Z | https://api.github.com/repos/huggingface/datasets/issues/3857/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3857/timeline | Order of dataset changes due to glob.glob. | https://api.github.com/repos/huggingface/datasets/issues/3857/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps5FSrqp | [
"I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3857 | https://github.com/huggingface/datasets/issues/3857 | false |
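The determinism point in the issue above is easy to demonstrate: `glob.glob` returns files in filesystem order, which varies across operating systems, while wrapping the call in `sorted` pins the order. A self-contained sketch:

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Create files in deliberately non-alphabetical order.
    for name in ("b.txt", "a.txt", "c.txt"):
        open(os.path.join(d, name), "w").close()

    # Bare glob.glob order depends on the OS/filesystem; sorted() fixes it.
    files = sorted(glob.glob(os.path.join(d, "*.txt")))
    names = [os.path.basename(f) for f in files]

print(names)  # ['a.txt', 'b.txt', 'c.txt'] on every platform
```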
1,162,522,034 | https://api.github.com/repos/huggingface/datasets/issues/3856/labels{/name} | This code currently raises an error because of the null image:
```python
import datasets
dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] }
features = datasets.Features({
'name': datasets.Value('string'),
'image': datasets.Image(),
})
dataset = datasets.Dataset.from_dict(dataset_dict, features)
dataset.push_to_hub("username/dataset") # this line produces an error: 'NoneType' object is not subscriptable
```
I fixed this in this PR
TODO:
- [x] add a test | 2022-03-08T15:22:17Z | 3,856 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-08T11:07:09Z | https://api.github.com/repos/huggingface/datasets/issues/3856/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3856/timeline | Fix push_to_hub with null images | https://api.github.com/repos/huggingface/datasets/issues/3856/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-08T15:22:16Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3856.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3856",
"merged_at": "2022-03-08T15:22:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3856.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3856"
} | PR_kwDODunzps40GUSf | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3856). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3856/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3856 | https://github.com/huggingface/datasets/pull/3856 | true |
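The `'NoneType' object is not subscriptable` error in the PR description above comes from indexing into a null image value during encoding. A minimal, pure-Python sketch of the null-safe pattern the fix applies; `encode_image` is a hypothetical stand-in, not the real `datasets` internals.

```python
from typing import Optional

def encode_image(path: Optional[str]) -> Optional[dict]:
    # Propagate None instead of subscripting it; this is the essence of
    # the null-safety fix. A real encoder would also read image bytes.
    if path is None:
        return None
    return {"path": path, "bytes": None}

column = ["cat.jpg", None]
encoded = [encode_image(p) for p in column]
print(encoded)  # [{'path': 'cat.jpg', 'bytes': None}, None]
```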
1,162,448,589 | https://api.github.com/repos/huggingface/datasets/issues/3855/labels{/name} | ## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from datasets import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command then fails with:
```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```
**even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org.
We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.
## Steps to reproduce the bug
E.g. execute the following code to see the different error messages between `transformers` and `datasets`.
1. Transformers
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```
The error message is clearer here - it gives:
```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
Let's maybe do the same for datasets? The PR was introduced to `transformers` here:
https://github.com/huggingface/transformers/pull/15261
## Expected results
Better error message
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
| 2022-07-11T15:06:40Z | 3,855 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-08T09:55:17Z | https://api.github.com/repos/huggingface/datasets/issues/3855/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3855/timeline | Bad error message when loading private dataset | https://api.github.com/repos/huggingface/datasets/issues/3855/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | CONTRIBUTOR | 2022-07-11T15:06:40Z | null | I_kwDODunzps5FSY7N | [
"We raise the error “ FileNotFoundError: can’t find the dataset” mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)\r\n\r\nWe can indeed reformulate this and add the \"If this is a private repository,...\" part !",
"Resolved via https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3855/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3855 | https://github.com/huggingface/datasets/issues/3855 | false |
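A sketch of the clearer message the issue above requests, modeled on the `transformers` wording quoted in the report. The `missing_dataset_message` helper is hypothetical, not the actual `datasets` implementation (which was later improved, per the closing comment), and it deliberately does not reveal whether a private repo exists.

```python
def missing_dataset_message(repo_id: str) -> str:
    # Mirrors the transformers-style error quoted in the issue: same text
    # whether the repo is missing or private, plus an auth hint.
    return (
        f"{repo_id} is not a local folder and is not a valid dataset "
        "identifier listed on 'https://huggingface.co/datasets'\n"
        "If this is a private repository, make sure to pass a token having "
        "permission to this repo with `use_auth_token` or log in with "
        "`huggingface-cli login` and pass `use_auth_token=True`."
    )

print(missing_dataset_message("NewT5/dummy_data"))
```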
1,162,434,199 | https://api.github.com/repos/huggingface/datasets/issues/3854/labels{/name} | training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this?
**Typical Voice Accent Proportions:**
- 24% United States English
- 8% England English
- 5% India and South Asia (India, Pakistan, Sri Lanka)
- 3% Australian English
- 3% Canadian English
- 2% Scottish English
- 1% Irish English
- 1% Southern African (South Africa, Zimbabwe, Namibia)
- 1% New Zealand English
Can we replicate this for Age as well?
**Age proportions of the common voice:-**
- 24% 19 - 29
- 14% 30 - 39
- 10% 40 - 49
- 6% < 19
- 4% 50 - 59
- 4% 60 - 69
- 1% 70 – 79 | 2024-03-23T12:40:58Z | 3,854 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | 2022-03-08T09:40:52Z | https://api.github.com/repos/huggingface/datasets/issues/3854/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3854/timeline | load only England English dataset from common voice english dataset | https://api.github.com/repos/huggingface/datasets/issues/3854/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/36677001?v=4",
"events_url": "https://api.github.com/users/amanjaiswal777/events{/privacy}",
"followers_url": "https://api.github.com/users/amanjaiswal777/followers",
"following_url": "https://api.github.com/users/amanjaiswal777/following{/other_user}",
"gists_url": "https://api.github.com/users/amanjaiswal777/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amanjaiswal777",
"id": 36677001,
"login": "amanjaiswal777",
"node_id": "MDQ6VXNlcjM2Njc3MDAx",
"organizations_url": "https://api.github.com/users/amanjaiswal777/orgs",
"received_events_url": "https://api.github.com/users/amanjaiswal777/received_events",
"repos_url": "https://api.github.com/users/amanjaiswal777/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amanjaiswal777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanjaiswal777/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amanjaiswal777"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2022-03-09T08:13:33Z | null | I_kwDODunzps5FSVaX | [
"Hi @amanjaiswal777,\r\n\r\nFirst note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.\r\n\r\nCurrently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation\r... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3854/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3854 | https://github.com/huggingface/datasets/issues/3854 | false |
1,162,386,592 | https://api.github.com/repos/huggingface/datasets/issues/3853/labels{/name} | # Introduction of the dataset
OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task;
it includes v4 train/dev and v9 test data for English/Chinese/Arabic, plus the corrected v12 train/dev/test data (English only).
This dataset is widely used for named entity recognition, coreference resolution, and semantic role labeling.
In the dataset loading script, I modify and reuse the code of [AllenNLP/Ontonotes](https://docs.allennlp.org/models/main/models/common/ontonotes/#ontonotes) to read the special conll files without adding an extra package dependency.
# Some workarounds I did
1. task ids
I added tasks that I couldn't find anywhere (`semantic-role-labeling`, `lemmatization`, and `word-sense-disambiguation`) to the task category `structure-prediction`, because they are related to syntax. There may be a better name for this task category, since some of the tasks mentioned aren't really about structure, but I have no good idea.
2. `dl_manager.extract`
Since we get another zip after unzipping the downloaded zip data, I have to call `dl_manager.extract` directly inside `_generate_examples`. But when testing dummy data, `dl_manager.extract` does nothing, so I added a conditional that manually extracts the data when testing dummy data.
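The nested-zip situation described above can be sketched with the standard library alone — an outer archive containing an inner zip whose members are read from inside the generator. All file names here are illustrative, and the real loader calls `dl_manager.extract` on the inner archive path instead of unpacking it by hand:

```python
import io
import zipfile

# Build a zip-inside-a-zip in memory, mirroring the download layout
# described above (file names are made up for illustration).
outer = io.BytesIO()
with zipfile.ZipFile(outer, "w") as z:
    inner = io.BytesIO()
    with zipfile.ZipFile(inner, "w") as zi:
        zi.writestr("data.conll", "token\tPOS\n")
    z.writestr("inner.zip", inner.getvalue())

# Reading: pull the inner archive out of the outer one, then read the
# conll file from the inner archive -- the step that _generate_examples
# needs a second extraction for.
outer.seek(0)
with zipfile.ZipFile(outer) as z:
    inner_bytes = z.read("inner.zip")
with zipfile.ZipFile(io.BytesIO(inner_bytes)) as zi:
    text = zi.read("data.conll").decode()
print(text)
```

Dummy-data tests bypass real downloads, which is why the manual-extraction branch in the loader is needed there.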
# Help
Don't know how to fix the building doc error. | 2022-03-15T10:48:02Z | 3,853 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-08T08:53:42Z | https://api.github.com/repos/huggingface/datasets/issues/3853/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3853/timeline | add ontonotes_conll dataset | https://api.github.com/repos/huggingface/datasets/issues/3853/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | null | null | CONTRIBUTOR | 2022-03-15T10:48:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3853",
"merged_at": "2022-03-15T10:48:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3853"
} | PR_kwDODunzps40F3uN | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3853). All of your documentation changes will be reflected on that endpoint.",
"The CI fail is unrelated to this dataset, merging :)"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3853/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3853 | https://github.com/huggingface/datasets/pull/3853 | true |
1,162,252,337 | https://api.github.com/repos/huggingface/datasets/issues/3852/labels{/name} | > Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation.
The "add a dataset" link gives a 404 error, and the share_dataset link has changed. This information feels redundant/deprecated now that we have a more detailed "How to add a dataset?" guide. | 2022-03-08T16:54:36Z | 3,852 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-08T05:57:05Z | https://api.github.com/repos/huggingface/datasets/issues/3852/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3852/timeline | Redundant add dataset information and dead link. | https://api.github.com/repos/huggingface/datasets/issues/3852/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | [] | null | null | CONTRIBUTOR | 2022-03-08T16:54:36Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3852",
"merged_at": "2022-03-08T16:54:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3852"
} | PR_kwDODunzps40Fb26 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3852). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3852/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3852 | https://github.com/huggingface/datasets/pull/3852 | true |
1,162,137,998 | https://api.github.com/repos/huggingface/datasets/issues/3851/labels{/name} | ## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
the following error occurs:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
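The bottom of the traceback points at the root cause: `librosa.load` hands the open file object to `os.path.realpath`, which (via `os.fspath`) only accepts str/bytes/path-like values. A minimal reproduction of just that failure, independent of librosa and of any audio file:

```python
import os
import tempfile

# os.path.realpath() calls os.fspath(), which rejects open file
# objects -- exactly what librosa (via audioread) receives in the
# traceback above.
raised = False
msg = ""
with tempfile.TemporaryFile() as f:
    try:
        os.path.realpath(f)
    except TypeError as e:
        raised = True
        msg = str(e)

print(raised, msg)
```

So any code path that passes a file-like object (rather than a filesystem path) into this `librosa.load` call will fail the same way.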
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
``` | 2022-09-27T12:13:55Z | 3,851 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-08T02:16:04Z | https://api.github.com/repos/huggingface/datasets/issues/3851/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3851/timeline | Load audio dataset error | https://api.github.com/repos/huggingface/datasets/issues/3851/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/31890987?v=4",
"events_url": "https://api.github.com/users/lemoner20/events{/privacy}",
"followers_url": "https://api.github.com/users/lemoner20/followers",
"following_url": "https://api.github.com/users/lemoner20/following{/other_user}",
"gists_url": "https://api.github.com/users/lemoner20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lemoner20",
"id": 31890987,
"login": "lemoner20",
"node_id": "MDQ6VXNlcjMxODkwOTg3",
"organizations_url": "https://api.github.com/users/lemoner20/orgs",
"received_events_url": "https://api.github.com/users/lemoner20/received_events",
"repos_url": "https://api.github.com/users/lemoner20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lemoner20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lemoner20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lemoner20"
} | [] | null | completed | NONE | 2022-03-08T11:20:06Z | null | I_kwDODunzps5FRNGO | [
"Hi @lemoner20, thanks for reporting.\r\n\r\nI'm sorry but I cannot reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset, load_metric, Audio\r\n ...: raw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\")\r\n ...: print(raw_datasets[0][\"audio\"])\r\nDownloading builder sc... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3851/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3851 | https://github.com/huggingface/datasets/issues/3851 | false |
1,162,126,030 | https://api.github.com/repos/huggingface/datasets/issues/3850/labels{/name} | In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible. | 2022-12-16T05:34:07Z | 3,850 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-08T01:53:25Z | https://api.github.com/repos/huggingface/datasets/issues/3850/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3850/timeline | [feat] Add tqdm arguments | https://api.github.com/repos/huggingface/datasets/issues/3850/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4",
"events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}",
"followers_url": "https://api.github.com/users/penguinwang96825/followers",
"following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}",
"gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/penguinwang96825",
"id": 28087825,
"login": "penguinwang96825",
"node_id": "MDQ6VXNlcjI4MDg3ODI1",
"organizations_url": "https://api.github.com/users/penguinwang96825/orgs",
"received_events_url": "https://api.github.com/users/penguinwang96825/received_events",
"repos_url": "https://api.github.com/users/penguinwang96825/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions",
"type": "User",
"url": "https://api.github.com/users/penguinwang96825"
} | [] | null | null | NONE | 2022-12-16T05:34:07Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3850",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3850"
} | PR_kwDODunzps40FBx9 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3850/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3850 | https://github.com/huggingface/datasets/pull/3850 | true |
1,162,091,075 | https://api.github.com/repos/huggingface/datasets/issues/3849/labels{/name} | Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/
```python
>>> import datasets
>>> datasets.load_dataset('adv_glue')
Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0ebe815d0cdba5cb4 (last modified on Mon Mar 7 19:19:48 2022) since it couldn't be found locally at adv_glue., or remotely on the Hugging Face Hub.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jxm3/random/datasets/src/datasets/load.py", line 1657, in load_dataset
builder_instance = load_dataset_builder(
File "/home/jxm3/random/datasets/src/datasets/load.py", line 1510, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 1021, in __init__
super().__init__(*args, **kwargs)
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 258, in __init__
self.config, self.config_id = self._create_builder_config(
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 337, in _create_builder_config
raise ValueError(
ValueError: Config name is missing.
Please pick one among the available configs: ['adv_sst2', 'adv_qqp', 'adv_mnli', 'adv_mnli_mismatched', 'adv_qnli', 'adv_rte']
Example of usage:
`load_dataset('adv_glue', 'adv_sst2')`
>>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0]
Reusing dataset adv_glue (/home/jxm3/.cache/huggingface/datasets/adv_glue/adv_sst2/1.0.0/3719a903f606f2c96654d87b421bc01114c37084057cdccae65cd7bc24b10933)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 604.11it/s]
{'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0}
``` | 2022-03-28T11:17:14Z | 3,849 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-08T00:47:11Z | https://api.github.com/repos/huggingface/datasets/issues/3849/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3849/timeline | Add "Adversarial GLUE" dataset to datasets library | https://api.github.com/repos/huggingface/datasets/issues/3849/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | null | null | CONTRIBUTOR | 2022-03-28T11:12:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3849.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3849",
"merged_at": "2022-03-28T11:12:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3849.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3849"
} | PR_kwDODunzps40E6sW | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq can you review when you have some time?",
"Hi @lhoestq -- thanks so much for your review! I just added the stuff you requested to the README.md, including an example from the dataset, the table of contents, and lots of sec... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3849/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3849 | https://github.com/huggingface/datasets/pull/3849 | true |
1,162,076,902 | https://api.github.com/repos/huggingface/datasets/issues/3848/labels{/name} | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c6425dd74e5b55f2f325c9', 'num_bytes': 40662}}
verification_name = 'dataset source files'
def verify_checksums(expected_checksums: Optional[dict], recorded_checksums: dict, verification_name=None):
if expected_checksums is None:
logger.info("Unable to verify checksums.")
return
if len(set(expected_checksums) - set(recorded_checksums)) > 0:
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
if len(set(recorded_checksums) - set(expected_checksums)) > 0:
raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]
for_verification_name = " for " + verification_name if verification_name is not None else ""
if len(bad_urls) > 0:
error_msg = "Checksums didn't match" + for_verification_name + ":\n"
> raise NonMatchingChecksumError(error_msg + str(bad_urls))
E datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
E ['https://adversarialglue.github.io/dataset/dev.zip']
src/datasets/utils/info_utils.py:40: NonMatchingChecksumError
```
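For context, the failing comparison boils down to a recorded `None` checksum (from a `dataset_infos.json` generated without verification) being compared against a freshly computed sha256. One lenient variant of the check — purely illustrative, not the library's actual code — would skip entries whose expected checksum is `None`:

```python
def lenient_verify(expected: dict, recorded: dict) -> list:
    """Return only URLs whose checksums genuinely mismatch.

    Entries with an expected checksum of None carry no information,
    so they are skipped instead of being treated as a mismatch.
    """
    bad = []
    for url, exp in expected.items():
        rec = recorded.get(url)
        if exp["checksum"] is None or rec is None:
            continue  # nothing reliable to compare against
        if exp["checksum"] != rec["checksum"]:
            bad.append(url)
    return bad

expected = {"https://example.com/dev.zip": {"checksum": None, "num_bytes": 40662}}
recorded = {"https://example.com/dev.zip": {"checksum": "efb4cbd3", "num_bytes": 40662}}
print(lenient_verify(expected, recorded))  # []
```

Whether skipping or failing on a `None` expected checksum is the right behavior is a design question for the library, not something this sketch settles.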
## Expected results
The dataset downloads correctly, and there is no error.
## Actual results
Datasets library is looking for a checksum of None, and it gets a non-None checksum, and throws an error. This is clearly a bug. | 2022-03-15T14:37:26Z | 3,848 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-08T00:24:12Z | https://api.github.com/repos/huggingface/datasets/issues/3848/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3848/timeline | NonMatchingChecksumError when checksum is None | https://api.github.com/repos/huggingface/datasets/issues/3848/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | CONTRIBUTOR | 2022-03-15T12:28:23Z | null | I_kwDODunzps5FQ-Lm | [
"Hi @jxmorris12, thanks for reporting.\r\n\r\nThe objective of `verify_checksums` is to check that both checksums are equal. Therefore if one is None and the other is non-None, they are not equal, and the function accordingly raises a NonMatchingChecksumError. That behavior is expected.\r\n\r\nThe question is: how ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3848/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3848 | https://github.com/huggingface/datasets/issues/3848 | false |
1,161,856,417 | https://api.github.com/repos/huggingface/datasets/issues/3847/labels{/name} | ## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache is not fully reused in the first few runs, even though the corresponding `.arrow` cache files are present in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
text_column_name = "text"
column_names = raw_datasets["train"].column_names
def tokenize_function(examples):
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
remove_columns=column_names,
load_from_cache_file=True,
desc="Running tokenizer on every text in dataset",
)
```
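Background on why runs can differ: `map` keys its cache on a fingerprint hashed from the transform function and the state it closes over (here, the tokenizer), so any nondeterminism or mutation in that state yields a different fingerprint and a cache miss. A toy stdlib sketch of the idea — the real mechanism lives in `datasets/fingerprint.py` and uses a `dill`-based pickler, so this is only an illustration:

```python
import hashlib
import pickle

def fingerprint(obj) -> str:
    """Toy stand-in for datasets' fingerprinting: a stable hash of the
    pickled bytes of whatever state feeds the transform."""
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()[:16]

# Identical state across runs -> identical fingerprint -> cache hit.
state_run1 = {"vocab_size": 50265, "lowercase": False}
state_run2 = {"vocab_size": 50265, "lowercase": False}

# State that mutated between runs (e.g. an internal tokenizer cache
# that filled up) -> different fingerprint -> cache miss.
mutated = {"vocab_size": 50265, "lowercase": False, "cache": {"hi": [1]}}

print(fingerprint(state_run1) == fingerprint(state_run2))  # True
print(fingerprint(state_run1) == fingerprint(mutated))     # False
```

This would explain the observed pattern: once the mutable state stops changing between runs, the fingerprints stabilize and every split is served from cache.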
## Expected results
No tokenization would be required after the 1st run. Everything should be loaded from the cache.
## Actual results
Tokenization for some subsets is repeated on the 2nd and 3rd runs. From the 4th run onward, everything is loaded from the cache.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Ubuntu 18.04.6 LTS
- Python version: 3.6.9
- PyArrow version: 6.0.1
| 2023-11-20T18:14:37Z | 3,847 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-07T19:55:15Z | https://api.github.com/repos/huggingface/datasets/issues/3847/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3847/timeline | Datasets' cache not re-used | https://api.github.com/repos/huggingface/datasets/issues/3847/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/15106980?v=4",
"events_url": "https://api.github.com/users/gejinchen/events{/privacy}",
"followers_url": "https://api.github.com/users/gejinchen/followers",
"following_url": "https://api.github.com/users/gejinchen/following{/other_user}",
"gists_url": "https://api.github.com/users/gejinchen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gejinchen",
"id": 15106980,
"login": "gejinchen",
"node_id": "MDQ6VXNlcjE1MTA2OTgw",
"organizations_url": "https://api.github.com/users/gejinchen/orgs",
"received_events_url": "https://api.github.com/users/gejinchen/received_events",
"repos_url": "https://api.github.com/users/gejinchen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gejinchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gejinchen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gejinchen"
} | [] | null | reopened | NONE | null | null | I_kwDODunzps5FQIWh | [
"<s>I think this is because the tokenizer is stateful and because the order in which the splits are processed is not deterministic. Because of that, the hash of the tokenizer may change for certain splits, which causes issues with caching.\r\n\r\nTo fix this we can try making the order of the splits deterministic f... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3847/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3847 | https://github.com/huggingface/datasets/issues/3847 | false |
1,161,810,226 | https://api.github.com/repos/huggingface/datasets/issues/3846/labels{/name} | Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset` | 2022-03-07T19:21:23Z | 3,846 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T19:06:59Z | https://api.github.com/repos/huggingface/datasets/issues/3846/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3846/timeline | Update faiss device docstring | https://api.github.com/repos/huggingface/datasets/issues/3846/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-07T19:21:22Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3846",
"merged_at": "2022-03-07T19:21:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3846"
} | PR_kwDODunzps40D-uh | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3846). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3846/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3846 | https://github.com/huggingface/datasets/pull/3846 | true |
1,161,739,483 | https://api.github.com/repos/huggingface/datasets/issues/3845/labels{/name} | This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API.
Both implementations are based on scikit-learn.
Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
Please suggest any changes if required. Thank you. | 2022-03-09T16:50:03Z | 3,845 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T17:53:24Z | https://api.github.com/repos/huggingface/datasets/issues/3845/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3845/timeline | add RMSE and MAE metrics. | https://api.github.com/repos/huggingface/datasets/issues/3845/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | [] | null | null | CONTRIBUTOR | 2022-03-09T16:50:03Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3845",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3845"
} | PR_kwDODunzps40DvqX | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3845). All of your documentation changes will be reflected on that endpoint.",
"@mariosasko I've reopened it here. Please suggest any changes if required. Thank you.",
"Thanks for suggestions. :) I have added update the KWARG... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3845/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3845 | https://github.com/huggingface/datasets/pull/3845 | true |
1,161,686,754 | https://api.github.com/repos/huggingface/datasets/issues/3844/labels{/name} | This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API.
Both implementations are based on scikit-learn.
Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
Any suggestions or required changes would be helpful.
| 2022-03-07T17:24:32Z | 3,844 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T17:06:38Z | https://api.github.com/repos/huggingface/datasets/issues/3844/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3844/timeline | Add rmse and mae metrics. | https://api.github.com/repos/huggingface/datasets/issues/3844/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | [] | null | null | CONTRIBUTOR | 2022-03-07T17:15:06Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3844.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3844",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3844.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3844"
} | PR_kwDODunzps40DkYL | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3844). All of your documentation changes will be reflected on that endpoint.",
"@dnaveenr This PR is in pretty good shape, so feel free to reopen it."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3844/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3844 | https://github.com/huggingface/datasets/pull/3844 | true |
1,161,397,812 | https://api.github.com/repos/huggingface/datasets/issues/3843/labels{/name} | The streaming version of https://github.com/huggingface/datasets/pull/3787.
Fix #3835
CC: @albertvillanova | 2022-03-15T12:30:25Z | 3,843 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T13:09:19Z | https://api.github.com/repos/huggingface/datasets/issues/3843/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3843/timeline | Fix Google Drive URL to avoid Virus scan warning in streaming mode | https://api.github.com/repos/huggingface/datasets/issues/3843/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-03-15T12:30:23Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3843.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3843",
"merged_at": "2022-03-15T12:30:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3843.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3843"
} | PR_kwDODunzps40Cm0D | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3843). All of your documentation changes will be reflected on that endpoint.",
"Cool ! Looks like it breaks `test_streaming_gg_drive_gzipped` for some reason..."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3843/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3843 | https://github.com/huggingface/datasets/pull/3843 | true |
1,161,336,483 | https://api.github.com/repos/huggingface/datasets/issues/3842/labels{/name} | From #3444 , Dataset.shuffle can have the same API than IterableDataset.shuffle (i.e. in streaming mode).
Currently you can pass an optional seed to both if you want, BUT currently IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe 1000) instead.
In this PR, I set the default `buffer_size` value to 1,000, and I reorder the `IterableDataset.shuffle` arguments to match `Dataset.shuffle`, i.e. making `seed` the first argument. | 2022-03-07T19:03:43Z | 3,842 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T12:10:46Z | https://api.github.com/repos/huggingface/datasets/issues/3842/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3842/timeline | Align IterableDataset.shuffle with Dataset.shuffle | https://api.github.com/repos/huggingface/datasets/issues/3842/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-07T19:03:42Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3842.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3842",
"merged_at": "2022-03-07T19:03:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3842.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3842"
} | PR_kwDODunzps40CZvE | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3842). All of your documentation changes will be reflected on that endpoint.",
"We should also add `generator` as a param to `shuffle` to fully align the APIs, no?",
"I added the `generator` argument.\r\n\r\nI had to make a f... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3842/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3842 | https://github.com/huggingface/datasets/pull/3842 | true |
1,161,203,842 | https://api.github.com/repos/huggingface/datasets/issues/3841/labels{/name} | ## Describe the bug
Pyright complains that the name is not exported from the module.
## Steps to reproduce the bug
Use an editor/IDE with Pyright Language server with default configuration:
```python
from datasets import load_dataset
```
## Expected results
No complain from Pyright
## Actual results
Pyright complain below:
```
`load_dataset` is not exported from module "datasets"
Import from "datasets.load" instead [reportPrivateImportUsage]
```
Importing from `datasets.load` does indeed solve the problem, but I believe importing directly from the top-level `datasets` is the intended usage per the documentation.
## Environment info
- `datasets` version: 1.18.3
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0
| 2023-02-18T19:14:03Z | 3,841 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-07T10:24:04Z | https://api.github.com/repos/huggingface/datasets/issues/3841/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3841/timeline | Pyright reportPrivateImportUsage when `from datasets import load_dataset` | https://api.github.com/repos/huggingface/datasets/issues/3841/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4",
"events_url": "https://api.github.com/users/lkhphuc/events{/privacy}",
"followers_url": "https://api.github.com/users/lkhphuc/followers",
"following_url": "https://api.github.com/users/lkhphuc/following{/other_user}",
"gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lkhphuc",
"id": 12573521,
"login": "lkhphuc",
"node_id": "MDQ6VXNlcjEyNTczNTIx",
"organizations_url": "https://api.github.com/users/lkhphuc/orgs",
"received_events_url": "https://api.github.com/users/lkhphuc/received_events",
"repos_url": "https://api.github.com/users/lkhphuc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lkhphuc"
} | [] | null | completed | CONTRIBUTOR | 2023-02-13T13:48:41Z | null | I_kwDODunzps5FNpCC | [
"Hi! \r\n\r\nThis issue stems from `datasets` having `py.typed` defined (see https://github.com/microsoft/pyright/discussions/3764#discussioncomment-3282142) - to avoid it, we would either have to remove `py.typed` (added to be compliant with PEP-561) or export the names with `__all__`/`from .submodule import name ... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3841/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3841 | https://github.com/huggingface/datasets/issues/3841 | false |
1,161,183,773 | https://api.github.com/repos/huggingface/datasets/issues/3840/labels{/name} | Temporarily fix CI for Windows by pinning `responses`.
See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355
Fix: #3839 | 2022-03-07T10:12:36Z | 3,840 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T10:06:53Z | https://api.github.com/repos/huggingface/datasets/issues/3840/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3840/timeline | Pin responses to fix CI for Windows | https://api.github.com/repos/huggingface/datasets/issues/3840/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-03-07T10:07:24Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3840.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3840",
"merged_at": "2022-03-07T10:07:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3840.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3840"
} | PR_kwDODunzps40B8eu | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3840). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3840/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3840 | https://github.com/huggingface/datasets/pull/3840 | true |
1,161,183,482 | https://api.github.com/repos/huggingface/datasets/issues/3839/labels{/name} | ## Describe the bug
See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355
```
___________________ test_datasetdict_from_text_split[test] ____________________
[gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe
split = 'test'
text_path = 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pytest-of-circleci\\pytest-0\\popen-gw0\\data6\\dataset.txt'
tmp_path = WindowsPath('C:/Users/circleci/AppData/Local/Temp/pytest-of-circleci/pytest-0/popen-gw0/test_datasetdict_from_text_spl7')
@pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"])
def test_datasetdict_from_text_split(split, text_path, tmp_path):
if split:
path = {split: text_path}
else:
split = "train"
path = {"train": text_path, "test": text_path}
cache_dir = tmp_path / "cache"
expected_features = {"text": "string"}
> dataset = TextDatasetReader(path, cache_dir=cache_dir).read()
tests\io\test_text.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\io\text.py:43: in read
use_auth_token=use_auth_token,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:588: in download_and_prepare
self._download_prepared_from_hf_gcs(dl_manager.download_config)
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:630: in _download_prepared_from_hf_gcs
reader.download_from_hf_gcs(download_config, relative_data_dir)
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\arrow_reader.py:260: in download_from_hf_gcs
downloaded_dataset_info = cached_path(remote_dataset_info.replace(os.sep, "/"))
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:301: in cached_path
download_desc=download_config.download_desc,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:560: in get_from_cache
headers=headers,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:476: in http_head
max_retries=max_retries,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:397: in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
C:\tools\miniconda3\envs\py37\lib\site-packages\requests\api.py:61: in request
return session.request(method=method, url=url, **kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:529: in request
resp = self.send(prep, **send_kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:645: in send
r = adapter.send(request, **kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:840: in unbound_on_send
return self._on_request(adapter, request, *a, **kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:780: in _on_request
match, match_failed_reasons = self._find_match(request)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <responses.RequestsMock object at 0x000002048AD70588>
request = <PreparedRequest [HEAD]>
def _find_first_match(self, request):
match_failed_reasons = []
> for i, match in enumerate(self._matches):
E AttributeError: 'RequestsMock' object has no attribute '_matches'
C:\tools\miniconda3\envs\py37\lib\site-packages\moto\core\models.py:289: AttributeError
```
| 2022-05-20T14:13:43Z | 3,839 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-07T10:06:42Z | https://api.github.com/repos/huggingface/datasets/issues/3839/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3839/timeline | CI is broken for Windows | https://api.github.com/repos/huggingface/datasets/issues/3839/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2022-03-07T10:07:24Z | null | I_kwDODunzps5FNkD6 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3839/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3839 | https://github.com/huggingface/datasets/issues/3839 | false |
1,161,137,406 | https://api.github.com/repos/huggingface/datasets/issues/3838/labels{/name} | It might be a mix of Image and ClassLabel, and the color palette might be generated automatically.
---
### Example
every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (eg https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class with a color.
So we might want to render the image as a colored image instead of a black and white one.
<img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png">
See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for reference in Tensorflow | 2022-04-10T13:34:59Z | 3,838 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2022-03-07T09:38:15Z | https://api.github.com/repos/huggingface/datasets/issues/3838/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | https://api.github.com/repos/huggingface/datasets/issues/3838/timeline | Add a data type for labeled images (image segmentation) | https://api.github.com/repos/huggingface/datasets/issues/3838/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | null | CONTRIBUTOR | null | null | I_kwDODunzps5FNYz- | [] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3838/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3838 | https://github.com/huggingface/datasets/issues/3838 | false |
1,161,109,031 | https://api.github.com/repos/huggingface/datasets/issues/3837/labels{/name} | null | 2022-03-07T11:07:35Z | 3,837 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T09:13:29Z | https://api.github.com/repos/huggingface/datasets/issues/3837/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3837/timeline | Release: 1.18.4 | https://api.github.com/repos/huggingface/datasets/issues/3837/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-03-07T11:07:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3837.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3837",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3837.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3837"
} | PR_kwDODunzps40BwE1 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3837/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3837 | https://github.com/huggingface/datasets/pull/3837 | true |
1,161,072,531 | https://api.github.com/repos/huggingface/datasets/issues/3836/labels{/name} | <img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
| 2022-03-07T20:21:11Z | 3,836 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-07T08:38:34Z | https://api.github.com/repos/huggingface/datasets/issues/3836/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3836/timeline | Logo float left | https://api.github.com/repos/huggingface/datasets/issues/3836/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | null | null | CONTRIBUTOR | 2022-03-07T09:14:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3836.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3836",
"merged_at": "2022-03-07T09:14:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3836.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3836"
} | PR_kwDODunzps40Bobr | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3836). All of your documentation changes will be reflected on that endpoint.",
"Weird, the logo doesn't seem to be floating on my side (using Chrome) at https://huggingface.co/docs/datasets/master/en/index",
"https://huggingf... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3836/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3836 | https://github.com/huggingface/datasets/pull/3836 | true |
1,161,029,205 | https://api.github.com/repos/huggingface/datasets/issues/3835/labels{/name} | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 2022-03-15T12:30:23Z | 3,835 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-07T07:56:42Z | https://api.github.com/repos/huggingface/datasets/issues/3835/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3835/timeline | The link given on the gigaword does not work | https://api.github.com/repos/huggingface/datasets/issues/3835/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26357784?v=4",
"events_url": "https://api.github.com/users/martin6336/events{/privacy}",
"followers_url": "https://api.github.com/users/martin6336/followers",
"following_url": "https://api.github.com/users/martin6336/following{/other_user}",
"gists_url": "https://api.github.com/users/martin6336/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/martin6336",
"id": 26357784,
"login": "martin6336",
"node_id": "MDQ6VXNlcjI2MzU3Nzg0",
"organizations_url": "https://api.github.com/users/martin6336/orgs",
"received_events_url": "https://api.github.com/users/martin6336/received_events",
"repos_url": "https://api.github.com/users/martin6336/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/martin6336/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martin6336/subscriptions",
"type": "User",
"url": "https://api.github.com/users/martin6336"
} | [] | null | completed | NONE | 2022-03-15T12:30:23Z | null | I_kwDODunzps5FM-ZV | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3835/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3835 | https://github.com/huggingface/datasets/issues/3835 | false |
1,160,657,937 | https://api.github.com/repos/huggingface/datasets/issues/3834/labels{/name} | Previous link gives 404 error. Updated with a new dataset scripts creation link. | 2022-03-07T12:12:07Z | 3,834 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-06T16:45:48Z | https://api.github.com/repos/huggingface/datasets/issues/3834/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3834/timeline | Fix dead dataset scripts creation link. | https://api.github.com/repos/huggingface/datasets/issues/3834/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | [] | null | null | CONTRIBUTOR | 2022-03-07T12:12:07Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3834",
"merged_at": "2022-03-07T12:12:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3834"
} | PR_kwDODunzps40ATVw | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3834/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3834 | https://github.com/huggingface/datasets/pull/3834 | true |
1,160,543,713 | https://api.github.com/repos/huggingface/datasets/issues/3833/labels{/name} | null | 2022-03-07T12:35:33Z | 3,833 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-06T07:49:49Z | https://api.github.com/repos/huggingface/datasets/issues/3833/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3833/timeline | Small typos in How-to-train tutorial. | https://api.github.com/repos/huggingface/datasets/issues/3833/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4",
"events_url": "https://api.github.com/users/lkhphuc/events{/privacy}",
"followers_url": "https://api.github.com/users/lkhphuc/followers",
"following_url": "https://api.github.com/users/lkhphuc/following{/other_user}",
"gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lkhphuc",
"id": 12573521,
"login": "lkhphuc",
"node_id": "MDQ6VXNlcjEyNTczNTIx",
"organizations_url": "https://api.github.com/users/lkhphuc/orgs",
"received_events_url": "https://api.github.com/users/lkhphuc/received_events",
"repos_url": "https://api.github.com/users/lkhphuc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lkhphuc"
} | [] | null | null | CONTRIBUTOR | 2022-03-07T12:13:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3833.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3833",
"merged_at": "2022-03-07T12:13:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3833.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3833"
} | PR_kwDODunzps4z_99t | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3833/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3833 | https://github.com/huggingface/datasets/pull/3833 | true |
1,160,503,446 | https://api.github.com/repos/huggingface/datasets/issues/3832/labels{/name} | Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are some datasets worth integrating into the Hugging Face hub?
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Special thanks to @napoles-uach for his collaboration on identifying the first ones:
- [ ] [SNAP-Stanford OGB Datasets](https://github.com/snap-stanford/ogb).
- [ ] [SNAP-Stanford Pretrained GNNs Chemistry and Biology Datasets](https://github.com/snap-stanford/pretrain-gnns).
- [ ] [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression)
cc @osanseviero
| 2022-03-14T07:45:38Z | 3,832 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "7AFCAA",... | 2022-03-06T03:02:58Z | https://api.github.com/repos/huggingface/datasets/issues/3832/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3832/timeline | Making Hugging Face the place to go for Graph NNs datasets | https://api.github.com/repos/huggingface/datasets/issues/3832/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel"
} | [] | null | null | NONE | null | null | I_kwDODunzps5FK-CW | [
"It will be indeed really great to add support to GNN datasets. Big :+1: for this initiative.",
"@napoles-uach identifies the [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression). \r\n\r\nAdded to the Tasks in the initial issue.",
"Thank... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3832/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3832 | https://github.com/huggingface/datasets/issues/3832 | false |
1,160,501,000 | https://api.github.com/repos/huggingface/datasets/issues/3831/labels{/name} | ## Describe the bug
when converting a dataset to tf_dataset by using to_tf_dataset with shuffle true, the remainder is not converted to one batch
## Steps to reproduce the bug
this is the sample code below
https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing
## Expected results
regardless of shuffle is true or not, 67 rows dataset should be 5 batches when batch size is 16.
## Actual results
4 batches
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| 2022-03-08T15:18:56Z | 3,831 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-06T02:43:50Z | https://api.github.com/repos/huggingface/datasets/issues/3831/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3831/timeline | when using to_tf_dataset with shuffle is true, not all completed batches are made | https://api.github.com/repos/huggingface/datasets/issues/3831/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42107709?v=4",
"events_url": "https://api.github.com/users/greenned/events{/privacy}",
"followers_url": "https://api.github.com/users/greenned/followers",
"following_url": "https://api.github.com/users/greenned/following{/other_user}",
"gists_url": "https://api.github.com/users/greenned/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/greenned",
"id": 42107709,
"login": "greenned",
"node_id": "MDQ6VXNlcjQyMTA3NzA5",
"organizations_url": "https://api.github.com/users/greenned/orgs",
"received_events_url": "https://api.github.com/users/greenned/received_events",
"repos_url": "https://api.github.com/users/greenned/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/greenned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greenned/subscriptions",
"type": "User",
"url": "https://api.github.com/users/greenned"
} | [] | null | completed | NONE | 2022-03-08T15:18:56Z | null | I_kwDODunzps5FK9cI | [
"Maybe @Rocketknight1 can help here",
"Hi @greenned, this is expected behaviour for `to_tf_dataset`. By default, we drop the smaller 'remainder' batch during training (i.e. when `shuffle=True`). If you really want to keep that batch, you can set `drop_remainder=False` when calling `to_tf_dataset()`.",
"@Rocketk... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3831/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3831 | https://github.com/huggingface/datasets/issues/3831 | false |
1,160,181,404 | https://api.github.com/repos/huggingface/datasets/issues/3830/labels{/name} | When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- windows os: FileNotFoundError: [WinError 3] The system cannot find the path specified.: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- google colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
The code is to load dataset:
windows os:
```
from datasets import load_dataset
dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data")
```
google colab:
```
import datasets
train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
``` | 2022-03-07T06:53:41Z | 3,830 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | 2022-03-05T01:43:12Z | https://api.github.com/repos/huggingface/datasets/issues/3830/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3830/timeline | Got error when load cnn_dailymail dataset | https://api.github.com/repos/huggingface/datasets/issues/3830/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/78331051?v=4",
"events_url": "https://api.github.com/users/wgong0510/events{/privacy}",
"followers_url": "https://api.github.com/users/wgong0510/followers",
"following_url": "https://api.github.com/users/wgong0510/following{/other_user}",
"gists_url": "https://api.github.com/users/wgong0510/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wgong0510",
"id": 78331051,
"login": "wgong0510",
"node_id": "MDQ6VXNlcjc4MzMxMDUx",
"organizations_url": "https://api.github.com/users/wgong0510/orgs",
"received_events_url": "https://api.github.com/users/wgong0510/received_events",
"repos_url": "https://api.github.com/users/wgong0510/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wgong0510/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgong0510/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wgong0510"
} | [] | null | completed | NONE | 2022-03-07T06:53:41Z | null | I_kwDODunzps5FJvac | [
"Was able to reproduce the issue on Colab; full logs below. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()\r\n 1... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3830/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3830 | https://github.com/huggingface/datasets/issues/3830 | false |
1,160,154,352 | https://api.github.com/repos/huggingface/datasets/issues/3829/labels{/name} | ## Brief Overview
Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with building deep learning experiments.
## Feature Request
Could we create a performance guide for using `datasets`, similar to:
* [Better performance with the `tf.data` API](https://github.com/huggingface/datasets/issues/3735)
* [Analyze `tf.data` performance with the TF Profiler](https://www.tensorflow.org/guide/data_performance_analysis)
This performance guide should detail practical options for improving performance with `datasets`, and enumerate any common best practices. It should also show how to use tools like the PyTorch Profiler or the TF Profiler to identify any performance bottlenecks (example below).

## Related Issues
* [wiki_dpr pre-processing performance #1670](https://github.com/huggingface/datasets/issues/1670)
* [Adjusting chunk size for streaming datasets #3499](https://github.com/huggingface/datasets/issues/3499)
* [how large datasets are handled under the hood #1004](https://github.com/huggingface/datasets/issues/1004)
* [using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? #1830](https://github.com/huggingface/datasets/issues/1830)
* [Best way to batch a large dataset? #315](https://github.com/huggingface/datasets/issues/315)
* [Saving processed dataset running infinitely #1911](https://github.com/huggingface/datasets/issues/1911) | 2022-03-10T16:24:27Z | 3,829 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2022-03-05T00:28:06Z | https://api.github.com/repos/huggingface/datasets/issues/3829/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3829/timeline | [📄 Docs] Create a `datasets` performance guide. | https://api.github.com/repos/huggingface/datasets/issues/3829/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3712347?v=4",
"events_url": "https://api.github.com/users/dynamicwebpaige/events{/privacy}",
"followers_url": "https://api.github.com/users/dynamicwebpaige/followers",
"following_url": "https://api.github.com/users/dynamicwebpaige/following{/other_user}",
"gists_url": "https://api.github.com/users/dynamicwebpaige/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dynamicwebpaige",
"id": 3712347,
"login": "dynamicwebpaige",
"node_id": "MDQ6VXNlcjM3MTIzNDc=",
"organizations_url": "https://api.github.com/users/dynamicwebpaige/orgs",
"received_events_url": "https://api.github.com/users/dynamicwebpaige/received_events",
"repos_url": "https://api.github.com/users/dynamicwebpaige/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dynamicwebpaige/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dynamicwebpaige/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dynamicwebpaige"
} | [] | null | null | NONE | null | null | I_kwDODunzps5FJozw | [
"Hi ! Yes this is definitely something we'll explore, since optimizing processing pipelines can be challenging and because performance is key here: we want anyone to be able to play with large-scale datasets more easily.\r\n\r\nI think we'll start by documenting the performance of the dataset transforms we provide,... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3829/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3829 | https://github.com/huggingface/datasets/issues/3829 | false |
1,160,064,029 | https://api.github.com/repos/huggingface/datasets/issues/3828/labels{/name} | ## Describe the bug
If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py:
For "all"
* the pile_set_name is never set for data
* there's actually an id field inside of "meta"
For subcorpora pubmed_central and hacker_news:
* the meta is specified to be a string, but it's actually a dict with an id field inside.
## Steps to reproduce the bug
## Expected results
Feature spec should match the data I'd think?
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| 2022-03-08T09:30:49Z | 3,828 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-04T21:25:32Z | https://api.github.com/repos/huggingface/datasets/issues/3828/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3828/timeline | The Pile's _FEATURE spec seems to be incorrect | https://api.github.com/repos/huggingface/datasets/issues/3828/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dlwh",
"id": 9633,
"login": "dlwh",
"node_id": "MDQ6VXNlcjk2MzM=",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"repos_url": "https://api.github.com/users/dlwh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dlwh"
} | [] | null | completed | NONE | 2022-03-08T09:30:48Z | null | I_kwDODunzps5FJSwd | [
"Hi @dlwh, thanks for reporting.\r\n\r\nPlease note, that the source data files for \"all\" config are different from the other configurations.\r\n\r\nThe \"all\" config contains the official Pile data files, from https://mystic.the-eye.eu/public/AI/pile/\r\nAll data examples contain a \"meta\" dict with a single \... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3828/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3828 | https://github.com/huggingface/datasets/issues/3828 | false |
1,159,878,436 | https://api.github.com/repos/huggingface/datasets/issues/3827/labels{/name} | A leftover from #3803. | 2022-03-07T12:37:52Z | 3,827 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-04T17:23:26Z | https://api.github.com/repos/huggingface/datasets/issues/3827/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3827/timeline | Remove deprecated `remove_columns` param in `filter` | https://api.github.com/repos/huggingface/datasets/issues/3827/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-03-07T12:37:51Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3827.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3827",
"merged_at": "2022-03-07T12:37:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3827.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3827"
} | PR_kwDODunzps4z95dj | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3827). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3827/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3827 | https://github.com/huggingface/datasets/pull/3827 | true |
1,159,851,110 | https://api.github.com/repos/huggingface/datasets/issues/3826/labels{/name} | _Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_
I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`:
```python
def filter(self, function, batched=False, batch_size=1000, with_indices=False, input_columns=None):
```
TODO:
- [x] tests
- [x] docs
related to https://github.com/huggingface/datasets/issues/3444 and https://github.com/huggingface/datasets/issues/3753 | 2022-03-09T17:23:13Z | 3,826 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-04T16:57:23Z | https://api.github.com/repos/huggingface/datasets/issues/3826/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3826/timeline | Add IterableDataset.filter | https://api.github.com/repos/huggingface/datasets/issues/3826/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-09T17:23:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3826",
"merged_at": "2022-03-09T17:23:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3826"
} | PR_kwDODunzps4z90JU | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3826). All of your documentation changes will be reflected on that endpoint.",
"Indeed ! If `batch_size` is `None` or `<=0` then the full dataset should be passed. It's been mentioned in the docs for a while but never actually ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3826/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3826 | https://github.com/huggingface/datasets/pull/3826 | true |
1,159,802,345 | https://api.github.com/repos/huggingface/datasets/issues/3825/labels{/name} | CC: @geohci | 2022-03-04T17:24:37Z | 3,825 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-04T16:05:27Z | https://api.github.com/repos/huggingface/datasets/issues/3825/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3825/timeline | Update version and date in Wikipedia dataset | https://api.github.com/repos/huggingface/datasets/issues/3825/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-03-04T17:24:36Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3825",
"merged_at": "2022-03-04T17:24:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3825"
} | PR_kwDODunzps4z9p4b | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3825). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3825/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3825 | https://github.com/huggingface/datasets/pull/3825 | true |
1,159,574,186 | https://api.github.com/repos/huggingface/datasets/issues/3824/labels{/name} | Fix #3818 | 2022-03-04T18:04:22Z | 3,824 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-04T12:04:40Z | https://api.github.com/repos/huggingface/datasets/issues/3824/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3824/timeline | Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute` | https://api.github.com/repos/huggingface/datasets/issues/3824/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-03-04T18:04:21Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3824",
"merged_at": "2022-03-04T18:04:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3824"
} | PR_kwDODunzps4z85SO | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3824). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3824/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3824 | https://github.com/huggingface/datasets/pull/3824 | true |
1,159,497,844 | https://api.github.com/repos/huggingface/datasets/issues/3823/labels{/name} | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of data there recentlyish. The Zarr stores are composed of lots of small files, which I am guessing is probably the problem, as we have another [OCF dataset](https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv) using xarray and Zarr, but with the Zarr stored on GCP public datasets instead of directly in HF datasets, and that one opens fine.
In general, we were hoping to use HF datasets to release some more public geospatial datasets as benchmarks. These are commonly stored as Zarr stores, since they compress well and handle multi-dimensional data and coordinates fairly easily compared to other formats, but with this error, I'm assuming we should try a different format?
For context, we are trying to have complete public model+data reimplementations of some SOTA weather and solar nowcasting models, like [MetNet, MetNet-2,](https://github.com/openclimatefix/metnet) [DGMR](https://github.com/openclimatefix/skillful_nowcasting), and [others](https://github.com/openclimatefix/graph_weather), which all have large, complex datasets.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("openclimatefix/mrms")
```
## Expected results
The dataset should be downloaded or open up
## Actual results
A 500 internal server error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.15.25-1-MANJARO-x86_64-with-glibc2.35
- Python version: 3.9.10
- PyArrow version: 7.0.0
| 2022-03-08T09:47:39Z | 3,823 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-04T10:37:14Z | https://api.github.com/repos/huggingface/datasets/issues/3823/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3823/timeline | 500 internal server error when trying to open a dataset composed of Zarr stores | https://api.github.com/repos/huggingface/datasets/issues/3823/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4",
"events_url": "https://api.github.com/users/jacobbieker/events{/privacy}",
"followers_url": "https://api.github.com/users/jacobbieker/followers",
"following_url": "https://api.github.com/users/jacobbieker/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jacobbieker",
"id": 7170359,
"login": "jacobbieker",
"node_id": "MDQ6VXNlcjcxNzAzNTk=",
"organizations_url": "https://api.github.com/users/jacobbieker/orgs",
"received_events_url": "https://api.github.com/users/jacobbieker/received_events",
"repos_url": "https://api.github.com/users/jacobbieker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jacobbieker"
} | [] | null | completed | NONE | 2022-03-08T09:47:39Z | null | I_kwDODunzps5FHIh0 | [
"Hi @jacobbieker, thanks for reporting!\r\n\r\nI have transferred this issue to our Hub team and they are investigating it. I keep you informed. ",
"Hi @jacobbieker, we are investigating this issue on our side and we'll see if we can fix it, but please note that your repo is considered problematic for git. Here a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3823/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3823 | https://github.com/huggingface/datasets/issues/3823 | false |
1,159,395,728 | https://api.github.com/repos/huggingface/datasets/issues/3822/labels{/name} | ## Adding a Dataset
- **Name:** Biwi Kinect Head Pose Database
- **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and rgb images are provided, together with ground truth in the form of the 3D location of the head and its rotation angles.
- **Data:** [*link to the Github repository or current dataset location*](https://icu.ee.ethz.ch/research/datsets.html)
- **Motivation:** Useful pose estimation dataset
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| 2022-06-01T13:00:47Z | 3,822 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | 2022-03-04T08:48:39Z | https://api.github.com/repos/huggingface/datasets/issues/3822/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | https://api.github.com/repos/huggingface/datasets/issues/3822/timeline | Add Biwi Kinect Head Pose Database | https://api.github.com/repos/huggingface/datasets/issues/3822/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gi... | null | completed | MEMBER | 2022-06-01T13:00:47Z | null | I_kwDODunzps5FGvmQ | [
"Official dataset location : https://icu.ee.ethz.ch/research/datsets.html\r\nIn the \"Biwi Kinect Head Pose Database\" section, I do not find any information regarding \"Downloading the dataset.\" . Do we mail the authors regarding this ?\r\n\r\nI found the dataset on Kaggle : [Link](https://www.kaggle.com/kmader/b... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3822/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3822 | https://github.com/huggingface/datasets/issues/3822 | false |
1,159,371,927 | https://api.github.com/repos/huggingface/datasets/issues/3821/labels{/name} | This PR combines all updates to Wikipedia dataset.
Once approved, this will be used to generate the pre-processed Wikipedia datasets.
Finally, this PR can then be merged into master:
- NOT using squash
- BUT a regular MERGE (or REBASE+MERGE), so that all commits are preserved
TODO:
- [x] #3435
- [x] #3789
- [x] #3825
- [x] Run to get the pre-processed data for big languages (backward compatibility)
- [x] #3958
CC: @geohci | 2022-03-21T12:35:23Z | 3,821 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-04T08:19:21Z | https://api.github.com/repos/huggingface/datasets/issues/3821/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3821/timeline | Update Wikipedia dataset | https://api.github.com/repos/huggingface/datasets/issues/3821/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-03-21T12:31:00Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3821.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3821",
"merged_at": "2022-03-21T12:31:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3821.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3821"
} | PR_kwDODunzps4z8O5J | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm starting to generate the pre-processed data for some of the languages (for backward compatibility).\r\n\r\nOnce this merged, we will create the pre-processed data on the Hub under the Wikimedia namespace.",
"All steps have been... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3821/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3821 | https://github.com/huggingface/datasets/pull/3821 | true |
1,159,106,603 | https://api.github.com/repos/huggingface/datasets/issues/3820/labels{/name} | ## Describe the bug
Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
try:
    datasets.load_dataset("pubmed_qa", "pqa_labeled")
except Exception as e:
    print(e)
try:
    datasets.load_dataset("pubmed_qa", "pqa_unlabeled")
except Exception as e:
    print(e)
try:
    datasets.load_dataset("pubmed_qa", "pqa_artificial")
except Exception as e:
    print(e)
```
## Expected results
Successful download.
## Actual results
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
    verify_checksums(
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: macOS
- Python version: 3.8.1
- PyArrow version: 3.0.0
| 2022-03-04T09:42:32Z | 3,820 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"descript... | 2022-03-04T00:28:08Z | https://api.github.com/repos/huggingface/datasets/issues/3820/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3820/timeline | `pubmed_qa` checksum mismatch | https://api.github.com/repos/huggingface/datasets/issues/3820/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jon-tow",
"id": 41410219,
"login": "jon-tow",
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jon-tow"
} | [] | null | completed | CONTRIBUTOR | 2022-03-04T09:42:32Z | null | I_kwDODunzps5FFpAr | [
"Hi @jon-tow, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today.\r\n\r\nIn the meantime, you can get this fix by installing o... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3820/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3820 | https://github.com/huggingface/datasets/issues/3820 | false |
1,158,848,288 | https://api.github.com/repos/huggingface/datasets/issues/3819/labels{/name} | cc: @lhoestq | 2022-03-04T13:07:41Z | 3,819 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T20:08:44Z | https://api.github.com/repos/huggingface/datasets/issues/3819/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3819/timeline | Fix typo in doc build yml | https://api.github.com/repos/huggingface/datasets/issues/3819/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | null | null | CONTRIBUTOR | 2022-03-04T13:07:41Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3819.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3819",
"merged_at": "2022-03-04T13:07:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3819.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3819"
} | PR_kwDODunzps4z6fvn | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3819). All of your documentation changes will be reflected on that endpoint."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3819/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3819 | https://github.com/huggingface/datasets/pull/3819 | true |
1,158,788,545 | https://api.github.com/repos/huggingface/datasets/issues/3818/labels{/name} | **Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with the [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric relies not only on the predictions and references, but also on the input.
For example, when the `add_batch` method is used, the `compute()` method fails:
```python
from datasets import load_metric

metric = load_metric("sari")
metric.add_batch(
    predictions=["About 95 you now get in ."],
    references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
metric.compute()
> TypeError: _compute() missing 1 required positional argument: 'sources'
```
Therefore, the `compute()` method can only be used standalone:
```python
metric = load_metric("sari")
result = metric.compute(
    sources=["About 95 species are currently accepted ."],
    predictions=["About 95 you now get in ."],
    references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
> {'sari': 26.953601953601954}
```
**Describe the solution you'd like**
Support for an additional parameter `sources` in the `add_batch` and `add` methods of the `Metric` class.
```
add_batch(*, sources=None, predictions=None, references=None, **kwargs)
add(*, sources=None, predictions=None, references=None, **kwargs)
compute()
```
**Describe alternatives you've considered**
I've tried overriding `add_batch` and `add`; however, these are highly dependent on the `Metric` class. We could also write a simple function that computes the scores of a list of sentences, but then we lose the functionality of the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch) methods.
**Additional context**
These methods are used in the transformers [pytorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
| 2022-03-04T18:04:21Z | 3,818 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2022-03-03T18:57:54Z | https://api.github.com/repos/huggingface/datasets/issues/3818/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3818/timeline | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | https://api.github.com/repos/huggingface/datasets/issues/3818/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/6901031?v=4",
"events_url": "https://api.github.com/users/lmvasque/events{/privacy}",
"followers_url": "https://api.github.com/users/lmvasque/followers",
"following_url": "https://api.github.com/users/lmvasque/following{/other_user}",
"gists_url": "https://api.github.com/users/lmvasque/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lmvasque",
"id": 6901031,
"login": "lmvasque",
"node_id": "MDQ6VXNlcjY5MDEwMzE=",
"organizations_url": "https://api.github.com/users/lmvasque/orgs",
"received_events_url": "https://api.github.com/users/lmvasque/received_events",
"repos_url": "https://api.github.com/users/lmvasque/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lmvasque/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmvasque/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lmvasque"
} | [] | null | completed | NONE | 2022-03-04T18:04:21Z | null | I_kwDODunzps5FEbXB | [
"Hi, thanks for reporting! We can add a `sources: datasets.Value(\"string\")` feature to the `Features` dict in the `SARI` script to fix this. Would you be interested in submitting a PR?",
"Hi Mario,\r\n\r\nThanks for your message. I did try to add `sources` into the `Features` dict using a script for the metric:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3818/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3818 | https://github.com/huggingface/datasets/issues/3818 | false |
1,158,592,335 | https://api.github.com/repos/huggingface/datasets/issues/3817/labels{/name} | In #3736 we introduced one method to generate examples when streaming, that is different from the one when not streaming.
In this PR I propose a new implementation which is simpler: it only has one function, based on `iter_archive`. And you still have access to local audio files when loading the dataset in non-streaming mode.
cc @patrickvonplaten @polinaeterna @anton-l @albertvillanova since this will become the template for many audio datasets to come.
This change can also trivially be applied to the other audio datasets that already exist. Using this line, you can get access to local files in non-streaming mode:
```python
local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None
``` | 2022-03-04T14:51:48Z | 3,817 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T16:01:21Z | https://api.github.com/repos/huggingface/datasets/issues/3817/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3817/timeline | Simplify Common Voice code | https://api.github.com/repos/huggingface/datasets/issues/3817/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-04T12:39:23Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3817.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3817",
"merged_at": "2022-03-04T12:39:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3817.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3817"
} | PR_kwDODunzps4z5pQ7 | [
"I think the script looks pretty clean and readable now! cool!\r\n"
] | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3817/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3817 | https://github.com/huggingface/datasets/pull/3817 | true |
1,158,589,913 | https://api.github.com/repos/huggingface/datasets/issues/3816/labels{/name} | null | 2022-10-04T09:35:53Z | 3,816 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T15:59:14Z | https://api.github.com/repos/huggingface/datasets/issues/3816/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3816/timeline | Doc new UI test workflows2 | https://api.github.com/repos/huggingface/datasets/issues/3816/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | null | null | CONTRIBUTOR | 2022-03-03T16:42:15Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3816.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3816",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3816.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3816"
} | PR_kwDODunzps4z5owP | [
"<img src=\"https://www.bikevillastravel.com/cms/static/images/loading.gif\" alt=\"Girl in a jacket\" width=\"50\" >"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3816/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3816 | https://github.com/huggingface/datasets/pull/3816 | true |
1,158,589,512 | https://api.github.com/repos/huggingface/datasets/issues/3815/labels{/name} | The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you have iterated over it. This means you can't pass the same archive iterator to several splits.
To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times, instead of a one-time-use iterator.
The `StreamingDownloadManager.iter_archive` already returns an appropriate iterable, and the code added in this PR is inspired from the one in `streaming_download_manager.py` | 2022-03-03T18:06:37Z | 3,815 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T15:58:52Z | https://api.github.com/repos/huggingface/datasets/issues/3815/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3815/timeline | Fix iter_archive getting reset | https://api.github.com/repos/huggingface/datasets/issues/3815/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-03T18:06:13Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3815",
"merged_at": "2022-03-03T18:06:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3815"
} | PR_kwDODunzps4z5oq- | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3815/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3815 | https://github.com/huggingface/datasets/pull/3815 | true |
1,158,518,995 | https://api.github.com/repos/huggingface/datasets/issues/3814/labels{/name} | This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimal PyArrow version to v5.0.0 to use the `mask` param in `pa.StructArray`.
| 2022-03-03T16:37:44Z | 3,814 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T15:03:35Z | https://api.github.com/repos/huggingface/datasets/issues/3814/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3814/timeline | Handle Nones in PyArrow struct | https://api.github.com/repos/huggingface/datasets/issues/3814/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-03-03T16:37:43Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3814",
"merged_at": "2022-03-03T16:37:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3814"
} | PR_kwDODunzps4z5Zk4 | [
"Looks like I added my comments while you were editing - sorry about that"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3814/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3814 | https://github.com/huggingface/datasets/pull/3814 | true |
1,158,474,859 | https://api.github.com/repos/huggingface/datasets/issues/3813/labels{/name} | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** a collection of 12,868 sets of natural images across 410 classes
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| 2022-04-10T13:39:59Z | 3,813 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | 2022-03-03T14:26:45Z | https://api.github.com/repos/huggingface/datasets/issues/3813/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | https://api.github.com/repos/huggingface/datasets/issues/3813/timeline | Add MetaShift dataset | https://api.github.com/repos/huggingface/datasets/issues/3813/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gi... | null | completed | MEMBER | 2022-04-10T13:39:59Z | null | I_kwDODunzps5FDOxr | [
"I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you.",
"#self-assign",
"I've started working on adding this dataset. I require some inputs on the following : \r\n\r\nRef for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_m... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3813/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3813 | https://github.com/huggingface/datasets/issues/3813 | false |
1,158,369,995 | https://api.github.com/repos/huggingface/datasets/issues/3812/labels{/name} | # do not merge
## Hypothesis
packing data into a single zip archive could let us avoid splitting data into several tar archives for efficient streaming, which is annoying (since data creators usually host the data in a single tar)
## Data
I host it [here](https://huggingface.co/datasets/polinaeterna/benchmark_dataset/)
## I checked three configurations:
1. All data in one zip archive, streaming only those files that exist in the split metadata file (we can access them directly with no need to iterate over the full archive), see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR196)
2. All data in three tar archives, one per split, the standard way to make streaming efficient, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR174)
3. All data in a single tar, iterating over the full archive and taking only the files listed in the split metadata file, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR150)
## Results
1. one zip

2. three tars

3. one tar

I didn't check on the full data as it's time-consuming, but it's pretty obvious that the one-zip approach is not a good idea. Here it's even worse than full iteration over a tar containing all three splits (but that would depend on the case).
| 2022-03-03T14:55:34Z | 3,812 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T12:48:41Z | https://api.github.com/repos/huggingface/datasets/issues/3812/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3812/timeline | benchmark streaming speed with tar vs zip archives | https://api.github.com/repos/huggingface/datasets/issues/3812/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | null | null | CONTRIBUTOR | 2022-03-03T14:55:33Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3812.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3812",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3812.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3812"
} | PR_kwDODunzps4z46C4 | [
"I'm closing the PR since we're not going to merge it"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3812/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3812 | https://github.com/huggingface/datasets/pull/3812 | true |
1,158,234,407 | https://api.github.com/repos/huggingface/datasets/issues/3811/labels{/name} | Reflect changes from https://github.com/huggingface/transformers/pull/15891 | 2022-10-04T09:35:54Z | 3,811 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T10:29:01Z | https://api.github.com/repos/huggingface/datasets/issues/3811/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3811/timeline | Update dev doc gh workflows | https://api.github.com/repos/huggingface/datasets/issues/3811/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | null | null | CONTRIBUTOR | 2022-03-03T10:45:54Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3811.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3811",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3811.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3811"
} | PR_kwDODunzps4z4dHS | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3811/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3811 | https://github.com/huggingface/datasets/pull/3811 | true |
1,158,202,093 | https://api.github.com/repos/huggingface/datasets/issues/3810/labels{/name} | Note that there was a version update of the `xcopa` dataset: https://github.com/cambridgeltl/xcopa/releases
We updated our loading script, but we did not bump the version number:
- #3254
This PR updates our loading script version from `1.0.0` to `1.1.0`. | 2022-03-03T10:44:30Z | 3,810 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-03T09:58:25Z | https://api.github.com/repos/huggingface/datasets/issues/3810/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3810/timeline | Update version of xcopa dataset | https://api.github.com/repos/huggingface/datasets/issues/3810/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-03-03T10:44:29Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3810.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3810",
"merged_at": "2022-03-03T10:44:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3810.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3810"
} | PR_kwDODunzps4z4WUW | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3810/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3810 | https://github.com/huggingface/datasets/pull/3810 | true |
1,158,143,480 | https://api.github.com/repos/huggingface/datasets/issues/3809/labels{/name} | ## Describe the bug
Datasets hosted on Google Drive do not seem to work right now.
Loading them fails with a checksum error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
for dataset in ["head_qa", "yelp_review_full"]:
    try:
        load_dataset(dataset)
    except Exception as exception:
        print("Error", dataset, exception)
```
Here is a [colab](https://colab.research.google.com/drive/1wOtHBmL8I65NmUYakzPV5zhVCtHhi7uQ#scrollTo=cDzdCLlk-Bo4).
## Expected results
The datasets should be loaded.
## Actual results
```
Downloading and preparing dataset head_qa/es (download: 75.69 MiB, generated: 2.86 MiB, post-processed: Unknown size, total: 78.55 MiB) to /root/.cache/huggingface/datasets/head_qa/es/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Error head_qa Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']
Downloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /root/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...
Error yelp_review_full Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']
```
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| 2022-03-03T09:24:58Z | 3,809 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"descript... | 2022-03-03T09:01:10Z | https://api.github.com/repos/huggingface/datasets/issues/3809/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3809/timeline | Checksums didn't match for datasets on Google Drive | https://api.github.com/repos/huggingface/datasets/issues/3809/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11507045?v=4",
"events_url": "https://api.github.com/users/muelletm/events{/privacy}",
"followers_url": "https://api.github.com/users/muelletm/followers",
"following_url": "https://api.github.com/users/muelletm/following{/other_user}",
"gists_url": "https://api.github.com/users/muelletm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/muelletm",
"id": 11507045,
"login": "muelletm",
"node_id": "MDQ6VXNlcjExNTA3MDQ1",
"organizations_url": "https://api.github.com/users/muelletm/orgs",
"received_events_url": "https://api.github.com/users/muelletm/received_events",
"repos_url": "https://api.github.com/users/muelletm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/muelletm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muelletm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/muelletm"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2022-03-03T09:24:05Z | null | I_kwDODunzps5FB934 | [
"Hi @muelletm, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nUntil our next `datasets` library release, you can get this fix by installing our library from the GitHub ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3809/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3809 | https://github.com/huggingface/datasets/issues/3809 | false |
1,157,650,043 | https://api.github.com/repos/huggingface/datasets/issues/3808/labels{/name} | ## Describe the bug
If you use a pre-processing function created with a factory pattern, the function hash changes on each run (even if the function is identical), and therefore the dataset is re-processed each time instead of being loaded from the cache.
## Steps to reproduce the bug
```python
def preprocess_function_factory(augmentation=None):
    def preprocess_function(examples):
        # Tokenize the texts
        if augmentation:
            conversions1 = [
                augmentation(example)
                for example in examples[sentence1_key]
            ]
            if sentence2_key is None:
                args = (conversions1,)
            else:
                conversions2 = [
                    augmentation(example)
                    for example in examples[sentence2_key]
                ]
                args = (conversions1, conversions2)
        else:
            args = (
                (examples[sentence1_key],)
                if sentence2_key is None
                else (examples[sentence1_key], examples[sentence2_key])
            )
        result = tokenizer(
            *args, padding=padding, max_length=max_seq_length, truncation=True
        )
        # Map labels to IDs (not necessary for GLUE tasks)
        if label_to_id is not None and "label" in examples:
            result["label"] = [
                (label_to_id[l] if l != -1 else -1) for l in examples["label"]
            ]
        return result

    return preprocess_function

capitalize = lambda x: x.capitalize()
preprocess_function = preprocess_function_factory(augmentation=capitalize)
print(hash(preprocess_function)) # This will change on each run

raw_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    load_from_cache_file=True,
    desc="Running transformation and tokenizer on dataset",
)
```
## Expected results
Running the code twice will cause the cache to be re-used.
## Actual results
Running the code twice causes the whole dataset to be re-processed
| 2022-03-10T23:01:47Z | 3,808 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-02T20:18:43Z | https://api.github.com/repos/huggingface/datasets/issues/3808/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3808/timeline | Pre-Processing Cache Fails when using a Factory pattern | https://api.github.com/repos/huggingface/datasets/issues/3808/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9847335?v=4",
"events_url": "https://api.github.com/users/Helw150/events{/privacy}",
"followers_url": "https://api.github.com/users/Helw150/followers",
"following_url": "https://api.github.com/users/Helw150/following{/other_user}",
"gists_url": "https://api.github.com/users/Helw150/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Helw150",
"id": 9847335,
"login": "Helw150",
"node_id": "MDQ6VXNlcjk4NDczMzU=",
"organizations_url": "https://api.github.com/users/Helw150/orgs",
"received_events_url": "https://api.github.com/users/Helw150/received_events",
"repos_url": "https://api.github.com/users/Helw150/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Helw150/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Helw150/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Helw150"
} | [] | null | completed | NONE | 2022-03-10T23:01:47Z | null | I_kwDODunzps5FAFZ7 | [
"Ok - this is still an issue but I believe the root cause is different than I originally thought. I'm now able to get caching to work consistently with the above example as long as I fix the python hash seed `export PYTHONHASHSEED=1234`",
"Hi! \r\n\r\nYes, our hasher should work with decorators. For instance, thi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3808/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3808 | https://github.com/huggingface/datasets/issues/3808 | false |
1,157,531,812 | https://api.github.com/repos/huggingface/datasets/issues/3807/labels{/name} | ## Describe the bug
Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
## Expected results
The dataset should be loaded correctly.
## Actual results
Fails with:
```python
in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/cambridgeltl/xcopa/archive/master.zip']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3, and 1.18.4.dev0
- Platform:
- Python version: 3.8
- PyArrow version:
| 2022-05-20T06:00:42Z | 3,807 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-03-02T18:10:19Z | https://api.github.com/repos/huggingface/datasets/issues/3807/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3807/timeline | NonMatchingChecksumError in xcopa dataset | https://api.github.com/repos/huggingface/datasets/issues/3807/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/93286455?v=4",
"events_url": "https://api.github.com/users/afcruzs-ms/events{/privacy}",
"followers_url": "https://api.github.com/users/afcruzs-ms/followers",
"following_url": "https://api.github.com/users/afcruzs-ms/following{/other_user}",
"gists_url": "https://api.github.com/users/afcruzs-ms/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/afcruzs-ms",
"id": 93286455,
"login": "afcruzs-ms",
"node_id": "U_kgDOBY9wNw",
"organizations_url": "https://api.github.com/users/afcruzs-ms/orgs",
"received_events_url": "https://api.github.com/users/afcruzs-ms/received_events",
"repos_url": "https://api.github.com/users/afcruzs-ms/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/afcruzs-ms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afcruzs-ms/subscriptions",
"type": "User",
"url": "https://api.github.com/users/afcruzs-ms"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2022-03-03T17:40:31Z | null | I_kwDODunzps5E_oik | [
"@albertvillanova here's a separate issue for a bug similar to #3792",
"Hi @afcruzs-ms, thanks for opening this separate issue for your problem.\r\n\r\nThe root problem in the other issue (#3792) was a change in the service of Google Drive.\r\n\r\nBut in your case, the `xcopa` dataset is not hosted on Google Driv... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3807/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3807 | https://github.com/huggingface/datasets/issues/3807 | false |
1,157,505,826 | https://api.github.com/repos/huggingface/datasets/issues/3806/labels{/name} | This PR fixes the URL for Spanish data file.
Previously, Spanish had the same URL as Vietnamese data file. | 2022-03-03T08:38:17Z | 3,806 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-02T17:43:42Z | https://api.github.com/repos/huggingface/datasets/issues/3806/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3806/timeline | Fix Spanish data file URL in wiki_lingua dataset | https://api.github.com/repos/huggingface/datasets/issues/3806/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-03-03T08:38:16Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3806",
"merged_at": "2022-03-03T08:38:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3806"
} | PR_kwDODunzps4z2FeI | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3806/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3806 | https://github.com/huggingface/datasets/pull/3806 | true |
1,157,454,884 | https://api.github.com/repos/huggingface/datasets/issues/3805/labels{/name} | This was erroneously added in https://github.com/huggingface/datasets/commit/701f128de2594e8dc06c0b0427c0ba1e08be3054. This PR removes it. | 2022-03-07T12:13:36Z | 3,805 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-02T16:58:34Z | https://api.github.com/repos/huggingface/datasets/issues/3805/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3805/timeline | Remove decode: true for image feature in head_qa | https://api.github.com/repos/huggingface/datasets/issues/3805/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/417568?v=4",
"events_url": "https://api.github.com/users/craffel/events{/privacy}",
"followers_url": "https://api.github.com/users/craffel/followers",
"following_url": "https://api.github.com/users/craffel/following{/other_user}",
"gists_url": "https://api.github.com/users/craffel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/craffel",
"id": 417568,
"login": "craffel",
"node_id": "MDQ6VXNlcjQxNzU2OA==",
"organizations_url": "https://api.github.com/users/craffel/orgs",
"received_events_url": "https://api.github.com/users/craffel/received_events",
"repos_url": "https://api.github.com/users/craffel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/craffel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/craffel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/craffel"
} | [] | null | null | CONTRIBUTOR | 2022-03-07T12:13:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3805.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3805",
"merged_at": "2022-03-07T12:13:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3805.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3805"
} | PR_kwDODunzps4z16os | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3805/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3805 | https://github.com/huggingface/datasets/pull/3805 | true |
1,157,297,278 | https://api.github.com/repos/huggingface/datasets/issues/3804/labels{/name} | **Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()`, which splits the text on several line boundaries. Not all of them are always wanted.
**Describe the solution you'd like**
```python
if self.config.sample_by == "line":
    batch_idx = 0
    while True:
        batch = f.read(self.config.chunksize)
        if not batch:
            break
        batch += f.readline()  # finish current line
        if self.config.custom_newline is None:
            batch = batch.splitlines(keepends=self.config.keep_linebreaks)
        else:
            batch = batch.split(self.config.custom_newline)[:-1]
        pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema)
        # Uncomment for debugging (will print the Arrow table size and elements)
        # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
        # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
        yield (file_idx, batch_idx), pa_table
        batch_idx += 1
```
**A clear and concise description of what you want to happen.**
Creating the dataset rows with a subset of the `splitlines()` line boundaries. | 2022-03-16T15:53:59Z | 3,804 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2022-03-02T14:50:16Z | https://api.github.com/repos/huggingface/datasets/issues/3804/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3804/timeline | Text builder with custom separator line boundaries | https://api.github.com/repos/huggingface/datasets/issues/3804/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cronoik",
"id": 18630848,
"login": "cronoik",
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"repos_url": "https://api.github.com/users/cronoik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cronoik"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | null | NONE | null | null | I_kwDODunzps5E-vR- | [
"Gently pinging @lhoestq",
"Hi ! Interresting :)\r\n\r\nCould you give more details on what kind of separators you would like to use instead ?",
"In my case, I just want to use `\\n` but not `U+2028`.",
"Ok I see, maybe there can be a `sep` parameter to allow users to specify what line/paragraph separator the... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3804/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3804 | https://github.com/huggingface/datasets/issues/3804 | false |
1,157,271,679 | https://api.github.com/repos/huggingface/datasets/issues/3803/labels{/name} | This PR removes the following deprecated methos/params:
* `Dataset.cast_`/`DatasetDict.cast_`
* `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_`
* `Dataset.remove_columns_`/`DatasetDict.remove_columns_`
* `Dataset.rename_columns_`/`DatasetDict.rename_columns_`
* `prepare_module`
* param `script_version` in `load_dataset`/`load_metric`
* param `version` in `hf_github_url`
| 2022-03-02T14:53:21Z | 3,803 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-02T14:29:12Z | https://api.github.com/repos/huggingface/datasets/issues/3803/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3803/timeline | Remove deprecated methods/params (preparation for v2.0) | https://api.github.com/repos/huggingface/datasets/issues/3803/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-03-02T14:53:21Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3803.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3803",
"merged_at": "2022-03-02T14:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3803.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3803"
} | PR_kwDODunzps4z1T48 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3803/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3803 | https://github.com/huggingface/datasets/pull/3803 | true |
1,157,009,964 | https://api.github.com/repos/huggingface/datasets/issues/3802/labels{/name} |
**FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing**
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian, and Chinese), and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
*Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing. 2022. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*
Note: Please review this initial commit, and I'll update the publication link once I have the arXiv version. Thanks!
| 2022-03-02T15:21:10Z | 3,802 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-02T10:40:18Z | https://api.github.com/repos/huggingface/datasets/issues/3802/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3802/timeline | Release of FairLex dataset | https://api.github.com/repos/huggingface/datasets/issues/3802/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliaschalkidis",
"id": 1626984,
"login": "iliaschalkidis",
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliaschalkidis"
} | [] | null | null | CONTRIBUTOR | 2022-03-02T15:18:54Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3802",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3802"
} | PR_kwDODunzps4z0biM | [
"This is awesome ! The dataset card and the dataset script look amazing :)\r\n\r\nI wanted to ask you if you'd be interested to have this dataset under the namespace of you research group at https://huggingface.co/coastalcph ? If yes, then you can actually create a dataset repository under your research group name ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3802/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3802 | https://github.com/huggingface/datasets/pull/3802 | true |
1,155,649,279 | https://api.github.com/repos/huggingface/datasets/issues/3801/labels{/name} | Currently the datasets in streaming mode and in non-streaming mode have two distinct API for `map` processing.
In this PR I'm aligning the two by changing `map` in streaming mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0**
In particular, `Dataset.map` adds new columns (with dict.update) BUT `IterableDataset.map` used to discard previous columns (it overwrites the dict). In this PR I'm changing `IterableDataset.map` to behave the same way as `Dataset.map`: it will update the examples instead of overwriting them.
I'm also adding those missing parameters to streaming `map`: with_indices, input_columns, remove_columns
### TODO
- [x] tests
- [x] docs
Related to https://github.com/huggingface/datasets/issues/3444 | 2022-03-07T16:30:30Z | 3,801 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-01T18:06:43Z | https://api.github.com/repos/huggingface/datasets/issues/3801/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3801/timeline | [Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters | https://api.github.com/repos/huggingface/datasets/issues/3801/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-07T16:30:29Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3801.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3801",
"merged_at": "2022-03-07T16:30:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3801.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3801"
} | PR_kwDODunzps4zvqjN | [
"Right ! Will add it in another PR :)"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3801/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3801 | https://github.com/huggingface/datasets/pull/3801 | true |
1,155,620,761 | https://api.github.com/repos/huggingface/datasets/issues/3800/labels{/name} | Previous PR was in my fork so thought it'd be easier if I do it from a branch. Added computer vision task datasets according to HF tasks. | 2022-03-04T07:15:55Z | 3,800 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-01T17:37:46Z | https://api.github.com/repos/huggingface/datasets/issues/3800/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3800/timeline | Added computer vision tasks | https://api.github.com/repos/huggingface/datasets/issues/3800/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/merveenoyan",
"id": 53175384,
"login": "merveenoyan",
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/merveenoyan"
} | [] | null | null | CONTRIBUTOR | 2022-03-04T07:15:55Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3800",
"merged_at": "2022-03-04T07:15:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3800"
} | PR_kwDODunzps4zvkjA | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3800/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3800 | https://github.com/huggingface/datasets/pull/3800 | true |
1,155,356,102 | https://api.github.com/repos/huggingface/datasets/issues/3799/labels{/name} | **Added datasets (TODO)**:
- [x] MLS
- [x] Covost2
- [x] Minds-14
- [x] Voxpopuli
- [x] FLoRes (need data)
**Metrics**: Done | 2022-03-16T14:40:29Z | 3,799 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-03-01T13:42:28Z | https://api.github.com/repos/huggingface/datasets/issues/3799/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3799/timeline | Xtreme-S Metrics | https://api.github.com/repos/huggingface/datasets/issues/3799/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | null | null | CONTRIBUTOR | 2022-03-16T14:40:26Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3799.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3799",
"merged_at": "2022-03-16T14:40:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3799.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3799"
} | PR_kwDODunzps4zus9R | [
"@lhoestq - if you could take a final review here this would be great (if you have 5min :-) ) ",
"Don't think the failures are related but not 100% sure",
"Yes the CI fail is unrelated - you can ignore it"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3799/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3799 | https://github.com/huggingface/datasets/pull/3799 | true |
1,154,411,066 | https://api.github.com/repos/huggingface/datasets/issues/3798/labels{/name} | Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this:
```python
csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f
```
CC: @SBrandeis | 2022-02-28T18:51:39Z | 3,798 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-28T18:24:10Z | https://api.github.com/repos/huggingface/datasets/issues/3798/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3798/timeline | Fix error message in CSV loader for newer Pandas versions | https://api.github.com/repos/huggingface/datasets/issues/3798/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-02-28T18:51:38Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3798",
"merged_at": "2022-02-28T18:51:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3798"
} | PR_kwDODunzps4zrl5Y | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3798/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3798 | https://github.com/huggingface/datasets/pull/3798 | true |
1,154,383,063 | https://api.github.com/repos/huggingface/datasets/issues/3797/labels{/name} | Description tags for webis-tldr-17 added. | 2023-03-09T22:08:58Z | 3,797 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-28T17:53:18Z | https://api.github.com/repos/huggingface/datasets/issues/3797/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3797/timeline | Reddit dataset card contribution | https://api.github.com/repos/huggingface/datasets/issues/3797/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anna-kay",
"id": 56791604,
"login": "anna-kay",
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anna-kay"
} | [] | null | null | CONTRIBUTOR | 2022-03-01T12:58:57Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3797.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3797",
"merged_at": "2022-03-01T12:58:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3797.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3797"
} | PR_kwDODunzps4zrgAD | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3797/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3797 | https://github.com/huggingface/datasets/pull/3797 | true |
1,154,298,629 | https://api.github.com/repos/huggingface/datasets/issues/3796/labels{/name} | This will speed up the loading of the datasets where the number of data files is large (can easily happen with `imagefoler`, for instance) | 2022-02-28T17:03:46Z | 3,796 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-28T16:28:45Z | https://api.github.com/repos/huggingface/datasets/issues/3796/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3796/timeline | Skip checksum computation if `ignore_verifications` is `True` | https://api.github.com/repos/huggingface/datasets/issues/3796/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-02-28T17:03:46Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3796.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3796",
"merged_at": "2022-02-28T17:03:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3796.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3796"
} | PR_kwDODunzps4zrOQ4 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3796/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3796 | https://github.com/huggingface/datasets/pull/3796 | true |
1,153,261,281 | https://api.github.com/repos/huggingface/datasets/issues/3795/labels{/name} | ## Describe the bug
After downloading the natural_questions dataset, the dataset cannot be flattened because there are `long answer` and `short answer` fields in `annotations`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('natural_questions',cache_dir = 'data/dataset_cache_dir')
dataset['train'].flatten()
```
## Expected results
a dataset with `long_answer` as features
## Actual results
Traceback (most recent call last):
File "temp.py", line 5, in <module>
dataset['train'].flatten()
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper
out = func(self, *args, **kwargs)
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten
dataset._data = update_metadata_with_features(dataset._data, dataset.features)
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features
features = Features({col_name: features[col_name] for col_name in table.column_names})
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp>
features = Features({col_name: features[col_name] for col_name in table.column_names})
KeyError: 'annotations.long_answer'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.13
- Platform: MBP
- Python version: 3.8
- PyArrow version: 6.0.1
| 2022-03-21T14:36:12Z | 3,795 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-02-27T13:57:40Z | https://api.github.com/repos/huggingface/datasets/issues/3795/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/3795/timeline | can not flatten natural_questions dataset | https://api.github.com/repos/huggingface/datasets/issues/3795/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hannibal046",
"id": 38466901,
"login": "Hannibal046",
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hannibal046"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | completed | NONE | 2022-03-21T14:36:12Z | null | I_kwDODunzps5EvV7h | [
"same issue. downgrade it to a lower version.",
"Thanks for reporting, I'll take a look tomorrow :)"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3795/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3795 | https://github.com/huggingface/datasets/issues/3795 | false |
1,153,185,343 | https://api.github.com/repos/huggingface/datasets/issues/3794/labels{/name} | Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P.
In this PR I implement the metric in a simple way using NumPy only.
Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurize model, if that is desirable. | 2022-03-02T14:46:15Z | 3,794 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-27T10:56:31Z | https://api.github.com/repos/huggingface/datasets/issues/3794/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3794/timeline | Add Mahalanobis distance metric | https://api.github.com/repos/huggingface/datasets/issues/3794/events | null | {
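A minimal NumPy sketch of the distance described above (this is illustrative, not the exact implementation merged in the PR):

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance from each row of `x` to the distribution of `data`."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    delta = x - mu
    # square root of the quadratic form delta^T * Sigma^-1 * delta, per row
    return np.sqrt(np.einsum("ij,jk,ik->i", delta, cov_inv, delta))
```

For the featurized-text variant, the rows of `x` and `data` would be embeddings produced by the chosen model.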
"avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4",
"events_url": "https://api.github.com/users/JoaoLages/events{/privacy}",
"followers_url": "https://api.github.com/users/JoaoLages/followers",
"following_url": "https://api.github.com/users/JoaoLages/following{/other_user}",
"gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JoaoLages",
"id": 17574157,
"login": "JoaoLages",
"node_id": "MDQ6VXNlcjE3NTc0MTU3",
"organizations_url": "https://api.github.com/users/JoaoLages/orgs",
"received_events_url": "https://api.github.com/users/JoaoLages/received_events",
"repos_url": "https://api.github.com/users/JoaoLages/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JoaoLages"
} | [] | null | null | CONTRIBUTOR | 2022-03-02T14:46:15Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3794.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3794",
"merged_at": "2022-03-02T14:46:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3794.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3794"
} | PR_kwDODunzps4zniT4 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3794/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3794 | https://github.com/huggingface/datasets/pull/3794 | true |
1,150,974,950 | https://api.github.com/repos/huggingface/datasets/issues/3793/labels{/name} | Removes the need to have a self-hosted runner for the dev documentation | 2022-03-01T15:55:29Z | 3,793 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-25T23:48:55Z | https://api.github.com/repos/huggingface/datasets/issues/3793/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3793/timeline | Docs new UI actions no self hosted | https://api.github.com/repos/huggingface/datasets/issues/3793/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [] | null | null | MEMBER | 2022-03-01T15:55:28Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3793.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3793",
"merged_at": "2022-03-01T15:55:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3793.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3793"
} | PR_kwDODunzps4zfdL0 | [
"It seems like the doc can't be compiled right now because of the following:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/doc-builder\", line 33, in <module>\r\n sys.exit(load_entry_point('doc-builder', 'console_scripts', 'doc-builder')())\r\n File \"/__w/datasets/datasets/doc-bui... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3793/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3793 | https://github.com/huggingface/datasets/pull/3793 | true |
1,150,812,404 | https://api.github.com/repos/huggingface/datasets/issues/3792/labels{/name} | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
```python
data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
```
*short description of the issue*
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff']
```
Am I the one who added this dataset? No
| 2024-03-13T12:25:08Z | 3,792 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | 2022-02-25T19:55:09Z | https://api.github.com/repos/huggingface/datasets/issues/3792/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3792/timeline | Checksums didn't match for dataset source | https://api.github.com/repos/huggingface/datasets/issues/3792/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13174842?v=4",
"events_url": "https://api.github.com/users/rafikg/events{/privacy}",
"followers_url": "https://api.github.com/users/rafikg/followers",
"following_url": "https://api.github.com/users/rafikg/following{/other_user}",
"gists_url": "https://api.github.com/users/rafikg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rafikg",
"id": 13174842,
"login": "rafikg",
"node_id": "MDQ6VXNlcjEzMTc0ODQy",
"organizations_url": "https://api.github.com/users/rafikg/orgs",
"received_events_url": "https://api.github.com/users/rafikg/received_events",
"repos_url": "https://api.github.com/users/rafikg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rafikg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafikg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rafikg"
} | [] | null | completed | NONE | 2022-02-28T08:44:18Z | null | I_kwDODunzps5EmAD0 | [
"Same issue with `dataset = load_dataset(\"dbpedia_14\")`\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']",
"I think this is a side-effect of #3787. The checksums won't match because the URLs ha... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3792/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3792 | https://github.com/huggingface/datasets/issues/3792 | false |
1,150,733,475 | https://api.github.com/repos/huggingface/datasets/issues/3791/labels{/name} | As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Additionally, it fixes the issue with `HfFileSystem.isdir`, which would previously always return `False`, and aligns the path-handling logic in `HfFileSystem` with `fsspec.GitHubFileSystem`. | 2022-03-01T13:10:43Z | 3,791 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-25T18:26:35Z | https://api.github.com/repos/huggingface/datasets/issues/3791/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3791/timeline | Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem | https://api.github.com/repos/huggingface/datasets/issues/3791/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-03-01T13:10:42Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3791",
"merged_at": "2022-03-01T13:10:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3791"
} | PR_kwDODunzps4zevU2 | [] | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3791/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3791 | https://github.com/huggingface/datasets/pull/3791 | true |
1,150,646,899 | https://api.github.com/repos/huggingface/datasets/issues/3790/labels{/name} | I added the three scripts:
- build_dev_documentation.yml
- build_documentation.yml
- delete_dev_documentation.yml
I got them from `transformers` and did a few changes:
- I removed the `transformers`-specific dependencies
- I changed all the paths to be "datasets" instead of "transformers"
- I passed the `--library_name datasets` arg to the `doc-builder build` command (according to https://github.com/huggingface/doc-builder/pull/94/files#diff-bcc33cf7c223511e498776684a9a433810b527a0a38f483b1487e8a42b6575d3R26)
cc @LysandreJik @mishig25 | 2022-03-01T15:55:42Z | 3,790 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-25T16:38:47Z | https://api.github.com/repos/huggingface/datasets/issues/3790/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3790/timeline | Add doc builder scripts | https://api.github.com/repos/huggingface/datasets/issues/3790/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-03-01T15:55:41Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3790",
"merged_at": "2022-03-01T15:55:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3790"
} | PR_kwDODunzps4zedMa | [
"I think we're only missing the hosted runner to be configured for this repository and we should be good",
"Regarding the self-hosted runner, I actually encourage using the approach defined here: https://github.com/huggingface/transformers/pull/15710, which doesn't leverage a self-hosted runner. This prevents que... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3790/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3790 | https://github.com/huggingface/datasets/pull/3790 | true |
1,150,587,404 | https://api.github.com/repos/huggingface/datasets/issues/3789/labels{/name} | This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using.
About the conversion from title to URL, I found that apart from replacing blanks with underscores, some other special character must also be percent-encoded (e.g. `"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL
Therefore, I have finally used `urllib.parse.quote` function. This additionally percent-encodes non-ASCII characters, but Wikimedia docs say these are equivalent:
> For the other characters either the code or the character can be used in internal and external links, they are equivalent. The system does a conversion when needed.
> [[%C3%80_propos_de_M%C3%A9ta]]
> is rendered as [À_propos_de_Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), almost like [À propos de Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), which leads to this page on Meta with in the address bar the URL
> [http://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta)
> while [http://meta.wikipedia.org/wiki/À_propos_de_Méta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) leads to the same.
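The title-to-URL conversion described above can be sketched as follows (the example title and the set of safe characters are illustrative, not necessarily identical to the merged implementation):

```python
from urllib.parse import quote

title = 'À propos de "Méta"'
# Wikipedia page titles use underscores in place of spaces; quote() then
# percent-encodes the remaining special characters (e.g. '"' -> %22,
# 'À' -> %C3%80), while ':' and '/' are left as-is here.
url = "https://meta.wikimedia.org/wiki/" + quote(title.replace(" ", "_"), safe=":/")
```

As the Wikimedia docs quoted above note, the percent-encoded and raw non-ASCII forms resolve to the same page.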
Fix #3398.
CC: @geohci | 2022-03-04T08:24:24Z | 3,789 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-25T15:34:37Z | https://api.github.com/repos/huggingface/datasets/issues/3789/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3789/timeline | Add URL and ID fields to Wikipedia dataset | https://api.github.com/repos/huggingface/datasets/issues/3789/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-03-04T08:24:23Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3789",
"merged_at": "2022-03-04T08:24:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3789"
} | PR_kwDODunzps4zeQpx | [
"Do you think we have a dedicated branch for all the changes we want to do to wikipedia ? Then once everything looks good + we have preprocessed the main languages, we can merge it on the `master` branch",
"Yes, @lhoestq, I agree with you.\r\n\r\nI have just created the dedicated branch [`update-wikipedia`](https... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3789/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3789 | https://github.com/huggingface/datasets/pull/3789 | true |
1,150,375,720 | https://api.github.com/repos/huggingface/datasets/issues/3788/labels{/name} | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be loaded as the VALIDATION split, even when this is not the desired behavior, e.g. for a file named `datosdevision.jsonl.gz`.
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-02-25T12:11:39Z | https://api.github.com/repos/huggingface/datasets/issues/3788/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3788/timeline | Only-data dataset loaded unexpectedly as validation split | https://api.github.com/repos/huggingface/datasets/issues/3788/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | null | null | I_kwDODunzps5EkVco | [
"I see two options:\r\n1. drop the \"dev\" keyword since it can be considered too generic\r\n2. improve the pattern to something more reasonable, e.g. asking for a separator before and after \"dev\"\r\n```python\r\n[\"*[ ._-]dev[ ._-]*\", \"dev[ ._-]*\"]\r\n```\r\n\r\nI think 2. is nice. If we agree on this one we ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3788/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3788 | https://github.com/huggingface/datasets/issues/3788 | false |
1,150,235,569 | https://api.github.com/repos/huggingface/datasets/issues/3787/labels{/name} | This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs.
Fix #3786, fix #3784. | 2022-03-04T20:43:32Z | 3,787 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-25T09:35:12Z | https://api.github.com/repos/huggingface/datasets/issues/3787/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3787/timeline | Fix Google Drive URL to avoid Virus scan warning | https://api.github.com/repos/huggingface/datasets/issues/3787/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-02-25T11:56:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3787",
"merged_at": "2022-02-25T11:56:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3787"
} | PR_kwDODunzps4zdE7b | [
"Thanks for this @albertvillanova!",
"Once this PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previous... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3787/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3787 | https://github.com/huggingface/datasets/pull/3787 | true |
1,150,233,067 | https://api.github.com/repos/huggingface/datasets/issues/3786/labels{/name} | ## Describe the bug
Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself.
See:
- #3758
- #3773
- #3784
| 2022-03-03T09:25:59Z | 3,786 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-02-25T09:32:23Z | https://api.github.com/repos/huggingface/datasets/issues/3786/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3786/timeline | Bug downloading Virus scan warning page from Google Drive URLs | https://api.github.com/repos/huggingface/datasets/issues/3786/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2022-02-25T11:56:35Z | null | I_kwDODunzps5Ejynr | [
"Once the PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the ch... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3786/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3786 | https://github.com/huggingface/datasets/issues/3786 | false |
1,150,069,801 | https://api.github.com/repos/huggingface/datasets/issues/3785/labels{/name} | This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets.
So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ
The new link now looks like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t
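The URL tweak described above amounts to appending a `confirm` query parameter; here is a minimal stdlib sketch of that idea (the helper name is mine, not part of `datasets`):

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def add_confirm_param(url: str) -> str:
    """Append confirm=t so Google Drive skips the virus-scan warning page."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["confirm"] = ["t"]  # idempotent: overwrites any existing confirm value
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_confirm_param("https://drive.google.com/uc?export=download&id=XYZ"))
# → https://drive.google.com/uc?export=download&id=XYZ&confirm=t
```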
Fixes #3784 | 2022-03-03T16:43:47Z | 3,785 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-25T05:48:57Z | https://api.github.com/repos/huggingface/datasets/issues/3785/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3785/timeline | Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset) | https://api.github.com/repos/huggingface/datasets/issues/3785/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
"gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AngadSethi",
"id": 58678541,
"login": "AngadSethi",
"node_id": "MDQ6VXNlcjU4Njc4NTQx",
"organizations_url": "https://api.github.com/users/AngadSethi/orgs",
"received_events_url": "https://api.github.com/users/AngadSethi/received_events",
"repos_url": "https://api.github.com/users/AngadSethi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AngadSethi"
} | [] | null | null | NONE | 2022-03-03T14:03:37Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3785",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3785"
} | PR_kwDODunzps4zciES | [
"Thank you, @albertvillanova!",
"Got it. Thanks for explaining this, @albertvillanova!\r\n\r\n> On the other hand, the tests are not passing because the dummy data should also be fixed. Once done, this PR will be able to be merged into master.\r\n\r\nWill do this 👍",
"Hi ! I think we need to fix the issue for ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3785/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3785 | https://github.com/huggingface/datasets/pull/3785 | true |
1,150,057,955 | https://api.github.com/repos/huggingface/datasets/issues/3784/labels{/name} | ## Describe the bug
I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening:
- The dataset sits in Google Drive, and both the CNN and DM datasets are large.
- Google is unable to scan the folder for viruses, **so the link that would originally download the dataset now downloads the source code of this web page:**

- **This leads to the following error**:
```python
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
```
## Expected results
That the dataset is downloaded and processed just like other datasets.
## Actual results
Hit with this error:
```python
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
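The `NotADirectoryError` above occurs because the cached "archive" is actually Google's HTML warning page. A quick hypothetical way to confirm that is to peek at the first bytes of the downloaded file (stdlib only; the helper name is mine, not part of `datasets`):

```python
def looks_like_drive_scan_page(path: str) -> bool:
    """Heuristic: the cached 'archive' is really Google's HTML virus-scan page."""
    with open(path, "rb") as f:
        head = f.read(512)
    # A real archive starts with binary magic bytes; the warning page is HTML.
    return head.lstrip().startswith(b"<") and b"Google Drive" in head
```

Running this against the file in `~/.cache/huggingface/datasets/downloads/` would distinguish a genuine archive from the scan-warning page.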
| 2022-03-03T14:05:17Z | 3,784 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-02-25T05:24:47Z | https://api.github.com/repos/huggingface/datasets/issues/3784/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
"gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AngadSethi",
"id": 58678541,
"login": "AngadSethi",
"node_id": "MDQ6VXNlcjU4Njc4NTQx",
"organizations_url": "https://api.github.com/users/AngadSethi/orgs",
"received_events_url": "https://api.github.com/users/AngadSethi/received_events",
"repos_url": "https://api.github.com/users/AngadSethi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AngadSethi"
} | https://api.github.com/repos/huggingface/datasets/issues/3784/timeline | Unable to Download CNN-Dailymail Dataset | https://api.github.com/repos/huggingface/datasets/issues/3784/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
"gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AngadSethi",
"id": 58678541,
"login": "AngadSethi",
"node_id": "MDQ6VXNlcjU4Njc4NTQx",
"organizations_url": "https://api.github.com/users/AngadSethi/orgs",
"received_events_url": "https://api.github.com/users/AngadSethi/received_events",
"repos_url": "https://api.github.com/users/AngadSethi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AngadSethi"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
... | null | completed | NONE | 2022-03-03T14:05:17Z | null | I_kwDODunzps5EjH3j | [
"#self-assign",
"@AngadSethi thanks for reporting and thanks for your PR!",
"Glad to help @albertvillanova! Just fine-tuning the PR, will comment once I am able to get it up and running 😀",
"Fixed by:\r\n- #3787"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3784/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3784 | https://github.com/huggingface/datasets/issues/3784 | false |
1,149,256,744 | https://api.github.com/repos/huggingface/datasets/issues/3783/labels{/name} | null | 2022-02-24T16:01:40Z | 3,783 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-24T12:58:15Z | https://api.github.com/repos/huggingface/datasets/issues/3783/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3783/timeline | Support passing str to iter_files | https://api.github.com/repos/huggingface/datasets/issues/3783/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-02-24T16:01:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3783",
"merged_at": "2022-02-24T16:01:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3783"
} | PR_kwDODunzps4zZ1jR | [
"@mariosasko it was indeed while reading that PR, that I remembered this change I wanted to do long ago... 😉"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3783/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3783 | https://github.com/huggingface/datasets/pull/3783 | true |
1,148,994,022 | https://api.github.com/repos/huggingface/datasets/issues/3782/labels{/name} | ## 1. Case
```
dataset.map(
batched=True,
disable_nullable=True,
)
```
will get the following error at here https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516
`pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema`
## 2. Debugging
### 2.1 tracing
During `_map_single`, the following are called
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_dataset.py#L2523
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L511
### 2.2. Observation
The problem is, even after `table_cast`, `pa_table.schema != self._schema`
`pa_table.schema` (before/after `table_cast`)
```
input_ids: list<item: int32>
child 0, item: int32
```
`self._schema`
```
input_ids: list<item: int32> not null
child 0, item: int32
```
### 2.3. Reason
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1121
Here we lose the nullability stored in `schema`, because it seems that `Features` is always nullable and doesn't store nullability.
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1103
So, casting the table to a schema derived from such `Features` loses nullability, which eventually causes the error of writing with a different schema.
## 3. Solution
1. Let `Features` stores nullability.
2. Directly cast table with original schema but not schema from converted `Features`. (this PR)
3. Don't `cast_table` when `write_table` | 2022-03-03T14:54:39Z | 3,782 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-24T08:23:07Z | https://api.github.com/repos/huggingface/datasets/issues/3782/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3782/timeline | Error of writing with different schema, due to nonpreservation of nullability | https://api.github.com/repos/huggingface/datasets/issues/3782/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | null | null | CONTRIBUTOR | 2022-03-03T14:54:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3782",
"merged_at": "2022-03-03T14:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3782"
} | PR_kwDODunzps4zY-Xb | [
"Hi ! Thanks for reporting, indeed `disable_nullable` doesn't seem to be supported in this case. Maybe at one point we can have `disable_nullable` as a parameter of certain feature types"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3782/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3782 | https://github.com/huggingface/datasets/pull/3782 | true |
1,148,599,680 | https://api.github.com/repos/huggingface/datasets/issues/3781/labels{/name} | The changes proposed are based on the "TL;DR: Mining Reddit to Learn Automatic Summarization" paper & https://zenodo.org/record/1043504#.YhaKHpbQC38
It is a Reddit dataset indeed, but the name given to the dataset by the authors is Webis-TLDR-17 (corpus), so perhaps it should be modified as well.
The task at which the dataset is aimed is abstractive summarization.
| 2022-02-28T18:00:40Z | 3,781 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-23T21:29:16Z | https://api.github.com/repos/huggingface/datasets/issues/3781/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3781/timeline | Reddit dataset card additions | https://api.github.com/repos/huggingface/datasets/issues/3781/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anna-kay",
"id": 56791604,
"login": "anna-kay",
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anna-kay"
} | [] | null | null | CONTRIBUTOR | 2022-02-28T11:21:14Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3781",
"merged_at": "2022-02-28T11:21:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3781"
} | PR_kwDODunzps4zXr_O | [
"Hello! I added the tags and created a PR. Just to note, regarding the paperswithcode_id tag, that currently has the value \"reddit\"; the dataset described as reddit in paperswithcode is https://paperswithcode.com/dataset/reddit and it isn't the Webis-tldr-17. I could not find Webis-tldr-17 in paperswithcode neith... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3781/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3781 | https://github.com/huggingface/datasets/pull/3781 | true |
1,148,186,272 | https://api.github.com/repos/huggingface/datasets/issues/3780/labels{/name} | null | 2022-03-04T19:04:29Z | 3,780 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-23T14:44:17Z | https://api.github.com/repos/huggingface/datasets/issues/3780/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3780/timeline | Add ElkarHizketak v1.0 dataset | https://api.github.com/repos/huggingface/datasets/issues/3780/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7646055?v=4",
"events_url": "https://api.github.com/users/antxa/events{/privacy}",
"followers_url": "https://api.github.com/users/antxa/followers",
"following_url": "https://api.github.com/users/antxa/following{/other_user}",
"gists_url": "https://api.github.com/users/antxa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/antxa",
"id": 7646055,
"login": "antxa",
"node_id": "MDQ6VXNlcjc2NDYwNTU=",
"organizations_url": "https://api.github.com/users/antxa/orgs",
"received_events_url": "https://api.github.com/users/antxa/received_events",
"repos_url": "https://api.github.com/users/antxa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/antxa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antxa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/antxa"
} | [] | null | null | CONTRIBUTOR | 2022-03-04T19:04:29Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3780",
"merged_at": "2022-03-04T19:04:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3780"
} | PR_kwDODunzps4zWVSM | [
"I also filled some missing sections in the dataset card"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3780/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3780 | https://github.com/huggingface/datasets/pull/3780 | true |
1,148,050,636 | https://api.github.com/repos/huggingface/datasets/issues/3779/labels{/name} | Fix #3778. | 2022-02-23T13:26:41Z | 3,779 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-23T12:49:07Z | https://api.github.com/repos/huggingface/datasets/issues/3779/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3779/timeline | Update manual download URL in newsroom dataset | https://api.github.com/repos/huggingface/datasets/issues/3779/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-02-23T13:26:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3779",
"merged_at": "2022-02-23T13:26:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3779"
} | PR_kwDODunzps4zV4qr | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3779/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3779 | https://github.com/huggingface/datasets/pull/3779 | true |
1,147,898,946 | https://api.github.com/repos/huggingface/datasets/issues/3778/labels{/name} | Hello,
I tried to download the **newsroom** dataset but it didn't work out for me: it told me to **download it manually**!
The manual download link also didn't work! It is showing some ad or something!
If anybody has solved this issue please help me out or if somebody has this dataset please share your google drive link, it would be a great help!
Thanks
Darshan Tank | 2022-02-23T17:05:04Z | 3,778 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | 2022-02-23T10:15:50Z | https://api.github.com/repos/huggingface/datasets/issues/3778/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3778/timeline | Not be able to download dataset - "Newsroom" | https://api.github.com/repos/huggingface/datasets/issues/3778/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/61326242?v=4",
"events_url": "https://api.github.com/users/Darshan2104/events{/privacy}",
"followers_url": "https://api.github.com/users/Darshan2104/followers",
"following_url": "https://api.github.com/users/Darshan2104/following{/other_user}",
"gists_url": "https://api.github.com/users/Darshan2104/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darshan2104",
"id": 61326242,
"login": "Darshan2104",
"node_id": "MDQ6VXNlcjYxMzI2MjQy",
"organizations_url": "https://api.github.com/users/Darshan2104/orgs",
"received_events_url": "https://api.github.com/users/Darshan2104/received_events",
"repos_url": "https://api.github.com/users/Darshan2104/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darshan2104/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darshan2104/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darshan2104"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2022-02-23T13:26:40Z | null | I_kwDODunzps5Ea4xC | [
"Hi @Darshan2104, thanks for reporting.\r\n\r\nPlease note that at Hugging Face we do not host the data of this dataset, but just a loading script pointing to the host of the data owners.\r\n\r\nApparently the data owners changed their data host server. After googling it, I found their new website at: https://lil.n... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3778/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3778 | https://github.com/huggingface/datasets/issues/3778 | false |
1,147,232,875 | https://api.github.com/repos/huggingface/datasets/issues/3777/labels{/name} | I updated the source code and the documentation to start removing the "canonical datasets" logic.
Indeed this makes the documentation confusing and we don't want this distinction anymore in the future. Ideally users should share their datasets on the Hub directly.
### Changes
- the documentation about dataset loading mentions the datasets on the Hub (no difference between canonical and community, since they all have their own repository now)
- the documentation about adding a dataset doesn't explain the technical differences between canonical and community anymore, and only presents how to add a community dataset. There is still a small section at the bottom that mentions the datasets that are still on GitHub and redirects to the `ADD_NEW_DATASET.md` guide on GitHub about how to contribute a dataset to the `datasets` library
- the code source doesn't mention "canonical" anymore anywhere. There is still a `GitHubDatasetModuleFactory` class that is left, but I updated the docstring to say that it will be eventually removed in favor of the `HubDatasetModuleFactory` classes that already exist
Would love to have your feedbacks on this !
cc @julien-c @thomwolf @SBrandeis | 2022-02-24T15:04:37Z | 3,777 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-22T18:23:30Z | https://api.github.com/repos/huggingface/datasets/issues/3777/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3777/timeline | Start removing canonical datasets logic | https://api.github.com/repos/huggingface/datasets/issues/3777/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-02-24T15:04:36Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3777.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3777",
"merged_at": "2022-02-24T15:04:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3777.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3777"
} | PR_kwDODunzps4zTVrz | [
"I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?",
"> I'm not sure if the documentation explains why the dataset identifiers might have a na... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3777/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3777 | https://github.com/huggingface/datasets/pull/3777 | true |
1,146,932,871 | https://api.github.com/repos/huggingface/datasets/issues/3776/labels{/name} | **Is your feature request related to a problem? Please describe.**
The Wikipedia dataset can be really big. This is a problem if you want to use it locally on a laptop with the Apache Beam `DirectRunner`, even if your laptop has a considerable amount of memory (e.g. 32 GB).
**Describe the solution you'd like**
I would like to use the `data_files` argument in the `load_dataset` function to define which file in the wikipedia dataset I would like to download. Thus, I can work with the dataset in a smaller machine using the Apache Beam `DirectRunner`.
**Describe alternatives you've considered**
I've tried to use the `simple` Wikipedia dataset. But it's in English and I would like to use Portuguese texts in my model.
| 2022-02-22T14:50:02Z | 3,776 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2022-02-22T13:46:41Z | https://api.github.com/repos/huggingface/datasets/issues/3776/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3776/timeline | Allow download only some files from the Wikipedia dataset | https://api.github.com/repos/huggingface/datasets/issues/3776/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4",
"events_url": "https://api.github.com/users/jvanz/events{/privacy}",
"followers_url": "https://api.github.com/users/jvanz/followers",
"following_url": "https://api.github.com/users/jvanz/following{/other_user}",
"gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jvanz",
"id": 1514798,
"login": "jvanz",
"node_id": "MDQ6VXNlcjE1MTQ3OTg=",
"organizations_url": "https://api.github.com/users/jvanz/orgs",
"received_events_url": "https://api.github.com/users/jvanz/received_events",
"repos_url": "https://api.github.com/users/jvanz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvanz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jvanz"
} | [] | null | null | NONE | null | null | I_kwDODunzps5EXM6H | [
"Hi @jvanz, thank you for your proposal.\r\n\r\nIn fact, we are aware that it is very common the problem you mention. Because of that, we are currently working in implementing a new version of wikipedia on the Hub, with all data preprocessed (no need to use Apache Beam), from where you will be able to use `data_fil... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3776/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3776 | https://github.com/huggingface/datasets/issues/3776 | false |
1,146,849,454 | https://api.github.com/repos/huggingface/datasets/issues/3775/labels{/name} | Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999 | 2022-02-28T11:35:24Z | 3,775 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-22T12:27:16Z | https://api.github.com/repos/huggingface/datasets/issues/3775/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3775/timeline | Update gigaword card and info | https://api.github.com/repos/huggingface/datasets/issues/3775/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | null | null | CONTRIBUTOR | 2022-02-28T11:35:24Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3775.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3775",
"merged_at": "2022-02-28T11:35:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3775.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3775"
} | PR_kwDODunzps4zSEd4 | [
"I think it actually comes from an issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/file_utils.py#L575-L579\r\n\r\nand \r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/streaming... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3775/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3775 | https://github.com/huggingface/datasets/pull/3775 | true |
1,146,843,177 | https://api.github.com/repos/huggingface/datasets/issues/3774/labels{/name} | Fix #3773. | 2022-02-22T12:38:45Z | 3,774 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-22T12:21:15Z | https://api.github.com/repos/huggingface/datasets/issues/3774/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3774/timeline | Fix reddit_tifu data URL | https://api.github.com/repos/huggingface/datasets/issues/3774/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-02-22T12:38:44Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3774.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3774",
"merged_at": "2022-02-22T12:38:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3774.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3774"
} | PR_kwDODunzps4zSDHC | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3774/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3774 | https://github.com/huggingface/datasets/pull/3774 | true |
1,146,758,335 | https://api.github.com/repos/huggingface/datasets/issues/3773/labels{/name} | ## Describe the bug
A checksum mismatch error occurs when downloading the reddit_tifu data (both long & short).
## Steps to reproduce the bug
```python
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
```
## Expected results
The expected result is for the dataset to be downloaded and cached locally.
## Actual results
```
File "/.../lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
  raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 7.0.0
| 2022-02-25T19:27:49Z | 3,773 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-02-22T10:57:07Z | https://api.github.com/repos/huggingface/datasets/issues/3773/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3773/timeline | Checksum mismatch for the reddit_tifu dataset | https://api.github.com/repos/huggingface/datasets/issues/3773/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anna-kay",
"id": 56791604,
"login": "anna-kay",
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anna-kay"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | CONTRIBUTOR | 2022-02-22T12:38:44Z | null | I_kwDODunzps5EWiS_ | [
"Thanks for reporting, @anna-kay. We are fixing it.",
"@albertvillanova Thank you for the fast response! However I am still getting the same error:\r\n\r\nDownloading: 2.23kB [00:00, ?B/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Anna\\PycharmProjects\\summarization\\main.py\", line 17, in <mo... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3773/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3773 | https://github.com/huggingface/datasets/issues/3773 | false |
1,146,718,630 | https://api.github.com/repos/huggingface/datasets/issues/3772/labels{/name} | null | 2022-02-22T11:08:34Z | 3,772 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-22T10:20:37Z | https://api.github.com/repos/huggingface/datasets/issues/3772/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3772/timeline | Fix: dataset name is stored in keys | https://api.github.com/repos/huggingface/datasets/issues/3772/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
} | [] | null | null | CONTRIBUTOR | 2022-02-22T11:08:33Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3772",
"merged_at": "2022-02-22T11:08:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3772"
} | PR_kwDODunzps4zRor8 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3772/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3772 | https://github.com/huggingface/datasets/pull/3772 | true |
1,146,561,140 | https://api.github.com/repos/huggingface/datasets/issues/3771/labels{/name} | Fix #3770. | 2022-02-22T08:12:40Z | 3,771 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-22T07:44:24Z | https://api.github.com/repos/huggingface/datasets/issues/3771/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3771/timeline | Fix DuplicatedKeysError on msr_sqa dataset | https://api.github.com/repos/huggingface/datasets/issues/3771/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-02-22T08:12:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3771.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3771",
"merged_at": "2022-02-22T08:12:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3771.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3771"
} | PR_kwDODunzps4zRHsd | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3771/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3771 | https://github.com/huggingface/datasets/pull/3771 | true |
1,146,336,667 | https://api.github.com/repos/huggingface/datasets/issues/3770/labels{/name} | ### Describe the bug
Failure to generate dataset msr_sqa because of duplicate keys.
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("msr_sqa")
```
### Expected results
The examples keys should be unique.
### Actual results
```
>>> load_dataset("msr_sqa")
Downloading:
6.72k/? [00:00<00:00, 148kB/s]
Downloading:
2.93k/? [00:00<00:00, 53.8kB/s]
Using custom data configuration default
Downloading and preparing dataset msr_sqa/default (download: 4.57 MiB, generated: 26.25 MiB, post-processed: Unknown size, total: 30.83 MiB) to /root/.cache/huggingface/datasets/msr_sqa/default/0.0.0/70b2a497bd3cc8fc960a3557d2bad1eac5edde824505e15c9c8ebe4c260fd4d1...
Downloading: 100%
4.80M/4.80M [00:00<00:00, 7.49MB/s]
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator)
1080 example = self.info.features.encode_example(record)
-> 1081 writer.write(example, key)
1082 finally:
8 frames
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
During handling of the above exception, another exception occurred:
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in check_duplicate_keys(self)
449 for hash, key in self.hkey_record:
450 if hash in tmp_record:
--> 451 raise DuplicatedKeysError(key)
452 else:
453 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
```
### Environment info
- `datasets` version: 1.18.3
- Platform: Google colab notebook
- Python version: 3.7
- PyArrow version: 6.0.1
| 2022-02-22T08:12:39Z | 3,770 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2022-02-22T00:43:33Z | https://api.github.com/repos/huggingface/datasets/issues/3770/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/3770/timeline | DuplicatedKeysError on msr_sqa dataset | https://api.github.com/repos/huggingface/datasets/issues/3770/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kolk",
"id": 9049591,
"login": "kolk",
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"repos_url": "https://api.github.com/users/kolk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kolk"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2022-02-22T08:12:39Z | null | I_kwDODunzps5EU7Wb | [
"Thanks for reporting, @kolk.\r\n\r\nWe are fixing it. "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3770/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3770 | https://github.com/huggingface/datasets/issues/3770 | false |
1,146,258,023 | https://api.github.com/repos/huggingface/datasets/issues/3769/labels{/name} | ## Describe the bug
Assigning the resulting dataset back to the original dataset causes loss of the faiss index.
## Steps to reproduce the bug
`my_dataset` is a regular loaded dataset. It's part of a custom dataset class.
```python
self.dataset.add_faiss_index('embeddings')
self.dataset.list_indexes()
# ['embeddings']
dataset2 = my_dataset.map(
    lambda x: self._get_nearest_examples_batch(x['text']), batched=True
)
# the unexpected result:
dataset2.list_indexes()
# []
self.dataset.list_indexes()
# ['embeddings']
```
In case something is wrong with my `_get_nearest_examples_batch()`, it looks like this:
```python
def _get_nearest_examples_batch(self, examples, k=5):
    queries = embed(examples)
    scores_batch, retrievals_batch = self.dataset.get_nearest_examples_batch(
        self.faiss_column, queries, k
    )
    return {
        'neighbors': [batch['text'] for batch in retrievals_batch],
        'scores': scores_batch
    }
```
## Expected results
`map` shouldn't drop the indexes; in other words, indexes should be carried over to the generated dataset.
## Actual results
`map` drops the indexes.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.8.12
- PyArrow version: 7.0.0
| 2022-06-27T14:56:29Z | 3,769 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-02-21T21:59:23Z | https://api.github.com/repos/huggingface/datasets/issues/3769/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3769/timeline | `dataset = dataset.map()` causes faiss index lost | https://api.github.com/repos/huggingface/datasets/issues/3769/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13076552?v=4",
"events_url": "https://api.github.com/users/Oaklight/events{/privacy}",
"followers_url": "https://api.github.com/users/Oaklight/followers",
"following_url": "https://api.github.com/users/Oaklight/following{/other_user}",
"gists_url": "https://api.github.com/users/Oaklight/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oaklight",
"id": 13076552,
"login": "Oaklight",
"node_id": "MDQ6VXNlcjEzMDc2NTUy",
"organizations_url": "https://api.github.com/users/Oaklight/orgs",
"received_events_url": "https://api.github.com/users/Oaklight/received_events",
"repos_url": "https://api.github.com/users/Oaklight/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oaklight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oaklight/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oaklight"
} | [] | null | null | NONE | null | null | I_kwDODunzps5EUoJn | [
"Hi ! Indeed `map` is dropping the index right now, because one can create a dataset with more or fewer rows using `map` (and therefore the index might not be relevant anymore)\r\n\r\nI guess we could check the resulting dataset length, and if the user hasn't changed the dataset size we could keep the index, what d... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3769/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/3769 | https://github.com/huggingface/datasets/issues/3769 | false |
1,146,102,442 | https://api.github.com/repos/huggingface/datasets/issues/3768/labels{/name} | null | 2022-02-22T09:13:03Z | 3,768 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-21T18:14:40Z | https://api.github.com/repos/huggingface/datasets/issues/3768/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3768/timeline | Fix HfFileSystem docstring | https://api.github.com/repos/huggingface/datasets/issues/3768/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | null | null | MEMBER | 2022-02-22T09:13:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3768.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3768",
"merged_at": "2022-02-22T09:13:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3768.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3768"
} | PR_kwDODunzps4zPobl | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3768/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3768 | https://github.com/huggingface/datasets/pull/3768 | true |
1,146,036,648 | https://api.github.com/repos/huggingface/datasets/issues/3767/labels{/name} | A fix + expose a new method, following https://github.com/huggingface/datasets/pull/3670 | 2022-02-22T08:35:03Z | 3,767 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-21T16:57:47Z | https://api.github.com/repos/huggingface/datasets/issues/3767/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3767/timeline | Expose method and fix param | https://api.github.com/repos/huggingface/datasets/issues/3767/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | null | null | CONTRIBUTOR | 2022-02-22T08:35:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3767",
"merged_at": "2022-02-22T08:35:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3767"
} | PR_kwDODunzps4zPahh | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3767/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3767 | https://github.com/huggingface/datasets/pull/3767 | true |
1,145,829,289 | https://api.github.com/repos/huggingface/datasets/issues/3766/labels{/name} | Fix #3758. | 2022-02-21T14:39:20Z | 3,766 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-21T13:52:50Z | https://api.github.com/repos/huggingface/datasets/issues/3766/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3766/timeline | Fix head_qa data URL | https://api.github.com/repos/huggingface/datasets/issues/3766/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | null | null | MEMBER | 2022-02-21T14:39:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3766.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3766",
"merged_at": "2022-02-21T14:39:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3766.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3766"
} | PR_kwDODunzps4zOujH | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3766/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3766 | https://github.com/huggingface/datasets/pull/3766 | true |
1,145,126,881 | https://api.github.com/repos/huggingface/datasets/issues/3765/labels{/name} | This PR updates the URL for the tagging app to be the one on Spaces. | 2022-02-20T20:36:10Z | 3,765 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-02-20T20:34:31Z | https://api.github.com/repos/huggingface/datasets/issues/3765/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3765/timeline | Update URL for tagging app | https://api.github.com/repos/huggingface/datasets/issues/3765/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | null | null | MEMBER | 2022-02-20T20:36:06Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/3765.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3765",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3765.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3765"
} | PR_kwDODunzps4zMdIL | [
"Oh, this URL shouldn't be updated to the tagging app as it's actually used for creating the README - closing this."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3765/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3765 | https://github.com/huggingface/datasets/pull/3765 | true |
1,145,107,050 | https://api.github.com/repos/huggingface/datasets/issues/3764/labels{/name} | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 2022-02-21T08:55:58Z | 3,764 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | 2022-02-20T19:05:43Z | https://api.github.com/repos/huggingface/datasets/issues/3764/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3764/timeline | ! | https://api.github.com/repos/huggingface/datasets/issues/3764/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/77545307?v=4",
"events_url": "https://api.github.com/users/LesiaFedorenko/events{/privacy}",
"followers_url": "https://api.github.com/users/LesiaFedorenko/followers",
"following_url": "https://api.github.com/users/LesiaFedorenko/following{/other_user}",
"gists_url": "https://api.github.com/users/LesiaFedorenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LesiaFedorenko",
"id": 77545307,
"login": "LesiaFedorenko",
"node_id": "MDQ6VXNlcjc3NTQ1MzA3",
"organizations_url": "https://api.github.com/users/LesiaFedorenko/orgs",
"received_events_url": "https://api.github.com/users/LesiaFedorenko/received_events",
"repos_url": "https://api.github.com/users/LesiaFedorenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LesiaFedorenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LesiaFedorenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LesiaFedorenko"
} | [] | null | completed | NONE | 2022-02-21T08:55:58Z | null | I_kwDODunzps5EQPJq | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3764/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3764 | https://github.com/huggingface/datasets/issues/3764 | false |
1,145,099,878 | https://api.github.com/repos/huggingface/datasets/issues/3763/labels{/name} | ## Describe the bug
The dataset `20200501.pt` is broken.
The available dump dates are listed at: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
```
## Expected results
I expect to download the dataset locally.
## Actual results
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475...
/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features.
warnings.warn(
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare
super()._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators
downloaded_files = dl_manager.download_and_extract({"info": info_url})
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested
mapped = [
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested
return function(data_struct)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json
```
## Environment info
```
- `datasets` version: 1.18.3
- Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
``` | 2022-02-21T12:06:12Z | 3,763 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2022-02-20T18:34:58Z | https://api.github.com/repos/huggingface/datasets/issues/3763/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3763/timeline | It's not possible to download the `20200501.pt` dataset | https://api.github.com/repos/huggingface/datasets/issues/3763/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4",
"events_url": "https://api.github.com/users/jvanz/events{/privacy}",
"followers_url": "https://api.github.com/users/jvanz/followers",
"following_url": "https://api.github.com/users/jvanz/following{/other_user}",
"gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jvanz",
"id": 1514798,
"login": "jvanz",
"node_id": "MDQ6VXNlcjE1MTQ3OTg=",
"organizations_url": "https://api.github.com/users/jvanz/orgs",
"received_events_url": "https://api.github.com/users/jvanz/received_events",
"repos_url": "https://api.github.com/users/jvanz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvanz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jvanz"
} | [] | null | completed | NONE | 2022-02-21T09:25:06Z | null | I_kwDODunzps5EQNZm | [
"Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\n... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3763/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3763 | https://github.com/huggingface/datasets/issues/3763 | false |
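The `FileNotFoundError` in the traceback above comes from the dump-status URL that the `wikipedia` builder derives from the config name. A minimal plain-Python sketch of that derivation (the URL template is reconstructed from the traceback, so treat it as illustrative rather than the builder's exact code):

```python
# Sketch: how a config name like "20200501.pt" maps to the
# dumpstatus.json URL seen in the FileNotFoundError above.
# Wikimedia only keeps recent dumps online, so stale dates are missing.

def dump_status_url(config_name: str) -> str:
    """Build the dump-status URL for a '<date>.<language>' config name."""
    date, language = config_name.split(".")
    return f"https://dumps.wikimedia.org/{language}wiki/{date}/dumpstatus.json"

print(dump_status_url("20200501.pt"))
# https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json  <- no longer hosted
```

Since the file behind that URL is pruned by Wikimedia, the only fix on the user side is to pick a date that still appears in the index linked above (e.g. a recent one such as `20220220`).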
1,144,849,557 | https://api.github.com/repos/huggingface/datasets/issues/3762/labels{/name} | I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that class names are not in alphabetical order. The current `class_encode_column` sorts the classes before indexing.
https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1235
**Describe the solution you'd like**
I would like to add a **optional** parameter `class_names` to `class_encode_column` that would be used for the mapping instead of sorting the unique values.
**Describe alternatives you've considered**
One can use map instead. I find it harder to read.
```python
CLASS_NAMES = ['apple', 'orange', 'potato']
ds = ds.map(lambda item: {label_column: CLASS_NAMES.index(item[label_column])})
# Proposition
ds = ds.class_encode_column(label_column, CLASS_NAMES)
```
**Additional context**
I can make the PR if this feature is accepted.
| 2022-02-21T12:16:35Z | 3,762 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2022-02-19T21:21:45Z | https://api.github.com/repos/huggingface/datasets/issues/3762/comments | null | https://api.github.com/repos/huggingface/datasets/issues/3762/timeline | `Dataset.class_encode` should support custom class names | https://api.github.com/repos/huggingface/datasets/issues/3762/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
} | [] | null | completed | CONTRIBUTOR | 2022-02-21T12:16:35Z | null | I_kwDODunzps5EPQSV | [
"Hi @Dref360, thanks a lot for your proposal.\r\n\r\nIt totally makes sense to have more flexibility when class encoding, I agree.\r\n\r\nYou could even further customize the class encoding by passing an instance of `ClassLabel` itself (instead of replicating `ClassLabel` instantiation arguments as `Dataset.class_e... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3762/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/3762 | https://github.com/huggingface/datasets/issues/3762 | false |
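The behaviour proposed in the issue above can be sketched in plain Python, with no `datasets` dependency (`class_encode` here is a hypothetical stand-in for `class_encode_column`): when no `class_names` argument is given it falls back to the current sorted-unique behaviour, otherwise the supplied order defines the integer ids.

```python
def class_encode(values, class_names=None):
    """Map string labels to integer ids.

    If class_names is given, its order defines the ids (the proposed
    behaviour); otherwise fall back to the current behaviour of
    class_encode_column: sorted unique values.
    """
    if class_names is None:
        class_names = sorted(set(values))
    name_to_id = {name: index for index, name in enumerate(class_names)}
    return [name_to_id[value] for value in values], class_names

labels = ["potato", "apple", "orange", "apple"]

# Current behaviour: alphabetical -> apple=0, orange=1, potato=2
print(class_encode(labels))  # ([2, 0, 1, 0], ['apple', 'orange', 'potato'])

# Proposed behaviour: caller-supplied order -> potato=0, orange=1, apple=2
print(class_encode(labels, ["potato", "orange", "apple"]))  # ([0, 2, 1, 2], ['potato', 'orange', 'apple'])
```

As the maintainer comment suggests, an alternative API shape is to accept a `ClassLabel` instance directly, which carries the name order and other metadata in one object.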