Column types: id (int64); number (int64); title (string); state (string: open/closed); created_at, updated_at, closed_at (timestamp); html_url (string); pull_request (dict or null); user_login (string); is_pull_request (bool); comments (list of strings)

| id | number | title | state | created_at | updated_at | closed_at | html_url | pull_request | user_login | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|
974,552,009
| 2,818
|
cannot load data from my loacal path
|
closed
| 2021-08-19T11:13:30
| 2023-07-25T17:42:15
| 2023-07-25T17:42:15
|
https://github.com/huggingface/datasets/issues/2818
| null |
yang-collect
| false
|
[
"Hi ! The `data_files` parameter must be a string, a list/tuple or a python dict.\r\n\r\nCan you check the type of your `config.train_path` please ? Or use `data_files=str(config.train_path)` ?"
] |
974,486,051
| 2,817
|
Rename The Pile subsets
|
closed
| 2021-08-19T09:56:22
| 2021-08-23T16:24:10
| 2021-08-23T16:24:09
|
https://github.com/huggingface/datasets/pull/2817
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2817",
"html_url": "https://github.com/huggingface/datasets/pull/2817",
"diff_url": "https://github.com/huggingface/datasets/pull/2817.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2817.patch",
"merged_at": "2021-08-23T16:24:09"
}
|
lhoestq
| true
|
[
"Sounds good. Should we also have a “the_pile” dataset with the subsets as configuration?",
"I think the main `the_pile` datasets will be the one that is the mix of all the subsets: https://the-eye.eu/public/AI/pile/\r\n\r\nWe can also add configurations for each subset, and even allow users to specify the subsets they want:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"the_pile\", subsets=[\"openwebtext2\", \"books3\", \"hn\"])\r\n```\r\n\r\nWe're alrady doing something similar for mC4, where users can specify the list of languages they want to load."
] |
974,031,404
| 2,816
|
Add Mostly Basic Python Problems Dataset
|
open
| 2021-08-18T20:28:39
| 2021-09-10T08:04:20
| null |
https://github.com/huggingface/datasets/issues/2816
| null |
osanseviero
| false
|
[
"I started working on that."
] |
973,862,024
| 2,815
|
Tiny typo fixes of "fo" -> "of"
|
closed
| 2021-08-18T16:36:11
| 2021-08-19T08:03:02
| 2021-08-19T08:03:02
|
https://github.com/huggingface/datasets/pull/2815
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2815",
"html_url": "https://github.com/huggingface/datasets/pull/2815",
"diff_url": "https://github.com/huggingface/datasets/pull/2815.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2815.patch",
"merged_at": "2021-08-19T08:03:02"
}
|
aronszanto
| true
|
[] |
973,632,645
| 2,814
|
Bump tqdm version
|
closed
| 2021-08-18T12:51:29
| 2021-08-18T13:44:11
| 2021-08-18T13:39:50
|
https://github.com/huggingface/datasets/pull/2814
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2814",
"html_url": "https://github.com/huggingface/datasets/pull/2814",
"diff_url": "https://github.com/huggingface/datasets/pull/2814.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2814.patch",
"merged_at": "2021-08-18T13:39:49"
}
|
mariosasko
| true
|
[] |
973,470,580
| 2,813
|
Remove compression from xopen
|
closed
| 2021-08-18T09:35:59
| 2021-08-23T15:59:14
| 2021-08-23T15:59:14
|
https://github.com/huggingface/datasets/issues/2813
| null |
albertvillanova
| false
|
[
"After discussing with @lhoestq, a reasonable alternative:\r\n- `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats: \r\n `bz2::http://domain.org/filename.bz2`\r\n- `xopen` parses the `urlpath` and extracts the `compression` parameter and passes it to `fsspec.open`:\r\n `fsspec.open(\"http://domain.org/filename.bz2\", compression=\"bz2\")`\r\n\r\nPros:\r\n- clean solution that continues giving support to all compression formats\r\n- no breaking change when opening non-decompressed files: if no compression-protocol-like is passed, fsspec.open does not uncompress (passes compression=None)\r\n\r\nCons:\r\n- we create a \"private\" convention for the format of `urlpath`: although similar to `fsspec` protocols, we add custom prefixes for the `compression` argument"
] |
972,936,889
| 2,812
|
arXiv Dataset verification problem
|
open
| 2021-08-17T18:01:48
| 2022-01-19T14:15:35
| null |
https://github.com/huggingface/datasets/issues/2812
| null |
eladsegal
| false
|
[] |
972,522,480
| 2,811
|
Fix stream oscar
|
closed
| 2021-08-17T10:10:59
| 2021-08-26T10:26:15
| 2021-08-26T10:26:14
|
https://github.com/huggingface/datasets/pull/2811
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2811",
"html_url": "https://github.com/huggingface/datasets/pull/2811",
"diff_url": "https://github.com/huggingface/datasets/pull/2811.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2811.patch",
"merged_at": null
}
|
albertvillanova
| true
|
[
"One additional note: if we can try to not change the code of oscar.py too often, I'm sure users that have it in their cache directory will be happy to not have to redownload it every time they update the library ;)\r\n\r\n(since changing the code changes the cache directory of the dataset)",
"I don't think this is confusing for users because users don't even know we have patched `open`. The only thing users care is that if the pass `streaming=True`, they want to be able to load the dataset in streaming mode.\r\n\r\nI don't see any other dataset where patching `open` with `fsspec.open`+`compression` is an \"underlying issue\". Are there other datasets where this is an issue?\r\n\r\nThe only dataset where this was an issue is in oscar and the issue is indeed due to the additional `open` you added inside `zip.open`.",
"Closing this one since https://github.com/huggingface/datasets/pull/2822 reverted the change of behavior of `open`"
] |
972,040,022
| 2,810
|
Add WIT Dataset
|
closed
| 2021-08-16T19:34:09
| 2022-05-06T12:27:29
| 2022-05-06T12:26:16
|
https://github.com/huggingface/datasets/pull/2810
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2810",
"html_url": "https://github.com/huggingface/datasets/pull/2810",
"diff_url": "https://github.com/huggingface/datasets/pull/2810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2810.patch",
"merged_at": null
}
|
hassiahk
| true
|
[
"Google's version of WIT is now available here: https://huggingface.co/datasets/google/wit"
] |
971,902,613
| 2,809
|
Add Beans Dataset
|
closed
| 2021-08-16T16:22:33
| 2021-08-26T11:42:27
| 2021-08-26T11:42:27
|
https://github.com/huggingface/datasets/pull/2809
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2809",
"html_url": "https://github.com/huggingface/datasets/pull/2809",
"diff_url": "https://github.com/huggingface/datasets/pull/2809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2809.patch",
"merged_at": "2021-08-26T11:42:27"
}
|
nateraw
| true
|
[] |
971,882,320
| 2,808
|
Enable streaming for Wikipedia corpora
|
closed
| 2021-08-16T15:59:12
| 2023-07-20T13:45:30
| 2023-07-20T13:45:30
|
https://github.com/huggingface/datasets/issues/2808
| null |
lewtun
| false
|
[
"Closing as this has been addressed in https://github.com/huggingface/datasets/pull/5689."
] |
971,849,863
| 2,807
|
Add cats_vs_dogs dataset
|
closed
| 2021-08-16T15:21:11
| 2021-08-30T16:35:25
| 2021-08-30T16:35:24
|
https://github.com/huggingface/datasets/pull/2807
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2807",
"html_url": "https://github.com/huggingface/datasets/pull/2807",
"diff_url": "https://github.com/huggingface/datasets/pull/2807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2807.patch",
"merged_at": "2021-08-30T16:35:24"
}
|
nateraw
| true
|
[] |
971,625,449
| 2,806
|
Fix streaming tar files from canonical datasets
|
closed
| 2021-08-16T11:10:28
| 2021-10-13T09:04:03
| 2021-10-13T09:04:02
|
https://github.com/huggingface/datasets/pull/2806
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2806",
"html_url": "https://github.com/huggingface/datasets/pull/2806",
"diff_url": "https://github.com/huggingface/datasets/pull/2806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2806.patch",
"merged_at": null
}
|
albertvillanova
| true
|
[
"In case it's relevant for this PR, I'm finding that I cannot stream the `bookcorpus` dataset (using the `master` branch of `datasets`), which is a `.tar.bz2` file:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nbooks_dataset_streamed = load_dataset(\"bookcorpus\", split=\"train\", streaming=True)\r\n# Throws a 404 HTTP error\r\nnext(iter(books_dataset_streamed))\r\n```\r\n\r\nThe full stack trace is:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\n<ipython-input-11-5ebbbe110b13> in <module>()\r\n----> 1 next(iter(books_dataset_streamed))\r\n\r\n11 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py in __iter__(self)\r\n 339 \r\n 340 def __iter__(self):\r\n--> 341 for key, example in self._iter():\r\n 342 if self.features:\r\n 343 # we encode the example for ClassLabel feature types for example\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py in _iter(self)\r\n 336 else:\r\n 337 ex_iterable = self._ex_iterable\r\n--> 338 yield from ex_iterable\r\n 339 \r\n 340 def __iter__(self):\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py in __iter__(self)\r\n 76 \r\n 77 def __iter__(self):\r\n---> 78 for key, example in self.generate_examples_fn(**self.kwargs):\r\n 79 yield key, example\r\n 80 \r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/bookcorpus/44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700/bookcorpus.py in _generate_examples(self, directory)\r\n 98 for txt_file in files:\r\n 99 with open(txt_file, mode=\"r\", encoding=\"utf-8\") as f:\r\n--> 100 for line in f:\r\n 101 yield _id, {\"text\": line.strip()}\r\n 102 _id += 1\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/implementations/http.py in read(self, length)\r\n 496 else:\r\n 497 length = min(self.size - self.loc, length)\r\n--> 498 return super().read(length)\r\n 499 \r\n 500 async def async_fetch_all(self):\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/spec.py in read(self, length)\r\n 1481 # don't even bother calling fetch\r\n 1482 return b\"\"\r\n-> 1483 out = self.cache._fetch(self.loc, self.loc + length)\r\n 1484 self.loc += len(out)\r\n 1485 return out\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/caching.py in _fetch(self, start, end)\r\n 374 ):\r\n 375 # First read, or extending both before and after\r\n--> 376 self.cache = self.fetcher(start, bend)\r\n 377 self.start = start\r\n 378 elif start < self.start:\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/asyn.py in wrapper(*args, **kwargs)\r\n 86 def wrapper(*args, **kwargs):\r\n 87 self = obj or args[0]\r\n---> 88 return sync(self.loop, func, *args, **kwargs)\r\n 89 \r\n 90 return wrapper\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/asyn.py in sync(loop, func, timeout, *args, **kwargs)\r\n 67 raise FSTimeoutError\r\n 68 if isinstance(result[0], BaseException):\r\n---> 69 raise result[0]\r\n 70 return result[0]\r\n 71 \r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/asyn.py in _runner(event, coro, result, timeout)\r\n 23 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 24 try:\r\n---> 25 result[0] = await coro\r\n 26 except Exception as ex:\r\n 27 result[0] = ex\r\n\r\n/usr/local/lib/python3.7/dist-packages/fsspec/implementations/http.py in async_fetch_range(self, start, end)\r\n 535 # range request outside file\r\n 536 return b\"\"\r\n--> 537 r.raise_for_status()\r\n 538 if r.status == 206:\r\n 539 # partial 
content, as expected\r\n\r\n/usr/local/lib/python3.7/dist-packages/aiohttp/client_reqrep.py in raise_for_status(self)\r\n 1003 status=self.status,\r\n 1004 message=self.reason,\r\n-> 1005 headers=self.headers,\r\n 1006 )\r\n 1007 \r\n\r\nClientResponseError: 404, message='Not Found', url=URL('https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2/books_large_p1.txt')\r\n```\r\n\r\nLet me know if this is unrelated and I'll open a separate issue :)\r\n\r\nEnvironment info:\r\n\r\n```\r\n- `datasets` version: 1.11.1.dev0\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n```",
"@lewtun: `.tar.compression-extension` files are not supported yet. That is the objective of this PR.",
"> @lewtun: `.tar.compression-extension` files are not supported yet. That is the objective of this PR.\r\n\r\nthanks for the context and the great work on the streaming features (right now i'm writing the streaming section of the HF course, so am acting like a beta tester 😄)",
"@lewtun this PR fixes previous issue with xjoin:\r\n\r\nGiven:\r\n```python\r\nxjoin(\r\n \"https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2\",\r\n \"books_large_p1.txt\"\r\n)\r\n```\r\n\r\n- Before it gave: \r\n `\"https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2/books_large_p1.txt\"`\r\n thus raising the 404 error\r\n\r\n- Now it gives:\r\n `tar://books_large_p1.txt::https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2`\r\n (this is the expected format for `fsspec`) and additionally passes the parameter `compression=\"bz2\"`.\r\n See: https://github.com/huggingface/datasets/pull/2806/files#diff-97bb2d08db65ce3b679aefc43cadad76d053c1e58ecc315e49b80873d0fbdabeR15",
"closing in favor of #3066 "
] |
971,436,456
| 2,805
|
Fix streaming zip files from canonical datasets
|
closed
| 2021-08-16T07:11:40
| 2021-08-16T10:34:00
| 2021-08-16T10:34:00
|
https://github.com/huggingface/datasets/pull/2805
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2805",
"html_url": "https://github.com/huggingface/datasets/pull/2805",
"diff_url": "https://github.com/huggingface/datasets/pull/2805.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2805.patch",
"merged_at": "2021-08-16T10:34:00"
}
|
albertvillanova
| true
|
[] |
971,353,437
| 2,804
|
Add Food-101
|
closed
| 2021-08-16T04:26:15
| 2021-08-20T14:31:33
| 2021-08-19T12:48:06
|
https://github.com/huggingface/datasets/pull/2804
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2804",
"html_url": "https://github.com/huggingface/datasets/pull/2804",
"diff_url": "https://github.com/huggingface/datasets/pull/2804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2804.patch",
"merged_at": "2021-08-19T12:48:06"
}
|
nateraw
| true
|
[] |
970,858,928
| 2,803
|
add stack exchange
|
closed
| 2021-08-14T08:11:02
| 2021-08-19T10:07:33
| 2021-08-19T08:07:38
|
https://github.com/huggingface/datasets/pull/2803
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2803",
"html_url": "https://github.com/huggingface/datasets/pull/2803",
"diff_url": "https://github.com/huggingface/datasets/pull/2803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2803.patch",
"merged_at": "2021-08-19T08:07:38"
}
|
richarddwang
| true
|
[
"Hi ! Merging this one since it's all good :)\r\n\r\nHowever I think it would also be better to actually rename it `the_pile_stack_exchange` to make things clearer and to avoid name collisions in the future. I would like to do the same for `books3` as well.\r\n\r\nIf you don't mind I'll open a PR to do the renaming",
"\r\n> If you don't mind I'll open a PR to do the renaming\r\n\r\n@lhoestq That will be nice !!\r\n"
] |
970,848,302
| 2,802
|
add openwebtext2
|
closed
| 2021-08-14T07:09:03
| 2021-08-23T14:06:14
| 2021-08-23T14:06:14
|
https://github.com/huggingface/datasets/pull/2802
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2802",
"html_url": "https://github.com/huggingface/datasets/pull/2802",
"diff_url": "https://github.com/huggingface/datasets/pull/2802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2802.patch",
"merged_at": "2021-08-23T14:06:14"
}
|
richarddwang
| true
|
[
"It seems we need to `pip install jsonlines` to pass the checks ?",
"Hi ! Do you really need `jsonlines` ? I think it simply uses `json.loads` under the hood.\r\n\r\nCurrently the test are failing because `jsonlines` is not part of the extra requirements `TESTS_REQUIRE` in setup.py\r\n\r\nSo either you can replace `jsonlines` with a simple for loop on the lines of the files and use `json.loads`, or you can add `TESTS_REQUIRE` to the test requirements (but in this case users will have to install it as well).",
"Thanks for your suggestion. I now know `io` and json lines format better and has changed `jsonlines` to just `readlines`."
] |
970,844,617
| 2,801
|
add books3
|
closed
| 2021-08-14T07:04:25
| 2021-08-19T16:43:09
| 2021-08-18T15:36:59
|
https://github.com/huggingface/datasets/pull/2801
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2801",
"html_url": "https://github.com/huggingface/datasets/pull/2801",
"diff_url": "https://github.com/huggingface/datasets/pull/2801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2801.patch",
"merged_at": "2021-08-18T15:36:59"
}
|
richarddwang
| true
|
[
"> When I was creating dataset card. I found there is room for creating / editing dataset card. I've made it an issue. #2797\r\n\r\nThanks for the message, we'll definitely improve this\r\n\r\n> Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675\r\n\r\nWell currently no, but I think @lewtun was about to do it (though he's currently on vacations)",
"> > Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675\r\n> \r\n> Well currently no, but I think @lewtun was about to do it (though he's currently on vacations)\r\n\r\nyes i plan to start working on this next week #2185 \r\n\r\none question for @richarddwang - do you know if eleutherai happened to also release the \"existing\" datasets like enron emails and opensubtitles? \r\n\r\nin appendix c of their paper, they provide details on how they extracted these datasets, but it would be nice if we could just point to a url so we can be as close as possible to original implementation.",
"@lewtun \r\n\r\n> yes i plan to start working on this next week\r\n\r\nNice! Looking forward to it.\r\n\r\n> one question for @richarddwang - do you know if eleutherai happened to also release the \"existing\" datasets like enron emails and opensubtitles?\r\n\r\nSadly, I don't know any existing dataset of enron emails, but I believe opensubtitles dataset is hosted at here. https://the-eye.eu/public/AI/pile_preliminary_components/\r\n\r\n",
"thanks for the link @richarddwang! i think that corpus is actually the youtube subtitles one and my impression is that eleutherai have only uploaded the 14 new datasets they created. i've contacted one of the authors so hopefully they can share some additional info for us :)\r\n\r\nbtw it might take a while to put together all the corpora if i also need to preprocess them (e.g. the open subtitles / enron email etc), but i expect no longer than a few weeks."
] |
970,819,988
| 2,800
|
Support streaming tar files
|
closed
| 2021-08-14T04:40:17
| 2021-08-26T10:02:30
| 2021-08-14T04:55:57
|
https://github.com/huggingface/datasets/pull/2800
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2800",
"html_url": "https://github.com/huggingface/datasets/pull/2800",
"diff_url": "https://github.com/huggingface/datasets/pull/2800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2800.patch",
"merged_at": "2021-08-14T04:55:57"
}
|
albertvillanova
| true
|
[
"Hi ! Why do we need the custom `readline` for exactly ? feel free to add a comment to say why it's needed"
] |
970,507,351
| 2,799
|
Loading JSON throws ArrowNotImplementedError
|
closed
| 2021-08-13T15:31:48
| 2022-01-10T18:59:32
| 2022-01-10T18:59:32
|
https://github.com/huggingface/datasets/issues/2799
| null |
lewtun
| false
|
[
"Hi @lewtun, thanks for reporting.\r\n\r\nApparently, `pyarrow.json` tries to cast timestamp-like fields in your JSON file to pyarrow timestamp type, and it fails with `ArrowNotImplementedError`.\r\n\r\nI will investigate if there is a way to tell pyarrow not to try that timestamp casting.",
"I think the issue is more complex than that...\r\n\r\nI just took one of your JSON lines and pyarrow.json read it without problem.",
"> I just took one of your JSON lines an pyarrow.json read it without problem.\r\n\r\nyes, and for some peculiar reason the error is non-deterministic (i was eventually able to load the whole dataset by just re-running the `load_dataset` cell multiple times 🤔)\r\n\r\nthanks for looking into this 🙏 !",
"I think the error is generated by the `pyarrow.json.read()` option: `read_options=paj.ReadOptions(block_size=block_size)`...\r\ncc: @lhoestq ",
"The code works fine on my side.\r\nNot sure what's going on here :/\r\n\r\nI remember we did a few changes in the JSON loader in #2638 , did you do an update `datasets` when debugging this ?\r\n",
"OK after upgrading `datasets` to v1.12.1 the issue seems to have gone away. Closing this now :)",
"Oops, I spoke too soon 😓 \r\n\r\nAfter deleting the cache and trying the above code snippet again I am hitting the same error. You can also reproduce it in the Colab notebook I linked to in the issue description. ",
"@albertvillanova @lhoestq I noticed the same issue using datasets v1.12.1. Is there an update on when this could be fixed?",
"Apparently it's possible to make it work by increasing the `block_size`, let me open a PR",
"I just opened a PR with a fix, feel free to install `datasets` from source from source and let me know if it helps",
"@zijwang did PR #3000 solve the problem for you? It did for me, so it all is good on your end we can close this issue. Thanks again to @lhoestq for the pyarrow magic 🤯 "
] |
970,493,126
| 2,798
|
Fix streaming zip files
|
closed
| 2021-08-13T15:17:01
| 2021-08-16T14:16:50
| 2021-08-13T15:38:28
|
https://github.com/huggingface/datasets/pull/2798
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2798",
"html_url": "https://github.com/huggingface/datasets/pull/2798",
"diff_url": "https://github.com/huggingface/datasets/pull/2798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2798.patch",
"merged_at": "2021-08-13T15:38:28"
}
|
albertvillanova
| true
|
[
"Hi ! I don't fully understand this change @albertvillanova \r\nThe `_extract` method used to return the compound URL that points to the root of the inside of the archive.\r\nThis way users can use the usual os.path.join or other functions to point to the relevant files. I don't see why you're using a glob pattern ?",
"This change is to allow this:\r\n```python\r\ndata_files = f\"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip\"\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, streaming=True)\r\nassert isinstance(ds, IterableDataset)\r\n```\r\nNote that in this case the user will not call os.path.join.\r\n\r\nBefore this PR it gave error because pointing to the root, without any subsequent join, gives error:\r\n```python\r\nfsspec.open(\"zip://::https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip\")\r\n```"
] |
970,331,634
| 2,797
|
Make creating/editing dataset cards easier, by editing on site and dumping info from test command.
|
open
| 2021-08-13T11:54:49
| 2021-08-14T08:42:09
| null |
https://github.com/huggingface/datasets/issues/2797
| null |
richarddwang
| false
|
[] |
970,235,846
| 2,796
|
add cedr dataset
|
closed
| 2021-08-13T09:37:35
| 2021-08-27T16:01:36
| 2021-08-27T16:01:36
|
https://github.com/huggingface/datasets/pull/2796
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2796",
"html_url": "https://github.com/huggingface/datasets/pull/2796",
"diff_url": "https://github.com/huggingface/datasets/pull/2796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2796.patch",
"merged_at": "2021-08-27T16:01:35"
}
|
naumov-al
| true
|
[
"> Hi ! Thanks a lot for adding this one :)\r\n> \r\n> Good job with the dataset card and the dataset script !\r\n> \r\n> I left a few suggestions\r\n\r\nThank you very much for your helpful suggestions. I have tried to carry them all out."
] |
969,728,545
| 2,794
|
Warnings and documentation about pickling incorrect
|
open
| 2021-08-12T23:09:13
| 2021-08-12T23:09:31
| null |
https://github.com/huggingface/datasets/issues/2794
| null |
mbforbes
| false
|
[] |
968,967,773
| 2,793
|
Fix type hint for data_files
|
closed
| 2021-08-12T14:42:37
| 2021-08-12T15:35:29
| 2021-08-12T15:35:29
|
https://github.com/huggingface/datasets/pull/2793
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2793",
"html_url": "https://github.com/huggingface/datasets/pull/2793",
"diff_url": "https://github.com/huggingface/datasets/pull/2793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2793.patch",
"merged_at": "2021-08-12T15:35:29"
}
|
albertvillanova
| true
|
[] |
968,650,274
| 2,792
|
Update: GooAQ - add train/val/test splits
|
closed
| 2021-08-12T11:40:18
| 2021-08-27T15:58:45
| 2021-08-27T15:58:14
|
https://github.com/huggingface/datasets/pull/2792
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2792",
"html_url": "https://github.com/huggingface/datasets/pull/2792",
"diff_url": "https://github.com/huggingface/datasets/pull/2792.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2792.patch",
"merged_at": "2021-08-27T15:58:14"
}
|
bhavitvyamalik
| true
|
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] |
968,360,314
| 2,791
|
Fix typo in cnn_dailymail
|
closed
| 2021-08-12T08:38:42
| 2021-08-12T11:17:59
| 2021-08-12T11:17:59
|
https://github.com/huggingface/datasets/pull/2791
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2791",
"html_url": "https://github.com/huggingface/datasets/pull/2791",
"diff_url": "https://github.com/huggingface/datasets/pull/2791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2791.patch",
"merged_at": "2021-08-12T11:17:59"
}
|
omaralsayed
| true
|
[] |
967,772,181
| 2,790
|
Fix typo in test_dataset_common
|
closed
| 2021-08-12T01:10:29
| 2021-08-12T11:31:29
| 2021-08-12T11:31:29
|
https://github.com/huggingface/datasets/pull/2790
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2790",
"html_url": "https://github.com/huggingface/datasets/pull/2790",
"diff_url": "https://github.com/huggingface/datasets/pull/2790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2790.patch",
"merged_at": "2021-08-12T11:31:29"
}
|
nateraw
| true
|
[] |
967,361,934
| 2,789
|
Updated dataset description of DaNE
|
closed
| 2021-08-11T19:58:48
| 2021-08-12T16:10:59
| 2021-08-12T16:06:01
|
https://github.com/huggingface/datasets/pull/2789
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2789",
"html_url": "https://github.com/huggingface/datasets/pull/2789",
"diff_url": "https://github.com/huggingface/datasets/pull/2789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2789.patch",
"merged_at": "2021-08-12T16:06:01"
}
|
KennethEnevoldsen
| true
|
[
"Thanks for finishing it @albertvillanova "
] |
967,149,389
| 2,788
|
How to sample every file in a list of files making up a split in a dataset when loading?
|
closed
| 2021-08-11T17:43:21
| 2023-07-25T17:40:50
| 2023-07-25T17:40:50
|
https://github.com/huggingface/datasets/issues/2788
| null |
brijow
| false
|
[
"Hi ! This is not possible just with `load_dataset`.\r\n\r\nYou can do something like this instead:\r\n```python\r\nseed=42\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\r\n \"csv\",\r\n data_files=data_files_dict,\r\n).shuffle(seed=seed)\r\n\r\nsample_dataset = {splitname: split.select(range(8)) for splitname, split in dataset.items()}\r\n```\r\n\r\nAnother alternative is loading each file separately with `split=\"train[:8]\"` and then use `concatenate_datasets` to merge the sample of each file."
] |
967,018,406
| 2,787
|
ConnectionError: Couldn't reach https://raw.githubusercontent.com
|
closed
| 2021-08-11T16:19:01
| 2023-10-03T12:39:25
| 2021-08-18T15:09:18
|
https://github.com/huggingface/datasets/issues/2787
| null |
jinec
| false
|
[
"the bug code locate in :\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)",
"Hi @jinec,\r\n\r\nFrom time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com\r\n\r\nNormally, it should work if you wait a little and then retry.\r\n\r\nCould you please confirm if the problem persists?",
"cannot connect,even by Web browser,please check that there is some problems。",
"I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...",
"> I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...\r\n\r\nI can not access https://raw.githubusercontent.com/huggingface/datasets either, I am in China",
"Finally i can access it, by the superfast software. Thanks",
"> Finally i can access it, by the superfast software. Thanks\r\n\r\nExcuse me, I have the same problem as you, could you please tell me how to solve it?",
"It is not related to the area, the ConnectionError with http://raw.githubuserconent.com has persisted with load_data function, datasets module. However, it can be set to either wget or ssl snippet to download dataset from github as following. \r\n\r\n`$ wget https://raw.githubusercontent.com/... --no-check-certificate`\r\n\r\n\r\nor \r\n\r\nfor the tfds, nltk or pandas.read_csv downloading as follows. \r\n\r\n```\r\nimport ssl\r\n\r\ntry:\r\n _create_unverified_https_context = ssl._create_unverified_context\r\nexcept AttributeError:\r\n pass\r\nelse:\r\n ssl._create_default_https_context = _create_unverified_https_context\r\n```\r\n\r\nSo it is most probably the problem of github rather than users \r\n",
"> > I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...\r\n> \r\n> I can not access https://raw.githubusercontent.com/huggingface/datasets either, I am in China\r\n\r\n所以老哥怎么解决这个问题呢"
] |
966,282,934
| 2,786
|
Support streaming compressed files
|
closed
| 2021-08-11T09:02:06
| 2021-08-17T05:28:39
| 2021-08-16T06:36:19
|
https://github.com/huggingface/datasets/pull/2786
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2786",
"html_url": "https://github.com/huggingface/datasets/pull/2786",
"diff_url": "https://github.com/huggingface/datasets/pull/2786.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2786.patch",
"merged_at": "2021-08-16T06:36:19"
}
|
albertvillanova
| true
|
[] |
965,461,382
| 2,783
|
Add KS task to SUPERB
|
closed
| 2021-08-10T22:14:07
| 2021-08-12T16:45:01
| 2021-08-11T20:19:17
|
https://github.com/huggingface/datasets/pull/2783
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2783",
"html_url": "https://github.com/huggingface/datasets/pull/2783",
"diff_url": "https://github.com/huggingface/datasets/pull/2783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2783.patch",
"merged_at": "2021-08-11T20:19:17"
}
|
anton-l
| true
|
[
"thanks a lot for implementing this @anton-l !!\r\n\r\ni won't have time to review this while i'm away, so happy for @albertvillanova and @patrickvonplaten to decide when to merge :)",
"@albertvillanova thanks! Everything should be ready now :)",
"> The _background_noise_/_silence_ audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime)\r\n\r\n@anton-l I was thinking that maybe we could give some hints in the dataset card (in a Usage section); something similar as for diarization: https://github.com/huggingface/datasets/blob/master/datasets/superb/README.md#example-of-usage\r\nNote that for diarization it is not yet finished: we have to test it and then provide an end-to-end example: https://github.com/huggingface/datasets/pull/2661/files#r680224909 ",
"@albertvillanova yeah, I'm not sure how to best implement it in pure `datasets` yet. It's something like this, where `sample_noise()` needs to be called from a pytorch batch collator or other framework-specific variant:\r\n\r\n```python\r\ndef map_to_array(example):\r\n import soundfile as sf\r\n\r\n speech_array, sample_rate = sf.read(example[\"file\"])\r\n example[\"speech\"] = speech_array\r\n example[\"sample_rate\"] = sample_rate\r\n return example\r\n\r\n\r\ndef sample_noise(example):\r\n # Use a version of this function in a stateless way to extract random 1 sec slices of background noise\r\n # on each epoch\r\n from random import randint\r\n\r\n # _silence_ audios are longer than 1 sec\r\n if example[\"label\"] == \"_silence_\":\r\n random_offset = randint(0, len(example[\"speech\"]) - example[\"sample_rate\"] - 1)\r\n example[\"speech\"] = example[\"speech\"][random_offset : random_offset + example[\"sample_rate\"]]\r\n\r\n return example\r\n```",
"I see... Yes, not trivial indeed. Maybe for the moment you could add those functions above to the README (as it is the case for now in diarization)? What do you think?"
] |
964,858,439
| 2,782
|
Fix renaming of corpus_bleu args
|
closed
| 2021-08-10T11:02:34
| 2021-08-10T11:16:07
| 2021-08-10T11:16:07
|
https://github.com/huggingface/datasets/pull/2782
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2782",
"html_url": "https://github.com/huggingface/datasets/pull/2782",
"diff_url": "https://github.com/huggingface/datasets/pull/2782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2782.patch",
"merged_at": "2021-08-10T11:16:07"
}
|
albertvillanova
| true
|
[] |
964,805,351
| 2,781
|
Latest v2.0.0 release of sacrebleu has broken some metrics
|
closed
| 2021-08-10T09:59:41
| 2021-08-10T11:16:07
| 2021-08-10T11:16:07
|
https://github.com/huggingface/datasets/issues/2781
| null |
albertvillanova
| false
|
[] |
964,794,764
| 2,780
|
VIVOS dataset for Vietnamese ASR
|
closed
| 2021-08-10T09:47:36
| 2021-08-12T11:09:30
| 2021-08-12T11:09:30
|
https://github.com/huggingface/datasets/pull/2780
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2780",
"html_url": "https://github.com/huggingface/datasets/pull/2780",
"diff_url": "https://github.com/huggingface/datasets/pull/2780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2780.patch",
"merged_at": "2021-08-12T11:09:30"
}
|
binh234
| true
|
[] |
964,775,085
| 2,779
|
Fix sacrebleu tokenizers
|
closed
| 2021-08-10T09:24:27
| 2021-08-10T11:03:08
| 2021-08-10T10:57:54
|
https://github.com/huggingface/datasets/pull/2779
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2779",
"html_url": "https://github.com/huggingface/datasets/pull/2779",
"diff_url": "https://github.com/huggingface/datasets/pull/2779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2779.patch",
"merged_at": "2021-08-10T10:57:54"
}
|
albertvillanova
| true
|
[] |
964,737,422
| 2,778
|
Do not pass tokenize to sacrebleu
|
closed
| 2021-08-10T08:40:37
| 2021-08-10T10:03:37
| 2021-08-10T10:03:37
|
https://github.com/huggingface/datasets/pull/2778
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2778",
"html_url": "https://github.com/huggingface/datasets/pull/2778",
"diff_url": "https://github.com/huggingface/datasets/pull/2778.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2778.patch",
"merged_at": "2021-08-10T10:03:37"
}
|
albertvillanova
| true
|
[] |
964,696,380
| 2,777
|
Use packaging to handle versions
|
closed
| 2021-08-10T07:51:39
| 2021-08-18T13:56:27
| 2021-08-18T13:56:27
|
https://github.com/huggingface/datasets/pull/2777
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2777",
"html_url": "https://github.com/huggingface/datasets/pull/2777",
"diff_url": "https://github.com/huggingface/datasets/pull/2777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2777.patch",
"merged_at": "2021-08-18T13:56:27"
}
|
albertvillanova
| true
|
[] |
964,400,596
| 2,776
|
document `config.HF_DATASETS_OFFLINE` and precedence
|
open
| 2021-08-09T21:23:17
| 2021-08-09T21:23:17
| null |
https://github.com/huggingface/datasets/issues/2776
| null |
stas00
| false
|
[] |
964,303,626
| 2,775
|
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
|
closed
| 2021-08-09T19:28:51
| 2024-01-26T15:05:36
| 2024-01-26T15:05:35
|
https://github.com/huggingface/datasets/issues/2775
| null |
mbforbes
| false
|
[
"I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo",
"Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RNG just to generate random fingerprints.\r\n\r\nAny opinion on this @LysandreJik ?",
"Yes, this sounds good @lhoestq "
] |
963,932,199
| 2,774
|
Prevent .map from using multiprocessing when loading from cache
|
closed
| 2021-08-09T12:11:38
| 2021-09-09T10:20:28
| 2021-09-09T10:20:28
|
https://github.com/huggingface/datasets/pull/2774
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2774",
"html_url": "https://github.com/huggingface/datasets/pull/2774",
"diff_url": "https://github.com/huggingface/datasets/pull/2774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2774.patch",
"merged_at": "2021-09-09T10:20:28"
}
|
thomasw21
| true
|
[
"I'm guessing tests are failling, because this was pushed before https://github.com/huggingface/datasets/pull/2779 was merged? cc @albertvillanova ",
"Hi @thomasw21, yes you are right: those failing tests were fixed with #2779.\r\n\r\nWould you mind to merge current upstream master branch and push again?\r\n```\r\ngit checkout sequential_map_when_cached\r\ngit fetch upstream master\r\ngit merge upstream/master\r\ngit push -u origin sequential_map_when_cached\r\n```",
"Thanks for working on this ! I'm sure we can figure something out ;)\r\n\r\nCurrently `map` starts a process to apply the map function on each shard. If the shard has already been processed, then the process that has been spawned loads the processed shard from the cache and returns it.\r\n\r\nI think we should be able to simply not start a process if a shard is already processed and cached.\r\nThis way:\r\n- you won't need to specify `sequential=True`\r\n- it won't create new processes if the dataset is already processed and cached\r\n- it will properly reload each processed shard that is cached\r\n\r\nTo know if we have to start a new process for a shard you can use the function `update_fingerprint` from fingerprint.py to know the expected fingerprint of the processed shard.\r\nThen, if the shard has already been processed, there will be a cache file named `cached-<new_fingerprint>.arrow` and you can load it with\r\n```\r\nDataset.from_file(path_to_cache_file, info=self.info, split=self.split)\r\n```\r\n\r\nLet me know if that makes sense !",
"Yes that makes total sense, I tried to initially do that, except the way fingerprint is handled doesn't allow to easily manipulate such a field. Typically the fingerprinting is handled in `@fingerprint_transform` which has a bunch of params that aren't quite easy to extract. Those params are used to manipulate args, kwargs in fancy ways in order to finally obtain a dictionary used for fingerprint. I could duplicate everything, but this look like a very risky thing to do. I'll take a look if I can make something work with `inspect` if I can make a very simple wrapper.\r\n\r\nA much more simpler solution I think is adding an optional `shard: Optional[int] = None` parameter. If None, we use the number of proc as the number of shards, otherwise we pass down the expected number of shards and use either sequential/multiprocessing (with arbitrary number of workers) to load the shards? This would allow the weird case where one wants a large number of shards with a limited amount of processes. Not the smartest thing to do, but it's not an absurd behaviour. Would this be acceptable?",
"@lhoestq friendly ping as I feel it's up for review.",
"The CI error is unrelated to the changes of this PR - it looks like an SSL issue with conda"
] |
963,730,497
| 2,773
|
Remove dataset_infos.json
|
closed
| 2021-08-09T07:43:19
| 2024-05-04T14:52:10
| 2024-05-04T14:52:10
|
https://github.com/huggingface/datasets/issues/2773
| null |
albertvillanova
| false
|
[
"This was closed by:\r\n- #4926"
] |
963,348,834
| 2,772
|
Remove returned feature constrain
|
open
| 2021-08-08T04:01:30
| 2021-08-08T08:48:01
| null |
https://github.com/huggingface/datasets/issues/2772
| null |
PosoSAgapo
| false
|
[] |
963,257,036
| 2,771
|
[WIP][Common Voice 7] Add common voice 7.0
|
closed
| 2021-08-07T16:01:10
| 2021-12-06T23:24:02
| 2021-12-06T23:24:02
|
https://github.com/huggingface/datasets/pull/2771
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2771",
"html_url": "https://github.com/huggingface/datasets/pull/2771",
"diff_url": "https://github.com/huggingface/datasets/pull/2771.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2771.patch",
"merged_at": null
}
|
patrickvonplaten
| true
|
[
"Hi ! I think the name `common_voice_7` is fine :)\r\nMoreover if the dataset_infos.json is missing I'm pretty sure you don't need to specify `ignore_verifications=True`",
"Hi, how about to add a new parameter \"version\" in the function load_dataset, something like: \r\n`load_dataset(\"common_voice\", \"lg\", version=\"7.0\") `\r\nThis is to avoid creating a new common_voice_? dataset (with almost the same code) every time \r\nMozilla updates their Common Voice dataset.\r\n"
] |
963,246,512
| 2,770
|
Add support for fast tokenizer in BertScore
|
closed
| 2021-08-07T15:00:03
| 2021-08-09T12:34:43
| 2021-08-09T11:16:25
|
https://github.com/huggingface/datasets/pull/2770
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2770",
"html_url": "https://github.com/huggingface/datasets/pull/2770",
"diff_url": "https://github.com/huggingface/datasets/pull/2770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2770.patch",
"merged_at": "2021-08-09T11:16:25"
}
|
mariosasko
| true
|
[] |
963,240,802
| 2,769
|
Allow PyArrow from source
|
closed
| 2021-08-07T14:26:44
| 2021-08-09T15:38:39
| 2021-08-09T15:38:39
|
https://github.com/huggingface/datasets/pull/2769
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2769",
"html_url": "https://github.com/huggingface/datasets/pull/2769",
"diff_url": "https://github.com/huggingface/datasets/pull/2769.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2769.patch",
"merged_at": "2021-08-09T15:38:39"
}
|
patrickvonplaten
| true
|
[] |
963,229,173
| 2,768
|
`ArrowInvalid: Added column's length must match table's length.` after using `select`
|
closed
| 2021-08-07T13:17:29
| 2021-08-09T11:26:43
| 2021-08-09T11:26:43
|
https://github.com/huggingface/datasets/issues/2768
| null |
lvwerra
| false
|
[
"Hi,\r\n\r\nthe `select` method creates an indices mapping and doesn't modify the underlying PyArrow table by default for better performance. To modify the underlying table after the `select` call, call `flatten_indices` on the dataset object as follows:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"tweets_hate_speech_detection\")['train'].select(range(128))\r\nds = ds.flatten_indices()\r\nds = ds.add_column('ones', [1]*128)\r\n```",
"Thanks for the question @lvwerra. And thanks for the answer @mariosasko. ^^"
] |
963,002,120
| 2,767
|
equal operation to perform unbatch for huggingface datasets
|
closed
| 2021-08-06T19:45:52
| 2022-03-07T13:58:00
| 2022-03-07T13:58:00
|
https://github.com/huggingface/datasets/issues/2767
| null |
dorooddorood606
| false
|
[
"Hi @lhoestq \r\nMaybe this is clearer to explain like this, currently map function, map one example to \"one\" modified one, lets assume we want to map one example to \"multiple\" examples, in which we do not know in advance how many examples they would be per each entry. I greatly appreciate telling me how I can handle this operation, thanks a lot",
"Hi,\r\nthis is also my question on how to perform similar operation as \"unbatch\" in tensorflow in great huggingface dataset library. \r\nthanks.",
"Hi,\r\n\r\n`Dataset.map` in the batched mode allows you to map a single row to multiple rows. So to perform \"unbatch\", you can do the following:\r\n```python\r\nimport collections\r\n\r\ndef unbatch(batch):\r\n new_batch = collections.defaultdict(list)\r\n keys = batch.keys()\r\n for values in zip(*batch.values()):\r\n ex = {k: v for k, v in zip(keys, values)}\r\n inputs = f\"record query: {ex['query']} entities: {', '.join(ex['entities'])} passage: {ex['passage']}\"\r\n new_batch[\"inputs\"].extend([inputs] * len(ex[\"answers\"]))\r\n new_batch[\"targets\"].extend(ex[\"answers\"])\r\n return new_batch\r\n\r\ndset = dset.map(unbatch, batched=True, remove_columns=dset.column_names)\r\n```",
"Dear @mariosasko \r\nFirst, thank you very much for coming back to me on this, I appreciate it a lot. I tried this solution, I am getting errors, do you mind\r\ngiving me one test example to be able to run your code, to understand better the format of the inputs to your function?\r\nin this function https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L952 they copy each example to the number of \"answers\", do you mean one should not do the copying part and use directly your function? \r\n\r\n\r\nthank you very much for your help and time.",
"Hi @mariosasko \r\nI think finally I got this, I think you mean to do things in one step, here is the full example for completeness:\r\n\r\n```\r\ndef unbatch(batch):\r\n new_batch = collections.defaultdict(list)\r\n keys = batch.keys()\r\n for values in zip(*batch.values()):\r\n ex = {k: v for k, v in zip(keys, values)}\r\n # updates the passage.\r\n passage = ex['passage']\r\n passage = re.sub(r'(\\.|\\?|\\!|\\\"|\\')\\n@highlight\\n', r'\\1 ', passage)\r\n passage = re.sub(r'\\n@highlight\\n', '. ', passage)\r\n inputs = f\"record query: {ex['query']} entities: {', '.join(ex['entities'])} passage: {passage}\"\r\n # duplicates the samples based on number of answers.\r\n num_answers = len(ex[\"answers\"])\r\n num_duplicates = np.maximum(1, num_answers)\r\n new_batch[\"inputs\"].extend([inputs] * num_duplicates) #len(ex[\"answers\"]))\r\n new_batch[\"targets\"].extend(ex[\"answers\"] if num_answers > 0 else [\"<unk>\"])\r\n return new_batch\r\n\r\ndata = datasets.load_dataset('super_glue', 'record', split=\"train\", script_version=\"master\")\r\ndata = data.map(unbatch, batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nThanks a lot again, this was a super great way to do it."
] |
962,994,198
| 2,766
|
fix typo (ShuffingConfig -> ShufflingConfig)
|
closed
| 2021-08-06T19:31:40
| 2021-08-10T14:17:03
| 2021-08-10T14:17:02
|
https://github.com/huggingface/datasets/pull/2766
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2766",
"html_url": "https://github.com/huggingface/datasets/pull/2766",
"diff_url": "https://github.com/huggingface/datasets/pull/2766.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2766.patch",
"merged_at": "2021-08-10T14:17:02"
}
|
daleevans
| true
|
[] |
962,861,395
| 2,765
|
BERTScore Error
|
closed
| 2021-08-06T15:58:57
| 2021-08-09T11:16:25
| 2021-08-09T11:16:25
|
https://github.com/huggingface/datasets/issues/2765
| null |
gagan3012
| false
|
[
"Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\n```"
] |
962,554,799
| 2,764
|
Add DER metric for SUPERB speaker diarization task
|
closed
| 2021-08-06T09:12:36
| 2023-07-11T09:35:23
| 2023-07-11T09:35:23
|
https://github.com/huggingface/datasets/pull/2764
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2764",
"html_url": "https://github.com/huggingface/datasets/pull/2764",
"diff_url": "https://github.com/huggingface/datasets/pull/2764.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2764.patch",
"merged_at": null
}
|
albertvillanova
| true
|
[
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] |
961,895,523
| 2,763
|
English wikipedia datasets is not clean
|
closed
| 2021-08-05T14:37:24
| 2023-07-25T17:43:04
| 2023-07-25T17:43:04
|
https://github.com/huggingface/datasets/issues/2763
| null |
lucadiliello
| false
|
[
"Hi ! Certain users might need these data (for training or simply to explore/index the dataset).\r\n\r\nFeel free to implement a map function that gets rid of these paragraphs and process the wikipedia dataset with it before training"
] |
961,652,046
| 2,762
|
Add RVL-CDIP dataset
|
closed
| 2021-08-05T09:57:05
| 2022-04-21T17:15:41
| 2022-04-21T17:15:41
|
https://github.com/huggingface/datasets/issues/2762
| null |
NielsRogge
| false
|
[
"cc @nateraw ",
"#self-assign",
"[labels_only.tar.gz](https://docs.google.com/uc?authuser=0&id=0B0NKIRwUL9KYcXo3bV9LU0t3SGs&export=download) on the RVL-CDIP website does not work for me.\r\n\r\n> 404. That’s an error. The requested URL was not found on this server.\r\n\r\nI contacted the author ( Adam Harley) regarding this, and he told me that the link works for him. Not sure what the issue is. But Adam shared the file (labels_only.tar.gz) with me as an attachment.\r\n\r\nAre we allowed to host this file(labels_only.tar.gz) elsewhere and use that link instead ?\r\n\r\nThank you.\r\n"
] |
961,568,287
| 2,761
|
Error loading C4 realnewslike dataset
|
closed
| 2021-08-05T08:16:58
| 2021-08-08T19:44:34
| 2021-08-08T19:44:34
|
https://github.com/huggingface/datasets/issues/2761
| null |
danshirron
| false
|
[
"Hi @danshirron, \r\n`c4` was updated few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.",
"@bhavitvyamalik @lhoestq , just tried the above and got:\r\n>>> a=datasets.load_dataset('c4','en.realnewslike')\r\nDownloading: 3.29kB [00:00, 1.66MB/s] \r\nDownloading: 2.40MB [00:00, 12.6MB/s] \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py\", line 819, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py\", line 701, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1049, in __init__\r\n super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 268, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 360, in _create_builder_config\r\n raise ValueError(\r\nValueError: BuilderConfig en.realnewslike not found. Available: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']\r\n>>> \r\n\r\ndatasets version is 1.11.0\r\n",
"I think I had an older version of datasets installed and that's why I commented the old configurations in my last comment, my bad! I re-checked and updated it to latest version (`datasets==1.11.0`) and it's showing `available configs: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']`. \r\n\r\nI tried `raw_datasets = load_dataset('c4', 'realnewslike')` and the download started. Make sure you don't have any old copy of this dataset and you download it fresh using the latest version of datasets. Sorry for the mix up!",
"It works. I probably had some issue with the cache. after cleaning it im able to download the dataset. Thanks"
] |
961,372,667
| 2,760
|
Add Nuswide dataset
|
open
| 2021-08-05T03:00:41
| 2021-12-08T12:06:23
| null |
https://github.com/huggingface/datasets/issues/2760
| null |
shivangibithel
| false
|
[] |
960,206,575
| 2,758
|
Raise ManualDownloadError when loading a dataset that requires previous manual download
|
closed
| 2021-08-04T10:19:55
| 2021-08-04T11:36:30
| 2021-08-04T11:36:30
|
https://github.com/huggingface/datasets/pull/2758
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2758",
"html_url": "https://github.com/huggingface/datasets/pull/2758",
"diff_url": "https://github.com/huggingface/datasets/pull/2758.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2758.patch",
"merged_at": "2021-08-04T11:36:30"
}
|
albertvillanova
| true
|
[] |
959,984,081
| 2,757
|
Unexpected type after `concatenate_datasets`
|
closed
| 2021-08-04T07:10:39
| 2021-08-04T16:01:24
| 2021-08-04T16:01:23
|
https://github.com/huggingface/datasets/issues/2757
| null |
JulesBelveze
| false
|
[
"Hi @JulesBelveze, thanks for your question.\r\n\r\nNote that 🤗 `datasets` internally store their data in Apache Arrow format.\r\n\r\nHowever, when accessing dataset columns, by default they are returned as native Python objects (lists in this case).\r\n\r\nIf you would like their columns to be returned in a more suitable format for your use case (torch arrays), you can use the method `set_format()`:\r\n```python\r\nconcat_dataset.set_format(type=\"torch\")\r\n```\r\n\r\nYou have detailed information in our docs:\r\n- [Using a Dataset with PyTorch/Tensorflow](https://huggingface.co/docs/datasets/torch_tensorflow.html)\r\n- [Dataset.set_format()](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.set_format)",
"Thanks @albertvillanova it indeed did the job 😃 \r\nThanks for your answer!"
] |
959,255,646
| 2,756
|
Fix metadata JSON for ubuntu_dialogs_corpus dataset
|
closed
| 2021-08-03T15:48:59
| 2021-08-04T09:43:25
| 2021-08-04T09:43:25
|
https://github.com/huggingface/datasets/pull/2756
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2756",
"html_url": "https://github.com/huggingface/datasets/pull/2756",
"diff_url": "https://github.com/huggingface/datasets/pull/2756.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2756.patch",
"merged_at": "2021-08-04T09:43:25"
}
|
albertvillanova
| true
|
[] |
959,115,888
| 2,755
|
Fix metadata JSON for turkish_movie_sentiment dataset
|
closed
| 2021-08-03T13:25:44
| 2021-08-04T09:06:54
| 2021-08-04T09:06:53
|
https://github.com/huggingface/datasets/pull/2755
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2755",
"html_url": "https://github.com/huggingface/datasets/pull/2755",
"diff_url": "https://github.com/huggingface/datasets/pull/2755.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2755.patch",
"merged_at": "2021-08-04T09:06:53"
}
|
albertvillanova
| true
|
[] |
959,105,577
| 2,754
|
Generate metadata JSON for telugu_books dataset
|
closed
| 2021-08-03T13:14:52
| 2021-08-04T08:49:02
| 2021-08-04T08:49:02
|
https://github.com/huggingface/datasets/pull/2754
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2754",
"html_url": "https://github.com/huggingface/datasets/pull/2754",
"diff_url": "https://github.com/huggingface/datasets/pull/2754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2754.patch",
"merged_at": "2021-08-04T08:49:01"
}
|
albertvillanova
| true
|
[] |
959,036,995
| 2,753
|
Generate metadata JSON for reclor dataset
|
closed
| 2021-08-03T11:52:29
| 2021-08-04T08:07:15
| 2021-08-04T08:07:15
|
https://github.com/huggingface/datasets/pull/2753
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2753",
"html_url": "https://github.com/huggingface/datasets/pull/2753",
"diff_url": "https://github.com/huggingface/datasets/pull/2753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2753.patch",
"merged_at": "2021-08-04T08:07:15"
}
|
albertvillanova
| true
|
[] |
959,023,608
| 2,752
|
Generate metadata JSON for lm1b dataset
|
closed
| 2021-08-03T11:34:56
| 2021-08-04T06:40:40
| 2021-08-04T06:40:39
|
https://github.com/huggingface/datasets/pull/2752
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2752",
"html_url": "https://github.com/huggingface/datasets/pull/2752",
"diff_url": "https://github.com/huggingface/datasets/pull/2752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2752.patch",
"merged_at": "2021-08-04T06:40:39"
}
|
albertvillanova
| true
|
[] |
959,021,262
| 2,751
|
Update metadata for wikihow dataset
|
closed
| 2021-08-03T11:31:57
| 2021-08-03T15:52:09
| 2021-08-03T15:52:09
|
https://github.com/huggingface/datasets/pull/2751
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2751",
"html_url": "https://github.com/huggingface/datasets/pull/2751",
"diff_url": "https://github.com/huggingface/datasets/pull/2751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2751.patch",
"merged_at": "2021-08-03T15:52:09"
}
|
albertvillanova
| true
|
[] |
958,984,730
| 2,750
|
Second concatenation of datasets produces errors
|
closed
| 2021-08-03T10:47:04
| 2022-01-19T14:23:43
| 2022-01-19T14:19:05
|
https://github.com/huggingface/datasets/issues/2750
| null |
Aktsvigun
| false
|
[
"@albertvillanova ",
"Hi @Aktsvigun, thanks for reporting.\r\n\r\nI'm investigating this.",
"Hi @albertvillanova ,\r\nany update on this? Can I probably help in some way?",
"Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. 😅 \r\n\r\nIn the meantime, if you would like to contribute, feel free to open a Pull Request. You are welcome. Here you can find more information: [How to contribute to Datasets?](CONTRIBUTING.md)",
"I can't reproduce the bug on master. I believe this issue was fixed by https://github.com/huggingface/datasets/pull/3551."
] |
958,968,748
| 2,749
|
Raise a proper exception when trying to stream a dataset that requires to manually download files
|
closed
| 2021-08-03T10:26:27
| 2021-08-09T08:53:35
| 2021-08-04T11:36:30
|
https://github.com/huggingface/datasets/issues/2749
| null |
severo
| false
|
[
"Hi @severo, thanks for reporting.\r\n\r\nAs discussed, datasets requiring manual download should be:\r\n- programmatically identifiable\r\n- properly handled with more clear error message when trying to load them with streaming\r\n\r\nIn relation with programmatically identifiability, note that for datasets requiring manual download, their builder have a property `manual_download_instructions` which is not None:\r\n```python\r\n# Dataset requiring manual download:\r\nbuilder.manual_download_instructions is not None\r\n```",
"Thanks @albertvillanova "
] |
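A sketch of the programmatic check mentioned above (the dataset name is only an illustrative example of a dataset that requires manual download):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("wikihow", "all")
# Builders of datasets that need a manual download expose non-None instructions.
if builder.manual_download_instructions is not None:
    print(builder.manual_download_instructions)
```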
958,889,041
| 2,748
|
Generate metadata JSON for wikihow dataset
|
closed
| 2021-08-03T08:55:40
| 2021-08-03T10:17:51
| 2021-08-03T10:17:51
|
https://github.com/huggingface/datasets/pull/2748
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2748",
"html_url": "https://github.com/huggingface/datasets/pull/2748",
"diff_url": "https://github.com/huggingface/datasets/pull/2748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2748.patch",
"merged_at": "2021-08-03T10:17:51"
}
|
albertvillanova
| true
|
[] |
958,867,627
| 2,747
|
add multi-proc in `to_json`
|
closed
| 2021-08-03T08:30:13
| 2021-10-19T18:24:21
| 2021-09-13T13:56:37
|
https://github.com/huggingface/datasets/pull/2747
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2747",
"html_url": "https://github.com/huggingface/datasets/pull/2747",
"diff_url": "https://github.com/huggingface/datasets/pull/2747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2747.patch",
"merged_at": "2021-09-13T13:56:37"
}
|
bhavitvyamalik
| true
|
[
"Thank you for working on this, @bhavitvyamalik \r\n\r\n10% is not solving the issue, we want 5-10x faster on a machine that has lots of resources, but limited processing time.\r\n\r\nSo let's benchmark it on an instance with many more cores, I can test with 12 on my dev box and 40 on JZ. \r\n\r\nCould you please share the test I could run with both versions?\r\n\r\nShould we also test the sharded version I shared in https://github.com/huggingface/datasets/issues/2663#issue-946552273 so optionally 3 versions to test.",
"Since I was facing `OSError: [Errno 12] Cannot allocate memory` in CircleCI tests, I've added `num_proc` option instead of always using full `cpu_count`. You can test both v1 and v2 through this branch (some redundancy needs to be removed). \r\n\r\nUpdate: I was able to convert into json which took 50% less time as compared to v1 on `ascent_kb` dataset. Will post the benchmarking script with results here.",
"Here are the benchmarks with the current branch for both v1 and v2 (dataset: `ascent_kb`, 8.9M samples):\r\n| batch_size | time (in sec) | time (in sec) |\r\n|------------|---------------|---------------|\r\n| | num_proc = 1 | num_proc = 4 |\r\n| 10k | 185.56 | 170.11 |\r\n| 50k | 175.79 | 86.84 |\r\n| **100k** | 191.09 | **78.35** |\r\n| 125k | 198.28 | 90.89 |\r\n\r\nIncreasing the batch size on my machine helped in making v2 around 50% faster as compared to v1. Timings may vary depending on the machine. I'm including the benchmarking script as well. CircleCI errors are unrelated (something related to `bertscore`)\r\n```\r\nimport time\r\nfrom datasets import load_dataset\r\nimport pathlib\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\nimport gc\r\n\r\nbatch_sizes = [10_000, 50_000, 100_000, 125_000]\r\nnum_procs = [1, 4] # change this according to your machine\r\n\r\nSAVE_LOC = \"./new_dataset.json\"\r\n\r\nfor batch in batch_sizes:\r\n for num in num_procs:\r\n dataset = load_dataset(\"ascent_kb\")\r\n\r\n local_start = time.time()\r\n ans = dataset['train'].to_json(SAVE_LOC, batch_size=batch, num_proc=num)\r\n local_end = time.time() - local_start\r\n\r\n print(f\"Time taken on {num} num_proc and {batch} batch_size: \", local_end)\r\n\r\n # remove that dataset and its contents from cache and newly generated json\r\n new_json = pathlib.Path(SAVE_LOC)\r\n new_json.unlink()\r\n\r\n try:\r\n shutil.rmtree(os.path.join(str(Path.home()), \".cache\", \"huggingface\"))\r\n except OSError as e:\r\n print(\"Error: %s - %s.\" % (e.filename, e.strerror))\r\n\r\n gc.collect()\r\n```\r\nThis will download the dataset in every iteration and run `to_json`. I didn't do multiple iterations here for `to_json` (for a specific batch_size and num_proc) and took average time as I found that v1 got faster after 1st iteration (maybe it's caching somewhere). Since you'll be doing this operation only once, I thought it'll be better to report how both v1 and v2 performed in single iteration only. \r\n\r\nImportant: Benchmarking script will delete the newly generated json and `~/.cache/huggingface/` after every iteration so that it doesn't end up using any cached data (just to be on a safe side)",
"Thank you for sharing the benchmark, @bhavitvyamalik. Your results look promising.\r\n\r\nBut if I remember correctly the sharded version at https://github.com/huggingface/datasets/issues/2663#issue-946552273 was much faster. So we probably should compare to it as well? And if it's faster than at least document that manual sharding version?\r\n\r\n-------\r\n\r\nThat's a dangerous benchmark as it'd wipe out many other HF things. Why not wipe out:\r\n```\r\n~/.cache/huggingface/datasets/ascent_kb/\r\n```\r\n\r\nRunning the benchmark now.",
"Weird, I tried to adapt your benchmark to using shards and the program no longer works. It instead quickly uses up all available RAM and hangs. Has something changed recently in `datasets`? You can try:\r\n\r\n```\r\nimport time\r\nfrom datasets import load_dataset\r\nimport pathlib\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\nimport gc\r\nfrom multiprocessing import cpu_count, Process, Queue\r\n\r\nbatch_sizes = [10_000, 50_000, 100_000, 125_000]\r\nnum_procs = [1, 8] # change this according to your machine\r\n\r\nDATASET_NAME = (\"ascent_kb\")\r\nnum_shards = [1, 8]\r\nfor batch in batch_sizes:\r\n for shards in num_shards:\r\n dataset = load_dataset(DATASET_NAME)[\"train\"]\r\n #print(dataset)\r\n\r\n def process_shard(idx):\r\n print(f\"Sharding {idx}\")\r\n ds_shard = dataset.shard(shards, idx, contiguous=True)\r\n # ds_shard = ds_shard.shuffle() # remove contiguous=True above if shuffling\r\n print(f\"Saving {DATASET_NAME}-{idx}.jsonl\")\r\n ds_shard.to_json(f\"{DATASET_NAME}-{idx}.jsonl\", orient=\"records\", lines=True, force_ascii=False)\r\n\r\n local_start = time.time()\r\n queue = Queue()\r\n processes = [Process(target=process_shard, args=(idx,)) for idx in range(shards)]\r\n for p in processes:\r\n p.start()\r\n\r\n for p in processes:\r\n p.join()\r\n local_end = time.time() - local_start\r\n\r\n print(f\"Time taken on {shards} shards and {batch} batch_size: \", local_end)\r\n```\r\n\r\nJust careful, so that it won't crash your compute environment. As it almost crashed mine.",
"So this part seems to no longer work:\r\n```\r\n dataset = load_dataset(\"ascent_kb\")[\"train\"]\r\n ds_shard = dataset.shard(1, 0, contiguous=True)\r\n ds_shard.to_json(\"ascent_kb-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)\r\n```",
"If you are using `to_json` without any `num_proc`or `num_proc=1` then essentially it'll fall back to v1 only and I've kept it as it is (the tests were passing as well)\r\n\r\n> That's a dangerous benchmark as it'd wipe out many other HF things. Why not wipe out:\r\n\r\nThat's because some dataset related files were still left inside `~/.cache/huggingface/datasets` folder. You can wipe off datasets folder inside your cache maybe\r\n\r\n> dataset = load_dataset(\"ascent_kb\")[\"train\"]\r\n> ds_shard = dataset.shard(1, 0, contiguous=True)\r\n> ds_shard.to_json(\"ascent_kb-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)\r\n\r\nI tried this `lama` dataset (1.3M) and it worked fine. Trying it with `ascent_kb` currently, will update it here.",
"I don't think the issue has anything to do with your work, @bhavitvyamalik. I forgot to mention I tested to see the same problem with the latest datasets release.\r\n\r\nInteresting, I tried your suggestion. This:\r\n```\r\npython -c 'import datasets; ds=\"lama\"; dataset = datasets.load_dataset(ds)[\"train\"]; \\\r\ndataset.shard(1, 0, contiguous=True).to_json(f\"{ds}-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)'\r\n```\r\nworks fine and takes just a few GBs to complete.\r\n\r\nthis on the other hand blows up memory-wise:\r\n```\r\npython -c 'import datasets; ds=\"ascent_kb\"; dataset = datasets.load_dataset(ds)[\"train\"]; \\\r\ndataset.shard(1, 0, contiguous=True).to_json(f\"{ds}-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)'\r\n```\r\nand I have to kill it before it uses up all RAM. (I have 128GB of it, so it should be more than enough)",
"> That's because some dataset related files were still left inside ~/.cache/huggingface/datasets folder. You can wipe off datasets folder inside your cache maybe\r\n\r\nI think recent datasets added a method that will print out the path for all the different components for a given dataset, I can't recall the name though. It was when we were discussing a janitor program to clear up space selectively.",
"> and I have to kill it before it uses up all RAM. (I have 128GB of it, so it should be more than enough)\r\n\r\nSame thing just happened on my machine too. Memory leak somewhere maybe? Even if you were to load this dataset in your memory it shouldn't take more than 4GB. You were earlier doing this for `oscar` dataset. Is it working fine for that?",
"Hmm, looks like `datasets` has changed and won't accept my currently cached oscar-en (crashes), so I'd rather not download 0.5TB again. \r\n\r\nWere you able to reproduce the memory blow up with `ascent_kb`? It's should be a much quicker task to verify.\r\n\r\nBut yes, oscar worked just fine with `.shard()` which is what I used to process it fast.",
"What I tried is:\r\n```\r\nHF_DATASETS_OFFLINE=1 HF_DATASETS_CACHE=cache python -c 'import datasets; ds=\"oscar\"; \\\r\ndataset = datasets.load_dataset(ds, \"unshuffled_deduplicated_en\")[\"train\"]; \\\r\ndataset.shard(1000000, 0, contiguous=True).to_json(f\"{ds}-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)'\r\n```\r\nand got:\r\n```\r\nUsing the latest cached version of the module from /gpfswork/rech/six/commun/modules/datasets_modules/datasets/oscar/e4f06cecc7ae02f7adf85640b4019bf476d44453f251a1d84aebae28b0f8d51d (last modified on Fri Aug 6 01:52:35 2021) since it couldn't be found locally at oscar/oscar.py or remotely (OfflineModeIsEnabled).\r\nReusing dataset oscar (cache/oscar/unshuffled_deduplicated_en/1.0.0/e4f06cecc7ae02f7adf85640b4019bf476d44453f251a1d84aebae28b0f8d51d)\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/load.py\", line 755, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/builder.py\", line 737, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 204, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/builder.py\", line 764, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/builder.py\", line 834, in _as_dataset\r\n dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 217, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 238, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 173, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 308, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 327, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/table.py\", line 450, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/gpfswork/rech/six/commun/conda/stas/lib/python3.8/site-packages/datasets/table.py\", line 43, in _memory_mapped_arrow_table_from_file\r\n memory_mapped_stream = 
pa.memory_map(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n```",
"> Were you able to reproduce the memory blow up with ascent_kb? It's should be a much quicker task to verify.\r\n\r\nYes, this blows up memory-wise on my machine too. \r\n\r\nI found that a [similar error](https://discuss.huggingface.co/t/saving-memory-with-run-mlm-py-with-wikipedia-datasets/4160) was posted on the forum on 5th March. Since you already knew how much time [#2663 comment](https://github.com/huggingface/datasets/issues/2663#issue-946552273) took, can you try benchmarking v1 and v2 for now maybe until we have a fix for this memory blow up?",
"OK, so I benchmarked using \"lama\" though it's too small for this kind of test, since the sharding is much slower than one thread here.\r\n\r\nResults: https://gist.github.com/stas00/dc1597a1e245c5915cfeefa0eee6902c\r\n\r\nSo sharding does really bad there, and your json over procs is doing great!\r\n\r\nAny suggestions to a somewhat bigger dataset, but not too big? say 10 times of lama?",
"Looks great! I had a few questions/suggestions related to `benchmark-datasets-to_json.py`:\r\n \r\n1. You have used only 10_000 and 100_000 batch size. Including more batch sizes may help you find the perfect batch size for your machine and even give you some extra speed-up. \r\nFor eg, I found `load_dataset(\"cc100\", lang=\"eu\")` with batch size 125_000 took less time as compared to batch size 100_000 (71.16 sec v/s 67.26 sec) since this dataset has 2 fields only `['id', 'text']`, so that's why we can go for higher batch size here. \r\n \r\n2. Why have you used `num_procs` 1 and 4 only? \r\n\r\nYou can use:\r\n1. `dataset = load_dataset(\"cc100\", lang=\"af\")`. Even though it has only 2 fields but there are around 9.9 mil samples. (lama had around 1.3 mil samples)\r\n2. `dataset = load_dataset(\"cc100\", lang=\"eu\")` -> 16 mil samples. (if you want something more than 9.9 mil)\r\n3. `dataset = load_dataset(\"neural_code_search\", 'search_corpus')` -> 4.7 mil samples",
"Thank you, @bhavitvyamalik \r\n\r\nMy apologies, at the moment I have not found time to do more benchmark with the proposed other datasets. I will try to do it later, but I don't want it to hold your PR, it's definitely a great improvement based on the benchmarks I did run! And the comparison to sharded is really just of interest to me to see if it's on par or slower.\r\n\r\nSo if other reviewers are happy, this definitely looks like a great improvement to me and addresses the request I made in the first place.\r\n\r\n> Why have you used num_procs 1 and 4 only?\r\n\r\nOh, no particular reason, I was just comparing to 4 shards on my desktop. Typically it's sufficient to go from 1 to 2-4 to see whether the distributed approach is faster or not. Once hit larger numbers you often run into bottlenecks like IO, and then numbers can be less representative. I hope it makes sense.",
"Tested it with a larger dataset (`srwac`) and memory utilisation remained constant with no swap memory used. @lhoestq should I also add test for the same? Last time I tried this, I got `OSError: [Errno 12] Cannot allocate memory` in CircleCI tests"
] |
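Minimal usage sketch of the multi-processed export discussed above (the dataset, batch size and number of processes are example values to tune per machine):

```python
from datasets import load_dataset

dataset = load_dataset("ascent_kb", split="train")
# With this change, to_json can shard the work over several worker processes.
dataset.to_json("ascent_kb.jsonl", batch_size=100_000, num_proc=4)
```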
958,551,619
| 2,746
|
Cannot load `few-nerd` dataset
|
closed
| 2021-08-02T22:18:57
| 2021-11-16T08:51:34
| 2021-08-03T19:45:43
|
https://github.com/huggingface/datasets/issues/2746
| null |
Mehrad0711
| false
|
[
"Hi @Mehrad0711,\r\n\r\nI'm afraid there is no \"canonical\" Hugging Face dataset named \"few-nerd\".\r\n\r\nThere are 2 kinds of datasets hosted at the Hugging Face Hub:\r\n- canonical datasets (their identifier contains no slash \"/\"): we, the Hugging Face team, supervise their implementation and we make sure they work correctly by means of our test suite\r\n- community datasets (their identifier contains a slash \"/\", where before the slash it is the username or the organization name): those datasets are uploaded to the Hub by the community, and we, the Hugging Face team, do not supervise them; it is the responsibility of the user/organization implementing them properly if they want them to be used by other users.\r\n\r\nIn this specific case, there is no \"canonical\" dataset named \"few-nerd\". On the other hand, there are two \"community\" datasets named \"few-nerd\":\r\n- [\"nbroad/few-nerd\"](https://huggingface.co/datasets/nbroad/few-nerd)\r\n- [\"dfki-nlp/few-nerd\"](https://huggingface.co/datasets/dfki-nlp/few-nerd)\r\n\r\nIf they were properly implemented, you should be able to load them this way:\r\n```python\r\n# \"nbroad/few-nerd\" community dataset\r\nds = load_dataset(\"nbroad/few-nerd\", \"supervised\")\r\n\r\n# \"dfki-nlp/few-nerd\" community dataset\r\nds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\n```\r\n\r\nHowever, they are not correctly implemented and both of them give errors:\r\n- \"nbroad/few-nerd\":\r\n ```\r\n TypeError: expected str, bytes or os.PathLike object, not dict\r\n ```\r\n- \"dfki-nlp/few-nerd\":\r\n ```\r\n ConnectionError: Couldn't reach https://cloud.tsinghua.edu.cn/f/09265750ae6340429827/?dl=1\r\n ```\r\n\r\nYou could try to contact their users/organizations to inform them about their bugs and ask them if they are planning to fix them. Alternatively you could try to implement your own script for this dataset.",
"Thanks @albertvillanova for your detailed explanation! I will resort to my own scripts for now. ",
"Hello, @Mehrad0711; Hi, @albertvillanova !\r\nI am the maintainer of the `dfki/few-nerd\" dataset script, sorry for the very late reply and hope this message finds you well!\r\nWe should use\r\n```\r\ndataset = load_dataset(\"dfki-nlp/few-nerd\", name=\"supervised\")\r\n```\r\ninstead of not specifying the \"name\" argument, where name is from `[\"supervised\", \"inter\", \"intra\"]`. Otherwise the method just treats \"supervised\" as `split`, which we reserve after specifying the name, since for each name, there are three splits: train, dev and test.\r\n\r\nAlso we use Tsinghua server source to download data files since it is the official source referred in the paper where the dataset is released (even though it is cc-by-sa-4.0 licensed, means we can copy the data anywhere after mentioning the license\r\n). Sometimes the server just runs down due to high pressure, kinda weird (we encountered the same server problem serveral times a month when we conducted experiments on Few-NERD XD). I tried the script just now and it works perfectly!\r\n```\r\n>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 18824\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 37648\r\n })\r\n})\r\n>>> dataset[\"train\"]\r\nDataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n})\r\n>>> dataset[\"train\"][0]\r\n{'id': '0', 'tokens': ['Paul', 'International', 'airport', '.'], 'ner_tags': [0, 0, 0, 0], 'fine_ner_tags': [0, 0, 0, 0]}\r\n```\r\nAnyways if you cannot stand the pain with the server and its slow download speed, you can also download the `dfki/few-nerd.py` script from HF and change the `_URLs` to your personal drive (after you once successfully download the data and upload to your cloud drive), and then load the .py script locally.\r\n\r\nHope this reply can still be any help. If you still have problems with it, feel free to ask here and I am glad to help!\r\nBest wishes.",
"Hi @chen-yuxuan, thanks for your answer.\r\n\r\nJust a few comments:\r\n\r\n- Please, note that as we use `datasets.load_dataset` implementation, we can pass the configuration name as the second positional argument (no need to pass explicitly `name=`) and it downloads the 3 splits:\r\n```python\r\n In [4]: ds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.5k/11.5k [00:00<00:00, 2.85MB/s]\r\nDownloading and preparing dataset few_nerd/supervised to .cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e40882b71f037a4a1f232025899170fbe8113cd2f4a26dddd2add7222a077255...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6M/14.6M [01:16<00:00, 190kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [01:14<00:00, 160kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.0M/12.0M [01:04<00:00, 186kB/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [03:58<00:00, 79.45s/it]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.11it/s]\r\n```\r\n\r\n- On the other hand, please note that your script does not work on Windows machines, because you call `open()` without passing the encoding parameter:\r\n```\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\dfki-nlp___few-nerd\\e40882b71f037a4a1f232025899170fbe8113cd2f4a26dddd2add7222a077255\\few-nerd.py in <genexpr>(.0)\r\n 276 assert filepath[-4:] == \".txt\"\r\n 277\r\n--> 278 num_lines = sum(1 for _ in open(filepath))\r\n 279 id = 0\r\n 280\r\n\r\n.venv\\lib\\encodings\\cp1252.py in decode(self, input, final)\r\n 21 class IncrementalDecoder(codecs.IncrementalDecoder):\r\n 22 def decode(self, input, final=False):\r\n---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n 24\r\n 25 class StreamWriter(Codec,codecs.StreamWriter):\r\n\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 5238: character maps to <undefined>\r\n```\r\n\r\nIf you would like your script to be usable on Windows machines, you should pass `encoding=\"utf-8\"` to every `open()` function:\r\n- line 278: `num_lines = sum(1 for _ in open(filepath, encoding=\"utf-8\"))`\r\n- line 281: `with open(filepath, \"r\", encoding=\"utf-8\")`",
"Thank you @albertvillanova for your detailed feedback!\r\n\r\n> no need to pass explicitly `name=`\r\n\r\nGood catch! I thought `split` stands before `name` in the argument list... but now it is all clear to me, sounds cool! Thanks for the explanation.\r\n\r\nAnyways in our old code it still looks bit confusing if we only want one split but the function downloads all, so to allow efficient downloading, I optimized the code a bit so that only the specified split data is downloaded. now we get\r\n```\r\n>>> x = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading and preparing dataset few_nerd/supervised to /home/user/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/8e7ab598946cd5b395dcec6ea239123c8dff5b58b8e1c03b0c595b540248a885...\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████| 14.6M/14.6M [01:01<00:00, 238kB/s]\r\n100%|██████████████████████████████████████████████████████████████████████| 3359329/3359329 [00:12<00:00, 275462.84it/s]\r\n100%|████████████████████████████████████████████████████████████████████████| 482037/482037 [00:01<00:00, 278633.64it/s]\r\n100%|████████████████████████████████████████████████████████████████████████| 958765/958765 [00:03<00:00, 267472.83it/s]\r\nDataset few_nerd downloaded and prepared to /home/user/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/8e7ab598946cd5b395dcec6ea239123c8dff5b58b8e1c03b0c595b540248a885. Subsequent calls will reuse this data.\r\n```\r\nwhere only one progress bar indicates downloading, and the three others just indicate pre-processing for the train, dev, test set.\r\n\r\nFor the encoding issue, I have made corresponding changes for the two lines you pointed out. However, I have no windows machine at hand, I would really appreciate it if you could help test on your end.\r\n\r\nAll the updates are uploaded to HF under `dfki-nlp` account where I am working for. \r\nThank you again for your kind help!\r\n",
"Hi @chen-yuxuan,\r\n\r\nI have tested on Windows and now it works perfectly, after the fixing of the encoding issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.5k/11.5k [00:00<?, ?B/s]\r\nDownloading and preparing dataset few_nerd/supervised to C:\\Users\\username\\.cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e1ceeaee82073fea12206e4461c7cfcd67e68c8f3ebeca179bddcacee00c4511...\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3359329/3359329 [00:25<00:00, 129427.23it/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 482037/482037 [00:03<00:00, 134513.66it/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 958765/958765 [00:06<00:00, 143152.35it/s]\r\nDataset few_nerd downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e1ceeaee82073fea12206e4461c7cfcd67e68c8f3ebeca179bddcacee00c4511. Subsequent calls will reuse this data.765 [00:06<00:00, 139045.03it/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 174.71it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 18824\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 37648\r\n })\r\n})\r\n```"
] |
958,269,579
| 2,745
|
added semeval18_emotion_classification dataset
|
closed
| 2021-08-02T15:39:55
| 2021-10-29T09:22:05
| 2021-09-21T09:48:35
|
https://github.com/huggingface/datasets/pull/2745
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2745",
"html_url": "https://github.com/huggingface/datasets/pull/2745",
"diff_url": "https://github.com/huggingface/datasets/pull/2745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2745.patch",
"merged_at": "2021-09-21T09:48:35"
}
|
maxpel
| true
|
[
"For training the multilabel classifier, I would combine the labels into a list, for example for the English dataset:\r\n\r\n```\r\ndfpre=pd.read_csv(path+\"2018-E-c-En-train.txt\",sep=\"\\t\")\r\ndfpre['list'] = dfpre[dfpre.columns[2:]].values.tolist()\r\ndf = dfpre[['Tweet', 'list']].copy()\r\ndf.rename(columns={'list': 'labels'}, inplace=True)\r\n```",
"Hi @maxpel , have you had a chance to take my comments into account ?\r\n\r\nLet me know if you have questions or if I can help :)",
"Hi @lhoestq ! I did take your comments into account, changed the naming and tried to add dummy data (manually). I am not sure if the dummy data is correct, maybe you can take a look at that.\r\nThe model card is still missing as I am currently very busy.",
"Thanks ! The dummy data looks all good, good job :)\r\n\r\nThe CI error can be fixed by merging `master` into your branch\r\n```bash\r\ngit fetch upstream\r\ngit merge upstream/master\r\n```",
"Hi! I just added the model card and I did the merge you showed above. Should I then add and commit again? The CI error is still there right now.",
"@lhoestq Unfortunately, I discovered a problem with the test data sets on the competion page (train and dev is fine). They still contain NONE labels for each of the emotions, for example for English: http://saifmohammad.com/WebDocs/AIT-2018/AIT2018-DATA/AIT2018-TEST-DATA/semeval2018englishtestfiles/2018-E-c-En-test.zip\r\nLuckily, a zip file with all data of the competition contains the correct labels also for the test set:\r\nhttp://saifmohammad.com/WebDocs/AIT-2018/AIT2018-DATA/SemEval2018-Task1-all-data.zip\r\nWhat's the best way to correct this?",
"Hi ! I think we can edit the sem_eval_2018_task_1.py file to use this URL instead, and maybe update the `os.path.join` calls to the new paths to the text data in the new ZIP file. Would you like to try to make this work ?"
] |
958,146,637
| 2,744
|
Fix key by recreating metadata JSON for journalists_questions dataset
|
closed
| 2021-08-02T13:27:53
| 2021-08-03T09:25:34
| 2021-08-03T09:25:33
|
https://github.com/huggingface/datasets/pull/2744
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2744",
"html_url": "https://github.com/huggingface/datasets/pull/2744",
"diff_url": "https://github.com/huggingface/datasets/pull/2744.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2744.patch",
"merged_at": "2021-08-03T09:25:33"
}
|
albertvillanova
| true
|
[] |
958,119,251
| 2,743
|
Dataset JSON is incorrect
|
closed
| 2021-08-02T13:01:26
| 2021-08-03T10:06:57
| 2021-08-03T09:25:33
|
https://github.com/huggingface/datasets/issues/2743
| null |
severo
| false
|
[
"As discussed, the metadata JSON files must be regenerated because the keys were nor properly generated and they will not be read by the builder:\r\n> Indeed there is some problem/bug while reading the datasets_info.json file: there is a mismatch with the config.name keys in the file...\r\nIn the meanwhile, in order to be able to use the datasets_info.json file content, you can create the builder without passing the name :\r\n```\r\nIn [25]: builder = datasets.load_dataset_builder(\"journalists_questions\")\r\nIn [26]: builder.info.splits\r\nOut[26]: {'train': SplitInfo(name='train', num_bytes=342296, num_examples=10077, dataset_name='journalists_questions')}\r\n```\r\n\r\nAfter regenerating the metadata JSON file for this dataset, I get the right key:\r\n```\r\n{\"plain_text\": {\"description\": \"The journalists_questions corpus (\r\n```",
"Thanks!"
] |
958,114,064
| 2,742
|
Improve detection of streamable file types
|
closed
| 2021-08-02T12:55:09
| 2021-11-12T17:18:10
| 2021-11-12T17:18:10
|
https://github.com/huggingface/datasets/issues/2742
| null |
severo
| false
|
[
"maybe we should rather attempt to download a `Range` from the server and see if it works?"
] |
957,979,559
| 2,741
|
Add Hypersim dataset
|
open
| 2021-08-02T10:06:50
| 2021-12-08T12:06:51
| null |
https://github.com/huggingface/datasets/issues/2741
| null |
osanseviero
| false
|
[] |
957,911,035
| 2,740
|
Update release instructions
|
closed
| 2021-08-02T08:46:00
| 2021-08-02T14:39:56
| 2021-08-02T14:39:56
|
https://github.com/huggingface/datasets/pull/2740
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2740",
"html_url": "https://github.com/huggingface/datasets/pull/2740",
"diff_url": "https://github.com/huggingface/datasets/pull/2740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2740.patch",
"merged_at": "2021-08-02T14:39:56"
}
|
albertvillanova
| true
|
[] |
957,751,260
| 2,739
|
Pass tokenize to sacrebleu only if explicitly passed by user
|
closed
| 2021-08-02T05:09:05
| 2021-08-03T04:23:37
| 2021-08-03T04:23:37
|
https://github.com/huggingface/datasets/pull/2739
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2739",
"html_url": "https://github.com/huggingface/datasets/pull/2739",
"diff_url": "https://github.com/huggingface/datasets/pull/2739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2739.patch",
"merged_at": "2021-08-03T04:23:37"
}
|
albertvillanova
| true
|
[] |
957,517,746
| 2,738
|
Sunbird AI Ugandan low resource language dataset
|
closed
| 2021-08-01T15:18:00
| 2022-10-03T09:37:30
| 2022-10-03T09:37:30
|
https://github.com/huggingface/datasets/pull/2738
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2738",
"html_url": "https://github.com/huggingface/datasets/pull/2738",
"diff_url": "https://github.com/huggingface/datasets/pull/2738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2738.patch",
"merged_at": null
}
|
ak3ra
| true
|
[
"Hi @ak3ra , have you had a chance to take my comments into account ?\r\n\r\nLet me know if you have questions or if I can help :)",
"@lhoestq Working on this, thanks for the detailed review :) ",
"Hi ! Cool thanks :)\r\nFeel free to merge master into your branch to fix the CI issues\r\n\r\nLet me know if you have questions or if I can help",
"Thanks for your contribution, @ak3ra. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
957,124,881
| 2,737
|
SacreBLEU update
|
closed
| 2021-07-30T23:53:08
| 2021-09-22T10:47:41
| 2021-08-03T04:23:37
|
https://github.com/huggingface/datasets/issues/2737
| null |
devrimcavusoglu
| false
|
[
"Hi @devrimcavusoglu, \r\nI tried your code with latest version of `datasets`and `sacrebleu==1.5.1` and it's running fine after changing one small thing:\r\n```\r\nsacrebleu = datasets.load_metric('sacrebleu')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of the party\"]\r\nreferences = [[\"It is a guide to action that ensures that the military will forever heed Party commands\"]] # double brackets here should do the work\r\nresults = sacrebleu.compute(predictions=predictions, references=references)\r\nprint(results)\r\noutput: {'score': 41.180376356915765, 'counts': [11, 8, 6, 4], 'totals': [18, 17, 16, 15], 'precisions': [61.111111111111114, 47.05882352941177, 37.5, 26.666666666666668], 'bp': 1.0, 'sys_len': 18, 'ref_len': 16}\r\n```",
"@bhavitvyamalik hmm. I forgot double brackets, but still didn't work when used it with double brackets. It may be an isseu with platform (using win-10 currently), or versions. What is your platform and your version info for datasets, python, and sacrebleu ?",
"You can check that here, I've reproduced your code in [Google colab](https://colab.research.google.com/drive/1X90fHRgMLKczOVgVk7NDEw_ciZFDjaCM?usp=sharing). Looks like there was some issue in `sacrebleu` which was fixed later from what I've found [here](https://github.com/pytorch/fairseq/issues/2049#issuecomment-622367967). Upgrading `sacrebleu` to latest version should work.",
"It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. See my Google Colab: https://colab.research.google.com/drive/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing\r\n\r\nI'm reopening this Issue and making a Pull Request to fix it.",
"> It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. See my Google Colab: https://colab.research.google.com/drive/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing\r\n> \r\n> I'm reopening this Issue and making a Pull Request to fix it.\r\n\r\nHow did you solve him"
] |
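The working call from the discussion above, as a self-contained sketch (note the double brackets: sacrebleu expects one list of reference strings per prediction):

```python
from datasets import load_metric

sacrebleu = load_metric("sacrebleu")
predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"]
# Each prediction gets a list of references, hence the nested list.
references = [["It is a guide to action that ensures that the military will forever heed Party commands"]]
results = sacrebleu.compute(predictions=predictions, references=references)
print(results["score"])  # ~41.18
```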
956,895,199
| 2,736
|
Add Microsoft Building Footprints dataset
|
open
| 2021-07-30T16:17:08
| 2021-12-08T12:09:03
| null |
https://github.com/huggingface/datasets/issues/2736
| null |
albertvillanova
| false
|
[
"Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!"
] |
956,889,365
| 2,735
|
Add Open Buildings dataset
|
open
| 2021-07-30T16:08:39
| 2021-07-31T05:01:25
| null |
https://github.com/huggingface/datasets/issues/2735
| null |
albertvillanova
| false
|
[] |
956,844,874
| 2,734
|
Update BibTeX entry
|
closed
| 2021-07-30T15:22:51
| 2021-07-30T15:47:58
| 2021-07-30T15:47:58
|
https://github.com/huggingface/datasets/pull/2734
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2734",
"html_url": "https://github.com/huggingface/datasets/pull/2734",
"diff_url": "https://github.com/huggingface/datasets/pull/2734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2734.patch",
"merged_at": "2021-07-30T15:47:58"
}
|
albertvillanova
| true
|
[] |
956,725,476
| 2,733
|
Add missing parquet known extension
|
closed
| 2021-07-30T13:01:20
| 2021-07-30T13:24:31
| 2021-07-30T13:24:30
|
https://github.com/huggingface/datasets/pull/2733
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2733",
"html_url": "https://github.com/huggingface/datasets/pull/2733",
"diff_url": "https://github.com/huggingface/datasets/pull/2733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2733.patch",
"merged_at": "2021-07-30T13:24:30"
}
|
lhoestq
| true
|
[] |
956,676,360
| 2,732
|
Updated TTC4900 Dataset
|
closed
| 2021-07-30T11:52:14
| 2021-07-30T16:00:51
| 2021-07-30T15:58:14
|
https://github.com/huggingface/datasets/pull/2732
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2732",
"html_url": "https://github.com/huggingface/datasets/pull/2732",
"diff_url": "https://github.com/huggingface/datasets/pull/2732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2732.patch",
"merged_at": "2021-07-30T15:58:14"
}
|
yavuzKomecoglu
| true
|
[
"@lhoestq, lütfen bu PR'ı gözden geçirebilir misiniz?",
"> Thanks ! This looks all good now :)\r\n\r\nThanks"
] |
956,087,452
| 2,731
|
Adding to_tf_dataset method
|
closed
| 2021-07-29T18:10:25
| 2021-09-16T13:50:54
| 2021-09-16T13:50:54
|
https://github.com/huggingface/datasets/pull/2731
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2731",
"html_url": "https://github.com/huggingface/datasets/pull/2731",
"diff_url": "https://github.com/huggingface/datasets/pull/2731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2731.patch",
"merged_at": "2021-09-16T13:50:53"
}
|
Rocketknight1
| true
|
[
"This seems to be working reasonably well in testing, and performance is way better. `tf.py_function` has been dropped for an input generator, but I moved as much of the code as possible outside the generator to allow TF to compile it correctly. I also avoid `tf.RaggedTensor` at all costs, and do the shuffle in the dataset followed by accessing sequential chunks, instead of shuffling an index tensor. The combination of all of these gives us a more flexible data loader as well as a ~20X boost in performance compared to the first solution.",
"I made a change to the `TFFormatter` in this PR that will need some changes to the tests, so I wanted to ping @lhoestq and anyone else before I made those changes.\r\n\r\nThe key problem is that up until now the `TFFormatter` always returns `RaggedTensor`, created using the very slow `tf.ragged.constant` function. This is a big performance penalty, but it's also (imo) surprising for users - `RaggedTensor` handles tensors where one dimension has variable length. This is a good choice for tokenized datasets with variable sequence length, but it's an odd choice when the non-batch dimensions are constant, such as in image datasets, or in datasets where all samples are padded to the same length (e.g. for TPU training).\r\n\r\nThe change I made was to try to return standard `Tensor` objects instead of `RaggedTensor` when all the samples in the batch had the same shape, and if that was not the case to fall back to fast `RaggedTensor` creation with `tf.ragged.stack`, and only falling back to the very slow `tf.ragged.constant` function as a last resort. I think this will match user expectations in most cases and greatly improve performance, but it's a (very slightly) breaking change, so any feedback is welcome!",
"Also I really can't emphasize enough how slow `tf.ragged.constant` is, it's bad enough to create a data pipeline bottleneck in more or less any training setup:\r\n\r\n",
"Hi @lhoestq, the tests have been modified and everything is passing. The Windows tests look to be failing for an unrelated reason, but other than that I'm ready to merge if you are!",
"Hi @Rocketknight1 ! Feel free to merge `master` into this branch to fix and run the full CI :)",
"@lhoestq rebased onto master and it looks good! I'm doing some testing with new notebook examples, but are you happy to merge if that looks good?",
"@lhoestq No, I'm happy to merge it as-is and add documentation afterwards!"
] |
955,987,834
| 2,730
|
Update CommonVoice with new release
|
open
| 2021-07-29T15:59:59
| 2021-08-07T16:19:19
| null |
https://github.com/huggingface/datasets/issues/2730
| null |
yjernite
| false
|
[
"cc @patrickvonplaten?",
"Does anybody know if there is a bundled link, which would allow direct data download instead of manual? \r\nSomething similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj \r\n",
"Also see: https://github.com/common-voice/common-voice-bundler/issues/15"
] |
955,920,489
| 2,729
|
Fix IndexError while loading Arabic Billion Words dataset
|
closed
| 2021-07-29T14:47:02
| 2021-07-30T13:03:55
| 2021-07-30T13:03:55
|
https://github.com/huggingface/datasets/pull/2729
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2729",
"html_url": "https://github.com/huggingface/datasets/pull/2729",
"diff_url": "https://github.com/huggingface/datasets/pull/2729.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2729.patch",
"merged_at": "2021-07-30T13:03:55"
}
|
albertvillanova
| true
|
[] |
955,892,970
| 2,728
|
Concurrent use of same dataset (already downloaded)
|
open
| 2021-07-29T14:18:38
| 2021-08-02T07:25:57
| null |
https://github.com/huggingface/datasets/issues/2728
| null |
PierreColombo
| false
|
[
"Launching simultaneous job relying on the same datasets try some writing issue. I guess it is unexpected since I only need to load some already downloaded file.",
"If i have two jobs that use the same dataset. I got :\r\n\r\n\r\n File \"compute_measures.py\", line 181, in <module>\r\n train_loader, val_loader, test_loader = get_dataloader(args)\r\n File \"/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py\", line 69, in get_dataloader\r\n dataset_train = load_dataset('paws', \"labeled_final\", split='train', download_mode=\"reuse_cache_if_exists\")\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py\", line 582, in download_and_prepare\r\n self._save_info()\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _save_info\r\n self.info.write_to_directory(self._cache_dir)\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/info.py\", line 195, in write_to_directory\r\n with open(os.path.join(dataset_info_dir, config.LICENSE_FILENAME), \"wb\") as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '/gpfswork/rech/toto/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/LICENSE'",
"You can probably have a solution much faster than me (first time I use the library). But I suspect some write function are used when loading the dataset from cache.",
"I have the same issue:\r\n```\r\nTraceback (most recent call last):\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 1040, in _prepare_split\r\n with ArrowWriter(features=self.info.features, path=fpath) as writer:\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/arrow_writer.py\", line 192, in __init__\r\n self.stream = pa.OSFile(self._path, \"wb\")\r\n File \"pyarrow/io.pxi\", line 829, in pyarrow.lib.OSFile.__cinit__\r\n File \"pyarrow/io.pxi\", line 844, in pyarrow.lib.OSFile._open_writable\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '/dccstor/tslm-gen/.cache/csv/default-387f1f95c084d4df/0.0.0/2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0.incomplete/csv-validation.arrow'. Detail: [errno 2] No such file or directory\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/dccstor/tslm/elron/tslm-gen/train.py\", line 510, in <module>\r\n main()\r\n File \"/dccstor/tslm/elron/tslm-gen/train.py\", line 246, in main\r\n datasets = prepare_dataset(dataset_args, logger)\r\n File \"/dccstor/tslm/elron/tslm-gen/data.py\", line 157, in prepare_dataset\r\n datasets = load_dataset(extension, data_files=data_files, split=dataset_split, cache_dir=dataset_args.dataset_cache_dir, na_filter=False, download_mode=dataset_args.dataset_generate_mode)\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/load.py\", line 742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 654, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 2] Failed to open local file '/dccstor/tslm-gen/.cache/csv/default-387f1f95c084d4df/0.0.0/2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0.incomplete/csv-validation.arrow'. Detail: [errno 2] No such file or directory\r\n```"
] |
955,812,149
| 2,727
|
Error in loading the Arabic Billion Words Corpus
|
closed
| 2021-07-29T12:53:09
| 2021-07-30T13:03:55
| 2021-07-30T13:03:55
|
https://github.com/huggingface/datasets/issues/2727
| null |
M-Salti
| false
|
[
"I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:\r\nFor the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like:\r\n```\r\n<Techreen>\r\n <ID>TRN_ARB_0248167</ID>\r\n <URL>http://tishreen.news.sy/tishreen/public/read/248240</URL>\r\n <Headline>Removed, because the original articles was in English</Headline>\r\n</Techreen>\r\n```\r\n\r\nand all the 288 faulty records in the `Almustaqbal` config look like:\r\n```\r\n<Almustaqbal>\r\n <ID>MTL_ARB_0028398</ID>\r\n \r\n <URL>http://www.almustaqbal.com/v4/article.aspx?type=NP&ArticleID=179015</URL>\r\n <Headline> Removed because it is not available in the original site</Headline>\r\n</Almustaqbal>\r\n```\r\n\r\nso the error is happening because the articles were removed and so the associated records lack the `Text` tag.\r\n\r\nIn this case, I think we just need to catch the `IndexError` and ignore (pass) it.\r\n",
"Thanks @M-Salti for reporting this issue and for your investigation.\r\n\r\nIndeed, those `IndexError` should be catched and the corresponding record should be ignored.\r\n\r\nI'm opening a Pull Request to fix it."
] |
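A rough sketch of the kind of fix discussed above (the helper and the regex are assumptions for illustration, not the actual loading-script code): records whose article was removed have no `Text` tag and should simply be skipped instead of raising an `IndexError`.

```python
import re

def iter_texts(records):
    # records: iterable of raw XML snippets, one per article
    for record in records:
        match = re.search(r"<Text>(.*?)</Text>", record, flags=re.DOTALL)
        if match is None:
            # The article was removed from the source site, so there is no body to yield.
            continue
        yield match.group(1).strip()
```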
955,674,388
| 2,726
|
Typo fix `tokenize_exemple`
|
closed
| 2021-07-29T10:03:37
| 2021-07-29T12:00:25
| 2021-07-29T12:00:25
|
https://github.com/huggingface/datasets/pull/2726
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2726",
"html_url": "https://github.com/huggingface/datasets/pull/2726",
"diff_url": "https://github.com/huggingface/datasets/pull/2726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2726.patch",
"merged_at": "2021-07-29T12:00:25"
}
|
shabie
| true
|
[] |
955,020,776
| 2,725
|
Pass use_auth_token to request_etags
|
closed
| 2021-07-28T16:13:29
| 2021-07-28T16:38:02
| 2021-07-28T16:38:02
|
https://github.com/huggingface/datasets/pull/2725
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2725",
"html_url": "https://github.com/huggingface/datasets/pull/2725",
"diff_url": "https://github.com/huggingface/datasets/pull/2725.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2725.patch",
"merged_at": "2021-07-28T16:38:01"
}
|
albertvillanova
| true
|
[] |
954,919,607
| 2,724
|
404 Error when loading remote data files from private repo
|
closed
| 2021-07-28T14:24:23
| 2021-07-29T04:58:49
| 2021-07-28T16:38:01
|
https://github.com/huggingface/datasets/issues/2724
| null |
albertvillanova
| false
|
[
"I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160",
"Yes, I remember having properly implemented that: \r\n- https://github.com/huggingface/datasets/commit/7a9c62f7cef9ecc293f629f859d4375a6bd26dc8#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R160\r\n- https://github.com/huggingface/datasets/pull/2628/commits/6350a03b4b830339a745f7b1da46ece784ca734c\r\n\r\nBut a subsequent refactoring accidentally removed it...",
"I have opened a PR to fix it @lewtun."
] |
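A sketch of the use case this fixes (the repository URL is a placeholder): when loading remote data files from a private repo, the token also has to be used when requesting their ETags.

```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/my-org/my-private-repo/resolve/main/train.jsonl",
    use_auth_token=True,  # reuse the token stored by `huggingface-cli login`
)
```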
954,864,104
| 2,723
|
Fix en subset by modifying dataset_info with correct validation infos
|
closed
| 2021-07-28T13:36:19
| 2021-07-28T15:22:23
| 2021-07-28T15:22:23
|
https://github.com/huggingface/datasets/pull/2723
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2723",
"html_url": "https://github.com/huggingface/datasets/pull/2723",
"diff_url": "https://github.com/huggingface/datasets/pull/2723.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2723.patch",
"merged_at": "2021-07-28T15:22:23"
}
|
thomasw21
| true
|
[] |
954,446,053
| 2,722
|
Missing cache file
|
closed
| 2021-07-28T03:52:07
| 2022-03-21T08:27:51
| 2022-03-21T08:27:51
|
https://github.com/huggingface/datasets/issues/2722
| null |
PosoSAgapo
| false
|
[
"This could be solved by going to the glue/ directory and delete sst2 directory, then load the dataset again will help you redownload the dataset.",
"Hi ! Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset"
] |
954,238,230
| 2,721
|
Deal with the bad check in test_load.py
|
closed
| 2021-07-27T20:23:23
| 2021-07-28T09:58:34
| 2021-07-28T08:53:18
|
https://github.com/huggingface/datasets/pull/2721
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2721",
"html_url": "https://github.com/huggingface/datasets/pull/2721",
"diff_url": "https://github.com/huggingface/datasets/pull/2721.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2721.patch",
"merged_at": "2021-07-28T08:53:18"
}
|
mariosasko
| true
|
[
"Hi ! I did a change for this test already in #2662 :\r\n\r\nhttps://github.com/huggingface/datasets/blob/00686c46b7aaf6bfcd4102cec300a3c031284a5a/tests/test_load.py#L312-L316\r\n\r\n(though I have to change the variable name `m_combined_path` to `m_url` or something)\r\n\r\nI guess it's ok to remove this check for now :)"
] |
954,024,426
| 2,720
|
fix: 🐛 fix two typos
|
closed
| 2021-07-27T15:50:17
| 2021-07-27T18:38:17
| 2021-07-27T18:38:16
|
https://github.com/huggingface/datasets/pull/2720
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2720",
"html_url": "https://github.com/huggingface/datasets/pull/2720",
"diff_url": "https://github.com/huggingface/datasets/pull/2720.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2720.patch",
"merged_at": "2021-07-27T18:38:16"
}
|
severo
| true
|
[] |
953,932,416
| 2,719
|
Use ETag in streaming mode to detect resource updates
|
open
| 2021-07-27T14:17:09
| 2021-10-22T09:36:08
| null |
https://github.com/huggingface/datasets/issues/2719
| null |
severo
| false
|
[] |
953,360,663
| 2,718
|
New documentation structure
|
closed
| 2021-07-26T23:15:13
| 2021-09-13T17:20:53
| 2021-09-13T17:20:52
|
https://github.com/huggingface/datasets/pull/2718
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2718",
"html_url": "https://github.com/huggingface/datasets/pull/2718",
"diff_url": "https://github.com/huggingface/datasets/pull/2718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2718.patch",
"merged_at": "2021-09-13T17:20:52"
}
|
stevhliu
| true
|
[
"I just did some minor changes + added some content in these sections: share, about arrow, about cache\r\n\r\nFeel free to mark this PR as ready for review ! :)",
"I just separated the `Share` How-to page into three pages: share, dataset_script and dataset_card.\r\n\r\nThis way in the share page we can explain in more details how to share a community or a canonical dataset - focus in their differences and the steps to upload them.\r\n\r\nAlso given that making a dataset script or a dataset card both require several steps, I feel like it's better to have dedicated pages for them.\r\n\r\nLet me know what you think @stevhliu and others. We can still revert this change if you feel like it was better with everything in the same place.",
"I just added some minor changes to match the style, fix typos, etc. Great work on the conceptual guides, I learned a lot from them and I'm sure they will help a lot of other people too!\r\n\r\nI am fine with splitting `Share` into three separate pages. I think this probably makes it easier for users to navigate, instead of having to scroll up and down on a really long single page.",
"Thanks a lot for all the suggestions ! I'm doing the final changes based on the remaining comments, then we can merge and release v1.12 of `datasets` and the new documentation ^^",
"Alright I think I took all the suggestions and comments into account :)\r\nThanks everyone for the help !"
] |
952,979,976
| 2,717
|
Fix shuffle on IterableDataset that disables batching in case any functions were mapped
|
closed
| 2021-07-26T14:42:22
| 2021-07-26T18:04:14
| 2021-07-26T16:30:06
|
https://github.com/huggingface/datasets/pull/2717
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2717",
"html_url": "https://github.com/huggingface/datasets/pull/2717",
"diff_url": "https://github.com/huggingface/datasets/pull/2717.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2717.patch",
"merged_at": "2021-07-26T16:30:05"
}
|
amankhandelia
| true
|
[] |
952,902,778
| 2,716
|
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
|
closed
| 2021-07-26T13:24:59
| 2021-07-26T18:04:43
| 2021-07-26T18:04:43
|
https://github.com/huggingface/datasets/issues/2716
| null |
amankhandelia
| false
|
[
"Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)",
"Have raised the PR [here](https://github.com/huggingface/datasets/pull/2717)",
"Fixed by #2717."
] |
952,845,229
| 2,715
|
Update PAN-X data URL in XTREME dataset
|
closed
| 2021-07-26T12:21:17
| 2021-07-26T13:27:59
| 2021-07-26T13:27:59
|
https://github.com/huggingface/datasets/pull/2715
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2715",
"html_url": "https://github.com/huggingface/datasets/pull/2715",
"diff_url": "https://github.com/huggingface/datasets/pull/2715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2715.patch",
"merged_at": "2021-07-26T13:27:59"
}
|
albertvillanova
| true
|
[] |