id (int64) | number (int64) | title (string) | state (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list of string)
|---|---|---|---|---|---|---|---|---|---|---|---|
1,078,543,625 | 3,424 | Add RedCaps dataset | closed | 2021-12-13T13:38:13 | 2022-01-12T14:13:16 | 2022-01-12T14:13:15 | https://github.com/huggingface/datasets/pull/3424 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3424",
"html_url": "https://github.com/huggingface/datasets/pull/3424",
"diff_url": "https://github.com/huggingface/datasets/pull/3424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3424.patch",
"merged_at": "2022-01-12T14:13:15"
} | mariosasko | true | [
"Cool ! If you want you can include `dataset_infos.json` but only for the main configurations. That's what we do for example for translation datasets when there are too many configs",
"@lhoestq I've added an example that uses `map` to download the images."
] |
1,078,049,638 | 3,423 | data duplicate when setting num_works > 1 with streaming data | closed | 2021-12-13T03:43:17 | 2022-12-14T16:04:22 | 2022-12-14T16:04:22 | https://github.com/huggingface/datasets/issues/3423 | null | cloudyuyuyu | false | [
"Hi ! Thanks for reporting :)\r\n\r\nWhen using a PyTorch's data loader with `num_workers>1` and an iterable dataset, each worker streams the exact same data by default, resulting in duplicate data when iterating using the data loader.\r\n\r\nWe can probably fix this in `datasets` by checking `torch.utils.data.get_... |
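The fix described in the comment above is the standard worker-sharding pattern: each DataLoader worker keeps only its own slice of the stream (which is what `torch.utils.data.get_worker_info()` makes possible). A minimal stdlib sketch of the idea, independent of PyTorch and `datasets`:

```python
from itertools import islice

def make_stream():
    # Stand-in for a streamed (iterable) dataset: a fresh iterator per worker.
    return iter(range(10))

def worker_shard(stream, worker_id, num_workers):
    # Each worker yields every num_workers-th example, offset by its id,
    # so together the workers cover the stream without duplicates.
    return islice(stream, worker_id, None, num_workers)

shards = [list(worker_shard(make_stream(), w, 3)) for w in range(3)]
print(shards)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

In a real PyTorch setup, `worker_id` and `num_workers` would come from `torch.utils.data.get_worker_info()` inside the dataset's `__iter__`.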
1,078,022,619 | 3,422 | Error about load_metric | closed | 2021-12-13T02:49:51 | 2022-01-07T14:06:47 | 2022-01-07T14:06:47 | https://github.com/huggingface/datasets/issues/3422 | null | jiacheng-ye | false | [
"Hi ! I wasn't able to reproduce your error.\r\n\r\nCan you try to clear your cache at `~/.cache/huggingface/modules` and try again ?"
] |
1,077,966,571 | 3,421 | Adding mMARCO dataset | closed | 2021-12-13T00:56:43 | 2022-10-03T09:37:15 | 2022-10-03T09:37:15 | https://github.com/huggingface/datasets/pull/3421 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3421",
"html_url": "https://github.com/huggingface/datasets/pull/3421",
"diff_url": "https://github.com/huggingface/datasets/pull/3421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3421.patch",
"merged_at": null
} | lhbonifacio | true | [
"Hi @albertvillanova we've made a major overhaul of the loading script including all configurations we're making available. Could you please review it again?",
"@albertvillanova :ping_pong: ",
"Thanks @lhbonifacio for adding this dataset.\r\nHi there, i got an error about mmarco:\r\nConnectionError: Couldn't re... |
1,077,913,468 | 3,420 | Add eli5_category dataset | closed | 2021-12-12T21:30:45 | 2021-12-14T17:53:03 | 2021-12-14T17:53:02 | https://github.com/huggingface/datasets/pull/3420 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3420",
"html_url": "https://github.com/huggingface/datasets/pull/3420",
"diff_url": "https://github.com/huggingface/datasets/pull/3420.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3420.patch",
"merged_at": "2021-12-14T17:53:02"
} | jingshenSN2 | true | [
"> Thanks a lot for adding this dataset ! Good job with the dataset card and the dataset scripts - they're really good :)\r\n> \r\n> I just added minor changes\r\n\r\nThanks for fixing typos!"
] |
1,077,350,974 | 3,419 | `.to_json` is extremely slow after `.select` | open | 2021-12-11T01:36:31 | 2021-12-21T15:49:07 | null | https://github.com/huggingface/datasets/issues/3419 | null | eladsegal | false | [
"Hi ! It's slower indeed because a datasets on which `select`/`shard`/`train_test_split`/`shuffle` has been called has to do additional steps to retrieve the data of the dataset table in the right order.\r\n\r\nIndeed, if you call `dataset.select([0, 5, 10])`, the underlying table of the dataset is not altered to k... |
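The indirection described in the comment above can be pictured with plain lists (a conceptual sketch, not the actual Arrow implementation): after `select`, every read resolves an extra indices mapping on top of the untouched table.

```python
# The underlying table is not altered by .select([...]); only an indices
# mapping is stored, so every subsequent read resolves one extra hop.
table = [f"row-{i}" for i in range(100)]   # stands in for the Arrow table
indices = [10, 5, 0]                       # from dataset.select([10, 5, 0])

# Exporting reads each row through the mapping, in mapping order:
exported = [table[i] for i in indices]
print(exported)  # ['row-10', 'row-5', 'row-0']
```

One documented workaround is to call `Dataset.flatten_indices()` before exporting, which materializes a contiguous table and removes the per-row indirection.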
1,077,053,296 | 3,418 | Add Wikisource dataset | closed | 2021-12-10T17:04:44 | 2022-10-04T09:35:56 | 2022-10-03T09:37:20 | https://github.com/huggingface/datasets/pull/3418 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3418",
"html_url": "https://github.com/huggingface/datasets/pull/3418",
"diff_url": "https://github.com/huggingface/datasets/pull/3418.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3418.patch",
"merged_at": null
} | albertvillanova | true | [
"As we are removing the dataset scripts from GitHub and moving them to the Hugging Face Hub, I am going to transfer this script to the repo: https://huggingface.co/datasets/wikimedia/wikisource"
] |
1,076,943,343 | 3,417 | Fix type of bridge field in QED | closed | 2021-12-10T15:07:21 | 2021-12-14T14:39:06 | 2021-12-14T14:39:05 | https://github.com/huggingface/datasets/pull/3417 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3417",
"html_url": "https://github.com/huggingface/datasets/pull/3417",
"diff_url": "https://github.com/huggingface/datasets/pull/3417.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3417.patch",
"merged_at": "2021-12-14T14:39:05"
} | mariosasko | true | [] |
1,076,868,771 | 3,416 | disaster_response_messages unavailable | closed | 2021-12-10T13:49:17 | 2021-12-14T14:38:29 | 2021-12-14T14:38:29 | https://github.com/huggingface/datasets/issues/3416 | null | sacdallago | false | [
"Hi, thanks for reporting! This is a duplicate of https://github.com/huggingface/datasets/issues/3240. We are working on a fix.\r\n\r\n"
] |
1,076,472,534 | 3,415 | Non-deterministic tests: CI tests randomly fail | closed | 2021-12-10T06:08:59 | 2022-03-31T16:38:51 | 2022-03-31T16:38:51 | https://github.com/huggingface/datasets/issues/3415 | null | albertvillanova | false | [
"I think it might come from two different issues:\r\n1. Google Drive is an unreliable host, mainly because of quota limitations\r\n2. the staging environment can sometimes raise some errors\r\n\r\nFor Google Drive tests we could set up some retries with backup URLs if necessary I guess.\r\nFor staging on the other ... |
1,076,028,998 | 3,414 | Skip None encoding (line deleted by accident in #3195) | closed | 2021-12-09T21:17:33 | 2021-12-10T11:00:03 | 2021-12-10T11:00:02 | https://github.com/huggingface/datasets/pull/3414 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3414",
"html_url": "https://github.com/huggingface/datasets/pull/3414",
"diff_url": "https://github.com/huggingface/datasets/pull/3414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3414.patch",
"merged_at": "2021-12-10T11:00:02"
} | mariosasko | true | [] |
1,075,854,325 | 3,413 | Add WIDER FACE dataset | closed | 2021-12-09T18:03:38 | 2022-01-12T14:13:47 | 2022-01-12T14:13:47 | https://github.com/huggingface/datasets/pull/3413 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3413",
"html_url": "https://github.com/huggingface/datasets/pull/3413",
"diff_url": "https://github.com/huggingface/datasets/pull/3413.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3413.patch",
"merged_at": "2022-01-12T14:13:47"
} | mariosasko | true | [] |
1,075,846,368 | 3,412 | Fix flaky test again for s3 serialization | closed | 2021-12-09T17:54:41 | 2021-12-09T18:00:52 | 2021-12-09T18:00:52 | https://github.com/huggingface/datasets/pull/3412 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3412",
"html_url": "https://github.com/huggingface/datasets/pull/3412",
"diff_url": "https://github.com/huggingface/datasets/pull/3412.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3412.patch",
"merged_at": "2021-12-09T18:00:52"
} | lhoestq | true | [] |
1,075,846,272 | 3,411 | [chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script | open | 2021-12-09T17:54:35 | 2021-12-22T11:21:33 | null | https://github.com/huggingface/datasets/issues/3411 | null | hyusterr | false | [
"@LysandreJik not so sure who to @\r\nCould you help?",
"Hi @hyusterr, I believe it is @wlhgtc from https://github.com/huggingface/transformers/pull/9887"
] |
1,075,815,415 | 3,410 | Fix dependencies conflicts in Windows CI after conda update to 4.11 | closed | 2021-12-09T17:19:11 | 2021-12-09T17:36:20 | 2021-12-09T17:36:19 | https://github.com/huggingface/datasets/pull/3410 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3410",
"html_url": "https://github.com/huggingface/datasets/pull/3410",
"diff_url": "https://github.com/huggingface/datasets/pull/3410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3410.patch",
"merged_at": "2021-12-09T17:36:19"
} | lhoestq | true | [] |
1,075,684,593 | 3,409 | Pass new_fingerprint in multiprocessing | closed | 2021-12-09T15:12:00 | 2022-08-19T10:41:04 | 2021-12-09T17:38:43 | https://github.com/huggingface/datasets/pull/3409 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3409",
"html_url": "https://github.com/huggingface/datasets/pull/3409",
"diff_url": "https://github.com/huggingface/datasets/pull/3409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3409.patch",
"merged_at": "2021-12-09T17:38:43"
} | lhoestq | true | [
"@lhoestq Hi~, does this support that `datasets.map(func, batched=True, batch_size, num_proc>1, new_fingerprint=\"func_v1\")` even if `func` can't pickle. I also notice that you said \"Unfortunately you need picklable mapping functions to make multiprocessing work :confused: Also feel free to open an issue or send ... |
1,075,642,915 | 3,408 | Typo in Dataset viewer error message | closed | 2021-12-09T14:34:02 | 2021-12-22T11:02:53 | 2021-12-22T11:02:53 | https://github.com/huggingface/datasets/issues/3408 | null | lewtun | false | [
"Fixed, thanks\r\n<img width=\"661\" alt=\"Capture d’écran 2021-12-22 à 12 02 30\" src=\"https://user-images.githubusercontent.com/1676121/147082881-cf700e8d-0511-4431-b214-d6cf8137db10.png\">\r\n"
] |
1,074,502,225 | 3,407 | Use max number of data files to infer module | closed | 2021-12-08T14:58:43 | 2021-12-14T17:08:42 | 2021-12-14T17:08:42 | https://github.com/huggingface/datasets/pull/3407 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3407",
"html_url": "https://github.com/huggingface/datasets/pull/3407",
"diff_url": "https://github.com/huggingface/datasets/pull/3407.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3407.patch",
"merged_at": "2021-12-14T17:08:41"
} | albertvillanova | true | [
"Cool thanks :) Feel free to merge if it's all good for you"
] |
1,074,366,050 | 3,406 | Fix module inference for archive with a directory | closed | 2021-12-08T12:39:12 | 2021-12-08T13:03:30 | 2021-12-08T13:03:29 | https://github.com/huggingface/datasets/pull/3406 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3406",
"html_url": "https://github.com/huggingface/datasets/pull/3406",
"diff_url": "https://github.com/huggingface/datasets/pull/3406.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3406.patch",
"merged_at": "2021-12-08T13:03:28"
} | albertvillanova | true | [] |
1,074,360,362 | 3,405 | ZIP format inference does not work when files located in a dir inside the archive | closed | 2021-12-08T12:32:15 | 2021-12-08T13:03:29 | 2021-12-08T13:03:29 | https://github.com/huggingface/datasets/issues/3405 | null | albertvillanova | false | [] |
1,073,657,561 | 3,404 | Optimize ZIP format inference | closed | 2021-12-07T18:44:49 | 2021-12-14T17:08:41 | 2021-12-14T17:08:41 | https://github.com/huggingface/datasets/issues/3404 | null | albertvillanova | false | [] |
1,073,622,120 | 3,403 | Cannot import name 'maybe_sync' | closed | 2021-12-07T17:57:59 | 2021-12-17T07:00:35 | 2021-12-17T07:00:35 | https://github.com/huggingface/datasets/issues/3403 | null | KMFODA | false | [
"Hi ! Can you try updating `fsspec` ? The minimum version is `2021.05.0`",
"hey @lhoestq. I'm using `fsspec-2021.11.1` but still getting that error.",
"Maybe this discussion can help:\r\n\r\nhttps://github.com/fsspec/filesystem_spec/issues/597#issuecomment-958646964",
"Thanks @lhoestq. Downgrading `fsspec and... |
1,073,614,815 | 3,402 | More robust first elem check in encode/cast example | closed | 2021-12-07T17:48:16 | 2021-12-08T13:02:16 | 2021-12-08T13:02:15 | https://github.com/huggingface/datasets/pull/3402 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3402",
"html_url": "https://github.com/huggingface/datasets/pull/3402",
"diff_url": "https://github.com/huggingface/datasets/pull/3402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3402.patch",
"merged_at": "2021-12-08T13:02:15"
} | mariosasko | true | [] |
1,073,603,508 | 3,401 | Add Wikimedia pre-processed datasets | closed | 2021-12-07T17:33:19 | 2024-10-09T16:10:47 | 2024-10-09T16:10:47 | https://github.com/huggingface/datasets/issues/3401 | null | albertvillanova | false | [
"As we are planning to stop using Apache Beam (our `datasets.BeamBasedBuilder`) for the generation of some datasets (including [Wikipedia](https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py)), I have been working on [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) to:\r\n- Po... |
1,073,600,382 | 3,400 | Improve Wikipedia loading script | closed | 2021-12-07T17:29:25 | 2022-03-22T16:52:28 | 2022-03-22T16:52:28 | https://github.com/huggingface/datasets/issues/3400 | null | albertvillanova | false | [
"Thanks! See https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb for more implementation details / some data around the overhead induced by adding the extra preprocessing steps (stripping link prefixes and magic words)",
"Closed by:\r\n- #3435"
] |
1,073,593,861 | 3,399 | Add Wikisource dataset | closed | 2021-12-07T17:21:31 | 2024-10-09T16:11:27 | 2024-10-09T16:11:26 | https://github.com/huggingface/datasets/issues/3399 | null | albertvillanova | false | [
"See notebook by @geohci: https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikisource%20Processing.ipynb",
"See: https://huggingface.co/datasets/wikimedia/wikisource"
] |
1,073,590,384 | 3,398 | Add URL field to Wikimedia dataset instances: wikipedia,... | closed | 2021-12-07T17:17:27 | 2022-03-22T16:53:27 | 2022-03-22T16:53:27 | https://github.com/huggingface/datasets/issues/3398 | null | albertvillanova | false | [
"@geohci, I think the field \"url\" does not appear in the Wikimedia dumps. Therefore I guess we should generate it, using the \"title\" field and making some transformation of it (replacing spaces with underscores) and prepending the domain (created using the language)?",
"Indeed:\r\n\r\n> To re-distribute text ... |
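The transformation proposed in the comment (spaces to underscores, domain built from the language code) is a one-liner; a hedged sketch, with percent-encoding added as an assumption for titles containing special characters:

```python
from urllib.parse import quote

def wikipedia_url(language: str, title: str) -> str:
    # Spaces become underscores, the result is percent-encoded, and the
    # domain is derived from the language code, as suggested above.
    return f"https://{language}.wikipedia.org/wiki/{quote(title.replace(' ', '_'))}"

print(wikipedia_url("en", "Nikola Tesla"))  # https://en.wikipedia.org/wiki/Nikola_Tesla
```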
1,073,502,444 | 3,397 | add BNL newspapers | closed | 2021-12-07T15:43:21 | 2022-01-17T18:35:34 | 2022-01-17T18:35:34 | https://github.com/huggingface/datasets/pull/3397 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3397",
"html_url": "https://github.com/huggingface/datasets/pull/3397",
"diff_url": "https://github.com/huggingface/datasets/pull/3397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3397.patch",
"merged_at": "2022-01-17T18:35:34"
} | davanstrien | true | [
"\r\n> Also, maybe calling the dataset as \"bnl_historical_newspapers\" and setting \"processed\" as one configuration name?\r\n\r\nThis sounds like a good idea but my only question around this is how easy it would be to use the same approach for processing the other newspaper collections [https://data.bnl.lu/data/... |
1,073,467,183 | 3,396 | Install Audio dependencies to support audio decoding | closed | 2021-12-07T15:11:36 | 2022-04-25T16:12:22 | 2022-04-25T16:12:01 | https://github.com/huggingface/datasets/issues/3396 | null | albertvillanova | false | [
"https://huggingface.co/datasets/projecte-aina/parlament_parla -> works (but we still have to show an audio player)\r\n\r\nhttps://huggingface.co/datasets/openslr -> another issue: `Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/zip:/asr_javanese/data/00/00004fe6aa.flac'`",
... |
1,073,432,650 | 3,395 | Fix formatting in IterableDataset.map docs | closed | 2021-12-07T14:41:01 | 2021-12-08T10:11:33 | 2021-12-08T10:11:33 | https://github.com/huggingface/datasets/pull/3395 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3395",
"html_url": "https://github.com/huggingface/datasets/pull/3395",
"diff_url": "https://github.com/huggingface/datasets/pull/3395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3395.patch",
"merged_at": "2021-12-08T10:11:32"
} | mariosasko | true | [] |
1,073,396,308 | 3,394 | Preserve all feature types when saving a dataset on the Hub with `push_to_hub` | closed | 2021-12-07T14:08:30 | 2021-12-21T17:00:09 | 2021-12-21T17:00:09 | https://github.com/huggingface/datasets/issues/3394 | null | mariosasko | false | [
"According to this [comment in the forum](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/8?u=lhoestq), using `push_to_hub` on a dataset with `ClassLabel` can also make the feature simply disappear when it's reloaded !",
"Maybe we can also fix https://github.com/huggingface/datasets/iss... |
1,073,189,777 | 3,393 | Common Voice Belarusian Dataset | open | 2021-12-07T10:37:02 | 2021-12-09T15:56:03 | null | https://github.com/huggingface/datasets/issues/3393 | null | wiedymi | false | [] |
1,073,073,408 | 3,392 | Dataset viewer issue for `dansbecker/hackernews_hiring_posts` | closed | 2021-12-07T08:41:01 | 2021-12-07T14:04:28 | 2021-12-07T14:04:28 | https://github.com/huggingface/datasets/issues/3392 | null | severo | false | [
"This issue was fixed by me calling `all_datasets.push_to_hub(\"hackernews_hiring_posts\")`.\r\n\r\nThe previous problems were from calling `all_datasets.save_to_disk` and then pushing with `my_repo.git_add` and `my_repo.push_to_hub`.\r\n"
] |
1,072,849,055 | 3,391 | method to select columns | closed | 2021-12-07T02:44:19 | 2021-12-07T02:45:27 | 2021-12-07T02:45:27 | https://github.com/huggingface/datasets/issues/3391 | null | changjonathanc | false | [
"duplicate of #2655"
] |
1,072,462,456 | 3,390 | Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'" | closed | 2021-12-06T18:22:49 | 2021-12-06T20:22:05 | 2021-12-06T20:22:05 | https://github.com/huggingface/datasets/issues/3390 | null | R4ZZ3 | false | [
"Got solved it with push_to_hub, closing"
] |
1,072,191,865 | 3,389 | Add EDGAR | open | 2021-12-06T14:06:11 | 2022-10-05T10:40:22 | null | https://github.com/huggingface/datasets/issues/3389 | null | philschmid | false | [
"cc @juliensimon ",
"Datasets are not tracked in this repository anymore. But you can make your own dataset in the huggingface hub"
] |
1,072,022,021 | 3,388 | Fix flaky test of the temporary directory used by load_from_disk | closed | 2021-12-06T11:09:31 | 2021-12-06T11:25:03 | 2021-12-06T11:24:49 | https://github.com/huggingface/datasets/pull/3388 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3388",
"html_url": "https://github.com/huggingface/datasets/pull/3388",
"diff_url": "https://github.com/huggingface/datasets/pull/3388.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3388.patch",
"merged_at": "2021-12-06T11:24:49"
} | lhoestq | true | [
"CI failed because of a server error - merging"
] |
1,071,836,456 | 3,387 | Create Language Modeling task | closed | 2021-12-06T07:56:07 | 2021-12-17T17:18:28 | 2021-12-17T17:18:27 | https://github.com/huggingface/datasets/pull/3387 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3387",
"html_url": "https://github.com/huggingface/datasets/pull/3387",
"diff_url": "https://github.com/huggingface/datasets/pull/3387.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3387.patch",
"merged_at": "2021-12-17T17:18:27"
} | albertvillanova | true | [] |
1,071,813,141 | 3,386 | Fix typos in dataset cards | closed | 2021-12-06T07:20:40 | 2021-12-06T09:30:55 | 2021-12-06T09:30:54 | https://github.com/huggingface/datasets/pull/3386 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3386",
"html_url": "https://github.com/huggingface/datasets/pull/3386",
"diff_url": "https://github.com/huggingface/datasets/pull/3386.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3386.patch",
"merged_at": "2021-12-06T09:30:54"
} | albertvillanova | true | [] |
1,071,742,310 | 3,385 | None batched `with_transform`, `set_transform` | open | 2021-12-06T05:20:54 | 2022-01-17T15:25:01 | null | https://github.com/huggingface/datasets/issues/3385 | null | changjonathanc | false | [
"Hi ! Thanks for the suggestion :)\r\nIt makes sense to me, and it can surely be implemented by wrapping the user's function to make it a batched function. However I'm not a big fan of the inconsistency it would create with `map`: `with_transform` is batched by default while `map` isn't.\r\n\r\nIs there something y... |
1,071,594,165 | 3,384 | Adding mMARCO dataset | closed | 2021-12-05T23:59:11 | 2021-12-12T15:27:36 | 2021-12-12T15:27:36 | https://github.com/huggingface/datasets/pull/3384 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3384",
"html_url": "https://github.com/huggingface/datasets/pull/3384",
"diff_url": "https://github.com/huggingface/datasets/pull/3384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3384.patch",
"merged_at": null
} | lhbonifacio | true | [] |
1,071,551,884 | 3,383 | add Georgian data in cc100. | closed | 2021-12-05T20:38:09 | 2021-12-14T14:37:23 | 2021-12-14T14:37:22 | https://github.com/huggingface/datasets/pull/3383 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3383",
"html_url": "https://github.com/huggingface/datasets/pull/3383",
"diff_url": "https://github.com/huggingface/datasets/pull/3383.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3383.patch",
"merged_at": "2021-12-14T14:37:22"
} | AnzorGozalishvili | true | [] |
1,071,293,299 | 3,382 | #3337 Add typing overloads to Dataset.__getitem__ for mypy | closed | 2021-12-04T20:54:49 | 2021-12-14T10:28:55 | 2021-12-14T10:28:55 | https://github.com/huggingface/datasets/pull/3382 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3382",
"html_url": "https://github.com/huggingface/datasets/pull/3382",
"diff_url": "https://github.com/huggingface/datasets/pull/3382.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3382.patch",
"merged_at": "2021-12-14T10:28:54"
} | Dref360 | true | [
"Locally the `make quality` passes with the same dependencies. I would suggest upgrading flake8. (I can take care of it in another PR)\r\ncc @lhoestq ",
"Thank you for fixing flake8! I think we are ready to merge then. "
] |
1,071,283,879 | 3,381 | Unable to load audio_features from common_voice dataset | closed | 2021-12-04T19:59:11 | 2021-12-06T17:52:42 | 2021-12-06T17:52:42 | https://github.com/huggingface/datasets/issues/3381 | null | ashu5644 | false | [
"Hi ! Feel free to access `batch[\"audio\"][\"array\"]` and `batch[\"audio\"][\"sampling_rate\"]` instead\r\n\r\n`datasets` 1.16 introduced some changes in `common_voice` and now the `path` field is no longer a path to a local file (but rather the path to the file in the archive it's extracted from)",
"Thanks for... |
1,071,166,270 | 3,380 | [Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem! | closed | 2021-12-04T09:18:33 | 2022-01-11T12:29:53 | 2022-01-11T12:29:53 | https://github.com/huggingface/datasets/issues/3380 | null | LysandreJik | false | [] |
1,071,079,146 | 3,379 | iter_archive on zipfiles with better compression type check | closed | 2021-12-04T01:04:48 | 2023-01-24T13:00:19 | 2023-01-24T12:53:08 | https://github.com/huggingface/datasets/pull/3379 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3379",
"html_url": "https://github.com/huggingface/datasets/pull/3379",
"diff_url": "https://github.com/huggingface/datasets/pull/3379.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3379.patch",
"merged_at": "2023-01-24T12:53:08"
} | Mehdi2402 | true | [
"Hello @lhoestq, thank you for your answer.\r\n\r\nI don't use pytest a lot so I think I might need some help on it :) but I tried some tests for `streaming_download_manager.py` only. I don't know how to test `download_manager.py` since we need to use local files.\r\n\r\n# Comments : \r\n* In **download_manager.py*... |
1,070,580,126 | 3,378 | Add The Pile subsets | closed | 2021-12-03T13:14:54 | 2021-12-09T18:11:25 | 2021-12-09T18:11:23 | https://github.com/huggingface/datasets/pull/3378 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3378",
"html_url": "https://github.com/huggingface/datasets/pull/3378",
"diff_url": "https://github.com/huggingface/datasets/pull/3378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3378.patch",
"merged_at": "2021-12-09T18:11:23"
} | albertvillanova | true | [] |
1,070,562,907 | 3,377 | COCO 🥥 on the 🤗 Hub? | closed | 2021-12-03T12:55:27 | 2021-12-20T14:14:01 | 2021-12-20T14:14:00 | https://github.com/huggingface/datasets/pull/3377 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3377",
"html_url": "https://github.com/huggingface/datasets/pull/3377",
"diff_url": "https://github.com/huggingface/datasets/pull/3377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3377.patch",
"merged_at": null
} | merveenoyan | true | [
"@mariosasko I fixed couple of bugs",
"TO-DO: \r\n- [x] Add unlabeled 2017 splits, train and validation splits of 2015\r\n- [x] Add Class Labels as list instead",
"@mariosasko added fine & coarse grained labels, will fix the bugs (currently getting set up with VM, my internet is too slow to run the tests and do... |
1,070,522,979 | 3,376 | Update clue benchmark | closed | 2021-12-03T12:06:01 | 2021-12-08T14:14:42 | 2021-12-08T14:14:41 | https://github.com/huggingface/datasets/pull/3376 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3376",
"html_url": "https://github.com/huggingface/datasets/pull/3376",
"diff_url": "https://github.com/huggingface/datasets/pull/3376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3376.patch",
"merged_at": "2021-12-08T14:14:41"
} | mariosasko | true | [
"The CI error is due to missing tags in the CLUE dataset card - merging !"
] |
1,070,454,913 | 3,375 | Support streaming zipped dataset repo by passing only repo name | closed | 2021-12-03T10:43:05 | 2021-12-16T18:03:32 | 2021-12-16T18:03:31 | https://github.com/huggingface/datasets/pull/3375 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3375",
"html_url": "https://github.com/huggingface/datasets/pull/3375",
"diff_url": "https://github.com/huggingface/datasets/pull/3375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3375.patch",
"merged_at": "2021-12-16T18:03:31"
} | albertvillanova | true | [
"I just tested and I think this only opens one file ? If there are several files in the ZIP, only the first one is opened. To open several files from a ZIP, one has to call `open` several times.\r\n\r\nWhat about updating the CSV loader to make it `download_and_extract` zip files, and open each extracted file ?",
... |
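Opening several files from one ZIP, as the review comment suggests, amounts to a single `zipfile` handle with repeated `open` calls. A stdlib sketch of the pattern (the helper name is hypothetical, not the `datasets` implementation):

```python
import io
import zipfile

def iter_zip_members(fileobj):
    # Yield (name, bytes) for every regular file in the archive;
    # directory entries end with "/" in the member list and are skipped.
    with zipfile.ZipFile(fileobj) as zf:
        for name in zf.namelist():
            if not name.endswith("/"):
                with zf.open(name) as member:
                    yield name, member.read()

# Build a small in-memory archive to demonstrate:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("train.csv", "a,b\n1,2\n")
    zf.writestr("data/test.csv", "a,b\n3,4\n")
buf.seek(0)

names = [name for name, _ in iter_zip_members(buf)]
print(names)  # ['train.csv', 'data/test.csv']
```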
1,070,426,462 | 3,374 | NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews | closed | 2021-12-03T10:10:54 | 2021-12-08T14:14:41 | 2021-12-08T14:14:41 | https://github.com/huggingface/datasets/issues/3374 | null | Namco0816 | false | [
"Seems like the issue still exists,:\r\n`Downloading and preparing dataset clue/chid (download: 127.15 MiB, generated: 259.71 MiB, post-processed: Unknown size, total: 386.86 MiB) to /mnt/cache/tanhaochen/.cache/huggingface/datasets/clue/chid/1.0.0/e55b490cb7809dcd8db31b9a87119f2e2ec87cdc060da8a9ac070b070ca3e379...... |
1,070,406,391 | 3,373 | Support streaming zipped CSV dataset repo by passing only repo name | closed | 2021-12-03T09:48:24 | 2021-12-16T18:03:31 | 2021-12-16T18:03:31 | https://github.com/huggingface/datasets/issues/3373 | null | albertvillanova | false | [] |
1,069,948,178 | 3,372 | [SEO improvement] Add Dataset Metadata to make datasets indexable | closed | 2021-12-02T20:21:07 | 2022-03-18T09:36:48 | 2022-03-18T09:36:48 | https://github.com/huggingface/datasets/issues/3372 | null | cakiki | false | [] |
1,069,821,335 | 3,371 | New: Americas NLI dataset | closed | 2021-12-02T17:44:59 | 2021-12-08T13:58:12 | 2021-12-08T13:58:11 | https://github.com/huggingface/datasets/pull/3371 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3371",
"html_url": "https://github.com/huggingface/datasets/pull/3371",
"diff_url": "https://github.com/huggingface/datasets/pull/3371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3371.patch",
"merged_at": "2021-12-08T13:58:11"
} | fdschmidt93 | true | [] |
1,069,735,423 | 3,370 | Document a training loop for streaming dataset | closed | 2021-12-02T16:17:00 | 2021-12-03T13:34:35 | 2021-12-03T13:34:34 | https://github.com/huggingface/datasets/pull/3370 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3370",
"html_url": "https://github.com/huggingface/datasets/pull/3370",
"diff_url": "https://github.com/huggingface/datasets/pull/3370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3370.patch",
"merged_at": "2021-12-03T13:34:34"
} | lhoestq | true | [] |
1,069,587,674 | 3,369 | [Audio] Allow resampling for audio datasets in streaming mode | closed | 2021-12-02T14:04:57 | 2021-12-16T15:55:19 | 2021-12-16T15:55:19 | https://github.com/huggingface/datasets/issues/3369 | null | patrickvonplaten | false | [
"This requires implementing `cast_column` for iterable datasets, it could be a very nice addition !\r\n\r\n<s>It can also be useful to be able to disable the audio/image decoding for the dataset viewer (see PR https://github.com/huggingface/datasets/pull/3430) cc @severo </s>\r\nEDIT: actually following https://git... |
1,069,403,624 | 3,368 | Fix dict source_datasets tagset validator | closed | 2021-12-02T10:52:20 | 2021-12-02T15:48:38 | 2021-12-02T15:48:37 | https://github.com/huggingface/datasets/pull/3368 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3368",
"html_url": "https://github.com/huggingface/datasets/pull/3368",
"diff_url": "https://github.com/huggingface/datasets/pull/3368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3368.patch",
"merged_at": "2021-12-02T15:48:37"
} | albertvillanova | true | [] |
1,069,241,274 | 3,367 | Fix typo in other-structured-to-text task tag | closed | 2021-12-02T08:02:27 | 2021-12-02T16:07:14 | 2021-12-02T16:07:13 | https://github.com/huggingface/datasets/pull/3367 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3367",
"html_url": "https://github.com/huggingface/datasets/pull/3367",
"diff_url": "https://github.com/huggingface/datasets/pull/3367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3367.patch",
"merged_at": "2021-12-02T16:07:13"
} | albertvillanova | true | [] |
1,069,214,022 | 3,366 | Add multimodal datasets | open | 2021-12-02T07:24:04 | 2023-02-28T16:29:22 | null | https://github.com/huggingface/datasets/issues/3366 | null | albertvillanova | false | [] |
1,069,195,887 | 3,365 | Add task tags for multimodal datasets | closed | 2021-12-02T06:58:20 | 2023-07-25T18:21:33 | 2023-07-25T18:21:32 | https://github.com/huggingface/datasets/issues/3365 | null | albertvillanova | false | [
"The Hub pulls these tags from [here](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts) (allows multimodal tasks) now, so I'm closing this issue."
] |
1,068,851,196 | 3,364 | Use the Audio feature in the AutomaticSpeechRecognition template | closed | 2021-12-01T20:42:26 | 2022-03-24T14:34:09 | 2022-03-24T14:34:08 | https://github.com/huggingface/datasets/pull/3364 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3364",
"html_url": "https://github.com/huggingface/datasets/pull/3364",
"diff_url": "https://github.com/huggingface/datasets/pull/3364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3364.patch",
"merged_at": null
} | anton-l | true | [
"Cool !\r\n\r\nI noticed that you removed the `audio_file_path_column` field of the template, note that you also have to update all the dataset_infos.json file that still contain this outdated field. For example in the common_voice you can find this:\r\n```\r\n\"task_templates\": [{\"task\": \"automatic-speech-reco... |
1,068,824,340 | 3,363 | Update URL of Jeopardy! dataset | closed | 2021-12-01T20:08:10 | 2022-10-06T13:45:49 | 2021-12-03T12:35:01 | https://github.com/huggingface/datasets/pull/3363 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3363",
"html_url": "https://github.com/huggingface/datasets/pull/3363",
"diff_url": "https://github.com/huggingface/datasets/pull/3363.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3363.patch",
"merged_at": null
} | mariosasko | true | [
"Closing this PR in favor of #3266.",
"I think you should also close this branch"
] |
1,068,809,768 | 3,362 | Adapt image datasets | closed | 2021-12-01T19:52:01 | 2021-12-09T18:37:42 | 2021-12-09T18:37:41 | https://github.com/huggingface/datasets/pull/3362 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3362",
"html_url": "https://github.com/huggingface/datasets/pull/3362",
"diff_url": "https://github.com/huggingface/datasets/pull/3362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3362.patch",
"merged_at": "2021-12-09T18:37:41"
} | mariosasko | true | [
"This PR can be merged after #3163 is merged (this PR is pretty big because I was working on the forked branch).\r\n\r\n@lhoestq @albertvillanova Could you please take a look at the changes in `src/datasets/utils/streaming_download_manager.py`? These changes were required to support streaming of the `cats_vs_dogs` ... |
1,068,736,268 | 3,361 | Jeopardy _URL access denied | closed | 2021-12-01T18:21:33 | 2021-12-11T12:50:23 | 2021-12-06T11:16:31 | https://github.com/huggingface/datasets/issues/3361 | null | tianjianjiang | false | [
"Just a side note: duplicate #3264"
] |
1,068,724,697 | 3,360 | Add The Pile USPTO subset | closed | 2021-12-01T18:08:05 | 2021-12-03T11:45:29 | 2021-12-03T11:45:28 | https://github.com/huggingface/datasets/pull/3360 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3360",
"html_url": "https://github.com/huggingface/datasets/pull/3360",
"diff_url": "https://github.com/huggingface/datasets/pull/3360.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3360.patch",
"merged_at": "2021-12-03T11:45:27"
} | albertvillanova | true | [] |
1,068,638,213 | 3,359 | Add The Pile Free Law subset | closed | 2021-12-01T16:46:04 | 2021-12-06T10:12:17 | 2021-12-01T17:30:44 | https://github.com/huggingface/datasets/pull/3359 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3359",
"html_url": "https://github.com/huggingface/datasets/pull/3359",
"diff_url": "https://github.com/huggingface/datasets/pull/3359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3359.patch",
"merged_at": "2021-12-01T17:30:43"
} | albertvillanova | true | [
"@albertvillanova Is there a specific reason you’re adding the Pile under “the” instead of under “pile”? That does not appear to be consistent with other datasets.",
"Hi @StellaAthena,\r\n\r\nI asked myself the same question, but at the end I decided to be consistent with previously added Pile subsets:\r\n- #2817... |
1,068,623,216 | 3,358 | add new field, and get errors | closed | 2021-12-01T16:35:38 | 2021-12-02T02:26:22 | 2021-12-02T02:26:22 | https://github.com/huggingface/datasets/issues/3358 | null | PatricYan | false | [
"Hi, \r\n\r\ncould you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests? ",
"> Hi,\r\n> \r\n> could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests?\r\n\r\nok."
] |
1,068,607,382 | 3,357 | Update languages in aeslc dataset card | closed | 2021-12-01T16:20:46 | 2022-09-23T13:16:49 | 2022-09-23T13:16:49 | https://github.com/huggingface/datasets/pull/3357 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3357",
"html_url": "https://github.com/huggingface/datasets/pull/3357",
"diff_url": "https://github.com/huggingface/datasets/pull/3357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3357.patch",
"merged_at": "2022-09-23T13:16:48"
} | apergo-ai | true | [] |
1,068,503,932 | 3,356 | to_tf_dataset() refactor | closed | 2021-12-01T14:54:30 | 2021-12-09T10:26:53 | 2021-12-09T10:26:53 | https://github.com/huggingface/datasets/pull/3356 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3356",
"html_url": "https://github.com/huggingface/datasets/pull/3356",
"diff_url": "https://github.com/huggingface/datasets/pull/3356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3356.patch",
"merged_at": "2021-12-09T10:26:53"
} | Rocketknight1 | true | [
"Also, please don't merge yet - I need to make sure all the code samples and notebooks have a collate_fn specified, since we're removing the ability for this method to work without one!",
"Hi @lhoestq @mariosasko, the other PRs this was depending on in Transformers and huggingface/notebooks are now merged, so thi... |
1,068,468,573 | 3,355 | Extend support for streaming datasets that use pd.read_excel | closed | 2021-12-01T14:22:43 | 2021-12-17T07:24:19 | 2021-12-17T07:24:18 | https://github.com/huggingface/datasets/pull/3355 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3355",
"html_url": "https://github.com/huggingface/datasets/pull/3355",
"diff_url": "https://github.com/huggingface/datasets/pull/3355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3355.patch",
"merged_at": "2021-12-17T07:24:18"
} | albertvillanova | true | [
"TODO in the future: https://github.com/huggingface/datasets/pull/3355#discussion_r761138011\r\n- If we finally find a use case where the `pd.read_excel()` can work in streaming mode (using fsspec), that is, without using the `.read()`, I propose to try this first, catch the ValueError and then try with `.read`, bu... |
1,068,307,271 | 3,354 | Remove duplicate name from dataset cards | closed | 2021-12-01T11:45:40 | 2021-12-01T13:14:30 | 2021-12-01T13:14:29 | https://github.com/huggingface/datasets/pull/3354 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3354",
"html_url": "https://github.com/huggingface/datasets/pull/3354",
"diff_url": "https://github.com/huggingface/datasets/pull/3354.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3354.patch",
"merged_at": "2021-12-01T13:14:29"
} | albertvillanova | true | [] |
1,068,173,783 | 3,353 | add one field "example_id", but I can't see it in the "comput_loss" function | closed | 2021-12-01T09:35:09 | 2021-12-01T16:02:39 | 2021-12-01T16:02:39 | https://github.com/huggingface/datasets/issues/3353 | null | PatricYan | false | [
"Hi ! Your function looks fine, I used to map `squad` locally and it indeed added the `example_id` field correctly.\r\n\r\nHowever I think that in the `compute_loss` method only a subset of the fields are available: the model inputs. Since `example_id` is not a model input (it's not passed as a parameter to the mod... |
1,068,102,994 | 3,352 | Make LABR dataset streamable | closed | 2021-12-01T08:22:27 | 2021-12-01T10:49:02 | 2021-12-01T10:49:01 | https://github.com/huggingface/datasets/pull/3352 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3352",
"html_url": "https://github.com/huggingface/datasets/pull/3352",
"diff_url": "https://github.com/huggingface/datasets/pull/3352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3352.patch",
"merged_at": "2021-12-01T10:49:01"
} | albertvillanova | true | [] |
1,068,094,873 | 3,351 | Add VCTK dataset | closed | 2021-12-01T08:13:17 | 2022-02-28T09:22:03 | 2021-12-28T15:05:08 | https://github.com/huggingface/datasets/pull/3351 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3351",
"html_url": "https://github.com/huggingface/datasets/pull/3351",
"diff_url": "https://github.com/huggingface/datasets/pull/3351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3351.patch",
"merged_at": "2021-12-28T15:05:07"
} | jaketae | true | [
"Hello @patrickvonplaten, I hope it's okay to ping you with a (dumb) question!\r\n\r\nI've been trying to get `dl_manager.download_and_extract(_DL_URL)` to work with no avail. I verified that this is a problem on two different machines (lab server, GCP), so I doubt it's an issue with network connectivity. Here is t... |
1,068,078,160 | 3,350 | Avoid content-encoding issue while streaming datasets | closed | 2021-12-01T07:56:48 | 2021-12-01T08:15:01 | 2021-12-01T08:15:00 | https://github.com/huggingface/datasets/pull/3350 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3350",
"html_url": "https://github.com/huggingface/datasets/pull/3350",
"diff_url": "https://github.com/huggingface/datasets/pull/3350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3350.patch",
"merged_at": "2021-12-01T08:15:00"
} | albertvillanova | true | [] |
1,067,853,601 | 3,349 | raise exception instead of using assertions. | closed | 2021-12-01T01:37:51 | 2021-12-20T16:07:27 | 2021-12-20T16:07:27 | https://github.com/huggingface/datasets/pull/3349 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3349",
"html_url": "https://github.com/huggingface/datasets/pull/3349",
"diff_url": "https://github.com/huggingface/datasets/pull/3349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3349.patch",
"merged_at": "2021-12-20T16:07:27"
} | manisnesan | true | [
"@mariosasko - Thanks for the review & suggestions. Updated as per the suggestions. ",
"@mariosasko - Hello, Are there any additional changes required from my end??. Wondering if this PR can be merged or still pending on additional steps.",
"@mariosasko - The approved changes in the PR now has conflicts with th... |
1,067,831,113 | 3,348 | BLEURT: Match key names to correspond with filename | closed | 2021-12-01T01:01:18 | 2021-12-07T16:06:57 | 2021-12-07T16:06:57 | https://github.com/huggingface/datasets/pull/3348 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3348",
"html_url": "https://github.com/huggingface/datasets/pull/3348",
"diff_url": "https://github.com/huggingface/datasets/pull/3348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3348.patch",
"merged_at": "2021-12-07T16:06:57"
} | jaehlee | true | [
"Thanks for the suggestion! I think the current checked-in `CHECKPOINT_URLS` is already not working. I believe anyone who tried using the new ckpts (`BLEURT-20-X`) can't unless this fix is in. The zip file from bleurt side unzips to directory name matching the filename (capitalized for new ones). For example withou... |
1,067,738,902 | 3,347 | iter_archive for zip files | closed | 2021-11-30T22:34:17 | 2021-12-04T00:22:22 | 2021-12-04T00:22:11 | https://github.com/huggingface/datasets/pull/3347 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3347",
"html_url": "https://github.com/huggingface/datasets/pull/3347",
"diff_url": "https://github.com/huggingface/datasets/pull/3347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3347.patch",
"merged_at": null
} | Mehdi2402 | true | [
"And also don't always try streaming with Google Drive - it can have issues because of how Google Drive works (with quotas, restrictions, etc.) and it can indeed cause `BlockSizeError`.\r\n\r\nFeel free to host your test data elsewhere, such as in a dataset repository on https://huggingface.co (see [here](https://h... |
1,067,632,365 | 3,346 | Failed to convert `string` with pyarrow for QED since 1.15.0 | closed | 2021-11-30T20:11:42 | 2021-12-14T14:39:05 | 2021-12-14T14:39:05 | https://github.com/huggingface/datasets/issues/3346 | null | tianjianjiang | false | [
"Scratch that, probably the old and incompatible usage of dataset builder from promptsource.",
"Actually, re-opening this issue cause the error persists\r\n\r\n```python\r\n>>> load_dataset(\"qed\")\r\nDownloading and preparing dataset qed/qed (download: 13.43 MiB, generated: 9.70 MiB, post-processed: Unknown siz... |
1,067,622,951 | 3,345 | Failed to download species_800 from Google Drive zip file | closed | 2021-11-30T20:00:28 | 2021-12-01T17:53:15 | 2021-12-01T17:53:15 | https://github.com/huggingface/datasets/issues/3345 | null | tianjianjiang | false | [
"Hi,\r\n\r\nthe dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?",
"> Hi,\r\n> \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI have tried that many time... |
1,067,567,603 | 3,344 | Add ArrayXD docs | closed | 2021-11-30T18:53:31 | 2021-12-01T20:16:03 | 2021-12-01T19:35:32 | https://github.com/huggingface/datasets/pull/3344 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3344",
"html_url": "https://github.com/huggingface/datasets/pull/3344",
"diff_url": "https://github.com/huggingface/datasets/pull/3344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3344.patch",
"merged_at": "2021-12-01T19:35:32"
} | stevhliu | true | [] |
1,067,505,507 | 3,343 | Better error message when download fails | closed | 2021-11-30T17:38:50 | 2021-12-01T11:27:59 | 2021-12-01T11:27:58 | https://github.com/huggingface/datasets/pull/3343 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3343",
"html_url": "https://github.com/huggingface/datasets/pull/3343",
"diff_url": "https://github.com/huggingface/datasets/pull/3343.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3343.patch",
"merged_at": "2021-12-01T11:27:58"
} | lhoestq | true | [] |
1,067,481,390 | 3,342 | Fix ASSET dataset data URLs | closed | 2021-11-30T17:13:30 | 2021-12-14T14:50:00 | 2021-12-14T14:50:00 | https://github.com/huggingface/datasets/pull/3342 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3342",
"html_url": "https://github.com/huggingface/datasets/pull/3342",
"diff_url": "https://github.com/huggingface/datasets/pull/3342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3342.patch",
"merged_at": "2021-12-14T14:50:00"
} | tianjianjiang | true | [
"> Hi @tianjianjiang, thanks for the fix.\r\n> The links should also be updated in the `dataset_infos.json` file.\r\n> The failing tests are due to the missing tag in the header of the `README.md` file:\r\n\r\nHi @albertvillanova, thank you for the info! My apologies for the messy PR.\r\n"
] |
1,067,449,569 | 3,341 | Mirror the canonical datasets to the Hugging Face Hub | closed | 2021-11-30T16:42:05 | 2022-01-26T14:47:37 | 2022-01-26T14:47:37 | https://github.com/huggingface/datasets/issues/3341 | null | severo | false | [
"I created a GitHub project to keep track of what needs to be done:\r\nhttps://github.com/huggingface/datasets/projects/3\r\n\r\nI also store my code in a (private for now) repository at https://github.com/huggingface/mirror_canonical_datasets_on_hub",
"I understand that the datasets are mirrored on the Hub now, ... |
1,067,292,636 | 3,340 | Fix JSON ClassLabel casting for integers | closed | 2021-11-30T14:19:54 | 2021-12-01T11:27:30 | 2021-12-01T11:27:30 | https://github.com/huggingface/datasets/pull/3340 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3340",
"html_url": "https://github.com/huggingface/datasets/pull/3340",
"diff_url": "https://github.com/huggingface/datasets/pull/3340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3340.patch",
"merged_at": "2021-12-01T11:27:30"
} | lhoestq | true | [] |
1,066,662,477 | 3,339 | to_tf_dataset fails on TPU | open | 2021-11-30T00:50:52 | 2021-12-02T14:21:27 | null | https://github.com/huggingface/datasets/issues/3339 | null | nbroad1881 | false | [
"This might be related to https://github.com/tensorflow/tensorflow/issues/38762 , what do you think @Rocketknight1 ?\r\n> Dataset.from_generator is expected to not work with TPUs as it uses py_function underneath which is incompatible with Cloud TPU 2VM setup. If you would like to read from large datasets, maybe tr... |
1,066,371,235 | 3,338 | [WIP] Add doctests for tutorials | closed | 2021-11-29T18:40:46 | 2023-05-05T17:18:20 | 2023-05-05T17:18:15 | https://github.com/huggingface/datasets/pull/3338 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3338",
"html_url": "https://github.com/huggingface/datasets/pull/3338",
"diff_url": "https://github.com/huggingface/datasets/pull/3338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3338.patch",
"merged_at": null
} | stevhliu | true | [
"I manage to remove the mentions of ellipsis in the code by launching the command as follows:\r\n\r\n```\r\npython -m doctest -v docs/source/load_hub.rst -o=ELLIPSIS\r\n```\r\n\r\nThe way you put your ellipsis will only work on mac, I've adapted it for linux as well with the following:\r\n\r\n```diff\r\n >>> fro... |
1,066,232,936 | 3,337 | Typing of Dataset.__getitem__ could be improved. | closed | 2021-11-29T16:20:11 | 2021-12-14T10:28:54 | 2021-12-14T10:28:54 | https://github.com/huggingface/datasets/issues/3337 | null | Dref360 | false | [
"Hi ! Thanks for the suggestion, I didn't know about this decorator.\r\n\r\nIf you are interesting in contributing, feel free to open a pull request to add the overload methods for each typing combination :) To assign you to this issue, you can comment `#self-assign` in this thread.\r\n\r\n`Dataset.__getitem__` is ... |
1,066,208,436 | 3,336 | Add support for multiple dynamic dimensions and to_pandas conversion for dynamic arrays | closed | 2021-11-29T15:58:59 | 2023-09-24T09:53:52 | 2023-05-16T18:24:46 | https://github.com/huggingface/datasets/pull/3336 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3336",
"html_url": "https://github.com/huggingface/datasets/pull/3336",
"diff_url": "https://github.com/huggingface/datasets/pull/3336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3336.patch",
"merged_at": null
} | mariosasko | true | [] |
1,066,064,126 | 3,335 | add Speech commands dataset | closed | 2021-11-29T13:52:47 | 2021-12-10T10:37:21 | 2021-12-10T10:30:15 | https://github.com/huggingface/datasets/pull/3335 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3335",
"html_url": "https://github.com/huggingface/datasets/pull/3335",
"diff_url": "https://github.com/huggingface/datasets/pull/3335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3335.patch",
"merged_at": "2021-12-10T10:30:15"
} | polinaeterna | true | [
"@anton-l ping",
"@lhoestq \r\nHi Quentin! Thank you for your feedback and suggestions! 🤗\r\n\r\nYes, that was actually what I wanted to do next - I mean the steaming stuff :)\r\nAlso, I need to make some changes to the readme (to account for the updated features set).\r\n\r\nHopefully, I will be done by tomorro... |
1,065,983,923 | 3,334 | Integrate Polars library | closed | 2021-11-29T12:31:54 | 2024-08-31T05:31:28 | 2024-08-31T05:31:27 | https://github.com/huggingface/datasets/issues/3334 | null | albertvillanova | false | [
"If possible, a neat API could be something like `Dataset.to_polars()`, as well as `Dataset.set_format(\"polars\")`",
"Note they use a \"custom\" implementation of Arrow: [Arrow2](https://github.com/jorgecarleitao/arrow2).",
"Polars has grown rapidly in popularity over the last year - could you consider integra... |
1,065,346,919 | 3,333 | load JSON files, get the errors | closed | 2021-11-28T14:29:58 | 2021-12-01T09:34:31 | 2021-12-01T03:57:48 | https://github.com/huggingface/datasets/issues/3333 | null | PatricYan | false | [
"Hi ! The message you're getting is not an error. It simply says that your JSON dataset is being prepared to a location in `/root/.cache/huggingface/datasets`",
"> \r\n\r\nbut I want to load local JSON file by command\r\n`python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_di... |
1,065,345,853 | 3,332 | Fix error message and add extension fallback | closed | 2021-11-28T14:25:29 | 2021-11-29T13:34:15 | 2021-11-29T13:34:14 | https://github.com/huggingface/datasets/pull/3332 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3332",
"html_url": "https://github.com/huggingface/datasets/pull/3332",
"diff_url": "https://github.com/huggingface/datasets/pull/3332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3332.patch",
"merged_at": "2021-11-29T13:34:14"
} | mariosasko | true | [] |
1,065,275,896 | 3,331 | AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' | closed | 2021-11-28T08:54:05 | 2021-11-29T13:49:44 | 2021-11-29T13:34:14 | https://github.com/huggingface/datasets/issues/3331 | null | luozhouyang | false | [
"Hi,\r\n\r\nthe fix was merged and will be available in the next release of `datasets`.\r\nIn the meantime, you can use it by installing `datasets` directly from master as follows:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```"
] |
1,065,176,619 | 3,330 | Change TriviaQA license (#3313) | closed | 2021-11-28T03:26:45 | 2021-11-29T11:24:21 | 2021-11-29T11:24:21 | https://github.com/huggingface/datasets/pull/3330 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3330",
"html_url": "https://github.com/huggingface/datasets/pull/3330",
"diff_url": "https://github.com/huggingface/datasets/pull/3330.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3330.patch",
"merged_at": "2021-11-29T11:24:21"
} | avinashsai | true | [] |
1,065,096,971 | 3,329 | Map function: Type error on iter #999 | closed | 2021-11-27T17:53:05 | 2021-11-29T20:40:15 | 2021-11-29T20:40:15 | https://github.com/huggingface/datasets/issues/3329 | null | josephkready666 | false | [
"Hi, thanks for reporting.\r\n\r\nIt would be really helpful if you could provide the actual code of the `text_numbers_to_int` function so we can reproduce the error.",
"```\r\ndef text_numbers_to_int(text, column=\"\"):\r\n \"\"\"\r\n Convert text numbers to int.\r\n\r\n :param text: text numbers\r\n ... |
1,065,015,262 | 3,328 | Quick fix error formatting | closed | 2021-11-27T11:47:48 | 2021-11-29T13:32:42 | 2021-11-29T13:32:42 | https://github.com/huggingface/datasets/pull/3328 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3328",
"html_url": "https://github.com/huggingface/datasets/pull/3328",
"diff_url": "https://github.com/huggingface/datasets/pull/3328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3328.patch",
"merged_at": "2021-11-29T13:32:42"
} | NouamaneTazi | true | [] |
1,064,675,888 | 3,327 | "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" | closed | 2021-11-26T16:26:36 | 2021-11-26T16:44:11 | 2021-11-26T16:44:11 | https://github.com/huggingface/datasets/issues/3327 | null | eliasws | false | [
"#3323 "
] |
1,064,664,479 | 3,326 | Fix import `datasets` on python 3.10 | closed | 2021-11-26T16:10:00 | 2021-11-26T16:31:23 | 2021-11-26T16:31:23 | https://github.com/huggingface/datasets/pull/3326 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3326",
"html_url": "https://github.com/huggingface/datasets/pull/3326",
"diff_url": "https://github.com/huggingface/datasets/pull/3326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3326.patch",
"merged_at": "2021-11-26T16:31:23"
} | lhoestq | true | [] |
1,064,663,075 | 3,325 | Update conda dependencies | closed | 2021-11-26T16:08:07 | 2021-11-26T16:20:37 | 2021-11-26T16:20:36 | https://github.com/huggingface/datasets/pull/3325 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3325",
"html_url": "https://github.com/huggingface/datasets/pull/3325",
"diff_url": "https://github.com/huggingface/datasets/pull/3325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3325.patch",
"merged_at": "2021-11-26T16:20:36"
} | lhoestq | true | [] |
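The rows above are a flat export of issue records with the schema given in the header (`number`, `title`, `state`, `is_pull_request`, ...). A minimal, self-contained sketch of parsing a couple of such records as JSON Lines — the two transcribed rows and the `rows_jsonl` name are illustrative, not the dataset's actual export format:

```python
import json
from collections import Counter

# Two records transcribed from the table above; field names follow the schema header.
rows_jsonl = """\
{"number": 3424, "title": "Add RedCaps dataset", "state": "closed", "is_pull_request": true}
{"number": 3423, "title": "data duplicate when setting num_works > 1 with streaming data", "state": "closed", "is_pull_request": false}
"""

records = [json.loads(line) for line in rows_jsonl.splitlines()]

# Tally issue states and count how many records are pull requests,
# mirroring the `state` and `is_pull_request` columns of the table.
state_counts = Counter(r["state"] for r in records)
pr_count = sum(1 for r in records if r["is_pull_request"])

print(state_counts)   # e.g. Counter({'closed': 2})
print(pr_count)       # e.g. 1
```

The same per-column aggregation is what the viewer's summary row (e.g. "state: 2 classes") is computed from.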