id (int64) | number (int64) | title (string) | state (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|
974,552,009 | 2,818 | cannot load data from my loacal path | closed | 2021-08-19T11:13:30 | 2023-07-25T17:42:15 | 2023-07-25T17:42:15 | https://github.com/huggingface/datasets/issues/2818 | null | yang-collect | false | [
"Hi ! The `data_files` parameter must be a string, a list/tuple or a python dict.\r\n\r\nCan you check the type of your `config.train_path` please ? Or use `data_files=str(config.train_path)` ?"
] |
974,486,051 | 2,817 | Rename The Pile subsets | closed | 2021-08-19T09:56:22 | 2021-08-23T16:24:10 | 2021-08-23T16:24:09 | https://github.com/huggingface/datasets/pull/2817 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2817",
"html_url": "https://github.com/huggingface/datasets/pull/2817",
"diff_url": "https://github.com/huggingface/datasets/pull/2817.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2817.patch",
"merged_at": "2021-08-23T16:24:09"
} | lhoestq | true | [
"Sounds good. Should we also have a “the_pile” dataset with the subsets as configuration?",
"I think the main `the_pile` datasets will be the one that is the mix of all the subsets: https://the-eye.eu/public/AI/pile/\r\n\r\nWe can also add configurations for each subset, and even allow users to specify the subset... |
974,031,404 | 2,816 | Add Mostly Basic Python Problems Dataset | open | 2021-08-18T20:28:39 | 2021-09-10T08:04:20 | null | https://github.com/huggingface/datasets/issues/2816 | null | osanseviero | false | [
"I started working on that."
] |
973,862,024 | 2,815 | Tiny typo fixes of "fo" -> "of" | closed | 2021-08-18T16:36:11 | 2021-08-19T08:03:02 | 2021-08-19T08:03:02 | https://github.com/huggingface/datasets/pull/2815 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2815",
"html_url": "https://github.com/huggingface/datasets/pull/2815",
"diff_url": "https://github.com/huggingface/datasets/pull/2815.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2815.patch",
"merged_at": "2021-08-19T08:03:02"
} | aronszanto | true | [] |
973,632,645 | 2,814 | Bump tqdm version | closed | 2021-08-18T12:51:29 | 2021-08-18T13:44:11 | 2021-08-18T13:39:50 | https://github.com/huggingface/datasets/pull/2814 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2814",
"html_url": "https://github.com/huggingface/datasets/pull/2814",
"diff_url": "https://github.com/huggingface/datasets/pull/2814.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2814.patch",
"merged_at": "2021-08-18T13:39:49"
} | mariosasko | true | [] |
973,470,580 | 2,813 | Remove compression from xopen | closed | 2021-08-18T09:35:59 | 2021-08-23T15:59:14 | 2021-08-23T15:59:14 | https://github.com/huggingface/datasets/issues/2813 | null | albertvillanova | false | [
"After discussing with @lhoestq, a reasonable alternative:\r\n- `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats: \r\n `bz2::http://domain.org/filename.bz2`\r\n- `xopen` parses the `urlpath` a... |
972,936,889 | 2,812 | arXiv Dataset verification problem | open | 2021-08-17T18:01:48 | 2022-01-19T14:15:35 | null | https://github.com/huggingface/datasets/issues/2812 | null | eladsegal | false | [] |
972,522,480 | 2,811 | Fix stream oscar | closed | 2021-08-17T10:10:59 | 2021-08-26T10:26:15 | 2021-08-26T10:26:14 | https://github.com/huggingface/datasets/pull/2811 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2811",
"html_url": "https://github.com/huggingface/datasets/pull/2811",
"diff_url": "https://github.com/huggingface/datasets/pull/2811.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2811.patch",
"merged_at": null
} | albertvillanova | true | [
"One additional note: if we can try to not change the code of oscar.py too often, I'm sure users that have it in their cache directory will be happy to not have to redownload it every time they update the library ;)\r\n\r\n(since changing the code changes the cache directory of the dataset)",
"I don't think this ... |
972,040,022 | 2,810 | Add WIT Dataset | closed | 2021-08-16T19:34:09 | 2022-05-06T12:27:29 | 2022-05-06T12:26:16 | https://github.com/huggingface/datasets/pull/2810 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2810",
"html_url": "https://github.com/huggingface/datasets/pull/2810",
"diff_url": "https://github.com/huggingface/datasets/pull/2810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2810.patch",
"merged_at": null
} | hassiahk | true | [
"Google's version of WIT is now available here: https://huggingface.co/datasets/google/wit"
] |
971,902,613 | 2,809 | Add Beans Dataset | closed | 2021-08-16T16:22:33 | 2021-08-26T11:42:27 | 2021-08-26T11:42:27 | https://github.com/huggingface/datasets/pull/2809 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2809",
"html_url": "https://github.com/huggingface/datasets/pull/2809",
"diff_url": "https://github.com/huggingface/datasets/pull/2809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2809.patch",
"merged_at": "2021-08-26T11:42:27"
} | nateraw | true | [] |
971,882,320 | 2,808 | Enable streaming for Wikipedia corpora | closed | 2021-08-16T15:59:12 | 2023-07-20T13:45:30 | 2023-07-20T13:45:30 | https://github.com/huggingface/datasets/issues/2808 | null | lewtun | false | [
"Closing as this has been addressed in https://github.com/huggingface/datasets/pull/5689."
] |
971,849,863 | 2,807 | Add cats_vs_dogs dataset | closed | 2021-08-16T15:21:11 | 2021-08-30T16:35:25 | 2021-08-30T16:35:24 | https://github.com/huggingface/datasets/pull/2807 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2807",
"html_url": "https://github.com/huggingface/datasets/pull/2807",
"diff_url": "https://github.com/huggingface/datasets/pull/2807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2807.patch",
"merged_at": "2021-08-30T16:35:24"
} | nateraw | true | [] |
971,625,449 | 2,806 | Fix streaming tar files from canonical datasets | closed | 2021-08-16T11:10:28 | 2021-10-13T09:04:03 | 2021-10-13T09:04:02 | https://github.com/huggingface/datasets/pull/2806 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2806",
"html_url": "https://github.com/huggingface/datasets/pull/2806",
"diff_url": "https://github.com/huggingface/datasets/pull/2806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2806.patch",
"merged_at": null
} | albertvillanova | true | [
"In case it's relevant for this PR, I'm finding that I cannot stream the `bookcorpus` dataset (using the `master` branch of `datasets`), which is a `.tar.bz2` file:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nbooks_dataset_streamed = load_dataset(\"bookcorpus\", split=\"train\", streaming=True)\r\n... |
971,436,456 | 2,805 | Fix streaming zip files from canonical datasets | closed | 2021-08-16T07:11:40 | 2021-08-16T10:34:00 | 2021-08-16T10:34:00 | https://github.com/huggingface/datasets/pull/2805 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2805",
"html_url": "https://github.com/huggingface/datasets/pull/2805",
"diff_url": "https://github.com/huggingface/datasets/pull/2805.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2805.patch",
"merged_at": "2021-08-16T10:34:00"
} | albertvillanova | true | [] |
971,353,437 | 2,804 | Add Food-101 | closed | 2021-08-16T04:26:15 | 2021-08-20T14:31:33 | 2021-08-19T12:48:06 | https://github.com/huggingface/datasets/pull/2804 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2804",
"html_url": "https://github.com/huggingface/datasets/pull/2804",
"diff_url": "https://github.com/huggingface/datasets/pull/2804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2804.patch",
"merged_at": "2021-08-19T12:48:06"
} | nateraw | true | [] |
970,858,928 | 2,803 | add stack exchange | closed | 2021-08-14T08:11:02 | 2021-08-19T10:07:33 | 2021-08-19T08:07:38 | https://github.com/huggingface/datasets/pull/2803 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2803",
"html_url": "https://github.com/huggingface/datasets/pull/2803",
"diff_url": "https://github.com/huggingface/datasets/pull/2803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2803.patch",
"merged_at": "2021-08-19T08:07:38"
} | richarddwang | true | [
"Hi ! Merging this one since it's all good :)\r\n\r\nHowever I think it would also be better to actually rename it `the_pile_stack_exchange` to make things clearer and to avoid name collisions in the future. I would like to do the same for `books3` as well.\r\n\r\nIf you don't mind I'll open a PR to do the renaming... |
970,848,302 | 2,802 | add openwebtext2 | closed | 2021-08-14T07:09:03 | 2021-08-23T14:06:14 | 2021-08-23T14:06:14 | https://github.com/huggingface/datasets/pull/2802 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2802",
"html_url": "https://github.com/huggingface/datasets/pull/2802",
"diff_url": "https://github.com/huggingface/datasets/pull/2802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2802.patch",
"merged_at": "2021-08-23T14:06:14"
} | richarddwang | true | [
"It seems we need to `pip install jsonlines` to pass the checks ?",
"Hi ! Do you really need `jsonlines` ? I think it simply uses `json.loads` under the hood.\r\n\r\nCurrently the test are failing because `jsonlines` is not part of the extra requirements `TESTS_REQUIRE` in setup.py\r\n\r\nSo either you can replac... |
970,844,617 | 2,801 | add books3 | closed | 2021-08-14T07:04:25 | 2021-08-19T16:43:09 | 2021-08-18T15:36:59 | https://github.com/huggingface/datasets/pull/2801 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2801",
"html_url": "https://github.com/huggingface/datasets/pull/2801",
"diff_url": "https://github.com/huggingface/datasets/pull/2801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2801.patch",
"merged_at": "2021-08-18T15:36:59"
} | richarddwang | true | [
"> When I was creating dataset card. I found there is room for creating / editing dataset card. I've made it an issue. #2797\r\n\r\nThanks for the message, we'll definitely improve this\r\n\r\n> Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675... |
970,819,988 | 2,800 | Support streaming tar files | closed | 2021-08-14T04:40:17 | 2021-08-26T10:02:30 | 2021-08-14T04:55:57 | https://github.com/huggingface/datasets/pull/2800 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2800",
"html_url": "https://github.com/huggingface/datasets/pull/2800",
"diff_url": "https://github.com/huggingface/datasets/pull/2800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2800.patch",
"merged_at": "2021-08-14T04:55:57"
} | albertvillanova | true | [
"Hi ! Why do we need the custom `readline` for exactly ? feel free to add a comment to say why it's needed"
] |
970,507,351 | 2,799 | Loading JSON throws ArrowNotImplementedError | closed | 2021-08-13T15:31:48 | 2022-01-10T18:59:32 | 2022-01-10T18:59:32 | https://github.com/huggingface/datasets/issues/2799 | null | lewtun | false | [
"Hi @lewtun, thanks for reporting.\r\n\r\nApparently, `pyarrow.json` tries to cast timestamp-like fields in your JSON file to pyarrow timestamp type, and it fails with `ArrowNotImplementedError`.\r\n\r\nI will investigate if there is a way to tell pyarrow not to try that timestamp casting.",
"I think the issue is... |
970,493,126 | 2,798 | Fix streaming zip files | closed | 2021-08-13T15:17:01 | 2021-08-16T14:16:50 | 2021-08-13T15:38:28 | https://github.com/huggingface/datasets/pull/2798 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2798",
"html_url": "https://github.com/huggingface/datasets/pull/2798",
"diff_url": "https://github.com/huggingface/datasets/pull/2798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2798.patch",
"merged_at": "2021-08-13T15:38:28"
} | albertvillanova | true | [
"Hi ! I don't fully understand this change @albertvillanova \r\nThe `_extract` method used to return the compound URL that points to the root of the inside of the archive.\r\nThis way users can use the usual os.path.join or other functions to point to the relevant files. I don't see why you're using a glob pattern ... |
970,331,634 | 2,797 | Make creating/editing dataset cards easier, by editing on site and dumping info from test command. | open | 2021-08-13T11:54:49 | 2021-08-14T08:42:09 | null | https://github.com/huggingface/datasets/issues/2797 | null | richarddwang | false | [] |
970,235,846 | 2,796 | add cedr dataset | closed | 2021-08-13T09:37:35 | 2021-08-27T16:01:36 | 2021-08-27T16:01:36 | https://github.com/huggingface/datasets/pull/2796 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2796",
"html_url": "https://github.com/huggingface/datasets/pull/2796",
"diff_url": "https://github.com/huggingface/datasets/pull/2796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2796.patch",
"merged_at": "2021-08-27T16:01:35"
} | naumov-al | true | [
"> Hi ! Thanks a lot for adding this one :)\r\n> \r\n> Good job with the dataset card and the dataset script !\r\n> \r\n> I left a few suggestions\r\n\r\nThank you very much for your helpful suggestions. I have tried to carry them all out."
] |
969,728,545 | 2,794 | Warnings and documentation about pickling incorrect | open | 2021-08-12T23:09:13 | 2021-08-12T23:09:31 | null | https://github.com/huggingface/datasets/issues/2794 | null | mbforbes | false | [] |
968,967,773 | 2,793 | Fix type hint for data_files | closed | 2021-08-12T14:42:37 | 2021-08-12T15:35:29 | 2021-08-12T15:35:29 | https://github.com/huggingface/datasets/pull/2793 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2793",
"html_url": "https://github.com/huggingface/datasets/pull/2793",
"diff_url": "https://github.com/huggingface/datasets/pull/2793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2793.patch",
"merged_at": "2021-08-12T15:35:29"
} | albertvillanova | true | [] |
968,650,274 | 2,792 | Update: GooAQ - add train/val/test splits | closed | 2021-08-12T11:40:18 | 2021-08-27T15:58:45 | 2021-08-27T15:58:14 | https://github.com/huggingface/datasets/pull/2792 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2792",
"html_url": "https://github.com/huggingface/datasets/pull/2792",
"diff_url": "https://github.com/huggingface/datasets/pull/2792.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2792.patch",
"merged_at": "2021-08-27T15:58:14"
} | bhavitvyamalik | true | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_l... |
968,360,314 | 2,791 | Fix typo in cnn_dailymail | closed | 2021-08-12T08:38:42 | 2021-08-12T11:17:59 | 2021-08-12T11:17:59 | https://github.com/huggingface/datasets/pull/2791 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2791",
"html_url": "https://github.com/huggingface/datasets/pull/2791",
"diff_url": "https://github.com/huggingface/datasets/pull/2791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2791.patch",
"merged_at": "2021-08-12T11:17:59"
} | omaralsayed | true | [] |
967,772,181 | 2,790 | Fix typo in test_dataset_common | closed | 2021-08-12T01:10:29 | 2021-08-12T11:31:29 | 2021-08-12T11:31:29 | https://github.com/huggingface/datasets/pull/2790 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2790",
"html_url": "https://github.com/huggingface/datasets/pull/2790",
"diff_url": "https://github.com/huggingface/datasets/pull/2790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2790.patch",
"merged_at": "2021-08-12T11:31:29"
} | nateraw | true | [] |
967,361,934 | 2,789 | Updated dataset description of DaNE | closed | 2021-08-11T19:58:48 | 2021-08-12T16:10:59 | 2021-08-12T16:06:01 | https://github.com/huggingface/datasets/pull/2789 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2789",
"html_url": "https://github.com/huggingface/datasets/pull/2789",
"diff_url": "https://github.com/huggingface/datasets/pull/2789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2789.patch",
"merged_at": "2021-08-12T16:06:01"
} | KennethEnevoldsen | true | [
"Thanks for finishing it @albertvillanova "
] |
967,149,389 | 2,788 | How to sample every file in a list of files making up a split in a dataset when loading? | closed | 2021-08-11T17:43:21 | 2023-07-25T17:40:50 | 2023-07-25T17:40:50 | https://github.com/huggingface/datasets/issues/2788 | null | brijow | false | [
"Hi ! This is not possible just with `load_dataset`.\r\n\r\nYou can do something like this instead:\r\n```python\r\nseed=42\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\... |
967,018,406 | 2,787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | closed | 2021-08-11T16:19:01 | 2023-10-03T12:39:25 | 2021-08-18T15:09:18 | https://github.com/huggingface/datasets/issues/2787 | null | jinec | false | [
"the bug code locate in :\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)",
"Hi @jinec,\r\n\r\nFrom time to time we get this kind of `ConnectionError` coming fr... |
966,282,934 | 2,786 | Support streaming compressed files | closed | 2021-08-11T09:02:06 | 2021-08-17T05:28:39 | 2021-08-16T06:36:19 | https://github.com/huggingface/datasets/pull/2786 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2786",
"html_url": "https://github.com/huggingface/datasets/pull/2786",
"diff_url": "https://github.com/huggingface/datasets/pull/2786.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2786.patch",
"merged_at": "2021-08-16T06:36:19"
} | albertvillanova | true | [] |
965,461,382 | 2,783 | Add KS task to SUPERB | closed | 2021-08-10T22:14:07 | 2021-08-12T16:45:01 | 2021-08-11T20:19:17 | https://github.com/huggingface/datasets/pull/2783 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2783",
"html_url": "https://github.com/huggingface/datasets/pull/2783",
"diff_url": "https://github.com/huggingface/datasets/pull/2783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2783.patch",
"merged_at": "2021-08-11T20:19:17"
} | anton-l | true | [
"thanks a lot for implementing this @anton-l !!\r\n\r\ni won't have time to review this while i'm away, so happy for @albertvillanova and @patrickvonplaten to decide when to merge :)",
"@albertvillanova thanks! Everything should be ready now :)",
"> The _background_noise_/_silence_ audio files are much longer t... |
964,858,439 | 2,782 | Fix renaming of corpus_bleu args | closed | 2021-08-10T11:02:34 | 2021-08-10T11:16:07 | 2021-08-10T11:16:07 | https://github.com/huggingface/datasets/pull/2782 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2782",
"html_url": "https://github.com/huggingface/datasets/pull/2782",
"diff_url": "https://github.com/huggingface/datasets/pull/2782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2782.patch",
"merged_at": "2021-08-10T11:16:07"
} | albertvillanova | true | [] |
964,805,351 | 2,781 | Latest v2.0.0 release of sacrebleu has broken some metrics | closed | 2021-08-10T09:59:41 | 2021-08-10T11:16:07 | 2021-08-10T11:16:07 | https://github.com/huggingface/datasets/issues/2781 | null | albertvillanova | false | [] |
964,794,764 | 2,780 | VIVOS dataset for Vietnamese ASR | closed | 2021-08-10T09:47:36 | 2021-08-12T11:09:30 | 2021-08-12T11:09:30 | https://github.com/huggingface/datasets/pull/2780 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2780",
"html_url": "https://github.com/huggingface/datasets/pull/2780",
"diff_url": "https://github.com/huggingface/datasets/pull/2780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2780.patch",
"merged_at": "2021-08-12T11:09:30"
} | binh234 | true | [] |
964,775,085 | 2,779 | Fix sacrebleu tokenizers | closed | 2021-08-10T09:24:27 | 2021-08-10T11:03:08 | 2021-08-10T10:57:54 | https://github.com/huggingface/datasets/pull/2779 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2779",
"html_url": "https://github.com/huggingface/datasets/pull/2779",
"diff_url": "https://github.com/huggingface/datasets/pull/2779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2779.patch",
"merged_at": "2021-08-10T10:57:54"
} | albertvillanova | true | [] |
964,737,422 | 2,778 | Do not pass tokenize to sacrebleu | closed | 2021-08-10T08:40:37 | 2021-08-10T10:03:37 | 2021-08-10T10:03:37 | https://github.com/huggingface/datasets/pull/2778 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2778",
"html_url": "https://github.com/huggingface/datasets/pull/2778",
"diff_url": "https://github.com/huggingface/datasets/pull/2778.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2778.patch",
"merged_at": "2021-08-10T10:03:37"
} | albertvillanova | true | [] |
964,696,380 | 2,777 | Use packaging to handle versions | closed | 2021-08-10T07:51:39 | 2021-08-18T13:56:27 | 2021-08-18T13:56:27 | https://github.com/huggingface/datasets/pull/2777 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2777",
"html_url": "https://github.com/huggingface/datasets/pull/2777",
"diff_url": "https://github.com/huggingface/datasets/pull/2777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2777.patch",
"merged_at": "2021-08-18T13:56:27"
} | albertvillanova | true | [] |
964,400,596 | 2,776 | document `config.HF_DATASETS_OFFLINE` and precedence | open | 2021-08-09T21:23:17 | 2021-08-09T21:23:17 | null | https://github.com/huggingface/datasets/issues/2776 | null | stas00 | false | [] |
964,303,626 | 2,775 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` | closed | 2021-08-09T19:28:51 | 2024-01-26T15:05:36 | 2024-01-26T15:05:35 | https://github.com/huggingface/datasets/issues/2775 | null | mbforbes | false | [
"I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo",
"Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RN... |
963,932,199 | 2,774 | Prevent .map from using multiprocessing when loading from cache | closed | 2021-08-09T12:11:38 | 2021-09-09T10:20:28 | 2021-09-09T10:20:28 | https://github.com/huggingface/datasets/pull/2774 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2774",
"html_url": "https://github.com/huggingface/datasets/pull/2774",
"diff_url": "https://github.com/huggingface/datasets/pull/2774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2774.patch",
"merged_at": "2021-09-09T10:20:28"
} | thomasw21 | true | [
"I'm guessing tests are failling, because this was pushed before https://github.com/huggingface/datasets/pull/2779 was merged? cc @albertvillanova ",
"Hi @thomasw21, yes you are right: those failing tests were fixed with #2779.\r\n\r\nWould you mind to merge current upstream master branch and push again?\r\n```\r... |
963,730,497 | 2,773 | Remove dataset_infos.json | closed | 2021-08-09T07:43:19 | 2024-05-04T14:52:10 | 2024-05-04T14:52:10 | https://github.com/huggingface/datasets/issues/2773 | null | albertvillanova | false | [
"This was closed by:\r\n- #4926"
] |
963,348,834 | 2,772 | Remove returned feature constrain | open | 2021-08-08T04:01:30 | 2021-08-08T08:48:01 | null | https://github.com/huggingface/datasets/issues/2772 | null | PosoSAgapo | false | [] |
963,257,036 | 2,771 | [WIP][Common Voice 7] Add common voice 7.0 | closed | 2021-08-07T16:01:10 | 2021-12-06T23:24:02 | 2021-12-06T23:24:02 | https://github.com/huggingface/datasets/pull/2771 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2771",
"html_url": "https://github.com/huggingface/datasets/pull/2771",
"diff_url": "https://github.com/huggingface/datasets/pull/2771.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2771.patch",
"merged_at": null
} | patrickvonplaten | true | [
"Hi ! I think the name `common_voice_7` is fine :)\r\nMoreover if the dataset_infos.json is missing I'm pretty sure you don't need to specify `ignore_verifications=True`",
"Hi, how about to add a new parameter \"version\" in the function load_dataset, something like: \r\n`load_dataset(\"common_voice\", \"lg\", ve... |
963,246,512 | 2,770 | Add support for fast tokenizer in BertScore | closed | 2021-08-07T15:00:03 | 2021-08-09T12:34:43 | 2021-08-09T11:16:25 | https://github.com/huggingface/datasets/pull/2770 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2770",
"html_url": "https://github.com/huggingface/datasets/pull/2770",
"diff_url": "https://github.com/huggingface/datasets/pull/2770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2770.patch",
"merged_at": "2021-08-09T11:16:25"
} | mariosasko | true | [] |
963,240,802 | 2,769 | Allow PyArrow from source | closed | 2021-08-07T14:26:44 | 2021-08-09T15:38:39 | 2021-08-09T15:38:39 | https://github.com/huggingface/datasets/pull/2769 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2769",
"html_url": "https://github.com/huggingface/datasets/pull/2769",
"diff_url": "https://github.com/huggingface/datasets/pull/2769.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2769.patch",
"merged_at": "2021-08-09T15:38:39"
} | patrickvonplaten | true | [] |
963,229,173 | 2,768 | `ArrowInvalid: Added column's length must match table's length.` after using `select` | closed | 2021-08-07T13:17:29 | 2021-08-09T11:26:43 | 2021-08-09T11:26:43 | https://github.com/huggingface/datasets/issues/2768 | null | lvwerra | false | [
"Hi,\r\n\r\nthe `select` method creates an indices mapping and doesn't modify the underlying PyArrow table by default for better performance. To modify the underlying table after the `select` call, call `flatten_indices` on the dataset object as follows:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds =... |
963,002,120 | 2,767 | equal operation to perform unbatch for huggingface datasets | closed | 2021-08-06T19:45:52 | 2022-03-07T13:58:00 | 2022-03-07T13:58:00 | https://github.com/huggingface/datasets/issues/2767 | null | dorooddorood606 | false | [
"Hi @lhoestq \r\nMaybe this is clearer to explain like this, currently map function, map one example to \"one\" modified one, lets assume we want to map one example to \"multiple\" examples, in which we do not know in advance how many examples they would be per each entry. I greatly appreciate telling me how I can ... |
962,994,198 | 2,766 | fix typo (ShuffingConfig -> ShufflingConfig) | closed | 2021-08-06T19:31:40 | 2021-08-10T14:17:03 | 2021-08-10T14:17:02 | https://github.com/huggingface/datasets/pull/2766 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2766",
"html_url": "https://github.com/huggingface/datasets/pull/2766",
"diff_url": "https://github.com/huggingface/datasets/pull/2766.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2766.patch",
"merged_at": "2021-08-10T14:17:02"
} | daleevans | true | [] |
962,861,395 | 2,765 | BERTScore Error | closed | 2021-08-06T15:58:57 | 2021-08-09T11:16:25 | 2021-08-09T11:16:25 | https://github.com/huggingface/datasets/issues/2765 | null | gagan3012 | false | [
"Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\... |
962,554,799 | 2,764 | Add DER metric for SUPERB speaker diarization task | closed | 2021-08-06T09:12:36 | 2023-07-11T09:35:23 | 2023-07-11T09:35:23 | https://github.com/huggingface/datasets/pull/2764 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2764",
"html_url": "https://github.com/huggingface/datasets/pull/2764",
"diff_url": "https://github.com/huggingface/datasets/pull/2764.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2764.patch",
"merged_at": null
} | albertvillanova | true | [
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] |
961,895,523 | 2,763 | English wikipedia datasets is not clean | closed | 2021-08-05T14:37:24 | 2023-07-25T17:43:04 | 2023-07-25T17:43:04 | https://github.com/huggingface/datasets/issues/2763 | null | lucadiliello | false | [
"Hi ! Certain users might need these data (for training or simply to explore/index the dataset).\r\n\r\nFeel free to implement a map function that gets rid of these paragraphs and process the wikipedia dataset with it before training"
] |
961,652,046 | 2,762 | Add RVL-CDIP dataset | closed | 2021-08-05T09:57:05 | 2022-04-21T17:15:41 | 2022-04-21T17:15:41 | https://github.com/huggingface/datasets/issues/2762 | null | NielsRogge | false | [
"cc @nateraw ",
"#self-assign",
"[labels_only.tar.gz](https://docs.google.com/uc?authuser=0&id=0B0NKIRwUL9KYcXo3bV9LU0t3SGs&export=download) on the RVL-CDIP website does not work for me.\r\n\r\n> 404. That’s an error. The requested URL was not found on this server.\r\n\r\nI contacted the author ( Adam Harley) r... |
961,568,287 | 2,761 | Error loading C4 realnewslike dataset | closed | 2021-08-05T08:16:58 | 2021-08-08T19:44:34 | 2021-08-08T19:44:34 | https://github.com/huggingface/datasets/issues/2761 | null | danshirron | false | [
"Hi @danshirron, \r\n`c4` was updated few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.",
"@bhavitvyamalik @lhoestq ... |
961,372,667 | 2,760 | Add Nuswide dataset | open | 2021-08-05T03:00:41 | 2021-12-08T12:06:23 | null | https://github.com/huggingface/datasets/issues/2760 | null | shivangibithel | false | [] |
960,206,575 | 2,758 | Raise ManualDownloadError when loading a dataset that requires previous manual download | closed | 2021-08-04T10:19:55 | 2021-08-04T11:36:30 | 2021-08-04T11:36:30 | https://github.com/huggingface/datasets/pull/2758 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2758",
"html_url": "https://github.com/huggingface/datasets/pull/2758",
"diff_url": "https://github.com/huggingface/datasets/pull/2758.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2758.patch",
"merged_at": "2021-08-04T11:36:30"
} | albertvillanova | true | [] |
959,984,081 | 2,757 | Unexpected type after `concatenate_datasets` | closed | 2021-08-04T07:10:39 | 2021-08-04T16:01:24 | 2021-08-04T16:01:23 | https://github.com/huggingface/datasets/issues/2757 | null | JulesBelveze | false | [
"Hi @JulesBelveze, thanks for your question.\r\n\r\nNote that 🤗 `datasets` internally store their data in Apache Arrow format.\r\n\r\nHowever, when accessing dataset columns, by default they are returned as native Python objects (lists in this case).\r\n\r\nIf you would like their columns to be returned in a more... |
959,255,646 | 2,756 | Fix metadata JSON for ubuntu_dialogs_corpus dataset | closed | 2021-08-03T15:48:59 | 2021-08-04T09:43:25 | 2021-08-04T09:43:25 | https://github.com/huggingface/datasets/pull/2756 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2756",
"html_url": "https://github.com/huggingface/datasets/pull/2756",
"diff_url": "https://github.com/huggingface/datasets/pull/2756.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2756.patch",
"merged_at": "2021-08-04T09:43:25"
} | albertvillanova | true | [] |
959,115,888 | 2,755 | Fix metadata JSON for turkish_movie_sentiment dataset | closed | 2021-08-03T13:25:44 | 2021-08-04T09:06:54 | 2021-08-04T09:06:53 | https://github.com/huggingface/datasets/pull/2755 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2755",
"html_url": "https://github.com/huggingface/datasets/pull/2755",
"diff_url": "https://github.com/huggingface/datasets/pull/2755.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2755.patch",
"merged_at": "2021-08-04T09:06:53"
} | albertvillanova | true | [] |
959,105,577 | 2,754 | Generate metadata JSON for telugu_books dataset | closed | 2021-08-03T13:14:52 | 2021-08-04T08:49:02 | 2021-08-04T08:49:02 | https://github.com/huggingface/datasets/pull/2754 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2754",
"html_url": "https://github.com/huggingface/datasets/pull/2754",
"diff_url": "https://github.com/huggingface/datasets/pull/2754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2754.patch",
"merged_at": "2021-08-04T08:49:01"
} | albertvillanova | true | [] |
959,036,995 | 2,753 | Generate metadata JSON for reclor dataset | closed | 2021-08-03T11:52:29 | 2021-08-04T08:07:15 | 2021-08-04T08:07:15 | https://github.com/huggingface/datasets/pull/2753 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2753",
"html_url": "https://github.com/huggingface/datasets/pull/2753",
"diff_url": "https://github.com/huggingface/datasets/pull/2753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2753.patch",
"merged_at": "2021-08-04T08:07:15"
} | albertvillanova | true | [] |
959,023,608 | 2,752 | Generate metadata JSON for lm1b dataset | closed | 2021-08-03T11:34:56 | 2021-08-04T06:40:40 | 2021-08-04T06:40:39 | https://github.com/huggingface/datasets/pull/2752 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2752",
"html_url": "https://github.com/huggingface/datasets/pull/2752",
"diff_url": "https://github.com/huggingface/datasets/pull/2752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2752.patch",
"merged_at": "2021-08-04T06:40:39"
} | albertvillanova | true | [] |
959,021,262 | 2,751 | Update metadata for wikihow dataset | closed | 2021-08-03T11:31:57 | 2021-08-03T15:52:09 | 2021-08-03T15:52:09 | https://github.com/huggingface/datasets/pull/2751 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2751",
"html_url": "https://github.com/huggingface/datasets/pull/2751",
"diff_url": "https://github.com/huggingface/datasets/pull/2751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2751.patch",
"merged_at": "2021-08-03T15:52:09"
} | albertvillanova | true | [] |
958,984,730 | 2,750 | Second concatenation of datasets produces errors | closed | 2021-08-03T10:47:04 | 2022-01-19T14:23:43 | 2022-01-19T14:19:05 | https://github.com/huggingface/datasets/issues/2750 | null | Aktsvigun | false | [
"@albertvillanova ",
"Hi @Aktsvigun, thanks for reporting.\r\n\r\nI'm investigating this.",
"Hi @albertvillanova ,\r\nany update on this? Can I probably help in some way?",
"Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. 😅 \r\n\r\nIn the meantime, ... |
958,968,748 | 2,749 | Raise a proper exception when trying to stream a dataset that requires to manually download files | closed | 2021-08-03T10:26:27 | 2021-08-09T08:53:35 | 2021-08-04T11:36:30 | https://github.com/huggingface/datasets/issues/2749 | null | severo | false | [
"Hi @severo, thanks for reporting.\r\n\r\nAs discussed, datasets requiring manual download should be:\r\n- programmatically identifiable\r\n- properly handled with more clear error message when trying to load them with streaming\r\n\r\nIn relation with programmatically identifiability, note that for datasets requir... |
958,889,041 | 2,748 | Generate metadata JSON for wikihow dataset | closed | 2021-08-03T08:55:40 | 2021-08-03T10:17:51 | 2021-08-03T10:17:51 | https://github.com/huggingface/datasets/pull/2748 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2748",
"html_url": "https://github.com/huggingface/datasets/pull/2748",
"diff_url": "https://github.com/huggingface/datasets/pull/2748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2748.patch",
"merged_at": "2021-08-03T10:17:51"
} | albertvillanova | true | [] |
958,867,627 | 2,747 | add multi-proc in `to_json` | closed | 2021-08-03T08:30:13 | 2021-10-19T18:24:21 | 2021-09-13T13:56:37 | https://github.com/huggingface/datasets/pull/2747 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2747",
"html_url": "https://github.com/huggingface/datasets/pull/2747",
"diff_url": "https://github.com/huggingface/datasets/pull/2747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2747.patch",
"merged_at": "2021-09-13T13:56:37"
} | bhavitvyamalik | true | [
"Thank you for working on this, @bhavitvyamalik \r\n\r\n10% is not solving the issue, we want 5-10x faster on a machine that has lots of resources, but limited processing time.\r\n\r\nSo let's benchmark it on an instance with many more cores, I can test with 12 on my dev box and 40 on JZ. \r\n\r\nCould you please s... |
958,551,619 | 2,746 | Cannot load `few-nerd` dataset | closed | 2021-08-02T22:18:57 | 2021-11-16T08:51:34 | 2021-08-03T19:45:43 | https://github.com/huggingface/datasets/issues/2746 | null | Mehrad0711 | false | [
"Hi @Mehrad0711,\r\n\r\nI'm afraid there is no \"canonical\" Hugging Face dataset named \"few-nerd\".\r\n\r\nThere are 2 kinds of datasets hosted at the Hugging Face Hub:\r\n- canonical datasets (their identifier contains no slash \"/\"): we, the Hugging Face team, supervise their implementation and we make sure th... |
958,269,579 | 2,745 | added semeval18_emotion_classification dataset | closed | 2021-08-02T15:39:55 | 2021-10-29T09:22:05 | 2021-09-21T09:48:35 | https://github.com/huggingface/datasets/pull/2745 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2745",
"html_url": "https://github.com/huggingface/datasets/pull/2745",
"diff_url": "https://github.com/huggingface/datasets/pull/2745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2745.patch",
"merged_at": "2021-09-21T09:48:35"
} | maxpel | true | [
"For training the multilabel classifier, I would combine the labels into a list, for example for the English dataset:\r\n\r\n```\r\ndfpre=pd.read_csv(path+\"2018-E-c-En-train.txt\",sep=\"\\t\")\r\ndfpre['list'] = dfpre[dfpre.columns[2:]].values.tolist()\r\ndf = dfpre[['Tweet', 'list']].copy()\r\ndf.rename(columns={... |
958,146,637 | 2,744 | Fix key by recreating metadata JSON for journalists_questions dataset | closed | 2021-08-02T13:27:53 | 2021-08-03T09:25:34 | 2021-08-03T09:25:33 | https://github.com/huggingface/datasets/pull/2744 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2744",
"html_url": "https://github.com/huggingface/datasets/pull/2744",
"diff_url": "https://github.com/huggingface/datasets/pull/2744.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2744.patch",
"merged_at": "2021-08-03T09:25:33"
} | albertvillanova | true | [] |
958,119,251 | 2,743 | Dataset JSON is incorrect | closed | 2021-08-02T13:01:26 | 2021-08-03T10:06:57 | 2021-08-03T09:25:33 | https://github.com/huggingface/datasets/issues/2743 | null | severo | false | [
"As discussed, the metadata JSON files must be regenerated because the keys were nor properly generated and they will not be read by the builder:\r\n> Indeed there is some problem/bug while reading the datasets_info.json file: there is a mismatch with the config.name keys in the file...\r\nIn the meanwhile, in orde... |
958,114,064 | 2,742 | Improve detection of streamable file types | closed | 2021-08-02T12:55:09 | 2021-11-12T17:18:10 | 2021-11-12T17:18:10 | https://github.com/huggingface/datasets/issues/2742 | null | severo | false | [
"maybe we should rather attempt to download a `Range` from the server and see if it works?"
] |
957,979,559 | 2,741 | Add Hypersim dataset | open | 2021-08-02T10:06:50 | 2021-12-08T12:06:51 | null | https://github.com/huggingface/datasets/issues/2741 | null | osanseviero | false | [] |
957,911,035 | 2,740 | Update release instructions | closed | 2021-08-02T08:46:00 | 2021-08-02T14:39:56 | 2021-08-02T14:39:56 | https://github.com/huggingface/datasets/pull/2740 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2740",
"html_url": "https://github.com/huggingface/datasets/pull/2740",
"diff_url": "https://github.com/huggingface/datasets/pull/2740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2740.patch",
"merged_at": "2021-08-02T14:39:56"
} | albertvillanova | true | [] |
957,751,260 | 2,739 | Pass tokenize to sacrebleu only if explicitly passed by user | closed | 2021-08-02T05:09:05 | 2021-08-03T04:23:37 | 2021-08-03T04:23:37 | https://github.com/huggingface/datasets/pull/2739 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2739",
"html_url": "https://github.com/huggingface/datasets/pull/2739",
"diff_url": "https://github.com/huggingface/datasets/pull/2739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2739.patch",
"merged_at": "2021-08-03T04:23:37"
} | albertvillanova | true | [] |
957,517,746 | 2,738 | Sunbird AI Ugandan low resource language dataset | closed | 2021-08-01T15:18:00 | 2022-10-03T09:37:30 | 2022-10-03T09:37:30 | https://github.com/huggingface/datasets/pull/2738 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2738",
"html_url": "https://github.com/huggingface/datasets/pull/2738",
"diff_url": "https://github.com/huggingface/datasets/pull/2738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2738.patch",
"merged_at": null
} | ak3ra | true | [
"Hi @ak3ra , have you had a chance to take my comments into account ?\r\n\r\nLet me know if you have questions or if I can help :)",
"@lhoestq Working on this, thanks for the detailed review :) ",
"Hi ! Cool thanks :)\r\nFeel free to merge master into your branch to fix the CI issues\r\n\r\nLet me know if you ... |
957,124,881 | 2,737 | SacreBLEU update | closed | 2021-07-30T23:53:08 | 2021-09-22T10:47:41 | 2021-08-03T04:23:37 | https://github.com/huggingface/datasets/issues/2737 | null | devrimcavusoglu | false | [
"Hi @devrimcavusoglu, \r\nI tried your code with latest version of `datasets`and `sacrebleu==1.5.1` and it's running fine after changing one small thing:\r\n```\r\nsacrebleu = datasets.load_metric('sacrebleu')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of t... |
956,895,199 | 2,736 | Add Microsoft Building Footprints dataset | open | 2021-07-30T16:17:08 | 2021-12-08T12:09:03 | null | https://github.com/huggingface/datasets/issues/2736 | null | albertvillanova | false | [
"Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!"
] |
956,889,365 | 2,735 | Add Open Buildings dataset | open | 2021-07-30T16:08:39 | 2021-07-31T05:01:25 | null | https://github.com/huggingface/datasets/issues/2735 | null | albertvillanova | false | [] |
956,844,874 | 2,734 | Update BibTeX entry | closed | 2021-07-30T15:22:51 | 2021-07-30T15:47:58 | 2021-07-30T15:47:58 | https://github.com/huggingface/datasets/pull/2734 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2734",
"html_url": "https://github.com/huggingface/datasets/pull/2734",
"diff_url": "https://github.com/huggingface/datasets/pull/2734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2734.patch",
"merged_at": "2021-07-30T15:47:58"
} | albertvillanova | true | [] |
956,725,476 | 2,733 | Add missing parquet known extension | closed | 2021-07-30T13:01:20 | 2021-07-30T13:24:31 | 2021-07-30T13:24:30 | https://github.com/huggingface/datasets/pull/2733 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2733",
"html_url": "https://github.com/huggingface/datasets/pull/2733",
"diff_url": "https://github.com/huggingface/datasets/pull/2733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2733.patch",
"merged_at": "2021-07-30T13:24:30"
} | lhoestq | true | [] |
956,676,360 | 2,732 | Updated TTC4900 Dataset | closed | 2021-07-30T11:52:14 | 2021-07-30T16:00:51 | 2021-07-30T15:58:14 | https://github.com/huggingface/datasets/pull/2732 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2732",
"html_url": "https://github.com/huggingface/datasets/pull/2732",
"diff_url": "https://github.com/huggingface/datasets/pull/2732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2732.patch",
"merged_at": "2021-07-30T15:58:14"
} | yavuzKomecoglu | true | [
"@lhoestq, lütfen bu PR'ı gözden geçirebilir misiniz?",
"> Thanks ! This looks all good now :)\r\n\r\nThanks"
] |
956,087,452 | 2,731 | Adding to_tf_dataset method | closed | 2021-07-29T18:10:25 | 2021-09-16T13:50:54 | 2021-09-16T13:50:54 | https://github.com/huggingface/datasets/pull/2731 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2731",
"html_url": "https://github.com/huggingface/datasets/pull/2731",
"diff_url": "https://github.com/huggingface/datasets/pull/2731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2731.patch",
"merged_at": "2021-09-16T13:50:53"
} | Rocketknight1 | true | [
"This seems to be working reasonably well in testing, and performance is way better. `tf.py_function` has been dropped for an input generator, but I moved as much of the code as possible outside the generator to allow TF to compile it correctly. I also avoid `tf.RaggedTensor` at all costs, and do the shuffle in the... |
955,987,834 | 2,730 | Update CommonVoice with new release | open | 2021-07-29T15:59:59 | 2021-08-07T16:19:19 | null | https://github.com/huggingface/datasets/issues/2730 | null | yjernite | false | [
"cc @patrickvonplaten?",
"Does anybody know if there is a bundled link, which would allow direct data download instead of manual? \r\nSomething similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj \r\n",
"Also see... |
955,920,489 | 2,729 | Fix IndexError while loading Arabic Billion Words dataset | closed | 2021-07-29T14:47:02 | 2021-07-30T13:03:55 | 2021-07-30T13:03:55 | https://github.com/huggingface/datasets/pull/2729 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2729",
"html_url": "https://github.com/huggingface/datasets/pull/2729",
"diff_url": "https://github.com/huggingface/datasets/pull/2729.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2729.patch",
"merged_at": "2021-07-30T13:03:55"
} | albertvillanova | true | [] |
955,892,970 | 2,728 | Concurrent use of same dataset (already downloaded) | open | 2021-07-29T14:18:38 | 2021-08-02T07:25:57 | null | https://github.com/huggingface/datasets/issues/2728 | null | PierreColombo | false | [
"Launching simultaneous job relying on the same datasets try some writing issue. I guess it is unexpected since I only need to load some already downloaded file.",
"If i have two jobs that use the same dataset. I got :\r\n\r\n\r\n File \"compute_measures.py\", line 181, in <module>\r\n train_loader, val_loade... |
955,812,149 | 2,727 | Error in loading the Arabic Billion Words Corpus | closed | 2021-07-29T12:53:09 | 2021-07-30T13:03:55 | 2021-07-30T13:03:55 | https://github.com/huggingface/datasets/issues/2727 | null | M-Salti | false | [
"I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:\r\nFor the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like:\r\n```\r\n<Techreen>\... |
955,674,388 | 2,726 | Typo fix `tokenize_exemple` | closed | 2021-07-29T10:03:37 | 2021-07-29T12:00:25 | 2021-07-29T12:00:25 | https://github.com/huggingface/datasets/pull/2726 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2726",
"html_url": "https://github.com/huggingface/datasets/pull/2726",
"diff_url": "https://github.com/huggingface/datasets/pull/2726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2726.patch",
"merged_at": "2021-07-29T12:00:25"
} | shabie | true | [] |
955,020,776 | 2,725 | Pass use_auth_token to request_etags | closed | 2021-07-28T16:13:29 | 2021-07-28T16:38:02 | 2021-07-28T16:38:02 | https://github.com/huggingface/datasets/pull/2725 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2725",
"html_url": "https://github.com/huggingface/datasets/pull/2725",
"diff_url": "https://github.com/huggingface/datasets/pull/2725.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2725.patch",
"merged_at": "2021-07-28T16:38:01"
} | albertvillanova | true | [] |
954,919,607 | 2,724 | 404 Error when loading remote data files from private repo | closed | 2021-07-28T14:24:23 | 2021-07-29T04:58:49 | 2021-07-28T16:38:01 | https://github.com/huggingface/datasets/issues/2724 | null | albertvillanova | false | [
"I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160",
"Yes, I remember having properly implemented that: \r... |
954,864,104 | 2,723 | Fix en subset by modifying dataset_info with correct validation infos | closed | 2021-07-28T13:36:19 | 2021-07-28T15:22:23 | 2021-07-28T15:22:23 | https://github.com/huggingface/datasets/pull/2723 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2723",
"html_url": "https://github.com/huggingface/datasets/pull/2723",
"diff_url": "https://github.com/huggingface/datasets/pull/2723.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2723.patch",
"merged_at": "2021-07-28T15:22:23"
} | thomasw21 | true | [] |
954,446,053 | 2,722 | Missing cache file | closed | 2021-07-28T03:52:07 | 2022-03-21T08:27:51 | 2022-03-21T08:27:51 | https://github.com/huggingface/datasets/issues/2722 | null | PosoSAgapo | false | [
"This could be solved by going to the glue/ directory and delete sst2 directory, then load the dataset again will help you redownload the dataset.",
"Hi ! Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset"
] |
954,238,230 | 2,721 | Deal with the bad check in test_load.py | closed | 2021-07-27T20:23:23 | 2021-07-28T09:58:34 | 2021-07-28T08:53:18 | https://github.com/huggingface/datasets/pull/2721 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2721",
"html_url": "https://github.com/huggingface/datasets/pull/2721",
"diff_url": "https://github.com/huggingface/datasets/pull/2721.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2721.patch",
"merged_at": "2021-07-28T08:53:18"
} | mariosasko | true | [
"Hi ! I did a change for this test already in #2662 :\r\n\r\nhttps://github.com/huggingface/datasets/blob/00686c46b7aaf6bfcd4102cec300a3c031284a5a/tests/test_load.py#L312-L316\r\n\r\n(though I have to change the variable name `m_combined_path` to `m_url` or something)\r\n\r\nI guess it's ok to remove this check for... |
954,024,426 | 2,720 | fix: 🐛 fix two typos | closed | 2021-07-27T15:50:17 | 2021-07-27T18:38:17 | 2021-07-27T18:38:16 | https://github.com/huggingface/datasets/pull/2720 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2720",
"html_url": "https://github.com/huggingface/datasets/pull/2720",
"diff_url": "https://github.com/huggingface/datasets/pull/2720.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2720.patch",
"merged_at": "2021-07-27T18:38:16"
} | severo | true | [] |
953,932,416 | 2,719 | Use ETag in streaming mode to detect resource updates | open | 2021-07-27T14:17:09 | 2021-10-22T09:36:08 | null | https://github.com/huggingface/datasets/issues/2719 | null | severo | false | [] |
953,360,663 | 2,718 | New documentation structure | closed | 2021-07-26T23:15:13 | 2021-09-13T17:20:53 | 2021-09-13T17:20:52 | https://github.com/huggingface/datasets/pull/2718 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2718",
"html_url": "https://github.com/huggingface/datasets/pull/2718",
"diff_url": "https://github.com/huggingface/datasets/pull/2718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2718.patch",
"merged_at": "2021-09-13T17:20:52"
} | stevhliu | true | [
"I just did some minor changes + added some content in these sections: share, about arrow, about cache\r\n\r\nFeel free to mark this PR as ready for review ! :)",
"I just separated the `Share` How-to page into three pages: share, dataset_script and dataset_card.\r\n\r\nThis way in the share page we can explain in... |
952,979,976 | 2,717 | Fix shuffle on IterableDataset that disables batching in case any functions were mapped | closed | 2021-07-26T14:42:22 | 2021-07-26T18:04:14 | 2021-07-26T16:30:06 | https://github.com/huggingface/datasets/pull/2717 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2717",
"html_url": "https://github.com/huggingface/datasets/pull/2717",
"diff_url": "https://github.com/huggingface/datasets/pull/2717.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2717.patch",
"merged_at": "2021-07-26T16:30:05"
} | amankhandelia | true | [] |
952,902,778 | 2,716 | Calling shuffle on IterableDataset will disable batching in case any functions were mapped | closed | 2021-07-26T13:24:59 | 2021-07-26T18:04:43 | 2021-07-26T18:04:43 | https://github.com/huggingface/datasets/issues/2716 | null | amankhandelia | false | [
"Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)",
"Have raised the PR [here](https://github.com/huggingface/datasets/pull/2717)",
"Fixed by #2717."
] |
952,845,229 | 2,715 | Update PAN-X data URL in XTREME dataset | closed | 2021-07-26T12:21:17 | 2021-07-26T13:27:59 | 2021-07-26T13:27:59 | https://github.com/huggingface/datasets/pull/2715 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2715",
"html_url": "https://github.com/huggingface/datasets/pull/2715",
"diff_url": "https://github.com/huggingface/datasets/pull/2715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2715.patch",
"merged_at": "2021-07-26T13:27:59"
} | albertvillanova | true | [] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.