id | number | title | state | created_at | updated_at | closed_at | html_url | pull_request | user_login | is_pull_request | comments
|---|---|---|---|---|---|---|---|---|---|---|---|
1,031,673,115 | 3,121 | Use huggingface_hub.HfApi to list datasets/metrics | closed | 2021-10-20T17:48:29 | 2021-11-05T11:45:08 | 2021-11-05T09:48:36 | https://github.com/huggingface/datasets/pull/3121 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3121",
"html_url": "https://github.com/huggingface/datasets/pull/3121",
"diff_url": "https://github.com/huggingface/datasets/pull/3121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3121.patch",
"merged_at": "2021-11-05T09:48:35"
} | mariosasko | true | [] |
1,031,574,511 | 3,120 | Correctly update metadata to preserve features when concatenating datasets with axis=1 | closed | 2021-10-20T15:54:58 | 2021-10-22T08:28:51 | 2021-10-21T14:50:21 | https://github.com/huggingface/datasets/pull/3120 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3120",
"html_url": "https://github.com/huggingface/datasets/pull/3120",
"diff_url": "https://github.com/huggingface/datasets/pull/3120.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3120.patch",
"merged_at": "2021-10-21T14:50:21"
} | mariosasko | true | [] |
1,031,328,044 | 3,119 | Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech | closed | 2021-10-20T12:05:07 | 2021-10-22T19:00:52 | 2021-10-22T08:30:22 | https://github.com/huggingface/datasets/issues/3119 | null | tyrius02 | false | [
"Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files."
] |
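The comment above notes that the SLR83 index files are CSV rather than TSV, so the loader needs delimiter-aware parsing. A minimal stdlib sketch of that idea, using a hypothetical index line (the speaker/utterance IDs and column layout here are illustrative, not the actual SLR83 schema):

```python
import csv
import io

# Hypothetical SLR83-style index content: comma-separated rather than
# tab-separated, with speaker id, utterance id, and transcript columns.
index_text = "irm_02484,irm_02484_00388340153,it is a busy place\n"

def read_index(text):
    # Pick the delimiter from the first line so both CSV and TSV
    # index files parse with the same code path.
    first_line = text.splitlines()[0]
    delimiter = "\t" if "\t" in first_line else ","
    return list(csv.reader(io.StringIO(text), delimiter=delimiter))

rows = read_index(index_text)
print(rows[0])  # ['irm_02484', 'irm_02484_00388340153', 'it is a busy place']
```

The same function handles a tab-separated index unchanged, which is the kind of logic the comment says needs to be added.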
1,031,309,549 | 3,118 | Fix CI error at each release commit | closed | 2021-10-20T11:44:38 | 2021-10-20T13:02:36 | 2021-10-20T13:02:36 | https://github.com/huggingface/datasets/pull/3118 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3118",
"html_url": "https://github.com/huggingface/datasets/pull/3118",
"diff_url": "https://github.com/huggingface/datasets/pull/3118.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3118.patch",
"merged_at": "2021-10-20T13:02:35"
} | albertvillanova | true | [] |
1,031,308,083 | 3,117 | CI error at each release commit | closed | 2021-10-20T11:42:53 | 2021-10-20T13:02:35 | 2021-10-20T13:02:35 | https://github.com/huggingface/datasets/issues/3117 | null | albertvillanova | false | [] |
1,031,270,611 | 3,116 | Update doc links to point to new docs | closed | 2021-10-20T11:00:47 | 2021-10-22T08:29:28 | 2021-10-22T08:26:45 | https://github.com/huggingface/datasets/pull/3116 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3116",
"html_url": "https://github.com/huggingface/datasets/pull/3116",
"diff_url": "https://github.com/huggingface/datasets/pull/3116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3116.patch",
"merged_at": "2021-10-22T08:26:45"
} | mariosasko | true | [] |
1,030,737,524 | 3,115 | Fill in dataset card for NCBI disease dataset | closed | 2021-10-19T20:57:05 | 2021-10-22T08:25:07 | 2021-10-22T08:25:07 | https://github.com/huggingface/datasets/pull/3115 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3115",
"html_url": "https://github.com/huggingface/datasets/pull/3115",
"diff_url": "https://github.com/huggingface/datasets/pull/3115.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3115.patch",
"merged_at": "2021-10-22T08:25:07"
} | edugp | true | [] |
1,030,693,130 | 3,114 | load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem | closed | 2021-10-19T20:01:45 | 2022-02-14T14:00:28 | 2022-02-14T14:00:28 | https://github.com/huggingface/datasets/issues/3114 | null | francisco-perez-sorrosal | false | [
"Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.",
"Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` on... |
1,030,667,547 | 3,113 | Loading Data from HDF files | closed | 2021-10-19T19:26:46 | 2025-08-19T13:28:54 | 2025-08-19T13:28:54 | https://github.com/huggingface/datasets/issues/3113 | null | FeryET | false | [
"I'm currently working on bringing [Ecoset](https://www.pnas.org/doi/10.1073/pnas.2011417118) to huggingface datasets and I would second this request...",
"I would also like this support or something similar. Geospatial datasets come in netcdf which is derived from hdf5, or zarr. I've gotten zarr stores to work w... |
1,030,613,083 | 3,112 | OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB | open | 2021-10-19T18:21:41 | 2021-10-19T18:52:29 | null | https://github.com/huggingface/datasets/issues/3112 | null | BenoitDalFerro | false | [
"I am very unsure on why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.",
"fixed",
"Ok got it, tensor full of NaNs, cf.\r\n\r\n~\\anaconda3\\envs\\xxx\\lib\\site-packages\\datasets\\arrow_writer.py in write_examples_on_file(self)\r\n315 # This check fails wit... |
1,030,598,983 | 3,111 | concatenate_datasets removes ClassLabel typing. | closed | 2021-10-19T18:05:31 | 2021-10-21T14:50:21 | 2021-10-21T14:50:21 | https://github.com/huggingface/datasets/issues/3111 | null | Dref360 | false | [
"Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1"
] |
1,030,558,484 | 3,110 | Stream TAR-based dataset using iter_archive | closed | 2021-10-19T17:16:24 | 2021-11-05T17:48:49 | 2021-11-05T17:48:48 | https://github.com/huggingface/datasets/pull/3110 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3110",
"html_url": "https://github.com/huggingface/datasets/pull/3110",
"diff_url": "https://github.com/huggingface/datasets/pull/3110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3110.patch",
"merged_at": "2021-11-05T17:48:48"
} | lhoestq | true | [
"I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first",
"The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR"
] |
1,030,543,284 | 3,109 | Update BibTeX entry | closed | 2021-10-19T16:59:31 | 2021-10-19T17:13:28 | 2021-10-19T17:13:27 | https://github.com/huggingface/datasets/pull/3109 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3109",
"html_url": "https://github.com/huggingface/datasets/pull/3109",
"diff_url": "https://github.com/huggingface/datasets/pull/3109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3109.patch",
"merged_at": "2021-10-19T17:13:27"
} | albertvillanova | true | [] |
1,030,405,618 | 3,108 | Add Google BLEU (aka GLEU) metric | closed | 2021-10-19T14:48:38 | 2021-10-25T14:07:04 | 2021-10-25T14:07:04 | https://github.com/huggingface/datasets/pull/3108 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3108",
"html_url": "https://github.com/huggingface/datasets/pull/3108",
"diff_url": "https://github.com/huggingface/datasets/pull/3108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3108.patch",
"merged_at": "2021-10-25T14:07:04"
} | slowwavesleep | true | [] |
1,030,357,527 | 3,107 | Add paper BibTeX citation | closed | 2021-10-19T14:08:11 | 2021-10-19T14:26:22 | 2021-10-19T14:26:21 | https://github.com/huggingface/datasets/pull/3107 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3107",
"html_url": "https://github.com/huggingface/datasets/pull/3107",
"diff_url": "https://github.com/huggingface/datasets/pull/3107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3107.patch",
"merged_at": "2021-10-19T14:26:21"
} | albertvillanova | true | [] |
1,030,112,473 | 3,106 | Fix URLs in blog_authorship_corpus dataset | closed | 2021-10-19T10:06:05 | 2021-10-19T12:50:40 | 2021-10-19T12:50:39 | https://github.com/huggingface/datasets/pull/3106 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3106",
"html_url": "https://github.com/huggingface/datasets/pull/3106",
"diff_url": "https://github.com/huggingface/datasets/pull/3106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3106.patch",
"merged_at": "2021-10-19T12:50:39"
} | albertvillanova | true | [] |
1,029,098,843 | 3,105 | download_mode=`force_redownload` does not work on removed datasets | open | 2021-10-18T13:12:38 | 2021-10-22T09:36:10 | null | https://github.com/huggingface/datasets/issues/3105 | null | severo | false | [] |
1,029,080,412 | 3,104 | Missing Zenodo 1.13.3 release | closed | 2021-10-18T12:57:18 | 2021-10-22T13:22:25 | 2021-10-22T13:22:24 | https://github.com/huggingface/datasets/issues/3104 | null | albertvillanova | false | [
"Zenodo has fixed on their side the 1.13.3 release: https://zenodo.org/record/5589150"
] |
1,029,069,310 | 3,103 | Fix project description in PyPI | closed | 2021-10-18T12:47:29 | 2021-10-18T12:59:57 | 2021-10-18T12:59:56 | https://github.com/huggingface/datasets/pull/3103 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3103",
"html_url": "https://github.com/huggingface/datasets/pull/3103",
"diff_url": "https://github.com/huggingface/datasets/pull/3103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3103.patch",
"merged_at": "2021-10-18T12:59:56"
} | albertvillanova | true | [] |
1,029,067,062 | 3,102 | Unsuitable project description in PyPI | closed | 2021-10-18T12:45:00 | 2021-10-18T12:59:56 | 2021-10-18T12:59:56 | https://github.com/huggingface/datasets/issues/3102 | null | albertvillanova | false | [] |
1,028,966,968 | 3,101 | Update SUPERB to use Audio features | closed | 2021-10-18T11:05:18 | 2021-10-18T12:33:54 | 2021-10-18T12:06:46 | https://github.com/huggingface/datasets/pull/3101 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3101",
"html_url": "https://github.com/huggingface/datasets/pull/3101",
"diff_url": "https://github.com/huggingface/datasets/pull/3101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3101.patch",
"merged_at": "2021-10-18T12:06:46"
} | anton-l | true | [
"Thank you! Sorry I forgot this one @albertvillanova"
] |
1,028,738,180 | 3,100 | Replace FSTimeoutError with parent TimeoutError | closed | 2021-10-18T07:37:09 | 2021-10-18T07:51:55 | 2021-10-18T07:51:54 | https://github.com/huggingface/datasets/pull/3100 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3100",
"html_url": "https://github.com/huggingface/datasets/pull/3100",
"diff_url": "https://github.com/huggingface/datasets/pull/3100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3100.patch",
"merged_at": "2021-10-18T07:51:54"
} | albertvillanova | true | [] |
1,028,338,078 | 3,099 | AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' | closed | 2021-10-17T14:17:47 | 2021-11-09T16:42:29 | 2021-11-09T16:42:28 | https://github.com/huggingface/datasets/issues/3099 | null | JTWang2000 | false | [
"Hi @JTWang2000, thanks for reporting.\r\n\r\nHowever, I cannot reproduce your reported bug:\r\n```python\r\n>>> from datasets import load_dataset\r\n\r\n>>> dataset = load_dataset(\"sst\", \"default\")\r\n>>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tre... |
1,028,210,790 | 3,098 | Push to hub capabilities for `Dataset` and `DatasetDict` | closed | 2021-10-17T04:12:44 | 2021-12-08T16:04:50 | 2021-11-24T11:25:36 | https://github.com/huggingface/datasets/pull/3098 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3098",
"html_url": "https://github.com/huggingface/datasets/pull/3098",
"diff_url": "https://github.com/huggingface/datasets/pull/3098.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3098.patch",
"merged_at": "2021-11-24T11:25:36"
} | LysandreJik | true | [
"Thank you for your reviews! I should have addressed all of your comments, and I added a test to ensure that `private` datasets work correctly too. I have merged the changes in `huggingface_hub`, so the `main` branch can be installed now; and I will release v0.1.0 soon.\r\n\r\nAs blockers for this PR:\r\n- It's sti... |
1,027,750,811 | 3,097 | `ModuleNotFoundError: No module named 'fsspec.exceptions'` | closed | 2021-10-15T19:34:38 | 2021-10-18T07:51:54 | 2021-10-18T07:51:54 | https://github.com/huggingface/datasets/issues/3097 | null | VictorSanh | false | [
"Thanks for reporting, @VictorSanh.\r\n\r\nI'm fixing it."
] |
1,027,535,685 | 3,096 | Fix Audio feature mp3 resampling | closed | 2021-10-15T15:05:19 | 2021-10-15T15:38:30 | 2021-10-15T15:38:30 | https://github.com/huggingface/datasets/pull/3096 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3096",
"html_url": "https://github.com/huggingface/datasets/pull/3096",
"diff_url": "https://github.com/huggingface/datasets/pull/3096.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3096.patch",
"merged_at": "2021-10-15T15:38:29"
} | albertvillanova | true | [] |
1,027,453,146 | 3,095 | `cast_column` makes audio decoding fail | closed | 2021-10-15T13:36:58 | 2023-04-07T09:43:20 | 2021-10-15T15:38:30 | https://github.com/huggingface/datasets/issues/3095 | null | patrickvonplaten | false | [
"cc @anton-l @albertvillanova ",
"Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_datas... |
1,027,328,633 | 3,094 | Support loading a dataset from SQLite files | closed | 2021-10-15T10:58:41 | 2022-10-03T16:32:29 | 2022-10-03T16:32:29 | https://github.com/huggingface/datasets/issues/3094 | null | albertvillanova | false | [
"for reference Kaggle has a good number of open source datasets stored in sqlite\r\n\r\nAlternatively a tutorial or tool on how to convert from sqlite to parquet would be cool too",
"Hello, could we leverage [`pandas.read_sql`](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html) for this? \r\n\r\nT... |
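One comment above suggests leveraging `pandas.read_sql` to load SQLite data. A minimal sketch of that approach with an in-memory database and a hypothetical `items` table; passing the resulting DataFrame to `datasets.Dataset.from_pandas` would complete the conversion, but that step is left as a comment to keep the sketch dependency-light:

```python
import sqlite3

import pandas as pd

# Hypothetical example: build a small SQLite database in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, text TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(1, "foo"), (2, "bar")])
conn.commit()

# Read the table into a DataFrame with pandas.read_sql.
df = pd.read_sql("SELECT id, text FROM items", conn)
print(df.shape)  # (2, 2)

# From here, datasets.Dataset.from_pandas(df) would yield a Dataset.
```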
1,027,262,124 | 3,093 | Error loading json dataset with multiple splits if keys in nested dicts have a different order | closed | 2021-10-15T09:33:25 | 2022-04-10T14:06:29 | 2022-04-10T14:06:29 | https://github.com/huggingface/datasets/issues/3093 | null | dthulke | false | [
"Hi, \r\n\r\neven Pandas, which is less strict compared to PyArrow when it comes to reading JSON, doesn't support different orderings:\r\n```python\r\nimport io\r\nimport pandas as pd\r\n\r\ns = \"\"\"\r\n{\"a\": {\"c\": 8, \"b\": 5}}\r\n{\"a\": {\"b\": 7, \"c\": 6}}\r\n\"\"\"\r\n\r\nbuffer = io.StringIO(s)\r\ndf =... |
1,027,260,383 | 3,092 | Fix JNLBA dataset | closed | 2021-10-15T09:31:14 | 2022-07-10T14:36:49 | 2021-10-22T08:23:57 | https://github.com/huggingface/datasets/pull/3092 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3092",
"html_url": "https://github.com/huggingface/datasets/pull/3092",
"diff_url": "https://github.com/huggingface/datasets/pull/3092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3092.patch",
"merged_at": "2021-10-22T08:23:57"
} | bhavitvyamalik | true | [
"Fix #3089.",
"@albertvillanova all tests are passing now. Either you or @lhoestq can review it!"
] |
1,027,251,530 | 3,091 | `blog_authorship_corpus` is broken | closed | 2021-10-15T09:20:40 | 2021-10-19T13:06:10 | 2021-10-19T12:50:39 | https://github.com/huggingface/datasets/issues/3091 | null | fdtomasi | false | [
"Hi @fdtomasi, thanks for reporting.\r\n\r\nYou are right: the original host data URL does no longer exist.\r\n\r\nI've contacted the authors of the dataset to ask them if they host this dataset in another URL.",
"Hi, @fdtomasi, the URL is fixed.\r\n\r\nThe fix is already in our master branch and it will be acces... |
1,027,100,371 | 3,090 | Update BibTeX entry | closed | 2021-10-15T05:39:27 | 2021-10-15T07:35:57 | 2021-10-15T07:35:57 | https://github.com/huggingface/datasets/pull/3090 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3090",
"html_url": "https://github.com/huggingface/datasets/pull/3090",
"diff_url": "https://github.com/huggingface/datasets/pull/3090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3090.patch",
"merged_at": "2021-10-15T07:35:57"
} | albertvillanova | true | [] |
1,026,973,360 | 3,089 | JNLPBA Dataset | closed | 2021-10-15T01:16:02 | 2021-10-22T08:23:57 | 2021-10-22T08:23:57 | https://github.com/huggingface/datasets/issues/3089 | null | sciarrilli | false | [
"# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, ... |
1,026,920,369 | 3,088 | Use template column_mapping to transmit_format instead of template features | closed | 2021-10-14T23:49:40 | 2021-10-15T14:40:05 | 2021-10-15T10:11:04 | https://github.com/huggingface/datasets/pull/3088 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3088",
"html_url": "https://github.com/huggingface/datasets/pull/3088",
"diff_url": "https://github.com/huggingface/datasets/pull/3088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3088.patch",
"merged_at": "2021-10-15T10:11:04"
} | mariosasko | true | [
"Thanks for fixing!"
] |
1,026,780,469 | 3,087 | Removing label column in a text classification dataset yields to errors | closed | 2021-10-14T20:12:50 | 2021-10-15T10:11:04 | 2021-10-15T10:11:04 | https://github.com/huggingface/datasets/issues/3087 | null | sgugger | false | [] |
1,026,481,905 | 3,086 | Remove _resampler from Audio fields | closed | 2021-10-14T14:38:50 | 2021-10-14T15:13:41 | 2021-10-14T15:13:40 | https://github.com/huggingface/datasets/pull/3086 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3086",
"html_url": "https://github.com/huggingface/datasets/pull/3086",
"diff_url": "https://github.com/huggingface/datasets/pull/3086.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3086.patch",
"merged_at": "2021-10-14T15:13:40"
} | albertvillanova | true | [] |
1,026,467,384 | 3,085 | Fixes to `to_tf_dataset` | closed | 2021-10-14T14:25:56 | 2021-10-21T15:05:29 | 2021-10-21T15:05:28 | https://github.com/huggingface/datasets/pull/3085 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3085",
"html_url": "https://github.com/huggingface/datasets/pull/3085",
"diff_url": "https://github.com/huggingface/datasets/pull/3085.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3085.patch",
"merged_at": "2021-10-21T15:05:28"
} | Rocketknight1 | true | [
"Hi ! Can you give some details about why you need these changes ?",
"Hey, sorry, I should have explained! I've been getting a lot of `VisibleDeprecationWarning` from Numpy, due to an issue in the formatter, see #3084 . This is a temporary workaround (since I'm using these methods in the upcoming course) until I ... |
1,026,428,992 | 3,084 | VisibleDeprecationWarning when using `set_format("numpy")` | closed | 2021-10-14T13:53:01 | 2021-10-22T16:04:14 | 2021-10-22T16:04:14 | https://github.com/huggingface/datasets/issues/3084 | null | Rocketknight1 | false | [
"I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)"
] |
1,026,397,062 | 3,083 | Datasets with Audio feature raise error when loaded from cache due to _resampler parameter | closed | 2021-10-14T13:23:53 | 2021-10-14T15:13:40 | 2021-10-14T15:13:40 | https://github.com/huggingface/datasets/issues/3083 | null | albertvillanova | false | [] |
1,026,388,994 | 3,082 | Fix error related to huggingface_hub timeout parameter | closed | 2021-10-14T13:17:47 | 2021-10-14T14:39:52 | 2021-10-14T14:39:51 | https://github.com/huggingface/datasets/pull/3082 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3082",
"html_url": "https://github.com/huggingface/datasets/pull/3082",
"diff_url": "https://github.com/huggingface/datasets/pull/3082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3082.patch",
"merged_at": "2021-10-14T14:39:51"
} | albertvillanova | true | [] |
1,026,383,749 | 3,081 | [Audio datasets] Adapting all audio datasets | closed | 2021-10-14T13:13:45 | 2021-10-15T12:52:03 | 2021-10-15T12:22:33 | https://github.com/huggingface/datasets/pull/3081 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3081",
"html_url": "https://github.com/huggingface/datasets/pull/3081",
"diff_url": "https://github.com/huggingface/datasets/pull/3081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3081.patch",
"merged_at": "2021-10-15T12:22:33"
} | patrickvonplaten | true | [
"@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise",
"@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section",
"Hi @patri... |
1,026,380,626 | 3,080 | Error related to timeout keyword argument | closed | 2021-10-14T13:10:58 | 2021-10-14T14:39:51 | 2021-10-14T14:39:51 | https://github.com/huggingface/datasets/issues/3080 | null | albertvillanova | false | [] |
1,026,150,362 | 3,077 | Fix loading a metric with internal import | closed | 2021-10-14T09:06:58 | 2021-10-14T09:14:56 | 2021-10-14T09:14:55 | https://github.com/huggingface/datasets/pull/3077 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3077",
"html_url": "https://github.com/huggingface/datasets/pull/3077",
"diff_url": "https://github.com/huggingface/datasets/pull/3077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3077.patch",
"merged_at": "2021-10-14T09:14:55"
} | albertvillanova | true | [] |
1,026,113,484 | 3,076 | Error when loading a metric | closed | 2021-10-14T08:29:27 | 2021-10-14T09:14:55 | 2021-10-14T09:14:55 | https://github.com/huggingface/datasets/issues/3076 | null | albertvillanova | false | [] |
1,026,103,388 | 3,075 | Updates LexGLUE and MultiEURLEX README.md files | closed | 2021-10-14T08:19:16 | 2021-10-18T10:13:40 | 2021-10-18T10:13:40 | https://github.com/huggingface/datasets/pull/3075 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3075",
"html_url": "https://github.com/huggingface/datasets/pull/3075",
"diff_url": "https://github.com/huggingface/datasets/pull/3075.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3075.patch",
"merged_at": "2021-10-18T10:13:40"
} | iliaschalkidis | true | [] |
1,025,940,085 | 3,074 | add XCSR dataset | closed | 2021-10-14T04:39:59 | 2021-11-08T13:52:36 | 2021-11-08T13:52:36 | https://github.com/huggingface/datasets/pull/3074 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3074",
"html_url": "https://github.com/huggingface/datasets/pull/3074",
"diff_url": "https://github.com/huggingface/datasets/pull/3074.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3074.patch",
"merged_at": "2021-11-08T13:52:36"
} | yangxqiao | true | [
"> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are... |
1,025,718,469 | 3,073 | Import error installing with ppc64le | closed | 2021-10-13T21:37:23 | 2021-10-14T16:35:46 | 2021-10-14T16:33:28 | https://github.com/huggingface/datasets/issues/3073 | null | gcervantes8 | false | [
"This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n"
] |
1,025,233,152 | 3,072 | Fix pathlib patches for streaming | closed | 2021-10-13T13:11:15 | 2021-10-13T13:31:05 | 2021-10-13T13:31:05 | https://github.com/huggingface/datasets/pull/3072 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3072",
"html_url": "https://github.com/huggingface/datasets/pull/3072",
"diff_url": "https://github.com/huggingface/datasets/pull/3072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3072.patch",
"merged_at": "2021-10-13T13:31:05"
} | lhoestq | true | [] |
1,024,893,493 | 3,071 | Custom plain text dataset, plain json dataset and plain csv dataset are remove from datasets template folder | closed | 2021-10-13T07:32:10 | 2021-10-13T08:27:04 | 2021-10-13T08:27:03 | https://github.com/huggingface/datasets/issues/3071 | null | zixiliuUSC | false | [
"Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```"
] |
1,024,856,745 | 3,070 | Fix Windows CI with FileNotFoundError when setting up s3_base fixture | closed | 2021-10-13T06:49:01 | 2021-10-13T08:55:13 | 2021-10-13T06:49:48 | https://github.com/huggingface/datasets/pull/3070 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3070",
"html_url": "https://github.com/huggingface/datasets/pull/3070",
"diff_url": "https://github.com/huggingface/datasets/pull/3070.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3070.patch",
"merged_at": "2021-10-13T06:49:48"
} | albertvillanova | true | [
"Thanks ! Sorry for the inconvenience ^^' "
] |
1,024,818,680 | 3,069 | CI fails on Windows with FileNotFoundError when setting up s3_base fixture | closed | 2021-10-13T05:52:26 | 2021-10-13T08:05:49 | 2021-10-13T06:49:48 | https://github.com/huggingface/datasets/issues/3069 | null | albertvillanova | false | []
1,024,681,264 | 3,068 | feat: increase streaming retry config | closed | 2021-10-13T02:00:50 | 2021-10-13T09:25:56 | 2021-10-13T09:25:54 | https://github.com/huggingface/datasets/pull/3068 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3068",
"html_url": "https://github.com/huggingface/datasets/pull/3068",
"diff_url": "https://github.com/huggingface/datasets/pull/3068.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3068.patch",
"merged_at": "2021-10-13T09:25:54"
} | borisdayma | true | [
"@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a h... |
1,024,023,185 | 3,067 | add story_cloze | closed | 2021-10-12T16:36:53 | 2021-10-13T13:48:13 | 2021-10-13T13:48:13 | https://github.com/huggingface/datasets/pull/3067 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3067",
"html_url": "https://github.com/huggingface/datasets/pull/3067",
"diff_url": "https://github.com/huggingface/datasets/pull/3067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3067.patch",
"merged_at": "2021-10-13T13:48:13"
} | zaidalyafeai | true | [
"Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npre... |
1,024,005,311 | 3,066 | Add iter_archive | closed | 2021-10-12T16:17:16 | 2022-09-21T14:10:10 | 2021-10-18T09:12:46 | https://github.com/huggingface/datasets/pull/3066 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3066",
"html_url": "https://github.com/huggingface/datasets/pull/3066",
"diff_url": "https://github.com/huggingface/datasets/pull/3066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3066.patch",
"merged_at": "2021-10-18T09:12:46"
} | lhoestq | true | [] |
1,023,951,322 | 3,065 | Fix test command after refac | closed | 2021-10-12T15:23:30 | 2021-10-12T15:28:47 | 2021-10-12T15:28:46 | https://github.com/huggingface/datasets/pull/3065 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3065",
"html_url": "https://github.com/huggingface/datasets/pull/3065",
"diff_url": "https://github.com/huggingface/datasets/pull/3065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3065.patch",
"merged_at": "2021-10-12T15:28:46"
} | lhoestq | true | [] |
1,023,900,075 | 3,064 | Make `interleave_datasets` more robust | open | 2021-10-12T14:34:53 | 2022-07-30T08:47:26 | null | https://github.com/huggingface/datasets/issues/3064 | null | sbmaruf | false | [
"Hi @lhoestq Any response on this issue?",
"Hi ! Sorry for the late response\r\n\r\nI agree `interleave_datasets` would benefit a lot from having more flexibility. If I understand correctly it would be nice to be able to define stopping strategies like `stop=\"first_exhausted\"` (default) or `stop=\"all_exhauste... |
1,023,588,297 | 3,063 | Windows CI is unable to test streaming properly because of SSL issues | closed | 2021-10-12T09:33:40 | 2022-08-24T14:59:29 | 2022-08-24T14:59:29 | https://github.com/huggingface/datasets/issues/3063 | null | lhoestq | false | [
"I think this problem is already fixed:\r\n```python\r\nIn [4]: import fsspec\r\n ...:\r\n ...: url = \"https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes\"\r\n ...:\r\n ...: fsspec.open(url).open()\r\nOut[4]: <File-like object HTTP... |
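The comment above verifies the fix by checking that `fsspec.open(...).open()` yields a file-like object. A local-file version of the same check, assuming only `fsspec` is installed (the staging URL in the comment requires network access, so a temporary file stands in for it):

```python
import os
import tempfile

import fsspec

# Write a small local file, then open it through fsspec (default mode "rb"),
# mirroring the file-like-object check from the comment above.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "attrs.txt")
    with open(path, "w") as f:
        f.write("*.bin filter=lfs\n")

    with fsspec.open(path) as f:
        data = f.read()

print(data)  # b'*.bin filter=lfs\n'
```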
1,023,209,592 | 3,062 | Update summary on PyPi beyond NLP | closed | 2021-10-11T23:27:46 | 2021-10-13T08:55:54 | 2021-10-13T08:55:54 | https://github.com/huggingface/datasets/pull/3062 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3062",
"html_url": "https://github.com/huggingface/datasets/pull/3062",
"diff_url": "https://github.com/huggingface/datasets/pull/3062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3062.patch",
"merged_at": "2021-10-13T08:55:53"
} | thomwolf | true | [] |
1,023,103,119 | 3,061 | Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?) | open | 2021-10-11T20:49:49 | 2021-10-22T09:34:10 | null | https://github.com/huggingface/datasets/issues/3061 | null | BenoitDalFerro | false | [
"@lhoestq, @albertvillanova can we have `**tqdm_kwargs` in `map`? If there are any fields that are important to our tqdm (like iterable or unit), we can pop them before initialising the tqdm object so as to avoid duplicity.",
"Hi ! Sounds like a good idea :)\r\n\r\nAlso I think it would be better to have this as ... |
1,022,936,396 | 3,060 | load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached" | closed | 2021-10-11T17:05:27 | 2021-10-28T05:52:21 | 2021-10-28T05:52:21 | https://github.com/huggingface/datasets/issues/3060 | null | RylanSchaeffer | false | [
"Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload... |
1,022,620,057 | 3,059 | Fix task reloading from cache | closed | 2021-10-11T12:03:04 | 2021-10-11T12:23:39 | 2021-10-11T12:23:39 | https://github.com/huggingface/datasets/pull/3059 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3059",
"html_url": "https://github.com/huggingface/datasets/pull/3059",
"diff_url": "https://github.com/huggingface/datasets/pull/3059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3059.patch",
"merged_at": "2021-10-11T12:23:38"
} | lhoestq | true | [] |
1,022,612,664 | 3,058 | Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader. | closed | 2021-10-11T11:54:59 | 2022-01-19T14:03:49 | 2022-01-19T14:03:49 | https://github.com/huggingface/datasets/issues/3058 | null | hobbitlzy | false | [
"Hi ! I think this issue is more related to the `transformers` project. Could you open an issue on https://github.com/huggingface/transformers ?\r\n\r\nAnyway I think the issue could be that both wikipedia and bookcorpusopen have an additional \"title\" column, contrary to wikitext which only has a \"text\" column.... |
1,022,508,315 | 3,057 | Error in per class precision computation | closed | 2021-10-11T10:05:19 | 2021-10-11T10:17:44 | 2021-10-11T10:16:16 | https://github.com/huggingface/datasets/issues/3057 | null | tidhamecha2 | false | [
"Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/hugg... |
1,022,345,564 | 3,056 | Fix meteor metric for version >= 3.6.4 | closed | 2021-10-11T07:11:44 | 2021-10-11T07:29:20 | 2021-10-11T07:29:19 | https://github.com/huggingface/datasets/pull/3056 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3056",
"html_url": "https://github.com/huggingface/datasets/pull/3056",
"diff_url": "https://github.com/huggingface/datasets/pull/3056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3056.patch",
"merged_at": "2021-10-11T07:29:19"
} | albertvillanova | true | [] |
1,022,319,238 | 3,055 | CI test suite fails after meteor metric update | closed | 2021-10-11T06:37:12 | 2021-10-11T07:30:31 | 2021-10-11T07:30:31 | https://github.com/huggingface/datasets/issues/3055 | null | albertvillanova | false | [] |
1,022,108,186 | 3,054 | Update Biosses | closed | 2021-10-10T22:25:12 | 2021-10-13T09:04:27 | 2021-10-13T09:04:27 | https://github.com/huggingface/datasets/pull/3054 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3054",
"html_url": "https://github.com/huggingface/datasets/pull/3054",
"diff_url": "https://github.com/huggingface/datasets/pull/3054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3054.patch",
"merged_at": "2021-10-13T09:04:27"
} | bwang482 | true | [] |
1,022,076,905 | 3,053 | load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type | closed | 2021-10-10T19:55:21 | 2023-02-24T14:02:20 | 2023-02-24T14:02:20 | https://github.com/huggingface/datasets/issues/3053 | null | davidbau | false | [
"I encountered the same bug using different datasets.\r\nany suggestions?",
"+1, can reproduce here!",
"I get the same error\r\nPlatform: Windows 10\r\nPython: python 3.8.8\r\nPyArrow: 5.0",
"I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this i... |
1,021,944,435 | 3,052 | load_dataset cannot download the data and hangs on forever if cache dir specified | closed | 2021-10-10T10:31:36 | 2021-10-11T10:57:09 | 2021-10-11T10:56:36 | https://github.com/huggingface/datasets/issues/3052 | null | BenoitDalFerro | false | [
"Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The fo... |
1,021,852,234 | 3,051 | Non-Matching Checksum Error with crd3 dataset | closed | 2021-10-10T01:32:43 | 2022-03-15T15:54:26 | 2022-03-15T15:54:26 | https://github.com/huggingface/datasets/issues/3051 | null | RylanSchaeffer | false | [
"I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/... |
1,021,772,622 | 3,050 | Fix streaming: catch Timeout error | closed | 2021-10-09T18:19:20 | 2021-10-12T15:28:18 | 2021-10-11T09:35:38 | https://github.com/huggingface/datasets/pull/3050 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3050",
"html_url": "https://github.com/huggingface/datasets/pull/3050",
"diff_url": "https://github.com/huggingface/datasets/pull/3050.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3050.patch",
"merged_at": "2021-10-11T09:35:38"
} | borisdayma | true | [
"I'm running a large test.\r\nLet's see if I get any error within a few days.",
"This time it stopped after 8h but correctly raised `ConnectionError: Server Disconnected`.\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last): ... |
1,021,770,008 | 3,049 | TimeoutError during streaming | closed | 2021-10-09T18:06:51 | 2021-10-11T09:35:38 | 2021-10-11T09:35:38 | https://github.com/huggingface/datasets/issues/3049 | null | borisdayma | false | [] |
1,021,765,661 | 3,048 | Identify which shard data belongs to | open | 2021-10-09T17:46:35 | 2021-10-09T20:24:17 | null | https://github.com/huggingface/datasets/issues/3048 | null | borisdayma | false | [
"Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch."
] |
1,021,360,616 | 3,047 | Loading from cache a dataset for LM built from a text classification dataset sometimes errors | closed | 2021-10-08T18:23:11 | 2021-11-03T17:13:08 | 2021-11-03T17:13:08 | https://github.com/huggingface/datasets/issues/3047 | null | sgugger | false | [
"This has been fixed in 1.15, let me know if you still have this issue"
] |
1,021,021,368 | 3,046 | Fix MedDialog metadata JSON | closed | 2021-10-08T12:04:40 | 2021-10-11T07:46:43 | 2021-10-11T07:46:42 | https://github.com/huggingface/datasets/pull/3046 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3046",
"html_url": "https://github.com/huggingface/datasets/pull/3046",
"diff_url": "https://github.com/huggingface/datasets/pull/3046.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3046.patch",
"merged_at": "2021-10-11T07:46:42"
} | albertvillanova | true | [] |
1,020,968,704 | 3,045 | Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044 | closed | 2021-10-08T10:59:21 | 2021-10-21T16:58:32 | 2021-10-21T14:22:44 | https://github.com/huggingface/datasets/pull/3045 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3045",
"html_url": "https://github.com/huggingface/datasets/pull/3045",
"diff_url": "https://github.com/huggingface/datasets/pull/3045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3045.patch",
"merged_at": null
} | vlievin | true | [
"Hi ! Thanks for noticing this inconsistence and suggesting a fix :)\r\n\r\nIf I understand correctly you try to pass the same fingerprint to each processed shard of the dataset. This can be an issue since each shard is actually a different dataset with different data: they shouldn't have the same fingerprint.\r\n\... |
1,020,869,778 | 3,044 | Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1` | open | 2021-10-08T09:07:10 | 2025-03-04T07:16:00 | null | https://github.com/huggingface/datasets/issues/3044 | null | vlievin | false | [
"Following the discussion in #3045 if would be nice to have a way to let users have a nice experience with caching even if the function is not hashable.\r\n\r\nCurrently a workaround is to make the function picklable. This can be done by implementing a callable class instead, that can be pickled using by implementi... |
1,020,252,114 | 3,043 | Add PASS dataset | closed | 2021-10-07T16:43:43 | 2022-01-20T16:50:47 | 2022-01-20T16:50:47 | https://github.com/huggingface/datasets/issues/3043 | null | osanseviero | false | [] |
1,020,047,289 | 3,042 | Improving elasticsearch integration | open | 2021-10-07T13:28:35 | 2022-07-06T15:19:48 | null | https://github.com/huggingface/datasets/pull/3042 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3042",
"html_url": "https://github.com/huggingface/datasets/pull/3042",
"diff_url": "https://github.com/huggingface/datasets/pull/3042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3042.patch",
"merged_at": null
} | ggdupont | true | [
"@lhoestq @albertvillanova Iwas trying to fix the failing tests in circleCI but is there a test elasticsearch instance somewhere? If not, can I launch a docker container to have one?"
] |
1,018,911,385 | 3,041 | Load private data files + use glob on ZIP archives for json/csv/etc. module inference | closed | 2021-10-06T18:16:36 | 2021-10-12T15:25:48 | 2021-10-12T15:25:46 | https://github.com/huggingface/datasets/pull/3041 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3041",
"html_url": "https://github.com/huggingface/datasets/pull/3041",
"diff_url": "https://github.com/huggingface/datasets/pull/3041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3041.patch",
"merged_at": "2021-10-12T15:25:46"
} | lhoestq | true | [
"I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat th... |
1,018,782,475 | 3,040 | [save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset | closed | 2021-10-06T17:08:47 | 2021-11-02T15:41:08 | 2021-11-02T15:41:08 | https://github.com/huggingface/datasets/issues/3040 | null | patrickvonplaten | false | [
"Hi,\r\n\r\nthe `save_to_disk` docstring explains that `flatten_indices` has to be called on a dataset before saving it to save only the shard/slice of the dataset.",
"That works! Thansk!\r\n\r\nMight be worth doing that automatically actually in case the `save_to_disk` is called on a dataset that has an indices ... |
1,018,219,800 | 3,039 | Add sberquad dataset | closed | 2021-10-06T12:32:02 | 2021-10-13T10:19:11 | 2021-10-13T10:16:04 | https://github.com/huggingface/datasets/pull/3039 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3039",
"html_url": "https://github.com/huggingface/datasets/pull/3039",
"diff_url": "https://github.com/huggingface/datasets/pull/3039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3039.patch",
"merged_at": "2021-10-13T10:16:04"
} | Alenush | true | [] |
1,018,113,499 | 3,038 | add sberquad dataset | closed | 2021-10-06T11:33:39 | 2021-10-06T11:58:01 | 2021-10-06T11:58:01 | https://github.com/huggingface/datasets/pull/3038 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3038",
"html_url": "https://github.com/huggingface/datasets/pull/3038",
"diff_url": "https://github.com/huggingface/datasets/pull/3038.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3038.patch",
"merged_at": null
} | Alenush | true | [] |
1,018,091,919 | 3,037 | SberQuad | closed | 2021-10-06T11:21:08 | 2021-10-06T11:33:08 | 2021-10-06T11:33:08 | https://github.com/huggingface/datasets/pull/3037 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3037",
"html_url": "https://github.com/huggingface/datasets/pull/3037",
"diff_url": "https://github.com/huggingface/datasets/pull/3037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3037.patch",
"merged_at": null
} | Alenush | true | [] |
1,017,687,944 | 3,036 | Protect master branch to force contributions via Pull Requests | closed | 2021-10-06T07:34:17 | 2021-10-07T06:51:47 | 2021-10-07T06:49:52 | https://github.com/huggingface/datasets/issues/3036 | null | albertvillanova | false | [
"It would be nice to protect the master from direct commits, but still having a way to merge our own PRs when no review is required (for example when updating a dataset_infos.json file, or minor bug fixes - things that happen quite often actually).\r\nDo you know if there's a way ?",
"you can if you're an admin o... |
1,016,770,071 | 3,035 | `load_dataset` does not work with uploaded arrow file | open | 2021-10-05T20:15:10 | 2021-10-06T17:01:37 | null | https://github.com/huggingface/datasets/issues/3035 | null | patrickvonplaten | false | [
"Hi ! This is not a bug, this is simply not implemented.\r\n`save_to_disk` is for on-disk serialization and was not made compatible for the Hub.\r\nThat being said, I agree we actually should make it work with the Hub x)",
"cc @LysandreJik maybe we can solve this at the same time as adding `push_to_hub`"
] |
1,016,759,202 | 3,034 | Errors loading dataset using fs = a gcsfs.GCSFileSystem | open | 2021-10-05T20:07:08 | 2021-10-05T20:26:39 | null | https://github.com/huggingface/datasets/issues/3034 | null | dconatha | false | [] |
1,016,619,572 | 3,033 | Actual "proper" install of ruamel.yaml in the windows CI | closed | 2021-10-05T17:52:07 | 2021-10-05T17:54:57 | 2021-10-05T17:54:57 | https://github.com/huggingface/datasets/pull/3033 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3033",
"html_url": "https://github.com/huggingface/datasets/pull/3033",
"diff_url": "https://github.com/huggingface/datasets/pull/3033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3033.patch",
"merged_at": "2021-10-05T17:54:56"
} | lhoestq | true | [] |
1,016,488,475 | 3,032 | Error when loading private dataset with "data_files" arg | closed | 2021-10-05T15:46:27 | 2021-10-12T15:26:22 | 2021-10-12T15:25:46 | https://github.com/huggingface/datasets/issues/3032 | null | borisdayma | false | [
"We'll do a release tomorrow or on wednesday to make the fix available :)\r\n\r\nThanks for reproting !"
] |
1,016,458,496 | 3,031 | Align tqdm control with cache control | closed | 2021-10-05T15:18:49 | 2021-10-18T15:00:21 | 2021-10-18T14:59:30 | https://github.com/huggingface/datasets/pull/3031 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3031",
"html_url": "https://github.com/huggingface/datasets/pull/3031",
"diff_url": "https://github.com/huggingface/datasets/pull/3031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3031.patch",
"merged_at": "2021-10-18T14:59:30"
} | mariosasko | true | [
"Could you add this function to the documentation please ?\r\n\r\nYou can add it in `main_classes.rst`, and maybe add a `Tip` section in the `map` section in the `process.rst`"
] |
1,016,435,324 | 3,030 | Add `remove_columns` to `IterableDataset` | closed | 2021-10-05T14:58:33 | 2021-10-08T15:33:15 | 2021-10-08T15:31:53 | https://github.com/huggingface/datasets/pull/3030 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3030",
"html_url": "https://github.com/huggingface/datasets/pull/3030",
"diff_url": "https://github.com/huggingface/datasets/pull/3030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3030.patch",
"merged_at": "2021-10-08T15:31:53"
} | changjonathanc | true | [
"Thanks ! That looks all good :)\r\n\r\nI don't think that batching would help. Indeed we're dealing with python iterators that yield elements one by one, so batched `map` needs to accumulate a batch, apply the function, and then yield examples from the batch.\r\n\r\nThough once we have parallel processing in `map`... |
1,016,389,901 | 3,029 | Use standard open-domain validation split in nq_open | closed | 2021-10-05T14:19:27 | 2021-10-05T14:56:46 | 2021-10-05T14:56:45 | https://github.com/huggingface/datasets/pull/3029 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3029",
"html_url": "https://github.com/huggingface/datasets/pull/3029",
"diff_url": "https://github.com/huggingface/datasets/pull/3029.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3029.patch",
"merged_at": "2021-10-05T14:56:45"
} | craffel | true | [
"I had to run datasets-cli with --ignore_verifications the first time since it was complaining about a missing file, but now it runs without that flag fine. I moved dummy_data.zip to the new folder, but also had to modify the filename of the test file in the zip (should I not have done that?). Finally, I added the ... |
1,016,230,272 | 3,028 | Properly install ruamel-yaml for windows CI | closed | 2021-10-05T11:51:15 | 2021-10-05T14:02:12 | 2021-10-05T11:51:22 | https://github.com/huggingface/datasets/pull/3028 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3028",
"html_url": "https://github.com/huggingface/datasets/pull/3028",
"diff_url": "https://github.com/huggingface/datasets/pull/3028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3028.patch",
"merged_at": "2021-10-05T11:51:22"
} | lhoestq | true | [
"@lhoestq I would say this does not \"properly\" install `ruamel-yaml`, but the contrary, you overwrite the previous version without desinstalling it first.\r\n\r\nAccording to `pip` docs:\r\n> This can break your system if the existing package is of a different version or was installed with a different package ma... |
1,016,150,117 | 3,027 | Resolve data_files by split name | closed | 2021-10-05T10:24:36 | 2021-11-05T17:49:58 | 2021-11-05T17:49:57 | https://github.com/huggingface/datasets/issues/3027 | null | lhoestq | false | [
"Awesome @lhoestq I like the proposal and it works great on my JSON community dataset. Here is the [log](https://gist.github.com/vblagoje/714babc325bcbdd5de579fd8e1648892). ",
"From my discussion with @borisdayma it would be more general the files match if their paths contains the split name - not only if the fil... |
1,016,067,794 | 3,026 | added arxiv paper inswiss_judgment_prediction dataset card | closed | 2021-10-05T09:02:01 | 2021-10-08T16:01:44 | 2021-10-08T16:01:24 | https://github.com/huggingface/datasets/pull/3026 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3026",
"html_url": "https://github.com/huggingface/datasets/pull/3026",
"diff_url": "https://github.com/huggingface/datasets/pull/3026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3026.patch",
"merged_at": "2021-10-08T16:01:24"
} | JoelNiklaus | true | [] |
1,016,061,222 | 3,025 | Fix Windows test suite | closed | 2021-10-05T08:55:22 | 2021-10-05T09:58:28 | 2021-10-05T09:58:27 | https://github.com/huggingface/datasets/pull/3025 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3025",
"html_url": "https://github.com/huggingface/datasets/pull/3025",
"diff_url": "https://github.com/huggingface/datasets/pull/3025.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3025.patch",
"merged_at": "2021-10-05T09:58:27"
} | albertvillanova | true | [] |
1,016,052,911 | 3,024 | Windows test suite fails | closed | 2021-10-05T08:46:46 | 2021-10-05T09:58:27 | 2021-10-05T09:58:27 | https://github.com/huggingface/datasets/issues/3024 | null | albertvillanova | false | [] |
1,015,923,031 | 3,023 | Fix typo | closed | 2021-10-05T06:06:11 | 2021-10-05T11:56:55 | 2021-10-05T11:56:55 | https://github.com/huggingface/datasets/pull/3023 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3023",
"html_url": "https://github.com/huggingface/datasets/pull/3023",
"diff_url": "https://github.com/huggingface/datasets/pull/3023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3023.patch",
"merged_at": "2021-10-05T11:56:55"
} | qqaatw | true | [] |
1,015,750,221 | 3,022 | MeDAL dataset: Add further description and update download URL | closed | 2021-10-05T00:13:28 | 2021-10-13T09:03:09 | 2021-10-13T09:03:09 | https://github.com/huggingface/datasets/pull/3022 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3022",
"html_url": "https://github.com/huggingface/datasets/pull/3022",
"diff_url": "https://github.com/huggingface/datasets/pull/3022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3022.patch",
"merged_at": "2021-10-13T09:03:09"
} | xhluca | true | [
"@lhoestq I'm a bit confused by the error message. I haven't touched the YAML code at all - do you have any insight on that?",
"I just added the missing `pretty_name` tag in the YAML - sorry about that ;)",
"Thanks! Seems like it did the trick since the tests are passing. Let me know if there's anything else I ... |
1,015,444,094 | 3,021 | Support loading dataset from multiple zipped CSV data files | closed | 2021-10-04T17:33:57 | 2021-10-06T08:36:46 | 2021-10-06T08:36:45 | https://github.com/huggingface/datasets/pull/3021 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3021",
"html_url": "https://github.com/huggingface/datasets/pull/3021",
"diff_url": "https://github.com/huggingface/datasets/pull/3021.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3021.patch",
"merged_at": "2021-10-06T08:36:45"
} | albertvillanova | true | [] |
1,015,406,105 | 3,020 | Add a metric for the MATH dataset (competition_math). | closed | 2021-10-04T16:52:16 | 2021-10-22T10:29:31 | 2021-10-22T10:29:31 | https://github.com/huggingface/datasets/pull/3020 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3020",
"html_url": "https://github.com/huggingface/datasets/pull/3020",
"diff_url": "https://github.com/huggingface/datasets/pull/3020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3020.patch",
"merged_at": "2021-10-22T10:29:31"
} | hacobe | true | [
"I believe the only failed test related to this PR is tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math. It gives the following error:\r\n\r\nImportError: To be able to use this dataset, you need to install the following dependencies['math_equivalence'] using 'pip install git+https://g... |