| id (int64) | number (int64) | title (string) | state (string: open/closed) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list of strings) |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,676,716,662 | 5,774 | Fix style | closed | 2023-04-20T13:21:32 | 2023-04-20T13:34:26 | 2023-04-20T13:24:28 | https://github.com/huggingface/datasets/pull/5774 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5774",
"html_url": "https://github.com/huggingface/datasets/pull/5774",
"diff_url": "https://github.com/huggingface/datasets/pull/5774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5774.patch",
"merged_at": "2023-04-20T13:24:28"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,675,984,633 | 5,773 | train_dataset does not implement __len__ | open | 2023-04-20T04:37:05 | 2023-07-19T20:33:13 | null | https://github.com/huggingface/datasets/issues/5773 | null | ben-8543 | false | [
"Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?",
"this is a detail error info from transformers:\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\",... |
1,675,033,510 | 5,772 | Fix JSON builder when missing keys in first row | closed | 2023-04-19T14:32:57 | 2023-04-21T06:45:13 | 2023-04-21T06:35:27 | https://github.com/huggingface/datasets/pull/5772 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5772",
"html_url": "https://github.com/huggingface/datasets/pull/5772",
"diff_url": "https://github.com/huggingface/datasets/pull/5772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5772.patch",
"merged_at": "2023-04-21T06:35:27"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,674,828,380 | 5,771 | Support cloud storage for loading datasets | closed | 2023-04-19T12:43:53 | 2023-05-07T17:47:41 | 2023-05-07T17:47:41 | https://github.com/huggingface/datasets/issues/5771 | null | eli-osherovich | false | [
"A duplicate of https://github.com/huggingface/datasets/issues/5281"
] |
1,673,581,555 | 5,770 | Add IterableDataset.from_spark | closed | 2023-04-18T17:47:53 | 2023-05-17T14:07:32 | 2023-05-17T14:00:38 | https://github.com/huggingface/datasets/pull/5770 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5770",
"html_url": "https://github.com/huggingface/datasets/pull/5770",
"diff_url": "https://github.com/huggingface/datasets/pull/5770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5770.patch",
"merged_at": "2023-05-17T14:00:38"
} | maddiedawson | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...",
"Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it ... |
1,673,441,182 | 5,769 | Tiktoken tokenizers are not pickable | closed | 2023-04-18T16:07:40 | 2023-05-04T18:55:57 | 2023-05-04T18:55:57 | https://github.com/huggingface/datasets/issues/5769 | null | markovalexander | false | [
"Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure y... |
1,672,494,561 | 5,768 | load_dataset("squad") doesn't work in 2.7.1 and 2.10.1 | closed | 2023-04-18T07:10:56 | 2023-04-20T10:27:23 | 2023-04-20T10:27:22 | https://github.com/huggingface/datasets/issues/5768 | null | yaseen157 | false | [
"Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?",
"I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nI... |
1,672,433,979 | 5,767 | How to use Distill-BERT with different datasets? | closed | 2023-04-18T06:25:12 | 2023-04-20T16:52:05 | 2023-04-20T16:52:05 | https://github.com/huggingface/datasets/issues/5767 | null | sauravtii | false | [
"Closing this one in favor of the same issue opened in the `transformers` repo."
] |
1,671,485,882 | 5,766 | Support custom feature types | open | 2023-04-17T15:46:41 | 2024-03-10T11:11:22 | null | https://github.com/huggingface/datasets/issues/5766 | null | jmontalt | false | [
"Hi ! Interesting :) What kind of new types would you like to use ?\r\n\r\nNote that you can already implement your own decoding by using `set_transform` that can decode data on-the-fly when rows are accessed",
"An interesting proposal indeed. \r\n\r\nPandas and Polars have the \"extension API\", so doing somethi... |
1,671,388,824 | 5,765 | ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text'] | open | 2023-04-17T15:00:50 | 2023-04-25T13:50:45 | null | https://github.com/huggingface/datasets/issues/5765 | null | sauravtii | false | [
"You need to remove the `text` and `text_en` columns before passing the dataset to the `DataLoader` to avoid this error:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n```\r\n",
"Thanks @mariosasko. Now I am getting this error:\r\n\r\n```\r\nTraceback (most rece... |
1,670,740,198 | 5,764 | ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1 | closed | 2023-04-17T09:08:18 | 2023-04-18T07:18:20 | 2023-04-18T07:18:20 | https://github.com/huggingface/datasets/issues/5764 | null | sauravtii | false | [
"Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r... |
1,670,476,302 | 5,763 | fix typo: "mow" -> "now" | closed | 2023-04-17T06:03:44 | 2023-04-17T15:01:53 | 2023-04-17T14:54:46 | https://github.com/huggingface/datasets/pull/5763 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5763",
"html_url": "https://github.com/huggingface/datasets/pull/5763",
"diff_url": "https://github.com/huggingface/datasets/pull/5763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5763.patch",
"merged_at": "2023-04-17T14:54:46"
} | csris | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,670,326,470 | 5,762 | Not able to load the pile | closed | 2023-04-17T03:09:10 | 2023-04-17T09:37:27 | 2023-04-17T09:37:27 | https://github.com/huggingface/datasets/issues/5762 | null | surya-narayanan | false | [
"Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!"
] |
1,670,034,582 | 5,761 | One or several metadata.jsonl were found, but not in the same directory or in a parent directory | open | 2023-04-16T16:21:55 | 2023-04-19T11:53:24 | null | https://github.com/huggingface/datasets/issues/5761 | null | blghtr | false | [
"Also, when generated from a zip archive, the dataset contains only a few images. In my case, 20 versus 2000+ contained in the archive. The generation from folders works as expected.",
"Thanks for reporting, @blghtr.\r\n\r\nYou should include the `metadata.jsonl` in your ZIP archives, at the root level directory.... |
1,670,028,072 | 5,760 | Multi-image loading in Imagefolder dataset | open | 2023-04-16T16:01:05 | 2024-12-01T11:16:09 | null | https://github.com/huggingface/datasets/issues/5760 | null | vvvm23 | false | [
"Supporting this could be useful (I remember a use-case for this on the Hub). Do you agree @polinaeterna? \r\n\r\nImplementing this should be possible if we iterate over metadata files and build image/audio file paths instead of iterating over image/audio files and looking for the corresponding entries in metadata ... |
1,669,977,848 | 5,759 | Can I load in list of list of dict format? | open | 2023-04-16T13:50:14 | 2023-04-19T12:04:36 | null | https://github.com/huggingface/datasets/issues/5759 | null | LZY-the-boys | false | [
"Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is comp... |
1,669,920,923 | 5,758 | Fixes #5757 | closed | 2023-04-16T11:56:01 | 2023-04-20T15:37:49 | 2023-04-20T15:30:48 | https://github.com/huggingface/datasets/pull/5758 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5758",
"html_url": "https://github.com/huggingface/datasets/pull/5758",
"diff_url": "https://github.com/huggingface/datasets/pull/5758.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5758.patch",
"merged_at": "2023-04-20T15:30:48"
} | eli-osherovich | true | [
"The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Done.\n\nOn Thu, Apr 20, 2023 at 6:01 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Ca... |
1,669,910,503 | 5,757 | Tilde (~) is not supported | closed | 2023-04-16T11:48:10 | 2023-04-20T15:30:51 | 2023-04-20T15:30:51 | https://github.com/huggingface/datasets/issues/5757 | null | eli-osherovich | false | [] |
1,669,678,080 | 5,756 | Calling shuffle on a IterableDataset with streaming=True, gives "ValueError: cannot reshape array" | closed | 2023-04-16T04:59:47 | 2023-04-18T03:40:56 | 2023-04-18T03:40:56 | https://github.com/huggingface/datasets/issues/5756 | null | rohfle | false | [
"Hi! I've merged a PR on the Hub with a fix: https://huggingface.co/datasets/fashion_mnist/discussions/3",
"Thanks, this appears to have fixed the issue.\r\n\r\nI've created a PR for the same change in the mnist dataset: https://huggingface.co/datasets/mnist/discussions/3/files"
] |
1,669,048,438 | 5,755 | ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' | closed | 2023-04-14T23:28:54 | 2023-04-14T23:36:19 | 2023-04-14T23:36:19 | https://github.com/huggingface/datasets/issues/5755 | null | fivejjs | false | [
"update the version. fix"
] |
1,668,755,035 | 5,754 | Minor tqdm fixes | closed | 2023-04-14T18:15:14 | 2023-04-20T15:27:58 | 2023-04-20T15:21:00 | https://github.com/huggingface/datasets/pull/5754 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5754",
"html_url": "https://github.com/huggingface/datasets/pull/5754",
"diff_url": "https://github.com/huggingface/datasets/pull/5754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5754.patch",
"merged_at": "2023-04-20T15:21:00"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,668,659,536 | 5,753 | [IterableDatasets] Add column followed by interleave datasets gives bogus outputs | closed | 2023-04-14T17:32:31 | 2025-07-04T05:22:53 | 2023-04-14T17:36:37 | https://github.com/huggingface/datasets/issues/5753 | null | sanchit-gandhi | false | [
"Problem with the code snippet! Using global vars and functions was not a good idea with iterable datasets!\r\n\r\nIf we update to:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# now add a new co... |
1,668,574,209 | 5,752 | Streaming dataset looses `.feature` method after `.add_column` | open | 2023-04-14T16:39:50 | 2024-01-18T10:15:20 | null | https://github.com/huggingface/datasets/issues/5752 | null | sanchit-gandhi | false | [
"I believe the issue resides in this line:\r\nhttps://github.com/huggingface/datasets/blob/7c3a9b057c476c40d157bd7a5d57f49066239df0/src/datasets/iterable_dataset.py#L1415\r\n\r\nIf we pass the **new** features of the dataset to the `.map` method we can return the features after adding a column, e.g.:\r\n```python\r... |
1,668,333,316 | 5,751 | Consistent ArrayXD Python formatting + better NumPy/Pandas formatting | closed | 2023-04-14T14:13:59 | 2023-04-20T14:43:20 | 2023-04-20T14:40:34 | https://github.com/huggingface/datasets/pull/5751 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5751",
"html_url": "https://github.com/huggingface/datasets/pull/5751",
"diff_url": "https://github.com/huggingface/datasets/pull/5751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5751.patch",
"merged_at": "2023-04-20T14:40:34"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,668,289,067 | 5,750 | Fail to create datasets from a generator when using Google Big Query | closed | 2023-04-14T13:50:59 | 2023-04-17T12:20:43 | 2023-04-17T12:20:43 | https://github.com/huggingface/datasets/issues/5750 | null | ivanprado | false | [
"`from_generator` expects a generator function, not a generator object, so this should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-d... |
1,668,016,321 | 5,749 | AttributeError: 'Version' object has no attribute 'match' | closed | 2023-04-14T10:48:06 | 2023-06-30T11:31:17 | 2023-04-18T12:57:08 | https://github.com/huggingface/datasets/issues/5749 | null | gulnaz-zh | false | [
"I got the same error, and the official website for visual genome is down. Did you solve this problem? ",
"I am in the same situation now :( ",
"Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.",
"The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.",... |
1,667,517,024 | 5,748 | [BUG FIX] Issue 5739 | open | 2023-04-14T05:07:31 | 2023-04-14T05:07:31 | null | https://github.com/huggingface/datasets/pull/5748 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5748",
"html_url": "https://github.com/huggingface/datasets/pull/5748",
"diff_url": "https://github.com/huggingface/datasets/pull/5748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5748.patch",
"merged_at": null
} | airlsyn | true | [] |
1,667,270,412 | 5,747 | [WIP] Add Dataset.to_spark | closed | 2023-04-13T23:20:03 | 2024-01-08T18:31:50 | 2024-01-08T18:31:50 | https://github.com/huggingface/datasets/pull/5747 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5747",
"html_url": "https://github.com/huggingface/datasets/pull/5747",
"diff_url": "https://github.com/huggingface/datasets/pull/5747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5747.patch",
"merged_at": null
} | maddiedawson | true | [] |
1,667,102,459 | 5,746 | Fix link in docs | closed | 2023-04-13T20:45:19 | 2023-04-14T13:15:38 | 2023-04-14T13:08:42 | https://github.com/huggingface/datasets/pull/5746 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5746",
"html_url": "https://github.com/huggingface/datasets/pull/5746",
"diff_url": "https://github.com/huggingface/datasets/pull/5746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5746.patch",
"merged_at": "2023-04-14T13:08:42"
} | bbbxyz | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,667,086,143 | 5,745 | [BUG FIX] Issue 5744 | open | 2023-04-13T20:29:55 | 2023-04-21T15:22:43 | null | https://github.com/huggingface/datasets/pull/5745 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5745",
"html_url": "https://github.com/huggingface/datasets/pull/5745",
"diff_url": "https://github.com/huggingface/datasets/pull/5745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5745.patch",
"merged_at": null
} | keyboardAnt | true | [
"Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.",
"Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only pa... |
1,667,076,620 | 5,744 | [BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'` | closed | 2023-04-13T20:21:28 | 2024-04-09T16:13:59 | 2023-07-06T17:01:59 | https://github.com/huggingface/datasets/issues/5744 | null | keyboardAnt | false | [
"Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?",
"This has been fixed in `datasets` 2.11",
"I am still getting this bug with the latest pandas and ... |
1,666,843,832 | 5,743 | dataclass.py in virtual environment is overriding the stdlib module "dataclasses" | closed | 2023-04-13T17:28:33 | 2023-04-17T12:23:18 | 2023-04-17T12:23:18 | https://github.com/huggingface/datasets/issues/5743 | null | syedabdullahhassan | false | [
"We no longer depend on `dataclasses` (for almost a year), so I don't think our package is the problematic one. \r\n\r\nI think it makes more sense to raise this issue in the `dataclasses` repo: https://github.com/ericvsmith/dataclasses."
] |
1,666,209,738 | 5,742 | Warning specifying future change in to_tf_dataset behaviour | closed | 2023-04-13T11:10:00 | 2023-04-21T13:18:14 | 2023-04-21T13:11:09 | https://github.com/huggingface/datasets/pull/5742 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5742",
"html_url": "https://github.com/huggingface/datasets/pull/5742",
"diff_url": "https://github.com/huggingface/datasets/pull/5742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5742.patch",
"merged_at": "2023-04-21T13:11:09"
} | amyeroberts | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,665,860,919 | 5,741 | Fix CI warnings | closed | 2023-04-13T07:17:02 | 2023-04-13T09:48:10 | 2023-04-13T09:40:50 | https://github.com/huggingface/datasets/pull/5741 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5741",
"html_url": "https://github.com/huggingface/datasets/pull/5741",
"diff_url": "https://github.com/huggingface/datasets/pull/5741.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5741.patch",
"merged_at": "2023-04-13T09:40:50"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,664,132,130 | 5,740 | Fix CI mock filesystem fixtures | closed | 2023-04-12T08:52:35 | 2023-04-13T11:01:24 | 2023-04-13T10:54:13 | https://github.com/huggingface/datasets/pull/5740 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5740",
"html_url": "https://github.com/huggingface/datasets/pull/5740",
"diff_url": "https://github.com/huggingface/datasets/pull/5740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5740.patch",
"merged_at": "2023-04-13T10:54:13"
} | albertvillanova | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,663,762,901 | 5,739 | weird result during dataset split when data path starts with `/data` | open | 2023-04-12T04:51:35 | 2023-04-21T14:20:59 | null | https://github.com/huggingface/datasets/issues/5739 | null | airlsyn | false | [
"Same problem.",
"hi! \r\nI think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. \r\n@ericxsun Do you want to open a PR to fix the regex? As you already found the solution :) ",
"> hi! I think you can run python ... |
1,663,477,690 | 5,738 | load_dataset("text","dataset.txt") loads the wrong dataset! | closed | 2023-04-12T01:07:46 | 2023-04-19T12:08:27 | 2023-04-19T12:08:27 | https://github.com/huggingface/datasets/issues/5738 | null | Tylersuard | false | [
"You need to provide a text file as `data_files`, not as a configuration:\r\n\r\n```python\r\nmy_dataset = load_dataset(\"text\", data_files=\"TextFile.txt\")\r\n```\r\n\r\nOtherwise, since `data_files` is `None`, it picks up Colab's sample datasets from the `content` dir."
] |
1,662,919,811 | 5,737 | ClassLabel Error | closed | 2023-04-11T17:14:13 | 2023-04-13T16:49:57 | 2023-04-13T16:49:57 | https://github.com/huggingface/datasets/issues/5737 | null | mrcaelumn | false | [
"Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassL... |
1,662,286,061 | 5,736 | FORCE_REDOWNLOAD raises "Directory not empty" exception on second run | open | 2023-04-11T11:29:15 | 2023-11-30T07:16:58 | null | https://github.com/huggingface/datasets/issues/5736 | null | rcasero | false | [
"Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?",
"I have the same... |
1,662,150,903 | 5,735 | Implement sharding on merged iterable datasets | closed | 2023-04-11T10:02:25 | 2023-04-27T16:39:04 | 2023-04-27T16:32:09 | https://github.com/huggingface/datasets/pull/5735 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5735",
"html_url": "https://github.com/huggingface/datasets/pull/5735",
"diff_url": "https://github.com/huggingface/datasets/pull/5735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5735.patch",
"merged_at": "2023-04-27T16:32:09"
} | bruno-hays | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi ! What if one of the sub-iterables only has one shard ? In that case I don't think we'd end up with a correctly interleaved dataset, since only rank 0 would yield examples from this sub-iterable",
"Hi ! \r\nI just tested this ou... |
1,662,058,028 | 5,734 | Remove temporary pin of fsspec | closed | 2023-04-11T09:04:17 | 2023-04-11T11:04:52 | 2023-04-11T11:04:52 | https://github.com/huggingface/datasets/issues/5734 | null | albertvillanova | false | [] |
1,662,039,191 | 5,733 | Unpin fsspec | closed | 2023-04-11T08:52:12 | 2023-04-11T11:11:45 | 2023-04-11T11:04:51 | https://github.com/huggingface/datasets/pull/5733 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5733",
"html_url": "https://github.com/huggingface/datasets/pull/5733",
"diff_url": "https://github.com/huggingface/datasets/pull/5733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5733.patch",
"merged_at": "2023-04-11T11:04:51"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,662,020,571 | 5,732 | Enwik8 should support the standard split | closed | 2023-04-11T08:38:53 | 2023-04-11T09:28:17 | 2023-04-11T09:28:16 | https://github.com/huggingface/datasets/issues/5732 | null | lucaslingle | false | [
"#self-assign",
"The Enwik8 pipeline is not present in this codebase, and is hosted elsewhere. I have opened a PR [there](https://huggingface.co/datasets/enwik8/discussions/4) instead. "
] |
1,662,012,913 | 5,731 | Temporarily pin fsspec | closed | 2023-04-11T08:33:15 | 2023-04-11T08:57:45 | 2023-04-11T08:47:55 | https://github.com/huggingface/datasets/pull/5731 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5731",
"html_url": "https://github.com/huggingface/datasets/pull/5731",
"diff_url": "https://github.com/huggingface/datasets/pull/5731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5731.patch",
"merged_at": "2023-04-11T08:47:55"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,662,007,926 | 5,730 | CI is broken: ValueError: Name (mock) already in the registry and clobber is False | closed | 2023-04-11T08:29:46 | 2023-04-11T08:47:56 | 2023-04-11T08:47:56 | https://github.com/huggingface/datasets/issues/5730 | null | albertvillanova | false | [] |
1,661,929,923 | 5,729 | Fix nondeterministic sharded data split order | closed | 2023-04-11T07:34:20 | 2023-04-26T15:12:25 | 2023-04-26T15:05:12 | https://github.com/huggingface/datasets/pull/5729 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5729",
"html_url": "https://github.com/huggingface/datasets/pull/5729",
"diff_url": "https://github.com/huggingface/datasets/pull/5729.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5729.patch",
"merged_at": "2023-04-26T15:05:12"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### B... |
1,661,925,932 | 5,728 | The order of data split names is nondeterministic | closed | 2023-04-11T07:31:25 | 2023-04-26T15:05:13 | 2023-04-26T15:05:13 | https://github.com/huggingface/datasets/issues/5728 | null | albertvillanova | false | [] |
1,661,536,363 | 5,727 | load_dataset fails with FileNotFound error on Windows | closed | 2023-04-10T23:21:12 | 2023-07-21T14:08:20 | 2023-07-21T14:08:19 | https://github.com/huggingface/datasets/issues/5727 | null | joelkowalewski | false | [
"Hi! Can you please paste the entire error stack trace, not only the last few lines?",
"`----> 1 dataset = datasets.load_dataset(\"glue\", \"ax\")\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1767, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, ... |
1,660,944,807 | 5,726 | Fallback JSON Dataset loading does not load all values when features specified manually | closed | 2023-04-10T15:22:14 | 2023-04-21T06:35:28 | 2023-04-21T06:35:28 | https://github.com/huggingface/datasets/issues/5726 | null | myluki2000 | false | [
"Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix."
] |
1,660,455,202 | 5,725 | How to limit the number of examples in dataset, for testing? | closed | 2023-04-10T08:41:43 | 2023-04-21T06:16:24 | 2023-04-21T06:16:24 | https://github.com/huggingface/datasets/issues/5725 | null | ndvbd | false | [
"Hi! You can use the `nrows` parameter for this:\r\n```python\r\ndata = load_dataset(\"json\", data_files=data_path, nrows=10)\r\n```",
"@mariosasko I get:\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'nrows'`",
"I misread the format in which the dataset is stored - the `nrows` parameter wo... |
1,659,938,135 | 5,724 | Error after shuffling streaming IterableDatasets with downloaded dataset | closed | 2023-04-09T16:58:44 | 2023-04-20T20:37:30 | 2023-04-20T20:37:30 | https://github.com/huggingface/datasets/issues/5724 | null | szxiangjn | false | [
"Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\... |
1,659,837,510 | 5,722 | Distributed Training Error on Customized Dataset | closed | 2023-04-09T11:04:59 | 2023-07-24T14:50:46 | 2023-07-24T14:50:46 | https://github.com/huggingface/datasets/issues/5722 | null | wlhgtc | false | [
"Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node... |
1,659,680,682 | 5,721 | Calling datasets.load_dataset("text" ...) results in a wrong split. | open | 2023-04-08T23:55:12 | 2023-04-08T23:55:12 | null | https://github.com/huggingface/datasets/issues/5721 | null | cyrilzakka | false | [] |
1,659,610,705 | 5,720 | Streaming IterableDatasets do not work with torch DataLoaders | open | 2023-04-08T18:45:48 | 2025-03-19T14:06:47 | null | https://github.com/huggingface/datasets/issues/5720 | null | jlehrer1 | false | [
"Edit: This behavior is true even without `.take/.set`",
"I'm experiencing the same problem that @jlehrer1. I was able to reproduce it with a very small example:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef my_gen():\r\n... |
1,659,203,222 | 5,719 | Array2D feature creates a list of list instead of a numpy array | closed | 2023-04-07T21:04:08 | 2023-04-20T15:34:41 | 2023-04-20T15:34:41 | https://github.com/huggingface/datasets/issues/5719 | null | offchan42 | false | [
"Hi! \r\n\r\nYou need to set the format to `np` before indexing the dataset to get NumPy arrays:\r\n```python\r\nfeatures = Features(dict(seq=Array2D((2,2), 'float32'))) \r\nds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)\r\nds.set_format(\"np\")\r\na = ds[0]['seq']\r\n```\r\n\r\n> I th... |
1,658,958,406 | 5,718 | Reorder default data splits to have validation before test | closed | 2023-04-07T16:01:26 | 2023-04-27T14:43:13 | 2023-04-27T14:35:52 | https://github.com/huggingface/datasets/pull/5718 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5718",
"html_url": "https://github.com/huggingface/datasets/pull/5718",
"diff_url": "https://github.com/huggingface/datasets/pull/5718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5718.patch",
"merged_at": "2023-04-27T14:35:52"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718\r\n```\r\nFAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['ran... |
1,658,729,866 | 5,717 | Errror when saving to disk a dataset of images | open | 2023-04-07T11:59:17 | 2025-07-13T08:27:47 | null | https://github.com/huggingface/datasets/issues/5717 | null | jplu | false | [
"Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately... |
1,658,613,092 | 5,716 | Handle empty audio | closed | 2023-04-07T09:51:40 | 2023-09-27T17:47:08 | 2023-09-27T17:47:08 | https://github.com/huggingface/datasets/issues/5716 | null | ben-8543 | false | [
"Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example(... |
1,657,479,788 | 5,715 | Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List | closed | 2023-04-06T13:57:48 | 2023-04-20T17:16:26 | 2023-04-20T17:16:26 | https://github.com/huggingface/datasets/issues/5715 | null | jungbaepark | false | [
"Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n "
] |
1,657,388,033 | 5,714 | Fix xnumpy_load for .npz files | closed | 2023-04-06T13:01:45 | 2023-04-07T09:23:54 | 2023-04-07T09:16:57 | https://github.com/huggingface/datasets/pull/5714 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5714",
"html_url": "https://github.com/huggingface/datasets/pull/5714",
"diff_url": "https://github.com/huggingface/datasets/pull/5714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5714.patch",
"merged_at": "2023-04-07T09:16:57"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,657,141,251 | 5,713 | ArrowNotImplementedError when loading dataset from the hub | closed | 2023-04-06T10:27:22 | 2023-04-06T13:06:22 | 2023-04-06T13:06:21 | https://github.com/huggingface/datasets/issues/5713 | null | jplu | false | [
"Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. ... |
1,655,972,106 | 5,712 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | closed | 2023-04-05T16:47:10 | 2023-04-06T08:32:37 | 2023-04-05T17:17:44 | https://github.com/huggingface/datasets/issues/5712 | null | rcasero | false | [
"Closing since this is a duplicate of #5711",
"> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate"
] |
1,655,971,647 | 5,711 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | closed | 2023-04-05T16:46:49 | 2023-04-07T09:16:59 | 2023-04-07T09:16:59 | https://github.com/huggingface/datasets/issues/5711 | null | rcasero | false | [
"It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```pyt... |
1,655,703,534 | 5,710 | OSError: Memory mapping file failed: Cannot allocate memory | closed | 2023-04-05T14:11:26 | 2023-04-20T17:16:40 | 2023-04-20T17:16:40 | https://github.com/huggingface/datasets/issues/5710 | null | Saibo-creator | false | [
"Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they a... |
1,655,423,503 | 5,709 | Manually dataset info made not taken into account | closed | 2023-04-05T11:15:17 | 2023-04-06T08:52:20 | 2023-04-06T08:52:19 | https://github.com/huggingface/datasets/issues/5709 | null | jplu | false | [
"hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually... |
1,655,023,642 | 5,708 | Dataset sizes are in MiB instead of MB in dataset cards | closed | 2023-04-05T06:36:03 | 2023-12-21T10:20:28 | 2023-12-21T10:20:27 | https://github.com/huggingface/datasets/issues/5708 | null | albertvillanova | false | [
"Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5",
"looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`",
"I am only looping trough the dataset cards, assuming tha... |
1,653,545,835 | 5,706 | Support categorical data types for Parquet | closed | 2023-04-04T09:45:35 | 2024-06-07T12:20:43 | 2024-06-07T12:20:43 | https://github.com/huggingface/datasets/issues/5706 | null | kklemon | false | [
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:... |
1,653,500,383 | 5,705 | Getting next item from IterableDataset took forever. | closed | 2023-04-04T09:16:17 | 2023-04-05T23:35:41 | 2023-04-05T23:35:41 | https://github.com/huggingface/datasets/issues/5705 | null | HongtaoYang | false | [
"Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...",
"Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beh... |
1,653,471,356 | 5,704 | 5537 speedup load | open | 2023-04-04T08:58:14 | 2023-04-07T16:10:55 | null | https://github.com/huggingface/datasets/pull/5704 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5704",
"html_url": "https://github.com/huggingface/datasets/pull/5704",
"diff_url": "https://github.com/huggingface/datasets/pull/5704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5704.patch",
"merged_at": null
} | semajyllek | true | [
"Awesome ! cc @mariosasko :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5704). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this!\r\n\r\nYour solution only works if the `root` is `\"\"`, e.g., this would yield an... |
1,653,158,955 | 5,703 | [WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only | closed | 2023-04-04T04:37:49 | 2023-04-20T03:17:37 | 2023-04-20T03:17:32 | https://github.com/huggingface/datasets/pull/5703 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5703",
"html_url": "https://github.com/huggingface/datasets/pull/5703",
"diff_url": "https://github.com/huggingface/datasets/pull/5703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5703.patch",
"merged_at": null
} | hvaara | true | [
"`multiprocess` uses `dill` instead of `pickle` for pickling shared objects and, as such, can pickle more types than `multiprocessing`. And I don't think this is something we want to change :).",
"That makes sense to me, and I don't think you should merge this change. I was only curious about the performance impa... |
1,653,104,720 | 5,702 | Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None? | closed | 2023-04-04T03:20:43 | 2023-04-05T14:15:18 | 2023-04-05T14:15:17 | https://github.com/huggingface/datasets/issues/5702 | null | gitforziio | false | [
"Hi ! `datasets` uses Apache Arrow as backend to store the data, and it requires each column to have a fixed type. Therefore a column can't have a mix of dicts/lists/strings.\r\n\r\nThough it's possible to have one (nullable) field for each type:\r\n```python\r\nfeatures = Features({\r\n \"text_alone\": Value(\"... |
1,652,931,399 | 5,701 | Add Dataset.from_spark | closed | 2023-04-03T23:51:29 | 2023-06-16T16:39:32 | 2023-04-26T15:43:39 | https://github.com/huggingface/datasets/pull/5701 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5701",
"html_url": "https://github.com/huggingface/datasets/pull/5701",
"diff_url": "https://github.com/huggingface/datasets/pull/5701.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5701.patch",
"merged_at": "2023-04-26T15:43:39"
} | maddiedawson | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mariosasko Would you or another HF datasets maintainer be able to review this, please?",
"Amazing ! Great job @maddiedawson \r\n\r\nDo you know if it's possible to also support writing to Parquet using the HF ParquetWriter if `fil... |
1,652,527,530 | 5,700 | fix: fix wrong modification of the 'cache_file_name' -related paramet… | open | 2023-04-03T18:05:26 | 2023-04-06T17:17:27 | null | https://github.com/huggingface/datasets/pull/5700 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5700",
"html_url": "https://github.com/huggingface/datasets/pull/5700",
"diff_url": "https://github.com/huggingface/datasets/pull/5700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5700.patch",
"merged_at": null
} | FrancoisNoyez | true | [
"Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`",
"@lhoestq \r\nRegarding what you suggest:\r\nThe thing i... |
1,652,437,419 | 5,699 | Issue when wanting to split in memory a cached dataset | open | 2023-04-03T17:00:07 | 2024-05-15T13:12:18 | null | https://github.com/huggingface/datasets/issues/5699 | null | FrancoisNoyez | false | [
"Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)",
"Facing the same issue. Kindly fix this bug."
] |
1,652,183,611 | 5,698 | Add Qdrant as another search index | open | 2023-04-03T14:25:19 | 2023-04-11T10:28:40 | null | https://github.com/huggingface/datasets/issues/5698 | null | kacperlukawski | false | [
"@mariosasko I'd appreciate your feedback on this. "
] |
1,651,812,614 | 5,697 | Raise an error on missing distributed seed | closed | 2023-04-03T10:44:58 | 2023-04-04T15:05:24 | 2023-04-04T14:58:16 | https://github.com/huggingface/datasets/pull/5697 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5697",
"html_url": "https://github.com/huggingface/datasets/pull/5697",
"diff_url": "https://github.com/huggingface/datasets/pull/5697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5697.patch",
"merged_at": "2023-04-04T14:58:16"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,651,707,008 | 5,696 | Shuffle a sharded iterable dataset without seed can lead to duplicate data | closed | 2023-04-03T09:40:03 | 2023-04-04T14:58:18 | 2023-04-04T14:58:18 | https://github.com/huggingface/datasets/issues/5696 | null | lhoestq | false | [] |
1,650,974,156 | 5,695 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError | closed | 2023-04-02T14:42:44 | 2024-05-15T12:04:47 | 2023-04-10T08:04:04 | https://github.com/huggingface/datasets/issues/5695 | null | amariucaitheodor | false | [
"Hi ! It looks like an issue with PyArrow: https://issues.apache.org/jira/browse/ARROW-5030\r\n\r\nIt appears it can happen when you have parquet files with row groups larger than 2GB.\r\nI can see that your parquet files are around 10GB. It is usually advised to keep a value around the default value 500MB to avoid... |
1,650,467,793 | 5,694 | Dataset configuration | open | 2023-04-01T13:08:05 | 2023-04-04T14:54:37 | null | https://github.com/huggingface/datasets/issues/5694 | null | lhoestq | false | [
"Originally we also though about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to mod... |
1,649,934,749 | 5,693 | [docs] Split pattern search order | closed | 2023-03-31T19:51:38 | 2023-04-03T18:43:30 | 2023-04-03T18:29:58 | https://github.com/huggingface/datasets/pull/5693 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5693",
"html_url": "https://github.com/huggingface/datasets/pull/5693",
"diff_url": "https://github.com/huggingface/datasets/pull/5693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5693.patch",
"merged_at": "2023-04-03T18:29:58"
} | stevhliu | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,649,818,644 | 5,692 | pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types | open | 2023-03-31T18:19:40 | 2024-01-14T07:24:21 | null | https://github.com/huggingface/datasets/issues/5692 | null | cyanic-selkie | false | [
"Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?",
"> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n",
"@cyanic-selkie cou... |
1,649,737,526 | 5,691 | [docs] Compress data files | closed | 2023-03-31T17:17:26 | 2023-04-19T13:37:32 | 2023-04-19T07:25:58 | https://github.com/huggingface/datasets/pull/5691 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5691",
"html_url": "https://github.com/huggingface/datasets/pull/5691",
"diff_url": "https://github.com/huggingface/datasets/pull/5691.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5691.patch",
"merged_at": "2023-04-19T07:25:58"
} | stevhliu | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"[Confirmed](https://huggingface.slack.com/archives/C02EMARJ65P/p1680541667004199) with the Hub team the file size limit for the Hugging Face Hub is 10MB :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<deta... |
1,648,956,349 | 5,689 | Support streaming Beam datasets from HF GCS preprocessed data | closed | 2023-03-31T08:44:24 | 2023-04-12T05:57:55 | 2023-04-12T05:50:31 | https://github.com/huggingface/datasets/pull/5689 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5689",
"html_url": "https://github.com/huggingface/datasets/pull/5689",
"diff_url": "https://github.com/huggingface/datasets/pull/5689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5689.patch",
"merged_at": "2023-04-12T05:50:30"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"wikipedia\", \"20220301.en\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\nOut[2]: \r\n{'id': '12',\r\n 'url': 'https://en.... |
1,649,289,883 | 5,690 | raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api | closed | 2023-03-31T08:22:22 | 2023-07-21T14:21:57 | 2023-07-21T14:21:57 | https://github.com/huggingface/datasets/issues/5690 | null | wccccp | false | [
"Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of ... |
1,648,463,504 | 5,688 | Wikipedia download_and_prepare for GCS | closed | 2023-03-30T23:43:22 | 2024-03-15T15:59:18 | 2024-03-15T15:59:18 | https://github.com/huggingface/datasets/issues/5688 | null | adrianfagerland | false | [
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processin... |
1,647,009,018 | 5,687 | Document to compress data files before uploading | closed | 2023-03-30T06:41:07 | 2023-04-19T07:25:59 | 2023-04-19T07:25:59 | https://github.com/huggingface/datasets/issues/5687 | null | albertvillanova | false | [
"Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`,... |
1,646,308,228 | 5,686 | set dev version | closed | 2023-03-29T18:24:13 | 2023-03-29T18:33:49 | 2023-03-29T18:24:22 | https://github.com/huggingface/datasets/pull/5686 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5686",
"html_url": "https://github.com/huggingface/datasets/pull/5686",
"diff_url": "https://github.com/huggingface/datasets/pull/5686.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5686.patch",
"merged_at": "2023-03-29T18:24:22"
} | lhoestq | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,646,048,667 | 5,685 | Broken Image render on the hub website | closed | 2023-03-29T15:25:30 | 2023-03-30T07:54:25 | 2023-03-30T07:54:25 | https://github.com/huggingface/datasets/issues/5685 | null | FrancescoSaverioZuppichini | false | [
"Hi! \r\n\r\nYou can fix the viewer by adding the `dataset_info` YAML field deleted in https://huggingface.co/datasets/Francesco/cell-towers/commit/b95b59ddd91ebe9c12920f0efe0ed415cd0d4298 back to the metadata section of the card. \r\n\r\nTo avoid this issue in the feature, you can use `huggingface_hub`'s [RepoCard... |
1,646,013,226 | 5,684 | Release: 2.11.0 | closed | 2023-03-29T15:06:07 | 2023-03-29T18:30:34 | 2023-03-29T18:15:54 | https://github.com/huggingface/datasets/pull/5684 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5684",
"html_url": "https://github.com/huggingface/datasets/pull/5684",
"diff_url": "https://github.com/huggingface/datasets/pull/5684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5684.patch",
"merged_at": "2023-03-29T18:15:54"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,646,001,197 | 5,683 | Fix verification_mode when ignore_verifications is passed | closed | 2023-03-29T15:00:50 | 2023-03-29T17:36:06 | 2023-03-29T17:28:57 | https://github.com/huggingface/datasets/pull/5683 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5683",
"html_url": "https://github.com/huggingface/datasets/pull/5683",
"diff_url": "https://github.com/huggingface/datasets/pull/5683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5683.patch",
"merged_at": "2023-03-29T17:28:57"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,646,000,571 | 5,682 | ValueError when passing ignore_verifications | closed | 2023-03-29T15:00:30 | 2023-03-29T17:28:58 | 2023-03-29T17:28:58 | https://github.com/huggingface/datasets/issues/5682 | null | albertvillanova | false | [] |
1,645,630,784 | 5,681 | Add information about patterns search order to the doc about structuring repo | closed | 2023-03-29T11:44:49 | 2023-04-03T18:31:11 | 2023-04-03T18:31:11 | https://github.com/huggingface/datasets/issues/5681 | null | polinaeterna | false | [
"Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)",
"Closed in #5693 "
] |
1,645,430,103 | 5,680 | Fix a description error for interleave_datasets. | closed | 2023-03-29T09:50:23 | 2023-03-30T13:14:19 | 2023-03-30T13:07:18 | https://github.com/huggingface/datasets/pull/5680 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5680",
"html_url": "https://github.com/huggingface/datasets/pull/5680",
"diff_url": "https://github.com/huggingface/datasets/pull/5680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5680.patch",
"merged_at": "2023-03-30T13:07:18"
} | QizhiPei | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_a... |
1,645,184,622 | 5,679 | Allow load_dataset to take a working dir for intermediate data | open | 2023-03-29T07:21:09 | 2023-04-12T22:30:25 | null | https://github.com/huggingface/datasets/issues/5679 | null | lu-wang-dl | false | [
"Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud stora... |
1,645,018,359 | 5,678 | Add support to create a Dataset from spark dataframe | closed | 2023-03-29T04:36:28 | 2024-08-27T14:43:19 | 2023-07-21T14:15:38 | https://github.com/huggingface/datasets/issues/5678 | null | lu-wang-dl | false | [
"if i read spark Dataframe , got an error on multi-node Spark cluster.\r\nDid the Api (Dataset.from_spark) support Spark cluster, read dataframe and save_to_disk?\r\n\r\nError: \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a b... |
1,644,828,606 | 5,677 | Dataset.map() crashes when any column contains more than 1000 empty dictionaries | closed | 2023-03-29T00:01:31 | 2023-07-07T14:01:14 | 2023-07-07T14:01:14 | https://github.com/huggingface/datasets/issues/5677 | null | mtoles | false | [] |
1,641,763,478 | 5,675 | Filter datasets by language code | closed | 2023-03-27T09:42:28 | 2023-03-30T08:08:15 | 2023-03-30T08:08:15 | https://github.com/huggingface/datasets/issues/5675 | null | named-entity | false | [
"The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missi... |
1,641,084,105 | 5,674 | Stored XSS | closed | 2023-03-26T20:55:58 | 2024-04-30T22:56:41 | 2023-03-27T21:01:55 | https://github.com/huggingface/datasets/issues/5674 | null | Fadavvi | false | [
"Hi! You can contact `security@huggingface.co` to report this vulnerability."
] |
1,641,066,352 | 5,673 | Pass down storage options | closed | 2023-03-26T20:09:37 | 2023-03-28T15:03:38 | 2023-03-28T14:54:17 | https://github.com/huggingface/datasets/pull/5673 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5673",
"html_url": "https://github.com/huggingface/datasets/pull/5673",
"diff_url": "https://github.com/huggingface/datasets/pull/5673.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5673.patch",
"merged_at": "2023-03-28T14:54:17"
} | dwyatte | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> download_and_prepare is not called when streaming a dataset, so we may need to have storage_options in the DatasetBuilder.__init__ ? This way it could also be passed later to as_streaming_dataset and the StreamingDownloadManager\r\... |
1,641,005,322 | 5,672 | Pushing dataset to hub crash | closed | 2023-03-26T17:42:13 | 2023-03-30T08:11:05 | 2023-03-30T08:11:05 | https://github.com/huggingface/datasets/issues/5672 | null | tzvc | false | [
"Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\... |