id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,090,413,758 | https://api.github.com/repos/huggingface/datasets/issues/3501 | https://github.com/huggingface/datasets/pull/3501 | 3,501 | Update pib dataset card | closed | 0 | 2021-12-29T10:14:40 | 2021-12-29T11:13:21 | 2021-12-29T11:13:21 | albertvillanova | [] | Related to #3496 | true |
1,090,406,133 | https://api.github.com/repos/huggingface/datasets/issues/3500 | https://github.com/huggingface/datasets/pull/3500 | 3,500 | Docs: Add VCTK dataset description | closed | 0 | 2021-12-29T10:02:05 | 2022-01-04T10:46:02 | 2022-01-04T10:25:09 | jaketae | [] | This PR is a very minor followup to #1837, with only docs changes (single comment string). | true |
1,090,132,618 | https://api.github.com/repos/huggingface/datasets/issues/3499 | https://github.com/huggingface/datasets/issues/3499 | 3,499 | Adjusting chunk size for streaming datasets | closed | 2 | 2021-12-28T21:17:53 | 2022-05-06T16:29:05 | 2022-05-06T16:29:05 | JoelNiklaus | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
I want to use mc4 which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the ... | false |
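The body above is truncated; below is a minimal sketch of the pattern #3499 describes, assuming a `datasets` version in which `IterableDataset` supports `filter` and `take` (both arrived after this issue was opened):

```python
from datasets import load_dataset

# Stream mC4 instead of materializing it locally; records are fetched in chunks.
streamed = load_dataset("mc4", "en", split="train", streaming=True)

# Filter documents on the fly while iterating over the stream.
filtered = streamed.filter(lambda doc: len(doc["text"]) > 1_000)
for doc in filtered.take(3):
    print(doc["url"])
```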
1,090,096,332 | https://api.github.com/repos/huggingface/datasets/issues/3498 | https://github.com/huggingface/datasets/pull/3498 | 3,498 | update `pretty_name` for first 200 datasets | closed | 0 | 2021-12-28T19:50:07 | 2022-07-10T14:36:53 | 2022-01-05T16:38:21 | bhavitvyamalik | [] | I made a script some time back to fetch `pretty_names` from the `papers_with_code` dataset, along with some other rules in case a dataset wasn't available on `papers_with_code`. Updating them in the `README` of `datasets`. I took only the first 200 datasets into consideration, and after some eyeballing, most of them were loo... | true |
1,090,050,148 | https://api.github.com/repos/huggingface/datasets/issues/3497 | https://github.com/huggingface/datasets/issues/3497 | 3,497 | Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug | closed | 2 | 2021-12-28T18:03:49 | 2022-01-21T13:22:27 | 2022-01-21T13:22:27 | patrickvonplaten | [
"bug"
] | Running:
```python
from datasets import load_dataset, DatasetDict
import datasets
from transformers import AutoFeatureExtractor
raw_datasets = DatasetDict()
raw_datasets["train"] = load_dataset("common_voice", "ab", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2ve... | false |
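The repro above is cut off; here is a hedged sketch of the two-step pattern the issue reports (resampling, then a multiprocess map), with the feature-extraction step simplified to an identity function:

```python
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "ab", split="train")
# Change the sampling rate; audio is lazily resampled on access.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# The reported bug was triggered by a subsequent map with num_proc > 1.
ds = ds.map(lambda batch: batch, batched=True, num_proc=2)
```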
1,089,989,155 | https://api.github.com/repos/huggingface/datasets/issues/3496 | https://github.com/huggingface/datasets/pull/3496 | 3,496 | Update version of pib dataset and make it streamable | closed | 3 | 2021-12-28T16:01:55 | 2022-01-03T14:42:28 | 2021-12-29T08:42:57 | albertvillanova | [] | This PR:
- Updates version of pib dataset: from 0.0.0 to 1.3.0
- Makes the dataset streamable
Fix #3491.
CC: @severo | true |
1,089,983,632 | https://api.github.com/repos/huggingface/datasets/issues/3495 | https://github.com/huggingface/datasets/issues/3495 | 3,495 | Add VoxLingua107 | open | 0 | 2021-12-28T15:51:43 | 2021-12-28T15:51:43 | null | jaketae | [
"dataset request"
] | ## Adding a Dataset
- **Name:** VoxLingua107
- **Description:** VoxLingua107 is a speech dataset for training spoken language identification models.
- **Paper:** https://arxiv.org/abs/2011.12998
- **Data:** http://bark.phon.ioc.ee/voxlingua107/
- **Motivation:** 107 languages, totaling 6628 hours for the train sp... | false |
1,089,983,103 | https://api.github.com/repos/huggingface/datasets/issues/3494 | https://github.com/huggingface/datasets/pull/3494 | 3,494 | Clone full repo to detect new tags when mirroring datasets on the Hub | closed | 2 | 2021-12-28T15:50:47 | 2021-12-28T16:07:21 | 2021-12-28T16:07:20 | lhoestq | [] | The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags.
By cloning the full repository we can properly detect a new release, and tag all the dataset repositories accordingly
cc @SBrandeis | true |
1,089,967,286 | https://api.github.com/repos/huggingface/datasets/issues/3493 | https://github.com/huggingface/datasets/pull/3493 | 3,493 | Fix VCTK encoding | closed | 0 | 2021-12-28T15:23:36 | 2021-12-28T15:48:18 | 2021-12-28T15:48:17 | lhoestq | [] | utf-8 encoding was missing in the VCTK dataset builder added in #3351 | true |
1,089,952,943 | https://api.github.com/repos/huggingface/datasets/issues/3492 | https://github.com/huggingface/datasets/pull/3492 | 3,492 | Add `gzip` for `to_json` | closed | 0 | 2021-12-28T15:01:11 | 2022-07-10T14:36:52 | 2022-01-05T13:03:36 | bhavitvyamalik | [] | (Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required. | true |
1,089,918,018 | https://api.github.com/repos/huggingface/datasets/issues/3491 | https://github.com/huggingface/datasets/issues/3491 | 3,491 | Update version of pib dataset | closed | 0 | 2021-12-28T14:03:58 | 2021-12-29T08:42:57 | 2021-12-29T08:42:57 | albertvillanova | [
"dataset request"
] | On the Hub we have v0, while there exists v1.3.
Related to bigscience-workshop/data_tooling#130
| false |
1,089,730,181 | https://api.github.com/repos/huggingface/datasets/issues/3490 | https://github.com/huggingface/datasets/issues/3490 | 3,490 | Does datasets support load text from HDFS? | open | 1 | 2021-12-28T08:56:02 | 2022-02-14T14:00:51 | null | dancingpipi | [
"enhancement"
The raw text data is stored on HDFS because the dataset is too large to store on my development machine,
so I wonder: does `datasets` support reading data from HDFS? | false |
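For reference, a hedged sketch of what #3490 asks for, assuming an fsspec-compatible HDFS driver is installed; the `hdfs://` URL and cluster address below are hypothetical:

```python
from datasets import load_dataset

# Hypothetical HDFS location; path resolution would go through fsspec.
ds = load_dataset(
    "text",
    data_files="hdfs://namenode:8020/corpora/train/*.txt",
    split="train",
)
```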
1,089,401,926 | https://api.github.com/repos/huggingface/datasets/issues/3489 | https://github.com/huggingface/datasets/pull/3489 | 3,489 | Avoid unnecessary list creations | open | 1 | 2021-12-27T18:20:56 | 2022-07-06T15:19:49 | null | bryant1410 | [] | Like in `join([... for s in ...])`. Also changed other things that I saw:
* Use a `with` statement for the many `open` calls that were missing one, so the files don't remain open.
* Remove unused variables.
* Many HTTP links converted into HTTPS (verified).
* Remove unnecessary "r" mode arg in `open` (double-checked it was actual... | true |
1,089,345,653 | https://api.github.com/repos/huggingface/datasets/issues/3488 | https://github.com/huggingface/datasets/issues/3488 | 3,488 | URL query parameters are set as path in the compression hop for fsspec | open | 1 | 2021-12-27T16:29:00 | 2022-01-05T15:15:25 | null | albertvillanova | [
"bug"
] | ## Describe the bug
There is an issue with `StreamingDownloadManager._extract`.
I don't know how the test `test_streaming_gg_drive_gzipped` passes:
For
```python
TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
urlpath = StreamingDownloadManager().... | false |
1,089,209,031 | https://api.github.com/repos/huggingface/datasets/issues/3487 | https://github.com/huggingface/datasets/pull/3487 | 3,487 | Update ADD_NEW_DATASET.md | closed | 0 | 2021-12-27T12:24:51 | 2021-12-27T15:00:45 | 2021-12-27T15:00:45 | apergo-ai | [] | fixed make style prompt for Windows Terminal | true |
1,089,171,551 | https://api.github.com/repos/huggingface/datasets/issues/3486 | https://github.com/huggingface/datasets/pull/3486 | 3,486 | Fix weird spacing in ManualDownloadError message | closed | 0 | 2021-12-27T11:20:36 | 2021-12-28T09:03:26 | 2021-12-28T09:00:28 | bryant1410 | [] | `textwrap.dedent` works based on the spaces at the beginning. Before this change, there wasn't any space. | true |
1,089,027,581 | https://api.github.com/repos/huggingface/datasets/issues/3485 | https://github.com/huggingface/datasets/issues/3485 | 3,485 | skip columns which cannot set to specific format when set_format | closed | 2 | 2021-12-27T07:19:55 | 2021-12-27T09:07:07 | 2021-12-27T09:07:07 | tshu-w | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns.
**Describe the solution you'd like**
Skip columns which cannot be set to a specific forma... | false |
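A minimal sketch of the existing workaround for #3485 using `output_all_columns`, with hypothetical column names:

```python
from datasets import Dataset

ds = Dataset.from_dict(
    {"input_ids": [[1, 2], [3, 4]], "attention_mask": [[1, 1], [1, 1]], "text": ["a", "b"]}
)
# Only the listed columns are converted to torch tensors; the string column
# is passed through unformatted instead of raising a conversion error.
ds.set_format("torch", columns=["input_ids", "attention_mask"], output_all_columns=True)
print(type(ds[0]["input_ids"]), type(ds[0]["text"]))
```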
1,088,910,402 | https://api.github.com/repos/huggingface/datasets/issues/3484 | https://github.com/huggingface/datasets/issues/3484 | 3,484 | make shape verification to use ArrayXD instead of nested lists for map | open | 1 | 2021-12-27T02:16:02 | 2022-01-05T13:54:03 | null | tshu-w | [
"enhancement"
As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO make shape verification use ArrayXD instead of nest... | false |
1,088,784,157 | https://api.github.com/repos/huggingface/datasets/issues/3483 | https://github.com/huggingface/datasets/pull/3483 | 3,483 | Remove unused phony rule from Makefile | closed | 1 | 2021-12-26T14:37:13 | 2022-01-05T19:44:56 | 2022-01-05T16:34:12 | bryant1410 | [] | null | true |
1,088,317,921 | https://api.github.com/repos/huggingface/datasets/issues/3482 | https://github.com/huggingface/datasets/pull/3482 | 3,482 | Fix duplicate keys in NewsQA | closed | 2 | 2021-12-24T11:01:59 | 2022-09-23T12:57:10 | 2022-09-23T12:57:10 | bryant1410 | [
"dataset contribution"
] | * Fix duplicate keys in NewsQA when loading from CSV files.
* Fix s/narqa/newsqa/ in the download manually error message.
* Make the download-manually error message display nicely when printed. Otherwise, it is hard to read due to spacing issues.
* Fix the format of the license text.
* Reformat the code to make it simple... | true |
1,088,308,343 | https://api.github.com/repos/huggingface/datasets/issues/3481 | https://github.com/huggingface/datasets/pull/3481 | 3,481 | Fix overriding of filesystem info | closed | 0 | 2021-12-24T10:42:31 | 2021-12-24T11:08:59 | 2021-12-24T11:08:59 | albertvillanova | [] | Previously, `BaseCompressedFileFileSystem.info` was overridden and transformed from function to dict.
This generated a bug for filesystem methods that use `self.info()`, like e.g. `fs.isfile()`.
This PR:
- Adds tests for `fs.isfile` (that use `fs.info`).
- Fixes custom `BaseCompressedFileFileSystem.info` by rem... | true |
1,088,267,110 | https://api.github.com/repos/huggingface/datasets/issues/3480 | https://github.com/huggingface/datasets/issues/3480 | 3,480 | the compression format requested when saving a dataset in json format is not respected | closed | 3 | 2021-12-24T09:23:51 | 2022-01-05T13:03:35 | 2022-01-05T13:03:35 | SaulLu | [
"bug"
] | ## Describe the bug
In the documentation of the `to_json` method, it is stated in the parameters that
> **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json.
However, when we pass, for example, `compression="gzip"`, the saved file is not compressed.
Would you also have expected compression t... | false |
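What the fix in #3492 enables for #3480; a small sketch assuming a `datasets` version that includes that PR:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})
# The compression kwarg is now honored instead of being silently ignored.
ds.to_json("sample.json.gz", compression="gzip")
```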
1,088,232,880 | https://api.github.com/repos/huggingface/datasets/issues/3479 | https://github.com/huggingface/datasets/issues/3479 | 3,479 | Dataset preview is not available (I think for all Hugging Face datasets) | closed | 4 | 2021-12-24T08:18:48 | 2021-12-24T14:27:46 | 2021-12-24T14:27:46 | Abirate | [
"bug",
"dataset-viewer"
] | ## Dataset viewer issue for '*french_book_reviews*'
**Link:** https://huggingface.co/datasets/Abirate/french_book_reviews
**short description of the issue**
For my dataset, the dataset preview is no longer functional (it used to work: The dataset had been added the day before and it was fine...)
And, after lo... | false |
1,087,860,180 | https://api.github.com/repos/huggingface/datasets/issues/3478 | https://github.com/huggingface/datasets/pull/3478 | 3,478 | Extend support for streaming datasets that use os.walk | closed | 1 | 2021-12-23T16:42:55 | 2021-12-24T10:50:20 | 2021-12-24T10:50:19 | albertvillanova | [] | This PR extends the support in streaming mode for datasets that use `os.walk`, by patching that function.
This PR adds support for streaming mode to datasets:
1. autshumato
1. code_x_glue_cd_code_to_text
1. code_x_glue_tc_nl_code_search_adv
1. nchlt
CC: @severo | true |
1,087,850,253 | https://api.github.com/repos/huggingface/datasets/issues/3477 | https://github.com/huggingface/datasets/pull/3477 | 3,477 | Use `iter_files` instead of `str(Path(...)` in image dataset | closed | 6 | 2021-12-23T16:26:55 | 2021-12-28T15:15:02 | 2021-12-28T15:15:02 | mariosasko | [] | Use `iter_files` in the `beans` and the `cats_vs_dogs` dataset scripts as suggested by @albertvillanova.
Additional changes:
* Fix `iter_files` in `MockDownloadManager` (see this [CI error](https://app.circleci.com/pipelines/github/huggingface/datasets/9247/workflows/2657ff8a-b531-4fd9-a9fc-6541a72e8d83/jobs/57028)... | true |
1,087,622,872 | https://api.github.com/repos/huggingface/datasets/issues/3476 | https://github.com/huggingface/datasets/pull/3476 | 3,476 | Extend support for streaming datasets that use ET.parse | closed | 0 | 2021-12-23T11:18:46 | 2021-12-23T15:34:30 | 2021-12-23T15:34:30 | albertvillanova | [] | This PR extends the support in streaming mode for datasets that use `ET.parse`, by patching the function.
This PR adds support for streaming mode to datasets:
1. ami
1. assin
1. assin2
1. counter
1. enriched_web_nlg
1. europarl_bilingual
1. hyperpartisan_news_detection
1. polsum
1. qa4mre
1. quail
1. ted_... | true |
1,087,352,041 | https://api.github.com/repos/huggingface/datasets/issues/3475 | https://github.com/huggingface/datasets/issues/3475 | 3,475 | The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish | open | 2 | 2021-12-23T03:56:43 | 2021-12-24T00:23:03 | null | puzzler10 | [
"bug"
] | ## Describe the bug
See title. I don't think this is intentional, and they should probably be removed. If they stay, the dataset description should at least be updated to make this clear to the user.
## Steps to reproduce the bug
Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomato... | false |
1,086,945,384 | https://api.github.com/repos/huggingface/datasets/issues/3474 | https://github.com/huggingface/datasets/pull/3474 | 3,474 | Decode images when iterating | closed | 0 | 2021-12-22T15:34:49 | 2023-09-24T09:54:04 | 2021-12-28T16:08:10 | lhoestq | [] | If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned.
This PR enables image decoding in `Dataset.__iter__`
Close https://github.com/huggingface/datasets/issues/3473 | true |
1,086,937,610 | https://api.github.com/repos/huggingface/datasets/issues/3473 | https://github.com/huggingface/datasets/issues/3473 | 3,473 | Iterating over a vision dataset doesn't decode the images | closed | 9 | 2021-12-22T15:26:32 | 2021-12-27T14:13:21 | 2021-12-23T15:21:57 | lhoestq | [
"bug",
"vision"
] | ## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"... | false |
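The repro in #3473 is truncated; a hedged completion of what it demonstrates:

```python
from datasets import load_dataset
import PIL.Image

mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"]              # indexing decodes to a PIL image
first_iterated = next(iter(mnist))["image"]  # iteration did not, before #3474

# Before the fix, the iterated value was the raw {"bytes", "path"} dict
# rather than a decoded PIL image.
print(isinstance(first_image, PIL.Image.Image), type(first_iterated))
```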
1,086,908,508 | https://api.github.com/repos/huggingface/datasets/issues/3472 | https://github.com/huggingface/datasets/pull/3472 | 3,472 | Fix `str(Path(...))` conversion in streaming on Linux | closed | 0 | 2021-12-22T15:06:03 | 2021-12-22T16:52:53 | 2021-12-22T16:52:52 | mariosasko | [] | Fix `str(Path(...))` conversion in streaming on Linux. This should fix the streaming of the `beans` and `cats_vs_dogs` datasets. | true |
1,086,588,074 | https://api.github.com/repos/huggingface/datasets/issues/3471 | https://github.com/huggingface/datasets/pull/3471 | 3,471 | Fix Tashkeela dataset to yield stripped text | closed | 0 | 2021-12-22T08:41:30 | 2021-12-22T10:12:08 | 2021-12-22T10:12:07 | albertvillanova | [] | This PR:
- Yields stripped text
- Fixes path for Windows
- Adds license
- Adds more info in dataset card
Close bigscience-workshop/data_tooling#279 | true |
1,086,049,888 | https://api.github.com/repos/huggingface/datasets/issues/3470 | https://github.com/huggingface/datasets/pull/3470 | 3,470 | Fix rendering of docs | closed | 0 | 2021-12-21T17:17:01 | 2021-12-22T09:23:47 | 2021-12-22T09:23:47 | albertvillanova | [] | Minor fix in docs.
Currently, `ClassLabel` docstring rendering is not right. | true |
1,085,882,664 | https://api.github.com/repos/huggingface/datasets/issues/3469 | https://github.com/huggingface/datasets/pull/3469 | 3,469 | Fix METEOR missing NLTK's omw-1.4 | closed | 1 | 2021-12-21T14:19:11 | 2021-12-21T14:52:28 | 2021-12-21T14:49:28 | lhoestq | [] | NLTK 3.6.6 now requires `omw-1.4` to be downloaded for METEOR to work.
This should fix the CI on master | true |
1,085,871,301 | https://api.github.com/repos/huggingface/datasets/issues/3468 | https://github.com/huggingface/datasets/pull/3468 | 3,468 | Add COCO dataset | closed | 7 | 2021-12-21T14:07:50 | 2023-09-24T09:33:31 | 2022-10-03T09:36:08 | mariosasko | [
"dataset contribution"
] | This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection.
Some notes:
* the data exposed by TFDS is contained in the `... | true |
1,085,870,665 | https://api.github.com/repos/huggingface/datasets/issues/3467 | https://github.com/huggingface/datasets/pull/3467 | 3,467 | Push dataset infos.json to Hub | closed | 1 | 2021-12-21T14:07:13 | 2021-12-21T17:00:10 | 2021-12-21T17:00:09 | lhoestq | [] | When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394).
This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, that stores the feature types.
Other minor changes:
- renamed the `___` separator to `--`, since `--` is now disallowed in... | true |
1,085,722,837 | https://api.github.com/repos/huggingface/datasets/issues/3466 | https://github.com/huggingface/datasets/pull/3466 | 3,466 | Add CRASS dataset | closed | 2 | 2021-12-21T11:17:22 | 2022-10-03T09:37:06 | 2022-10-03T09:37:06 | apergo-ai | [
"dataset contribution"
] | Added crass dataset | true |
1,085,400,432 | https://api.github.com/repos/huggingface/datasets/issues/3465 | https://github.com/huggingface/datasets/issues/3465 | 3,465 | Unable to load 'cnn_dailymail' dataset | closed | 4 | 2021-12-21T03:32:21 | 2024-06-12T14:41:17 | 2022-02-17T14:13:57 | talha1503 | [
"bug",
"duplicate",
"dataset bug"
] | ## Describe the bug
I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True)
```
## Expe... | false |
1,085,399,097 | https://api.github.com/repos/huggingface/datasets/issues/3464 | https://github.com/huggingface/datasets/issues/3464 | 3,464 | struct.error: 'i' format requires -2147483648 <= number <= 2147483647 | open | 2 | 2021-12-21T03:29:01 | 2022-11-21T19:55:11 | null | koukoulala | [
"bug"
] | ## Describe the bug
A clear and concise description of what the bug is.
using the latest datasets (datasets-1.16.1-py3-none-any.whl);
I process my own multilingual dataset with the following code, and the number of rows across all datasets is 306,000, with the max_length of each sentence being 256:
... | false |
... | ... | ... | ... | ... | closed | 3 | 2021-12-20T16:50:49 | 2023-09-25T10:28:30 | 2023-09-25T09:20:28 | lhoestq | [] | Following https://github.com/huggingface/datasets/pull/3456#event-5792250497 it looks like `datasets` can silently convert lists to strings using `str()`, instead of raising an error.
This PR fixes this and should fix the issue with WER showing low values if the input format is not right. | true |
1,084,969,672 | https://api.github.com/repos/huggingface/datasets/issues/3459 | https://github.com/huggingface/datasets/issues/3459 | 3,459 | dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected. | closed | 2 | 2021-12-20T16:16:49 | 2021-12-20T16:34:57 | 2021-12-20T16:34:57 | mmajurski | [
"bug"
] | ## Describe the bug
When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset.
The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is.
However, if you then use a... | false |
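A minimal sketch of the interaction #3459 reports (since fixed), on a toy dataset:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
subset = ds.select([0, 2, 4, 6, 8])           # records an indices mapping
kept = subset.filter(lambda ex: ex["x"] > 3)  # the bug overwrote that mapping
print(kept["x"])                              # expected: [4, 6, 8]
```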
1,084,926,025 | https://api.github.com/repos/huggingface/datasets/issues/3458 | https://github.com/huggingface/datasets/pull/3458 | 3,458 | Fix duplicated tag in wikicorpus dataset card | closed | 1 | 2021-12-20T15:34:16 | 2021-12-20T16:03:25 | 2021-12-20T16:03:24 | lhoestq | [] | null | true |
1,084,862,121 | https://api.github.com/repos/huggingface/datasets/issues/3457 | https://github.com/huggingface/datasets/issues/3457 | 3,457 | Add CMU Graphics Lab Motion Capture dataset | open | 3 | 2021-12-20T14:34:39 | 2022-03-16T16:53:09 | null | osanseviero | [
"dataset request",
"vision"
] | ## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a new dataset can be found [here](https://github.c... | false |
1,084,687,973 | https://api.github.com/repos/huggingface/datasets/issues/3456 | https://github.com/huggingface/datasets/pull/3456 | 3,456 | [WER] Better error message for wer | closed | 4 | 2021-12-20T11:38:40 | 2021-12-20T16:53:37 | 2021-12-20T16:53:36 | patrickvonplaten | [] | Currently we have the following problem when using the WER. When the input format to the WER metric is wrong, instead of throwing an error message a word-error-rate is computed which is incorrect. E.g. when doing the following:
```python
from datasets import load_metric
wer = load_metric("wer")
target_str ... | true |
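The snippet in #3456 is truncated; a hedged reconstruction of the correct usage the improved error message steers users toward (predictions and references as lists of strings):

```python
from datasets import load_metric

wer = load_metric("wer")
# Wrongly formatted inputs used to be silently coerced via str(), yielding
# a misleading score; lists of strings are the intended input format.
score = wer.compute(predictions=["hello world"], references=["hello duck"])
print(score)
```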
1,084,599,650 | https://api.github.com/repos/huggingface/datasets/issues/3455 | https://github.com/huggingface/datasets/issues/3455 | 3,455 | Easier information editing | closed | 2 | 2021-12-20T10:10:43 | 2023-07-25T15:36:14 | 2023-07-25T15:36:14 | borgr | [
"enhancement",
"generic discussion"
] | **Is your feature request related to a problem? Please describe.**
It requires a lot of effort to improve a datasheet.
**Describe the solution you'd like**
A UI, or at least a link to the place where the code that needs to be edited lives (and an easy way to edit this code directly from the site, without cloning, branc... | false |
1,084,519,107 | https://api.github.com/repos/huggingface/datasets/issues/3454 | https://github.com/huggingface/datasets/pull/3454 | 3,454 | Fix iter_archive generator | closed | 0 | 2021-12-20T08:50:15 | 2021-12-20T10:05:00 | 2021-12-20T10:04:59 | albertvillanova | [] | This PR:
- Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs
- Fixes bugs in `iter_archive` introduced in:
- #3443
Fix #3453. | true |
1,084,515,911 | https://api.github.com/repos/huggingface/datasets/issues/3453 | https://github.com/huggingface/datasets/issues/3453 | 3,453 | ValueError while iter_archive | closed | 0 | 2021-12-20T08:46:18 | 2021-12-20T10:04:59 | 2021-12-20T10:04:59 | albertvillanova | [
"bug"
] | ## Describe the bug
After the merge of:
- #3443
the method `iter_archive` throws a ValueError:
```
ValueError: read of closed file
```
## Steps to reproduce the bug
```python
for path, file in dl_manager.iter_archive(archive_path):
pass
```
| false |
1,083,803,178 | https://api.github.com/repos/huggingface/datasets/issues/3452 | https://github.com/huggingface/datasets/issues/3452 | 3,452 | why the stratify option is omitted from test_train_split function? | closed | 4 | 2021-12-18T10:37:47 | 2022-05-25T20:43:51 | 2022-05-25T20:43:51 | j-sieger | [
"enhancement",
"good second issue"
Why is the stratify option omitted from the test_train_split function?
Is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider when splitting a dataset. | false |
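For reference, stratified splitting later landed in `train_test_split` as `stratify_by_column`, which requires a `ClassLabel` column; a small sketch assuming a recent `datasets` version:

```python
from datasets import Dataset, ClassLabel

ds = Dataset.from_dict({"text": ["a", "b"] * 50, "label": [0, 1] * 50})
ds = ds.cast_column("label", ClassLabel(names=["neg", "pos"]))
# Preserves the label distribution across the train and test splits.
split = ds.train_test_split(test_size=0.2, stratify_by_column="label", seed=0)
print(split["test"]["label"])
```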
1,083,459,137 | https://api.github.com/repos/huggingface/datasets/issues/3451 | https://github.com/huggingface/datasets/pull/3451 | 3,451 | [Staging] Update dataset repos automatically on the Hub | closed | 2 | 2021-12-17T17:12:11 | 2021-12-21T10:25:46 | 2021-12-20T14:09:51 | lhoestq | [] | Let's have a script that updates the dataset repositories on staging for now. This way we can make sure it works fine before going in prod.
Related to https://github.com/huggingface/datasets/issues/3341
The script runs on each commit on `master`. It checks the datasets that were changed, and it pushes the changes... | true |
1,083,450,158 | https://api.github.com/repos/huggingface/datasets/issues/3450 | https://github.com/huggingface/datasets/issues/3450 | 3,450 | Unexpected behavior doing Split + Filter | closed | 1 | 2021-12-17T17:00:39 | 2023-07-25T15:38:47 | 2023-07-25T15:38:47 | jbrachat | [
"bug"
] | ## Describe the bug
I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on a dataset: elements of the training dataset eventually end up in the test dataset (after applying the 'filter').
## Steps to reproduce the bug
```
from datasets import Dataset
import pandas as pd
dic = {'x'... | false |
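The repro in #3450 is cut off; a hedged reconstruction assuming the dictionary held a single integer column:

```python
from datasets import Dataset
import pandas as pd

dic = {"x": [1] * 1000 + [2] * 1000}  # hypothetical values
ds = Dataset.from_pandas(pd.DataFrame(dic))
split = ds.train_test_split(test_size=0.5, seed=42)
filtered_test = split["test"].filter(lambda ex: ex["x"] == 1)
# The report: rows from the train split showed up here after filtering.
```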
1,083,373,018 | https://api.github.com/repos/huggingface/datasets/issues/3449 | https://github.com/huggingface/datasets/issues/3449 | 3,449 | Add `__add__()`, `__iadd__()` and similar to `Dataset` class | closed | 2 | 2021-12-17T15:29:11 | 2024-02-29T16:47:56 | 2023-07-25T15:33:56 | sgraaf | [
"enhancement",
"generic discussion"
] | **Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
I would like to be able to concatenate datasets as follows:
```python
>>> dataset["train"] += dataset["validation"]
```
... instead of using `concatenate_datasets()`:
```python
>>> raw_datasets["trai... | false |
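For contrast, today's spelling of the request in #3449 uses `concatenate_datasets()`:

```python
from datasets import load_dataset, concatenate_datasets

dataset = load_dataset("glue", "mrpc")
# Equivalent of the requested `dataset["train"] += dataset["validation"]`:
dataset["train"] = concatenate_datasets([dataset["train"], dataset["validation"]])
```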
1,083,231,080 | https://api.github.com/repos/huggingface/datasets/issues/3448 | https://github.com/huggingface/datasets/issues/3448 | 3,448 | JSONDecodeError with HuggingFace dataset viewer | closed | 3 | 2021-12-17T12:52:41 | 2022-02-24T09:10:26 | 2022-02-24T09:10:26 | kathrynchapman | [
"dataset-viewer"
] | ## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
I have checked all files - I am not u... | false |
1,082,539,790 | https://api.github.com/repos/huggingface/datasets/issues/3447 | https://github.com/huggingface/datasets/issues/3447 | 3,447 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading | closed | 3 | 2021-12-16T18:51:13 | 2022-02-17T14:16:27 | 2022-02-17T14:16:27 | dunalduck0 | [
"bug"
] | ## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON... | false |
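One pitfall worth noting for #3447: the flag is read when `datasets` is imported, so it must be set beforehand. A hedged sketch (the local file name is hypothetical):

```python
import os

# Must be set before importing datasets; the flag is read at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("json", data_files="my_data.json")
```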
1,082,414,229 | https://api.github.com/repos/huggingface/datasets/issues/3446 | https://github.com/huggingface/datasets/pull/3446 | 3,446 | Remove redundant local path information in audio/image datasets | closed | 3 | 2021-12-16T16:35:15 | 2023-09-24T10:09:30 | 2023-09-24T10:09:27 | mariosasko | [
"dataset contribution"
] | Remove the redundant path information in the audio/image dataset as discussed in https://github.com/huggingface/datasets/pull/3430#issuecomment-994734828
TODOs:
* [ ] merge https://github.com/huggingface/datasets/pull/3430
* [ ] merge https://github.com/huggingface/datasets/pull/3364
* [ ] re-generate the info fi... | true |
1,082,370,968 | https://api.github.com/repos/huggingface/datasets/issues/3445 | https://github.com/huggingface/datasets/issues/3445 | 3,445 | question | closed | 1 | 2021-12-16T15:57:00 | 2022-01-03T10:09:00 | 2022-01-03T10:09:00 | BAKAYOKO0232 | [
"dataset-viewer"
] | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset? Yes-No
| false |
1,082,078,961 | https://api.github.com/repos/huggingface/datasets/issues/3444 | https://github.com/huggingface/datasets/issues/3444 | 3,444 | Align the Dataset and IterableDataset processing API | open | 11 | 2021-12-16T11:26:11 | 2025-01-31T11:07:07 | null | lhoestq | [
"enhancement",
"generic discussion"
] | ## Intro
items marked like <s>this</s> are done already :)
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
<s>with_indices</s>, with_rank, <s>input_columns</s>,... | false |
1,082,052,833 | https://api.github.com/repos/huggingface/datasets/issues/3443 | https://github.com/huggingface/datasets/pull/3443 | 3,443 | Extend iter_archive to support file object input | closed | 0 | 2021-12-16T10:59:14 | 2021-12-17T17:53:03 | 2021-12-17T17:53:02 | albertvillanova | [] | This PR adds support to passing a file object to `[Streaming]DownloadManager.iter_archive`.
With this feature, we can iterate over a tar file inside another tar file. | true |
1,081,862,747 | https://api.github.com/repos/huggingface/datasets/issues/3442 | https://github.com/huggingface/datasets/pull/3442 | 3,442 | Extend text to support yielding lines, paragraphs or documents | closed | 5 | 2021-12-16T07:33:17 | 2021-12-20T16:59:10 | 2021-12-20T16:39:18 | albertvillanova | [] | Add `config.row` option to `text` module to allow yielding lines (default, current case), paragraphs or documents.
Feel free to comment on the name of the config parameter `row`:
- Currently, the docs state datasets are made of rows and columns
- Other names I considered: `example`, `item` | true |
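A hedged sketch of the option #3442 proposes; the parameter name was still under discussion at the time, and in later releases of the `text` builder it shipped as `sample_by` (the data file below is hypothetical):

```python
from datasets import load_dataset

# Yield one example per paragraph instead of one per line.
ds = load_dataset(
    "text",
    data_files="corpus.txt",
    split="train",
    sample_by="paragraph",  # "line" (the default), "paragraph", or "document"
)
```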
1,081,571,784 | https://api.github.com/repos/huggingface/datasets/issues/3441 | https://github.com/huggingface/datasets/issues/3441 | 3,441 | Add QuALITY dataset | open | 1 | 2021-12-15T22:26:19 | 2021-12-28T15:17:05 | null | lewtun | [
"dataset request"
] | ## Adding a Dataset
- **Name:** QuALITY
- **Description:** A challenging question answering with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20))
- **Paper:** No ArXiv link yet, but draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.... | false |
1,081,528,426 | https://api.github.com/repos/huggingface/datasets/issues/3440 | https://github.com/huggingface/datasets/issues/3440 | 3,440 | datasets keeps reading from cached files, although I disabled it | closed | 1 | 2021-12-15T21:26:22 | 2022-02-24T09:12:22 | 2022-02-24T09:12:22 | dorost1234 | [
"bug"
] | ## Describe the bug
Hi,
I am trying to prevent the datasets library from using cached files, but I get the following bug when it tries to read the cached files. I tried the following:
```
from datasets import set_caching_enabled
set_caching_enabled(False)
```
I also forced redownload:
```
download_mode='force_redownloa... | false |
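For reference on #3440: `set_caching_enabled(False)` only disables caching of transform results, while re-downloading the raw files is controlled separately via `download_mode`. A small sketch combining both, as the report attempts:

```python
from datasets import set_caching_enabled, load_dataset

set_caching_enabled(False)  # stops map/filter results from being cached
ds = load_dataset(
    "squad",
    split="train",
    download_mode="force_redownload",  # re-fetches the raw files
)
```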
1,081,389,723 | https://api.github.com/repos/huggingface/datasets/issues/3439 | https://github.com/huggingface/datasets/pull/3439 | 3,439 | Add `cast_column` to `IterableDataset` | closed | 1 | 2021-12-15T19:00:45 | 2021-12-16T15:55:20 | 2021-12-16T15:55:19 | mariosasko | [] | Closes #3369.
cc: @patrickvonplaten | true |
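What #3439 enables; a sketch of the motivating audio use case, assuming a version that includes the change:

```python
from datasets import load_dataset, Audio

# Previously only Dataset supported cast_column; now the streamed variant does too.
ds = load_dataset("common_voice", "ab", split="train", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```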
1,081,302,203 | https://api.github.com/repos/huggingface/datasets/issues/3438 | https://github.com/huggingface/datasets/pull/3438 | 3,438 | Update supported versions of Python in setup.py | closed | 0 | 2021-12-15T17:30:12 | 2021-12-20T14:22:13 | 2021-12-20T14:22:12 | mariosasko | [] | Update the list of supported versions of Python in `setup.py` to keep the PyPI project description updated. | true |
1,081,247,889 | https://api.github.com/repos/huggingface/datasets/issues/3437 | https://github.com/huggingface/datasets/pull/3437 | 3,437 | Update BLEURT hyperlink | closed | 2 | 2021-12-15T16:34:47 | 2021-12-17T13:28:26 | 2021-12-17T13:28:25 | lewtun | [] | The description of BLEURT on the hf.co website has a strange use of URL hyperlinking. This PR attempts to fix this, although I am not 100% sure Markdown syntax is allowed on the frontend or not.
 scheme. | true |
1,081,043,756 | https://api.github.com/repos/huggingface/datasets/issues/3435 | https://github.com/huggingface/datasets/pull/3435 | 3,435 | Improve Wikipedia Loading Script | closed | 9 | 2021-12-15T13:30:06 | 2022-03-04T08:16:00 | 2022-03-04T08:16:00 | geohci | [] | * More structured approach to detecting redirects
* Remove redundant template filter code (covered by strip_code)
* Add language-specific lists of additional media namespace aliases for filtering
* Add language-specific lists of category namespace aliases for new link text cleaning step
* Remove magic words (parser... | true |
1,080,917,446 | https://api.github.com/repos/huggingface/datasets/issues/3434 | https://github.com/huggingface/datasets/issues/3434 | 3,434 | Add The People's Speech | closed | 1 | 2021-12-15T11:21:21 | 2023-02-28T16:22:29 | 2023-02-28T16:22:28 | mariosasko | [
"dataset request",
"speech"
] | ## Adding a Dataset
- **Name:** The People's Speech
- **Description:** a massive English-language dataset of audio transcriptions of full sentences.
- **Paper:** https://openreview.net/pdf?id=R8CwidgJ0yT
- **Data:** https://mlcommons.org/en/peoples-speech/
- **Motivation:** With over 30,000 hours of speech, this ... | false |
1,080,910,724 | https://api.github.com/repos/huggingface/datasets/issues/3433 | https://github.com/huggingface/datasets/issues/3433 | 3,433 | Add Multilingual Spoken Words dataset | closed | 0 | 2021-12-15T11:14:44 | 2022-02-22T10:03:53 | 2022-02-22T10:03:53 | albertvillanova | [
"dataset request",
"speech"
] | ## Adding a Dataset
- **Name:** Multilingual Spoken Words
- **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contain... | false |
1,079,910,769 | https://api.github.com/repos/huggingface/datasets/issues/3432 | https://github.com/huggingface/datasets/pull/3432 | 3,432 | Correctly indent builder config in dataset script docs | closed | 0 | 2021-12-14T15:39:47 | 2021-12-14T17:35:17 | 2021-12-14T17:35:17 | mariosasko | [] | null | true |
1,079,866,083 | https://api.github.com/repos/huggingface/datasets/issues/3431 | https://github.com/huggingface/datasets/issues/3431 | 3,431 | Unable to resolve any data file after loading once | closed | 2 | 2021-12-14T15:02:15 | 2022-12-11T10:53:04 | 2022-02-24T09:13:52 | LzyFischer | [] | when I rerun my program, it occurs this error
" Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']", so how could i deal with this problem?
thx.
And below is my code .
... | false |
1,079,811,124 | https://api.github.com/repos/huggingface/datasets/issues/3430 | https://github.com/huggingface/datasets/pull/3430 | 3,430 | Make decoding of Audio and Image feature optional | closed | 7 | 2021-12-14T14:15:08 | 2022-01-25T18:57:52 | 2022-01-25T18:57:52 | mariosasko | [] | Add the `decode` argument (`True` by default) to the `Audio` and the `Image` feature to make it possible to toggle on/off decoding of these features.
Even though we've discussed that on Slack, I'm not removing the `_storage_dtype` argument of the Audio feature in this PR to avoid breaking the Audio feature tests. | true |
1,078,902,390 | https://api.github.com/repos/huggingface/datasets/issues/3429 | https://github.com/huggingface/datasets/pull/3429 | 3,429 | Make cast cacheable (again) on Windows | closed | 0 | 2021-12-13T19:32:02 | 2021-12-14T14:39:51 | 2021-12-14T14:39:50 | mariosasko | [] | `cast` currently emits the following warning when called on Windows:
```
Parameter 'function'=<function Dataset.cast.<locals>.<lambda> at 0x000001C930571EA0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameter... | true |
1,078,863,468 | https://api.github.com/repos/huggingface/datasets/issues/3428 | https://github.com/huggingface/datasets/pull/3428 | 3,428 | Clean squad dummy data | closed | 0 | 2021-12-13T18:46:29 | 2021-12-13T18:57:50 | 2021-12-13T18:57:50 | lhoestq | [] | Some unused files were remaining, this PR removes them. We just need to keep the dummy_data.zip file | true |
1,078,782,159 | https://api.github.com/repos/huggingface/datasets/issues/3427 | https://github.com/huggingface/datasets/pull/3427 | 3,427 | Add The Pile Enron Emails subset | closed | 0 | 2021-12-13T17:14:16 | 2021-12-14T17:30:59 | 2021-12-14T17:30:57 | albertvillanova | [] | Add:
- Enron Emails subset of The Pile: "enron_emails" config
Close bigscience-workshop/data_tooling#310.
CC: @StellaAthena | true |
1,078,670,031 | https://api.github.com/repos/huggingface/datasets/issues/3426 | https://github.com/huggingface/datasets/pull/3426 | 3,426 | Update disaster_response_messages download urls (+ add validation split) | closed | 0 | 2021-12-13T15:30:12 | 2021-12-14T14:38:30 | 2021-12-14T14:38:29 | mariosasko | [] | Fixes #3240, fixes #3416 | true |
1,078,598,140 | https://api.github.com/repos/huggingface/datasets/issues/3425 | https://github.com/huggingface/datasets/issues/3425 | 3,425 | Getting configs names takes too long | open | 3 | 2021-12-13T14:27:57 | 2021-12-13T14:53:33 | null | severo | [
"bug"
] |
## Steps to reproduce the bug
```python
from datasets import get_dataset_config_names
get_dataset_config_names("allenai/c4")
```
## Expected results
I would expect to get the answer quickly, at least in less than 10s
## Actual results
It takes about 45s on my environment
## Environment info
- `d... | false |
1,078,543,625 | https://api.github.com/repos/huggingface/datasets/issues/3424 | https://github.com/huggingface/datasets/pull/3424 | 3,424 | Add RedCaps dataset | closed | 2 | 2021-12-13T13:38:13 | 2022-01-12T14:13:16 | 2022-01-12T14:13:15 | mariosasko | [] | Add the RedCaps dataset. I'm not adding the generated `dataset_infos.json` file for now due to its size (11 MB).
TODOs:
- [x] dummy data
- [x] dataset card
Close #3316 | true |
1,078,049,638 | https://api.github.com/repos/huggingface/datasets/issues/3423 | https://github.com/huggingface/datasets/issues/3423 | 3,423 | data duplicate when setting num_works > 1 with streaming data | closed | 14 | 2021-12-13T03:43:17 | 2022-12-14T16:04:22 | 2022-12-14T16:04:22 | cloudyuyuyu | [
"bug",
"streaming"
] | ## Describe the bug
The data is repeated `num_works` times when we call `load_dataset` with streaming and set `num_works > 1` when constructing the dataloader.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy as np
import os
from datasets import load_dataset
from tor... | false |
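The repro in #3423 is truncated; a hedged sketch of the setup it describes, assuming a `datasets` version whose streamed datasets plug into a torch `DataLoader` (the data file is hypothetical):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("json", data_files="data.json", split="train", streaming=True)
# With num_workers > 1, each worker replayed the full stream, so every
# sample came back once per worker.
loader = DataLoader(ds.with_format("torch"), num_workers=2, batch_size=8)
```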
1,078,022,619 | https://api.github.com/repos/huggingface/datasets/issues/3422 | https://github.com/huggingface/datasets/issues/3422 | 3,422 | Error about load_metric | closed | 1 | 2021-12-13T02:49:51 | 2022-01-07T14:06:47 | 2022-01-07T14:06:47 | jiacheng-ye | [
"bug"
] | ## Describe the bug
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
metric = load_metric("glue", "sst2")
```
## Environment info
- `datasets` version: ... | false |
1,077,966,571 | https://api.github.com/repos/huggingface/datasets/issues/3421 | https://github.com/huggingface/datasets/pull/3421 | 3,421 | Adding mMARCO dataset | closed | 7 | 2021-12-13T00:56:43 | 2022-10-03T09:37:15 | 2022-10-03T09:37:15 | lhbonifacio | [
"dataset contribution"
] | Adding mMARCO (v1.1) to HF datasets. | true |
1,077,913,468 | https://api.github.com/repos/huggingface/datasets/issues/3420 | https://github.com/huggingface/datasets/pull/3420 | 3,420 | Add eli5_category dataset | closed | 1 | 2021-12-12T21:30:45 | 2021-12-14T17:53:03 | 2021-12-14T17:53:02 | jingshenSN2 | [] | This pull request adds a categorized Long-form question answering dataset `ELI5_Category`. It's a new variant of the [ELI5](https://huggingface.co/datasets/eli5) dataset that uses the Reddit tags to alleviate the training/validation overlapping in the origin ELI5 dataset.
A [report](https://celeritasml.netlify.app/p... | true |
1,077,350,974 | https://api.github.com/repos/huggingface/datasets/issues/3419 | https://github.com/huggingface/datasets/issues/3419 | 3,419 | `.to_json` is extremely slow after `.select` | open | 6 | 2021-12-11T01:36:31 | 2021-12-21T15:49:07 | null | eladsegal | [
"bug"
] | ## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
se... | false |
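A hedged completion of the truncated repro in #3419, plus the usual workaround of materializing the indices mapping before exporting:

```python
from datasets import load_dataset

original = load_dataset("squad", split="train")
selected = original.select(range(len(original)))  # same rows, via an indices mapping
selected.to_json("from_select.json")              # reported to be much slower

# Workaround: flatten the mapping first, then export.
selected.flatten_indices().to_json("from_select_flat.json")
```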
1,077,053,296 | https://api.github.com/repos/huggingface/datasets/issues/3418 | https://github.com/huggingface/datasets/pull/3418 | 3,418 | Add Wikisource dataset | closed | 1 | 2021-12-10T17:04:44 | 2022-10-04T09:35:56 | 2022-10-03T09:37:20 | albertvillanova | [
"dataset contribution"
] | Add loading script for Wikisource dataset.
Fix #3399.
CC: @geohci, @yjernite | true |
1,076,943,343 | https://api.github.com/repos/huggingface/datasets/issues/3417 | https://github.com/huggingface/datasets/pull/3417 | 3,417 | Fix type of bridge field in QED | closed | 0 | 2021-12-10T15:07:21 | 2021-12-14T14:39:06 | 2021-12-14T14:39:05 | mariosasko | [] | Use `Value("string")` instead of `Value("bool")` for the feature type of the `"bridge"` field in the QED dataset. If the value is `False`, set to `None`.
The following paragraph in the QED repo explains the purpose of this field:
>Each annotation in referential_equalities is a pair of spans, the question_reference ... | true |
1,076,868,771 | https://api.github.com/repos/huggingface/datasets/issues/3416 | https://github.com/huggingface/datasets/issues/3416 | 3,416 | disaster_response_messages unavailable | closed | 1 | 2021-12-10T13:49:17 | 2021-12-14T14:38:29 | 2021-12-14T14:38:29 | sacdallago | [
"dataset-viewer"
] | ## Dataset viewer issue for '* disaster_response_messages*'
**Link:** https://huggingface.co/datasets/disaster_response_messages
Dataset unavailable. Link dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv
Am I the one who added this dataset? No
| false |
1,076,472,534 | https://api.github.com/repos/huggingface/datasets/issues/3415 | https://github.com/huggingface/datasets/issues/3415 | 3,415 | Non-deterministic tests: CI tests randomly fail | closed | 2 | 2021-12-10T06:08:59 | 2022-03-31T16:38:51 | 2022-03-31T16:38:51 | albertvillanova | [
"bug"
] | ## Describe the bug
Some CI tests fail randomly.
1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux:
```
=========================== short test summary info ============================
FAILED tests/test_str... | false |
1,076,028,998 | https://api.github.com/repos/huggingface/datasets/issues/3414 | https://github.com/huggingface/datasets/pull/3414 | 3,414 | Skip None encoding (line deleted by accident in #3195) | closed | 0 | 2021-12-09T21:17:33 | 2021-12-10T11:00:03 | 2021-12-10T11:00:02 | mariosasko | [] | Return the line deleted by accident in #3195 while [resolving merge conflicts](https://github.com/huggingface/datasets/pull/3195/commits/8b0ed15be08559056b817836a07d47acda0c4510).
Fix #3181 (finally :))
| true |
1,075,854,325 | https://api.github.com/repos/huggingface/datasets/issues/3413 | https://github.com/huggingface/datasets/pull/3413 | 3,413 | Add WIDER FACE dataset | closed | 0 | 2021-12-09T18:03:38 | 2022-01-12T14:13:47 | 2022-01-12T14:13:47 | mariosasko | [] | Adds the WIDER FACE face detection benchmark.
TODOs:
* [x] dataset card
* [x] dummy data | true |
1,075,846,368 | https://api.github.com/repos/huggingface/datasets/issues/3412 | https://github.com/huggingface/datasets/pull/3412 | 3,412 | Fix flaky test again for s3 serialization | closed | 0 | 2021-12-09T17:54:41 | 2021-12-09T18:00:52 | 2021-12-09T18:00:52 | lhoestq | [] | Following https://github.com/huggingface/datasets/pull/3388 that wasn't enough (see CI error [here](https://app.circleci.com/pipelines/github/huggingface/datasets/9080/workflows/b971fb27-ff20-4220-9416-c19acdfdf6f4/jobs/55985)) | true |
1,075,846,272 | https://api.github.com/repos/huggingface/datasets/issues/3411 | https://github.com/huggingface/datasets/issues/3411 | 3,411 | [chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script | open | 2 | 2021-12-09T17:54:35 | 2021-12-22T11:21:33 | null | hyusterr | [
"bug"
] | ## Describe the bug
Model I am using (Bert, XLNet ...): bert-base-chinese
The problem arises when using:
* [https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py] the official example script: `run_mlm_wwm.py`
The tasks I am working on is: pretraining whole ... | false |
1,075,815,415 | https://api.github.com/repos/huggingface/datasets/issues/3410 | https://github.com/huggingface/datasets/pull/3410 | 3,410 | Fix dependencies conflicts in Windows CI after conda update to 4.11 | closed | 0 | 2021-12-09T17:19:11 | 2021-12-09T17:36:20 | 2021-12-09T17:36:19 | lhoestq | [] | For some reason the CI wasn't using python 3.6 but python 3.7 after the update to conda 4.11 | true |
1,075,684,593 | https://api.github.com/repos/huggingface/datasets/issues/3409 | https://github.com/huggingface/datasets/pull/3409 | 3,409 | Pass new_fingerprint in multiprocessing | closed | 2 | 2021-12-09T15:12:00 | 2022-08-19T10:41:04 | 2021-12-09T17:38:43 | lhoestq | [] | Following https://github.com/huggingface/datasets/pull/3045
Currently one can pass `new_fingerprint` to `.map()` to use a custom fingerprint instead of the one computed by hashing the map transform. However it's ignored if `num_proc>1`.
In this PR I fixed that by passing `new_fingerprint` to `._map_single()` when... | true |
1,075,642,915 | https://api.github.com/repos/huggingface/datasets/issues/3408 | https://github.com/huggingface/datasets/issues/3408 | 3,408 | Typo in Dataset viewer error message | closed | 1 | 2021-12-09T14:34:02 | 2021-12-22T11:02:53 | 2021-12-22T11:02:53 | lewtun | [
"dataset-viewer"
] | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource"
... | false |
1,074,502,225 | https://api.github.com/repos/huggingface/datasets/issues/3407 | https://github.com/huggingface/datasets/pull/3407 | 3,407 | Use max number of data files to infer module | closed | 1 | 2021-12-08T14:58:43 | 2021-12-14T17:08:42 | 2021-12-14T17:08:42 | albertvillanova | [] | When inferring the module for datasets without script, set a maximum number of iterations over data files.
This PR fixes the issue of module inference taking too long when hundreds of data files are present.
Please, feel free to agree on both numbers:
```
# Datasets without script
DATA_FILES_MAX_NUMBER = 10
ARCHIVED_DATA_FILES_MAX... | true |
1,074,366,050 | https://api.github.com/repos/huggingface/datasets/issues/3406 | https://github.com/huggingface/datasets/pull/3406 | 3,406 | Fix module inference for archive with a directory | closed | 0 | 2021-12-08T12:39:12 | 2021-12-08T13:03:30 | 2021-12-08T13:03:29 | albertvillanova | [] | Fix module inference for an archive file that contains files within a directory.
Fix #3405. | true |
1,074,360,362 | https://api.github.com/repos/huggingface/datasets/issues/3405 | https://github.com/huggingface/datasets/issues/3405 | 3,405 | ZIP format inference does not work when files located in a dir inside the archive | closed | 0 | 2021-12-08T12:32:15 | 2021-12-08T13:03:29 | 2021-12-08T13:03:29 | albertvillanova | [
"bug"
] | ## Describe the bug
When a zipped file contains archived files within a directory, the function `infer_module_for_data_files_in_archives` does not work.
It only works for files located in the root directory of the ZIP file.
## Steps to reproduce the bug
```python
infer_module_for_data_files_in_archives(["path/... | false |
1,073,657,561 | https://api.github.com/repos/huggingface/datasets/issues/3404 | https://github.com/huggingface/datasets/issues/3404 | 3,404 | Optimize ZIP format inference | closed | 0 | 2021-12-07T18:44:49 | 2021-12-14T17:08:41 | 2021-12-14T17:08:41 | albertvillanova | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
When hundreds of ZIP files are present in a dataset, format inference takes too long.
See: https://github.com/bigscience-workshop/data_tooling/issues/232#issuecomment-986685497
**Describe the solution you'd like**
Iterate over a maximum number o... | false |
1,073,622,120 | https://api.github.com/repos/huggingface/datasets/issues/3403 | https://github.com/huggingface/datasets/issues/3403 | 3,403 | Cannot import name 'maybe_sync' | closed | 4 | 2021-12-07T17:57:59 | 2021-12-17T07:00:35 | 2021-12-17T07:00:35 | KMFODA | [
"bug"
] | ## Describe the bug
I cannot seem to import datasets when running the run_summarizer.py script on a VM set up on OVHcloud.
## Steps to reproduce the bug
```python
from datasets import load_dataset
```
## Expected results
No error
## Actual results
Traceback (most recent call last):
File "<stdin>", line 1, in... | false |
1,073,614,815 | https://api.github.com/repos/huggingface/datasets/issues/3402 | https://github.com/huggingface/datasets/pull/3402 | 3,402 | More robust first elem check in encode/cast example | closed | 0 | 2021-12-07T17:48:16 | 2021-12-08T13:02:16 | 2021-12-08T13:02:15 | mariosasko | [] | Fix #3306 | true |