| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,336,040,168 | https://api.github.com/repos/huggingface/datasets/issues/4828 | https://github.com/huggingface/datasets/pull/4828 | 4,828 | Support PIL Image objects in `add_item`/`add_column` | open | 3 | 2022-08-11T14:25:45 | 2023-09-24T10:15:33 | null | mariosasko | [] | Fix #4796
PS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to also infer complex types (only `Image` currently) in nested arrays (e.g. `[[pil_image], [pil_image, pil_image]]` or `[{"img": pil_image}]`), but I plan to address this in a separate PR. | true |
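A minimal sketch of what this PR enables (toy data; before the fix, appending a raw PIL image this way raised `ArrowInvalid`, see the linked issue):

```python
# Hedged sketch: append a raw PIL image to a dataset that declares an Image feature.
from datasets import Dataset, Features, Image
from PIL import Image as PILImage

features = Features({"img": Image()})
ds = Dataset.from_dict({"img": []}, features=features)
ds = ds.add_item({"img": PILImage.new("RGB", (16, 16), color="red")})
print(ds[0]["img"])  # decoded back into a PIL.Image.Image
```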
1,335,994,312 | https://api.github.com/repos/huggingface/datasets/issues/4827 | https://github.com/huggingface/datasets/pull/4827 | 4,827 | Add license metadata to pg19 | closed | 1 | 2022-08-11T13:52:20 | 2022-08-11T15:01:03 | 2022-08-11T14:46:38 | julien-c | [] | As reported over email by Roy Rijkers | true |
1,335,987,583 | https://api.github.com/repos/huggingface/datasets/issues/4826 | https://github.com/huggingface/datasets/pull/4826 | 4,826 | Fix language tags in dataset cards | closed | 2 | 2022-08-11T13:47:14 | 2022-08-11T14:17:48 | 2022-08-11T14:03:12 | albertvillanova | [] | Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource). | true |
1,335,856,882 | https://api.github.com/repos/huggingface/datasets/issues/4825 | https://github.com/huggingface/datasets/pull/4825 | 4,825 | [Windows] Fix Access Denied when using os.rename() | closed | 6 | 2022-08-11T11:57:15 | 2022-08-24T13:09:07 | 2022-08-24T13:09:07 | DougTrajano | [] | In this PR, we are including an additional step when `os.rename()` raises a PermissionError.
Basically, we will use `shutil.move()` on the temp files.
Fix #2937 | true |
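A sketch of the described fallback (illustrative, not the exact patch):

```python
import os
import shutil

def rename_with_fallback(src: str, dst: str) -> None:
    try:
        os.rename(src, dst)
    except PermissionError:
        # On Windows, os.rename() can fail with "Access Denied" while a handle
        # to the file is still open; shutil.move() copies then deletes instead.
        shutil.move(src, dst)
```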
1,335,826,639 | https://api.github.com/repos/huggingface/datasets/issues/4824 | https://github.com/huggingface/datasets/pull/4824 | 4,824 | Fix titles in dataset cards | closed | 2 | 2022-08-11T11:27:48 | 2022-08-11T13:46:11 | 2022-08-11T12:56:49 | albertvillanova | [] | Fix all the titles in the dataset cards, so that they conform to the required format. | true |
1,335,687,033 | https://api.github.com/repos/huggingface/datasets/issues/4823 | https://github.com/huggingface/datasets/pull/4823 | 4,823 | Update data URL in mkqa dataset | closed | 1 | 2022-08-11T09:16:13 | 2022-08-11T09:51:50 | 2022-08-11T09:37:52 | albertvillanova | [] | Update data URL in mkqa dataset.
Fix #4817. | true |
1,335,664,588 | https://api.github.com/repos/huggingface/datasets/issues/4821 | https://github.com/huggingface/datasets/pull/4821 | 4,821 | Fix train_test_split docs | closed | 1 | 2022-08-11T08:55:45 | 2022-08-11T09:59:29 | 2022-08-11T09:45:40 | NielsRogge | [] | I saw that `stratify` is added to the `train_test_split` method as per #4322, hence the docs can be updated. | true |
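The usage the updated docs cover might look like this (dataset and column names are illustrative; the stratified column must be a `ClassLabel`):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
splits = ds.train_test_split(test_size=0.2, stratify_by_column="label", seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```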
1,335,117,132 | https://api.github.com/repos/huggingface/datasets/issues/4820 | https://github.com/huggingface/datasets/issues/4820 | 4,820 | Terminating: fork() called from a process already using GNU OpenMP, this is unsafe. | closed | 1 | 2022-08-10T19:42:33 | 2022-08-10T19:53:10 | 2022-08-10T19:53:10 | talhaanwarch | [
"bug"
] | Hi, when I try to run the prepare_dataset function in [fine tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb), I get this error:
Terminating: fork() called from a process already using GNU OpenMP, this is un... | false |
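A workaround commonly suggested for this class of error (an assumption here, not taken from the thread) is to avoid `fork`-based worker processes:

```python
# Assumption: the "spawn" start method avoids forking a process that has
# already initialized GNU OpenMP (e.g. via torch or numpy), which is what
# triggers the abort.
import multiprocessing

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn", force=True)
```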
1,335,064,449 | https://api.github.com/repos/huggingface/datasets/issues/4819 | https://github.com/huggingface/datasets/pull/4819 | 4,819 | Add missing language tags to resources | closed | 1 | 2022-08-10T19:06:42 | 2022-08-10T19:45:49 | 2022-08-10T19:32:15 | albertvillanova | [] | Add missing language tags to resources, required by existing datasets on GitHub. | true |
1,334,941,810 | https://api.github.com/repos/huggingface/datasets/issues/4818 | https://github.com/huggingface/datasets/pull/4818 | 4,818 | Add cc-by-sa-2.5 license tag | closed | 2 | 2022-08-10T17:18:39 | 2022-10-04T13:47:24 | 2022-10-04T13:47:24 | polinaeterna | [] | - [ ] add it to moon-landing
- [ ] add it to hub-docs | true |
1,334,572,163 | https://api.github.com/repos/huggingface/datasets/issues/4817 | https://github.com/huggingface/datasets/issues/4817 | 4,817 | Outdated Link for mkqa Dataset | closed | 1 | 2022-08-10T12:45:45 | 2022-08-11T09:37:52 | 2022-08-11T09:37:52 | liaeh | [
"bug"
] | ## Describe the bug
The URL used to download the mkqa dataset is outdated. It seems the URL to download the dataset is currently https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz instead of https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (master branch has been renamed to main).
## ... | false |
1,334,099,454 | https://api.github.com/repos/huggingface/datasets/issues/4816 | https://github.com/huggingface/datasets/pull/4816 | 4,816 | Update version of opus_paracrawl dataset | closed | 1 | 2022-08-10T05:39:44 | 2022-08-12T14:32:29 | 2022-08-12T14:17:56 | albertvillanova | [] | This PR updates OPUS ParaCrawl from 7.1 to 9 version.
Fix #4815. | true |
1,334,078,303 | https://api.github.com/repos/huggingface/datasets/issues/4815 | https://github.com/huggingface/datasets/issues/4815 | 4,815 | Outdated loading script for OPUS ParaCrawl dataset | closed | 0 | 2022-08-10T05:12:34 | 2022-08-12T14:17:57 | 2022-08-12T14:17:57 | albertvillanova | [
"dataset bug"
] | ## Describe the bug
Our loading script for OPUS ParaCrawl loads version 7.1, while the currently available version is 9.
| false |
1,333,356,230 | https://api.github.com/repos/huggingface/datasets/issues/4814 | https://github.com/huggingface/datasets/issues/4814 | 4,814 | Support CSV as metadata file format in AudioFolder/ImageFolder | closed | 0 | 2022-08-09T14:36:49 | 2022-08-31T11:59:08 | 2022-08-31T11:59:08 | mariosasko | [
"enhancement"
] | Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets. | false |
1,333,287,756 | https://api.github.com/repos/huggingface/datasets/issues/4813 | https://github.com/huggingface/datasets/pull/4813 | 4,813 | Fix loading example in opus dataset cards | closed | 1 | 2022-08-09T13:47:38 | 2022-08-09T17:52:15 | 2022-08-09T17:38:18 | albertvillanova | [] | This PR:
- fixes the examples to load the datasets, with the corrected dataset name, in their dataset cards for:
- opus_dgt
- opus_paracrawl
- opus_wikipedia
- fixes their dataset cards with the missing required information: title, data instances/fields/splits
- enumerates the supported languages
- adds a ... | true |
1,333,051,730 | https://api.github.com/repos/huggingface/datasets/issues/4812 | https://github.com/huggingface/datasets/pull/4812 | 4,812 | Fix bug in function validate_type for Python >= 3.9 | closed | 1 | 2022-08-09T10:32:42 | 2022-08-12T13:41:23 | 2022-08-12T13:27:04 | albertvillanova | [] | Fix `validate_type` function, so that it uses `get_origin` instead. This makes the function forward compatible.
This fixes #4811 because:
```python
In [4]: typing.Optional[str]
Out[4]: typing.Optional[str]
In [5]: get_origin(typing.Optional[str])
Out[5]: typing.Union
```
Fix #4811. | true |
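In other words, `get_origin` normalizes both spellings, so the check also works on Python 3.9; a quick self-contained illustration:

```python
# typing.Optional[str] is Union[str, None] under the hood on every version,
# and get_origin() exposes that consistently.
import typing
from typing import get_origin

assert typing.Optional[str] == typing.Union[str, None]
assert get_origin(typing.Optional[str]) is typing.Union
```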
1,333,043,421 | https://api.github.com/repos/huggingface/datasets/issues/4811 | https://github.com/huggingface/datasets/issues/4811 | 4,811 | Bug in function validate_type for Python >= 3.9 | closed | 0 | 2022-08-09T10:25:21 | 2022-08-12T13:27:05 | 2022-08-12T13:27:05 | albertvillanova | [
"bug"
] | ## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python 3.9:
```python
In [3]: typing.Opt... | false |
1,333,038,702 | https://api.github.com/repos/huggingface/datasets/issues/4810 | https://github.com/huggingface/datasets/pull/4810 | 4,810 | Add description to hellaswag dataset | closed | 2 | 2022-08-09T10:21:14 | 2022-09-23T11:35:38 | 2022-09-23T11:33:44 | julien-c | [
"dataset contribution"
] | null | true |
1,332,842,747 | https://api.github.com/repos/huggingface/datasets/issues/4809 | https://github.com/huggingface/datasets/pull/4809 | 4,809 | Complete the mlqa dataset card | closed | 4 | 2022-08-09T07:38:06 | 2022-08-09T16:26:21 | 2022-08-09T13:26:43 | el2e10 | [] | I fixed the issue #4808
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808. | true |
1,332,840,217 | https://api.github.com/repos/huggingface/datasets/issues/4808 | https://github.com/huggingface/datasets/issues/4808 | 4,808 | Add more information to the dataset card of mlqa dataset | closed | 2 | 2022-08-09T07:35:42 | 2022-08-09T13:33:23 | 2022-08-09T13:33:23 | el2e10 | [] | null | false |
1,332,784,110 | https://api.github.com/repos/huggingface/datasets/issues/4807 | https://github.com/huggingface/datasets/pull/4807 | 4,807 | document fix in opus_gnome dataset | closed | 1 | 2022-08-09T06:38:13 | 2022-08-09T07:28:03 | 2022-08-09T07:28:03 | gojiteji | [] | I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary). | true |
1,332,664,038 | https://api.github.com/repos/huggingface/datasets/issues/4806 | https://github.com/huggingface/datasets/pull/4806 | 4,806 | Fix opus_gnome dataset card | closed | 20 | 2022-08-09T03:40:15 | 2022-08-09T12:06:46 | 2022-08-09T11:52:04 | gojiteji | [] | I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805 | true |
1,332,653,531 | https://api.github.com/repos/huggingface/datasets/issues/4805 | https://github.com/huggingface/datasets/issues/4805 | 4,805 | Wrong example in opus_gnome dataset card | closed | 0 | 2022-08-09T03:21:27 | 2022-08-09T11:52:05 | 2022-08-09T11:52:05 | gojiteji | [
"bug"
] | ## Describe the bug
I found that [the example in the opus_gnome dataset card](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary) doesn't work.
## Steps to reproduce the bug
```python
load_dataset("gnome", lang1="it", lang2="pl")
```
`"gnome"` should be `"opus_gnome"`
## Expected r... | false |
1,332,630,358 | https://api.github.com/repos/huggingface/datasets/issues/4804 | https://github.com/huggingface/datasets/issues/4804 | 4,804 | streaming dataset with concatenating splits raises an error | open | 4 | 2022-08-09T02:41:56 | 2023-11-25T14:52:09 | null | Bing-su | [
"bug"
] | ## Describe the bug
streaming dataset with concatenating splits raises an error
## Steps to reproduce the bug
```python
from datasets import load_dataset
# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```
```python
from datasets import load_dataset
# er... | false |
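The truncated second snippet presumably adds `streaming=True` to the same call, which is the case that raises (hedged reconstruction):

```python
from datasets import load_dataset

repo = "nateraw/ade20k-tiny"
# error, per the report: split concatenation is not supported in streaming mode
dataset = load_dataset(repo, split="train+validation", streaming=True)
```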
1,332,079,562 | https://api.github.com/repos/huggingface/datasets/issues/4803 | https://github.com/huggingface/datasets/issues/4803 | 4,803 | Support `pipeline` argument in inspect.py functions | open | 1 | 2022-08-08T16:01:24 | 2023-09-25T12:21:35 | null | severo | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/hu... | false |
1,331,676,691 | https://api.github.com/repos/huggingface/datasets/issues/4802 | https://github.com/huggingface/datasets/issues/4802 | 4,802 | `with_format` behavior is inconsistent on different datasets | open | 1 | 2022-08-08T10:41:34 | 2022-08-09T16:49:09 | null | fxmarty | [
"bug"
] | ## Describe the bug
I found a case where `with_format` does not transform the dataset to the requested format.
## Steps to reproduce the bug
Run:
```python
from transformers import AutoTokenizer, AutoFeatureExtractor
from datasets import load_dataset
raw = load_dataset("glue", "sst2", split="train")
raw =... | false |
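For reference, this is the contract `with_format` is expected to honor, shown on toy data (no external model needed):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})
ds = ds.with_format("np")
print(type(ds[0]["x"]))  # <class 'numpy.ndarray'> once the format is applied
```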
1,331,337,418 | https://api.github.com/repos/huggingface/datasets/issues/4801 | https://github.com/huggingface/datasets/pull/4801 | 4,801 | Fix fine classes in trec dataset | closed | 1 | 2022-08-08T05:11:02 | 2022-08-22T16:29:14 | 2022-08-22T16:14:15 | albertvillanova | [] | This PR:
- replaces the fine labels, so that there are 50 instead of 47
- once more labels are added, all they (fine and coarse) have been re-ordered, so that they align with the order in: https://cogcomp.seas.upenn.edu/Data/QA/QC/definition.html
- the feature names have been fixed: `fine_label` instead of `label-fi... | true |
1,331,288,128 | https://api.github.com/repos/huggingface/datasets/issues/4800 | https://github.com/huggingface/datasets/pull/4800 | 4,800 | support LargeListArray in pyarrow | closed | 22 | 2022-08-08T03:58:46 | 2024-09-27T09:54:17 | 2024-08-12T14:43:46 | Jiaxin-Wen | [] | ```python
import numpy as np
import datasets
a = np.zeros((5000000, 768))
res = datasets.Dataset.from_dict({'embedding': a})
'''
File "/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/arrow_writer.py", line 178, in __arrow_array__
out = numpy_to_pyarrow_listarray(data)
File "/h... | true |
1,330,889,854 | https://api.github.com/repos/huggingface/datasets/issues/4799 | https://github.com/huggingface/datasets/issues/4799 | 4,799 | video dataset loader/parser | closed | 3 | 2022-08-07T01:54:12 | 2023-10-01T00:08:31 | 2022-08-09T16:42:51 | verbiiyo | [
"enhancement"
] | you know how you can [use `load_dataset` with any arbitrary csv file](https://huggingface.co/docs/datasets/loading#csv)? and you can also [use it to load a local image dataset](https://huggingface.co/docs/datasets/image_load#local-files)?
could you please add functionality to load a video dataset? it would be really... | false |
1,330,699,942 | https://api.github.com/repos/huggingface/datasets/issues/4798 | https://github.com/huggingface/datasets/pull/4798 | 4,798 | Shard generator | closed | 6 | 2022-08-06T09:14:06 | 2022-10-03T15:35:10 | 2022-10-03T15:35:10 | marianna13 | [] | Hi everyone! I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that allows splitting these large datasets into equally sized chunks, and, even better, to be able to run through these chunks one by one in a simple and convenient way. So I decided... | true |
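For context, the closest existing building block is `Dataset.shard`; the PR proposes a generator-style convenience over behavior like this (sketch):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
for index in range(4):
    # contiguous=True yields equal-sized consecutive chunks rather than strided ones
    shard = ds.shard(num_shards=4, index=index, contiguous=True)
    print(index, shard.num_rows)
```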
1,330,000,998 | https://api.github.com/repos/huggingface/datasets/issues/4797 | https://github.com/huggingface/datasets/pull/4797 | 4,797 | Torgo dataset creation | closed | 1 | 2022-08-05T14:18:26 | 2022-08-09T18:46:00 | 2022-08-09T18:46:00 | YingLi001 | [] | null | true |
1,329,887,810 | https://api.github.com/repos/huggingface/datasets/issues/4796 | https://github.com/huggingface/datasets/issues/4796 | 4,796 | ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB when adding image to Dataset | open | 19 | 2022-08-05T12:41:19 | 2024-11-29T16:35:17 | null | NielsRogge | [
"bug"
] | ## Describe the bug
When adding a Pillow image to an existing Dataset on the hub, `add_item` fails due to the Pillow image not being automatically converted into the Image feature.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from PIL import Image
dataset = load_dataset("hf-inte... | false |
1,329,525,732 | https://api.github.com/repos/huggingface/datasets/issues/4795 | https://github.com/huggingface/datasets/issues/4795 | 4,795 | Missing MBPP splits | closed | 4 | 2022-08-05T06:51:01 | 2022-09-13T12:27:24 | 2022-09-13T12:27:24 | stadlerb | [
"bug"
] | (@albertvillanova)
The [MBPP dataset on the Hub](https://huggingface.co/datasets/mbpp) has only a test split for both its "full" and its "sanitized" subset, while the [paper](https://arxiv.org/abs/2108.07732) states in subsection 2.1 regarding the full split:
> In the experiments described later in the paper, we hold... | false |
1,328,593,929 | https://api.github.com/repos/huggingface/datasets/issues/4792 | https://github.com/huggingface/datasets/issues/4792 | 4,792 | Add DocVQA | open | 1 | 2022-08-04T13:07:26 | 2022-08-08T05:31:20 | null | NielsRogge | [
"dataset request"
] | ## Adding a Dataset
- **Name:** DocVQA
- **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a “purpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this ... | false |
1,328,571,064 | https://api.github.com/repos/huggingface/datasets/issues/4791 | https://github.com/huggingface/datasets/issues/4791 | 4,791 | Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english | closed | 1 | 2022-08-04T12:49:16 | 2022-08-04T13:43:16 | 2022-08-04T13:43:16 | xplip | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english/viewer/rendered-wikipedia-en/train
### Description
The dataset can be loaded fine but the viewer shows this error:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
... | false |
1,328,546,904 | https://api.github.com/repos/huggingface/datasets/issues/4790 | https://github.com/huggingface/datasets/issues/4790 | 4,790 | Issue with fine classes in trec dataset | closed | 0 | 2022-08-04T12:28:51 | 2022-08-22T16:14:16 | 2022-08-22T16:14:16 | albertvillanova | [
"bug"
] | ## Describe the bug
According to their paper, the TREC dataset contains 2 kinds of classes:
- 6 coarse classes: TREC-6
- 50 fine classes: TREC-50
However, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated fo... | false |
1,328,409,253 | https://api.github.com/repos/huggingface/datasets/issues/4789 | https://github.com/huggingface/datasets/pull/4789 | 4,789 | Update doc upload_dataset.mdx | closed | 1 | 2022-08-04T10:24:00 | 2022-09-09T16:37:10 | 2022-09-09T16:34:58 | mishig25 | [] | null | true |
1,328,246,021 | https://api.github.com/repos/huggingface/datasets/issues/4788 | https://github.com/huggingface/datasets/pull/4788 | 4,788 | Fix NonMatchingChecksumError in mbpp dataset | closed | 4 | 2022-08-04T08:17:40 | 2022-08-04T17:34:00 | 2022-08-04T17:21:01 | albertvillanova | [] | Fix issue reported on the Hub: https://huggingface.co/datasets/mbpp/discussions/1
Fix #4787. | true |
1,328,243,911 | https://api.github.com/repos/huggingface/datasets/issues/4787 | https://github.com/huggingface/datasets/issues/4787 | 4,787 | NonMatchingChecksumError in mbpp dataset | closed | 0 | 2022-08-04T08:15:51 | 2022-08-04T17:21:01 | 2022-08-04T17:21:01 | albertvillanova | [
"bug"
] | ## Describe the bug
As reported on the Hub [Fix Checksum Mismatch](https://huggingface.co/datasets/mbpp/discussions/1), there is a `NonMatchingChecksumError` when loading mbpp dataset
## Steps to reproduce the bug
```python
ds = load_dataset("mbpp", "full")
```
## Expected results
Loading of the dataset with... | false |
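Until the checksum metadata was fixed, a common user-side workaround (not the repo-side fix) was to skip verification, using the `datasets` 2.x-era keyword:

```python
from datasets import load_dataset

# Assumption: ignore_verifications was the 2.x-era escape hatch for checksum mismatches.
ds = load_dataset("mbpp", "full", ignore_verifications=True)
```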
1,327,340,828 | https://api.github.com/repos/huggingface/datasets/issues/4786 | https://github.com/huggingface/datasets/issues/4786 | 4,786 | .save_to_disk('path', fs=s3) TypeError | closed | 0 | 2022-08-03T14:49:29 | 2022-08-03T15:23:00 | 2022-08-03T15:23:00 | h-k-dev | [
"bug"
] | The following code:
```python
import datasets
from datasets import load_dataset  # missing import added for runnability
train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"])
s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
train_dataset.save_to_disk("s3://datasets/", fs=s3)
```
produces following traceback:
```she... | false |
1,327,225,826 | https://api.github.com/repos/huggingface/datasets/issues/4785 | https://github.com/huggingface/datasets/pull/4785 | 4,785 | Require torchaudio<0.12.0 in docs | closed | 1 | 2022-08-03T13:32:00 | 2022-08-03T15:07:43 | 2022-08-03T14:52:16 | albertvillanova | [] | This PR adds to the docs the requirement of torchaudio<0.12.0 to avoid a RuntimeError.
Subsequent to PR:
- #4777 | true |
1,326,395,280 | https://api.github.com/repos/huggingface/datasets/issues/4784 | https://github.com/huggingface/datasets/issues/4784 | 4,784 | Add Multiface dataset | open | 3 | 2022-08-02T21:00:22 | 2022-08-08T14:42:36 | null | osanseviero | [
"dataset request",
"vision"
] | ## Adding a Dataset
- **Name:** Multiface dataset
- **Description:** High-quality recordings of the faces of 13 identities, each captured in a multi-view capture stage performing various facial expressions. An average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames per subject, with a capture rate of 30 fps
- **... | false |
1,326,375,011 | https://api.github.com/repos/huggingface/datasets/issues/4783 | https://github.com/huggingface/datasets/pull/4783 | 4,783 | Docs for creating a loading script for image datasets | closed | 7 | 2022-08-02T20:36:03 | 2022-09-09T17:08:14 | 2022-09-07T19:07:34 | stevhliu | [
"documentation"
] | This PR is a first draft of creating a loading script for image datasets. Feel free to let me know if there are any specificities I'm missing for this. 🙂
To do:
- [x] Document how to create different configurations. | true |
1,326,247,158 | https://api.github.com/repos/huggingface/datasets/issues/4782 | https://github.com/huggingface/datasets/issues/4782 | 4,782 | pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648 | closed | 5 | 2022-08-02T18:36:05 | 2022-08-22T09:46:28 | 2022-08-20T02:11:53 | conceptofmind | [
"bug"
] | ## Describe the bug
Following the example in CodeParrot, I receive an array size limitation error when deduplicating larger datasets.
## Steps to reproduce the bug
```python
dataset_name = "the_pile"
ds = load_dataset(dataset_name, split="train")
ds = ds.map(preprocess, num_proc=num_workers)
uniques = set(ds.u... | false |
1,326,114,161 | https://api.github.com/repos/huggingface/datasets/issues/4781 | https://github.com/huggingface/datasets/pull/4781 | 4,781 | Fix label renaming and add a battery of tests | closed | 12 | 2022-08-02T16:42:07 | 2022-09-12T11:27:06 | 2022-09-12T11:24:45 | Rocketknight1 | [] | This PR makes some changes to label renaming in `to_tf_dataset()`, both to fix some issues when users input something we weren't expecting, and also to make it easier to deprecate label renaming in future, if/when we want to move this special-casing logic to a function in `transformers`.
The main changes are:
- Lab... | true |
1,326,034,767 | https://api.github.com/repos/huggingface/datasets/issues/4780 | https://github.com/huggingface/datasets/pull/4780 | 4,780 | Remove apache_beam import from module level in natural_questions dataset | closed | 1 | 2022-08-02T15:34:54 | 2022-08-02T16:16:33 | 2022-08-02T16:03:17 | albertvillanova | [] | Instead of importing `apache_beam` at the module level, import it in the method `_build_pcollection`.
Fix #4779. | true |
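The pattern applied is a standard deferred import; a sketch with illustrative names:

```python
# apache_beam is imported only when Beam processing actually runs,
# not when the dataset module is loaded.
def _build_pcollection(pipeline, filepaths):
    import apache_beam as beam

    return pipeline | beam.Create(filepaths)
```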
1,325,997,225 | https://api.github.com/repos/huggingface/datasets/issues/4779 | https://github.com/huggingface/datasets/issues/4779 | 4,779 | Loading natural_questions requires apache_beam even with existing preprocessed data | closed | 0 | 2022-08-02T15:06:57 | 2022-08-02T16:03:18 | 2022-08-02T16:03:18 | albertvillanova | [
"bug"
] | ## Describe the bug
When loading "natural_questions", the package "apache_beam" is required:
```
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
This requirement is unnecessary, once ... | false |
1,324,928,750 | https://api.github.com/repos/huggingface/datasets/issues/4778 | https://github.com/huggingface/datasets/pull/4778 | 4,778 | Update local loading script docs | closed | 5 | 2022-08-01T20:21:07 | 2022-08-23T16:32:26 | 2022-08-23T16:32:22 | stevhliu | [
"documentation"
] | This PR clarifies the local loading script section to include how to load a dataset after you've modified the local loading script (closes #4732). | true |
1,324,548,784 | https://api.github.com/repos/huggingface/datasets/issues/4777 | https://github.com/huggingface/datasets/pull/4777 | 4,777 | Require torchaudio<0.12.0 to avoid RuntimeError | closed | 1 | 2022-08-01T14:50:50 | 2022-08-02T17:35:14 | 2022-08-02T17:21:39 | albertvillanova | [] | Related to:
- https://github.com/huggingface/transformers/issues/18379
Fix partially #4776. | true |
1,324,493,860 | https://api.github.com/repos/huggingface/datasets/issues/4776 | https://github.com/huggingface/datasets/issues/4776 | 4,776 | RuntimeError when using torchaudio 0.12.0 to load MP3 audio file | closed | 3 | 2022-08-01T14:11:23 | 2023-03-02T15:58:16 | 2023-03-02T15:58:15 | albertvillanova | [] | The current version of `torchaudio` (0.12.0) raises a RuntimeError when trying to use the `sox_io` backend if the non-Python dependency `sox` is not installed:
https://github.com/pytorch/audio/blob/2e1388401c434011e9f044b40bc8374f2ddfc414/torchaudio/backend/sox_io_backend.py#L21-L29
```python
def _fail_load(
filepath: str... | false |
1,324,136,486 | https://api.github.com/repos/huggingface/datasets/issues/4775 | https://github.com/huggingface/datasets/issues/4775 | 4,775 | Streaming not supported in Theivaprakasham/wildreceipt | closed | 1 | 2022-08-01T09:46:17 | 2022-08-01T10:30:29 | 2022-08-01T10:30:29 | NitishkKarra | [
"streaming"
] | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | false |
1,323,375,844 | https://api.github.com/repos/huggingface/datasets/issues/4774 | https://github.com/huggingface/datasets/issues/4774 | 4,774 | Training hangs at the end of epoch, with set_transform/with_transform+multiple workers | open | 0 | 2022-07-31T06:32:28 | 2022-07-31T06:36:43 | null | memray | [
"bug"
] | ## Describe the bug
I use load_dataset() (I tried with [wiki](https://huggingface.co/datasets/wikipedia) and my own json data) and use set_transform/with_transform for preprocessing. But it hangs at the end of the 1st epoch if dataloader_num_workers>=1. No problem with single worker.
## Steps to reproduce the bu... | false |
1,322,796,721 | https://api.github.com/repos/huggingface/datasets/issues/4773 | https://github.com/huggingface/datasets/pull/4773 | 4,773 | Document loading from relative path | closed | 5 | 2022-07-29T23:32:21 | 2022-08-25T18:36:45 | 2022-08-25T18:34:23 | stevhliu | [
"documentation"
] | This PR describes loading a dataset from the Hub by specifying a relative path in `data_dir` or `data_files` in `load_dataset` (see #4757). | true |
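Roughly, the behavior being documented (repo id hypothetical):

```python
from datasets import load_dataset

# For a Hub-hosted dataset, a relative data_dir resolves against the dataset
# repository on the Hub rather than the local filesystem.
ds = load_dataset("username/my_dataset", data_dir="data")
```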
1,322,693,123 | https://api.github.com/repos/huggingface/datasets/issues/4772 | https://github.com/huggingface/datasets/issues/4772 | 4,772 | AssertionError when using label_cols in to_tf_dataset | closed | 5 | 2022-07-29T21:32:12 | 2022-09-12T11:24:46 | 2022-09-12T11:24:46 | lehrig | [
"bug"
] | ## Describe the bug
An incorrect `AssertionError` is raised when using `label_cols` in `to_tf_dataset` and the label's key name is `label`.
The assertion is in this line:
https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/arrow_dataset.py#L475
## Steps to reproduce the bug
```python
from datasets... | false |
1,322,600,725 | https://api.github.com/repos/huggingface/datasets/issues/4771 | https://github.com/huggingface/datasets/pull/4771 | 4,771 | Remove dummy data generation docs | closed | 1 | 2022-07-29T19:20:46 | 2022-08-03T00:04:01 | 2022-08-02T23:50:29 | stevhliu | [
"documentation"
] | This PR removes instructions to generate dummy data since that is no longer necessary for datasets that are uploaded to the Hub instead of our GitHub repo.
Close #4744 | true |
1,322,147,855 | https://api.github.com/repos/huggingface/datasets/issues/4770 | https://github.com/huggingface/datasets/pull/4770 | 4,770 | fix typo | closed | 2 | 2022-07-29T11:46:12 | 2022-07-29T16:02:07 | 2022-07-29T16:02:07 | Jiaxin-Wen | [] | By defaul -> By default | true |
1,322,121,554 | https://api.github.com/repos/huggingface/datasets/issues/4769 | https://github.com/huggingface/datasets/issues/4769 | 4,769 | Fail to process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96. | open | 0 | 2022-07-29T11:18:24 | 2022-07-29T11:18:24 | null | zhuango | [
"bug"
] | ## Describe the bug
datasets fail to process SQuADv1.1 with max_seq_length=128, doc_stride=96 when calling datasets["train"].train_dataset.map().
## Steps to reproduce the bug
I used huggingface[ TF2 question-answering examples](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-a... | false |
1,321,913,645 | https://api.github.com/repos/huggingface/datasets/issues/4768 | https://github.com/huggingface/datasets/pull/4768 | 4,768 | Unpin rouge_score test dependency | closed | 1 | 2022-07-29T08:17:40 | 2022-07-29T16:42:28 | 2022-07-29T16:29:17 | albertvillanova | [] | Once `rouge-score` has made the 0.1.2 release to fix their issue https://github.com/google-research/google-research/issues/1212, we can unpin it.
Related to:
- #4735 | true |
1,321,843,538 | https://api.github.com/repos/huggingface/datasets/issues/4767 | https://github.com/huggingface/datasets/pull/4767 | 4,767 | Add 2.4.0 version added to docstrings | closed | 1 | 2022-07-29T07:01:56 | 2022-07-29T11:16:49 | 2022-07-29T11:03:58 | albertvillanova | [] | null | true |
1,321,787,428 | https://api.github.com/repos/huggingface/datasets/issues/4765 | https://github.com/huggingface/datasets/pull/4765 | 4,765 | Fix version in map_nested docstring | closed | 1 | 2022-07-29T05:44:32 | 2022-07-29T11:51:25 | 2022-07-29T11:38:36 | albertvillanova | [] | After the latest release, the `map_nested` docstring needs to be updated with the right version for `versionchanged` and `versionadded`. | true |
1,321,295,961 | https://api.github.com/repos/huggingface/datasets/issues/4764 | https://github.com/huggingface/datasets/pull/4764 | 4,764 | Update CI badge | closed | 1 | 2022-07-28T18:04:20 | 2022-07-29T11:36:37 | 2022-07-29T11:23:51 | mariosasko | [] | Replace the old CircleCI badge with a new one for GH Actions. | true |
1,321,295,876 | https://api.github.com/repos/huggingface/datasets/issues/4763 | https://github.com/huggingface/datasets/pull/4763 | 4,763 | More rigorous shape inference in to_tf_dataset | closed | 1 | 2022-07-28T18:04:15 | 2022-09-08T19:17:54 | 2022-09-08T19:15:41 | Rocketknight1 | [] | `tf.data` needs to know the shape of tensors emitted from a `tf.data.Dataset`. Although `None` dimensions are possible, overusing them can cause problems - Keras uses the dataset tensor spec at compile-time, and so saying that a dimension is `None` when it's actually constant can hurt performance, or even cause trainin... | true |
1,321,261,733 | https://api.github.com/repos/huggingface/datasets/issues/4762 | https://github.com/huggingface/datasets/pull/4762 | 4,762 | Improve features resolution in streaming | closed | 2 | 2022-07-28T17:28:11 | 2022-09-09T17:17:39 | 2022-09-09T17:15:30 | lhoestq | [] | `IterableDataset._resolve_features` was returning the features sorted alphabetically by column name, which is not consistent with non-streaming. I changed this and used the order of columns from the data themselves. It was causing some inconsistencies in the dataset viewer as well.
I also fixed `interleave_datasets`... | true |
1,321,068,411 | https://api.github.com/repos/huggingface/datasets/issues/4761 | https://github.com/huggingface/datasets/issues/4761 | 4,761 | parallel searching in multi-gpu setting using faiss | open | 26 | 2022-07-28T14:57:03 | 2023-07-21T02:07:10 | null | Jiaxin-Wen | [] | While I notice that `add_faiss_index` supports assigning multiple GPUs, I am still confused about how it works.
Does the `search_batch` function automatically parallelize the input queries across the different GPUs? https://github.com/huggingface/datasets/blob/d76599bdd4d186b2e7c4f468b05766016055a0a5/src/datasets/sea... | false |
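For reference, a sketch of the API under discussion (requires `faiss-gpu`; the device list is the multi-GPU support the question refers to, and the two-GPU setup is an assumption):

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"emb": np.random.rand(1000, 32).astype("float32").tolist()})
ds.add_faiss_index(column="emb", device=[0, 1])  # assumption: two GPUs available

queries = np.random.rand(4, 32).astype("float32")
scores, examples = ds.get_nearest_examples_batch("emb", queries, k=5)
```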
1,320,878,223 | https://api.github.com/repos/huggingface/datasets/issues/4760 | https://github.com/huggingface/datasets/issues/4760 | 4,760 | Issue with offline mode | closed | 17 | 2022-07-28T12:45:14 | 2025-05-04T16:44:59 | 2024-01-23T10:58:22 | SaulLu | [
"bug"
] | ## Describe the bug
I can't retrieve a cached dataset with offline mode enabled
## Steps to reproduce the bug
To reproduce my issue, first, you'll need to run a script that will cache the dataset
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "0"
import datasets
datasets.logging.set_verbosity_i... | false |
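The second half of the reproduction (truncated above) then presumably flips the flag in a fresh process and expects a cache hit; a hedged reconstruction (dataset name illustrative):

```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

# Reportedly fails instead of serving the previously cached dataset.
ds = load_dataset("imdb", split="train")
```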
1,320,783,300 | https://api.github.com/repos/huggingface/datasets/issues/4759 | https://github.com/huggingface/datasets/issues/4759 | 4,759 | Dataset Viewer issue for Toygar/turkish-offensive-language-detection | closed | 1 | 2022-07-28T11:21:43 | 2022-07-28T13:17:56 | 2022-07-28T13:17:48 | tanyelai | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/Toygar/turkish-offensive-language-detection
### Description
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
Hi, I provided train.csv, test.csv and valid.csv files. However, viewer says dataset does not exist.
Should I n... | false |
1,320,602,532 | https://api.github.com/repos/huggingface/datasets/issues/4757 | https://github.com/huggingface/datasets/issues/4757 | 4,757 | Document better when relative paths are transformed to URLs | closed | 0 | 2022-07-28T08:46:27 | 2022-08-25T18:34:24 | 2022-08-25T18:34:24 | albertvillanova | [
"documentation"
] | As discussed with @ydshieh, when passing a relative path as `data_dir` to `load_dataset` of a dataset hosted on the Hub, the relative path is transformed to the corresponding URL of the Hub dataset.
Currently, we mention this in our docs here: [Create a dataset loading script > Download data files and organize split... | false |
1,319,687,044 | https://api.github.com/repos/huggingface/datasets/issues/4755 | https://github.com/huggingface/datasets/issues/4755 | 4,755 | Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size | open | 3 | 2022-07-27T14:54:11 | 2023-12-13T19:34:43 | null | srobertjames | [
"bug"
] | ## Describe the bug
When using `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples will be overflown into multiple token sequences.
However, when tokenizing is done via `Dataset.map`, with `n_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because ea... | false |
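For reference, this is the field in question when tokenizing directly with a fast tokenizer (model choice illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(
    ["short text", "a much longer text " * 64],
    truncation=True,
    max_length=16,
    stride=8,
    return_overflowing_tokens=True,
)
# Maps each produced chunk back to the index of its source sample.
print(enc["overflow_to_sample_mapping"])
```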
1,319,681,541 | https://api.github.com/repos/huggingface/datasets/issues/4754 | https://github.com/huggingface/datasets/pull/4754 | 4,754 | Remove "unkown" language tags | closed | 1 | 2022-07-27T14:50:12 | 2022-07-27T15:03:00 | 2022-07-27T14:51:06 | lhoestq | [] | Following https://github.com/huggingface/datasets/pull/4753 there was still a "unknown" langauge tag in `wikipedia` so the job at https://github.com/huggingface/datasets/runs/7542567336?check_suite_focus=true failed for wikipedia | true |
1,319,571,745 | https://api.github.com/repos/huggingface/datasets/issues/4753 | https://github.com/huggingface/datasets/pull/4753 | 4,753 | Add `language_bcp47` tag | closed | 1 | 2022-07-27T13:31:16 | 2022-07-27T14:50:03 | 2022-07-27T14:37:56 | lhoestq | [] | Following (internal) https://github.com/huggingface/moon-landing/pull/3509, we need to move the bcp47 tags to `language_bcp47` and keep the `language` tag for iso 639 1-2-3 codes. In particular I made sure that all the tags in `languages` are not longer than 3 characters. I moved the rest to `language_bcp47` and fixed ... | true |
1,319,464,409 | https://api.github.com/repos/huggingface/datasets/issues/4752 | https://github.com/huggingface/datasets/issues/4752 | 4,752 | DatasetInfo issue when testing multiple configs: mixed task_templates | open | 3 | 2022-07-27T12:04:54 | 2022-08-08T18:20:50 | null | BramVanroy | [
"bug"
] | ## Describe the bug
When running the `datasets-cli test` it would seem that some config properties in a DatasetInfo get mangled, leading to issues, e.g., about the ClassLabel.
## Steps to reproduce the bug
In summary, what I want to do is create three configs:
- unfiltered: no classlabel, no tasks. Gets data fr... | false |
1,319,440,903 | https://api.github.com/repos/huggingface/datasets/issues/4751 | https://github.com/huggingface/datasets/pull/4751 | 4,751 | Added dataset information in clinic oos dataset card | closed | 1 | 2022-07-27T11:44:28 | 2022-07-28T10:53:21 | 2022-07-28T10:40:37 | arnav-ladkat | [] | This PR aims to add relevant information, such as the description, language, and citation information, to the clinic oos dataset card. | true |
1,319,333,645 | https://api.github.com/repos/huggingface/datasets/issues/4750 | https://github.com/huggingface/datasets/issues/4750 | 4,750 | Easily create loading script for benchmark comprising multiple huggingface datasets | closed | 2 | 2022-07-27T10:13:38 | 2022-07-27T13:58:07 | 2022-07-27T13:58:07 | JoelNiklaus | [] | Hi,
I would like to create a loading script for a benchmark comprising multiple huggingface datasets.
The function _split_generators needs to return the files for the respective dataset. However, the files are not always in the same location for each dataset. I want to just make a wrapper dataset that provides a si... | false |
1,318,874,913 | https://api.github.com/repos/huggingface/datasets/issues/4748 | https://github.com/huggingface/datasets/pull/4748 | 4,748 | Add image classification processing guide | closed | 1 | 2022-07-27T00:11:11 | 2022-07-27T17:28:21 | 2022-07-27T17:16:12 | stevhliu | [
"documentation"
] | This PR follows up on #4710 to separate the object detection and image classification guides. It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset. | true |
1,318,586,932 | https://api.github.com/repos/huggingface/datasets/issues/4747 | https://github.com/huggingface/datasets/pull/4747 | 4,747 | Shard parquet in `download_and_prepare` | closed | 2 | 2022-07-26T18:05:01 | 2022-09-15T13:43:55 | 2022-09-15T13:41:26 | lhoestq | [] | Following https://github.com/huggingface/datasets/pull/4724 (needs to be merged first)
It's good practice to shard parquet files to enable parallelism with spark/dask/etc.
I added the `max_shard_size` parameter to `download_and_prepare` (default to 500MB for parquet, and None for arrow).
```python
from datase... | true |
1,318,486,599 | https://api.github.com/repos/huggingface/datasets/issues/4746 | https://github.com/huggingface/datasets/issues/4746 | 4,746 | Dataset Viewer issue for yanekyuk/wikikey | closed | 2 | 2022-07-26T16:25:16 | 2022-09-08T08:15:22 | 2022-09-08T08:15:22 | ai-ashok | [
"dataset-viewer"
] | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | false |
1,318,016,655 | https://api.github.com/repos/huggingface/datasets/issues/4745 | https://github.com/huggingface/datasets/issues/4745 | 4,745 | Allow `list_datasets` to include private datasets | closed | 4 | 2022-07-26T10:16:08 | 2023-07-25T15:01:49 | 2023-07-25T15:01:49 | ola13 | [
"enhancement"
] | I am working with a large collection of private datasets, it would be convenient for me to be able to list them.
I would envision extending the convention of using `use_auth_token` keyword argument to `list_datasets` function, then calling:
```
list_datasets(use_auth_token="my_token")
```
would return the li... | false |
1,317,822,345 | https://api.github.com/repos/huggingface/datasets/issues/4744 | https://github.com/huggingface/datasets/issues/4744 | 4,744 | Remove instructions to generate dummy data from our docs | closed | 2 | 2022-07-26T07:32:58 | 2022-08-02T23:50:30 | 2022-08-02T23:50:30 | albertvillanova | [
"documentation"
] | In our docs, we indicate to generate the dummy data: https://huggingface.co/docs/datasets/dataset_script#testing-data-and-checksum-metadata
However:
- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI
- for datasets on the Hub:
- they do not pass any CI t... | false |
1,317,362,561 | https://api.github.com/repos/huggingface/datasets/issues/4743 | https://github.com/huggingface/datasets/pull/4743 | 4,743 | Update map docs | closed | 1 | 2022-07-25T20:59:35 | 2022-07-27T16:22:04 | 2022-07-27T16:10:04 | stevhliu | [
"documentation"
] | This PR updates the `map` docs for processing text to include `return_tensors="np"` to make it run faster (see #4676). | true |
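The recommended pattern, roughly (tokenizer choice illustrative):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("imdb", split="train")
# Returning NumPy arrays from the tokenizer avoids an extra conversion step.
ds = ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length",
                            max_length=128, return_tensors="np"),
    batched=True,
)
```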
1,317,260,663 | https://api.github.com/repos/huggingface/datasets/issues/4742 | https://github.com/huggingface/datasets/issues/4742 | 4,742 | Dummy data nowhere to be found | closed | 3 | 2022-07-25T19:18:42 | 2022-11-04T14:04:24 | 2022-11-04T14:04:10 | BramVanroy | [
"bug"
] | ## Describe the bug
To finalize my dataset, I wanted to create dummy data as per the guide and I ran
```shell
datasets-cli dummy_data datasets/hebban-reviews --auto_generate
```
where hebban-reviews is [this repo](https://huggingface.co/datasets/BramVanroy/hebban-reviews). And even though the scripts runs an... | false |
1,316,621,272 | https://api.github.com/repos/huggingface/datasets/issues/4741 | https://github.com/huggingface/datasets/pull/4741 | 4,741 | Fix to dict conversion of `DatasetInfo`/`Features` | closed | 1 | 2022-07-25T10:41:27 | 2022-07-25T12:50:36 | 2022-07-25T12:37:53 | mariosasko | [] | Fix #4681 | true |
1,316,478,007 | https://api.github.com/repos/huggingface/datasets/issues/4740 | https://github.com/huggingface/datasets/pull/4740 | 4,740 | Fix multiprocessing in map_nested | closed | 3 | 2022-07-25T08:44:19 | 2022-07-28T10:53:23 | 2022-07-28T10:40:31 | albertvillanova | [] | As previously discussed:
Before, multiprocessing was not used in `map_nested` if `num_proc` was greater than or equal to `len(iterable)`.
- Multiprocessing was not used e.g. when passing `num_proc=20` but having 19 files to download
- As by default, `DownloadManager` sets `num_proc=16`, before multiprocessing was ... | true |
1,316,400,915 | https://api.github.com/repos/huggingface/datasets/issues/4739 | https://github.com/huggingface/datasets/pull/4739 | 4,739 | Deprecate metrics | closed | 4 | 2022-07-25T07:35:55 | 2022-07-28T11:44:27 | 2022-07-28T11:32:16 | albertvillanova | [] | Deprecate metrics:
- deprecate public functions: `load_metric`, `list_metrics` and `inspect_metric`: docstring and warning
- test that deprecation warnings are issued
- deprecate metrics in all docs
- remove mentions to metrics in docs and README
- deprecate internal functions/classes
Maybe we should also stop testi... | true |
1,315,222,166 | https://api.github.com/repos/huggingface/datasets/issues/4738 | https://github.com/huggingface/datasets/pull/4738 | 4,738 | Use CI unit/integration tests | closed | 2 | 2022-07-22T16:48:00 | 2022-07-26T20:19:22 | 2022-07-26T20:07:05 | albertvillanova | [] | This PR:
- Implements separate unit/integration tests
- A fail in integration tests does not cancel the rest of the jobs
- We should implement more robust integration tests: work in progress in a subsequent PR
- For the moment, tests involving network requests are marked as integration: to be evolved | true |
1,315,011,004 | https://api.github.com/repos/huggingface/datasets/issues/4737 | https://github.com/huggingface/datasets/issues/4737 | 4,737 | Download error on scene_parse_150 | closed | 2 | 2022-07-22T13:28:28 | 2022-09-01T15:37:11 | 2022-09-01T15:37:11 | juliensimon | [
"bug"
] | ```
from datasets import load_dataset
dataset = load_dataset("scene_parse_150", "scene_parsing")
FileNotFoundError: Couldn't find file at http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
```
| false |
1,314,931,996 | https://api.github.com/repos/huggingface/datasets/issues/4736 | https://github.com/huggingface/datasets/issues/4736 | 4,736 | Dataset Viewer issue for deepklarity/huggingface-spaces-dataset | closed | 1 | 2022-07-22T12:14:18 | 2022-07-22T13:46:38 | 2022-07-22T13:46:38 | dk-crazydiv | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on an uploaded dataset, and I've been getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is cs... | false |
1,314,501,641 | https://api.github.com/repos/huggingface/datasets/issues/4735 | https://github.com/huggingface/datasets/pull/4735 | 4,735 | Pin rouge_score test dependency | closed | 1 | 2022-07-22T07:18:21 | 2022-07-22T07:58:14 | 2022-07-22T07:45:18 | albertvillanova | [] | Temporarily pin `rouge_score` (to avoid the latest version, 0.0.7) until the issue is fixed.
Fix #4734 | true |
1,314,495,382 | https://api.github.com/repos/huggingface/datasets/issues/4734 | https://github.com/huggingface/datasets/issues/4734 | 4,734 | Package rouge-score cannot be imported | closed | 1 | 2022-07-22T07:15:05 | 2022-07-22T07:45:19 | 2022-07-22T07:45:18 | albertvillanova | [
"bug"
] | ## Describe the bug
After today's release of `rouge_score-0.0.7`, it seems to be no longer importable. Our CI fails: https://github.com/huggingface/datasets/runs/7463218591?check_suite_focus=true
```
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_bigbench
FAILED tests/test_dataset_common.py::L... | false |
1,314,479,616 | https://api.github.com/repos/huggingface/datasets/issues/4733 | https://github.com/huggingface/datasets/issues/4733 | 4,733 | rouge metric | closed | 1 | 2022-07-22T07:06:51 | 2022-07-22T09:08:02 | 2022-07-22T09:05:35 | asking28 | [
"bug"
] | ## Describe the bug
A clear and concise description of what the bug is.
Loading the ROUGE metric gives an error after the latest rouge-score==0.0.7 release.
Downgrading to rouge-score==0.0.4 works fine.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
A clear and concis... | false |
1,314,371,566 | https://api.github.com/repos/huggingface/datasets/issues/4732 | https://github.com/huggingface/datasets/issues/4732 | 4,732 | Document better that loading a dataset passing its name does not use the local script | closed | 3 | 2022-07-22T06:07:31 | 2022-08-23T16:32:23 | 2022-08-23T16:32:23 | albertvillanova | [
"documentation"
] | As reported by @TrentBrick here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191858596, it could be more clear that loading a dataset by passing its name does not use the (modified) local script of it.
What he did:
- he installed `datasets` from source
- he modified locally `datasets/the_pile/... | false |
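What does pick up local edits is passing the script path itself (path illustrative):

```python
from datasets import load_dataset

# A filesystem path loads the local (possibly modified) loading script,
# while a plain name like "the_pile" resolves to the canonical dataset instead.
ds = load_dataset("./datasets/the_pile/the_pile.py")
```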
1,313,773,348 | https://api.github.com/repos/huggingface/datasets/issues/4731 | https://github.com/huggingface/datasets/pull/4731 | 4,731 | docs: ✏️ fix TranslationVariableLanguages example | closed | 1 | 2022-07-21T20:35:41 | 2022-07-22T07:01:00 | 2022-07-22T06:48:42 | severo | [] | null | true |
1,313,421,263 | https://api.github.com/repos/huggingface/datasets/issues/4730 | https://github.com/huggingface/datasets/issues/4730 | 4,730 | Loading imagenet-1k validation split takes much more RAM than expected | closed | 1 | 2022-07-21T15:14:06 | 2022-07-21T16:41:04 | 2022-07-21T16:41:04 | fxmarty | [
"bug"
] | ## Describe the bug
Loading into memory the validation split of imagenet-1k takes much more RAM than expected. Assuming ImageNet-1k is 150 GB, with a validation split of 50,000 images and a train split of 1,281,167 images, I would expect only about 6 GB to be loaded in RAM.
## Steps to reproduce the bug
```python
from datasets import... | false |
1,313,374,015 | https://api.github.com/repos/huggingface/datasets/issues/4729 | https://github.com/huggingface/datasets/pull/4729 | 4,729 | Refactor Hub tests | closed | 1 | 2022-07-21T14:43:13 | 2022-07-22T15:09:49 | 2022-07-22T14:56:29 | albertvillanova | [] | This PR refactors `test_upstream_hub` by removing unittests and using the following pytest Hub fixtures:
- `ci_hub_config`
- `set_ci_hub_access_token`: to replace setUp/tearDown
- `temporary_repo` context manager: to replace `try... finally`
- `cleanup_repo`: to delete repo accidentally created if one of the tests ... | true |
1,312,897,454 | https://api.github.com/repos/huggingface/datasets/issues/4728 | https://github.com/huggingface/datasets/issues/4728 | 4,728 | load_dataset gives "403" error when using Financial Phrasebank | closed | 3 | 2022-07-21T08:43:32 | 2022-08-04T08:32:35 | 2022-08-04T08:32:35 | rohitvincent | [] | I tried both code snippets below to download the financial phrasebank dataset (https://huggingface.co/datasets/financial_phrasebank) with the sentences_allagree subset. However, the code gives a 403 error when executed from multiple machines, locally or on the cloud.
```
from datasets import load_dataset, DownloadMode
load... | false |
1,312,645,391 | https://api.github.com/repos/huggingface/datasets/issues/4727 | https://github.com/huggingface/datasets/issues/4727 | 4,727 | Dataset Viewer issue for TheNoob3131/mosquito-data | closed | 1 | 2022-07-21T05:24:48 | 2022-07-21T07:51:56 | 2022-07-21T07:45:01 | thenerd31 | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/TheNoob3131/mosquito-data/viewer/TheNoob3131--mosquito-data/test
### Description
Dataset preview not showing with large files. Says 'split cache is empty' even though there are train and test splits.
### Owner
_No response_ | false |
1,312,082,175 | https://api.github.com/repos/huggingface/datasets/issues/4726 | https://github.com/huggingface/datasets/pull/4726 | 4,726 | Fix broken link to the Hub | closed | 1 | 2022-07-20T22:57:27 | 2022-07-21T14:33:18 | 2022-07-21T08:00:54 | stevhliu | [] | The Markdown link fails to render if it is in the same line as the `<span>`. This PR implements @mishig25's fix by using `<a href=" ">` instead.
 | true |
1,311,907,096 | https://api.github.com/repos/huggingface/datasets/issues/4725 | https://github.com/huggingface/datasets/issues/4725 | 4,725 | the_pile datasets URL broken. | closed | 5 | 2022-07-20T20:57:30 | 2022-07-22T06:09:46 | 2022-07-21T07:38:19 | TrentBrick | [
"bug"
] | https://github.com/huggingface/datasets/pull/3627 changed the Eleuther AI Pile dataset URL from https://the-eye.eu/ to https://mystic.the-eye.eu/ but the latter is now broken and the former works again.
Note that when I git clone the repo and use `pip install -e .` and then edit the URL back the codebase doesn't se... | false |
1,311,127,404 | https://api.github.com/repos/huggingface/datasets/issues/4724 | https://github.com/huggingface/datasets/pull/4724 | 4,724 | Download and prepare as Parquet for cloud storage | closed | 8 | 2022-07-20T13:39:02 | 2022-09-05T17:27:25 | 2022-09-05T17:25:27 | lhoestq | [] | Downloading a dataset as Parquet to cloud storage can be useful for streaming mode and for use with Spark/Dask/Ray.
This PR adds support for `fsspec` URIs like `s3://...`, `gcs://...`, etc., and adds a `file_format` parameter to save as Parquet instead of Arrow:
```python
from datasets import *
cache_dir = "s3://..."
build... | true |
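A hedged sketch of the resulting API (bucket and credentials hypothetical):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("imdb")
builder.download_and_prepare(
    "s3://my-bucket/imdb",              # fsspec URI
    file_format="parquet",              # Parquet shards instead of Arrow
    max_shard_size="500MB",             # sharding added in the follow-up PR #4747 above
    storage_options={"key": "...", "secret": "..."},  # assumption: s3fs credentials
)
```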
1,310,970,604 | https://api.github.com/repos/huggingface/datasets/issues/4723 | https://github.com/huggingface/datasets/pull/4723 | 4,723 | Refactor conftest fixtures | closed | 1 | 2022-07-20T12:15:22 | 2022-07-21T14:37:11 | 2022-07-21T14:24:18 | albertvillanova | [] | Previously, fixture modules `hub_fixtures` and `s3_fixtures`:
- were both at the root test directory
- were imported using `import *`
- as a side effect, the modules `os` and `pytest` were imported from `s3_fixtures` into `conftest`
This PR:
- puts both fixture modules in a dedicated directory `fixtures`
- re... | true |
1,310,785,916 | https://api.github.com/repos/huggingface/datasets/issues/4722 | https://github.com/huggingface/datasets/pull/4722 | 4,722 | Docs: Fix same-page hashlinks | closed | 1 | 2022-07-20T10:04:37 | 2022-07-20T17:02:33 | 2022-07-20T16:49:36 | mishig25 | [] | `href="/docs/datasets/quickstart#audio"` implicitly goes to `href="/docs/datasets/{$LATEST_STABLE_VERSION}/quickstart#audio"`. Therefore, the #audio hashlink at https://huggingface.co/docs/datasets/quickstart#audio does not work, since the new docs were not added to v2.3.2 (LATEST_STABLE_VERSION)
to preserve the version, it... | true |