| id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (2 classes) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user_login (string) | labels (list) | body (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,292,797,878 | https://api.github.com/repos/huggingface/datasets/issues/4620 | https://github.com/huggingface/datasets/issues/4620 | 4,620 | Data type is not recognized when using datetime.time | closed | 2 | 2022-07-04T08:13:38 | 2022-07-07T13:57:11 | 2022-07-07T13:57:11 | severo | [
"bug"
] | ## Describe the bug
Creating a dataset from a pandas dataframe with `datetime.time` format generates an error.
## Steps to reproduce the bug
```python
import pandas as pd
from datetime import time
from datasets import Dataset
df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
dataset = Dataset.from_pandas... | false |
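A minimal completion of the truncated reproduction above (issue #4620) — only the final call is filled in, based on the issue title; on the affected `datasets` version it is expected to fail with a type-recognition error for `datetime.time` values:

```python
import pandas as pd
from datetime import time
from datasets import Dataset

df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
# Fails on the affected versions: the datetime.time dtype is not recognized
dataset = Dataset.from_pandas(df)
```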
1,292,107,275 | https://api.github.com/repos/huggingface/datasets/issues/4619 | https://github.com/huggingface/datasets/issues/4619 | 4,619 | np arrays get turned into native lists | open | 3 | 2022-07-02T17:54:57 | 2022-07-03T20:27:07 | null | ZhaofengWu | [
"bug"
] | ## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datas... | false |
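The reproduction above is truncated; a hedged sketch of what it likely shows (the column name and values are assumptions) follows. Note that requesting the NumPy format is the usual way to get arrays back on access:

```python
import numpy as np
import datasets

dataset = datasets.Dataset.from_dict({"feature": [np.array([0.0, 1.0, 2.0])]})
print(type(dataset[0]["feature"]))  # <class 'list'> — Arrow-backed storage decodes to Python lists

# Asking for the NumPy format restores ndarray outputs on access
dataset.set_format("np")
print(type(dataset[0]["feature"]))  # <class 'numpy.ndarray'>
```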
1,292,078,225 | https://api.github.com/repos/huggingface/datasets/issues/4618 | https://github.com/huggingface/datasets/issues/4618 | 4,618 | contribute data loading for object detection datasets with yolo data format | open | 4 | 2022-07-02T15:21:59 | 2022-07-21T14:10:44 | null | faizankshaikh | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://hugging... | false |
1,291,307,428 | https://api.github.com/repos/huggingface/datasets/issues/4615 | https://github.com/huggingface/datasets/pull/4615 | 4,615 | Fix `embed_storage` on features inside lists/sequences | closed | 1 | 2022-07-01T11:52:08 | 2022-07-08T12:13:10 | 2022-07-08T12:01:36 | mariosasko | [] | Add a dedicated function for embed_storage to always preserve the embedded/casted arrays (and to have more control over `embed_storage` in general).
Fix #4591
~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ Done! | true |
1,291,218,020 | https://api.github.com/repos/huggingface/datasets/issues/4614 | https://github.com/huggingface/datasets/pull/4614 | 4,614 | Ensure ConcatenationTable.cast uses target_schema metadata | closed | 2 | 2022-07-01T10:22:08 | 2022-07-19T13:48:45 | 2022-07-19T13:36:24 | dtuit | [] | Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. This causes an issue when using cast_column and the underlying table is a ConcatenationTable.
Code example of where the issue arises:
```
from datasets import Dataset, Image
column1 = [0, 1]
image_paths = ['/images/im... | true |
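A hedged reconstruction of the truncated example above (issue #4614); the image paths are placeholders, and the concatenation step is an assumption about how the underlying `ConcatenationTable` arises before `cast_column` is called:

```python
from datasets import Dataset, Image, concatenate_datasets

column1 = [0, 1]
image_paths = ["/images/image1.png", "/images/image2.png"]  # placeholder paths

ds = Dataset.from_dict({"column1": column1, "image": image_paths})
ds = concatenate_datasets([ds, ds])    # the underlying table is now a ConcatenationTable
ds = ds.cast_column("image", Image())  # the cast that previously lost the target schema metadata
```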
1,291,181,193 | https://api.github.com/repos/huggingface/datasets/issues/4613 | https://github.com/huggingface/datasets/pull/4613 | 4,613 | Align/fix license metadata info | closed | 3 | 2022-07-01T09:50:50 | 2022-07-01T12:53:57 | 2022-07-01T12:42:47 | julien-c | [] | fix bad "other-*" licenses and add the corresponding "license_details" when relevant | true |
1,290,984,660 | https://api.github.com/repos/huggingface/datasets/issues/4612 | https://github.com/huggingface/datasets/issues/4612 | 4,612 | Release 2.3.0 broke custom iterable datasets | closed | 3 | 2022-07-01T06:46:07 | 2022-07-05T15:08:21 | 2022-07-05T15:08:21 | aapot | [
"bug"
] | ## Describe the bug
Trying to iterate over examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` in the 2.3.0 release.
## Steps to reproduce the bug
```python
next(iter(custom_iterable_dataset))
```
## Expected results
`next(iter(custom_iterable_dataset))` should retu... | false |
1,290,940,874 | https://api.github.com/repos/huggingface/datasets/issues/4611 | https://github.com/huggingface/datasets/pull/4611 | 4,611 | Preserve member order by MockDownloadManager.iter_archive | closed | 1 | 2022-07-01T05:48:20 | 2022-07-01T16:59:11 | 2022-07-01T16:48:28 | albertvillanova | [] | Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which migh not be the same order as in the original archive.
See issue in:
- https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027
This PR fixes the order of the members yield... | true |
1,290,603,827 | https://api.github.com/repos/huggingface/datasets/issues/4610 | https://github.com/huggingface/datasets/issues/4610 | 4,610 | codeparrot/github-code failing to load | closed | 8 | 2022-06-30T20:24:48 | 2022-07-05T14:24:13 | 2022-07-05T09:19:56 | PyDataBlog | [
"bug"
] | ## Describe the bug
codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'`
## Steps to reproduce the bug
```python
from datasets import load_dataset
```
## Expected results
loaded dataset object
## Actual results
`... | false |
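The reproduction above ends at the import; per the issue title, the failing call would be the load itself. A sketch follows, with `streaming=True` as an assumption (the dataset is very large, so streaming is the typical way to load it):

```python
from datasets import load_dataset

# Failing call per the report; streaming avoids materializing the full dataset
ds = load_dataset("codeparrot/github-code", split="train", streaming=True)
print(next(iter(ds)))
```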
1,290,392,083 | https://api.github.com/repos/huggingface/datasets/issues/4609 | https://github.com/huggingface/datasets/issues/4609 | 4,609 | librispeech dataset has to download the whole subset when specifying the split to use | closed | 2 | 2022-06-30T16:38:24 | 2022-07-12T21:44:32 | 2022-07-12T21:44:32 | sunhaozhepy | [
"bug"
] | ## Describe the bug
The librispeech dataset has to download the whole subset when specifying the split to use.
## Steps to reproduce the bug
see below
# Sample code to reproduce the bug
```
!pip install datasets
from datasets import load_dataset
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100")
... | false |
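Until the underlying packaging is fixed, streaming is a workaround that avoids downloading the whole subset up front (samples are fetched on the fly); a sketch under that assumption:

```python
from datasets import load_dataset

raw_dataset = load_dataset(
    "librispeech_asr", "clean", split="train.100", streaming=True
)
print(next(iter(raw_dataset)))  # fetches only what is consumed
```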
1,290,298,002 | https://api.github.com/repos/huggingface/datasets/issues/4608 | https://github.com/huggingface/datasets/pull/4608 | 4,608 | Fix xisfile, xgetsize, xisdir, xlistdir in private repo | closed | 2 | 2022-06-30T15:23:21 | 2022-07-06T12:45:59 | 2022-07-06T12:34:19 | lhoestq | [] | `xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip://a.txt::https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. However it's not working when passing a simple file `https://huggingface/datasets/username/dataset_name/resolve/main/data.zip... | true |
1,290,171,941 | https://api.github.com/repos/huggingface/datasets/issues/4607 | https://github.com/huggingface/datasets/pull/4607 | 4,607 | Align more metadata with other repo types (models,spaces) | closed | 5 | 2022-06-30T13:52:12 | 2022-07-01T12:00:37 | 2022-07-01T11:49:14 | julien-c | [] | see also associated PR on the `datasets-tagging` Space: https://huggingface.co/spaces/huggingface/datasets-tagging/discussions/2 (to merge after this one is merged) | true |
1,290,083,534 | https://api.github.com/repos/huggingface/datasets/issues/4606 | https://github.com/huggingface/datasets/issues/4606 | 4,606 | evaluation result changes after `datasets` version change | closed | 1 | 2022-06-30T12:43:26 | 2023-07-25T15:05:26 | 2023-07-25T15:05:26 | thnkinbtfly | [
"bug"
] | ## Describe the bug
evaluation result changes after `datasets` version change
## Steps to reproduce the bug
1. Train a model on WikiAnn
2. reload the ckpt -> test accuracy becomes same as eval accuracy
3. such behavior is gone after downgrading `datasets`
https://colab.research.google.com/drive/1kYz7-aZRGdaya... | false |
1,290,058,970 | https://api.github.com/repos/huggingface/datasets/issues/4605 | https://github.com/huggingface/datasets/issues/4605 | 4,605 | Dataset Viewer issue for boris/gis_filtered | closed | 5 | 2022-06-30T12:23:34 | 2022-07-06T12:34:19 | 2022-07-06T12:34:19 | WaterKnight1998 | [
"streaming"
] | ### Link
https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train
### Description
When I try to access this from the website I get this error:
Status code: 400
Exception: ClientResponseError
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datase... | false |
1,289,963,962 | https://api.github.com/repos/huggingface/datasets/issues/4604 | https://github.com/huggingface/datasets/pull/4604 | 4,604 | Update CI Windows orb | closed | 1 | 2022-06-30T11:00:31 | 2022-06-30T13:33:11 | 2022-06-30T13:22:26 | albertvillanova | [] | This PR tries to fix recurrent random CI failures on Windows.
After 2 runs, it seems to have fixed the issue.
Fix #4603. | true |
1,289,963,331 | https://api.github.com/repos/huggingface/datasets/issues/4603 | https://github.com/huggingface/datasets/issues/4603 | 4,603 | CI fails recurrently and randomly on Windows | closed | 0 | 2022-06-30T10:59:58 | 2022-06-30T13:22:25 | 2022-06-30T13:22:25 | albertvillanova | [
"bug"
] | As reported by @lhoestq,
The Windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular, it seems that building the wheels fails. Here is an example of the logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\to... | false |
1,289,950,379 | https://api.github.com/repos/huggingface/datasets/issues/4602 | https://github.com/huggingface/datasets/pull/4602 | 4,602 | Upgrade setuptools in windows CI | closed | 1 | 2022-06-30T10:48:41 | 2023-09-24T10:05:10 | 2022-06-30T12:46:17 | lhoestq | [] | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe... | true |
1,289,924,715 | https://api.github.com/repos/huggingface/datasets/issues/4601 | https://github.com/huggingface/datasets/pull/4601 | 4,601 | Upgrade pip in WIN CI | closed | 2 | 2022-06-30T10:25:42 | 2023-09-24T10:04:25 | 2022-06-30T10:43:38 | lhoestq | [] | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe... | true |
1,289,177,042 | https://api.github.com/repos/huggingface/datasets/issues/4600 | https://github.com/huggingface/datasets/pull/4600 | 4,600 | Remove multiple config section | closed | 1 | 2022-06-29T19:09:21 | 2022-07-04T17:41:20 | 2022-07-04T17:29:41 | stevhliu | [
"documentation"
] | This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :) | true |
1,288,849,933 | https://api.github.com/repos/huggingface/datasets/issues/4599 | https://github.com/huggingface/datasets/pull/4599 | 4,599 | Smooth-BLEU bug fixed | closed | 1 | 2022-06-29T14:51:42 | 2022-09-23T07:42:40 | 2022-09-23T07:42:40 | Aktsvigun | [
"transfer-to-evaluate"
] | Hi,
the current implementation of smooth-BLEU contains a bug: it smooths unigrams as well. Consequently, when the reference and the translation consist of totally different tokens, it nevertheless returns a non-zero value (please see the attached image).
This however contradicts the source paper suggesting the smoot... | true |
1,288,774,514 | https://api.github.com/repos/huggingface/datasets/issues/4598 | https://github.com/huggingface/datasets/pull/4598 | 4,598 | Host financial_phrasebank data on the Hub | closed | 1 | 2022-06-29T13:59:31 | 2022-07-01T09:41:14 | 2022-07-01T09:29:36 | albertvillanova | [] |
Fix #4597. | true |
1,288,672,007 | https://api.github.com/repos/huggingface/datasets/issues/4597 | https://github.com/huggingface/datasets/issues/4597 | 4,597 | Streaming issue for financial_phrasebank | closed | 3 | 2022-06-29T12:45:43 | 2022-07-01T09:29:36 | 2022-07-01T09:29:36 | lewtun | [
"hosted-on-google-drive"
] | ### Link
https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train
### Description
As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dat... | false |
1,288,381,735 | https://api.github.com/repos/huggingface/datasets/issues/4596 | https://github.com/huggingface/datasets/issues/4596 | 4,596 | Dataset Viewer issue for universal_dependencies | closed | 2 | 2022-06-29T08:50:29 | 2022-09-07T11:29:28 | 2022-09-07T11:29:27 | Jordy-VL | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/universal_dependencies
### Description
invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
### Owner
_No response_ | false |
1,288,275,976 | https://api.github.com/repos/huggingface/datasets/issues/4595 | https://github.com/huggingface/datasets/issues/4595 | 4,595 | Dataset Viewer issue with False positive PII redaction | closed | 2 | 2022-06-29T07:15:57 | 2022-06-29T08:29:41 | 2022-06-29T08:27:49 | cakiki | [] | ### Link
https://huggingface.co/datasets/cakiki/rosetta-code
### Description
Hello, I just noticed an entry being redacted that shouldn't have been:
`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`
### Owner
_No response_ | false |
1,288,070,023 | https://api.github.com/repos/huggingface/datasets/issues/4594 | https://github.com/huggingface/datasets/issues/4594 | 4,594 | load_from_disk suggests incorrect fix when used to load DatasetDict | closed | 0 | 2022-06-29T01:40:01 | 2022-06-29T04:03:44 | 2022-06-29T04:03:44 | dvsth | [
"bug"
Edit: Please feel free to remove this issue. The problem was not the error message but the fact that `DatasetDict.load_from_disk` does not support loading nested splits, i.e. when one of the splits is itself a `DatasetDict`. If nesting splits is an antipattern, perhaps the load_from_disk function can throw a warning indi... | false |
1,288,067,699 | https://api.github.com/repos/huggingface/datasets/issues/4593 | https://github.com/huggingface/datasets/pull/4593 | 4,593 | Fix error message when using load_from_disk to load DatasetDict | closed | 0 | 2022-06-29T01:34:27 | 2022-06-29T04:01:59 | 2022-06-29T04:01:39 | dvsth | [] | Issue #4594
Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error.
Fix: The appropriate function which should be suggested instead is `datasets.dataset_dict.load_from_disk`.
Chan... | true |
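For reference, the function the corrected message points to, `datasets.dataset_dict.load_from_disk`, is the one exposed as `DatasetDict.load_from_disk`; a minimal usage sketch (the path is a placeholder):

```python
from datasets import DatasetDict

ds_dict = DatasetDict.load_from_disk("path/to/saved/dataset_dict")  # placeholder path
print(ds_dict)
```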
1,288,029,377 | https://api.github.com/repos/huggingface/datasets/issues/4592 | https://github.com/huggingface/datasets/issues/4592 | 4,592 | Issue with jalFaizy/detect_chess_pieces when running datasets-cli test | closed | 3 | 2022-06-29T00:15:54 | 2022-06-29T10:30:03 | 2022-06-29T07:49:27 | faizankshaikh | [] | ### Link
https://huggingface.co/datasets/jalFaizy/detect_chess_pieces
### Description
I am trying to write a appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_c... | false |
1,288,021,332 | https://api.github.com/repos/huggingface/datasets/issues/4591 | https://github.com/huggingface/datasets/issues/4591 | 4,591 | Can't push Images to hub with manual Dataset | closed | 1 | 2022-06-29T00:01:23 | 2022-07-08T12:01:36 | 2022-07-08T12:01:35 | cceyda | [
"bug"
] | ## Describe the bug
If I create a dataset that includes an 'Image' feature manually, the decoded images are not pushed when pushing to the Hub;
instead it looks for the images where the local paths are (or used to be).
This doesn't (or at least didn't use to) happen with imagefolder. I want to build the dataset manually because it is compli... | false |
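A hedged sketch of building an image dataset manually as described above (paths and repo id are placeholders); per the report, the decoded image bytes were not embedded on push:

```python
from datasets import Dataset, Features, Image

features = Features({"image": Image()})
ds = Dataset.from_dict(
    {"image": ["path/to/image1.png", "path/to/image2.png"]},  # placeholder local paths
    features=features,
)

# Per the report, the Hub copy ended up pointing at the (local) paths
# rather than containing the embedded image bytes
ds.push_to_hub("username/my-image-dataset")  # placeholder repo id
```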
1,287,941,058 | https://api.github.com/repos/huggingface/datasets/issues/4590 | https://github.com/huggingface/datasets/pull/4590 | 4,590 | Generalize meta_path json file creation in load.py [#4540] | closed | 4 | 2022-06-28T21:48:06 | 2022-07-08T14:55:13 | 2022-07-07T13:17:45 | VijayKalmath | [] | # What does this PR do?
## Summary
*In function `_copy_script_and_other_resources_in_importable_dir`, using string split when generating `meta_path` throws an error in the edge case raised in #4540.*
## Additions
-
## Changes
- Changed meta_path to use `os.path.splitext` instead of using `str.split` to gener... | true |
1,287,600,029 | https://api.github.com/repos/huggingface/datasets/issues/4589 | https://github.com/huggingface/datasets/issues/4589 | 4,589 | Permission denied: '/home/.cache' when load_dataset with local script | closed | 0 | 2022-06-28T16:26:03 | 2022-06-29T06:26:28 | 2022-06-29T06:25:08 | jiangh0 | [
"bug"
] | null | false |
1,287,368,751 | https://api.github.com/repos/huggingface/datasets/issues/4588 | https://github.com/huggingface/datasets/pull/4588 | 4,588 | Host head_qa data on the Hub and fix NonMatchingChecksumError | closed | 3 | 2022-06-28T13:39:28 | 2022-07-05T16:01:15 | 2022-07-05T15:49:52 | albertvillanova | [] | This PR:
- Hosts head_qa data on the Hub instead of Google Drive
- Fixes NonMatchingChecksumError
Fix https://huggingface.co/datasets/head_qa/discussions/1 | true |
1,287,291,494 | https://api.github.com/repos/huggingface/datasets/issues/4587 | https://github.com/huggingface/datasets/pull/4587 | 4,587 | Validate new_fingerprint passed by user | closed | 1 | 2022-06-28T12:46:21 | 2022-06-28T14:11:57 | 2022-06-28T14:00:44 | lhoestq | [] | Users can pass the dataset fingerprint they want in `map` and other dataset transforms.
However the fingerprint is used to name cache files so we need to make sure it doesn't contain bad characters as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long | true |
1,287,105,636 | https://api.github.com/repos/huggingface/datasets/issues/4586 | https://github.com/huggingface/datasets/pull/4586 | 4,586 | Host pn_summary data on the Hub instead of Google Drive | closed | 1 | 2022-06-28T10:05:05 | 2022-06-28T14:52:56 | 2022-06-28T14:42:03 | albertvillanova | [] | Fix #4581. | true |
1,287,064,929 | https://api.github.com/repos/huggingface/datasets/issues/4585 | https://github.com/huggingface/datasets/pull/4585 | 4,585 | Host multi_news data on the Hub instead of Google Drive | closed | 1 | 2022-06-28T09:32:06 | 2022-06-28T14:19:35 | 2022-06-28T14:08:48 | albertvillanova | [] | Host data files of multi_news dataset on the Hub.
They were on Google Drive.
Fix #4580. | true |
1,286,911,993 | https://api.github.com/repos/huggingface/datasets/issues/4584 | https://github.com/huggingface/datasets/pull/4584 | 4,584 | Add binary classification task IDs | closed | 4 | 2022-06-28T07:30:39 | 2023-09-24T10:04:04 | 2023-01-26T09:27:52 | lewtun | [] | As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification.
This PR adds binary classification to the task IDs to enable this.
Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597
cc @abhishek... | true |
1,286,790,871 | https://api.github.com/repos/huggingface/datasets/issues/4583 | https://github.com/huggingface/datasets/pull/4583 | 4,583 | Implementation of FLAC support using torchaudio | closed | 0 | 2022-06-28T05:24:21 | 2022-06-28T05:47:02 | 2022-06-28T05:47:02 | rafael-ariascalles | [] | I have added FLAC audio support with torchaudio, given that Librosa and SoundFile can give problems. Also, FLAC is used as the audio format by https://mlcommons.org/en/peoples-speech/ | true |
1,286,517,060 | https://api.github.com/repos/huggingface/datasets/issues/4582 | https://github.com/huggingface/datasets/pull/4582 | 4,582 | add_column should preserve _indexes | open | 1 | 2022-06-27T22:35:47 | 2022-07-06T15:19:54 | null | cceyda | [] | https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126
Doing `.add_column("x", x_data)` also removed any `_indexes` on the dataset; we decided this shouldn't be the case.
This was because `add_column` was creating a new `Dataset(...)` and it wasn't possible to pass indexes on init.
With this PR now... | true |
1,286,362,907 | https://api.github.com/repos/huggingface/datasets/issues/4581 | https://github.com/huggingface/datasets/issues/4581 | 4,581 | Dataset Viewer issue for pn_summary | closed | 3 | 2022-06-27T20:56:12 | 2022-06-28T14:42:03 | 2022-06-28T14:42:03 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation
### Description
Getting an index error on the `validation` and `test` splits:
```
Server error
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | false |
1,286,312,912 | https://api.github.com/repos/huggingface/datasets/issues/4580 | https://github.com/huggingface/datasets/issues/4580 | 4,580 | Dataset Viewer issue for multi_news | closed | 2 | 2022-06-27T20:25:25 | 2022-06-28T14:08:48 | 2022-06-28T14:08:48 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/multi_news
### Description
Not sure what the index error is referring to here:
```
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | false |
1,286,106,285 | https://api.github.com/repos/huggingface/datasets/issues/4579 | https://github.com/huggingface/datasets/pull/4579 | 4,579 | Support streaming cfq dataset | closed | 6 | 2022-06-27T17:11:23 | 2022-07-04T19:35:01 | 2022-07-04T19:23:57 | albertvillanova | [] | Support streaming cfq dataset. | true |
1,286,086,400 | https://api.github.com/repos/huggingface/datasets/issues/4578 | https://github.com/huggingface/datasets/issues/4578 | 4,578 | [Multi Configs] Use directories to differentiate between subsets/configurations | open | 3 | 2022-06-27T16:55:11 | 2023-06-14T15:43:05 | null | lhoestq | [
"enhancement"
] | Currently to define several subsets/configurations of your dataset, you need to use a dataset script.
However, it would be nice to have a no-code way to do this.
For example we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per confi... | false |
1,285,703,775 | https://api.github.com/repos/huggingface/datasets/issues/4577 | https://github.com/huggingface/datasets/pull/4577 | 4,577 | Add authentication tip to `load_dataset` | closed | 1 | 2022-06-27T12:05:34 | 2022-07-04T13:13:15 | 2022-07-04T13:01:30 | mariosasko | [] | Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`. | true |
1,285,698,576 | https://api.github.com/repos/huggingface/datasets/issues/4576 | https://github.com/huggingface/datasets/pull/4576 | 4,576 | Include `metadata.jsonl` in resolved data files | closed | 5 | 2022-06-27T12:01:29 | 2022-07-01T12:44:55 | 2022-06-30T10:15:32 | mariosasko | [] | Include `metadata.jsonl` in resolved data files.
Fix #4548
@lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts fo... | true |
1,285,446,700 | https://api.github.com/repos/huggingface/datasets/issues/4575 | https://github.com/huggingface/datasets/issues/4575 | 4,575 | Problem about wmt17 zh-en dataset | closed | 5 | 2022-06-27T08:35:42 | 2022-08-23T10:01:02 | 2022-08-23T10:00:21 | winterfell2021 | [
"bug"
] | It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`.
So using `data = load_dataset('wmt17', "zh-en")` to load the wmt17 zh-en dataset raises the following exception:
```
Traceback (most recent call last):
File "train.py", line 78, in <module>
data = load_dataset(args.... | false |
1,285,380,616 | https://api.github.com/repos/huggingface/datasets/issues/4574 | https://github.com/huggingface/datasets/pull/4574 | 4,574 | Support streaming mlsum dataset | closed | 7 | 2022-06-27T07:37:03 | 2022-07-21T13:37:30 | 2022-07-21T12:40:00 | albertvillanova | [] | Support streaming mlsum dataset.
This PR:
- pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1`
- https://github.com/fsspec/filesystem_spec/pull/830
- unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1`
> s3fs 2021.8.1 requires fsspec==2021.08.1
- s... | true |
1,285,023,629 | https://api.github.com/repos/huggingface/datasets/issues/4573 | https://github.com/huggingface/datasets/pull/4573 | 4,573 | Fix evaluation metadata for ncbi_disease | closed | 2 | 2022-06-26T20:29:32 | 2023-09-24T09:35:07 | 2022-09-23T09:38:02 | lewtun | [
"dataset contribution"
] | This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream. | true |
1,285,022,499 | https://api.github.com/repos/huggingface/datasets/issues/4572 | https://github.com/huggingface/datasets/issues/4572 | 4,572 | Dataset Viewer issue for mlsum | closed | 1 | 2022-06-26T20:24:17 | 2022-07-21T12:40:01 | 2022-07-21T12:40:01 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There's seems to be a problem with the download / streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
### Owner
No | false |
1,284,883,289 | https://api.github.com/repos/huggingface/datasets/issues/4571 | https://github.com/huggingface/datasets/issues/4571 | 4,571 | move under the facebook org? | open | 3 | 2022-06-26T11:19:09 | 2023-09-25T12:05:18 | null | lewtun | [] | ### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset... | false |
1,284,846,168 | https://api.github.com/repos/huggingface/datasets/issues/4570 | https://github.com/huggingface/datasets/issues/4570 | 4,570 | Dataset sharding non-contiguous? | closed | 5 | 2022-06-26T08:34:05 | 2022-06-30T11:00:47 | 2022-06-26T14:36:20 | cakiki | [
"bug"
] | ## Describe the bug
I'm not sure if this is a bug; it's more likely normal behavior, but I wanted to double-check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, reproduce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggi... | false |
1,284,833,694 | https://api.github.com/repos/huggingface/datasets/issues/4569 | https://github.com/huggingface/datasets/issues/4569 | 4,569 | Dataset Viewer issue for sst2 | closed | 2 | 2022-06-26T07:32:54 | 2022-06-27T06:37:48 | 2022-06-27T06:37:48 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with Connectio... | false |
1,284,655,624 | https://api.github.com/repos/huggingface/datasets/issues/4568 | https://github.com/huggingface/datasets/issues/4568 | 4,568 | XNLI cache reload is very slow | closed | 3 | 2022-06-25T16:43:56 | 2022-07-04T14:29:40 | 2022-07-04T14:29:40 | Muennighoff | [
"bug"
] | ### Reproduce
Using `2.3.3.dev0`
`from datasets import load_dataset`
`load_dataset("xnli", "en")`
Turn off Internet
`load_dataset("xnli", "en")`
I eventually cancelled the second `load_dataset` because it took very long. It would be great to have something to specify, e.g. `only_load_from_cache`, and avoid the ... | false |
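There is no `only_load_from_cache` flag, but the library's offline switch approximates it; a sketch follows (note this prevents the network retries, though it does not by itself make the cache reload faster):

```python
import os

# Must be set before `datasets` is imported
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("xnli", "en")  # resolves from the local cache, no network calls
```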
1,284,528,474 | https://api.github.com/repos/huggingface/datasets/issues/4567 | https://github.com/huggingface/datasets/pull/4567 | 4,567 | Add evaluation data for amazon_reviews_multi | closed | 2 | 2022-06-25T09:40:52 | 2023-09-24T09:35:22 | 2022-09-23T09:37:23 | lewtun | [
"dataset contribution"
] | null | true |
1,284,397,594 | https://api.github.com/repos/huggingface/datasets/issues/4566 | https://github.com/huggingface/datasets/issues/4566 | 4,566 | Document link #load_dataset_enhancing_performance points to nowhere | closed | 2 | 2022-06-25T01:18:19 | 2023-01-24T16:33:40 | 2023-01-24T16:33:40 | subercui | [
"bug"
] | ## Describe the bug
A clear and concise description of what the bug is.

The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dat... | false |
1,284,141,666 | https://api.github.com/repos/huggingface/datasets/issues/4565 | https://github.com/huggingface/datasets/issues/4565 | 4,565 | Add UFSC OCPap dataset | closed | 1 | 2022-06-24T20:07:54 | 2022-07-06T19:03:02 | 2022-07-06T19:03:02 | johnnv1 | [
"dataset request"
] | ## Adding a Dataset
- **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4)
- **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 cancer-diagnosed and 3 healthy slides of oral brush samples, from distinct patients.
- **Paper:** https://dx.doi.... | false |
1,283,932,333 | https://api.github.com/repos/huggingface/datasets/issues/4564 | https://github.com/huggingface/datasets/pull/4564 | 4,564 | Support streaming bookcorpus dataset | closed | 1 | 2022-06-24T16:13:39 | 2022-07-06T09:34:48 | 2022-07-06T09:23:04 | albertvillanova | [] | Support streaming bookcorpus dataset. | true |
1,283,914,383 | https://api.github.com/repos/huggingface/datasets/issues/4563 | https://github.com/huggingface/datasets/pull/4563 | 4,563 | Support streaming allocine dataset | closed | 1 | 2022-06-24T15:55:03 | 2022-06-24T16:54:57 | 2022-06-24T16:44:41 | albertvillanova | [] | Support streaming allocine dataset.
Fix #4562. | true |
1,283,779,557 | https://api.github.com/repos/huggingface/datasets/issues/4562 | https://github.com/huggingface/datasets/issues/4562 | 4,562 | Dataset Viewer issue for allocine | closed | 5 | 2022-06-24T13:50:38 | 2022-06-27T06:39:32 | 2022-06-24T16:44:41 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/allocine
### Description
Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed:
```
Status code: 400
Exception: AttributeError
Message: 'TarContainedFile' object has no attribute 'readable'
```
### Owner
No | false |
1,283,624,242 | https://api.github.com/repos/huggingface/datasets/issues/4561 | https://github.com/huggingface/datasets/pull/4561 | 4,561 | Add evaluation data to acronym_identification | closed | 1 | 2022-06-24T11:17:33 | 2022-06-27T09:37:55 | 2022-06-27T08:49:22 | lewtun | [] | null | true |
1,283,558,873 | https://api.github.com/repos/huggingface/datasets/issues/4560 | https://github.com/huggingface/datasets/pull/4560 | 4,560 | Add evaluation metadata to imagenet-1k | closed | 2 | 2022-06-24T10:12:41 | 2023-09-24T09:35:32 | 2022-09-23T09:37:03 | lewtun | [
"dataset contribution"
] | null | true |
1,283,544,937 | https://api.github.com/repos/huggingface/datasets/issues/4559 | https://github.com/huggingface/datasets/pull/4559 | 4,559 | Add action names in schema_guided_dstc8 dataset card | closed | 1 | 2022-06-24T10:00:01 | 2022-06-24T10:54:28 | 2022-06-24T10:43:47 | lhoestq | [] | As aseked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names in the dataset card | true |
1,283,479,650 | https://api.github.com/repos/huggingface/datasets/issues/4558 | https://github.com/huggingface/datasets/pull/4558 | 4,558 | Add evaluation metadata to wmt14 | closed | 2 | 2022-06-24T09:08:54 | 2023-09-24T09:35:39 | 2022-09-23T09:36:50 | lewtun | [
"dataset contribution"
] | null | true |
1,283,473,889 | https://api.github.com/repos/huggingface/datasets/issues/4557 | https://github.com/huggingface/datasets/pull/4557 | 4,557 | Add evaluation metadata to wmt16 | closed | 3 | 2022-06-24T09:04:23 | 2023-09-24T09:35:49 | 2022-09-23T09:36:32 | lewtun | [
"dataset contribution"
] | Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right? | true |
1,283,462,881 | https://api.github.com/repos/huggingface/datasets/issues/4556 | https://github.com/huggingface/datasets/issues/4556 | 4,556 | Dataset Viewer issue for conll2003 | closed | 1 | 2022-06-24T08:55:18 | 2022-06-24T09:50:39 | 2022-06-24T09:50:39 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/conll2003/viewer/conll2003/test
### Description
Seems like a cache problem with this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll... | false |
1,283,451,651 | https://api.github.com/repos/huggingface/datasets/issues/4555 | https://github.com/huggingface/datasets/issues/4555 | 4,555 | Dataset Viewer issue for xtreme | closed | 1 | 2022-06-24T08:46:08 | 2022-06-24T09:50:45 | 2022-06-24T09:50:45 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test
### Description
There seems to be a problem with the cache of this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/data... | false |
1,283,369,453 | https://api.github.com/repos/huggingface/datasets/issues/4554 | https://github.com/huggingface/datasets/pull/4554 | 4,554 | Fix WMT dataset loading issue and docs update (Re-opened) | closed | 1 | 2022-06-24T07:26:16 | 2022-07-08T15:39:20 | 2022-07-08T15:27:44 | khushmeeet | [] | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
Let me know if any additional changes are required.
Thanks | true |
1,282,779,560 | https://api.github.com/repos/huggingface/datasets/issues/4553 | https://github.com/huggingface/datasets/pull/4553 | 4,553 | Stop dropping columns in to_tf_dataset() before we load batches | closed | 4 | 2022-06-23T18:21:05 | 2022-07-04T19:00:13 | 2022-07-04T18:49:01 | Rocketknight1 | [] | `to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no real way to check which columns the transform might need, we skip dropping columns and instea... | true |
1,282,615,646 | https://api.github.com/repos/huggingface/datasets/issues/4552 | https://github.com/huggingface/datasets/pull/4552 | 4,552 | Tell users to upload on the hub directly | closed | 2 | 2022-06-23T15:47:52 | 2022-06-26T15:49:46 | 2022-06-26T15:39:11 | lhoestq | [] | As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs.
Moreover since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can... | true |
1,282,534,807 | https://api.github.com/repos/huggingface/datasets/issues/4551 | https://github.com/huggingface/datasets/pull/4551 | 4,551 | Perform hidden file check on relative data file path | closed | 5 | 2022-06-23T14:49:11 | 2022-06-30T14:49:20 | 2022-06-30T14:38:18 | mariosasko | [] | Fix #4549 | true |
1,282,374,441 | https://api.github.com/repos/huggingface/datasets/issues/4550 | https://github.com/huggingface/datasets/issues/4550 | 4,550 | imdb source error | closed | 1 | 2022-06-23T13:02:52 | 2022-06-23T13:47:05 | 2022-06-23T13:47:04 | Muhtasham | [
"bug"
] | ## Describe the bug
imdb dataset not loading
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imdb")
```
## Expected results
## Actual results
```bash
06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and pr... | false |
1,282,312,975 | https://api.github.com/repos/huggingface/datasets/issues/4549 | https://github.com/huggingface/datasets/issues/4549 | 4,549 | FileNotFoundError when passing a data_file inside a directory starting with double underscores | closed | 2 | 2022-06-23T12:19:24 | 2022-06-30T14:38:18 | 2022-06-30T14:38:18 | lhoestq | [
"bug"
] | Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true
This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412 | false |
1,282,218,096 | https://api.github.com/repos/huggingface/datasets/issues/4548 | https://github.com/huggingface/datasets/issues/4548 | 4,548 | Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix | closed | 1 | 2022-06-23T10:58:57 | 2022-06-30T10:15:32 | 2022-06-30T10:15:32 | polinaeterna | [] | If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and therefore ignored.
This happens when a directory is structured like as follows:
```
train/
file_1.jpg
file_2.jpg
test/
file_3.jpg
file_4.jpg
metadata.jsonl
```
or as follows:... | false |
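Once the fix lands, a single root-level `metadata.jsonl` like the layout above should be picked up for both splits; a hedged usage sketch (the directory is a placeholder):

```python
from datasets import load_dataset

# imagefolder infers the train/test splits from the directory names and
# associates the root metadata.jsonl with both of them
ds = load_dataset("imagefolder", data_dir="path/to/data")
print(ds["train"][0])
```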
1,282,160,517 | https://api.github.com/repos/huggingface/datasets/issues/4547 | https://github.com/huggingface/datasets/pull/4547 | 4,547 | [CI] Fix some warnings | closed | 4 | 2022-06-23T10:10:49 | 2022-06-28T14:10:57 | 2022-06-28T13:59:54 | lhoestq | [] | There are some warnings in the CI that are annoying, I tried to remove most of them | true |
1,282,093,288 | https://api.github.com/repos/huggingface/datasets/issues/4546 | https://github.com/huggingface/datasets/pull/4546 | 4,546 | [CI] fixing seqeval install in ci by pinning setuptools-scm | closed | 1 | 2022-06-23T09:24:37 | 2022-06-23T10:24:16 | 2022-06-23T10:13:44 | lhoestq | [] | The latest setuptools-scm version supported on 3.6 is 6.4.2. However for some reason circleci has version 7, which doesn't work.
I fixed this by pinning the version of setuptools-scm in the circleci job
Fix https://github.com/huggingface/datasets/issues/4544 | true |
1,280,899,028 | https://api.github.com/repos/huggingface/datasets/issues/4545 | https://github.com/huggingface/datasets/pull/4545 | 4,545 | Make DuplicateKeysError more user friendly [For Issue #2556] | closed | 2 | 2022-06-22T21:01:34 | 2022-06-28T09:37:06 | 2022-06-28T09:26:04 | VijayKalmath | [] | # What does this PR do?
## Summary
*The `DuplicateKeysError` does not provide any information about the examples that have the same key.*
*This information is very helpful for debugging the dataset generator script.*
## Additions
-
## Changes
- Changed `DuplicateKeysError Class` in `src/datase... | true |
1,280,500,340 | https://api.github.com/repos/huggingface/datasets/issues/4544 | https://github.com/huggingface/datasets/issues/4544 | 4,544 | [CI] seqeval installation fails sometimes on python 3.6 | closed | 0 | 2022-06-22T16:35:23 | 2022-06-23T10:13:44 | 2022-06-23T10:13:44 | lhoestq | [] | The CI sometimes fails to install seqeval, which cause the `seqeval` metric tests to fail.
The installation fails because of this error:
```
Collecting seqeval
Downloading seqeval-1.2.2.tar.gz (43 kB)
     |████████                        | 10 kB 42.1 MB/s eta 0:00:01
     |███████████████ ...
1,280,379,781 | https://api.github.com/repos/huggingface/datasets/issues/4543 | https://github.com/huggingface/datasets/pull/4543 | 4,543 | [CI] Fix upstream hub test url | closed | 2 | 2022-06-22T15:34:27 | 2022-06-22T16:37:40 | 2022-06-22T16:27:37 | lhoestq | [] | Some tests were still using moon-stagign instead of hub-ci.
I also updated the token to use one dedicated to `datasets` | true |
1,280,269,445 | https://api.github.com/repos/huggingface/datasets/issues/4542 | https://github.com/huggingface/datasets/issues/4542 | 4,542 | [to_tf_dataset] Use Feather for better compatibility with TensorFlow ? | open | 48 | 2022-06-22T14:42:00 | 2022-10-11T08:45:45 | null | lhoestq | [
"generic discussion"
] | To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.
It seems that using `tensorflow_... | false |
1,280,161,436 | https://api.github.com/repos/huggingface/datasets/issues/4541 | https://github.com/huggingface/datasets/pull/4541 | 4,541 | Fix timestamp conversion from Pandas to Python datetime in streaming mode | closed | 2 | 2022-06-22T13:40:01 | 2022-06-22T16:39:27 | 2022-06-22T16:29:09 | lhoestq | [] | Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays.
However a timestamp array is always converted to datetime.datetime objects.
This created an inconsistency between streaming and non-streaming: e.g. the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.tim... | true |
1,280,142,942 | https://api.github.com/repos/huggingface/datasets/issues/4540 | https://github.com/huggingface/datasets/issues/4540 | 4,540 | Avoid splitting by `.py` for the file. | closed | 4 | 2022-06-22T13:26:55 | 2022-07-07T13:17:44 | 2022-07-07T13:17:44 | espoirMur | [
"good first issue"
] | https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272
Hello,
Thank you for this library.
I was using it and hit one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I am running the code to load a local module thi... | false |
1,279,779,829 | https://api.github.com/repos/huggingface/datasets/issues/4539 | https://github.com/huggingface/datasets/pull/4539 | 4,539 | Replace deprecated logging.warn with logging.warning | closed | 0 | 2022-06-22T08:32:29 | 2022-06-22T13:43:23 | 2022-06-22T12:51:51 | hugovk | [] | Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)).
* https://docs.python.org/3/library/log... | true |
1,279,409,786 | https://api.github.com/repos/huggingface/datasets/issues/4538 | https://github.com/huggingface/datasets/issues/4538 | 4,538 | Dataset Viewer issue for Pile of Law | closed | 5 | 2022-06-22T02:48:40 | 2022-06-27T07:30:23 | 2022-06-26T22:26:22 | Breakend | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines... | false |
1,279,144,310 | https://api.github.com/repos/huggingface/datasets/issues/4537 | https://github.com/huggingface/datasets/pull/4537 | 4,537 | Fix WMT dataset loading issue and docs update | closed | 2 | 2022-06-21T21:48:02 | 2022-06-24T07:05:43 | 2022-06-24T07:05:10 | khushmeeet | [] | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is with `tensorflow-text` not... | true |
1,278,734,727 | https://api.github.com/repos/huggingface/datasets/issues/4536 | https://github.com/huggingface/datasets/pull/4536 | 4,536 | Properly raise FileNotFound even if the dataset is private | closed | 1 | 2022-06-21T17:05:50 | 2022-06-28T10:46:51 | 2022-06-28T10:36:10 | lhoestq | [] | `tests/test_load.py::test_load_streaming_private_dataset` was failing because the hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNoteFoundError since it first checks for local files before checking the Hub.
... | true |
1,278,365,039 | https://api.github.com/repos/huggingface/datasets/issues/4535 | https://github.com/huggingface/datasets/pull/4535 | 4,535 | Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays` | closed | 5 | 2022-06-21T12:18:49 | 2022-06-27T16:25:09 | 2022-06-27T16:14:36 | alvarobartt | [] | Currently, even though the `batch_size` when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` doesn't have either the parameter `batch_size` to be propagated to the nested `FaissIndex.add_vectors` function or `*args, **kwargs`, so on, this PR ad... | true |
1,277,897,197 | https://api.github.com/repos/huggingface/datasets/issues/4534 | https://github.com/huggingface/datasets/pull/4534 | 4,534 | Add `tldr_news` dataset | closed | 2 | 2022-06-21T05:02:43 | 2022-06-23T14:33:54 | 2022-06-21T14:21:11 | JulesBelveze | [] | This PR aims at adding support for a news dataset: `tldr news`.
This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter. | true |
1,277,211,490 | https://api.github.com/repos/huggingface/datasets/issues/4533 | https://github.com/huggingface/datasets/issues/4533 | 4,533 | Timestamp not returned as datetime objects in streaming mode | closed | 0 | 2022-06-20T17:28:47 | 2022-06-22T16:29:09 | 2022-06-22T16:29:09 | lhoestq | [
"streaming"
] | As reported in (internal) https://github.com/huggingface/datasets-server/issues/397
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("ett", name="h2", split="test", streaming=True)
>>> d = next(iter(dataset))
>>> d['start']
Timestamp('2016-07-01 00:00:00')
```
while loading in non-... | false |
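The fix aligns streaming with the non-streaming behavior by converting the pandas timestamps; the conversion itself is a one-liner (shown here in isolation, not as the library's actual code):

```python
import pandas as pd

ts = pd.Timestamp("2016-07-01 00:00:00")
dt = ts.to_pydatetime()
print(type(dt))  # <class 'datetime.datetime'> — what non-streaming mode returns
```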
1,277,167,129 | https://api.github.com/repos/huggingface/datasets/issues/4532 | https://github.com/huggingface/datasets/pull/4532 | 4,532 | Add Video feature | closed | 3 | 2022-06-20T16:36:41 | 2022-11-10T16:59:51 | 2022-11-10T16:59:51 | nateraw | [] | The following adds a `Video` feature for encoding/decoding videos on the fly from in memory bytes. It uses my own `encoded-video` library which is basically `pytorchvideo`'s encoded video but with all the `torch` specific stuff stripped out. Because of that, and because the tool I used under the hood is not very mature... | true |
1,277,054,172 | https://api.github.com/repos/huggingface/datasets/issues/4531 | https://github.com/huggingface/datasets/issues/4531 | 4,531 | Dataset Viewer issue for CSV datasets | closed | 2 | 2022-06-20T14:56:24 | 2022-06-21T08:28:46 | 2022-06-21T08:28:27 | merveenoyan | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin
### Description
I'm populating CSV datasets [here](https://huggingface.co/scikit-learn), but the viewer is not enabled and looks for a dataset loading script; the datasets aren't in the processing queue either.
You can replicate the problem by sim... | false |
1,276,884,962 | https://api.github.com/repos/huggingface/datasets/issues/4530 | https://github.com/huggingface/datasets/pull/4530 | 4,530 | Add AudioFolder packaged loader | closed | 10 | 2022-06-20T12:54:02 | 2022-08-22T14:36:49 | 2022-08-22T14:20:40 | polinaeterna | [
"enhancement"
] | will close #3964
AudioFolder is almost identical to ImageFolder, except that inferring labels is not the default behavior (`drop_labels` is set to True in the config); the option of inferring them is preserved, though.
The weird thing happens with `test_data_files_with_metadata_and_archives` when `streaming` i... | true |
1,276,729,303 | https://api.github.com/repos/huggingface/datasets/issues/4529 | https://github.com/huggingface/datasets/issues/4529 | 4,529 | Ecoset | closed | 3 | 2022-06-20T10:39:34 | 2023-10-26T09:12:32 | 2023-10-04T18:19:52 | DiGyt | [
"dataset request"
] | ## Adding a Dataset
- **Name:** *Ecoset*
- **Description:** *https://www.kietzmannlab.org/ecoset/*
- **Paper:** *https://doi.org/10.1073/pnas.2011417118*
- **Data:** *https://codeocean.com/capsule/9570390/tree/v1*
- **Motivation:**
**Ecoset** was created as a clean and ecologically valid alternative to **Imagen... | false |
1,276,679,155 | https://api.github.com/repos/huggingface/datasets/issues/4528 | https://github.com/huggingface/datasets/issues/4528 | 4,528 | Memory leak when iterating a Dataset | closed | 5 | 2022-06-20T10:03:14 | 2022-09-12T08:51:39 | 2022-09-12T08:51:39 | NouamaneTazi | [
"bug"
## Describe the bug
It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop)
## Steps to reproduce the bug
```python
import gc
import logging
import time
import pyarrow
from datasets import load_dataset
from tqdm import trange
import os, psutil
logging.ba... | false |
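A hedged completion of the truncated measurement script above (the dataset name and reporting interval are stand-ins):

```python
import gc
import os

import psutil
from datasets import load_dataset
from tqdm import trange

ds = load_dataset("imdb", split="train")  # stand-in dataset
process = psutil.Process(os.getpid())

for i in trange(len(ds)):
    _ = ds[i]
    if i % 5000 == 0:
        gc.collect()  # per the report, RSS keeps growing even after collection
        print(f"RSS: {process.memory_info().rss / 1024**2:.1f} MiB")
```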
1,276,583,536 | https://api.github.com/repos/huggingface/datasets/issues/4527 | https://github.com/huggingface/datasets/issues/4527 | 4,527 | Dataset Viewer issue for vadis/sv-ident | closed | 1 | 2022-06-20T08:47:42 | 2022-06-21T16:42:46 | 2022-06-21T16:42:45 | albertvillanova | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/vadis/sv-ident
### Description
The dataset preview does not work:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
However, the dataset is streamable and works locally:
```python
In [1]: from dataset... | false |
1,276,580,185 | https://api.github.com/repos/huggingface/datasets/issues/4526 | https://github.com/huggingface/datasets/issues/4526 | 4,526 | split cache used when processing different split | open | 2 | 2022-06-20T08:44:58 | 2022-06-28T14:04:58 | null | gpucce | [
"bug"
## Describe the bug
```
ds1 = load_dataset('squad', split='validation')
ds2 = load_dataset('squad', split='train')
ds1 = ds1.map(some_function)
ds2 = ds2.map(some_function)
assert ds1 == ds2
```
This happens when ds1 and ds2 are created in a `pytorch_lightning.DataModule` through
```
class myDataModule:
... | false |
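Until the fingerprinting bug is resolved, forcing recomputation is a possible workaround; a sketch with a stand-in transform:

```python
from datasets import load_dataset

def some_function(example):  # stand-in for the real transform
    return example

ds1 = load_dataset("squad", split="validation")
ds2 = load_dataset("squad", split="train")

# load_from_cache_file=False bypasses the (wrongly shared) cache
ds1 = ds1.map(some_function, load_from_cache_file=False)
ds2 = ds2.map(some_function, load_from_cache_file=False)
```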
1,276,491,386 | https://api.github.com/repos/huggingface/datasets/issues/4525 | https://github.com/huggingface/datasets/issues/4525 | 4,525 | Out of memory error on workers while running Beam+Dataflow | closed | 10 | 2022-06-20T07:28:12 | 2024-10-09T16:09:50 | 2024-10-09T16:09:50 | albertvillanova | [
"bug"
] | ## Describe the bug
While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files).
Previously we ran the preprocessing for the "dev" config (only dev files) with success.
Train data files are larger than dev ones and apparently worker... | false |
1,275,909,186 | https://api.github.com/repos/huggingface/datasets/issues/4524 | https://github.com/huggingface/datasets/issues/4524 | 4,524 | Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException) | open | 2 | 2022-06-18T23:36:45 | 2022-06-21T00:38:20 | null | ddegenaro | [
"bug"
] | ## Describe the bug
When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packag... | false |
1,275,002,639 | https://api.github.com/repos/huggingface/datasets/issues/4523 | https://github.com/huggingface/datasets/pull/4523 | 4,523 | Update download url and improve card of `cats_vs_dogs` dataset | closed | 1 | 2022-06-17T12:59:44 | 2022-06-21T14:23:26 | 2022-06-21T14:13:08 | mariosasko | [] | Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card. | true |
1,274,929,328 | https://api.github.com/repos/huggingface/datasets/issues/4522 | https://github.com/huggingface/datasets/issues/4522 | 4,522 | Try to reduce the number of datasets that require manual download | open | 0 | 2022-06-17T11:42:03 | 2022-06-17T11:52:48 | null | severo | [] | > Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to β 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, w... | false |
1,274,919,437 | https://api.github.com/repos/huggingface/datasets/issues/4521 | https://github.com/huggingface/datasets/issues/4521 | 4,521 | Datasets method `.map` not hashing | closed | 3 | 2022-06-17T11:31:10 | 2022-08-04T12:08:16 | 2022-06-28T13:23:05 | sanchit-gandhi | [
"bug"
] | ## Describe the bug
The datasets `.map` method is not hashing, even with an empty no-op function.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# download 9MB dummy dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
def prepare_dataset(batch):
return(b... | false |
1,274,879,180 | https://api.github.com/repos/huggingface/datasets/issues/4520 | https://github.com/huggingface/datasets/issues/4520 | 4,520 | Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map` | closed | 2 | 2022-06-17T10:47:17 | 2022-06-28T14:47:17 | 2022-06-28T14:04:29 | sanchit-gandhi | [
"bug"
Dataclasses cannot be hashed, so functions that use them cannot be hashed or cached by the `.map` method. Dataclasses are used extensively in Transformers example scripts (cf. the [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)). Since... | false |
1,274,110,623 | https://api.github.com/repos/huggingface/datasets/issues/4519 | https://github.com/huggingface/datasets/pull/4519 | 4,519 | Create new sections for audio and vision in guides | closed | 2 | 2022-06-16T21:38:24 | 2022-07-07T15:36:37 | 2022-07-07T15:24:58 | stevhliu | [
"documentation"
] | This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - whi... | true |