| Column | Type | Range / stats |
|---|---|---|
| `id` | int64 | 599M – 3.29B |
| `url` | string | lengths 58–61 |
| `html_url` | string | lengths 46–51 |
| `number` | int64 | 1 – 7.72k |
| `title` | string | lengths 1–290 |
| `state` | string | 2 values (`open` / `closed`) |
| `comments` | int64 | 0 – 70 |
| `created_at` | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| `updated_at` | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| `closed_at` | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| `user_login` | string | lengths 3–26 |
| `labels` | list | lengths 0–4 |
| `body` | string | lengths 0–228k |
| `is_pull_request` | bool | 2 classes |
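Rows with this schema can be handled as plain Python records. Below is a minimal stdlib sketch (the `datasets` library is deliberately not required); the three sample rows copy values from the first records in this dump, with most columns omitted for brevity:

```python
from collections import Counter

# Three sample rows mirroring the schema above; values are taken from the
# first records in this dump, and most columns are omitted for brevity.
rows = [
    {"number": 5442, "state": "closed", "labels": ["enhancement"], "is_pull_request": False},
    {"number": 5441, "state": "open", "labels": [], "is_pull_request": True},
    {"number": 5440, "state": "closed", "labels": [], "is_pull_request": True},
]

# Tally the two categorical columns: issue state, and issue vs. pull request.
by_state = Counter(r["state"] for r in rows)
by_kind = Counter("pull_request" if r["is_pull_request"] else "issue" for r in rows)

print(by_state["closed"], by_state["open"])  # 2 1
print(by_kind["pull_request"], by_kind["issue"])  # 2 1
```

With the `datasets` library installed, the same records could instead be loaded via `load_dataset("json", data_files=...)` and filtered with `Dataset.filter`; the exact Hub repo id for this dump is not stated here, so it is left out of the example.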
**#5442 · OneDrive Integrations with HF Datasets** (issue · closed · 2 comments · labels: enhancement)
id 1,550,084,450 · Mohammed20201991 · created 2023-01-19T23:12:08 · updated 2023-02-24T16:17:51 · closed 2023-02-24T16:17:51
https://github.com/huggingface/datasets/issues/5442 · API: https://api.github.com/repos/huggingface/datasets/issues/5442
> ### Feature request First of all , I would like to thank all community who are developed DataSet storage and make it free available How to integrate our Onedrive account or any other possible storage clouds (like google drive,...) with the **HF** datasets section. For example, if I have **50GB** on my **Onedrive*...

**#5441 · resolving a weird tar extract issue** (PR · open · 4 comments)
id 1,548,417,594 · stas00 · created 2023-01-19T02:17:21 · updated 2023-01-20T16:49:22
https://github.com/huggingface/datasets/pull/5441 · API: https://api.github.com/repos/huggingface/datasets/issues/5441
> ok, every so often, I have been getting a strange failure on dataset install: ``` $ python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing No config specified, defaulting to: general-pmd-synthetic-testing/100.unique Downloading and prep...

**#5440 · Fix documentation about batch samplers** (PR · closed · 3 comments)
id 1,538,361,143 · thomasw21 · created 2023-01-18T17:04:27 · updated 2023-01-18T17:57:29 · closed 2023-01-18T17:50:04
https://github.com/huggingface/datasets/pull/5440 · API: https://api.github.com/repos/huggingface/datasets/issues/5440
(no body)

**#5439 · [dataset request] Add Common Voice 12.0** (issue · closed · 2 comments · labels: enhancement)
id 1,537,973,564 · MohammedRakib · created 2023-01-18T13:07:05 · updated 2023-07-21T14:26:10 · closed 2023-07-21T14:26:09
https://github.com/huggingface/datasets/issues/5439 · API: https://api.github.com/repos/huggingface/datasets/issues/5439
> ### Feature request Please add the common voice 12_0 datasets. Apart from English, a significant amount of audio-data has been added to the other minor-language datasets. ### Motivation The dataset link: https://commonvoice.mozilla.org/en/datasets

**#5438 · Update actions/checkout in CD Conda release** (PR · closed · 2 comments)
id 1,537,489,730 · albertvillanova · created 2023-01-18T06:53:15 · updated 2023-01-18T13:49:51 · closed 2023-01-18T13:42:49
https://github.com/huggingface/datasets/pull/5438 · API: https://api.github.com/repos/huggingface/datasets/issues/5438
> This PR updates the "checkout" GitHub Action to its latest version, as previous ones are deprecated: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
**#5437 · Can't load png dataset with 4 channel (RGBA)** (issue · closed · 3 comments)
id 1,536,837,144 · WiNE-iNEFF · created 2023-01-17T18:22:27 · updated 2023-01-18T20:20:15 · closed 2023-01-18T20:20:15
https://github.com/huggingface/datasets/issues/5437 · API: https://api.github.com/repos/huggingface/datasets/issues/5437
> I try to create dataset which contains about 9000 png images 64x64 in size, and they are all 4-channel (RGBA). When trying to use load_dataset() then a dataset is created from only 2 images. What exactly interferes I can not understand.![Screenshot_20230117_212213.jpg](https://user-images.githubusercontent.com/41611046...

**#5436 · Revert container image pin in CI benchmarks** (PR · closed · 2 comments)
id 1,536,633,173 · 0x2b3bfa0 · created 2023-01-17T15:59:50 · updated 2023-01-18T09:05:49 · closed 2023-01-18T06:29:06
https://github.com/huggingface/datasets/pull/5436 · API: https://api.github.com/repos/huggingface/datasets/issues/5436
> Closes #5433, reverts #5432, and also: * Uses [ghcr.io container images](https://cml.dev/doc/self-hosted-runners/#docker-images) for extra speed * Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead...

**#5435 · Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage** (issue · closed · 4 comments)
id 1,536,099,300 · DanielYang59 · created 2023-01-17T10:04:16 · updated 2023-01-19T09:56:03 · closed 2023-01-19T09:56:03
https://github.com/huggingface/datasets/issues/5435 · API: https://api.github.com/repos/huggingface/datasets/issues/5435
> ### Describe the bug In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states: > Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples cou...

**#5434 · sample_dataset module not found** (issue · closed · 3 comments)
id 1,536,090,042 · nickums · created 2023-01-17T09:57:54 · updated 2023-01-19T13:52:12 · closed 2023-01-19T07:55:11
https://github.com/huggingface/datasets/issues/5434 · API: https://api.github.com/repos/huggingface/datasets/issues/5434
(no body)

**#5433 · Support latest Docker image in CI benchmarks** (issue · closed · 3 comments · labels: enhancement)
id 1,536,017,901 · albertvillanova · created 2023-01-17T09:06:08 · updated 2023-01-18T06:29:08 · closed 2023-01-18T06:29:08
https://github.com/huggingface/datasets/issues/5433 · API: https://api.github.com/repos/huggingface/datasets/issues/5433
> Once we find out the root cause of: - #5431 we should revert the temporary pin on the Docker image version introduced by: - #5432
**#5432 · Fix CI benchmarks by temporarily pinning Docker image version** (PR · closed · 2 comments)
id 1,535,893,019 · albertvillanova · created 2023-01-17T07:15:31 · updated 2023-01-17T08:58:22 · closed 2023-01-17T08:51:17
https://github.com/huggingface/datasets/pull/5432 · API: https://api.github.com/repos/huggingface/datasets/issues/5432
> This PR fixes CI benchmarks, by temporarily pinning Docker image version, instead of "latest" tag. It also updates deprecated `cml-send-comment` command and using `cml comment create` instead. Fix #5431.

**#5431 · CI benchmarks are broken: Unknown arguments: runnerPath, path** (issue · closed · 0 comments · labels: maintenance)
id 1,535,862,621 · albertvillanova · created 2023-01-17T06:49:57 · updated 2023-01-18T06:33:24 · closed 2023-01-17T08:51:18
https://github.com/huggingface/datasets/issues/5431 · API: https://api.github.com/repos/huggingface/datasets/issues/5431
> Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161 ``` Unknown arguments: runnerPath, path ``` Stack trace: ``` 100%|██████████| 500/500 [00:01<00:00, 338.98ba/s] Updating lock file 'dvc.lock' To track the changes ...

**#5430 · Support Apache Beam >= 2.44.0** (issue · closed · 1 comment · labels: enhancement)
id 1,535,856,503 · albertvillanova · created 2023-01-17T06:42:12 · updated 2024-02-06T19:24:21 · closed 2024-02-06T19:24:21
https://github.com/huggingface/datasets/issues/5430 · API: https://api.github.com/repos/huggingface/datasets/issues/5430
> Once we find out the root cause of: - #5426 we should revert the temporary pin on apache-beam introduced by: - #5429

**#5429 · Fix CI by temporarily pinning apache-beam < 2.44.0** (PR · closed · 1 comment)
id 1,535,192,687 · albertvillanova · created 2023-01-16T16:20:09 · updated 2023-01-16T16:51:42 · closed 2023-01-16T16:49:03
https://github.com/huggingface/datasets/pull/5429 · API: https://api.github.com/repos/huggingface/datasets/issues/5429
> Temporarily pin apache-beam < 2.44.0 Fix #5426.

**#5428 · Load/Save FAISS index using fsspec** (issue · closed · 2 comments · labels: enhancement)
id 1,535,166,139 · Dref360 · created 2023-01-16T16:08:12 · updated 2023-03-27T15:18:22 · closed 2023-03-27T15:18:22
https://github.com/huggingface/datasets/issues/5428 · API: https://api.github.com/repos/huggingface/datasets/issues/5428
> ### Feature request From what I understand `faiss` already support this [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support) I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`. ### Motivation In...
**#5427 · Unable to download dataset id_clickbait** (issue · closed · 1 comment)
id 1,535,162,889 · ilos-vigil · created 2023-01-16T16:05:36 · updated 2023-01-18T09:51:28 · closed 2023-01-18T09:25:19
https://github.com/huggingface/datasets/issues/5427 · API: https://api.github.com/repos/huggingface/datasets/issues/5427
> ### Describe the bug I tried to download dataset `id_clickbait`, but receive this error message. ``` FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip ``` When i open the link using browser, i got this XML data. ```xml <?xml versi...

**#5426 · CI tests are broken: SchemaInferenceError** (issue · closed · 0 comments · labels: bug)
id 1,535,158,555 · albertvillanova · created 2023-01-16T16:02:07 · updated 2023-06-02T06:40:32 · closed 2023-01-16T16:49:04
https://github.com/huggingface/datasets/issues/5426 · API: https://api.github.com/repos/huggingface/datasets/issues/5426
> CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004 ``` FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `feat...

**#5425 · Sort on multiple keys with datasets.Dataset.sort()** (issue · closed · 10 comments · labels: enhancement, good first issue)
id 1,534,581,850 · rocco-fortuna · created 2023-01-16T09:22:26 · updated 2023-02-24T16:15:11 · closed 2023-02-24T16:15:11
https://github.com/huggingface/datasets/issues/5425 · API: https://api.github.com/repos/huggingface/datasets/issues/5425
> ### Feature request From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1 `sort()` does not preserve ordering, and it does not support sorting on multiple columns, nor a key function. The suggested solution: > ... having something similar to panda...

**#5424 · When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset?** (issue · closed · 1 comment)
id 1,534,394,756 · macabdul9 · created 2023-01-16T06:54:28 · updated 2023-02-24T16:19:00 · closed 2023-02-24T16:19:00
https://github.com/huggingface/datasets/issues/5424 · API: https://api.github.com/repos/huggingface/datasets/issues/5424
> ### Describe the bug I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. Although the ReadInstruction is being applied correctly and I was expecting it to be `DatasetDict` but instead it is a list of `Dataset`. ### Steps to reproduce the bug Steps to reproduc...

**#5422 · Datasets load error for saved github issues** (issue · open · 7 comments)
id 1,533,385,239 · folterj · created 2023-01-14T17:29:38 · updated 2023-09-14T11:39:57
https://github.com/huggingface/datasets/issues/5422 · API: https://api.github.com/repos/huggingface/datasets/issues/5422
> ### Describe the bug Loading a previously downloaded & saved dataset as described in the HuggingFace course: issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") Gives this error: datasets.builder.DatasetGenerationError: An error occurred while generating the dataset...
**#5421 · Support case-insensitive Hub dataset name in load_dataset** (issue · closed · 1 comment · labels: enhancement)
id 1,532,278,307 · severo · created 2023-01-13T13:07:07 · updated 2023-01-13T20:12:32 · closed 2023-01-13T20:12:32
https://github.com/huggingface/datasets/issues/5421 · API: https://api.github.com/repos/huggingface/datasets/issues/5421
> ### Feature request The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue. Ideally, we could load the glue dataset using the following: ``` from d...

**#5420 · ci: 🎡 remove two obsolete issue templates** (PR · closed · 3 comments)
id 1,532,265,742 · severo · created 2023-01-13T12:58:43 · updated 2023-01-13T13:36:00 · closed 2023-01-13T13:29:01
https://github.com/huggingface/datasets/pull/5420 · API: https://api.github.com/repos/huggingface/datasets/issues/5420
> add-dataset is not needed anymore since the "canonical" datasets are on the Hub. And dataset-viewer is managed within the datasets-server project. See https://github.com/huggingface/datasets/issues/new/choose <img width="1245" alt="Capture d’écran 2023-01-13 à 13 59 58" src="https://user-images.githubuserconten...

**#5419 · label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator** (issue · closed · 2 comments)
id 1,531,999,850 · CreatixEA · created 2023-01-13T09:40:07 · updated 2023-07-21T14:27:08 · closed 2023-07-21T14:27:08
https://github.com/huggingface/datasets/issues/5419 · API: https://api.github.com/repos/huggingface/datasets/issues/5419
> ### Describe the bug When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using the `transformers.DataCollator` the default column name is `label` if binary or `label_ids` if multi-class problem. It is required to rename the column...

**#5418 · Add ProgressBar for `to_parquet`** (issue · closed · 4 comments · labels: enhancement)
id 1,530,111,184 · zanussbaum · created 2023-01-12T05:06:20 · updated 2023-01-24T18:18:24 · closed 2023-01-24T18:18:24
https://github.com/huggingface/datasets/issues/5418 · API: https://api.github.com/repos/huggingface/datasets/issues/5418
> ### Feature request Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works. ### Motivation It's a bit frustrating to not know how long a dataset will take to write to file and if it's stuck or not without a progress bar ### Your contribution Sure I can help if needed

**#5416 · Fix RuntimeError: Sharding is ambiguous for this dataset** (PR · closed · 4 comments)
id 1,526,988,113 · albertvillanova · created 2023-01-10T08:43:19 · updated 2023-01-18T17:12:17 · closed 2023-01-18T14:09:02
https://github.com/huggingface/datasets/pull/5416 · API: https://api.github.com/repos/huggingface/datasets/issues/5416
> This PR fixes the RuntimeError: Sharding is ambiguous for this dataset. The error for ambiguous sharding will be raised only if num_proc > 1. Fix #5415, fix #5414. Fix https://huggingface.co/datasets/ami/discussions/3.
**#5415 · RuntimeError: Sharding is ambiguous for this dataset** (issue · closed · 0 comments)
id 1,526,904,861 · albertvillanova · created 2023-01-10T07:36:11 · updated 2023-01-18T14:09:04 · closed 2023-01-18T14:09:03
https://github.com/huggingface/datasets/issues/5415 · API: https://api.github.com/repos/huggingface/datasets/issues/5415
> ### Describe the bug When loading some datasets, a RuntimeError is raised. For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3 ``` .../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) ...

**#5414 · Sharding error with Multilingual LibriSpeech** (issue · closed · 4 comments)
id 1,525,733,818 · Nithin-Holla · created 2023-01-09T14:45:31 · updated 2023-01-18T14:09:04 · closed 2023-01-18T14:09:04
https://github.com/huggingface/datasets/issues/5414 · API: https://api.github.com/repos/huggingface/datasets/issues/5414
> ### Describe the bug Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace: ``` Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/...

**#5413 · concatenate_datasets fails when two dataset with shards > 1 and unequal shard numbers** (issue · closed · 1 comment)
id 1,524,591,837 · ZeguanXiao · created 2023-01-08T17:01:52 · updated 2023-01-26T09:27:21 · closed 2023-01-26T09:27:21
https://github.com/huggingface/datasets/issues/5413 · API: https://api.github.com/repos/huggingface/datasets/issues/5413
> ### Describe the bug When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails: ``` File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets return _concatenate_map_style_data...

**#5412 · load_dataset() cannot find dataset_info.json with multiple training runs in parallel** (issue · closed · 4 comments)
id 1,524,250,269 · mtoles · created 2023-01-08T00:44:32 · updated 2023-01-19T20:28:43 · closed 2023-01-19T20:28:43
https://github.com/huggingface/datasets/issues/5412 · API: https://api.github.com/repos/huggingface/datasets/issues/5412
> ### Describe the bug I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error. If there is a workaround to ignore the cache I think that would ...

**#5411 · Update docs of S3 filesystem with async aiobotocore** (PR · closed · 2 comments)
id 1,523,297,786 · maheshpec · created 2023-01-06T23:19:17 · updated 2023-01-18T11:18:59 · closed 2023-01-18T11:12:04
https://github.com/huggingface/datasets/pull/5411 · API: https://api.github.com/repos/huggingface/datasets/issues/5411
> [s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf). Updating documentation to use `AioSession` while using s3fs for download manager as well as working with datasets
**#5410 · Map-style Dataset to IterableDataset** (PR · closed · 22 comments)
id 1,521,168,032 · lhoestq · created 2023-01-05T18:12:17 · updated 2023-02-01T18:11:45 · closed 2023-02-01T16:36:01
https://github.com/huggingface/datasets/pull/5410 · API: https://api.github.com/repos/huggingface/datasets/issues/5410
> Added `ds.to_iterable()` to get an iterable dataset from a map-style arrow dataset. It also has a `num_shards` argument to split the dataset before converting to an iterable dataset. Sharding is important to enable efficient shuffling and parallel loading of iterable datasets. TODO: - [x] tests - [x] docs Fi...

**#5409 · Fix deprecation warning when use_auth_token passed to download_and_prepare** (PR · closed · 2 comments)
id 1,520,374,219 · albertvillanova · created 2023-01-05T09:10:58 · updated 2023-01-06T11:06:16 · closed 2023-01-06T10:59:13
https://github.com/huggingface/datasets/pull/5409 · API: https://api.github.com/repos/huggingface/datasets/issues/5409
> The `DatasetBuilder.download_and_prepare` argument `use_auth_token` was deprecated in: - #5302 However, `use_auth_token` is still passed to `download_and_prepare` in our built-in `io` readers (csv, json, parquet,...). This PR fixes it, so that no deprecation warning is raised. Fix #5407.

**#5408 · dataset map function could not be hash properly** (issue · closed · 2 comments)
id 1,519,890,752 · Tungway1990 · created 2023-01-05T01:59:59 · updated 2023-01-06T13:22:19 · closed 2023-01-06T13:22:18
https://github.com/huggingface/datasets/issues/5408 · API: https://api.github.com/repos/huggingface/datasets/issues/5408
> ### Describe the bug I follow the [blog post](https://huggingface.co/blog/fine-tune-whisper#building-a-demo) to finetune a Cantonese transcribe model. When using map function to prepare dataset, following warning pop out: `common_voice = common_voice.map(prepare_dataset, remove_...

**#5407 · Datasets.from_sql() generates deprecation warning** (issue · closed · 1 comment)
id 1,519,797,345 · msummerfield · created 2023-01-05T00:43:17 · updated 2023-01-06T10:59:14 · closed 2023-01-06T10:59:14
https://github.com/huggingface/datasets/issues/5407 · API: https://api.github.com/repos/huggingface/datasets/issues/5407
> ### Describe the bug Calling `Datasets.from_sql()` generates a warning: `.../site-packages/datasets/builder.py:712: FutureWarning: 'use_auth_token' was deprecated in version 2.7.1 and will be removed in 3.0.0. Pass 'use_auth_token' to the initializer/'load_dataset_builder' instead.` ### Steps to reproduce the ...

**#5406 · [2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str`** (issue · open · 11 comments)
id 1,519,140,544 · lhoestq · created 2023-01-04T15:10:04 · updated 2023-06-21T18:45:38
https://github.com/huggingface/datasets/issues/5406 · API: https://api.github.com/repos/huggingface/datasets/issues/5406
> `datasets` 2.6.1 and 2.7.0 started to stop supporting datasets like IMDB, ConLL or MNIST datasets. When loading a dataset using 2.6.1 or 2.7.0, you may this error when loading certain datasets: ```python TypeError: can only concatenate str (not "int") to str ``` This is because we started to update the metadat...
**#5405 · size_in_bytes the same for all splits** (issue · open · 1 comment)
id 1,517,879,386 · Breakend · created 2023-01-03T20:25:48 · updated 2023-01-04T09:22:59
https://github.com/huggingface/datasets/issues/5405 · API: https://api.github.com/repos/huggingface/datasets/issues/5405
> ### Describe the bug Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example: ``` >>> from datasets import load_da...

**#5404 · Better integration of BIG-bench** (issue · open · 1 comment · labels: enhancement)
id 1,517,566,331 · albertvillanova · created 2023-01-03T15:37:57 · updated 2023-02-09T20:30:26
https://github.com/huggingface/datasets/issues/5404 · API: https://api.github.com/repos/huggingface/datasets/issues/5404
> ### Feature request Ideally, it would be nice to have a maintained PyPI package for `bigbench`. ### Motivation We'd like to allow anyone to access, explore and use any task. ### Your contribution @lhoestq has opened an issue in their repo: - https://github.com/google/BIG-bench/issues/906

**#5403 · Replace one letter import in docs** (PR · closed · 4 comments)
id 1,517,466,492 · MKhalusova · created 2023-01-03T14:26:32 · updated 2023-01-03T15:06:18 · closed 2023-01-03T14:59:01
https://github.com/huggingface/datasets/pull/5403 · API: https://api.github.com/repos/huggingface/datasets/issues/5403
> This PR updates a code example for consistency across the docs based on [feedback from this comment](https://github.com/huggingface/transformers/pull/20925/files/9fda31634d203a47d3212e4e8d43d3267faf9808#r1058769500): "In terms of style we usually stay away from one-letter imports like this (even if the community use...

**#5402 · Missing state.json when creating a cloud dataset using a dataset_builder** (issue · open · 3 comments)
id 1,517,409,429 · danielfleischer · created 2023-01-03T13:39:59 · updated 2023-01-04T17:23:57
https://github.com/huggingface/datasets/issues/5402 · API: https://api.github.com/repos/huggingface/datasets/issues/5402
> ### Describe the bug Using `load_dataset_builder` to create a builder, run `download_and_prepare` do upload it to S3. However when trying to load it, there are missing `state.json` files. Complete example: ```python from aiobotocore.session import AioSession as Session from datasets import load_from_disk, load_da...

**#5401 · Support Dataset conversion from/to Spark** (PR · open · 4 comments)
id 1,517,160,935 · albertvillanova · created 2023-01-03T09:57:40 · updated 2023-01-05T14:21:33
https://github.com/huggingface/datasets/pull/5401 · API: https://api.github.com/repos/huggingface/datasets/issues/5401
> This PR implements Spark integration by supporting `Dataset` conversion from/to Spark `DataFrame`.
**#5400 · Support streaming datasets with os.path.exists and Path.exists** (PR · closed · 2 comments)
id 1,517,032,972 · albertvillanova · created 2023-01-03T07:42:37 · updated 2023-01-06T10:42:44 · closed 2023-01-06T10:35:44
https://github.com/huggingface/datasets/pull/5400 · API: https://api.github.com/repos/huggingface/datasets/issues/5400
> Support streaming datasets with `os.path.exists` and `pathlib.Path.exists`.

**#5399 · Got disconnected from remote data host. Retrying in 5sec [2/20]** (issue · closed · 0 comments)
id 1,515,548,427 · alhuri · created 2023-01-01T13:00:11 · updated 2023-01-02T07:21:52 · closed 2023-01-02T07:21:52
https://github.com/huggingface/datasets/issues/5399 · API: https://api.github.com/repos/huggingface/datasets/issues/5399
> ### Describe the bug While trying to upload my image dataset of a CSV file type to huggingface by running the below code. The dataset consists of a little over 100k of image-caption pairs ### Steps to reproduce the bug ``` df = pd.read_csv('x.csv', encoding='utf-8-sig') features = Features({ 'link': Ima...

**#5398 · Unpin pydantic** (issue · closed · 0 comments)
id 1,514,425,231 · albertvillanova · created 2022-12-30T10:37:31 · updated 2022-12-30T10:43:41 · closed 2022-12-30T10:43:41
https://github.com/huggingface/datasets/issues/5398 · API: https://api.github.com/repos/huggingface/datasets/issues/5398
> Once `pydantic` fixes their issue in their 1.10.3 version, unpin it. See issue: - #5394 See temporary fix: - #5395

**#5397 · Unpin pydantic test dependency** (PR · closed · 2 comments)
id 1,514,412,246 · albertvillanova · created 2022-12-30T10:22:09 · updated 2022-12-30T10:53:11 · closed 2022-12-30T10:43:40
https://github.com/huggingface/datasets/pull/5397 · API: https://api.github.com/repos/huggingface/datasets/issues/5397
> Once pydantic-1.10.3 has been yanked, we can unpin it: https://pypi.org/project/pydantic/1.10.3/ See reply by pydantic team https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367819807 ``` v1.10.3 has been yanked. ``` in response to spacy request: https://github.com/pydantic/pydantic/issues/4885#issu...

**#5396 · Fix checksum verification** (PR · closed · 7 comments)
id 1,514,002,934 · daskol · created 2022-12-29T19:45:17 · updated 2023-02-13T11:11:22 · closed 2023-02-13T11:11:22
https://github.com/huggingface/datasets/pull/5396 · API: https://api.github.com/repos/huggingface/datasets/issues/5396
> Expected checksum was verified against checksum dict (not checksum).
**#5395 · Temporarily pin pydantic test dependency** (PR · closed · 3 comments)
id 1,513,997,335 · albertvillanova · created 2022-12-29T19:34:19 · updated 2022-12-30T06:36:57 · closed 2022-12-29T21:00:26
https://github.com/huggingface/datasets/pull/5395 · API: https://api.github.com/repos/huggingface/datasets/issues/5395
> Temporarily pin `pydantic` until a permanent solution is found. Fix #5394.

**#5394 · CI error: TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'** (issue · closed · 2 comments)
id 1,513,976,229 · albertvillanova · created 2022-12-29T18:58:44 · updated 2022-12-30T10:40:51 · closed 2022-12-29T21:00:27
https://github.com/huggingface/datasets/issues/5394 · API: https://api.github.com/repos/huggingface/datasets/issues/5394
> ### Describe the bug While installing the dependencies, the CI raises a TypeError: ``` Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/runpy.py", line 183, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/opt/hoste...

**#5393 · Finish deprecating the fs argument** (PR · closed · 6 comments)
id 1,512,908,613 · dconathan · created 2022-12-28T15:33:17 · updated 2023-01-18T12:42:33 · closed 2023-01-18T12:35:32
https://github.com/huggingface/datasets/pull/5393 · API: https://api.github.com/repos/huggingface/datasets/issues/5393
> See #5385 for some discussion on this The `fs=` arg was depcrecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in `2.8.0` (to be removed in `3.0.0`). There are a few other places where the `fs=` arg was still used (functions/methods in `datasets.info` and `datasets.load`). This PR adds a similar beha...

**#5392 · Fix Colab notebook link** (PR · closed · 2 comments)
id 1,512,712,529 · albertvillanova · created 2022-12-28T11:44:53 · updated 2023-01-03T15:36:14 · closed 2023-01-03T15:27:31
https://github.com/huggingface/datasets/pull/5392 · API: https://api.github.com/repos/huggingface/datasets/issues/5392
> Fix notebook link to open in Colab.

**#5391 · Whisper Event - RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it]** (issue · closed · 2 comments)
id 1,510,350,400 · catswithbats · created 2022-12-25T15:17:14 · updated 2023-07-21T14:29:47 · closed 2023-07-21T14:29:47
https://github.com/huggingface/datasets/issues/5391 · API: https://api.github.com/repos/huggingface/datasets/issues/5391
> Done in a VM with a GPU (Ubuntu) following the [Whisper Event - PYTHON](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#python-script) instructions. Attempted using [RuntimeError: he size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1...
**#5390 · Error when pushing to the CI hub** (issue · closed · 5 comments)
id 1,509,357,553 · severo · created 2022-12-23T13:36:37 · updated 2022-12-23T20:29:02 · closed 2022-12-23T20:29:02
https://github.com/huggingface/datasets/issues/5390 · API: https://api.github.com/repos/huggingface/datasets/issues/5390
> ### Describe the bug Note that it's a special case where the Hub URL is "https://hub-ci.huggingface.co", which does not appear if we do the same on the Hub (https://huggingface.co). The call to `dataset.push_to_hub(` fails: ``` Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████...

**#5389 · Fix link in `load_dataset` docstring** (PR · closed · 6 comments)
id 1,509,348,626 · mariosasko · created 2022-12-23T13:26:31 · updated 2023-01-25T19:00:43 · closed 2023-01-24T16:33:38
https://github.com/huggingface/datasets/pull/5389 · API: https://api.github.com/repos/huggingface/datasets/issues/5389
> Fix https://github.com/huggingface/datasets/issues/5387, fix https://github.com/huggingface/datasets/issues/4566

**#5388 · Getting Value Error while loading a dataset..** (issue · closed · 4 comments)
id 1,509,042,348 · valmetisrinivas · created 2022-12-23T08:16:43 · updated 2022-12-29T08:36:33 · closed 2022-12-27T17:59:09
https://github.com/huggingface/datasets/issues/5388 · API: https://api.github.com/repos/huggingface/datasets/issues/5388
> ### Describe the bug I am trying to load a dataset using Hugging Face Datasets load_dataset method. I am getting the value error as show below. Can someone help with this? I am using Windows laptop and Google Colab notebook. ``` WARNING:datasets.builder:Using custom data configuration default-a1d9e8eaedd958cd ---...

**#5387 · Missing documentation page : improve-performance** (issue · closed · 1 comment)
id 1,508,740,177 · astariul · created 2022-12-23T01:12:57 · updated 2023-01-24T16:33:40 · closed 2023-01-24T16:33:40
https://github.com/huggingface/datasets/issues/5387 · API: https://api.github.com/repos/huggingface/datasets/issues/5387
> ### Describe the bug Trying to access https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/cache#improve-performance, the page is missing. The link is in here : https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/loading_methods#datasets.load_dataset.keep_in_memory ### Steps to reproduce t...

**#5386 · `max_shard_size` in `datasets.push_to_hub()` breaks with large files** (issue · closed · 2 comments)
id 1,508,592,918 · salieri · created 2022-12-22T21:50:58 · updated 2022-12-26T23:45:51 · closed 2022-12-26T23:45:51
https://github.com/huggingface/datasets/issues/5386 · API: https://api.github.com/repos/huggingface/datasets/issues/5386
> ### Describe the bug `max_shard_size` parameter for `datasets.push_to_hub()` works unreliably with large files, generating shard files that are way past the specified limit. In my private dataset, which contains unprocessed images of all sizes (up to `~100MB` per file), I've encountered cases where `max_shard_siz...
**#5385 · Is `fs=` deprecated in `load_from_disk()` as well?** (issue · closed · 3 comments)
id 1,508,535,532 · dconathan · created 2022-12-22T21:00:45 · updated 2023-01-23T10:50:05 · closed 2023-01-23T10:50:04
https://github.com/huggingface/datasets/issues/5385 · API: https://api.github.com/repos/huggingface/datasets/issues/5385
> ### Describe the bug The `fs=` argument was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in favor of automagically figuring it out via fsspec: https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L1339-L1340 Is there a reason the...

**#5384 · Handle 0-dim tensors in `cast_to_python_objects`** (PR · closed · 2 comments)
id 1,508,152,598 · mariosasko · created 2022-12-22T16:15:30 · updated 2023-01-13T16:10:15 · closed 2023-01-13T16:00:52
https://github.com/huggingface/datasets/pull/5384 · API: https://api.github.com/repos/huggingface/datasets/issues/5384
> Fix #5229

**#5383 · IterableDataset missing column_names, differs from Dataset interface** (issue · closed · 6 comments · labels: enhancement, good first issue)
id 1,507,293,968 · iceboundflame · created 2022-12-22T05:27:02 · updated 2023-03-13T19:03:33 · closed 2023-03-13T19:03:33
https://github.com/huggingface/datasets/issues/5383 · API: https://api.github.com/repos/huggingface/datasets/issues/5383
> ### Describe the bug The documentation on [Stream](https://huggingface.co/docs/datasets/v1.18.2/stream.html) seems to imply that IterableDataset behaves just like a Dataset. However, examples like ``` dataset.map(augment_data, batched=True, remove_columns=dataset.column_names, ...) ``` will not work because `.colu...

**#5382 · Raise from disconnect error in xopen** (PR · closed · 3 comments)
id 1,504,788,691 · lhoestq · created 2022-12-20T15:52:44 · updated 2023-01-26T09:51:13 · closed 2023-01-26T09:42:45
https://github.com/huggingface/datasets/pull/5382 · API: https://api.github.com/repos/huggingface/datasets/issues/5382
> this way we can know the cause of the disconnect related to https://github.com/huggingface/datasets/issues/5374

**#5381 · Wrong URL for the_pile dataset** (issue · closed · 1 comment)
id 1,504,498,387 · LeoGrin · created 2022-12-20T12:40:14 · updated 2023-02-15T16:24:57 · closed 2023-02-15T16:24:57
https://github.com/huggingface/datasets/issues/5381 · API: https://api.github.com/repos/huggingface/datasets/issues/5381
> ### Describe the bug When trying to load `the_pile` dataset from the library, I get a `FileNotFound` error. ### Steps to reproduce the bug Steps to reproduce: Run: ``` from datasets import load_dataset dataset = load_dataset("the_pile") ``` I get the output: "name": "FileNotFoundError", "message...
1,504,404,043
https://api.github.com/repos/huggingface/datasets/issues/5380
https://github.com/huggingface/datasets/issues/5380
5,380
Improve dataset `.skip()` speed in streaming mode
open
10
2022-12-20T11:25:23
2023-03-08T10:47:12
null
versae
[ "enhancement", "good second issue" ]
### Feature request Add extra information to the `dataset_infos.json` file to include the number of samples/examples in each shard, for example in a new field `num_examples` alongside `num_bytes`. The `.skip()` function could use this information to ignore the download of a shard when in streaming mode, which AFAICT...
false
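The shard-level skipping idea in the request above can be sketched in plain Python. This is a hypothetical helper under the assumption that per-shard `num_examples` counts are available; the proposed `dataset_infos.json` field does not exist yet:

```python
def shards_to_skip(shard_sizes, n_skip):
    """Given per-shard example counts, return how many whole shards can be
    skipped without downloading, plus the offset into the next shard."""
    full_shards = 0
    for size in shard_sizes:
        if n_skip >= size:
            n_skip -= size
            full_shards += 1
        else:
            break
    return full_shards, n_skip
```

For example, with shards of 10 examples each, skipping 25 examples would let the loader avoid downloading the first two shards entirely and start 5 examples into the third.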
1,504,010,639
https://api.github.com/repos/huggingface/datasets/issues/5379
https://github.com/huggingface/datasets/pull/5379
5,379
feat: depth estimation dataset guide.
closed
8
2022-12-20T05:32:11
2023-01-13T12:30:31
2023-01-13T12:23:34
sayakpaul
[]
This PR adds a guide for prepping datasets for depth estimation. PR to add documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22
true
1,503,887,508
https://api.github.com/repos/huggingface/datasets/issues/5378
https://github.com/huggingface/datasets/issues/5378
5,378
The dataset "the_pile", subset "enron_emails" , load_dataset() failure
closed
1
2022-12-20T02:19:13
2022-12-20T07:52:54
2022-12-20T07:52:54
shaoyuta
[]
### Describe the bug When run "datasets.load_dataset("the_pile","enron_emails")" failure ![image](https://user-images.githubusercontent.com/52023469/208565302-cfab7b89-0b97-4fa6-a5ba-c11b0b629b1a.png) ### Steps to reproduce the bug Run below code in python cli: >>> import datasets >>> datasets.load_dataset(...
false
1,503,477,833
https://api.github.com/repos/huggingface/datasets/issues/5377
https://github.com/huggingface/datasets/pull/5377
5,377
Add a parallel implementation of to_tf_dataset()
closed
32
2022-12-19T19:40:27
2023-01-25T16:28:44
2023-01-25T16:21:40
Rocketknight1
[]
Hey all! Here's a first draft of the PR to add a multiprocessing implementation for `to_tf_dataset()`. It worked in some quick testing for me, but obviously I need to do some much more rigorous testing/benchmarking, and add some proper library tests. The core idea is that we do everything using `multiprocessing` and...
true
1,502,730,559
https://api.github.com/repos/huggingface/datasets/issues/5376
https://github.com/huggingface/datasets/pull/5376
5,376
set dev version
closed
1
2022-12-19T10:56:56
2022-12-19T11:01:55
2022-12-19T10:57:16
lhoestq
[]
null
true
1,502,720,404
https://api.github.com/repos/huggingface/datasets/issues/5375
https://github.com/huggingface/datasets/pull/5375
5,375
Release: 2.8.0
closed
1
2022-12-19T10:48:26
2022-12-19T10:55:43
2022-12-19T10:53:15
lhoestq
[]
null
true
1,501,872,945
https://api.github.com/repos/huggingface/datasets/issues/5374
https://github.com/huggingface/datasets/issues/5374
5,374
Using too many threads results in: Got disconnected from remote data host. Retrying in 5sec
closed
7
2022-12-18T11:38:58
2023-07-24T15:23:07
2023-07-24T15:23:07
Muennighoff
[]
### Describe the bug `streaming_download_manager` seems to disconnect if too many runs access the same underlying dataset 🧐 The code works fine for me if I have ~100 runs in parallel, but disconnects once scaling to 200. Possibly related: - https://github.com/huggingface/datasets/pull/3100 - https://github.com/...
false
1,501,484,197
https://api.github.com/repos/huggingface/datasets/issues/5373
https://github.com/huggingface/datasets/pull/5373
5,373
Simplify skipping
closed
1
2022-12-17T17:23:52
2022-12-18T21:43:31
2022-12-18T21:40:21
Muennighoff
[]
Was hoping to find a way to speed up the skipping as I'm running into bottlenecks skipping 100M examples on C4 (it takes 12 hours to skip), but didn't find anything better than this small change :( Maybe there's a way to directly skip whole shards to speed it up? 🧐
true
1,501,377,802
https://api.github.com/repos/huggingface/datasets/issues/5372
https://github.com/huggingface/datasets/pull/5372
5,372
Fix streaming pandas.read_excel
closed
2
2022-12-17T12:58:52
2023-01-06T11:50:58
2023-01-06T11:43:37
albertvillanova
[]
This PR fixes `xpandas_read_excel`: - Support passing a path string, besides a file-like object - Support passing `use_auth_token` - First assumes the host server supports HTTP range requests; only if a ValueError is thrown (Cannot seek streaming HTTP file), then it preserves previous behavior (see [#3355](https://g...
true
1,501,369,036
https://api.github.com/repos/huggingface/datasets/issues/5371
https://github.com/huggingface/datasets/issues/5371
5,371
Add a robustness benchmark dataset for vision
open
1
2022-12-17T12:35:13
2022-12-20T06:21:41
null
sayakpaul
[ "dataset request" ]
### Name ImageNet-C ### Paper Benchmarking Neural Network Robustness to Common Corruptions and Perturbations ### Data https://github.com/hendrycks/robustness ### Motivation It's a known fact that vision models are brittle when they meet with slightly corrupted and perturbed data. This is also corre...
false
1,500,622,276
https://api.github.com/repos/huggingface/datasets/issues/5369
https://github.com/huggingface/datasets/pull/5369
5,369
Distributed support
closed
11
2022-12-16T17:43:47
2023-07-25T12:00:31
2023-01-16T13:33:32
lhoestq
[]
To split your dataset across your training nodes, you can use the new [`datasets.distributed.split_dataset_by_node`]: ```python import os from datasets.distributed import split_dataset_by_node ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"])) ``` This wor...
true
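Under the hood, a contiguous rank-based split of a map-style dataset can be sketched as follows. This is a simplified re-implementation of the idea, not the library's actual code:

```python
def node_slice(num_examples, rank, world_size):
    """Return the [start, end) index range assigned to one node so that
    all nodes together cover every example exactly once."""
    div, mod = divmod(num_examples, world_size)
    start = rank * div + min(rank, mod)
    end = start + div + (1 if rank < mod else 0)
    return start, end
```

The first `num_examples % world_size` ranks each receive one extra example, so the split stays balanced to within one example per node.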
1,500,322,973
https://api.github.com/repos/huggingface/datasets/issues/5368
https://github.com/huggingface/datasets/pull/5368
5,368
Align remove columns behavior and input dict mutation in `map` with previous behavior
closed
1
2022-12-16T14:28:47
2022-12-16T16:28:08
2022-12-16T16:25:12
mariosasko
[]
Align the `remove_columns` behavior and input dict mutation in `map` with the behavior before https://github.com/huggingface/datasets/pull/5252.
true
1,499,174,749
https://api.github.com/repos/huggingface/datasets/issues/5367
https://github.com/huggingface/datasets/pull/5367
5,367
Fix remove columns from lazy dict
closed
1
2022-12-15T22:04:12
2022-12-15T22:27:53
2022-12-15T22:24:50
lhoestq
[]
This was introduced in https://github.com/huggingface/datasets/pull/5252 and causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597 Basically this code should return a dataset with only one column: `...
true
1,498,530,851
https://api.github.com/repos/huggingface/datasets/issues/5366
https://github.com/huggingface/datasets/pull/5366
5,366
ExamplesIterable fixes
closed
1
2022-12-15T14:23:05
2022-12-15T14:44:47
2022-12-15T14:41:45
lhoestq
[]
fix typing and ExamplesIterable.shard_data_sources
true
1,498,422,466
https://api.github.com/repos/huggingface/datasets/issues/5365
https://github.com/huggingface/datasets/pull/5365
5,365
fix: image array should support other formats than uint8
closed
4
2022-12-15T13:17:50
2023-01-26T18:46:45
2023-01-26T18:39:36
vigsterkr
[]
Currently images that are provided as ndarrays, but not in `uint8` format, are going to lose data. Namely, for example in a depth image where the data is in float32 format, the type-casting to uint8 will basically make the whole image blank. `PIL.Image.fromarray` [does support mode `F`](https://pillow.readthedocs.io/e...
true
1,498,360,628
https://api.github.com/repos/huggingface/datasets/issues/5364
https://github.com/huggingface/datasets/pull/5364
5,364
Support for writing arrow files directly with BeamWriter
closed
6
2022-12-15T12:38:05
2024-01-11T14:52:33
2024-01-11T14:45:15
mariosasko
[]
Make it possible to write Arrow files directly with `BeamWriter` rather than converting from Parquet to Arrow, which is sub-optimal, especially for big datasets for which Beam is primarily used.
true
1,498,171,317
https://api.github.com/repos/huggingface/datasets/issues/5363
https://github.com/huggingface/datasets/issues/5363
5,363
Dataset.from_generator() crashes on simple example
closed
0
2022-12-15T10:21:28
2022-12-15T11:51:33
2022-12-15T11:51:33
villmow
[]
null
false
1,497,643,744
https://api.github.com/repos/huggingface/datasets/issues/5362
https://github.com/huggingface/datasets/issues/5362
5,362
Run 'GPT-J' failure due to download dataset fail (' ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst ' )
closed
2
2022-12-15T01:23:03
2022-12-15T07:45:54
2022-12-15T07:45:53
shaoyuta
[]
### Describe the bug Run model "GPT-J" with dataset "the_pile" fail. The fail out is as below: ![image](https://user-images.githubusercontent.com/52023469/207750127-118d9896-35f4-4ee9-90d4-d0ab9aae9c74.png) Looks like which is due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" unreachable . ### Steps to ...
false
1,497,153,889
https://api.github.com/repos/huggingface/datasets/issues/5361
https://github.com/huggingface/datasets/issues/5361
5,361
How concatenate `Audio` elements using batch mapping
closed
3
2022-12-14T18:13:55
2023-07-21T14:30:51
2023-07-21T14:30:51
bayartsogt-ya
[]
### Describe the bug I am trying to do concatenate audios in a dataset e.g. `google/fleurs`. ```python print(dataset) # Dataset({ # features: ['path', 'audio'], # num_rows: 24 # }) def mapper_function(batch): # to merge every 3 audio # np.concatnate(audios[i: i+3]) for i in range(i, len(batc...
false
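The every-3 merge described in the issue above can be implemented with a batched `map` whose function concatenates consecutive decoded arrays. The grouping step itself is just this (a sketch with a hypothetical helper name, assuming the audio column is already decoded to 1-D arrays):

```python
import numpy as np

def merge_every_n(arrays, n=3):
    """Concatenate every n consecutive 1-D arrays into one array."""
    return [np.concatenate(arrays[i : i + n]) for i in range(0, len(arrays), n)]
```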
1,496,947,177
https://api.github.com/repos/huggingface/datasets/issues/5360
https://github.com/huggingface/datasets/issues/5360
5,360
IterableDataset returns duplicated data using PyTorch DDP
closed
11
2022-12-14T16:06:19
2023-06-15T09:51:13
2023-01-16T13:33:33
lhoestq
[]
As mentioned in https://github.com/huggingface/datasets/issues/3423, when using PyTorch DDP the dataset ends up with duplicated data. We already check for the PyTorch `worker_info` for single node, but we should also check for `torch.distributed.get_world_size()` and `torch.distributed.get_rank()`
false
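For an iterable dataset that cannot be indexed, one standard way to avoid duplication is for each rank to keep only every `world_size`-th example. This is a sketch of that idea, not the fix that was actually shipped:

```python
from itertools import islice

def shard_iterable(iterable, rank, world_size):
    """Yield only the examples assigned to this rank: elements at positions
    rank, rank + world_size, rank + 2 * world_size, ..."""
    return islice(iterable, rank, None, world_size)
```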
1,495,297,857
https://api.github.com/repos/huggingface/datasets/issues/5359
https://github.com/huggingface/datasets/pull/5359
5,359
Raise error if ClassLabel names is not python list
closed
3
2022-12-13T23:04:06
2022-12-22T16:35:49
2022-12-22T16:32:49
freddyheppell
[]
Checks type of names provided to ClassLabel to avoid easy and hard to debug errors (closes #5332 - see for discussion)
true
1,495,270,822
https://api.github.com/repos/huggingface/datasets/issues/5358
https://github.com/huggingface/datasets/pull/5358
5,358
Fix `fs.open` resource leaks
closed
3
2022-12-13T22:35:51
2023-01-05T16:46:31
2023-01-05T15:59:51
tkukurin
[]
Invoking `{load,save}_from_dict` results in resource leak warnings, this should fix. Introduces no significant logic changes.
true
1,495,029,602
https://api.github.com/repos/huggingface/datasets/issues/5357
https://github.com/huggingface/datasets/pull/5357
5,357
Support torch dataloader without torch formatting
closed
7
2022-12-13T19:39:24
2023-01-04T12:45:40
2022-12-15T19:15:54
lhoestq
[]
In https://github.com/huggingface/datasets/pull/5084 we make the torch formatting consistent with the map-style datasets formatting: a torch formatted iterable dataset will yield torch tensors. The previous behavior of the torch formatting for iterable dataset was simply to make the iterable dataset inherit from `to...
true
1,494,961,609
https://api.github.com/repos/huggingface/datasets/issues/5356
https://github.com/huggingface/datasets/pull/5356
5,356
Clean filesystem and logging docstrings
closed
1
2022-12-13T18:54:09
2022-12-14T17:25:58
2022-12-14T17:22:16
stevhliu
[]
This PR cleans the `Filesystems` and `Logging` docstrings.
true
1,493,076,860
https://api.github.com/repos/huggingface/datasets/issues/5355
https://github.com/huggingface/datasets/pull/5355
5,355
Clean up Table class docstrings
closed
1
2022-12-13T00:29:47
2022-12-13T18:17:56
2022-12-13T18:14:42
stevhliu
[]
This PR cleans up the `Table` class docstrings :)
true
1,492,174,125
https://api.github.com/repos/huggingface/datasets/issues/5354
https://github.com/huggingface/datasets/issues/5354
5,354
Consider using "Sequence" instead of "List"
open
11
2022-12-12T15:39:45
2025-06-21T13:56:58
null
tranhd95
[ "enhancement", "good first issue" ]
### Feature request Hi, please consider using `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). It leads to type checking errors, see below. **How to reproduce** ```py ...
false
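The distinction the request above draws matters to static type checkers: a tuple is a valid `Sequence[str]` but not a `List[str]`. A minimal illustration (unrelated to the actual `from_parquet` signature):

```python
from typing import List, Sequence

def count_list(paths: List[str]) -> int:
    return len(paths)

def count_seq(paths: Sequence[str]) -> int:
    # Accepts lists, tuples, and any other read-only sequence.
    return len(paths)

# count_list(("a.parquet", "b.parquet")) is flagged by type checkers;
# count_seq(("a.parquet", "b.parquet")) is not.
```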
1,491,880,500
https://api.github.com/repos/huggingface/datasets/issues/5353
https://github.com/huggingface/datasets/issues/5353
5,353
Support remote file systems for `Audio`
closed
1
2022-12-12T13:22:13
2022-12-12T13:37:14
2022-12-12T13:37:14
OllieBroadhurst
[ "enhancement" ]
### Feature request Hi there! It would be super cool if `Audio()`, and potentially other features, could read files from a remote file system. ### Motivation Large amounts of data is often stored in buckets. `load_from_disk` is able to retrieve data from cloud storage but to my knowledge actually copies the datas...
false
1,490,796,414
https://api.github.com/repos/huggingface/datasets/issues/5352
https://github.com/huggingface/datasets/issues/5352
5,352
__init__() got an unexpected keyword argument 'input_size'
open
2
2022-12-12T02:52:03
2022-12-19T01:38:48
null
J-shel
[]
### Describe the bug I try to define a custom configuration with a input_size attribute following the instructions by "Specifying several dataset configurations" in https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html But when I load the dataset, I got an error "__init__() got an unexpected keyword argument...
false
1,490,659,504
https://api.github.com/repos/huggingface/datasets/issues/5351
https://github.com/huggingface/datasets/issues/5351
5,351
Do we need to implement `_prepare_split`?
closed
11
2022-12-12T01:38:54
2022-12-20T18:20:57
2022-12-12T16:48:56
jmwoloso
[]
### Describe the bug I'm not sure this is a bug or if it's just missing in the documentation, or i'm not doing something correctly, but I'm subclassing `DatasetBuilder` and getting the following error because on the `DatasetBuilder` class the `_prepare_split` method is abstract (as are the others we are required to im...
false
1,487,559,904
https://api.github.com/repos/huggingface/datasets/issues/5350
https://github.com/huggingface/datasets/pull/5350
5,350
Clean up Loading methods docstrings
closed
1
2022-12-09T22:25:30
2022-12-12T17:27:20
2022-12-12T17:24:01
stevhliu
[]
Clean up for the docstrings in Loading methods!
true
1,487,396,780
https://api.github.com/repos/huggingface/datasets/issues/5349
https://github.com/huggingface/datasets/pull/5349
5,349
Clean up remaining Main Classes docstrings
closed
1
2022-12-09T20:17:15
2022-12-12T17:27:17
2022-12-12T17:24:13
stevhliu
[]
This PR cleans up the remaining docstrings in Main Classes (`IterableDataset`, `IterableDatasetDict`, and `Features`).
true
1,486,975,626
https://api.github.com/repos/huggingface/datasets/issues/5348
https://github.com/huggingface/datasets/issues/5348
5,348
The data downloaded in the download folder of the cache does not respect `umask`
open
1
2022-12-09T15:46:27
2022-12-09T17:21:26
null
SaulLu
[]
### Describe the bug For a project on a cluster we are several users to share the same cache for the datasets library. And we have a problem with the permissions on the data downloaded in the cache. Indeed, it seems that the data is downloaded by giving read and write permissions only to the user launching the com...
false
1,486,920,261
https://api.github.com/repos/huggingface/datasets/issues/5347
https://github.com/huggingface/datasets/pull/5347
5,347
Force soundfile to return float32 instead of the default float64
open
8
2022-12-09T15:10:24
2023-01-17T16:12:49
null
qmeeus
[]
(Fixes issue #5345)
true
1,486,884,983
https://api.github.com/repos/huggingface/datasets/issues/5346
https://github.com/huggingface/datasets/issues/5346
5,346
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
closed
3
2022-12-09T14:48:02
2023-06-02T20:24:44
2023-01-25T19:35:40
LysandreJik
[]
Thanks to all of you, Datasets is just about to pass 15k stars! Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face...
false
1,486,555,384
https://api.github.com/repos/huggingface/datasets/issues/5345
https://github.com/huggingface/datasets/issues/5345
5,345
Wrong dtype for array in audio features
open
3
2022-12-09T11:05:11
2023-02-10T14:39:28
null
qmeeus
[]
### Describe the bug When concatenating/interleaving different datasets, I stumble into an error because the features can't be aligned. After some investigation, I understood that the audio arrays had different dtypes, namely `float32` and `float64`. Consequently, the datasets cannot be merged. ### Steps to repro...
false
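Until decoding is made consistent, one workaround is to cast the arrays before concatenating or interleaving. A sketch; `to_float32` is a hypothetical helper, not a library function:

```python
import numpy as np

def to_float32(waveform):
    """Cast a decoded audio array to float32 so features align
    when concatenating or interleaving datasets."""
    arr = np.asarray(waveform)
    return arr if arr.dtype == np.float32 else arr.astype(np.float32)
```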
1,485,628,319
https://api.github.com/repos/huggingface/datasets/issues/5344
https://github.com/huggingface/datasets/pull/5344
5,344
Clean up Dataset and DatasetDict
closed
1
2022-12-09T00:02:08
2022-12-13T00:56:07
2022-12-13T00:53:02
stevhliu
[]
This PR cleans up the docstrings for the other half of the methods in `Dataset` and finishes `DatasetDict`.
true
1,485,297,823
https://api.github.com/repos/huggingface/datasets/issues/5343
https://github.com/huggingface/datasets/issues/5343
5,343
T5 for Q&A produces truncated sentence
closed
0
2022-12-08T19:48:46
2022-12-08T19:57:17
2022-12-08T19:57:17
junyongyou
[]
Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to...
false
1,485,244,178
https://api.github.com/repos/huggingface/datasets/issues/5342
https://github.com/huggingface/datasets/issues/5342
5,342
Emotion dataset cannot be downloaded
closed
7
2022-12-08T19:07:09
2023-02-23T19:13:19
2022-12-09T10:46:11
cbarond
[ "duplicate" ]
### Describe the bug The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`. It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022). ### Steps to reproduce the bug ...
false
1,484,376,644
https://api.github.com/repos/huggingface/datasets/issues/5341
https://github.com/huggingface/datasets/pull/5341
5,341
Remove tasks.json
closed
1
2022-12-08T11:04:35
2022-12-09T12:26:21
2022-12-09T12:23:20
lhoestq
[]
After discussions in https://github.com/huggingface/datasets/pull/5335 we should remove this file that is not used anymore. We should update https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts instead.
true
1,483,182,158
https://api.github.com/repos/huggingface/datasets/issues/5340
https://github.com/huggingface/datasets/pull/5340
5,340
Clean up DatasetInfo and Dataset docstrings
closed
1
2022-12-08T00:17:53
2022-12-08T19:33:14
2022-12-08T19:30:10
stevhliu
[]
This PR cleans up the docstrings for `DatasetInfo` and about half of the methods in `Dataset`.
true