Schema of the issue records below (per-column type and the min/max observed in the data):

| Column | Type | Min | Max |
|---|---|---|---|
| `id` | int64 | 599M | 3.29B |
| `url` | string (length) | 58 | 61 |
| `html_url` | string (length) | 46 | 51 |
| `number` | int64 | 1 | 7.72k |
| `title` | string (length) | 1 | 290 |
| `state` | string (2 classes) | n/a | n/a |
| `comments` | int64 | 0 | 70 |
| `created_at` | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| `updated_at` | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| `closed_at` | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| `user_login` | string (length) | 3 | 26 |
| `labels` | list (length) | 0 | 4 |
| `body` | string (length) | 0 | 228k |
| `is_pull_request` | bool (2 classes) | n/a | n/a |
- **[#5959 read metric glue.py from local file](https://github.com/huggingface/datasets/issues/5959)** · issue · closed · 1 comment · JiazhaoLi · no labels · created 2023-06-14T17:59:35 · updated 2023-06-14T18:04:16 · closed 2023-06-14T18:04:16 · id 1,757,397,507 · API: https://api.github.com/repos/huggingface/datasets/issues/5959
  Body: ### Describe the bug Currently, The server is off-line. I am using the glue metric from the local file downloaded from the hub. I download / cached datasets using `load_dataset('glue','sst2', cache_dir='/xxx')` to cache them and then in the off-line mode, I use `load_dataset('xxx/glue.py','sst2', cache_dir='/xxx'...
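  A minimal sketch of the offline workflow this report describes; the script path is hypothetical, and `load_metric` is the (since-deprecated) metric loader that shipped with `datasets` at the time:

  ```python
  import os

  os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

  from datasets import load_metric

  # point at a locally cached copy of the glue.py metric script (hypothetical path)
  metric = load_metric("/path/to/cached/glue.py", "sst2")
  metric.add_batch(predictions=[0, 1], references=[0, 1])
  print(metric.compute())
  ```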
- **[#5958 set dev version](https://github.com/huggingface/datasets/pull/5958)** · PR · closed · 3 comments · lhoestq · no labels · created 2023-06-14T16:26:34 · updated 2023-06-14T16:34:55 · closed 2023-06-14T16:26:51 · id 1,757,265,971 · API: https://api.github.com/repos/huggingface/datasets/issues/5958
  Body: (none)
- **[#5957 Release: 2.13.0](https://github.com/huggingface/datasets/pull/5957)** · PR · closed · 4 comments · lhoestq · no labels · created 2023-06-14T16:17:26 · updated 2023-06-14T16:33:39 · closed 2023-06-14T16:24:39 · id 1,757,252,466 · API: https://api.github.com/repos/huggingface/datasets/issues/5957
  Body: (none)
- **[#5956 Fix ArrowExamplesIterable.shard_data_sources](https://github.com/huggingface/datasets/pull/5956)** · PR · closed · 4 comments · lhoestq · no labels · created 2023-06-14T13:50:38 · updated 2023-06-14T14:43:12 · closed 2023-06-14T14:33:45 · id 1,756,959,367 · API: https://api.github.com/repos/huggingface/datasets/issues/5956
  Body: ArrowExamplesIterable.shard_data_sources was outdated I also fixed a warning message by not using format_type= in with_format()
- **[#5955 Strange bug in loading local JSON files, using load_dataset](https://github.com/huggingface/datasets/issues/5955)** · issue · closed · 4 comments · Night-Quiet · no labels · created 2023-06-14T12:46:00 · updated 2023-06-21T14:42:15 · closed 2023-06-21T14:42:15 · id 1,756,827,133 · API: https://api.github.com/repos/huggingface/datasets/issues/5955
  Body: ### Describe the bug I am using 'load_dataset 'loads a JSON file, but I found a strange bug: an error will be reported when the length of the JSON file exceeds 160000 (uncertain exact number). I have checked the data through the following code and there are no issues. So I cannot determine the true reason for this err...
- **[#5954 Better filenotfound for gated](https://github.com/huggingface/datasets/pull/5954)** · PR · closed · 3 comments · lhoestq · no labels · created 2023-06-14T10:33:10 · updated 2023-06-14T12:33:27 · closed 2023-06-14T12:26:31 · id 1,756,572,994 · API: https://api.github.com/repos/huggingface/datasets/issues/5954
  Body: close https://github.com/huggingface/datasets/issues/5953 <img width="1292" alt="image" src="https://github.com/huggingface/datasets/assets/42851186/270fe5bc-1739-4878-b7bc-ab6d35336d4d">
- **[#5953 Bad error message when trying to download gated dataset](https://github.com/huggingface/datasets/issues/5953)** · issue · closed · 8 comments · patrickvonplaten · no labels · created 2023-06-14T10:03:39 · updated 2023-06-14T16:36:51 · closed 2023-06-14T12:26:32 · id 1,756,520,523 · API: https://api.github.com/repos/huggingface/datasets/issues/5953
  Body: ### Describe the bug When I attempt to download a model from the Hub that is gated without being logged in, I get a nice error message. E.g.: E.g. ```sh Repository Not Found for url: https://huggingface.co/api/models/DeepFloyd/IF-I-XL-v1.0. Please make sure you specified the correct `repo_id` and `repo_type`. I...
- **[#5952 Add Arrow builder docs](https://github.com/huggingface/datasets/pull/5952)** · PR · closed · 3 comments · lhoestq · no labels · created 2023-06-14T09:42:46 · updated 2023-06-14T14:42:31 · closed 2023-06-14T14:34:39 · id 1,756,481,591 · API: https://api.github.com/repos/huggingface/datasets/issues/5952
  Body: following https://github.com/huggingface/datasets/pull/5944
- **[#5951 What is the Right way to use discofuse dataset??](https://github.com/huggingface/datasets/issues/5951)** · issue · closed · 2 comments · akesh1235 · no labels · created 2023-06-14T08:38:39 · updated 2023-06-14T13:25:06 · closed 2023-06-14T12:10:16 · id 1,756,363,546 · API: https://api.github.com/repos/huggingface/datasets/issues/5951
  Body: [Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6) **Below is the following way, as per my understanding , Is it correct :question: :question:** The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** ar...
- **[#5950 Support for data with instance-wise dictionary as features](https://github.com/huggingface/datasets/issues/5950)** · issue · open · 11 comments · richardwth · labels: enhancement · created 2023-06-13T15:49:00 · updated 2025-04-07T13:20:37 · id 1,755,197,946 · API: https://api.github.com/repos/huggingface/datasets/issues/5950
  Body: ### Feature request I notice that when loading data instances with feature type of python dictionary, the dictionary keys would be broadcast so that every instance has the same set of keys. Please see an example in the Motivation section. It is possible to avoid this behavior, i.e., load dictionary features as it i...
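  A tiny sketch (my own example, not from the issue) of the broadcasting behaviour the request describes: `datasets` infers one Arrow struct type for the whole column, so dict keys are unioned across instances and absent keys come back as `None`:

  ```python
  from datasets import Dataset

  ds = Dataset.from_list([{"d": {"a": 1}}, {"d": {"b": 2}}])
  print(ds[0])  # {'d': {'a': 1, 'b': None}}: key 'b' was broadcast to every instance
  print(ds[1])  # {'d': {'a': None, 'b': 2}}
  ```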
- **[#5949 Replace metadata utils with `huggingface_hub`'s RepoCard API](https://github.com/huggingface/datasets/pull/5949)** · PR · closed · 8 comments · mariosasko · no labels · created 2023-06-13T13:03:19 · updated 2023-06-27T16:47:51 · closed 2023-06-27T16:38:32 · id 1,754,843,717 · API: https://api.github.com/repos/huggingface/datasets/issues/5949
  Body: Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`. After removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources f...
- **[#5948 Fix sequence of array support for most dtype](https://github.com/huggingface/datasets/pull/5948)** · PR · closed · 2 comments · qgallouedec · no labels · created 2023-06-13T12:38:59 · updated 2023-06-14T15:11:55 · closed 2023-06-14T15:03:33 · id 1,754,794,611 · API: https://api.github.com/repos/huggingface/datasets/issues/5948
  Body: Fixes #5936 Also, a related fix to #5927
- **[#5947 Return the audio filename when decoding fails due to corrupt files](https://github.com/huggingface/datasets/issues/5947)** · issue · open · 2 comments · wetdog · labels: enhancement · created 2023-06-13T08:44:09 · updated 2023-06-14T12:45:01 · id 1,754,359,316 · API: https://api.github.com/repos/huggingface/datasets/issues/5947
  Body: ### Feature request Return the audio filename when the audio decoding fails. Although currently there are some checks for mp3 and opus formats with the library version there are still cases when the audio decoding could fail, eg. Corrupt file. ### Motivation When you try to load an object file dataset and the...
- **[#5946 IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??](https://github.com/huggingface/datasets/issues/5946)** · issue · open · 6 comments · syngokhan · no labels · created 2023-06-13T07:34:15 · updated 2023-07-14T12:04:48 · id 1,754,234,469 · API: https://api.github.com/repos/huggingface/datasets/issues/5946
  Body: ### Describe the bug in <cell line: 1>:1 │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train ...
- **[#5945 Failing to upload dataset to the hub](https://github.com/huggingface/datasets/issues/5945)** · issue · closed · 3 comments · Ar770 · no labels · created 2023-06-13T05:46:46 · updated 2023-07-24T11:56:40 · closed 2023-07-24T11:56:40 · id 1,754,084,577 · API: https://api.github.com/repos/huggingface/datasets/issues/5945
  Body: ### Describe the bug Trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 gb) to the hub with push_to_hub, it doesn't work. From time to time one piece of the data (parquet) gets pushed and then I get RemoteDisconnected even though my internet is stable. Please...
- **[#5944 Arrow dataset builder to be able to load and stream Arrow datasets](https://github.com/huggingface/datasets/pull/5944)** · PR · closed · 4 comments · mariusz-jachimowicz-83 · no labels · created 2023-06-12T14:21:49 · updated 2023-06-13T17:36:02 · closed 2023-06-13T17:29:01 · id 1,752,882,200 · API: https://api.github.com/repos/huggingface/datasets/issues/5944
  Body: This adds a Arrow dataset builder to be able to load and stream from already preprocessed Arrow files. It's related to https://github.com/huggingface/datasets/issues/3035
- **[#5942 Pass datasets-cli additional args as kwargs to DatasetBuilder in `run_beam.py`](https://github.com/huggingface/datasets/pull/5942)** · PR · open · 0 comments · graelo · no labels · created 2023-06-12T06:50:50 · updated 2023-06-30T09:15:00 · id 1,752,021,681 · API: https://api.github.com/repos/huggingface/datasets/issues/5942
  Body: Hi, Following this <https://discuss.huggingface.co/t/how-to-preprocess-a-wikipedia-dataset-using-dataflowrunner/41991/3>, here is a simple PR to pass any additional args to datasets-cli as kwargs in the DatasetBuilder in `run_beam.py`. I also took the liberty to add missing setup steps to the `beam.mdx` docs in o...
- **[#5941 Load Data Sets Too Slow In Train Seq2seq Model](https://github.com/huggingface/datasets/issues/5941)** · issue · closed · 10 comments · xyx361100238 · no labels · created 2023-06-12T03:58:43 · updated 2023-08-15T02:52:22 · closed 2023-08-15T02:52:22 · id 1,751,838,897 · API: https://api.github.com/repos/huggingface/datasets/issues/5941
  Body: ### Describe the bug step 'Generating train split' in load_dataset is too slow: ![image](https://github.com/huggingface/datasets/assets/19569322/d9b08eee-95fe-4741-a346-b70416c948f8) ### Steps to reproduce the bug Data: own data,16K16B Mono wav Oficial Script:[ run_speech_recognition_seq2seq.py](https://github...
- **[#5990 Pushing a large dataset on the hub consistently hangs](https://github.com/huggingface/datasets/issues/5990)** · issue · open · 46 comments · AntreasAntoniou · labels: bug · created 2023-06-10T14:46:47 · updated 2025-02-15T09:29:10 · id 1,774,389,854 · API: https://api.github.com/repos/huggingface/datasets/issues/5990
  Body: ### Describe the bug Once I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try and catc...
- **[#5939 .](https://github.com/huggingface/datasets/issues/5939)** · issue · closed · 0 comments · flckv · no labels · created 2023-06-09T14:01:34 · updated 2023-06-12T12:19:34 · closed 2023-06-12T12:19:19 · id 1,749,955,883 · API: https://api.github.com/repos/huggingface/datasets/issues/5939
  Body: (none)
- **[#5938 Make get_from_cache use custom temp filename that is locked](https://github.com/huggingface/datasets/pull/5938)** · PR · closed · 2 comments · albertvillanova · no labels · created 2023-06-09T09:01:13 · updated 2023-06-14T13:35:38 · closed 2023-06-14T13:27:24 · id 1,749,462,851 · API: https://api.github.com/repos/huggingface/datasets/issues/5938
  Body: This PR ensures that the temporary filename created is the same as the one that is locked, while writing to the cache. This PR stops using `tempfile` to generate the temporary filename. Additionally, the behavior now is aligned for both `resume_download` `True` and `False`. Refactor temp_file_manager so that i...
- **[#5937 Avoid parallel redownload in cache](https://github.com/huggingface/datasets/pull/5937)** · PR · closed · 2 comments · albertvillanova · no labels · created 2023-06-09T08:18:36 · updated 2023-06-14T12:30:59 · closed 2023-06-14T12:23:57 · id 1,749,388,597 · API: https://api.github.com/repos/huggingface/datasets/issues/5937
  Body: Avoid parallel redownload in cache by retrying inside the lock if path exists.
- **[#5936 Sequence of array not supported for most dtype](https://github.com/huggingface/datasets/issues/5936)** · issue · closed · 4 comments · qgallouedec · no labels · created 2023-06-08T18:18:07 · updated 2023-06-14T15:03:34 · closed 2023-06-14T15:03:34 · id 1,748,424,388 · API: https://api.github.com/repos/huggingface/datasets/issues/5936
  Body: ### Describe the bug Create a dataset composed of sequence of array fails for most dtypes (see code below). ### Steps to reproduce the bug ```python from datasets import Sequence, Array2D, Features, Dataset import numpy as np for dtype in [ "bool", # ok "int8", # failed "int16", # failed ...
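  A plausible completion of the truncated snippet above (the exact dtype list and array shapes are guesses), looping over dtypes to see which ones fail:

  ```python
  import numpy as np
  from datasets import Array2D, Dataset, Features, Sequence

  for dtype in ["bool", "int8", "int16", "int32", "float16", "float32"]:
      features = Features({"x": Sequence(Array2D((1, 1), dtype=dtype))})
      data = {"x": [[np.zeros((1, 1), dtype=dtype)]]}
      try:
          Dataset.from_dict(data, features=features)
          print(dtype, "ok")
      except Exception as err:
          print(dtype, "failed:", err)
  ```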
- **[#5935 Better row group size in push_to_hub](https://github.com/huggingface/datasets/pull/5935)** · PR · closed · 10 comments · lhoestq · no labels · created 2023-06-08T15:01:15 · updated 2023-06-09T17:47:37 · closed 2023-06-09T17:40:09 · id 1,748,090,220 · API: https://api.github.com/repos/huggingface/datasets/issues/5935
  Body: This is a very simple change that improves `to_parquet` to use a more reasonable row group size for image and audio datasets. This is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on HF
- **[#5934 Modify levels of some logging messages](https://github.com/huggingface/datasets/pull/5934)** · PR · closed · 2 comments · Laurent2916 · no labels · created 2023-06-08T13:31:44 · updated 2023-07-12T18:21:03 · closed 2023-07-12T18:21:02 · id 1,747,904,840 · API: https://api.github.com/repos/huggingface/datasets/issues/5934
  Body: Some warning messages didn't quite sound like warnings so I modified their logging levels to info.
- **[#5933 Fix `to_numpy` when None values in the sequence](https://github.com/huggingface/datasets/pull/5933)** · PR · closed · 4 comments · qgallouedec · no labels · created 2023-06-08T08:38:56 · updated 2023-06-09T13:49:41 · closed 2023-06-09T13:23:48 · id 1,747,382,500 · API: https://api.github.com/repos/huggingface/datasets/issues/5933
  Body: Closes #5927 I've realized that the error was overlooked during testing due to the presence of only one None value in the sequence. Unfortunately, it was the only case where the function works as expected. When the sequence contained more than one None value, the function failed. Consequently, I've updated the tests...
- **[#5932 [doc build] Use secrets](https://github.com/huggingface/datasets/pull/5932)** · PR · closed · 4 comments · mishig25 · no labels · created 2023-06-07T16:09:39 · updated 2023-06-09T10:16:58 · closed 2023-06-09T09:53:16 · id 1,746,249,161 · API: https://api.github.com/repos/huggingface/datasets/issues/5932
  Body: Companion pr to https://github.com/huggingface/doc-builder/pull/379
- **[#5931 `datasets.map` not reusing cached copy by default](https://github.com/huggingface/datasets/issues/5931)** · issue · closed · 1 comment · bhavitvyamalik · no labels · created 2023-06-07T09:03:33 · updated 2023-06-21T16:15:40 · closed 2023-06-21T16:15:40 · id 1,745,408,784 · API: https://api.github.com/repos/huggingface/datasets/issues/5931
  Body: ### Describe the bug When I load the dataset from local directory, it's cached copy is picked up after first time. However, for `map` operation, the operation is applied again and cached copy is not picked up. Is there any way to pick cached copy instead of processing it again? The only solution I could think of was...
- **[#5930 loading private custom dataset script - authentication error](https://github.com/huggingface/datasets/issues/5930)** · issue · closed · 1 comment · flckv · no labels · created 2023-06-07T06:58:23 · updated 2023-06-15T14:49:21 · closed 2023-06-15T14:49:20 · id 1,745,184,395 · API: https://api.github.com/repos/huggingface/datasets/issues/5930
  Body: ### Describe the bug Train model with my custom dataset stored in HuggingFace and loaded with the loading script requires authentication but I am not sure how ? I am logged in in the terminal, in the browser. I receive this error: /python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from...
- **[#5929 Importing PyTorch reduces multiprocessing performance for map](https://github.com/huggingface/datasets/issues/5929)** · issue · closed · 2 comments · Maxscha · no labels · created 2023-06-06T19:42:25 · updated 2023-06-16T13:09:12 · closed 2023-06-16T13:09:12 · id 1,744,478,456 · API: https://api.github.com/repos/huggingface/datasets/issues/5929
  Body: ### Describe the bug I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported. ### Steps to reproduce the bug I created two example scripts to reproduce this behavior: ``` import datasets datasets.disable_caching() from datasets import Da...
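  A rough sketch of the comparison the report describes (dataset size, `num_proc`, and the mapped function are placeholders); running it once with the `torch` import commented out and once with it enabled is the experiment:

  ```python
  import time

  import datasets

  datasets.disable_caching()
  # import torch  # per the report, merely importing torch slows the multiprocessed map

  def main():
      ds = datasets.Dataset.from_dict({"x": list(range(100_000))})
      start = time.time()
      ds.map(lambda batch: {"y": [v + 1 for v in batch["x"]]}, batched=True, num_proc=8)
      print(f"map with num_proc=8 took {time.time() - start:.2f}s")

  if __name__ == "__main__":  # guard required for multiprocessing on spawn platforms
      main()
  ```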
- **[#5928 Fix link to quickstart docs in README.md](https://github.com/huggingface/datasets/pull/5928)** · PR · closed · 3 comments · mariosasko · no labels · created 2023-06-06T15:23:01 · updated 2023-06-06T15:52:34 · closed 2023-06-06T15:43:53 · id 1,744,098,371 · API: https://api.github.com/repos/huggingface/datasets/issues/5928
  Body: (none)
- **[#5927 `IndexError` when indexing `Sequence` of `Array2D` with `None` values](https://github.com/huggingface/datasets/issues/5927)** · issue · closed · 2 comments · qgallouedec · no labels · created 2023-06-06T14:36:22 · updated 2023-06-13T12:39:39 · closed 2023-06-09T13:23:50 · id 1,744,009,032 · API: https://api.github.com/repos/huggingface/datasets/issues/5927
  Body: ### Describe the bug Having `None` values in a `Sequence` of `ArrayND` fails. ### Steps to reproduce the bug ```python from datasets import Array2D, Dataset, Features, Sequence data = [ [ [[0]], None, None, ] ] feature = Sequence(Array2D((1, 1), dtype="int64")) dataset =...
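  A plausible completion of the truncated repro above, kept as close to the quoted snippet as possible:

  ```python
  from datasets import Array2D, Dataset, Features, Sequence

  data = [[[[0]], None, None]]  # one example: a sequence holding one array and two Nones
  features = Features({"x": Sequence(Array2D((1, 1), dtype="int64"))})
  dataset = Dataset.from_dict({"x": data}, features=features)
  print(dataset[0])  # raised IndexError before the fix in #5933
  ```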
- **[#5926 Uncaught exception when generating the splits from a dataset that miss data](https://github.com/huggingface/datasets/issues/5926)** · issue · open · 1 comment · severo · no labels · created 2023-06-06T13:51:01 · updated 2023-06-07T07:53:16 · id 1,743,922,028 · API: https://api.github.com/repos/huggingface/datasets/issues/5926
  Body: ### Describe the bug Dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform, since https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns 404 error. But when trying to generate the split names, we get an exception which is now corr...
- **[#5925 Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets](https://github.com/huggingface/datasets/issues/5925)** · issue · closed · 0 comments · mtkinit · no labels · created 2023-06-05T14:46:04 · updated 2023-06-19T17:22:43 · closed 2023-06-19T17:22:43 · id 1,741,941,436 · API: https://api.github.com/repos/huggingface/datasets/issues/5925
  Body: ### Describe the bug Hi all, after an update of the `datasets` library, we observer crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of the HfApi.list_datasets was changed and it returns a `list` instead of an `Iterable`, the `datasets.list_datasets` now sometimes re...
- **[#5924 Add parallel module using joblib for Spark](https://github.com/huggingface/datasets/pull/5924)** · PR · closed · 7 comments · es94129 · no labels · created 2023-06-02T22:25:25 · updated 2023-06-14T10:25:10 · closed 2023-06-14T10:15:46 · id 1,738,889,236 · API: https://api.github.com/repos/huggingface/datasets/issues/5924
  Body: Discussion in https://github.com/huggingface/datasets/issues/5798
- **[#5923 Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility](https://github.com/huggingface/datasets/issues/5923)** · issue · closed · 25 comments · ehuangc · no labels · created 2023-06-02T04:16:32 · updated 2024-06-27T10:07:49 · closed 2024-02-25T16:38:03 · id 1,737,436,227 · API: https://api.github.com/repos/huggingface/datasets/issues/5923
  Body: ### Describe the bug When trying to import datasets, I get a pyarrow ValueError: Traceback (most recent call last): File "/Users/edward/test/test.py", line 1, in <module> import datasets File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>...
- **[#5922 Length of table does not accurately reflect the split](https://github.com/huggingface/datasets/issues/5922)** · issue · closed · 2 comments · amogkam · labels: wontfix · created 2023-06-01T18:56:26 · updated 2023-06-02T16:13:31 · closed 2023-06-02T16:13:31 · id 1,736,898,953 · API: https://api.github.com/repos/huggingface/datasets/issues/5922
  Body: ### Describe the bug I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not. ### Steps to reproduce the bug ![image](https://github.com/huggingface/datasets/assets/8068268/83e5768f-8b4c-422a-945c-832a7585afff) ### Expected behavior...
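  A short sketch (my own example) of what the "wontfix" resolution boils down to: splits share the backing Arrow table through an indices mapping, and `flatten_indices()` materializes a split into its own table:

  ```python
  from datasets import Dataset

  ds = Dataset.from_dict({"x": list(range(100))})
  train = ds.train_test_split(test_size=0.2, seed=0)["train"]
  print(len(train))                             # 80: the split has the right length
  print(train.data.num_rows)                    # 100: the shared underlying table
  print(train.flatten_indices().data.num_rows)  # 80: split materialized into its own table
  ```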
- **[#5921 Fix streaming parquet with image feature in schema](https://github.com/huggingface/datasets/pull/5921)** · PR · closed · 4 comments · lhoestq · no labels · created 2023-06-01T15:23:10 · updated 2023-06-02T10:02:54 · closed 2023-06-02T09:53:11 · id 1,736,563,023 · API: https://api.github.com/repos/huggingface/datasets/issues/5921
  Body: It was not reading the feature type from the parquet arrow schema
- **[#5920 Optimize IterableDataset.from_file using ArrowExamplesIterable](https://github.com/huggingface/datasets/pull/5920)** · PR · closed · 3 comments · lhoestq · no labels · created 2023-06-01T12:14:36 · updated 2023-06-01T12:42:10 · closed 2023-06-01T12:35:14 · id 1,736,196,991 · API: https://api.github.com/repos/huggingface/datasets/issues/5920
  Body: following https://github.com/huggingface/datasets/pull/5893
- **[#5919 add support for storage_options for load_dataset API](https://github.com/huggingface/datasets/pull/5919)** · PR · closed · 12 comments · janineguo · no labels · created 2023-06-01T05:52:32 · updated 2023-07-18T06:14:32 · closed 2023-07-17T17:02:00 · id 1,735,519,227 · API: https://api.github.com/repos/huggingface/datasets/issues/5919
  Body: to solve the issue in #5880 1. add s3 support in the link check step, previous we only check `http` and `https`, 2. change the parameter of `use_auth_token` to `download_config` to support both `storage_options` and `use_auth_token` parameter when trying to handle(list, open, read, etc,.) the remote files. 3...
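  A sketch of the call shape this PR aims to support (the bucket name and credentials are placeholders; `storage_options` is forwarded to the fsspec filesystem, e.g. `s3fs`):

  ```python
  from datasets import load_dataset

  ds = load_dataset(
      "json",
      data_files="s3://my-bucket/train.json",  # hypothetical S3 path
      storage_options={"key": "<aws-key>", "secret": "<aws-secret>"},
  )
  ```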
- **[#5918 File not found for audio dataset](https://github.com/huggingface/datasets/issues/5918)** · issue · open · 1 comment · RobertBaruch · no labels · created 2023-06-01T02:15:29 · updated 2023-06-11T06:02:25 · id 1,735,313,549 · API: https://api.github.com/repos/huggingface/datasets/issues/5918
  Body: ### Describe the bug After loading an audio dataset, and looking at a sample entry, the `path` element, which is supposed to be the path to the audio file, doesn't actually exist. ### Steps to reproduce the bug Run bug.py: ```py import os.path from datasets import load_dataset def run() -> None: cv1...
- **[#5917 Refactor extensions](https://github.com/huggingface/datasets/pull/5917)** · PR · closed · 2 comments · albertvillanova · no labels · created 2023-05-31T08:33:02 · updated 2023-05-31T13:34:35 · closed 2023-05-31T13:25:57 · id 1,733,661,588 · API: https://api.github.com/repos/huggingface/datasets/issues/5917
  Body: Related to: - #5850
- **[#5916 Unpin responses](https://github.com/huggingface/datasets/pull/5916)** · PR · closed · 4 comments · mariosasko · no labels · created 2023-05-30T14:59:48 · updated 2023-05-30T18:03:10 · closed 2023-05-30T17:53:29 · id 1,732,456,392 · API: https://api.github.com/repos/huggingface/datasets/issues/5916
  Body: Fix #5906
- **[#5915 Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"`](https://github.com/huggingface/datasets/pull/5915)** · PR · closed · 4 comments · mariosasko · no labels · created 2023-05-30T14:27:55 · updated 2023-05-31T13:31:21 · closed 2023-05-31T13:23:54 · id 1,732,389,984 · API: https://api.github.com/repos/huggingface/datasets/issues/5915
  Body: Raise an error in `DatasetBuilder.as_dataset` when `file_format != "arrow"` (and fix the docstring) Fix #5874
- **[#5914 array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets](https://github.com/huggingface/datasets/issues/5914)** · issue · open · 2 comments · ravenouse · no labels · created 2023-05-30T04:25:00 · updated 2024-10-27T04:09:18 · id 1,731,483,996 · API: https://api.github.com/repos/huggingface/datasets/issues/5914
  Body: ### Describe the bug When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size." Detailed error message: Traceback (most recent call last): File "data_processing...
- **[#5913 I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred.](https://github.com/huggingface/datasets/issues/5913)** · issue · closed · 2 comments · cjt222 · no labels · created 2023-05-30T02:55:26 · updated 2023-07-24T12:00:38 · closed 2023-07-24T12:00:38 · id 1,731,427,484 · API: https://api.github.com/repos/huggingface/datasets/issues/5913
  Body: ### Describe the bug File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f94...
- **[#5912 Missing elements in `map` a batched dataset](https://github.com/huggingface/datasets/issues/5912)** · issue · closed · 1 comment · sachinruk · no labels · created 2023-05-29T08:09:19 · updated 2023-07-26T15:48:15 · closed 2023-07-26T15:48:15 · id 1,730,299,852 · API: https://api.github.com/repos/huggingface/datasets/issues/5912
  Body: ### Describe the bug As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 out of possible 6 elements in the batch (it is 6 because out of the eight, two are bad links in laion). A reproducible [kaggle kernel ](https://www.kag...
- **[#5910 Cannot use both set_format and set_transform](https://github.com/huggingface/datasets/issues/5910)** · issue · closed · 5 comments · ybouane · no labels · created 2023-05-27T19:22:23 · updated 2023-07-09T21:40:54 · closed 2023-06-16T14:41:24 · id 1,728,909,790 · API: https://api.github.com/repos/huggingface/datasets/issues/5910
  Body: ### Describe the bug I need to process some data using the set_transform method but I also need the data to be formatted for pytorch before processing it. I don't see anywhere in the documentation something that says that both methods cannot be used at the same time. ### Steps to reproduce the bug ``` from...
- **[#5909 Use more efficient and idiomatic way to construct list.](https://github.com/huggingface/datasets/pull/5909)** · PR · closed · 3 comments · ttsugriy · no labels · created 2023-05-27T18:54:47 · updated 2023-05-31T15:37:11 · closed 2023-05-31T13:28:29 · id 1,728,900,068 · API: https://api.github.com/repos/huggingface/datasets/issues/5909
  Body: Using `*` is ~2X faster according to [benchmark](https://colab.research.google.com/gist/ttsugriy/c964a2604edf70c41911b10335729b6a/for-vs-mult.ipynb) with just 4 patterns. This doesn't matter much since this tiny difference is not going to be noticeable, but why not?
- **[#5908 Unbearably slow sorting on big mapped datasets](https://github.com/huggingface/datasets/issues/5908)** · issue · open · 6 comments · maximxlss · no labels · created 2023-05-27T11:08:32 · updated 2023-06-13T17:45:10 · id 1,728,653,935 · API: https://api.github.com/repos/huggingface/datasets/issues/5908
  Body: ### Describe the bug For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about x5 slowdown. Moreover, it seems like it slows down exponentially with bigger datasets (wasn't able to sort 700k lin...
- **[#5907 Add `flatten_indices` to `DatasetDict`](https://github.com/huggingface/datasets/pull/5907)** · PR · closed · 2 comments · maximxlss · no labels · created 2023-05-27T10:55:44 · updated 2023-06-01T11:46:35 · closed 2023-06-01T11:39:36 · id 1,728,648,560 · API: https://api.github.com/repos/huggingface/datasets/issues/5907
  Body: Add `flatten_indices` to `DatasetDict` for convinience
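  A usage sketch of the convenience this PR adds; before it, `flatten_indices()` had to be called on each split separately:

  ```python
  from datasets import Dataset, DatasetDict

  dd = DatasetDict({"train": Dataset.from_dict({"x": [3, 1, 2]}).sort("x")})
  dd = dd.flatten_indices()  # applies flatten_indices() to every split at once
  ```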
- **[#5906 Could you unpin responses version?](https://github.com/huggingface/datasets/issues/5906)** · issue · closed · 0 comments · kenimou · no labels · created 2023-05-26T20:02:14 · updated 2023-05-30T17:53:31 · closed 2023-05-30T17:53:31 · id 1,728,171,113 · API: https://api.github.com/repos/huggingface/datasets/issues/5906
  Body: ### Describe the bug Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to test requirements? This is a testing library and we also use it for our tests as well. We do not want to use a very outdated version. ### Steps to reproduce the bug could not install this librar...
- **[#5905 Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently](https://github.com/huggingface/datasets/issues/5905)** · issue · open · 1 comment · bruno-hays · labels: enhancement · created 2023-05-26T12:33:02 · updated 2023-06-15T13:34:18 · id 1,727,541,392 · API: https://api.github.com/repos/huggingface/datasets/issues/5905
  Body: ### Feature request I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset. ### Motivation I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally...
- **[#5904 Validate name parameter in make_file_instructions](https://github.com/huggingface/datasets/pull/5904)** · PR · closed · 2 comments · albertvillanova · no labels · created 2023-05-26T11:12:46 · updated 2023-05-31T07:43:32 · closed 2023-05-31T07:34:57 · id 1,727,415,626 · API: https://api.github.com/repos/huggingface/datasets/issues/5904
  Body: Validate `name` parameter in `make_file_instructions`. This way users get more informative error messages, instead of: ```stacktrace .../huggingface/datasets/src/datasets/arrow_reader.py in make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path) 110 name2len = {info.name: info...
- **[#5903 Relax `ci.yml` trigger for `pull_request` based on modified paths](https://github.com/huggingface/datasets/pull/5903)** · PR · open · 3 comments · alvarobartt · no labels · created 2023-05-26T10:46:52 · updated 2023-09-07T15:52:36 · id 1,727,372,549 · API: https://api.github.com/repos/huggingface/datasets/issues/5903
  Body: ## What's in this PR? As of a previous PR at #5902, I've seen that the CI was automatically trigger on any file, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as the modification on the Jupyter Notebook has no effect/impact on the `ci.yml` outcome. So this PR controls the paths...
- **[#5902 Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository](https://github.com/huggingface/datasets/pull/5902)** · PR · closed · 13 comments · alvarobartt · no labels · created 2023-05-26T10:25:01 · updated 2023-07-25T13:50:06 · closed 2023-07-25T13:38:33 · id 1,727,342,194 · API: https://api.github.com/repos/huggingface/datasets/issues/5902
  Body: ## What's in this PR? This PR solves #5887 since there was a mismatch between the tokenizer and the model used, since the tokenizer was `bert-base-cased` while the model was `distilbert-base-case` both for the PyTorch and TensorFlow alternatives. Since DistilBERT doesn't use/need the `token_type_ids`, the `**batch` ...
- **[#5901 Make prepare_split more robust if errors in metadata dataset_info splits](https://github.com/huggingface/datasets/pull/5901)** · PR · closed · 3 comments · albertvillanova · no labels · created 2023-05-26T08:48:22 · updated 2023-06-02T06:06:38 · closed 2023-06-01T13:39:40 · id 1,727,179,016 · API: https://api.github.com/repos/huggingface/datasets/issues/5901
  Body: This PR uses `split_generator.split_info` as default value for `split_info` if any exception is raised while trying to get `split_generator.name` from `self.info.splits` (this may happen if there is any error in the metadata dataset_info splits). Please note that `split_info` is only used by the logger. Fix #5895...
- **[#5900 Fix minor typo in docs loading.mdx](https://github.com/huggingface/datasets/pull/5900)** · PR · closed · 3 comments · albertvillanova · no labels · created 2023-05-26T08:10:54 · updated 2023-05-26T09:34:15 · closed 2023-05-26T09:25:12 · id 1,727,129,617 · API: https://api.github.com/repos/huggingface/datasets/issues/5900
  Body: Minor fix.
- **[#5899 canonicalize data dir in config ID hash](https://github.com/huggingface/datasets/pull/5899)** · PR · closed · 2 comments · kylrth · no labels · created 2023-05-25T18:17:10 · updated 2023-06-02T16:02:15 · closed 2023-06-02T15:52:04 · id 1,726,279,011 · API: https://api.github.com/repos/huggingface/datasets/issues/5899
  Body: fixes #5871 The second commit is optional but improves readability.
- **[#5898 Loading The flores data set for specific language](https://github.com/huggingface/datasets/issues/5898)** · issue · closed · 1 comment · 106AbdulBasit · no labels · created 2023-05-25T17:08:55 · updated 2023-05-25T17:21:38 · closed 2023-05-25T17:21:37 · id 1,726,190,481 · API: https://api.github.com/repos/huggingface/datasets/issues/5898
  Body: ### Describe the bug I am trying to load the Flores data set the code which is given is ``` from datasets import load_dataset dataset = load_dataset("facebook/flores") ``` This gives the error of config name ""ValueError: Config name is missing" Now if I add some config it gives me the some error ...
- **[#5897 Fix `FixedSizeListArray` casting](https://github.com/huggingface/datasets/pull/5897)** · PR · closed · 4 comments · mariosasko · no labels · created 2023-05-25T16:26:33 · updated 2023-05-26T12:22:04 · closed 2023-05-26T11:57:16 · id 1,726,135,494 · API: https://api.github.com/repos/huggingface/datasets/issues/5897
  Body: Fix cast on sliced `FixedSizeListArray`s. Fix #5866
- **[#5896 HuggingFace does not cache downloaded files aggressively/early enough](https://github.com/huggingface/datasets/issues/5896)** · issue · closed · 2 comments · jack-jjm · no labels · created 2023-05-25T15:14:36 · updated 2024-03-15T15:36:07 · closed 2024-03-15T15:36:07 · id 1,726,022,500 · API: https://api.github.com/repos/huggingface/datasets/issues/5896
  Body: ### Describe the bug I wrote the following script: ``` import datasets dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]") ``` I ran it and spent 90 minutes downloading a 20GB file. Then I saw: ``` Downloading: 100%|████████████████████████████████████████████████████...
- **[#5895 The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset](https://github.com/huggingface/datasets/issues/5895)** · issue · closed · 2 comments · DongHande · no labels · created 2023-05-25T09:39:06 · updated 2023-05-29T02:32:12 · closed 2023-05-29T02:32:12 · id 1,725,467,252 · API: https://api.github.com/repos/huggingface/datasets/issues/5895
  Body: ### Describe the bug When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that may be raised by confusing the dir name string and the split string about the dataset. When I use the script "datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", ...
- **[#5894 Force overwrite existing filesystem protocol](https://github.com/huggingface/datasets/pull/5894)** · PR · closed · 2 comments · baskrahmer · no labels · created 2023-05-24T21:41:53 · updated 2023-05-25T06:52:08 · closed 2023-05-25T06:42:33 · id 1,724,774,910 · API: https://api.github.com/repos/huggingface/datasets/issues/5894
  Body: Fix #5876
- **[#5893 Load cached dataset as iterable](https://github.com/huggingface/datasets/pull/5893)** · PR · closed · 8 comments · mariusz-jachimowicz-83 · no labels · created 2023-05-23T17:40:35 · updated 2023-06-01T11:58:24 · closed 2023-06-01T11:51:29 · id 1,722,519,056 · API: https://api.github.com/repos/huggingface/datasets/issues/5893
  Body: To be used to train models it allows to load an IterableDataset from the cached Arrow file. See https://github.com/huggingface/datasets/issues/5481
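  A usage sketch of the API this PR introduces (the Arrow file path is hypothetical, pointing at a cache file a previous `load_dataset` run produced):

  ```python
  from datasets import IterableDataset

  ids = IterableDataset.from_file("path/to/cached/dataset.arrow")
  for example in ids.take(3):  # stream a few examples without loading the whole file
      print(example)
  ```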
- **[#5892 User access requests with manual review do not notify the dataset owner](https://github.com/huggingface/datasets/issues/5892)** · issue · closed · 2 comments · leondz · no labels · created 2023-05-23T17:27:46 · updated 2023-07-21T13:55:37 · closed 2023-07-21T13:55:36 · id 1,722,503,824 · API: https://api.github.com/repos/huggingface/datasets/issues/5892
  Body: ### Describe the bug When a user access requests are enabled, and new requests are set to Manual Review, the dataset owner should be notified of the pending requests. However, instead, currently nothing happens, and so the dataset request can go unanswered for quite some time until the owner happens to check that part...
- **[#5891 Make split slicing consistent with list slicing](https://github.com/huggingface/datasets/pull/5891)** · PR · closed · 4 comments · mariosasko · no labels · created 2023-05-23T16:04:33 · updated 2024-01-31T16:00:26 · closed 2024-01-31T15:54:17 · id 1,722,384,135 · API: https://api.github.com/repos/huggingface/datasets/issues/5891
  Body: Fix #1774, fix #5875
- **[#5889 Token Alignment for input and output data over train and test batch/dataset.](https://github.com/huggingface/datasets/issues/5889)** · issue · open · 0 comments · akesh1235 · no labels · created 2023-05-23T15:58:55 · updated 2023-05-23T15:58:55 · id 1,722,373,618 · API: https://api.github.com/repos/huggingface/datasets/issues/5889
  Body: `data` > DatasetDict({ train: Dataset({ features: ['input', 'output'], num_rows: 4500 }) test: Dataset({ features: ['input', 'output'], num_rows: 500 }) }) **# input (in-correct sentence)** `data['train'][0]['input']` **>>** 'We are meet sunday 10am12pmET i...
- **[#5887 HuggingsFace dataset example give error](https://github.com/huggingface/datasets/issues/5887)** · issue · closed · 4 comments · donhuvy · no labels · created 2023-05-23T14:09:05 · updated 2023-07-25T14:01:01 · closed 2023-07-25T14:01:00 · id 1,722,166,382 · API: https://api.github.com/repos/huggingface/datasets/issues/5887
  Body: ### Describe the bug ![image](https://github.com/huggingface/datasets/assets/1328316/1f4f0086-3db9-4c79-906b-05a375357cce) ![image](https://github.com/huggingface/datasets/assets/1328316/733ebd3d-89b9-4ece-b80a-00ab5b0a4122) ### Steps to reproduce the bug Use link as reference document written https://c...
- **[#5886 Use work-stealing algorithm when parallel computing](https://github.com/huggingface/datasets/issues/5886)** · issue · open · 1 comment · 1014661165 · labels: enhancement · created 2023-05-23T03:08:44 · updated 2023-05-24T15:30:09 · id 1,721,070,225 · API: https://api.github.com/repos/huggingface/datasets/issues/5886
  Body: ### Feature request when i used Dataset.map api to process data concurrently, i found that it gets slower and slower as it gets closer to completion. Then i read the source code of arrow_dataset.py and found that it shard the dataset and use multiprocessing pool to execute each shard.It may cause the slowest task ...
- **[#5885 Modify `is_remote_filesystem` to return True for FUSE-mounted paths](https://github.com/huggingface/datasets/pull/5885)** · PR · closed · 5 comments · maddiedawson · no labels · created 2023-05-23T01:04:54 · updated 2024-01-08T18:31:00 · closed 2024-01-08T18:31:00 · id 1,720,954,440 · API: https://api.github.com/repos/huggingface/datasets/issues/5885
  Body: (none)
- **[#5888 A way to upload and visualize .mp4 files (millions of them) as part of a dataset](https://github.com/huggingface/datasets/issues/5888)** · issue · open · 9 comments · AntreasAntoniou · no labels · created 2023-05-22T18:05:26 · updated 2023-06-23T03:37:16 · id 1,722,290,363 · API: https://api.github.com/repos/huggingface/datasets/issues/5888
  Body: **Is your feature request related to a problem? Please describe.** I recently chose to use huggingface hub as the home for a large multi modal dataset I've been building. https://huggingface.co/datasets/Antreas/TALI It combines images, text, audio and video. Now, I could very easily upload a dataset made via datase...
- **[#5884 `Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_`](https://github.com/huggingface/datasets/issues/5884)** · issue · closed · 2 comments · alvarobartt · no labels · created 2023-05-22T12:03:06 · updated 2023-06-09T16:04:56 · closed 2023-06-09T16:04:55 · id 1,719,548,172 · API: https://api.github.com/repos/huggingface/datasets/issues/5884
  Body: ### Describe the bug When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception e.g. for `é` character `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`. ### Steps to rep...
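  The root cause named in the title can be shown in isolation; this is my own two-liner, not taken from the issue:

  ```python
  import numpy as np

  np.bytes_("abc")  # fine: ASCII-encodable
  np.bytes_("é")    # UnicodeEncodeError: 'ascii' codec can't encode character '\xe9'
  ```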
- **[#5883 Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset`](https://github.com/huggingface/datasets/pull/5883)** · PR · closed · 29 comments · alvarobartt · no labels · created 2023-05-22T11:51:07 · updated 2023-06-08T11:09:03 · closed 2023-06-06T16:49:15 · id 1,719,527,597 · API: https://api.github.com/repos/huggingface/datasets/issues/5883
  Body: ## What's in this PR? This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, to convert a 🤗HuggingFace Dataset as a TensorFlow Dataset. The main bug solved in this PR comes with the string-encoding, since for safety purposes the internal conversion of `nu...
- **[#5881 Split dataset by node: index error when sharding iterable dataset](https://github.com/huggingface/datasets/issues/5881)** · issue · open · 5 comments · sanchit-gandhi · no labels · created 2023-05-22T10:36:13 · updated 2025-01-31T16:36:30 · id 1,719,402,643 · API: https://api.github.com/repos/huggingface/datasets/issues/5881
  Body: ### Describe the bug Context: we're splitting an iterable dataset by node and then passing it to a torch data loader with multiple workers When we iterate over it for 5 steps, we don't get an error When we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many wo...
- **[#5880 load_dataset from s3 file system through streaming can't not iterate data](https://github.com/huggingface/datasets/issues/5880)** · issue · open · 4 comments · janineguo · no labels · created 2023-05-22T07:40:27 · updated 2023-05-26T12:52:08 · id 1,719,090,101 · API: https://api.github.com/repos/huggingface/datasets/issues/5880
  Body: ### Describe the bug I have a JSON file in my s3 file system(minio), I can use load_dataset to get the file link, but I can't iterate it <img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0"> <img width="1144" alt="image" src="https://github.c...
- **[#5878 Prefetching for IterableDataset](https://github.com/huggingface/datasets/issues/5878)** · issue · open · 7 comments · vyeevani · labels: enhancement · created 2023-05-20T15:25:40 · updated 2025-01-24T17:13:55 · id 1,718,203,843 · API: https://api.github.com/repos/huggingface/datasets/issues/5878
  Body: ### Feature request Add support for prefetching the next n batches through iterabledataset to reduce batch loading bottleneck in training loop. ### Motivation The primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low ram or low disk...
- **[#5877 Request for text deduplication feature](https://github.com/huggingface/datasets/issues/5877)** · issue · open · 4 comments · SupreethRao99 · labels: enhancement · created 2023-05-20T01:56:00 · updated 2024-01-25T14:40:09 · id 1,717,983,961 · API: https://api.github.com/repos/huggingface/datasets/issues/5877
  Body: ### Feature request It would be great if there would be support for high performance, highly scalable text deduplication algorithms as part of the datasets library. ### Motivation Motivated by this blog post https://huggingface.co/blog/dedup and this library https://github.com/google-research/deduplicate-text-datase...
- **[#5876 Incompatibility with DataLab](https://github.com/huggingface/datasets/issues/5876)** · issue · closed · 2 comments · helpmefindaname · labels: good first issue · created 2023-05-20T01:39:11 · updated 2023-05-25T06:42:34 · closed 2023-05-25T06:42:34 · id 1,717,978,985 · API: https://api.github.com/repos/huggingface/datasets/issues/5876
  Body: ### Describe the bug Hello, I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies. I noticed that I cannot import both libraries, as they both register FileSystems in `fsspec`, expecting the FileSyste...
- **[#5875 Why split slicing doesn't behave like list slicing ?](https://github.com/huggingface/datasets/issues/5875)** · issue · closed · 1 comment · astariul · labels: duplicate · created 2023-05-19T07:21:10 · updated 2024-01-31T15:54:18 · closed 2024-01-31T15:54:18 · id 1,716,770,394 · API: https://api.github.com/repos/huggingface/datasets/issues/5875
  Body: ### Describe the bug If I want to get the first 10 samples of my dataset, I can do : ``` ds = datasets.load_dataset('mnist', split='train[:10]') ``` But if I exceed the number of samples in the dataset, an exception is raised : ``` ds = datasets.load_dataset('mnist', split='train[:999999999]') ``` > V...
- **[#5874 Using as_dataset on a "parquet" builder](https://github.com/huggingface/datasets/issues/5874)** · issue · closed · 1 comment · rems75 · no labels · created 2023-05-18T14:09:03 · updated 2023-05-31T13:23:55 · closed 2023-05-31T13:23:55 · id 1,715,708,930 · API: https://api.github.com/repos/huggingface/datasets/issues/5874
  Body: ### Describe the bug I used a custom builder to ``download_and_prepare`` a dataset. The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)). ```...
- **[#5873 Allow setting the environment variable for the lock file path](https://github.com/huggingface/datasets/issues/5873)** · issue · open · 0 comments · xin3he · labels: enhancement · created 2023-05-17T07:10:02 · updated 2023-05-17T07:11:05 · id 1,713,269,724 · API: https://api.github.com/repos/huggingface/datasets/issues/5873
  Body: ### Feature request Add an environment variable to replace the default lock file path. ### Motivation Usually, dataset path is a read-only path while the lock file needs to be modified each time. It would be convenient if the path can be reset individually. ### Your contribution ```/src/datasets/utils/fi...
- **[#5872 Fix infer module for uppercase extensions](https://github.com/huggingface/datasets/pull/5872)** · PR · closed · 2 comments · albertvillanova · no labels · created 2023-05-17T05:56:45 · updated 2023-05-17T14:26:59 · closed 2023-05-17T14:19:18 · id 1,713,174,662 · API: https://api.github.com/repos/huggingface/datasets/issues/5872
  Body: Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with uppercase extension, e.g. `filename.TXT`. Before, `None` module was returned.
- **[#5871 data configuration hash suffix depends on uncanonicalized data_dir](https://github.com/huggingface/datasets/issues/5871)** · issue · closed · 3 comments · kylrth · labels: good first issue · created 2023-05-16T18:56:04 · updated 2023-06-02T15:52:05 · closed 2023-06-02T15:52:05 · id 1,712,573,073 · API: https://api.github.com/repos/huggingface/datasets/issues/5871
  Body: ### Describe the bug I am working with the `recipe_nlg` dataset, which requires manual download. Once it's downloaded, I've noticed that the hash in the custom data configuration is different if I add a trailing `/` to my `data_dir`. It took me a while to notice that the hashes were different, and to understand that...
- **[#5870 Behaviour difference between datasets.map and IterableDatasets.map](https://github.com/huggingface/datasets/issues/5870)** · issue · open · 1 comment · llStringll · no labels · created 2023-05-16T14:32:57 · updated 2023-05-16T14:36:05 · id 1,712,156,282 · API: https://api.github.com/repos/huggingface/datasets/issues/5870
  Body: ### Describe the bug All the examples in all the docs mentioned throughout huggingface datasets correspond to datasets object, and not IterableDatasets object. At one point of time, they might have been in sync, but the code for datasets version >=2.9.0 is very different as compared to the docs. I basically need to ...
- **[#5869 Image Encoding Issue when submitting a Parquet Dataset](https://github.com/huggingface/datasets/issues/5869)** · issue · closed · 16 comments · PhilippeMoussalli · labels: bug · created 2023-05-16T09:42:58 · updated 2023-06-16T12:48:38 · closed 2023-06-16T09:30:48 · id 1,711,990,003 · API: https://api.github.com/repos/huggingface/datasets/issues/5869
  Body: ### Describe the bug Hello, I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details: We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet...
- **[#5868 Is it possible to change a cached file and 're-cache' it instead of re-generating?](https://github.com/huggingface/datasets/issues/5868)** · issue · closed · 2 comments · zyh3826 · labels: enhancement · created 2023-05-16T03:45:42 · updated 2023-05-17T11:21:36 · closed 2023-05-17T11:21:36 · id 1,711,173,098 · API: https://api.github.com/repos/huggingface/datasets/issues/5868
  Body: ### Feature request Hi, I have a huge cached file using `map`(over 500GB), and I want to change an attribution of each element, is there possible to do it using some method instead of re-generating, because `map` takes over 24 hours ### Motivation For large datasets, I think it is very important because we always f...
- **[#5867 Add logic for hashing modules/functions optimized with `torch.compile`](https://github.com/huggingface/datasets/pull/5867)** · PR · closed · 5 comments · mariosasko · no labels · created 2023-05-15T19:03:35 · updated 2024-01-11T06:30:50 · closed 2023-11-27T20:03:31 · id 1,710,656,067 · API: https://api.github.com/repos/huggingface/datasets/issues/5867
  Body: Fix https://github.com/huggingface/datasets/issues/5839 PS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point.
- **[#5866 Issue with Sequence features](https://github.com/huggingface/datasets/issues/5866)** · issue · closed · 1 comment · alialamiidrissi · no labels · created 2023-05-15T17:13:29 · updated 2023-05-26T11:57:17 · closed 2023-05-26T11:57:17 · id 1,710,496,993 · API: https://api.github.com/repos/huggingface/datasets/issues/5866
  Body: ### Describe the bug Sequences features sometimes causes errors when the specified length is not -1 ### Steps to reproduce the bug ```python import numpy as np from datasets import Features, ClassLabel, Sequence, Value, Dataset feats = Features(**{'target': ClassLabel(names=[0, 1]),'x': Sequence(feature=Va...
- **[#5865 Deprecate task api](https://github.com/huggingface/datasets/pull/5865)** · PR · closed · 9 comments · mariosasko · no labels · created 2023-05-15T16:48:24 · updated 2023-07-10T12:33:59 · closed 2023-07-10T12:24:01 · id 1,710,455,738 · API: https://api.github.com/repos/huggingface/datasets/issues/5865
  Body: The task API is not well adopted in the ecosystem, so this PR deprecates it. The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?). These are the projects that still use the task API : * the image classification example in Transformers: [here](https://github.com/huggingfac...
- **[#5864 Slow iteration over Torch tensors](https://github.com/huggingface/datasets/issues/5864)** · issue · open · 2 comments · crisostomi · no labels · created 2023-05-15T16:43:58 · updated 2024-10-08T10:21:48 · id 1,710,450,047 · API: https://api.github.com/repos/huggingface/datasets/issues/5864
  Body: ### Describe the bug I have a problem related to this [issue](https://github.com/huggingface/datasets/issues/5841): I get a way slower iteration when using a Torch dataloader if I use vanilla Numpy tensors or if I first apply a ToTensor transform to the input. In particular, it takes 5 seconds to iterate over the vani...
- **[#5863 Use a new low-memory approach for tf dataset index shuffling](https://github.com/huggingface/datasets/pull/5863)** · PR · closed · 36 comments · Rocketknight1 · no labels · created 2023-05-15T15:28:34 · updated 2023-06-08T16:40:18 · closed 2023-06-08T16:32:51 · id 1,710,335,905 · API: https://api.github.com/repos/huggingface/datasets/issues/5863
  Body: This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it! Fixes #5855
- **[#5862 IndexError: list index out of range with data hosted on Zenodo](https://github.com/huggingface/datasets/issues/5862)** · issue · open · 1 comment · albertvillanova · labels: bug · created 2023-05-15T13:47:19 · updated 2023-09-25T12:09:51 · id 1,710,140,646 · API: https://api.github.com/repos/huggingface/datasets/issues/5862
  Body: The dataset viewer sometimes raises an `IndexError`: ``` IndexError: list index out of range ``` See: - huggingface/datasets-server#1151 - https://huggingface.co/datasets/reddit/discussions/5 - huggingface/datasets-server#1118 - https://huggingface.co/datasets/krr-oxford/OntoLAMA/discussions/1 - https://hu...
- **[#5861 Better error message when combining dataset dicts instead of datasets](https://github.com/huggingface/datasets/pull/5861)** · PR · closed · 7 comments · lhoestq · no labels · created 2023-05-15T10:36:24 · updated 2023-05-23T10:40:13 · closed 2023-05-23T10:32:58 · id 1,709,807,340 · API: https://api.github.com/repos/huggingface/datasets/issues/5861
  Body: close https://github.com/huggingface/datasets/issues/5851
- **[#5860 Minor tqdm optim](https://github.com/huggingface/datasets/pull/5860)** · PR · closed · 3 comments · lhoestq · no labels · created 2023-05-15T09:49:37 · updated 2023-05-17T18:46:46 · closed 2023-05-17T18:39:35 · id 1,709,727,460 · API: https://api.github.com/repos/huggingface/datasets/issues/5860
  Body: Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`. On my side it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize python dicts.
- **[#5859 Raise TypeError when indexing a dataset with bool](https://github.com/huggingface/datasets/pull/5859)** · PR · closed · 7 comments · albertvillanova · no labels · created 2023-05-15T08:08:42 · updated 2023-05-25T16:31:24 · closed 2023-05-25T16:23:17 · id 1,709,554,829 · API: https://api.github.com/repos/huggingface/datasets/issues/5859
  Body: Fix #5858.
- **[#5858 Throw an error when dataset improperly indexed](https://github.com/huggingface/datasets/issues/5858)** · issue · closed · 1 comment · sarahwie · no labels · created 2023-05-15T05:15:53 · updated 2023-05-25T16:23:19 · closed 2023-05-25T16:23:19 · id 1,709,332,632 · API: https://api.github.com/repos/huggingface/datasets/issues/5858
  Body: ### Describe the bug Pandas-style subset indexing on dataset does not throw an error, when maybe it should. Instead returns the first instance of the dataset regardless of index condition. ### Steps to reproduce the bug Steps to reproduce the behavior: 1. `squad = datasets.load_dataset("squad_v2", split="validati...
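  A self-contained sketch (my own, avoiding the dataset download in the report) of the silent misbehaviour: column access returns a plain list, so an equality test yields a single bool, and indexing with it used to return row 0 instead of raising:

  ```python
  from datasets import Dataset

  ds = Dataset.from_dict({"id": ["a", "b"], "x": [1, 2]})
  cond = ds["id"] == "a"  # ds["id"] is a list, so this is just False, not a mask
  print(cond)             # False
  print(ds[cond])         # used to return the first row; raises TypeError after #5859
  ```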
- **[#5857 Adding chemistry dataset/models in huggingface](https://github.com/huggingface/datasets/issues/5857)** · issue · closed · 1 comment · knc6 · labels: enhancement · created 2023-05-15T05:09:49 · updated 2023-07-21T13:45:40 · closed 2023-07-21T13:45:40 · id 1,709,326,622 · API: https://api.github.com/repos/huggingface/datasets/issues/5857
  Body: ### Feature request Huggingface is really amazing platform for open science. In addition to computer vision, video and NLP, would it be of interest to add chemistry/materials science dataset/models in Huggingface? Or, if its already done, can you provide some pointers. We have been working on a comprehensive ben...
- **[#5856 Error loading natural_questions](https://github.com/huggingface/datasets/issues/5856)** · issue · closed · 2 comments · Crownor · no labels · created 2023-05-15T02:46:04 · updated 2023-06-05T09:11:19 · closed 2023-06-05T09:11:18 · id 1,709,218,242 · API: https://api.github.com/repos/huggingface/datasets/issues/5856
  Body: ### Describe the bug When try to load natural_questions through datasets == 2.12.0 with python == 3.8.9: ```python import datasets datasets.load_dataset('natural_questions',beam_runner='DirectRunner') ``` It failed with following info: `pyarrow.lib.ArrowNotImplementedError: Nested data conversions not impl...
- **[#5855 `to_tf_dataset` consumes too much memory](https://github.com/huggingface/datasets/issues/5855)** · issue · closed · 6 comments · massquantity · no labels · created 2023-05-14T01:22:29 · updated 2023-06-08T16:32:52 · closed 2023-06-08T16:32:52 · id 1,708,784,943 · API: https://api.github.com/repos/huggingface/datasets/issues/5855
  Body: ### Describe the bug Hi, I'm using `to_tf_dataset` to convert a _large_ dataset to `tf.data.Dataset`. I observed that the data loading *before* training took a lot of time and memory, even with `batch_size=1`. After some digging, i believe the reason lies in the shuffle behavior. The [source code](https://github....