| column          | dtype           | min                 | max                 |
|-----------------|-----------------|---------------------|---------------------|
| id              | int64           | 599M                | 3.29B               |
| url             | string (length) | 58                  | 61                  |
| html_url        | string (length) | 46                  | 51                  |
| number          | int64           | 1                   | 7.72k               |
| title           | string (length) | 1                   | 290                 |
| state           | string          | 2 values            |                     |
| comments        | int64           | 0                   | 70                  |
| created_at      | timestamp[s]    | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at      | timestamp[s]    | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at       | timestamp[s]    | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login      | string (length) | 3                   | 26                  |
| labels          | list (length)   | 0                   | 4                   |
| body            | string (length) | 0                   | 228k                |
| is_pull_request | bool            | 2 classes           |                     |
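The schema above can be sketched as a plain Python structure. This is a minimal illustration (not the actual loading code — the dataset name and `datasets` library usage are omitted), showing how a single row conforming to these columns might look and be type-checked; the example row is reassembled from the first record below, with the `body` field shortened:

```python
from datetime import datetime

# Expected column types, per the schema table above.
SCHEMA = {
    "id": int,
    "url": str,
    "html_url": str,
    "number": int,
    "title": str,
    "state": str,            # one of 2 values: "open" / "closed"
    "comments": int,
    "created_at": datetime,  # timestamp[s]
    "updated_at": datetime,  # timestamp[s]
    "closed_at": datetime,   # may be None for open issues
    "user_login": str,
    "labels": list,
    "body": str,
    "is_pull_request": bool,
}

def validate_row(row: dict) -> bool:
    """Return True if every column is present with the expected type
    (None is tolerated only for closed_at)."""
    for col, typ in SCHEMA.items():
        value = row.get(col)
        if value is None and col == "closed_at":
            continue
        if not isinstance(value, typ):
            return False
    return True

# The first record from the dump, reassembled as a dict (body truncated).
row = {
    "id": 1482817424,
    "url": "https://api.github.com/repos/huggingface/datasets/issues/5339",
    "html_url": "https://github.com/huggingface/datasets/pull/5339",
    "number": 5339,
    "title": "Add Video feature, videofolder, and video-classification task",
    "state": "closed",
    "comments": 4,
    "created_at": datetime(2022, 12, 7, 20, 48, 34),
    "updated_at": datetime(2024, 1, 11, 6, 30, 24),
    "closed_at": datetime(2023, 10, 11, 9, 13, 11),
    "user_login": "nateraw",
    "labels": [],
    "body": "This PR does the following: ...",
    "is_pull_request": True,
}
```

A row missing any column (or with a wrong type) fails the check, which makes this a cheap sanity test when reconstructing records from a flattened dump like this one.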
1,482,817,424
https://api.github.com/repos/huggingface/datasets/issues/5339
https://github.com/huggingface/datasets/pull/5339
5,339
Add Video feature, videofolder, and video-classification task
closed
4
2022-12-07T20:48:34
2024-01-11T06:30:24
2023-10-11T09:13:11
nateraw
[]
This PR does the following: - Adds `Video` feature (Resolves #5225 ) - Adds `video-classification` task - Adds `videofolder` packaged module for easy loading of local video classification datasets TODO: - [ ] add tests - [ ] add docs
true
1,482,646,151
https://api.github.com/repos/huggingface/datasets/issues/5338
https://github.com/huggingface/datasets/issues/5338
5,338
`map()` stops every 1000 steps
closed
3
2022-12-07T19:09:40
2025-02-14T18:10:07
2022-12-10T00:39:28
bayartsogt-ya
[]
### Describe the bug I am passing the following `prepare_dataset` function to `Dataset.map` (code is inspired from [here](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py#L454)) ```python3 def prepare_dataset(batch): # load and res...
false
1,481,692,156
https://api.github.com/repos/huggingface/datasets/issues/5337
https://github.com/huggingface/datasets/issues/5337
5,337
Support webdataset format
closed
5
2022-12-07T11:32:25
2024-03-06T14:39:29
2024-03-06T14:39:28
lhoestq
[]
Webdataset is an efficient format for iterable datasets. It would be nice to support it in `datasets`, as discussed in https://github.com/rom1504/img2dataset/issues/234. In particular it would be awesome to be able to load one using `load_dataset` in streaming mode (either from a local directory, or from a dataset o...
false
1,479,649,900
https://api.github.com/repos/huggingface/datasets/issues/5336
https://github.com/huggingface/datasets/pull/5336
5,336
Set `IterableDataset.map` param `batch_size` typing as optional
closed
3
2022-12-06T17:08:10
2022-12-07T14:14:56
2022-12-07T14:06:27
alvarobartt
[]
This PR solves #5325 ~Indeed we're using the typing for optional values as `Union[type, None]` as it's similar to how Python 3.10 handles optional values as `type | None`, instead of using `Optional[type]`.~ ~Do we want to start using `Union[type, None]` for type-hinting optional values or just keep on using `Op...
true
1,478,890,788
https://api.github.com/repos/huggingface/datasets/issues/5335
https://github.com/huggingface/datasets/pull/5335
5,335
Update tasks.json
closed
11
2022-12-06T11:37:57
2023-09-24T10:06:42
2022-12-07T12:46:03
sayakpaul
[]
Context: * https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195 Cc: @osanseviero
true
1,477,421,927
https://api.github.com/repos/huggingface/datasets/issues/5334
https://github.com/huggingface/datasets/pull/5334
5,334
Clean up docstrings
closed
3
2022-12-05T20:56:08
2022-12-09T01:44:25
2022-12-09T01:41:44
stevhliu
[ "documentation" ]
As raised by @polinaeterna in #5324, some of the docstrings are a bit of a mess because they mix Markdown and Sphinx syntax. This PR fixes the docstring for `DatasetBuilder`. I'll start working on cleaning up the rest of the docstrings and removing the old Sphinx syntax (let me know if you prefer one big PR with...
true
1,476,890,156
https://api.github.com/repos/huggingface/datasets/issues/5333
https://github.com/huggingface/datasets/pull/5333
5,333
fix: 🐛 pass the token to get the list of config names
closed
1
2022-12-05T16:06:09
2022-12-06T08:25:17
2022-12-06T08:22:49
severo
[]
Otherwise, get_dataset_infos doesn't work on gated or private datasets, even with the correct token.
true
1,476,513,072
https://api.github.com/repos/huggingface/datasets/issues/5332
https://github.com/huggingface/datasets/issues/5332
5,332
Passing numpy array to ClassLabel names causes ValueError
closed
5
2022-12-05T12:59:03
2022-12-22T16:32:50
2022-12-22T16:32:50
freddyheppell
[]
### Describe the bug If a numpy array is passed to the names argument of ClassLabel, creating a dataset with those features causes an error. ### Steps to reproduce the bug https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX TLDR: If I define my classes as: ``` my_classes = np.array(['on...
false
1,473,146,738
https://api.github.com/repos/huggingface/datasets/issues/5331
https://github.com/huggingface/datasets/pull/5331
5,331
Support for multiple configs in packaged modules via metadata yaml info
closed
22
2022-12-02T16:43:44
2023-07-24T15:49:54
2023-07-13T13:27:56
polinaeterna
[]
will solve https://github.com/huggingface/datasets/issues/5209 and https://github.com/huggingface/datasets/issues/5151 and many others... Config parameters for packaged builders are parsed from the `builder_config` field in the README.md file (a separate first-level field, not part of `dataset_info`), example: ```yaml --- ...
true
1,471,999,125
https://api.github.com/repos/huggingface/datasets/issues/5329
https://github.com/huggingface/datasets/pull/5329
5,329
Clarify imagefolder is for small datasets
closed
4
2022-12-01T21:47:29
2022-12-06T17:20:04
2022-12-06T17:16:53
stevhliu
[]
Based on feedback from [here](https://github.com/huggingface/datasets/issues/5317#issuecomment-1334108824), this PR adds a note to the `imagefolder` loading and creating docs that `imagefolder` is designed for small scale image datasets.
true
1,471,661,437
https://api.github.com/repos/huggingface/datasets/issues/5328
https://github.com/huggingface/datasets/pull/5328
5,328
Fix docs building for main
closed
3
2022-12-01T17:07:45
2022-12-02T16:29:00
2022-12-02T16:26:00
albertvillanova
[]
This PR reverts the triggering event for building documentation introduced by: - #5250 Fix #5326.
true
1,471,657,247
https://api.github.com/repos/huggingface/datasets/issues/5327
https://github.com/huggingface/datasets/pull/5327
5,327
Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata
open
1
2022-12-01T17:05:23
2023-01-23T12:48:29
null
polinaeterna
[]
will fix #5315
true
1,471,634,168
https://api.github.com/repos/huggingface/datasets/issues/5326
https://github.com/huggingface/datasets/issues/5326
5,326
No documentation for main branch is built
closed
0
2022-12-01T16:50:58
2022-12-02T16:26:01
2022-12-02T16:26:01
albertvillanova
[ "bug" ]
Since: - #5250 - Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6 the docs for main branch are no longer built. The change introduced only triggers the docs building for releases.
false
1,471,536,822
https://api.github.com/repos/huggingface/datasets/issues/5325
https://github.com/huggingface/datasets/issues/5325
5,325
map(...batch_size=None) for IterableDataset
closed
5
2022-12-01T15:43:42
2022-12-07T15:54:43
2022-12-07T15:54:42
frankier
[ "enhancement", "good first issue" ]
### Feature request Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too. ### Motivation Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice. One is th...
false
1,471,524,512
https://api.github.com/repos/huggingface/datasets/issues/5324
https://github.com/huggingface/datasets/issues/5324
5,324
Fix docstrings and types in documentation that appears on the website
open
5
2022-12-01T15:34:53
2024-01-23T16:21:54
null
polinaeterna
[ "documentation" ]
While I was working on https://github.com/huggingface/datasets/pull/5313 I've noticed that we have a mess in how we annotate types and format args and return values in the code. And some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the document...
false
1,471,518,803
https://api.github.com/repos/huggingface/datasets/issues/5323
https://github.com/huggingface/datasets/issues/5323
5,323
Duplicated Keys in Taskmaster-2 Dataset
closed
2
2022-12-01T15:31:06
2022-12-01T16:26:06
2022-12-01T16:26:06
liaeh
[]
### Describe the bug Loading certain splits () of the taskmaster-2 dataset fails because of a DuplicatedKeysError. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine. Output: ### Steps to reproduce the bug ``` ...
false
1,471,502,162
https://api.github.com/repos/huggingface/datasets/issues/5322
https://github.com/huggingface/datasets/pull/5322
5,322
Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol`
closed
1
2022-12-01T15:19:28
2022-12-14T16:37:16
2022-12-14T16:33:30
polinaeterna
[]
Currently `download_and_extract` doesn't throw an error when it is used with files with `.tar` extension in streaming mode because `_get_extraction_protocol` doesn't do it (like it does for `tar.gz` and `tgz`). `_get_extraction_protocol` returns formatted url as if we support tar protocol but we don't. That means tha...
true
1,471,430,667
https://api.github.com/repos/huggingface/datasets/issues/5321
https://github.com/huggingface/datasets/pull/5321
5,321
Fix loading from HF GCP cache
closed
2
2022-12-01T14:39:06
2022-12-01T16:10:09
2022-12-01T16:07:02
lhoestq
[]
As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache. I fixed it and added an integration test (runs in 10sec).
true
1,471,360,910
https://api.github.com/repos/huggingface/datasets/issues/5320
https://github.com/huggingface/datasets/pull/5320
5,320
[Extract] Place the lock file next to the destination directory
closed
1
2022-12-01T13:55:49
2022-12-01T15:36:44
2022-12-01T15:33:58
lhoestq
[]
Previously it was placed next to the archive to extract, but the archive can be in a read-only directory as noticed in https://github.com/huggingface/datasets/issues/5295 Therefore I moved the lock location to be next to the destination directory, which is required to have write permissions
true
1,470,945,515
https://api.github.com/repos/huggingface/datasets/issues/5319
https://github.com/huggingface/datasets/pull/5319
5,319
Fix Text sample_by paragraph
closed
1
2022-12-01T09:08:09
2022-12-01T15:21:44
2022-12-01T15:19:00
albertvillanova
[]
Fix #5316.
true
1,470,749,750
https://api.github.com/repos/huggingface/datasets/issues/5318
https://github.com/huggingface/datasets/pull/5318
5,318
Origin/fix missing features error
closed
5
2022-12-01T06:18:39
2022-12-12T19:06:42
2022-12-04T05:49:39
eunseojo
[]
This fixes the problem where the `load_dataset` function reads a dataset with "features" provided but some read batches don't have columns that later show up. For instance, the provided "features" require columns A, B, C but only columns B and C appear. This is fixed by adding column A filled with nulls.
true
1,470,390,164
https://api.github.com/repos/huggingface/datasets/issues/5317
https://github.com/huggingface/datasets/issues/5317
5,317
`ImageFolder` performs poorly with large datasets
open
3
2022-12-01T00:04:21
2022-12-01T21:49:26
null
salieri
[]
### Describe the bug While testing image dataset creation, I'm seeing significant performance bottlenecks with imagefolders when scanning a directory structure with large number of images. ## Setup * Nested directories (5 levels deep) * 3M+ images * 1 `metadata.jsonl` file ## Performance Degradation Point...
false
1,470,115,681
https://api.github.com/repos/huggingface/datasets/issues/5316
https://github.com/huggingface/datasets/issues/5316
5,316
Bug in sample_by="paragraph"
closed
1
2022-11-30T19:24:13
2022-12-01T15:19:02
2022-12-01T15:19:02
adampauls
[]
### Describe the bug I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate because even when `f` is finished reading, `batch` will still be truthy from the l...
false
1,470,026,797
https://api.github.com/repos/huggingface/datasets/issues/5315
https://github.com/huggingface/datasets/issues/5315
5,315
Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails
open
3
2022-11-30T18:02:15
2022-12-02T07:02:53
null
polinaeterna
[ "bug" ]
### Describe the bug If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails. That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385f...
false
1,469,685,118
https://api.github.com/repos/huggingface/datasets/issues/5314
https://github.com/huggingface/datasets/issues/5314
5,314
Datasets: classification_report() got an unexpected keyword argument 'suffix'
closed
2
2022-11-30T14:01:03
2023-07-21T14:40:31
2023-07-21T14:40:31
JonathanAlis
[]
https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py > import datasets predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] seqeval = datasets.load_metri...
false
1,468,484,136
https://api.github.com/repos/huggingface/datasets/issues/5313
https://github.com/huggingface/datasets/pull/5313
5,313
Fix description of streaming in the docs
closed
1
2022-11-29T18:00:28
2022-12-01T14:55:30
2022-12-01T14:00:34
polinaeterna
[]
We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written? Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the docu...
true
1,468,352,562
https://api.github.com/repos/huggingface/datasets/issues/5312
https://github.com/huggingface/datasets/pull/5312
5,312
Add DatasetDict.to_pandas
closed
12
2022-11-29T16:30:02
2023-09-24T10:06:19
2023-01-25T17:33:42
lhoestq
[]
From discussions in https://github.com/huggingface/datasets/issues/5189, for tabular data it doesn't really make sense to have to do ```python df = load_dataset(...)["train"].to_pandas() ``` because many datasets are not split. In this PR I added `to_pandas` to `DatasetDict` which returns the DataFrame: If th...
true
1,467,875,153
https://api.github.com/repos/huggingface/datasets/issues/5311
https://github.com/huggingface/datasets/pull/5311
5,311
Add `features` param to `IterableDataset.map`
closed
1
2022-11-29T11:08:34
2022-12-06T15:45:02
2022-12-06T15:42:04
alvarobartt
[]
## Description As suggested by @lhoestq in #3888, we should be adding the param `features` to `IterableDataset.map` so that the features can be preserved (not turned into `None` as that's the default behavior) whenever the user passes those as param, so as to be consistent with `Dataset.map`, as it provides the `fea...
true
1,467,719,635
https://api.github.com/repos/huggingface/datasets/issues/5310
https://github.com/huggingface/datasets/pull/5310
5,310
Support xPath for Windows pathnames
closed
1
2022-11-29T09:20:47
2022-11-30T12:00:09
2022-11-30T11:57:16
albertvillanova
[]
This PR implements a string representation of `xPath`, which is valid for local paths (also windows) and remote URLs. Additionally, some `os.path` methods are fixed for remote URLs on Windows machines. Now, on Windows machines: ```python In [2]: str(xPath("C:\\dir\\file.txt")) Out[2]: 'C:\\dir\\file.txt' In [...
true
1,466,758,987
https://api.github.com/repos/huggingface/datasets/issues/5309
https://github.com/huggingface/datasets/pull/5309
5,309
Close stream in `ArrowWriter.finalize` before inference error
closed
1
2022-11-28T16:59:39
2022-12-07T12:55:20
2022-12-07T12:52:15
mariosasko
[]
Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`.
true
1,466,552,281
https://api.github.com/repos/huggingface/datasets/issues/5308
https://github.com/huggingface/datasets/pull/5308
5,308
Support `topdown` parameter in `xwalk`
closed
2
2022-11-28T14:42:41
2022-12-09T12:58:55
2022-12-09T12:55:59
mariosasko
[]
Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed.
true
1,466,477,427
https://api.github.com/repos/huggingface/datasets/issues/5307
https://github.com/huggingface/datasets/pull/5307
5,307
Use correct dataset type in `from_generator` docs
closed
1
2022-11-28T13:59:10
2022-11-28T15:30:37
2022-11-28T15:27:26
mariosasko
[]
Use the correct dataset type in the `from_generator` docs (example with sharding).
true
1,465,968,639
https://api.github.com/repos/huggingface/datasets/issues/5306
https://github.com/huggingface/datasets/issues/5306
5,306
Can't use custom feature description when loading a dataset
closed
1
2022-11-28T07:55:44
2022-11-28T08:11:45
2022-11-28T08:11:44
clefourrier
[]
### Describe the bug I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load. ### Steps to reproduce the bug ```python # Creating features task_...
false
1,465,627,826
https://api.github.com/repos/huggingface/datasets/issues/5305
https://github.com/huggingface/datasets/issues/5305
5,305
Dataset joelito/mc4_legal does not work with multiple files
closed
2
2022-11-28T00:16:16
2022-11-28T07:22:42
2022-11-28T07:22:42
JoelNiklaus
[]
### Describe the bug The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset. joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal....
false
1,465,110,367
https://api.github.com/repos/huggingface/datasets/issues/5304
https://github.com/huggingface/datasets/issues/5304
5,304
timit_asr doesn't load the test split.
closed
1
2022-11-26T10:18:22
2023-02-10T16:33:21
2023-02-10T16:33:21
seyong92
[]
### Describe the bug When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split. I tried to change the directory and filename from lower case to upper case for the test split, but it does not work at all. ```python DatasetDict({ train: Datase...
false
1,464,837,251
https://api.github.com/repos/huggingface/datasets/issues/5303
https://github.com/huggingface/datasets/pull/5303
5,303
Skip dataset verifications by default
closed
17
2022-11-25T18:39:09
2023-02-13T16:50:42
2023-02-13T16:43:47
mariosasko
[]
Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets. PS: Maybe we should deprecate...
true
1,464,778,901
https://api.github.com/repos/huggingface/datasets/issues/5302
https://github.com/huggingface/datasets/pull/5302
5,302
Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare`
closed
1
2022-11-25T17:09:21
2022-12-09T14:20:15
2022-12-09T14:17:20
mariosasko
[]
Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`.
true
1,464,749,156
https://api.github.com/repos/huggingface/datasets/issues/5301
https://github.com/huggingface/datasets/pull/5301
5,301
Return a split Dataset in load_dataset
closed
2
2022-11-25T16:35:54
2023-09-24T10:06:15
2023-02-21T13:13:13
lhoestq
[]
...instead of a DatasetDict. ```python # now supported ds = load_dataset("squad") ds[0] for example in ds: pass # still works ds["train"] ds["validation"] # new ds.splits # Dict[str, Dataset] | None # soon to be supported (not in this PR) ds = load_dataset("dataset_with_no_splits") ds[0] f...
true
1,464,697,136
https://api.github.com/repos/huggingface/datasets/issues/5300
https://github.com/huggingface/datasets/pull/5300
5,300
Use same `num_proc` for dataset download and generation
closed
2
2022-11-25T15:37:42
2022-12-07T12:55:39
2022-12-07T12:52:51
mariosasko
[]
Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly).
true
1,464,695,091
https://api.github.com/repos/huggingface/datasets/issues/5299
https://github.com/huggingface/datasets/pull/5299
5,299
Fix xopen for Windows pathnames
closed
1
2022-11-25T15:35:28
2022-11-29T08:23:58
2022-11-29T08:21:24
albertvillanova
[]
This PR fixes a bug in `xopen` function for Windows pathnames. Fix #5298.
true
1,464,681,871
https://api.github.com/repos/huggingface/datasets/issues/5298
https://github.com/huggingface/datasets/issues/5298
5,298
Bug in xopen with Windows pathnames
closed
0
2022-11-25T15:21:32
2022-11-29T08:21:25
2022-11-29T08:21:25
albertvillanova
[ "bug" ]
Currently, `xopen` function has a bug with local Windows pathnames: From its implementation: ```python def xopen(file: str, mode="r", *args, **kwargs): file = _as_posix(PurePath(file)) main_hop, *rest_hops = file.split("::") if is_local_path(main_hop): return open(file, mode, *args, **kwarg...
false
1,464,554,491
https://api.github.com/repos/huggingface/datasets/issues/5297
https://github.com/huggingface/datasets/pull/5297
5,297
Fix xjoin for Windows pathnames
closed
1
2022-11-25T13:30:17
2022-11-29T08:07:39
2022-11-29T08:05:12
albertvillanova
[]
This PR fixes a bug in `xjoin` function with Windows pathnames. Fix #5296.
true
1,464,553,580
https://api.github.com/repos/huggingface/datasets/issues/5296
https://github.com/huggingface/datasets/issues/5296
5,296
Bug in xjoin with Windows pathnames
closed
0
2022-11-25T13:29:33
2022-11-29T08:05:13
2022-11-29T08:05:13
albertvillanova
[ "bug" ]
Currently, `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent join pathname, it always returns it in POSIX format. ```python from datasets.download.streaming_download_manager import xjoin path = xjoin("C:\\Users\\USERNAME", "filename.txt") ``` Join path should be: ...
false
1,464,006,743
https://api.github.com/repos/huggingface/datasets/issues/5295
https://github.com/huggingface/datasets/issues/5295
5,295
Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode)
closed
2
2022-11-25T03:59:43
2023-07-21T14:39:09
2023-07-21T14:39:09
verdimrc
[]
### Describe the bug Hi, `load_dataset()` does not work with .zip files located in a read-only directory. Looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file. ...
false
1,463,679,582
https://api.github.com/repos/huggingface/datasets/issues/5294
https://github.com/huggingface/datasets/pull/5294
5,294
Support streaming datasets with pathlib.Path.with_suffix
closed
1
2022-11-24T18:04:38
2022-11-29T07:09:08
2022-11-29T07:06:32
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`. Fix #5293.
true
1,463,669,201
https://api.github.com/repos/huggingface/datasets/issues/5293
https://github.com/huggingface/datasets/issues/5293
5,293
Support streaming datasets with pathlib.Path.with_suffix
closed
0
2022-11-24T17:52:08
2022-11-29T07:06:33
2022-11-29T07:06:33
albertvillanova
[ "enhancement" ]
Extend support for streaming datasets that use `pathlib.Path.with_suffix`. This feature will be useful e.g. for datasets containing text files and annotated files with the same name but different extension.
false
1,463,053,832
https://api.github.com/repos/huggingface/datasets/issues/5292
https://github.com/huggingface/datasets/issues/5292
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
closed
1
2022-11-24T09:42:10
2022-11-24T10:10:02
2022-11-24T10:10:02
albertvillanova
[ "maintenance" ]
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix by: - #5291 However, both documentati...
false
1,462,983,472
https://api.github.com/repos/huggingface/datasets/issues/5291
https://github.com/huggingface/datasets/pull/5291
5,291
[build doc] for v2.7.1 & v2.6.2
closed
2
2022-11-24T08:54:47
2022-11-24T09:14:10
2022-11-24T09:11:15
mishig25
[]
Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0)
true
1,462,716,766
https://api.github.com/repos/huggingface/datasets/issues/5290
https://github.com/huggingface/datasets/pull/5290
5,290
fix error where reading breaks when batch missing an assigned column feature
open
1
2022-11-24T03:53:46
2022-11-25T03:21:54
null
eunseojo
[]
null
true
1,462,543,139
https://api.github.com/repos/huggingface/datasets/issues/5289
https://github.com/huggingface/datasets/pull/5289
5,289
Added support for JXL images.
open
11
2022-11-23T23:16:33
2022-11-29T18:49:46
null
alexjc
[]
JPEG-XL is the most advanced of the next-generation of image codecs, supporting both lossless and lossy files — with better compression and quality than PNG and JPG respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use. Pillow does not yet support JXL, but there's a plugi...
true
1,462,134,067
https://api.github.com/repos/huggingface/datasets/issues/5288
https://github.com/huggingface/datasets/issues/5288
5,288
Lossy json serialization - deserialization of dataset info
open
1
2022-11-23T17:20:15
2022-11-25T12:53:51
null
anuragprat1k
[]
### Describe the bug Saving a dataset to disk as json (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have a label of type `ClassLabel` but has type `Value` instead. ### Steps to re...
false
1,461,971,889
https://api.github.com/repos/huggingface/datasets/issues/5287
https://github.com/huggingface/datasets/pull/5287
5,287
Fix methods using `IterableDataset.map` that lead to `features=None`
closed
7
2022-11-23T15:33:25
2022-11-28T15:43:14
2022-11-28T12:53:22
alvarobartt
[]
Currently `IterableDataset.map` sets `info.features` to `None` every time, as we don't know the output of the dataset in advance, so `IterableDataset` methods such as `rename_column`, `rename_columns`, and `remove_columns` that internally use `map` lead to the features being `None`. This PR is related to #...
true
1,461,908,087
https://api.github.com/repos/huggingface/datasets/issues/5286
https://github.com/huggingface/datasets/issues/5286
5,286
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
closed
3
2022-11-23T14:54:15
2024-11-23T01:16:41
2022-11-25T11:33:14
roritol
[]
### Describe the bug I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia) $ pip install apache_beam mwparserfromhell >>> from datasets import load_dataset >>> load_dataset("wikipedia", "20220301.en") however this results in the follo...
false
1,461,521,215
https://api.github.com/repos/huggingface/datasets/issues/5285
https://github.com/huggingface/datasets/pull/5285
5,285
Save file name in embed_storage
closed
2
2022-11-23T10:55:54
2022-11-24T14:11:41
2022-11-24T14:08:37
lhoestq
[]
Having the file name is useful in case we need to check the extension of the file (e.g. mp3), or in general in case it includes some metadata information (track id, image id etc.) Related to https://github.com/huggingface/datasets/issues/5276
true
1,461,519,733
https://api.github.com/repos/huggingface/datasets/issues/5284
https://github.com/huggingface/datasets/issues/5284
5,284
Features of IterableDataset set to None by remove column
closed
19
2022-11-23T10:54:59
2025-02-07T11:36:41
2022-11-28T12:53:24
sanchit-gandhi
[ "bug", "streaming" ]
### Describe the bug The `remove_column` method of the IterableDataset sets the dataset features to None. ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset # load LS in streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) ...
false
1,460,291,003
https://api.github.com/repos/huggingface/datasets/issues/5283
https://github.com/huggingface/datasets/pull/5283
5,283
Release: 2.6.2
closed
1
2022-11-22T17:36:24
2022-11-22T17:50:12
2022-11-22T17:47:02
albertvillanova
[]
null
true
1,460,238,928
https://api.github.com/repos/huggingface/datasets/issues/5282
https://github.com/huggingface/datasets/pull/5282
5,282
Release: 2.7.1
closed
0
2022-11-22T16:58:54
2022-11-22T17:21:28
2022-11-22T17:21:27
albertvillanova
[]
null
true
1,459,930,271
https://api.github.com/repos/huggingface/datasets/issues/5281
https://github.com/huggingface/datasets/issues/5281
5,281
Support cloud storage in load_dataset
open
31
2022-11-22T14:00:10
2024-11-15T15:03:41
null
lhoestq
[ "enhancement", "good second issue" ]
Would be nice to be able to do ```python data_files=["s3://..."] # or gs:// or any cloud storage path storage_options = {...} load_dataset(..., data_files=data_files, storage_options=storage_options) ``` The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`. This has been reque...
false
1,459,823,179
https://api.github.com/repos/huggingface/datasets/issues/5280
https://github.com/huggingface/datasets/issues/5280
5,280
Import error
closed
5
2022-11-22T12:56:43
2022-12-15T19:57:40
2022-12-15T19:57:40
feketedavid1012
[]
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hi, I get an error at the line above. I have Python version 3.8.13; the message says I need Python>=3.7, which is true, but I think the if statement is not working properly (or the message is wrong).
false
1,459,635,002
https://api.github.com/repos/huggingface/datasets/issues/5279
https://github.com/huggingface/datasets/pull/5279
5,279
Warn about checksums
closed
3
2022-11-22T10:58:48
2022-11-23T11:43:50
2022-11-23T09:47:02
lhoestq
[]
It takes a lot of time on big datasets to compute the checksums, so we should at least add a warning to notify the user about this step. I also mentioned how to disable it, and added a tqdm bar (delay=5 seconds) cc @ola13
true
1,459,574,490
https://api.github.com/repos/huggingface/datasets/issues/5278
https://github.com/huggingface/datasets/issues/5278
5,278
load_dataset does not read jsonl metadata file properly
closed
6
2022-11-22T10:24:46
2023-02-14T14:48:16
2022-11-23T11:38:35
065294847
[]
### Describe the bug Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features. B...
false
1,459,388,551
https://api.github.com/repos/huggingface/datasets/issues/5277
https://github.com/huggingface/datasets/pull/5277
5,277
Remove YAML integer keys from class_label metadata
closed
3
2022-11-22T08:34:07
2022-11-22T13:58:26
2022-11-22T13:55:49
albertvillanova
[]
Fix partially #5275.
true
1,459,363,442
https://api.github.com/repos/huggingface/datasets/issues/5276
https://github.com/huggingface/datasets/issues/5276
5,276
Bug in downloading common_voice data and pushing a small chunk of it to one's own hub
closed
17
2022-11-22T08:17:53
2023-07-21T14:33:10
2023-07-21T14:33:10
capsabogdan
[]
### Describe the bug I'm trying to load the common voice dataset. Currently there is no implementation to download just part of the data, and I need just one part of it, without downloading the entire dataset. Help, please? ![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4...
false
1,459,358,919
https://api.github.com/repos/huggingface/datasets/issues/5275
https://github.com/huggingface/datasets/issues/5275
5,275
YAML integer keys are not preserved Hub server-side
closed
13
2022-11-22T08:14:47
2023-01-26T10:52:35
2023-01-26T10:40:21
albertvillanova
[ "bug" ]
After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563): - YAML integer keys are not preserved server-side: they are transformed to strings - See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files - Original: ```yaml ...
false
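The round-trip problem can be reproduced with PyYAML alone; a minimal sketch (the class-label mapping is illustrative):

```python
import yaml

# A class_label mapping keyed by integers, as datasets serialized it.
original = {0: "entailment", 1: "neutral", 2: "contradiction"}

dumped = yaml.safe_dump(original)
print(dumped)  # "0: entailment\n1: neutral\n2: contradiction\n"

# PyYAML itself round-trips the integer keys faithfully...
assert list(yaml.safe_load(dumped)) == [0, 1, 2]

# ...but the Hub server-side processing turns them into strings, so the
# companion fix (#5277) moves the metadata to string keys up front:
stringified = {str(k): v for k, v in original.items()}
assert list(yaml.safe_load(yaml.safe_dump(stringified))) == ["0", "1", "2"]
```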
1,458,646,455
https://api.github.com/repos/huggingface/datasets/issues/5274
https://github.com/huggingface/datasets/issues/5274
5,274
load_dataset possibly broken for gated datasets?
closed
9
2022-11-21T21:59:53
2023-05-27T00:06:14
2022-11-28T02:50:42
TristanThrush
[]
### Describe the bug When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub: ``` [/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_rep...
false
1,458,018,050
https://api.github.com/repos/huggingface/datasets/issues/5273
https://github.com/huggingface/datasets/issues/5273
5,273
download_mode="force_redownload" does not refresh cached dataset
open
0
2022-11-21T14:12:43
2022-11-21T14:13:03
null
nomisto
[]
### Describe the bug `load_dataset` does not refresh the dataset when features are imported from an external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields; however, it is more likely to occur with nested fields. ### Steps to reproduce the bug To reproduce the bug 3 files are ne...
false
1,456,940,021
https://api.github.com/repos/huggingface/datasets/issues/5272
https://github.com/huggingface/datasets/issues/5272
5,272
Use pyarrow Tensor dtype
open
17
2022-11-20T15:18:41
2024-11-11T03:03:17
null
franz101
[ "enhancement" ]
### Feature request I was going through the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example: ```python import pyarrow as pa import numpy as np x = np.array([[2, 2, 4], [4, 5, 100]], np.int32) pa.Tensor.from_numpy(x, dim_names=["dim1...
false
1,456,807,738
https://api.github.com/repos/huggingface/datasets/issues/5271
https://github.com/huggingface/datasets/pull/5271
5,271
Fix #5269
closed
1
2022-11-20T07:50:49
2022-11-21T15:07:19
2022-11-21T15:06:38
Freed-Wu
[]
``` $ datasets-cli convert --datasets_directory <TAB> datasets_directory benchmarks/ docs/ metrics/ notebooks/ src/ templates/ tests/ utils/ ```
true
1,456,508,990
https://api.github.com/repos/huggingface/datasets/issues/5270
https://github.com/huggingface/datasets/issues/5270
5,270
When len(_URLS) > 16, download will hang
open
7
2022-11-19T14:27:41
2022-11-21T15:27:16
null
Freed-Wu
[]
### Describe the bug ```python In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test') Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.53k/2.53k [00:00<00:00, 1.88MB/s] [1...
false
1,456,485,799
https://api.github.com/repos/huggingface/datasets/issues/5269
https://github.com/huggingface/datasets/issues/5269
5,269
Shell completions
closed
2
2022-11-19T13:48:59
2022-11-21T15:06:15
2022-11-21T15:06:14
Freed-Wu
[ "enhancement" ]
### Feature request Like <https://github.com/huggingface/huggingface_hub/issues/1197>, datasets-cli may need it, too. ### Motivation See above. ### Your contribution Maybe.
false
1,455,633,978
https://api.github.com/repos/huggingface/datasets/issues/5268
https://github.com/huggingface/datasets/pull/5268
5,268
Sharded save_to_disk + multiprocessing
closed
4
2022-11-18T18:50:01
2022-12-14T18:25:52
2022-12-14T18:22:58
lhoestq
[]
Added `num_shards=` and `num_proc=` to `save_to_disk()` EDIT: also added `max_shard_size=` to `save_to_disk()`, and also `num_shards=` to `push_to_hub` I also: - deprecated the fs parameter in favor of storage_options (for consistency with the rest of the lib) in save_to_disk and load_from_disk - always embed t...
true
1,455,466,464
https://api.github.com/repos/huggingface/datasets/issues/5267
https://github.com/huggingface/datasets/pull/5267
5,267
Fix `max_shard_size` docs
closed
1
2022-11-18T16:55:22
2022-11-18T17:28:58
2022-11-18T17:25:27
lhoestq
[]
null
true
1,455,281,310
https://api.github.com/repos/huggingface/datasets/issues/5266
https://github.com/huggingface/datasets/pull/5266
5,266
Specify arguments as keywords in librosa.resample to avoid future errors
closed
1
2022-11-18T14:58:47
2022-11-21T15:45:02
2022-11-21T15:41:57
polinaeterna
[]
Fixes a warning and a future deprecation from `librosa.resample`: ``` FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best") ```
true
1,455,274,864
https://api.github.com/repos/huggingface/datasets/issues/5265
https://github.com/huggingface/datasets/issues/5265
5,265
Get an IterableDataset from a map-style Dataset
closed
1
2022-11-18T14:54:40
2023-02-01T16:36:03
2023-02-01T16:36:03
lhoestq
[ "enhancement", "streaming" ]
This is useful to leverage iterable datasets specific features like: - fast approximate shuffling - lazy map, filter etc. Iterating over the resulting iterable dataset should be at least as fast at iterating over the map-style dataset. Here are some ideas regarding the API: ```python # 1. # - consistency wi...
false
1,455,252,906
https://api.github.com/repos/huggingface/datasets/issues/5264
https://github.com/huggingface/datasets/issues/5264
5,264
`datasets` can't read a Parquet file in Python 3.9.13
closed
16
2022-11-18T14:44:01
2023-05-07T09:52:59
2022-11-22T11:18:08
loubnabnl
[ "bug" ]
### Describe the bug I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset ```python from datasets import load_dataset ds = load_data...
false
1,455,252,626
https://api.github.com/repos/huggingface/datasets/issues/5263
https://github.com/huggingface/datasets/issues/5263
5,263
Save a dataset in a determined number of shards
closed
0
2022-11-18T14:43:54
2022-12-14T18:22:59
2022-12-14T18:22:59
lhoestq
[ "enhancement" ]
This is useful to distribute the shards to training nodes. This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process
false
1,455,171,100
https://api.github.com/repos/huggingface/datasets/issues/5262
https://github.com/huggingface/datasets/issues/5262
5,262
AttributeError: 'Value' object has no attribute 'names'
closed
2
2022-11-18T13:58:42
2022-11-22T10:09:24
2022-11-22T10:09:23
emnaboughariou
[]
Hello, I'm trying to build a model for custom token classification. I already followed the token classification course on Hugging Face; while adapting the code to my work, this message occurs: 'Value' object has no attribute 'names' Here's my code: `raw_datasets` generates DatasetDict({ train: Datas...
false
1,454,647,861
https://api.github.com/repos/huggingface/datasets/issues/5261
https://github.com/huggingface/datasets/issues/5261
5,261
Add PubTables-1M
open
1
2022-11-18T07:56:36
2022-11-18T08:02:18
null
NielsRogge
[ "dataset request" ]
### Name PubTables-1M ### Paper https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html ### Data https://github.com/microsoft/table-transformer ### Motivation Table Transformer is now available in 🤗 Transforme...
false
1,453,921,697
https://api.github.com/repos/huggingface/datasets/issues/5260
https://github.com/huggingface/datasets/issues/5260
5,260
consumer-finance-complaints dataset not loading
open
3
2022-11-17T20:10:26
2022-11-18T10:16:53
null
adiprasad
[]
### Describe the bug Error during dataset loading ### Steps to reproduce the bug ``` >>> import datasets >>> cf_raw = datasets.load_dataset("consumer-finance-complaints") Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████...
false
1,453,555,923
https://api.github.com/repos/huggingface/datasets/issues/5259
https://github.com/huggingface/datasets/issues/5259
5,259
datasets 2.7 introduces sharding error
closed
3
2022-11-17T15:36:52
2022-12-24T01:44:02
2022-11-18T12:52:05
DCNemesis
[]
### Describe the bug dataset fails to load with runtime error `RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the ...
false
1,453,516,636
https://api.github.com/repos/huggingface/datasets/issues/5258
https://github.com/huggingface/datasets/issues/5258
5,258
Restore order of split names in dataset_info for canonical datasets
closed
3
2022-11-17T15:13:15
2023-02-16T09:49:05
2022-11-19T06:51:37
albertvillanova
[ "dataset contribution" ]
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the...
false
1,452,656,891
https://api.github.com/repos/huggingface/datasets/issues/5257
https://github.com/huggingface/datasets/pull/5257
5,257
remove an unused statement
closed
0
2022-11-17T04:00:50
2022-11-18T11:04:08
2022-11-18T11:04:08
WrRan
[]
remove the unused statement: `input_pairs = list(zip())`
true
1,452,652,586
https://api.github.com/repos/huggingface/datasets/issues/5256
https://github.com/huggingface/datasets/pull/5256
5,256
fix wrong print
closed
0
2022-11-17T03:54:26
2022-11-18T11:05:32
2022-11-18T11:05:32
WrRan
[]
print `encoded_dataset.column_names` not `dataset.column_names`
true
1,452,631,517
https://api.github.com/repos/huggingface/datasets/issues/5255
https://github.com/huggingface/datasets/issues/5255
5,255
Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI
closed
21
2022-11-17T03:22:22
2022-12-17T12:20:38
2022-12-17T12:20:37
sayakpaul
[ "dataset request" ]
### Name NYUDepth ### Paper http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf ### Data https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html ### Motivation Depth estimation is an important problem in computer vision. We have a couple of Depth Estimation models on Hub as well: * [GLPN...
false
1,452,600,088
https://api.github.com/repos/huggingface/datasets/issues/5254
https://github.com/huggingface/datasets/pull/5254
5,254
typo
closed
0
2022-11-17T02:39:57
2022-11-18T10:53:45
2022-11-18T10:53:45
WrRan
[]
null
true
1,452,588,206
https://api.github.com/repos/huggingface/datasets/issues/5253
https://github.com/huggingface/datasets/pull/5253
5,253
typo
closed
0
2022-11-17T02:22:58
2022-11-18T10:53:11
2022-11-18T10:53:10
WrRan
[]
null
true
1,451,765,838
https://api.github.com/repos/huggingface/datasets/issues/5252
https://github.com/huggingface/datasets/pull/5252
5,252
Support for decoding Image/Audio types in map when format type is not default one
closed
6
2022-11-16T15:02:13
2022-12-13T17:01:54
2022-12-13T16:59:04
mariosasko
[]
Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python). Additional improvements: * make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`...
true
1,451,761,321
https://api.github.com/repos/huggingface/datasets/issues/5251
https://github.com/huggingface/datasets/issues/5251
5,251
Docs are not generated after latest release
closed
8
2022-11-16T14:59:31
2022-11-22T16:27:50
2022-11-22T16:27:50
albertvillanova
[ "maintenance" ]
After the latest `datasets` release, version 2.7.0, the docs were not generated. As we have changed the release procedure (so that now we do not push directly to the main branch), maybe we should also change the corresponding GitHub action: https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad4...
false
1,451,720,030
https://api.github.com/repos/huggingface/datasets/issues/5250
https://github.com/huggingface/datasets/pull/5250
5,250
Change release procedure to use only pull requests
closed
7
2022-11-16T14:35:32
2022-11-22T16:30:58
2022-11-22T16:27:48
albertvillanova
[]
This PR changes the release procedure so that: - it only makes changes to the main branch via pull requests - it is no longer necessary to commit/push directly to the main branch Close #5251.
true
1,451,692,247
https://api.github.com/repos/huggingface/datasets/issues/5249
https://github.com/huggingface/datasets/issues/5249
5,249
Protect the main branch from inadvertent direct pushes
closed
1
2022-11-16T14:19:03
2023-12-21T10:28:27
2023-12-21T10:28:26
albertvillanova
[ "maintenance" ]
We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push accidentally directly to the main branch. See context here: - d7c942228b8dcf4de64b00a3053dce59b335f618 To do: - [x] Protect main branch - Settings > Branches > Branch protec...
false
1,451,338,676
https://api.github.com/repos/huggingface/datasets/issues/5248
https://github.com/huggingface/datasets/pull/5248
5,248
Complete doc migration
closed
2
2022-11-16T10:41:04
2022-11-16T15:06:50
2022-11-16T10:41:10
mishig25
[]
Reverts huggingface/datasets#5214 Everything is handled on the doc-builder side now 😊
true
1,451,297,749
https://api.github.com/repos/huggingface/datasets/issues/5247
https://github.com/huggingface/datasets/pull/5247
5,247
Set dev version
closed
1
2022-11-16T10:17:31
2022-11-16T10:22:20
2022-11-16T10:17:50
albertvillanova
[]
null
true
1,451,226,055
https://api.github.com/repos/huggingface/datasets/issues/5246
https://github.com/huggingface/datasets/pull/5246
5,246
Release: 2.7.0
closed
1
2022-11-16T09:32:44
2022-11-16T09:39:42
2022-11-16T09:37:03
albertvillanova
[]
null
true
1,450,376,433
https://api.github.com/repos/huggingface/datasets/issues/5245
https://github.com/huggingface/datasets/issues/5245
5,245
Unable to rename columns in streaming dataset
closed
7
2022-11-15T21:04:41
2022-11-28T12:53:24
2022-11-28T12:53:24
peregilk
[]
### Describe the bug Renaming a column in a streaming dataset destroys the features object. ### Steps to reproduce the bug The following code illustrates the error: ``` from datasets import load_dataset dataset = load_dataset('mc4', 'en', streaming=True, split='train') dataset.info.features # {'text':...
false
1,450,019,225
https://api.github.com/repos/huggingface/datasets/issues/5244
https://github.com/huggingface/datasets/issues/5244
5,244
Allow dataset streaming from a private source when loading a dataset with a dataset loading script
open
5
2022-11-15T16:02:10
2022-11-23T14:02:30
null
bruno-hays
[ "enhancement" ]
### Feature request Add arguments to the function _get_authentication_headers_for_url_ like custom_endpoint and custom_token in order to add flexibility when downloading files from a private source. It should also be possible to provide these arguments from the dataset loading script, maybe giving them to the dl_...
false
1,449,523,962
https://api.github.com/repos/huggingface/datasets/issues/5243
https://github.com/huggingface/datasets/issues/5243
5,243
Download only split data
open
7
2022-11-15T10:15:54
2025-02-25T14:47:03
null
capsabogdan
[ "enhancement" ]
### Feature request Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space, as it seems to download the entire dataset instead of only the part needed. common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", ...
false
1,449,069,382
https://api.github.com/repos/huggingface/datasets/issues/5242
https://github.com/huggingface/datasets/issues/5242
5,242
Failed Data Processing upon upload with zip file full of images
open
1
2022-11-15T02:47:52
2022-11-15T17:59:23
null
scrambled2
[]
I went to AutoTrain and, under image classification, arrived at the step where it was time to prepare my dataset. Screenshot below: ![image](https://user-images.githubusercontent.com/82735473/201814099-3cc5ff8a-88dc-4f5f-8140-f19560641d83.png) I chose the method 2 option. I have a CSV file with two columns. ~23,000 files. I...
false
1,448,510,407
https://api.github.com/repos/huggingface/datasets/issues/5241
https://github.com/huggingface/datasets/pull/5241
5,241
Support hfh rc version
closed
1
2022-11-14T18:05:47
2022-11-15T16:11:30
2022-11-15T16:09:31
lhoestq
[]
otherwise the code doesn't work for hfh 0.11.0rc0 following #5237
true
1,448,478,617
https://api.github.com/repos/huggingface/datasets/issues/5240
https://github.com/huggingface/datasets/pull/5240
5,240
Cleaner error tracebacks for dataset script errors
closed
2
2022-11-14T17:42:02
2022-11-15T18:26:48
2022-11-15T18:24:38
mariosasko
[]
Make the traceback of the errors raised in `_generate_examples` cleaner for easier debugging. Additionally, initialize the `writer` in the for-loop to avoid the `ValueError` from `ArrowWriter.finalize` raised in the `finally` block when no examples are yielded before the `_generate_examples` error. <details> <s...
true
1,448,211,373
https://api.github.com/repos/huggingface/datasets/issues/5239
https://github.com/huggingface/datasets/pull/5239
5,239
Add num_proc to from_csv/generator/json/parquet/text
closed
2
2022-11-14T14:53:00
2022-12-06T15:39:10
2022-12-06T15:39:09
lhoestq
[]
Allow multiprocessing in the from_* methods
true